### Release Quality, Docs, and Developer Experience
#### Added
- **Container/Devcontainer**: Baked `hadolint` into the Docker images and devcontainer so pre-commit hooks work reliably.
- **Dockerfiles**: Added `git-lfs` to the CPU and GPU Dockerfiles for smoother model/asset workflows.
#### Changed
- **Documentation**: Archived historical/versioned docs under `docs/archive/` and updated inbound links.
- **Documentation**: Standardized examples on the canonical API port `18011` and Docker Compose-first workflows.
- **CLI/Logging**: Removed stale v1.2.0 references and ensured first-run messaging reflects the current package version.
- **Documentation**: Consolidated the JOSS manuscript into `paper/paper.md` and replaced `docs/joss.md` with a pointer to avoid divergence.
- **Repository Hygiene**: Moved top-level helper scripts into organized subfolders under `scripts/` and updated imports to the `videoannotator.*` package namespace.
- **Entrypoints**: Updated `api_server.py` to act as a compatibility wrapper; documentation now recommends the `videoannotator` CLI.
- **README**: Consolidated repeated setup/install instructions, fixed broken links, and replaced hard-coded test/coverage claims with CI status.
#### Fixed
- **Docs**: Standardized examples on the canonical API port `18011` and corrected Docker run port mappings.
- **Docs**: Replaced the placeholder `docs/usage/accessing_results.md` with a real results retrieval guide.
**Automated video analysis toolkit for human interaction research** - Extract comprehensive behavioral annotations from videos using AI pipelines, with an intuitive web interface for visualization and analysis.
- Supports batch processing and custom configurations
- Output Naming Conventions: [docs/development/output_naming_conventions.md](docs/development/output_naming_conventions.md) (stable patterns for downstream tooling)
- Emotion Validator Utility: [src/videoannotator/validation/emotion_validator.py](src/videoannotator/validation/emotion_validator.py) (programmatic validation of `.emotion.json` files)
- CLI Validation: `uv run videoannotator validate-emotion path/to/file.emotion.json` returns non-zero exit on failure
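For scripting, the validator's exit-code contract can be wrapped from Python. The `uv run videoannotator validate-emotion` invocation comes from the bullet above; the helper names and the example path are illustrative:

```python
# Hedged sketch: drive the CLI emotion validator from Python and act on its
# exit code. Assumes `uv` and videoannotator are installed on PATH; the
# helper names and file path below are illustrative, not part of the API.
import subprocess


def validate_emotion_cmd(path):
    """Build the documented validate-emotion command for a given file."""
    return ["uv", "run", "videoannotator", "validate-emotion", path]


def is_valid_emotion_file(path):
    """True when the validator exits 0, False on a non-zero exit."""
    return subprocess.run(validate_emotion_cmd(path)).returncode == 0
```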
Client tools (e.g. the Video Annotation Viewer) should rely on those sources or the `/api/v1/pipelines` endpoint rather than hard-coding pipeline assumptions.
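That discovery pattern can be sketched in Python. The host (`localhost`) and the JSON response shape are assumptions; only the port `18011` and the `/api/v1/pipelines` path come from the docs, so check both against a running server:

```python
# Hedged sketch: discover pipelines at runtime instead of hard-coding them.
# Host and response shape are assumptions; port and path are from the docs.
import json
from urllib.request import urlopen

API_BASE = "http://localhost:18011"  # canonical API port; host is an assumption


def pipelines_url(base=API_BASE):
    """Build the pipeline-discovery URL rather than hard-coding pipeline names."""
    return f"{base}/api/v1/pipelines"


def list_pipelines(base=API_BASE):
    """Fetch the pipeline list from a running VideoAnnotator server."""
    with urlopen(pipelines_url(base)) as resp:
        return json.load(resp)
```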
- **ELAN**: Linguistic annotation software compatibility
VideoAnnotator produces machine-readable outputs (primarily JSON files and API responses) intended to be easy to consume from common data tools.
### **Analysis Platforms**
- **Python**: Load the JSON outputs into pandas / numpy for analysis (see [examples/](examples/))
- **R / MATLAB**: No official helper packages yet, but the JSON outputs can be consumed with standard JSON readers
- **Visualization**: Use the companion [Video Annotation Viewer](https://github.com/InfantLab/video-annotation-viewer) for interactive playback and overlays
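As a minimal sketch of the pandas workflow above: the file path and record layout here are assumptions, so inspect your actual output (and the output naming conventions doc) before relying on specific column names:

```python
# Hedged sketch: load a VideoAnnotator JSON output into a pandas DataFrame.
# The path and record layout are assumptions; inspect your real output first.
import json

import pandas as pd


def load_annotations(path):
    """Read a JSON output file and flatten its records into tabular form."""
    with open(path) as f:
        data = json.load(f)
    # json_normalize handles both a list of records and a single object
    return pd.json_normalize(data if isinstance(data, list) else [data])
```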