This release accompanies the JOSS submission of VideoAnnotator and its companion project Video Annotation Viewer.
#### Changed
- **CLIP migration**: Migrated scene-classification pipeline from `clip` to `open_clip`, using the LAION-2B pretrained `ViT-B-32` model for improved availability and reproducibility.
- **HuggingFace auth**: Updated diarization and Whisper pipelines to use the current `token` parameter instead of the deprecated `use_auth_token`.
- **Devcontainer**: Simplified forwarded-port list to the single default API port (18011).

#### Fixed
- **Database GUID handling**: Added defensive `try/except` in the `GUID` type decorator to gracefully handle malformed UUID values.
- **Diarization init**: Wrapped model loading in explicit error handling with a clear log message on failure.
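The defensive GUID handling can be illustrated with a minimal, stdlib-only sketch. `coerce_guid` is a hypothetical helper name, not the actual decorator in VideoAnnotator's database layer; it shows the `try/except` pattern of tolerating malformed stored values instead of raising:

```python
import logging
import uuid
from typing import Optional

logger = logging.getLogger(__name__)

def coerce_guid(value: object) -> Optional[uuid.UUID]:
    """Defensively parse a stored GUID value, returning None if malformed.

    Hypothetical illustration of the try/except pattern; the real GUID
    type decorator applies this while loading rows from the database.
    """
    if value is None or isinstance(value, uuid.UUID):
        return value  # absent or already parsed
    try:
        return uuid.UUID(str(value))
    except (ValueError, AttributeError, TypeError):
        # Log and degrade gracefully rather than crash result processing.
        logger.warning("Malformed UUID value in database: %r", value)
        return None

print(coerce_guid("not-a-uuid"))  # None
print(coerce_guid("12345678-1234-5678-1234-567812345678"))
```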
#### Removed
- **Voice emotion baseline**: Removed `voice_emotion_baseline` pipeline metadata and associated tests (superseded by LAION EmoNet voice pipeline).
#### Documentation
- Added JOSS cover letter (`paper/cover_letter.md`).
- Updated paper bibliography version to v1.4.2.
## [1.4.1] - 2025-12-26
### Release Quality, Docs, and Developer Experience
**VideoAnnotator: an extensible, reproducible toolkit for automated video annotation in behavioral research**
Dear Editor,
We are pleased to submit *VideoAnnotator* for consideration by the Journal of Open Source Software. VideoAnnotator is an open-source Python toolkit that provides a unified, locally deployed framework for automated multi-modal video annotation, covering person tracking, facial analysis, scene detection, and audio processing, and is aimed at behavioral, social, and health researchers.
We believe the submission addresses the JOSS review criteria as follows:
- **Open license**: The software is released under the MIT license.
- **Repository and archival**: The source is hosted on GitHub at [InfantLab/VideoAnnotator](https://github.com/InfantLab/VideoAnnotator) and we will generate a versioned Zenodo DOI upon acceptance.
- **Contribution and community guidelines**: The repository includes `CONTRIBUTING.md`, `CODE_OF_CONDUCT.md`, issue templates, and `CITATION.cff`.
- **Automated tests and CI**: A pytest suite of 74 test files (unit, integration, and performance) runs via GitHub Actions on Ubuntu, Windows, and macOS with Python 3.12, alongside ruff linting, mypy type-checking, Trivy security scanning, and Codecov reporting.
- **Functionality documentation**: Full API documentation and usage guides are provided in the repository and rendered online.
- **Statement of need and state of the field**: The paper includes both sections, positioning VideoAnnotator relative to existing tools (ELAN, Datavyu, DeepLabCut, Py-Feat, OpenFace, openSMILE, PySceneDetect, pyannote) and explaining the gap it fills.
- **References**: All key upstream models and comparable tools are cited in the bibliography.
- **Research application**: The paper includes a research-impact statement describing current use at Stellenbosch University and the University of Oxford for caregiver–child interaction studies under the Global Parenting Initiative.
- **AI disclosure**: Included per JOSS policy.

We would also like to note that a parallel JOSS submission is being prepared for the companion project, **Video Annotation Viewer** ([InfantLab/video-annotation-viewer](https://github.com/InfantLab/video-annotation-viewer)), which provides the interactive browser-based interface for reviewing and validating VideoAnnotator outputs. The two packages are designed to work together but are independently installable and have distinct codebases.
This is our first submission to JOSS, so we very much appreciate any guidance you can offer throughout the review process. We are happy to address any feedback promptly.
Thank you for your time and consideration.
Sincerely,
Caspar Addyman (corresponding author), Jeremiah Ishaya, Irene Uwerikowe, Daniel Stamate, Jamie Lachman, and Mark Tomlinson