test(automation): add unit tests for critical automation modules #3162

MarkusNeusinger merged 3 commits into main from
Conversation
Add comprehensive test coverage for previously untested modules:

- `sync_to_postgres.py`: 39 tests (0% → 60% coverage)
  - parse_timestamp, parse_bullet_points, convert_datetimes_to_strings
  - parse_spec_markdown, parse_metadata_yaml, parse_library_metadata_yaml
  - scan_plot_directory with various directory structures
- `plot_generator.py`: 22 tests (0% → 33% coverage)
  - extract_and_validate_code with markdown extraction and syntax validation
  - retry_with_backoff with rate limit and connection error handling
- `backfill_review_metadata.py`: 22 tests (0% → 50% coverage)
  - parse_ai_review_comment with various PR comment formats
  - parse_criteria_checklist with category and item parsing
  - update_metadata_file with dry run support
- `migrate_metadata_format.py`: 21 tests (0% → 87% coverage)
  - migrate_specification_yaml and migrate_library_metadata
  - extract_title_from_header with various formats
  - migrate_plot integration tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Codecov Report ✅ All modified and coverable lines are covered by tests.
Pull request overview
This pull request adds comprehensive unit test coverage for previously untested automation modules, introducing 104 new unit tests across four critical automation components.
Key Changes:
- Added 60% test coverage for `sync_to_postgres.py` (PostgreSQL sync logic and file parsing)
- Added 87% test coverage for `migrate_metadata_format.py` (metadata migration utilities)
- Added 50% test coverage for `backfill_review_metadata.py` (AI review comment parsing)
- Added 33% test coverage for `plot_generator.py` (code extraction and retry logic)
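The retry logic those tests exercise can be sketched as follows. Note that the `retry_with_backoff` signature, the `RateLimitError` type, and the injectable `sleep` parameter are assumptions for illustration, not the module's actual API:

```python
import time

class RateLimitError(Exception):
    """Hypothetical error type standing in for an API rate-limit response."""

def retry_with_backoff(func, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry func on RateLimitError/ConnectionError, doubling the delay each time."""
    for attempt in range(max_retries):
        try:
            return func()
        except (RateLimitError, ConnectionError):
            if attempt == max_retries - 1:
                raise  # exhausted retries: surface the last error
            sleep(base_delay * (2 ** attempt))

# In a unit test, injecting a no-op sleep keeps the test fast while still
# counting how many attempts were made before success.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RateLimitError()
    return "ok"

result = retry_with_backoff(flaky, sleep=lambda s: None)  # succeeds on the 3rd try
```

Injecting the sleep function (or monkeypatching `time.sleep`) is what makes backoff logic unit-testable without real delays.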
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| `tests/unit/automation/scripts/test_sync_to_postgres.py` | Adds 39 unit tests covering timestamp parsing, bullet point extraction, datetime conversion, spec markdown parsing, YAML metadata parsing, and directory scanning logic |
| `tests/unit/automation/scripts/test_migrate_metadata_format.py` | Adds 21 unit tests for metadata format migration including specification YAML migration, library metadata flattening, title extraction, and complete plot directory migration |
| `tests/unit/automation/scripts/test_backfill_review_metadata.py` | Adds 22 unit tests for parsing AI review comments, criteria checklists, and updating metadata files with review data |
| `tests/unit/automation/generators/test_plot_generator.py` | Adds 22 unit tests for code extraction from various markdown formats, syntax validation, and retry logic with exponential backoff |
| `tests/unit/automation/generators/__init__.py` | Adds missing `__init__.py` file for the test generators module |
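As a rough illustration of what the code-extraction tests in `test_plot_generator.py` exercise, a minimal extractor might look like this. The function body, regex, and return convention here are assumptions for illustration, not the module's actual implementation:

```python
import ast
import re

FENCE = "`" * 3  # triple backtick, built programmatically to keep this snippet fence-safe

def extract_and_validate_code(markdown: str):
    """Extract the first fenced Python block and confirm it parses as valid Python."""
    pattern = FENCE + r"(?:python)?\n(.*?)" + FENCE
    match = re.search(pattern, markdown, re.DOTALL)
    if not match:
        return None  # no fenced block found
    code = match.group(1)
    try:
        ast.parse(code)  # reject blocks that fail syntax validation
    except SyntaxError:
        return None
    return code

md = "Here is the plot:\n" + FENCE + "python\nprint('hi')\n" + FENCE + "\n"
code = extract_and_validate_code(md)
```

Using `ast.parse` for validation catches syntax errors without executing the extracted code, which matters when the code comes from an LLM response.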
```python
def test_parse_empty_comment_returns_none(self):
    result = parse_criteria_checklist("No checklist here, just text.")

    assert result is None or result == {}
```
The assertion "assert result is None or result == {}" is ambiguous and makes the test less clear. Consider splitting this into two separate test cases: one for when the function returns None, and another for when it returns an empty dict. This will make the expected behavior more explicit and failures easier to diagnose.
Suggested change:

```diff
-    assert result is None or result == {}
+    assert result is None
```
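The split the reviewer suggests could look like the sketch below. Since the real `parse_criteria_checklist` isn't shown in this diff, a hypothetical stub stands in for it; the checklist-marker heuristic and the "empty dict" input are assumptions:

```python
def parse_criteria_checklist(comment: str):
    """Hypothetical stub: None when no checklist markers are present,
    empty dict when a checklist exists but yields no categories."""
    if "- [" not in comment:
        return None
    return {}  # simplified: real parser would build a category dict

def test_parse_empty_comment_returns_none():
    # Plain text with no checklist at all should yield None.
    assert parse_criteria_checklist("No checklist here, just text.") is None

def test_parse_checklist_without_categories_returns_empty_dict():
    # A checklist marker with no parseable categories yields an empty dict.
    assert parse_criteria_checklist("- [ ] stray item") == {}

test_parse_empty_comment_returns_none()
test_parse_checklist_without_categories_returns_empty_dict()
```

Each test now pins down exactly one return value, so a regression in either path fails its own test instead of hiding behind the `or`.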
```python
if result and "visual_quality" in result:
    items = result["visual_quality"]["items"]
    if items:
        assert items[0]["comment"] == ""
```
The conditional assertions using "if result and..." make the test weak and unclear. If the function is expected to return a specific structure, the test should assert that explicitly. If None is a valid return value in some cases, create separate test cases for each scenario. Weak assertions like these can allow bugs to pass silently.
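One way to make the assertions unconditional, assuming the parser is expected to return this structure for the fixture comment. The stub, the fixture string, and the expected score values are illustrative assumptions, not the module's real behavior:

```python
def parse_criteria_checklist(comment: str) -> dict:
    """Hypothetical stub returning the structure the test expects."""
    return {
        "visual_quality": {
            "score": 36,
            "max": 40,
            "items": [{"passed": True, "comment": ""}],
        }
    }

def test_visual_quality_items_have_empty_comment():
    result = parse_criteria_checklist("## Visual Quality (36/40)")
    # Assert the structure outright: a None result or missing key now
    # fails the test loudly instead of silently skipping the assertions.
    assert result is not None
    assert "visual_quality" in result
    items = result["visual_quality"]["items"]
    assert items
    assert items[0]["comment"] == ""

test_visual_quality_items_have_empty_comment()
```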
```python
if result and "code_quality" in result:
    items = result["code_quality"]["items"]
    if items:
        assert items[0]["passed"] is True
```
The conditional assertions using "if result and..." make the test weak and unclear. If the function is expected to return a specific structure, the test should assert that explicitly. If None is a valid return value in some cases, create separate test cases for each scenario. Weak assertions like these can allow bugs to pass silently.
```python
if result and "visual_quality" in result:
    assert result["visual_quality"]["score"] == 36
    assert result["visual_quality"]["max"] == 40
```
The conditional assertions using "if result and..." make the test weak and unclear. If the function is expected to return a specific structure, the test should assert that explicitly. If None is a valid return value in some cases, create separate test cases for each scenario. Weak assertions like these can allow bugs to pass silently.
Remove tests for `backfill_review_metadata.py` and `migrate_metadata_format.py`, as these are one-time migration scripts that have already been executed.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…aries

Add tests for:

- Download success with mocked httpx response
- Download GCS error handling (502 response)
- Libraries cache hit
- Library images with database
- Library images cache hit

Coverage improvements:

- `api/routers/download.py`: 68% → 100%
- `api/routers/libraries.py`: 62% → 100%

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Summary

Coverage Improvements

- `sync_to_postgres.py`: 0% → 60%
- `backfill_review_metadata.py`: 0% → 50%
- `migrate_metadata_format.py`: 0% → 87%
- `plot_generator.py`: 0% → 33%

New Test Files

- `tests/unit/automation/scripts/test_sync_to_postgres.py` (39 tests)
- `tests/unit/automation/generators/test_plot_generator.py` (22 tests)
- `tests/unit/automation/scripts/test_backfill_review_metadata.py` (22 tests)
- `tests/unit/automation/scripts/test_migrate_metadata_format.py` (21 tests)

Test plan

🤖 Generated with Claude Code