
feat(evaluations): auto-pop schema + openspec change (chunk A) #3299

Open
snopoke wants to merge 1 commit into main from sk/auto-pop-eval-datasets-schema
Conversation


@snopoke snopoke commented May 6, 2026

Product Description

No user-facing change in this PR. Lays the schema foundation for the upcoming auto-populate evaluation datasets feature, where teams can configure rules that automatically append matching sessions/messages from a source experiment to an evaluation dataset, and optionally trigger a delta evaluation run on each append. UI, ingestion task, and auto-trigger logic land in subsequent chunks.

Technical Description

This is Chunk A of the openspec change auto-populate-eval-datasets. The full proposal, design, specs, and task list are committed under `openspec/changes/auto-populate-eval-datasets/`.

Schema additions (additive only, no backfills):

  • `DatasetAutoPopulationRule` — team-scoped rule on an `EvaluationDataset`, holds source `Experiment`, evaluation mode, persisted `FilterParams` query string, enabled flag, high-water mark (`last_ingested_at`, defaulting to creation time so existing history is not backfilled), and run status fields (`last_run_at`, `last_run_status`, `last_error`, `consecutive_failure_count`).
  • `DatasetIngestionEntry` — provenance / dedup record with two partial unique constraints: `(rule, source_message)` for message-mode rules and `(rule, source_session)` for session-mode rules (where `source_message` is null). Provides per-rule, per-source idempotency at the database level.
  • `EvaluationConfig.auto_run_on_append` — opt-in boolean (default `False`) that controls whether a delta run fires when the dataset gains rows.
  • `EvaluationRunType.DELTA` — new run type for evaluations scoped to a subset of dataset messages.
  • `EvaluationRun.scoped_messages` M2M — captured at enqueue time so a delta run is unaffected by concurrent appends to the dataset.
  • New Waffle flag `flag_auto_populate_eval_datasets` (gated under `flag_evaluations`). View and beat-task gating land with chunks B and C.
  • Admin entries for the two new models.
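
The dedup guarantee described above comes from conditional unique indexes rather than application code. A minimal sketch of the idea in raw SQLite (table and column names here are illustrative, not the actual Django migration output):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ingestion_entry (
    id INTEGER PRIMARY KEY,
    rule_id INTEGER NOT NULL,
    source_session_id INTEGER NOT NULL,
    source_message_id INTEGER  -- NULL for session-mode rules
);
-- Message-mode rules: at most one entry per (rule, message)
CREATE UNIQUE INDEX uniq_rule_message
    ON ingestion_entry (rule_id, source_message_id)
    WHERE source_message_id IS NOT NULL;
-- Session-mode rules: at most one entry per (rule, session),
-- applied only to rows where source_message is null
CREATE UNIQUE INDEX uniq_rule_session
    ON ingestion_entry (rule_id, source_session_id)
    WHERE source_message_id IS NULL;
""")

conn.execute(
    "INSERT INTO ingestion_entry (rule_id, source_session_id, source_message_id) "
    "VALUES (1, 10, 100)"
)
# Re-ingesting the same message for the same rule violates the partial index,
# so idempotency holds even if the ingestion task runs twice concurrently.
try:
    conn.execute(
        "INSERT INTO ingestion_entry (rule_id, source_session_id, source_message_id) "
        "VALUES (1, 10, 100)"
    )
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```

The session-mode index behaves the same way for rows whose `source_message_id` is null, which is why the two constraints together cover both rule modes without overlapping.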

Migration: `0015_evaluationconfig_auto_run_on_append_and_more.py`. New nullable / defaulted columns and two new tables; no existing data is rewritten.
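
The enqueue-time capture behind `EvaluationRun.scoped_messages` can be sketched in plain Python (names and shapes are illustrative, not the actual models): the run snapshots the message ids it should evaluate at the moment it is queued, so rows appended to the dataset afterwards do not widen its scope.

```python
# Hypothetical illustration of enqueue-time snapshotting for delta runs.
dataset_messages = [1, 2, 3]  # messages currently in the dataset


def enqueue_delta_run(new_message_ids):
    # Snapshot: copy the ids into the run record when it is enqueued,
    # not when the worker eventually executes it.
    return {"run_type": "DELTA", "scoped_messages": list(new_message_ids)}


run = enqueue_delta_run([2, 3])  # delta run over the latest appends

# A concurrent append lands before the worker picks the run up...
dataset_messages.append(4)

# ...but the run's scope is unaffected by it.
print(run["scoped_messages"])  # [2, 3]
```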

Out of scope for this PR (chunks B–E): rule create/edit forms and views, the periodic ingestion task, the dataset-append → delta-run hook, the run-history UI for delta runs, integration tests, docs.

Validation:

  • `uv run python manage.py check` — clean.
  • `uv run ruff check apps/evaluations apps/teams/flags.py` — clean.
  • `uv run ty check apps/evaluations` — clean.
  • `uv run pytest apps/evaluations/tests/` — 194 passed (no regressions in existing eval flows).

Migrations

  • The migrations are backwards compatible

Demo

n/a — schema-only PR, no runtime behaviour change. Demo will accompany Chunk C (ingestion task) and D (auto-trigger).

Docs and Changelog

  • This PR requires docs/changelog update

The openspec change definition itself ships in this PR (proposal/design/specs/tasks). Feature-flag and developer docs land alongside chunks C/D when the user-facing behaviour exists.
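
The "gated under `flag_evaluations`" semantics can be sketched in plain Python (this is an illustration of the intended behaviour, not the actual waffle integration): the child flag only takes effect when the parent flag is also active.

```python
# Hypothetical per-team flag set; in the real system these would be
# waffle flags evaluated per request/team.
active_flags = {"flag_evaluations"}


def auto_populate_enabled(flags):
    # Parent gate first: without evaluations enabled, the auto-populate
    # flag has no effect regardless of its own state.
    return (
        "flag_evaluations" in flags
        and "flag_auto_populate_eval_datasets" in flags
    )


print(auto_populate_enabled(active_flags))  # False: child flag not set
print(auto_populate_enabled(active_flags | {"flag_auto_populate_eval_datasets"}))  # True
```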

🤖 Generated with Claude Code

Chunk A of openspec change `auto-populate-eval-datasets`. Lays the schema
foundation for rule-driven dataset auto-population and delta evaluation
runs. No runtime behaviour change yet — views, ingestion task, and
auto-trigger logic land in subsequent chunks (B–E).

- Add `DatasetAutoPopulationRule` with high-water-mark + status fields.
- Add `DatasetIngestionEntry` with partial unique constraints providing
  per-rule, per-source idempotency for both message and session modes.
- Add `EvaluationConfig.auto_run_on_append` opt-in flag.
- Add `EvaluationRunType.DELTA` choice and `EvaluationRun.scoped_messages`
  M2M to support runs scoped to a subset of dataset messages.
- Register `flag_auto_populate_eval_datasets` Waffle flag (gated under
  `flag_evaluations`).
- Wire up admin entries for the two new models.
- Include the openspec change definition (proposal/design/specs/tasks).

Migration is additive only — defaults preserve existing-row behaviour.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

coderabbitai Bot commented May 6, 2026

Warning

Rate limit exceeded

@snopoke has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 59 minutes and 48 seconds before requesting another review.


After the wait time has elapsed, a review can be triggered with the @coderabbitai review command as a PR comment, or by pushing new commits.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 3c95356d-d6b6-4f04-a4e5-5a313567d061

📥 Commits

Reviewing files that changed from the base of the PR and between 11adc45 and ee534fc.

📒 Files selected for processing (11)
  • apps/evaluations/admin.py
  • apps/evaluations/migrations/0015_evaluationconfig_auto_run_on_append_and_more.py
  • apps/evaluations/models.py
  • apps/teams/flags.py
  • openspec/changes/auto-populate-eval-datasets/.openspec.yaml
  • openspec/changes/auto-populate-eval-datasets/design.md
  • openspec/changes/auto-populate-eval-datasets/proposal.md
  • openspec/changes/auto-populate-eval-datasets/specs/dataset-auto-population/spec.md
  • openspec/changes/auto-populate-eval-datasets/specs/dataset-linked-evaluations/spec.md
  • openspec/changes/auto-populate-eval-datasets/tasks.md
  • openspec/config.yaml


@codecov-commenter

❌ 1 Tests Failed:

| Tests completed | Failed | Passed | Skipped |
| --- | --- | --- | --- |
| 3014 | 1 | 3013 | 2 |
Top failed test (by shortest run time):
apps/teams/tests/test_permissions.py::test_missing_content_types
Stack trace (0.002s run time):

```
.../teams/tests/test_permissions.py:97: in test_missing_content_types
    assert not missing, f"Missing content types for {missing} in {app_label}"
E   AssertionError: Missing content types for {'datasetautopopulationrule', 'datasetingestionentry'} in evaluations
E   assert not {'datasetautopopulationrule', 'datasetingestionentry'}
```


