This directory is reserved for future skill evaluation tests.
Evaluation tests (evals) validate that AI assistants correctly understand and apply the rules defined in this skill when generating code or providing guidance.
When implemented, evals will follow this structure:
```
evals/
├── naming/
│   ├── test-object-names.md
│   ├── test-field-keys.md
│   └── test-option-values.md
├── relationships/
│   ├── test-lookup-vs-master-detail.md
│   └── test-junction-patterns.md
├── validation/
│   ├── test-script-inversion.md
│   └── test-state-machine.md
└── ...
```
Each eval file will contain:
- Scenario — Description of the task
- Expected Output — Correct implementation
- Common Mistakes — Incorrect patterns to avoid
- Validation Criteria — How to score the output
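As a sketch of that layout, an individual eval file might look like the following (the object names and criteria here are hypothetical, not taken from an actual eval):

```markdown
# Eval: Lookup vs. Master-Detail

## Scenario
Model an Order object and its Line Items so that line items cannot
exist without a parent order.

## Expected Output
Line Item uses a master-detail relationship to Order, so deleting an
order cascades to its line items.

## Common Mistakes
Using a lookup relationship, which allows orphaned line items when the
parent order is deleted.

## Validation Criteria
Award full credit only if the relationship type is master-detail and
the rationale mentions cascade deletion.
```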
When adding evals:
- Each eval should test a single, specific rule or pattern
- Include both positive (correct) and negative (incorrect) examples
- Reference the corresponding rule file in rules/
- Use realistic scenarios from actual ObjectStack projects
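Because every eval file is expected to contain the same four sections, a small pre-commit check could reject incomplete files before they enter the suite. The helper below is a minimal sketch (the function name `missing_sections` and the sample text are illustrative, not part of any existing tooling):

```python
# Hypothetical check: flag eval files missing any of the four
# required sections described above.
REQUIRED_SECTIONS = [
    "Scenario",
    "Expected Output",
    "Common Mistakes",
    "Validation Criteria",
]


def missing_sections(eval_text: str) -> list[str]:
    """Return the required section headings absent from eval_text."""
    return [s for s in REQUIRED_SECTIONS if s not in eval_text]


sample = """# Eval: Object Names
Scenario: name a custom object for tracking shipments.
Expected Output: a singular, PascalCase object name.
Common Mistakes: plural or snake_case names.
"""
print(missing_sections(sample))  # prints ['Validation Criteria']
```

A real version would likely walk the evals/ tree and fail CI when the returned list is non-empty.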