Commit ab65cae (parent: 3788ba1)
Author: catlog22

feat: enhance wave pipeline skills with rich task fields and cross-file consistency

- Add 7 new CSV columns (test, acceptance_criteria, scope, hints, execution_directives, tests_passed, acceptance_met) to tasks.csv schema across all 3 pipeline skills
- Create .codex/skills/wave-plan-pipeline as Codex version of workflow-wave-plan with spawn_agents_on_csv calling conventions
- Align instruction templates with MANDATORY FIRST STEPS and 11-step execution protocol across all files
- Standardize context.md reports with Waves metric and Dependencies row
- Unify Discovery Board protocol with Dedup Key table and test_command
- Add Best Practices and Usage Recommendations to workflow-wave-plan

3 files changed: 1384 additions & 49 deletions

File: .claude/skills/workflow-wave-plan/SKILL.md (149 additions & 27 deletions)
@@ -86,13 +86,20 @@ Two context channels:
 |--------|------|--------|-------------|
 | `id` | string | Planner | T1, T2, ... |
 | `title` | string | Planner | Task title |
-| `description` | string | Planner | Self-contained task description |
+| `description` | string | Planner | Self-contained task description — what to implement |
+| `test` | string | Planner | Test cases: what tests to write and how to verify (unit/integration/edge) |
+| `acceptance_criteria` | string | Planner | Measurable conditions that define "done" |
+| `scope` | string | Planner | Target file/directory glob — constrains agent write area, prevents cross-task file conflicts |
+| `hints` | string | Planner | Implementation tips + reference files. Format: `tips text \|\| file1;file2`. Either part is optional |
+| `execution_directives` | string | Planner | Execution constraints: commands to run for verification, tool restrictions |
 | `deps` | string | Planner | Dependency task IDs: T1;T2 |
 | `context_from` | string | Planner | Context source IDs: **E1;E2;T1** |
 | `wave` | integer | Wave Engine | Wave number (computed from deps) |
 | `status` | enum | Agent | pending / completed / failed / skipped |
 | `findings` | string | Agent | Execution findings (max 500 chars) |
 | `files_modified` | string | Agent | Files modified (semicolon-separated) |
+| `tests_passed` | boolean | Agent | Whether all defined test cases passed (true/false) |
+| `acceptance_met` | string | Agent | Summary of which acceptance criteria were met/unmet |
 | `error` | string | Agent | Error if failed |
 
 **context_from prefix convention**: `E*` → explore.csv lookup, `T*` → tasks.csv lookup.
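The `wave` column above is derived from `deps` by the Wave Engine. A minimal sketch of that derivation follows; the `computeWaves` name appears in the planner code later in this diff, but this body is an assumption, not the skill's actual implementation:

```javascript
// Sketch (assumption): wave = 1 + max(wave of deps); tasks with no deps land in wave 1.
// A full pass that assigns nothing means the remaining tasks form a cycle.
function computeWaves(tasks) {
  const waves = {}
  let assigned = 0
  while (assigned < tasks.length) {
    const before = assigned
    for (const t of tasks) {
      if (waves[t.id]) continue
      const deps = t.deps ? t.deps.split(';') : []
      if (deps.every(d => waves[d])) {
        waves[t.id] = deps.length ? Math.max(...deps.map(d => waves[d])) + 1 : 1
        assigned++
      }
    }
    if (assigned === before) throw new Error('Circular dependency detected')
  }
  return waves
}
```

For a diamond dependency (T1→T2, T1→T3, T2;T3→T4) this assigns waves 1, 2, 2, 3, so T2 and T3 run in parallel.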
@@ -261,12 +268,19 @@ function buildExplorePrompt(row, requirement, sessionFolder) {
 **Requirement**: ${requirement}
 **Focus**: ${row.focus}
 
+### MANDATORY FIRST STEPS
+1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
+2. Read project context: .workflow/project-tech.json (if exists)
+
+---
+
 ## Instructions
 Explore the codebase from the **${row.angle}** perspective:
 1. Discover relevant files, modules, and patterns
 2. Identify integration points and dependencies
 3. Note constraints, risks, and conventions
 4. Find existing patterns to follow
+5. Share discoveries: append findings to ${sessionFolder}/discoveries.ndjson
 
 ## Output
 Write findings to: ${sessionFolder}/explore-results/${row.id}.json
@@ -354,22 +368,27 @@ Decompose into execution tasks based on synthesized exploration:
 // 5. Prefer parallel (minimize deps)
 // 6. Use exploration findings: key_files → target files, patterns → references,
 //    integration_points → dependency relationships, constraints → included in description
+// 7. Each task MUST include: test (how to verify), acceptance_criteria (what defines done)
+// 8. scope must not overlap between tasks in the same wave
+// 9. hints = implementation tips + reference files (format: tips || file1;file2)
+// 10. execution_directives = commands to run for verification, tool restrictions
 
 const tasks = []
 // Claude decomposes requirement using exploration synthesis
 // Example:
-// tasks.push({ id: 'T1', title: 'Setup types', description: '...', deps: '', context_from: 'E1;E2' })
-// tasks.push({ id: 'T2', title: 'Implement core', description: '...', deps: 'T1', context_from: 'E1;E2;T1' })
-// tasks.push({ id: 'T3', title: 'Add tests', description: '...', deps: 'T2', context_from: 'E3;T2' })
+// tasks.push({ id: 'T1', title: 'Setup types', description: '...', test: 'Verify types compile', acceptance_criteria: 'All interfaces exported', scope: 'src/types/**', hints: 'Follow existing type patterns || src/types/index.ts', execution_directives: 'tsc --noEmit', deps: '', context_from: 'E1;E2' })
+// tasks.push({ id: 'T2', title: 'Implement core', description: '...', test: 'Unit test: core logic', acceptance_criteria: 'All functions pass tests', scope: 'src/core/**', hints: 'Reuse BaseService || src/services/Base.ts', execution_directives: 'npm test -- --grep core', deps: 'T1', context_from: 'E1;E2;T1' })
+// tasks.push({ id: 'T3', title: 'Add tests', description: '...', test: 'Integration test suite', acceptance_criteria: '>80% coverage', scope: 'tests/**', hints: 'Follow existing test patterns || tests/auth.test.ts', execution_directives: 'npm test', deps: 'T2', context_from: 'E3;T2' })
 
 // Compute waves
 const waves = computeWaves(tasks)
 tasks.forEach(t => { t.wave = waves[t.id] })
 
 // Write tasks.csv
-const header = 'id,title,description,deps,context_from,wave,status,findings,files_modified,error'
+const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
 const rows = tasks.map(t =>
-  `"${t.id}","${escCSV(t.title)}","${escCSV(t.description)}","${t.deps}","${t.context_from}",${t.wave},"pending","","",""`
+  [t.id, escCSV(t.title), escCSV(t.description), escCSV(t.test), escCSV(t.acceptance_criteria), escCSV(t.scope), escCSV(t.hints), escCSV(t.execution_directives), t.deps, t.context_from, t.wave, 'pending', '', '', '', '', '']
+    .map(v => `"${v}"`).join(',')
 )
 
 Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
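The row-writing code above calls an `escCSV` helper that this hunk does not define. A plausible minimal version, assuming standard RFC 4180 quote-doubling (an assumption, not the skill's actual helper):

```javascript
// Sketch (assumption): escape a value for embedding inside a double-quoted CSV field
// by doubling any embedded double quotes; null/undefined become the empty string.
function escCSV(value) {
  return String(value ?? '').replace(/"/g, '""')
}
```

Note the new row builder wraps every field in `"…"`, so this escaping is what keeps quotes inside `description` or `hints` from breaking the CSV.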
@@ -478,6 +497,8 @@ for (let wave = 1; wave <= maxWave; wave++) {
       row.files_modified = Array.isArray(result.files_modified)
         ? result.files_modified.join(';')
         : (result.files_modified || '')
+      row.tests_passed = String(result.tests_passed ?? '')
+      row.acceptance_met = result.acceptance_met || ''
       row.error = result.error || ''
     } else {
       row.status = 'completed'
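The write-back above assumes each worker's result JSON parses cleanly. A hedged sketch of loading one result file, with the fallbacks implied by the error-handling table later in the skill (`readTaskResult` is a hypothetical name, not part of the diff):

```javascript
// Sketch (assumption): read a worker's result JSON; a missing or unparseable
// file is treated as a failed task rather than aborting the wave.
function readTaskResult(sessionFolder, id) {
  const fs = require('fs')
  const path = `${sessionFolder}/task-results/${id}.json`
  if (!fs.existsSync(path)) return { status: 'failed', error: 'no result file' }
  try {
    return JSON.parse(fs.readFileSync(path, 'utf8'))
  } catch (e) {
    return { status: 'failed', error: `unparseable result: ${e.message}` }
  }
}
```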
@@ -533,33 +554,69 @@ function buildExecutePrompt(row, requirement, sessionFolder) {
 
 **ID**: ${row.id}
 **Goal**: ${requirement}
+**Scope**: ${row.scope || 'Not specified'}
 
 ## Description
 ${row.description}
 
+### Implementation Hints & Reference Files
+${row.hints || 'None'}
+
+> Format: \`tips text || file1;file2\`. Read ALL reference files (after ||) before starting. Apply tips (before ||) as guidance.
+
+### Execution Directives
+${row.execution_directives || 'None'}
+
+> Commands to run for verification, tool restrictions, or environment requirements.
+
+### Test Cases
+${row.test || 'None specified'}
+
+### Acceptance Criteria
+${row.acceptance_criteria || 'None specified'}
+
 ## Previous Context (from exploration and predecessor tasks)
 ${row._prev_context}
 
-## Discovery Board
-Read shared discoveries first: ${sessionFolder}/discoveries.ndjson (if exists)
-After execution, append any discoveries:
-echo '{"ts":"<ISO>","worker":"${row.id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
+### MANDATORY FIRST STEPS
+1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
+2. Read project context: .workflow/project-tech.json (if exists)
 
-## Instructions
-1. Read the relevant files identified in the context above
-2. Implement changes described in the task description
-3. Ensure changes are consistent with exploration findings
-4. Test changes if applicable
+---
+
+## Execution Protocol
+
+1. **Read references**: Parse hints — read all files listed after \`||\` to understand existing patterns
+2. **Read discoveries**: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
+3. **Use context**: Apply previous tasks' findings from prev_context above
+4. **Stay in scope**: ONLY create/modify files within ${row.scope || 'project'} — do NOT touch files outside this boundary
+5. **Apply hints**: Follow implementation tips from hints (before \`||\`)
+6. **Implement**: Execute changes described in the task description
+7. **Write tests**: Implement the test cases defined above
+8. **Run directives**: Execute commands from execution_directives to verify your work
+9. **Verify acceptance**: Ensure all acceptance criteria are met before reporting completion
+10. **Share discoveries**: Append exploration findings to shared board:
+\`\`\`bash
+echo '{"ts":"<ISO>","worker":"${row.id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
+\`\`\`
+11. **Report result**: Write JSON to output file
 
 ## Output
 Write results to: ${sessionFolder}/task-results/${row.id}.json
 
 {
-  "status": "completed",
+  "status": "completed" | "failed",
   "findings": "What was done (max 500 chars)",
   "files_modified": ["file1.ts", "file2.ts"],
+  "tests_passed": true | false,
+  "acceptance_met": "Summary of which acceptance criteria were met/unmet",
   "error": ""
-}`
+}
+
+**IMPORTANT**: Set status to "completed" ONLY if:
+- All test cases pass
+- All acceptance criteria are met
+Otherwise set status to "failed" with details in error field.`
 }
 ```
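The hints format the prompt describes (`tips text || file1;file2`, either part optional) can be split mechanically. `parseHints` below is a hypothetical helper sketching that parse; it is not part of the skill:

```javascript
// Sketch (assumption): split a hints string into implementation tips (before '||')
// and semicolon-separated reference files (after '||'); either side may be absent.
function parseHints(hints) {
  if (!hints) return { tips: '', files: [] }
  const [tips, fileList] = hints.split('||').map(s => s.trim())
  return {
    tips: tips || '',
    files: fileList ? fileList.split(';').map(f => f.trim()).filter(Boolean) : [],
  }
}
```

An agent following the 11-step protocol would read every entry in `files` first (step 1), then apply `tips` during implementation (step 5).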
@@ -610,11 +667,30 @@ Key files: ${e.key_files || 'none'}`).join('\n\n')}
 ## Task Results
 
 ${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})
-- Context from: ${t.context_from || 'none'}
-- Wave: ${t.wave}
-- Findings: ${t.findings || 'N/A'}
-- Files: ${t.files_modified || 'none'}
-${t.error ? `- Error: ${t.error}` : ''}`).join('\n\n')}
+
+| Field | Value |
+|-------|-------|
+| Wave | ${t.wave} |
+| Scope | ${t.scope || 'none'} |
+| Dependencies | ${t.deps || 'none'} |
+| Context From | ${t.context_from || 'none'} |
+| Tests Passed | ${t.tests_passed || 'N/A'} |
+| Acceptance Met | ${t.acceptance_met || 'N/A'} |
+| Error | ${t.error || 'none'} |
+
+**Description**: ${t.description}
+
+**Test Cases**: ${t.test || 'N/A'}
+
+**Acceptance Criteria**: ${t.acceptance_criteria || 'N/A'}
+
+**Hints**: ${t.hints || 'N/A'}
+
+**Execution Directives**: ${t.execution_directives || 'N/A'}
+
+**Findings**: ${t.findings || 'N/A'}
+
+**Files Modified**: ${t.files_modified || 'none'}`).join('\n\n---\n\n')}
 
 ## All Modified Files
 
@@ -739,13 +815,34 @@ function truncate(s, max) {
 
 Shared `discoveries.ndjson` — append-only NDJSON accessible to all agents across all phases.
 
+**Lifecycle**:
+- Created by the first agent to write a discovery
+- Carries over across all phases and waves — never cleared
+- Agents append via `echo '...' >> discoveries.ndjson`
+
+**Format**: NDJSON, each line is a self-contained JSON:
+
 ```jsonl
 {"ts":"...","worker":"E1","type":"code_pattern","data":{"name":"repo-pattern","file":"src/repos/Base.ts"}}
 {"ts":"...","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","exports":["auth"]}}
 ```
 
-**Types**: `code_pattern`, `integration_point`, `convention`, `blocker`, `tech_stack`
-**Rules**: Read first → write immediately → deduplicate → append-only
+**Discovery Types**:
+
+| type | Dedup Key | Description |
+|------|-----------|-------------|
+| `code_pattern` | `data.name` | Reusable code pattern found |
+| `integration_point` | `data.file` | Module connection point |
+| `convention` | singleton | Code style conventions |
+| `blocker` | `data.issue` | Blocking issue encountered |
+| `tech_stack` | singleton | Project technology stack |
+| `test_command` | singleton | Test commands discovered |
+
+**Protocol Rules**:
+1. Read board before own exploration → skip covered areas
+2. Write discoveries immediately via `echo >>` → don't batch
+3. Deduplicate — check existing entries; skip if same type + dedup key exists
+4. Append-only — never modify or delete existing lines
 
 ---
 
@@ -754,11 +851,12 @@ Shared `discoveries.ndjson` — append-only NDJSON accessible to all agents across all phases.
 | Error | Resolution |
 |-------|------------|
 | Explore agent failure | Mark as failed in explore.csv, exclude from planning |
+| All explores failed | Fallback: plan directly from requirement without exploration |
 | Execute agent failure | Mark as failed, skip dependents (cascade) |
+| Agent timeout | Mark as failed in results, continue with wave |
 | Circular dependency | Abort wave computation, report cycle |
-| All explores failed | Fallback: plan directly from requirement |
-| CSV parse error | Re-validate format |
-| discoveries.ndjson corrupt | Ignore malformed lines |
+| CSV parse error | Validate CSV format before execution, show line number |
+| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
 
 ---
 
@@ -772,3 +870,27 @@ Shared `discoveries.ndjson` — append-only NDJSON accessible to all agents across all phases.
 6. **Discovery Board Append-Only**: Never clear or modify discoveries.ndjson
 7. **Explore Before Execute**: Phase 2 completes before Phase 4 starts
 8. **DO NOT STOP**: Continuous execution until all waves complete or remaining skipped
+
+---
+
+## Best Practices
+
+1. **Exploration Angles**: 1 for simple, 3-4 for complex; avoid redundant angles
+2. **Context Linking**: Link every task to at least one explore row (E*) — exploration was done for a reason
+3. **Task Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism
+4. **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism
+5. **Specific Descriptions**: Agent sees only its CSV row + prev_context — make description self-contained
+6. **Non-Overlapping Scopes**: Same-wave tasks must not write to the same files
+7. **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow
+
+---
+
+## Usage Recommendations
+
+| Scenario | Recommended Approach |
+|----------|---------------------|
+| Complex feature (unclear architecture) | `workflow:wave-plan` — explore first, then plan |
+| Simple known-pattern task | `$csv-wave-pipeline` — skip exploration, direct execution |
+| Independent parallel tasks | `$csv-wave-pipeline -c 8` — single wave, max parallelism |
+| Diamond dependency (A→B,C→D) | `workflow:wave-plan` — 3 waves with context propagation |
+| Unknown codebase | `workflow:wave-plan` — exploration phase is essential |
