# Spec-Driven Development

This directory contains specs for planned and in-progress features, fixes, and improvements. Specs follow the workflow described in [How I Use Claude Code](https://boristane.com/blog/how-i-use-claude-code/).

The core principle: **never let an AI write code until you have reviewed and approved a written plan.** Separating thinking from typing prevents wasted effort, keeps you in control of architecture decisions, and produces significantly better results.

---

## Directory Structure

```txt
specs/
├── README.md          # This file.
├── _template.md       # Spec template — copy this for every new spec.
├── spec_0001_*.md     # Individual spec files, numbered sequentially.
└── spec_0002_*.md
```

Name spec files as `spec_XXXX_short_description.md` using a zero-padded four-digit number.
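The naming convention can be automated. A minimal sketch of a helper that picks the next number from the files already present (the helper itself is illustrative, not part of the template):

```python
import re
from pathlib import Path


def next_spec_filename(specs_dir: str, description: str) -> str:
    """Return the next spec filename, e.g. 'spec_0003_add_rate_limiting.md'."""
    pattern = re.compile(r"spec_(\d{4})_.*\.md")
    # Collect the four-digit numbers of existing specs in the directory.
    numbers = [
        int(m.group(1))
        for path in Path(specs_dir).glob("spec_*.md")
        if (m := pattern.fullmatch(path.name))
    ]
    # Turn the description into a lowercase, underscore-separated slug.
    slug = re.sub(r"[^a-z0-9]+", "_", description.lower()).strip("_")
    return f"spec_{max(numbers, default=0) + 1:04d}_{slug}.md"
```

For an empty directory, `next_spec_filename("specs", "Add rate limiting")` yields `spec_0001_add_rate_limiting.md`.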

---

## The Workflow

The workflow has five phases, all captured in a **single spec file**.

### Phase 0: Overview and Acceptance Criteria

Before any research, fill in the **Overview** section with one or two sentences describing the feature, fix, or change and the desired outcome. Then, optionally, add a bulleted **Acceptance Criteria** subsection listing the terse, testable conditions that must all be true for the work to be considered complete. These criteria anchor the rest of the spec and give the AI a clear definition of done.
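A filled-in Phase 0 might look like this (the feature and criteria are illustrative):

```markdown
## Overview

Add rate limiting to the public API so a single client cannot exhaust
shared capacity.

### Acceptance Criteria

- Requests beyond the per-key limit receive HTTP 429.
- The limit is configurable without a code change.
- Existing tests still pass; new tests cover the limiter.
```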

### Phase 1: Research

Copy `_template.md`, fill in the title and overview, then ask the AI to research the relevant parts of the codebase and populate the Research section.

Key prompt guidance:

- Explicitly demand depth: ask the AI to understand the system "in depth", "in great detail", and to cover "all its intricacies". Without this language, the AI will skim.
- Require the findings to be written into the Research section before any planning begins.

**Review the Research section yourself.** If the AI misunderstood the system, the plan will be wrong, and the implementation will be wrong. Correct any misunderstandings now. This is the highest-leverage step in the entire workflow.
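An example research prompt in this spirit (the feature named is illustrative):

```txt
I want to add rate limiting to the public API. Research the request
handling pipeline in depth and in great detail, covering all its
intricacies, and write your findings into the Research section of the
spec. Do not plan or implement yet.
```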

### Phase 2: Plan

Once you approve the research, ask the AI to write the implementation plan into the Plan section. A good plan includes:

- The overall approach and why it is correct for the existing system.
- A table of files to be modified.
- Code snippets showing the actual proposed changes (not pseudocode).
- A testing approach for any new or changed Python functionality.
- Trade-offs and edge cases.

Useful prompt to get started:

> "I want to build [feature/fix]. Write a detailed plan in the Plan section of the spec. Read source files before suggesting changes — base the plan on the actual codebase. Do not implement yet."

**If you have a reference implementation** (e.g., a pattern already used elsewhere in the codebase, or a well-designed open-source example), share it with the AI. Reference implementations dramatically improve plan quality.
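A sketch of the "table of files" portion of a plan (all paths and names are illustrative):

```markdown
### Files to Modify

| File                       | Change                                            |
| -------------------------- | ------------------------------------------------- |
| `api/middleware.py`        | Add rate-limit middleware to the request pipeline. |
| `api/config.py`            | New per-key limit setting with a sensible default. |
| `tests/test_rate_limit.py` | New tests covering the limiter.                    |
```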

### Phase 3: Annotate

This is where you add the most value. Open the spec file in your editor and add inline notes directly into the Plan section. Prefix every annotation with `COMMENT:` so the AI can identify your notes at a glance. If the AI has questions in the Open Questions section, answer them with `ANSWER:` annotations.

Annotations can:

- Correct a wrong assumption (e.g., `COMMENT: No — this should be a PATCH, not a PUT.`)
- Reject a proposed approach (e.g., `COMMENT: Remove this section — we do not need caching here.`)
- Add a constraint (e.g., `COMMENT: This function signature must not change.`)
- Inject domain knowledge the AI does not have.
- Answer an AI question (e.g., `ANSWER: Use the existing UserService, do not create a new one.`)
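An annotated plan excerpt might look like this (the plan content is illustrative):

```markdown
### Endpoint changes

Change `PUT /users/{id}` to accept partial payloads.
COMMENT: No — this should be a PATCH, not a PUT. Keep PUT for full
replacement only.

### Open Questions

1. Should we reuse UserService or create a dedicated handler?
   ANSWER: Use the existing UserService, do not create a new one.
```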

Then send the AI back to the document:

> "I added `COMMENT:` notes to the plan and `ANSWER:` responses to any open questions. Address all notes and update the plan accordingly. Do not implement yet."

Repeat this cycle until the plan is exactly right. **Always include "do not implement yet."** Without it, the AI will start writing code the moment it thinks the plan is good enough.

Once the plan is approved, ask the AI to populate the Tasks section with a granular, ordered checklist of every step needed to complete the plan.

### Phase 4: Implementation

When the plan and tasks are finalized, issue the implementation command:

> "Implement all tasks. Mark each task as completed in the spec as you finish it. Run `pre-commit run --all-files` after each logical change. Do not stop until all tasks are marked complete. Do not add unnecessary comments or docstrings to code you did not change."

The key phrases encoded in this prompt:

- **"Implement all tasks"** — do everything in the plan; do not cherry-pick.
- **"Mark each task as completed"** — the spec is the source of truth for progress.
- **"Run `pre-commit run --all-files`"** — catch lint and formatting issues early, not at the end. This also validates documentation and configuration files.
- **"Do not stop until all tasks are marked complete"** — do not pause mid-flow for confirmation.
- **"Do not add unnecessary comments or docstrings"** — keep the code clean.

During implementation, your prompts should be short and terse — the AI has the full plan context. A correction like "You didn't implement the `parse_token` function" is enough.

**If something goes badly wrong,** revert with `git checkout` and narrow the scope rather than trying to patch a bad approach:

> "I reverted everything. Now I only want [narrow change] — nothing else."

---

## Tips for Best Results

- **Run research and implementation in a single session.** By the time you say "implement it all," the AI has built a deep understanding of the codebase through the research and annotation phases. A single session produces better results than splitting the work across several.
- **Be precise in annotations.** Two words (`COMMENT: Not optional.`) can be enough. A paragraph is fine when domain knowledge is needed.
- **Include tests in the plan.** For any spec that changes Python code, the plan should have a Testing Approach subsection and the Tasks list should include test tasks.
- **Trim scope actively.** Remove nice-to-haves from the plan before implementation starts. Preventing scope creep is your job, not the AI's.
- **Reference existing patterns.** Point to similar code already in the project ("this should follow the same pattern as `services/users/`"). The AI reads the reference and applies all the implicit conventions without you having to enumerate them.
- **Check the `pre-commit` output after implementation.** The pre-commit hooks run ruff (linting and formatting), type checking, and other validators. A clean pre-commit run is a required exit criterion for every spec.

---

## Spec Status

Update the status field at the top of each spec as work progresses:

| Status        | Meaning                                             |
| ------------- | --------------------------------------------------- |
| `Draft`       | Overview written; research not yet started.         |
| `In Progress` | Actively being researched, planned, or implemented. |
| `Complete`    | All tasks done; pre-commit and tests pass.          |
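The status lives at the top of each spec file. A sketch of what the header might look like (the exact layout follows `_template.md`; this spec title is illustrative):

```markdown
# Spec 0001: Add Rate Limiting

**Status:** In Progress
```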