---
description: Conduct a structured retrospective analysis of the completed development cycle with metrics, learnings, and improvement suggestions.
scripts:
  sh: scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks
  ps: scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Pre-Execution Checks

**Check for extension hooks (before retro)**:
- Check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.before_retro` key.
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally.
- Filter out hooks where `enabled` is explicitly `false`. Treat hooks without an `enabled` field as enabled by default.
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
  - If the hook has no `condition` field, or it is null or empty, treat the hook as executable.
  - If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation.
- For each executable hook, output the following based on its `optional` flag:
  - **Optional hook** (`optional: true`):
    ```
    ## Extension Hooks

    **Optional Pre-Hook**: {extension}
    Command: `/{command}`
    Description: {description}

    Prompt: {prompt}
    To execute: `/{command}`
    ```
  - **Mandatory hook** (`optional: false`):
    ```
    ## Extension Hooks

    **Automatic Pre-Hook**: {extension}
    Executing: `/{command}`
    EXECUTE_COMMAND: {command}

    Wait for the result of the hook command before proceeding to the Outline.
    ```
- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently.
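
The checks above assume an extensions file shaped roughly like the following. This is a hypothetical sketch: the extension names, commands, and descriptions are illustrative, and only the `hooks`, `enabled`, `optional`, and `condition` fields mirror what the checks actually read.

```yaml
# Hypothetical .specify/extensions.yml sketch; the real schema may differ.
hooks:
  before_retro:
    - extension: metrics-collector        # illustrative name
      command: speckit.collect-metrics    # surfaced as /speckit.collect-metrics
      optional: true                      # rendered as an Optional Pre-Hook
      description: Gather cycle metrics before the retro
      prompt: Run metrics collection first?
    - extension: changelog-sync
      command: speckit.sync-changelog
      optional: false                     # rendered as an Automatic Pre-Hook
      # enabled defaults to true when omitted
      condition: "feature.shipped"        # non-empty, so left to HookExecutor
```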

## Goal

Conduct a structured retrospective analysis of the completed development cycle, from specification through shipping. Analyze what went well, what didn't, and generate actionable improvement suggestions for future iterations. Track metrics over time to identify trends and continuously improve the spec-driven development process.

## Operating Constraints

**CONSTRUCTIVE FOCUS**: The retrospective should be balanced, celebrating successes alongside identifying improvements. Avoid blame; focus on process improvements.

**DATA-DRIVEN**: Base analysis on actual artifacts, git history, and measurable outcomes rather than subjective impressions.

**OPTIONAL WRITES**: The retro report is always written. Updates to `constitution.md` with new learnings are offered but require explicit user approval.

## Outline

1. Run `{SCRIPT}` from repo root and parse the FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in arguments like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
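
   As a sketch, the script's `--json` output might be parsed like this. The exact output shape is an assumption; only the FEATURE_DIR and AVAILABLE_DOCS key names mirror the fields named above.

   ```shell
   # Hypothetical JSON shape; the real check-prerequisites.sh output may differ.
   json='{"FEATURE_DIR":"/repo/specs/042-retro","AVAILABLE_DOCS":["spec.md","tasks.md","plan.md"]}'

   # Extract FEATURE_DIR with sed (avoids a jq dependency for this sketch)
   feature_dir=$(printf '%s' "$json" | sed -n 's/.*"FEATURE_DIR":"\([^"]*\)".*/\1/p')
   echo "FEATURE_DIR=$feature_dir"
   # → FEATURE_DIR=/repo/specs/042-retro
   ```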

2. **Gather Retrospective Data**:
   Load all available artifacts from the development cycle:
   - **REQUIRED**: Read `spec.md` (original specification and requirements)
   - **REQUIRED**: Read `tasks.md` (task breakdown and completion status)
   - **IF EXISTS**: Read `plan.md` (technical plan and architecture decisions)
   - **IF EXISTS**: Read review reports in FEATURE_DIR/reviews/ (code review findings)
   - **IF EXISTS**: Read QA reports in FEATURE_DIR/qa/ (testing results)
   - **IF EXISTS**: Read release artifacts in FEATURE_DIR/releases/ (shipping data)
   - **IF EXISTS**: Read critique reports in FEATURE_DIR/critiques/ (pre-implementation review)
   - **IF EXISTS**: Read previous retros in FEATURE_DIR/retros/ (historical context)
   - **IF EXISTS**: Read `/memory/constitution.md` (project principles)

3. **Collect Git Metrics**:
   Gather quantitative data from the git history:

   ```bash
   # Commit count for the feature
   git rev-list --count origin/{target_branch}..HEAD

   # Files changed
   git diff --stat origin/{target_branch}..HEAD

   # Lines added/removed
   git diff --shortstat origin/{target_branch}..HEAD

   # Number of distinct authors
   git log origin/{target_branch}..HEAD --format='%an' | sort -u | wc -l

   # Date range: git log lists commits newest-first, so tail -1 yields the
   # first commit's date and head -1 the last
   git log origin/{target_branch}..HEAD --format='%ai' | tail -1
   git log origin/{target_branch}..HEAD --format='%ai' | head -1
   ```

   If git data is not available (e.g., the branch was already merged), use artifact timestamps and content analysis as a fallback.

4. **Specification Accuracy Analysis**:
   Compare the original spec against what was actually built:

   - **Requirements fulfilled**: Count of spec requirements that were fully implemented
   - **Requirements partially fulfilled**: Requirements that were implemented with deviations
   - **Requirements not implemented**: Spec items that were deferred or dropped
   - **Unplanned additions**: Features implemented that were NOT in the original spec (scope creep)
   - **Surprises**: Requirements that turned out to be much harder or easier than expected
   - **Accuracy score**: (fulfilled + partial × 0.5) / total requirements × 100%
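
   With made-up counts, the accuracy score formula works out like this (the numbers are purely illustrative):

   ```shell
   # Accuracy score: (fulfilled + partial * 0.5) / total * 100
   total=12; fulfilled=9; partial=2   # illustrative counts, not real data
   awk -v f="$fulfilled" -v p="$partial" -v t="$total" \
     'BEGIN { printf "Spec accuracy: %.1f%%\n", (f + p * 0.5) / t * 100 }'
   # → Spec accuracy: 83.3%
   ```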

5. **Plan Effectiveness Analysis**:
   Evaluate how well the technical plan guided implementation:

   - **Architecture decisions validated**: Did the chosen patterns/stack work as planned?
   - **Architecture decisions revised**: Were any plan decisions changed during implementation?
   - **Task scoping accuracy**: Were tasks well-sized? Were any tasks much larger or smaller than expected?
   - **Missing tasks**: Were any tasks added during implementation that weren't in the original breakdown?
   - **Task ordering issues**: Were there dependency problems or tasks that should have been reordered?
   - **Plan score**: Qualitative assessment (EXCELLENT / GOOD / ADEQUATE / NEEDS IMPROVEMENT)

6. **Implementation Quality Analysis**:
   Analyze the quality of the implementation based on review and QA data:

   - **Review findings summary**: Total findings by severity from review reports
   - **Blocker resolution**: Were all blockers resolved before shipping?
   - **QA results summary**: Pass/fail rates from QA testing
   - **Test coverage**: Test suite results and coverage metrics
   - **Code quality indicators**: Lines of code, test-to-code ratio, cyclomatic complexity (if available)
   - **Quality score**: Based on the review verdict and QA pass rate

7. **Process Metrics Dashboard**:
   Compile a metrics summary:

   ```
   📊 Development Cycle Metrics
   ━━━━━━━━━━━━━━━━━━━━━━━━━━
   Feature: {feature_name}
   Duration: {first_commit} → {last_commit}

   📋 Specification
   Requirements: {total} total, {fulfilled} fulfilled, {partial} partial
   Spec Accuracy: {accuracy}%

   📝 Planning
   Tasks: {total_tasks} total, {completed} completed
   Added during impl: {unplanned_tasks}
   Plan Score: {plan_score}

   💻 Implementation
   Commits: {commit_count}
   Files changed: {files_changed}
   Lines: +{additions} / -{deletions}
   Test/Code ratio: {test_ratio}

   🔍 Quality
   Review findings: 🔴 {blockers} 🟡 {warnings} 🟢 {suggestions}
   QA pass rate: {qa_pass_rate}%
   Quality Score: {quality_score}
   ```

8. **What Went Well** (Keep Doing):
   Identify and celebrate successes:
   - Aspects of the spec that were clear and led to smooth implementation
   - Architecture decisions that proved effective
   - Tasks that were well-scoped and completed without issues
   - Quality practices that caught real issues
   - Any particularly efficient or elegant solutions

9. **What Could Improve** (Start/Stop Doing):
   Identify areas for improvement:
   - Spec gaps that caused confusion or rework during implementation
   - Plan decisions that needed revision
   - Tasks that were poorly scoped or had missing dependencies
   - Quality issues that slipped through review/QA
   - Process friction points (tool issues, unclear workflows)

10. **Actionable Improvement Suggestions**:
    Generate specific, actionable suggestions:
    - Rank by impact (HIGH / MEDIUM / LOW)
    - Each suggestion should be concrete and implementable
    - Group by category: Specification, Planning, Implementation, Quality, Process

    Example format:
    ```
    IMP-001 [HIGH] Add data model validation to spec template
    → The spec lacked entity relationship details, causing 3 unplanned tasks during implementation.
    → Suggestion: Add a "Data Model" section to the spec template with entity, attribute, and relationship requirements.

    IMP-002 [MEDIUM] Include browser compatibility in QA checklist
    → QA missed a CSS rendering issue in Safari that was caught post-merge.
    → Suggestion: Add cross-browser testing scenarios to the QA test plan.
    ```

11. **Historical Trend Analysis** (if previous retros exist):
    If FEATURE_DIR/retros/ contains previous retrospective reports:
    - Compare key metrics across cycles (spec accuracy, QA pass rate, review findings)
    - Identify improving trends (celebrate!) and declining trends (flag for attention)
    - Check if previous improvement suggestions were adopted and whether they helped
    - Output a trend summary table
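
    The trend summary table could take a shape like this (all values are illustrative):

    ```
    Metric            Cycle -2   Cycle -1   Current   Trend
    Spec accuracy     72%        80%        88%       improving
    QA pass rate      91%        95%        93%       stable
    Review blockers   4          2          1         improving
    ```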

12. **Generate Retrospective Report**:
    Create the retro report at `FEATURE_DIR/retros/retro-{timestamp}.md` using the retrospective report template.

13. **Offer Constitution Update**:
    Based on the retrospective findings, offer to update `/memory/constitution.md` with new learnings:

    - "Based on this retrospective, I suggest adding the following principles to your constitution:"
    - List specific principle additions or modifications
    - **Wait for explicit user approval** before making any changes
    - If approved, append new principles with a "Learned from: {feature_name} retro" annotation

14. **Suggest Next Actions**:
    - If this was a successful cycle: "Great work! Consider starting your next feature with `/speckit.specify`"
    - If improvements were identified: List the top 3 most impactful improvements to adopt
    - If trends are declining: Recommend a process review or team discussion

**Check for extension hooks (after retro)**:
- Check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.after_retro` key.
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally.
- Filter out hooks where `enabled` is explicitly `false`. Treat hooks without an `enabled` field as enabled by default.
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
  - If the hook has no `condition` field, or it is null or empty, treat the hook as executable.
  - If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation.
- For each executable hook, output the following based on its `optional` flag:
  - **Optional hook** (`optional: true`):
    ```
    ## Extension Hooks

    **Optional Hook**: {extension}
    Command: `/{command}`
    Description: {description}

    Prompt: {prompt}
    To execute: `/{command}`
    ```
  - **Mandatory hook** (`optional: false`):
    ```
    ## Extension Hooks

    **Automatic Hook**: {extension}
    Executing: `/{command}`
    EXECUTE_COMMAND: {command}
    ```
- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently.