Commit 6eb15a7

arunt14 and Copilot committed

feat: add 5 gstack-inspired lifecycle commands (critique, review, qa, ship, retro)

Add 5 new core command templates inspired by Garry Tan's GStack to complete the spec-driven development lifecycle:

- /speckit.critique: Dual-lens product + engineering review before implementation
- /speckit.review: Staff-level code review (correctness, security, performance)
- /speckit.qa: Systematic QA testing (browser-driven and CLI modes)
- /speckit.ship: Release automation (pre-flight, changelog, CI, PR creation)
- /speckit.retro: Sprint retrospective with metrics and improvement suggestions

Each command includes:

- Command template in templates/commands/
- Output report template in templates/
- Extension hook support (before_*/after_*)
- YAML frontmatter with prerequisite scripts

Updated README.md workflow from 6 to 11 steps and added CHANGELOG entry.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

1 parent 3899dcc · commit 6eb15a7

File tree: 12 files changed, +1664 −2 lines changed


CHANGELOG.md

Lines changed: 11 additions & 0 deletions
@@ -2,6 +2,17 @@
 
 <!-- insert new changelog below this comment -->
 
+## [Unreleased]
+
+### Added
+
+- feat: Add `/speckit.critique` command — dual-lens strategic and technical review (product + engineering perspectives) before implementation
+- feat: Add `/speckit.review` command — staff-level code review focused on correctness, security, performance, and spec compliance
+- feat: Add `/speckit.qa` command — systematic QA testing with browser-driven and CLI-based modes, validating acceptance criteria
+- feat: Add `/speckit.ship` command — release engineering automation (pre-flight checks, branch sync, changelog, CI verification, PR creation)
+- feat: Add `/speckit.retro` command — sprint retrospective with metrics, learnings, and improvement suggestions
+- feat: Add output templates for review, QA, ship, retro, and critique reports
+
 ## [0.4.3] - 2026-03-26
 
 ### Changed
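The `<!-- insert new changelog below this comment -->` marker makes the changelog update that `/speckit.ship` describes mechanical. A minimal sketch, assuming only that marker convention (the helper name and the sample snippet are illustrative, not speckit code):

```python
MARKER = "<!-- insert new changelog below this comment -->"

def insert_entry(changelog_text, entry):
    """Insert a release entry directly below the marker comment,
    so newer entries always sit above older releases."""
    lines = changelog_text.splitlines()
    i = lines.index(MARKER)  # raises ValueError if the marker was removed
    return "\n".join(lines[: i + 1] + ["", entry] + lines[i + 1 :])
```

Anchoring on a fixed comment, rather than parsing version headings, keeps the automation robust to formatting drift in the entries themselves.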

README.md

Lines changed: 42 additions & 2 deletions
@@ -134,22 +134,62 @@ Use the **`/speckit.plan`** command to provide your tech stack and architecture
 /speckit.plan The application uses Vite with minimal number of libraries. Use vanilla HTML, CSS, and JavaScript as much as possible. Images are not uploaded anywhere and metadata is stored in a local SQLite database.
 ```
 
-### 5. Break down into tasks
+### 5. Challenge the plan
+
+Use **`/speckit.critique`** to critically evaluate your spec and plan from both product strategy and engineering risk perspectives before committing to implementation.
+
+```bash
+/speckit.critique
+```
+
+### 6. Break down into tasks
 
 Use **`/speckit.tasks`** to create an actionable task list from your implementation plan.
 
 ```bash
 /speckit.tasks
 ```
 
-### 6. Execute implementation
+### 7. Execute implementation
 
 Use **`/speckit.implement`** to execute all tasks and build your feature according to the plan.
 
 ```bash
 /speckit.implement
 ```
 
+### 8. Review the code
+
+Use **`/speckit.review`** to perform a staff-level code review focused on correctness, security, performance, and spec compliance.
+
+```bash
+/speckit.review
+```
+
+### 9. Run QA testing
+
+Use **`/speckit.qa`** to systematically test the implemented feature against acceptance criteria, using browser-driven or CLI-based testing.
+
+```bash
+/speckit.qa
+```
+
+### 10. Ship it
+
+Use **`/speckit.ship`** to automate the release pipeline — pre-flight checks, branch sync, changelog generation, CI verification, and PR creation.
+
+```bash
+/speckit.ship
+```
+
+### 11. Retrospective
+
+Use **`/speckit.retro`** to reflect on the completed development cycle with metrics, learnings, and improvement suggestions for the next iteration.
+
+```bash
+/speckit.retro
+```
+
 For detailed step-by-step instructions, see our [comprehensive guide](./spec-driven.md).
 
 ## 📽️ Video Overview

templates/commands/critique.md

Lines changed: 238 additions & 0 deletions
@@ -0,0 +1,238 @@
---
description: Perform a dual-lens critical review of the specification and plan from both product strategy and engineering risk perspectives before implementation.
scripts:
  sh: scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks
  ps: scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Pre-Execution Checks

**Check for extension hooks (before critique)**:

- Check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.before_critique` key
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
- Filter out hooks where `enabled` is explicitly `false`. Treat hooks without an `enabled` field as enabled by default.
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
  - If the hook has no `condition` field, or it is null/empty, treat the hook as executable
  - If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
- For each executable hook, output the following based on its `optional` flag:
  - **Optional hook** (`optional: true`):

    ```
    ## Extension Hooks

    **Optional Pre-Hook**: {extension}
    Command: `/{command}`
    Description: {description}

    Prompt: {prompt}
    To execute: `/{command}`
    ```

  - **Mandatory hook** (`optional: false`):

    ```
    ## Extension Hooks

    **Automatic Pre-Hook**: {extension}
    Executing: `/{command}`
    EXECUTE_COMMAND: {command}

    Wait for the result of the hook command before proceeding to the Outline.
    ```

- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently
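The hook-selection rules above reduce to a small filter. A sketch in Python, assuming `.specify/extensions.yml` has already been parsed into a dict (the schema and helper name are assumptions for illustration, not speckit's documented API):

```python
def executable_hooks(config, phase="before_critique"):
    """Select the hooks to surface, per the rules above.

    `config` is the parsed contents of .specify/extensions.yml;
    hooks.<phase> is assumed to be a list of hook dicts.
    """
    hooks = (config.get("hooks") or {}).get(phase) or []
    selected = []
    for hook in hooks:
        # Hooks are enabled unless `enabled` is explicitly false.
        if hook.get("enabled") is False:
            continue
        # A non-empty `condition` is left to the HookExecutor;
        # we skip such hooks rather than evaluate the expression.
        if hook.get("condition"):
            continue
        selected.append(hook)
    return selected
```

Note that a missing, null, or empty `condition` all fall through to "executable", matching the template's rule that only a non-empty condition defers to the HookExecutor.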
## Goal

Challenge the specification and implementation plan through two distinct expert lenses BEFORE committing to implementation. The **Product Lens** evaluates whether the right problem is being solved in the right way for users. The **Engineering Lens** evaluates whether the technical approach is sound, scalable, and free of hidden risks. This dual review prevents costly mid-implementation pivots and catches strategic and technical blind spots early.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify `spec.md`, `plan.md`, or any other files. Output a structured critique report. Offer to apply approved changes only after the user reviews findings.

**CONSTRUCTIVE CHALLENGE**: The goal is to strengthen the spec and plan, not to block progress. Every critique item must include a constructive suggestion for improvement.

**Constitution Authority**: The project constitution (`/memory/constitution.md`) defines non-negotiable principles. Any spec/plan element conflicting with the constitution is automatically a 🎯 Must-Address item.

## Outline

1. Run `{SCRIPT}` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load Critique Context**:
   - **REQUIRED**: Read `spec.md` for requirements, user stories, and acceptance criteria
   - **REQUIRED**: Read `plan.md` for architecture, tech stack, and implementation phases
   - **IF EXISTS**: Read `/memory/constitution.md` for governing principles
   - **IF EXISTS**: Read `tasks.md` for task breakdown (if already generated)
   - **IF EXISTS**: Read previous critique reports in FEATURE_DIR/critiques/ for context

3. **Product Lens Review** (CEO/Product Lead Perspective):

   Adopt the mindset of an experienced product leader who cares deeply about user value, market fit, and business impact. Evaluate:

   #### 3a. Problem Validation

   - Is the problem statement clear and well-defined?
   - Is this solving a real user pain point, or is it a solution looking for a problem?
   - What evidence supports the need for this feature? (user research, data, customer requests)
   - Is the scope appropriate — not too broad (trying to do everything) or too narrow (missing the core value)?

   #### 3b. User Value Assessment

   - Does every user story deliver tangible user value?
   - Are the acceptance criteria written from the user's perspective (outcomes, not implementation)?
   - Is the user journey complete — or are there gaps where users would get stuck?
   - What's the simplest version that would deliver 80% of the value? (MVP analysis)
   - Are there unnecessary features that add complexity without proportional value?

   #### 3c. Alternative Approaches

   - Could a simpler solution achieve the same outcome?
   - Are there existing tools, libraries, or services that could replace custom implementation?
   - What would a competitor's approach look like?
   - What would happen if this feature were NOT built? What's the cost of inaction?

   #### 3d. Edge Cases & User Experience

   - What happens when things go wrong? (error states, empty states, loading states)
   - How does this feature interact with existing functionality?
   - Are accessibility considerations addressed?
   - Is the feature discoverable and intuitive?
   - What are the onboarding/migration implications for existing users?

   #### 3e. Success Measurement

   - Are the success criteria measurable and time-bound?
   - How will you know if this feature is successful after launch?
   - What metrics should be tracked?
   - What would trigger a rollback decision?

4. **Engineering Lens Review** (Staff Engineer Perspective):

   Adopt the mindset of a senior staff engineer who has seen projects fail due to hidden technical risks. Evaluate:

   #### 4a. Architecture Soundness

   - Does the architecture follow established patterns for this type of system?
   - Are boundaries and interfaces well-defined (separation of concerns)?
   - Is the architecture testable at each layer?
   - Are there circular dependencies or tight coupling risks?
   - Does the architecture support future evolution without major refactoring?

   #### 4b. Failure Mode Analysis

   - What are the most likely failure modes? (network failures, data corruption, resource exhaustion)
   - How does the system degrade gracefully under each failure mode?
   - What happens under peak load? Is there a scaling bottleneck?
   - What are the blast radius implications — can a failure in this feature affect other parts of the system?
   - Are retry, timeout, and circuit-breaker strategies defined?

   #### 4c. Security & Privacy Review

   - What is the threat model? What attack vectors does this feature introduce?
   - Are trust boundaries clearly defined (user input, API responses, third-party data)?
   - Is sensitive data handled appropriately (encryption, access control, retention)?
   - Are there compliance implications (GDPR, SOC2, HIPAA)?
   - Is the principle of least privilege followed?

   #### 4d. Performance & Scalability

   - Are there potential bottlenecks in the data flow?
   - What are the expected data volumes? Will the design handle 10x growth?
   - Are caching strategies appropriate and cache invalidation well-defined?
   - Are database queries optimized (indexing, pagination, query complexity)?
   - Are there resource-intensive operations that should be async or batched?

   #### 4e. Testing Strategy

   - Is the testing plan comprehensive (unit, integration, E2E)?
   - Are the critical paths identified for priority testing?
   - Is the test data strategy realistic?
   - Are there testability concerns (hard-to-mock dependencies, race conditions)?
   - Is the test coverage target appropriate for the risk level?

   #### 4f. Operational Readiness

   - Is observability planned (logging, metrics, tracing)?
   - Are alerting thresholds defined?
   - Is there a rollback strategy?
   - Are database migrations reversible?
   - Is the deployment strategy clear (blue-green, canary, feature flags)?

   #### 4g. Dependencies & Integration Risks

   - Are third-party dependencies well-understood (stability, licensing, maintenance)?
   - Are integration points with existing systems well-defined?
   - What happens if an external service is unavailable?
   - Are API versioning and backward compatibility considered?

5. **Cross-Lens Synthesis**:

   Identify items where both lenses converge (these are highest priority):

   - Product simplification that also reduces engineering risk
   - Engineering constraints that affect user experience
   - Scope adjustments that improve both value delivery and technical feasibility

6. **Severity Classification**:

   Classify each finding:

   - 🎯 **Must-Address**: Blocks proceeding to implementation. Critical product gap, security vulnerability, architecture flaw, or constitution violation. Must be resolved before `/speckit.tasks`.
   - 💡 **Recommendation**: Strongly suggested improvement that would significantly improve quality, value, or risk profile. Should be addressed but won't block progress.
   - 🤔 **Question**: Ambiguity or assumption that needs stakeholder input. Cannot be resolved by the development team alone.

7. **Generate Critique Report**:

   Create the critique report at `FEATURE_DIR/critiques/critique-{timestamp}.md` using the critique report template. The report must include:

   - **Executive Summary**: Overall assessment and readiness to proceed
   - **Product Lens Findings**: Organized by subcategory (3a-3e)
   - **Engineering Lens Findings**: Organized by subcategory (4a-4g)
   - **Cross-Lens Insights**: Items where both perspectives converge
   - **Findings Summary Table**: All items with ID, lens, severity, summary, suggestion

   **Findings Table Format**:

   | ID | Lens | Severity | Category | Finding | Suggestion |
   |----|------|----------|----------|---------|------------|
   | P1 | Product | 🎯 | Problem Validation | No evidence of user need | Conduct 5 user interviews or reference support tickets |
   | E1 | Engineering | 💡 | Failure Modes | No retry strategy for API calls | Add exponential backoff with circuit breaker |
   | X1 | Both | 🎯 | Scope × Risk | Feature X adds complexity with unclear value | Defer to v2; reduces both scope and technical risk |
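The findings summary table can be generated mechanically from structured findings. A hedged sketch (the `Finding` type and renderer are hypothetical helpers, not part of speckit; only the column layout and severity icons come from the template above):

```python
from dataclasses import dataclass

# Severity icons as defined in the classification step.
ICONS = {"must-address": "🎯", "recommendation": "💡", "question": "🤔"}

@dataclass
class Finding:
    id: str          # e.g. "P1", "E1", "X1"
    lens: str        # "Product", "Engineering", or "Both"
    severity: str    # key into ICONS
    category: str
    finding: str
    suggestion: str

def render_findings_table(findings):
    """Render findings as the markdown summary table shown above."""
    lines = [
        "| ID | Lens | Severity | Category | Finding | Suggestion |",
        "|----|------|----------|----------|---------|------------|",
    ]
    for f in findings:
        lines.append(
            f"| {f.id} | {f.lens} | {ICONS[f.severity]} | "
            f"{f.category} | {f.finding} | {f.suggestion} |"
        )
    return "\n".join(lines)
```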
8. **Provide Verdict**:

   Based on findings, provide one of:

   - ✅ **PROCEED**: No must-address items. Spec and plan are solid. Run `/speckit.tasks` to proceed.
   - ⚠️ **PROCEED WITH UPDATES**: Must-address items found but are resolvable. Offer to apply fixes to spec/plan, then proceed.
   - 🛑 **RETHINK**: Fundamental product or architecture concerns. Recommend revisiting the spec with `/speckit.specify` or the plan with `/speckit.plan`.

9. **Offer Remediation**:

   For each must-address item and recommendation:

   - Provide a specific suggested edit to `spec.md` or `plan.md`
   - Ask: "Would you like me to apply these changes? (all / select / none)"
   - If user approves, apply changes to the relevant files
   - After applying changes, recommend re-running `/speckit.critique` to verify
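The verdict rule above is a simple decision over finding severities. A sketch, with the caveat that whether a must-address item is "fundamental" (questions the core product or architecture direction) is a judgment the template leaves to the reviewer, modeled here as an assumed boolean flag:

```python
def verdict(findings):
    """Map findings (dicts with a `severity` key) to one of the
    three verdicts described above."""
    must = [f for f in findings if f["severity"] == "must-address"]
    if not must:
        # Recommendations and questions alone do not block progress.
        return "PROCEED"
    if any(f.get("fundamental") for f in must):
        return "RETHINK"
    return "PROCEED WITH UPDATES"
```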
## Post-Critique Actions

Suggest next steps based on verdict:

- If PROCEED: "Run `/speckit.tasks` to break the plan into actionable tasks"
- If PROCEED WITH UPDATES: "Review the suggested changes, then run `/speckit.tasks`"
- If RETHINK: "Consider running `/speckit.specify` to refine the spec or `/speckit.plan` to revise the architecture"

**Check for extension hooks (after critique)**:

- Check if `.specify/extensions.yml` exists in the project root.
- If it exists, read it and look for entries under the `hooks.after_critique` key
- If the YAML cannot be parsed or is invalid, skip hook checking silently and continue normally
- Filter out hooks where `enabled` is explicitly `false`. Treat hooks without an `enabled` field as enabled by default.
- For each remaining hook, do **not** attempt to interpret or evaluate hook `condition` expressions:
  - If the hook has no `condition` field, or it is null/empty, treat the hook as executable
  - If the hook defines a non-empty `condition`, skip the hook and leave condition evaluation to the HookExecutor implementation
- For each executable hook, output the following based on its `optional` flag:
  - **Optional hook** (`optional: true`):

    ```
    ## Extension Hooks

    **Optional Hook**: {extension}
    Command: `/{command}`
    Description: {description}

    Prompt: {prompt}
    To execute: `/{command}`
    ```

  - **Mandatory hook** (`optional: false`):

    ```
    ## Extension Hooks

    **Automatic Hook**: {extension}
    Executing: `/{command}`
    EXECUTE_COMMAND: {command}
    ```

- If no hooks are registered or `.specify/extensions.yml` does not exist, skip silently
