I'm running BMAD v6.0.4 (BMM module, Claude Code tools) on a production hybrid project — Next.js quoting tool with 2800+ tests, 20 stories across 5 epics built in the last ~10 sessions.
After analyzing 8 prod smoke test failures across those sessions, I ran a BMAD party review focused on root cause classification. The breakdown:
- 4 spec gaps — FRs that were ambiguous or missing boundary definitions
- 4 agent implementation misses — dev agents built logic correctly but missed data contract or UX details
- 1 repeat failure — the same business rule violated in two different endpoints, because the lesson from the first fix was never encoded in any persistent form
Two of the root causes trace back to BMAD workflow gaps that could be addressed in the package itself, benefiting all users.
Proposal 1: FR Workflow-Step Requirement in create-epics-and-stories
Problem: An FR that said "search returns results from all pricing sources" was spec'd without specifying WHICH user workflow the search served. The dev agent built one search component serving two distinct workflows (part selection typeahead vs distributor comparison). Three rounds of bug fixes failed before a party review redesigned the architecture.
Proposed change: Add a validation check to the FR writing step in the create-epics-and-stories workflow. Any FR involving user-facing behavior (search, display, input, navigation) should include a Workflow Step field:
```
FR-[ID]: [requirement text]
Workflow Step: [step name] — [what the user is doing when they encounter this]
```
If a UI-facing FR lacks this field, flag it as incomplete during the workflow. This forces the analyst to separate distinct user journeys at spec time — before they reach architecture or dev.
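As a rough illustration, the validation check could be a simple lint pass over FR text. This is a sketch under assumptions: the keyword list, FR text format, and return shape are all hypothetical, not BMAD internals.

```typescript
// Sketch: flag UI-facing FRs that lack a "Workflow Step" field.
// The keyword list and FR text layout are assumptions for illustration.

const UI_KEYWORDS = ["search", "display", "input", "navigation"];

interface FrCheckResult {
  id: string;
  incomplete: boolean;
  reason?: string;
}

function checkFr(frText: string): FrCheckResult {
  // Hypothetical FR format: first line starts with "FR-<id>:".
  const idMatch = frText.match(/^FR-([^\s:]+):/);
  const id = idMatch ? idMatch[1] : "unknown";

  const isUiFacing = UI_KEYWORDS.some((kw) =>
    frText.toLowerCase().includes(kw)
  );
  const hasWorkflowStep = /Workflow Step:\s*\S/.test(frText);

  if (isUiFacing && !hasWorkflowStep) {
    return {
      id,
      incomplete: true,
      reason: "UI-facing FR is missing a Workflow Step field",
    };
  }
  return { id, incomplete: false };
}
```

Run against the FR from the failure above, `checkFr("FR-12: search returns results from all pricing sources")` would come back incomplete, forcing the analyst to name the workflow before the story reaches dev.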
Proposal 2: QA Regression Guard Process in Story-0
Problem: A smoke test caught an SP pricing leak in session 98. We fixed it. In session 102, a NEW endpoint had the exact same class of bug — because the fix was never promoted to a cross-cutting invariant. Story-specific tests passed. The business rule wasn't tested globally.
Proposed change: Extend QA's story-0 responsibility (test spec writing at epic start) to include maintenance of a regression guard file — a project-level test file containing cross-cutting business rule invariants that run on every build.
At story-0 of every epic, QA:
- Writes test specs for new stories (existing process)
- Reviews whether the epic introduces new cross-cutting invariants and adds them to the guard file
The dev agent runs the full guard file as part of every story's test pass. A story that passes its own tests but breaks a guard is not done.
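To make the guard-file idea concrete, here is a minimal sketch of what one cross-cutting invariant might look like. Everything in it is a hypothetical illustration: the `sp_price` field, the role model, and the serializer name are invented for this example, not taken from the actual project.

```typescript
// Sketch of one entry in a project-level regression guard file.
// Field names, roles, and the serializer are hypothetical.

type Role = "sales" | "customer";

interface QuoteLine {
  partId: string;
  listPrice: number;
  sp_price?: number; // privileged special pricing — must never reach customers
}

// Hypothetical serializer that every pricing endpoint is expected to use.
function serializeForRole(line: QuoteLine, role: Role): Record<string, unknown> {
  const { sp_price, ...publicFields } = line;
  return role === "sales" ? { ...publicFields, sp_price } : publicFields;
}

// Guard invariant: no customer-facing payload may contain SP pricing.
// Every endpoint's customer payload gets run through this in the guard suite,
// so a NEW endpoint with the same class of bug fails the build immediately.
function guardNoSpLeak(payload: Record<string, unknown>): void {
  if ("sp_price" in payload && payload.sp_price !== undefined) {
    throw new Error("Guard violation: sp_price leaked into a customer payload");
  }
}
```

The point of the guard file is that this check is not tied to the story that introduced the fix: it asserts the business rule over every payload shape the suite knows about, so the session-102 class of repeat failure is caught before the story is marked done.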
This is a process addition to the QA workflow, not a new artifact owned by BMAD. The guard file itself lives in each project. But QA's responsibility to maintain it should be prompted by the BMAD workflow.
Evidence
Full failure analysis with root cause classification available. Happy to share the detailed spec if helpful for implementation.
Environment
- BMAD v6.0.4
- BMM module + Claude Code tools
- Claude Opus 4.6 (1M context)
- macOS, Next.js 14 project