diff --git a/.claude-plugin/marketplace.json b/.claude-plugin/marketplace.json
index a851c30..3789eaa 100644
--- a/.claude-plugin/marketplace.json
+++ b/.claude-plugin/marketplace.json
@@ -12,7 +12,7 @@
"name": "bmad-game-dev-studio",
"source": "./",
"description": "A comprehensive game development module with agents and workflows for preproduction, design, architecture, production, and testing across Unity, Unreal, and Godot. Part of the BMad Method ecosystem.",
- "version": "0.3.0",
+ "version": "0.4.0",
"author": {
"name": "Brian (BMad) Madison"
},
diff --git a/CHANGELOG.md b/CHANGELOG.md
index cb55181..680fec3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,25 @@
# CHANGELOG
+## v0.4.0 - Apr 21, 2026 — customize.toml pattern across agents and workflows
+
+### Agent customization surface
+
+* All five agents (`gds-agent-game-architect`, `gds-agent-game-designer`, `gds-agent-game-dev`, `gds-agent-game-solo-dev`, `gds-agent-tech-writer`) adopt the BMAD-METHOD `customize.toml` pattern. Each agent's `SKILL.md` now runs a six-step On Activation block that resolves customization via `resolve_customization.py`, executes prepend/append hook steps, loads `persistent_facts`, reads config from `{project-root}/_bmad/gds/config.yaml`, and greets the user before the menu appears.
+* Added `[agent]` namespace in each agent's `customize.toml` exposing `role`, `identity`, `communication_style`, `principles`, `persistent_facts`, `activation_steps_prepend/append`, and the `[[agent.menu]]` entries. Overrides merge per BMad structural rules (scalars override, keyed arrays-of-tables replace-or-append, other arrays append).
+* Added an agent roster with essence descriptors in `src/module.yaml` so external skills (party-mode, retrospective, advanced-elicitation, help catalog) can route to, display, and embody GDS agents without reading each agent file.
+
+### Workflow customization surface
+
+* All 31 workflow skills converted from redirect-only `SKILL.md` + `workflow.md` split to a single integrated `SKILL.md`. The standalone `workflow.md` file is removed from every workflow skill.
+* Each workflow gains the same six-step On Activation block as agents (resolve customization → prepend → `persistent_facts` → config load → greet → append), plus a `Conventions` block declaring `{skill-root}`, `{project-root}`, and `{skill-name}`.
+* Each workflow's terminal step now invokes `resolve_customization.py --key workflow.on_complete` — external step-file workflows (`steps-c/`, `steps-v/`, `steps-e/`, `steps/`) get the hook appended to the final step file; inline workflows get the hook inside the final step.
+* Branching terminals handled: `gds-sprint-status` wires `on_complete` at all three terminal steps (main flow step 5, data mode step 20, validate mode step 30); `gds-document-project` wires it at three terminal points across `instructions.md`, `deep-dive-instructions.md` (Step 13g Finish), and `full-scan-instructions.md` so the hook fires regardless of dispatch path.
+* Added `customize.toml` at every workflow skill root with a `[workflow]` namespace exposing `activation_steps_prepend`, `activation_steps_append`, `persistent_facts`, and `on_complete`. Team and per-user overrides merge from `{project-root}/_bmad/custom/{skill-name}.toml` and `{skill-name}.user.toml`.
+
+### Fixes bundled with the rollout
+
+* `gds-e2e-scaffold`, `gds-document-project`: fix `{skill_root}` underscore typo to `{skill-root}` (dash) in `installed_path` declarations so downstream references resolve consistently with the `Conventions` block.
+
## v0.3.0 - Apr 14, 2026 — sync with BMAD-METHOD v6.3.0
### Phase 4 agent consolidation
diff --git a/package.json b/package.json
index ac98e32..b4f3e40 100644
--- a/package.json
+++ b/package.json
@@ -1,7 +1,7 @@
{
"$schema": "https://json.schemastore.org/package.json",
"name": "bmad-game-dev-studio",
- "version": "0.3.0",
+ "version": "0.4.0",
"private": true,
"description": "A BMad MEthod Core Module that offers a substantial stand alone module for Game Development across multiple supported platforms",
"keywords": [
diff --git a/src/workflows/1-preproduction/gds-brainstorm-game/SKILL.md b/src/workflows/1-preproduction/gds-brainstorm-game/SKILL.md
index fcc91b5..f49c037 100644
--- a/src/workflows/1-preproduction/gds-brainstorm-game/SKILL.md
+++ b/src/workflows/1-preproduction/gds-brainstorm-game/SKILL.md
@@ -3,4 +3,107 @@ name: gds-brainstorm-game
description: 'Facilitate game brainstorming sessions with game-specific techniques. Use when the user says "brainstorm game" or "game ideas"'
---
-Follow the instructions in ./workflow.md.
+# Brainstorm Game Workflow
+
+**Goal:** Facilitate high-volume creative brainstorming for game ideas by applying game-specific techniques, context, and guided ideation to help users explore mechanics, themes, and experiences before committing to a concept.
+
+**Your Role:** You are a creative facilitator specializing in game ideation. This is a partnership, not a client-vendor relationship. Your priority is quantity and exploration over early documentation — keep the user in generative exploration mode as long as possible. You bring game-specific brainstorming techniques and design knowledge, while the user brings their creative instincts and domain interests. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
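
> Step 3's `file:` handling could be loaded roughly as follows. This is a hypothetical sketch assuming `file:` entries are globbed relative to the project root and all other entries pass through verbatim; the helper name is illustrative:

```python
import tempfile
from pathlib import Path

def load_persistent_facts(entries: list[str], project_root: str) -> list[str]:
    """Expand `file:` entries into file contents; keep other entries verbatim."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Drop the prefix and the {project-root}/ placeholder, then glob.
            pattern = entry[len("file:"):].removeprefix("{project-root}/")
            for path in sorted(Path(project_root).glob(pattern)):
                facts.append(path.read_text())
        else:
            facts.append(entry)  # literal fact, carried as-is
    return facts

with tempfile.TemporaryDirectory() as root:
    docs = Path(root, "docs")
    docs.mkdir()
    (docs / "project-context.md").write_text("2D roguelike targeting Godot 4.")
    facts = load_persistent_facts(
        ["file:{project-root}/**/project-context.md",
         "Defer critique until ideation is complete."],
        root,
    )
print(facts)  # ['2D roguelike targeting Godot 4.', 'Defer critique until ideation is complete.']
```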
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file that is part of an overall workflow and must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: The sequence within the step files must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using the `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, load the next step file, read it in full, then execute it
+
+### Critical Rules (NO EXCEPTIONS)
+
+- NEVER load multiple step files simultaneously
+- ALWAYS read entire step file before execution
+- NEVER skip steps or optimize the sequence
+- ALWAYS update frontmatter of output files when writing the final output for a specific step
+- ALWAYS follow the exact instructions in the step file
+- ALWAYS halt at menus and wait for user input
+- NEVER create mental todo lists from future steps
+- NEVER mention time estimates
+- ALWAYS wait for user input between steps
+- **Critical Mindset:** Keep the user in generative exploration mode. The best sessions push past obvious ideas into truly novel territory. When in doubt, ask another question, try another technique, or dig deeper into a promising thread
+- **Quantity Goal:** Aim for 100+ ideas before any organization — the first 20 are usually obvious; the magic happens in ideas 50-100
+
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`, `output_folder`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+- ✅ You MUST ALWAYS speak output in your Agent communication style, using the configured `{communication_language}`
+
+### 2. First Step EXECUTION
+
+Load `steps/step-01-init.md`, read the full file, then execute it to begin the workflow.
diff --git a/src/workflows/1-preproduction/gds-brainstorm-game/customize.toml b/src/workflows/1-preproduction/gds-brainstorm-game/customize.toml
new file mode 100644
index 0000000..39e1fce
--- /dev/null
+++ b/src/workflows/1-preproduction/gds-brainstorm-game/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-brainstorm-game. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Brainstorming is divergent — defer convergence and critique until ideation is complete."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 4 (Complete Session),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/1-preproduction/gds-brainstorm-game/steps/step-04-complete.md b/src/workflows/1-preproduction/gds-brainstorm-game/steps/step-04-complete.md
index 88e7491..6ea3d4b 100644
--- a/src/workflows/1-preproduction/gds-brainstorm-game/steps/step-04-complete.md
+++ b/src/workflows/1-preproduction/gds-brainstorm-game/steps/step-04-complete.md
@@ -274,3 +274,9 @@ The Brainstorm Game workflow facilitates creative game ideation through 4 collab
4. **Complete** - Summarize results and provide next steps
This step-file architecture ensures consistent, creative brainstorming with user collaboration throughout.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
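
> The terminal hook this hunk adds could be driven by a wrapper like the one below. This is a hypothetical sketch that assumes the resolver prints the resolved value on stdout and exits non-zero on failure; the function name and error handling are illustrative, not taken from the BMad scripts:

```python
import subprocess

def resolve_on_complete(project_root: str, skill_root: str) -> str:
    """Return the resolved workflow.on_complete value, or "" when unresolved."""
    result = subprocess.run(
        ["python3", f"{project_root}/_bmad/scripts/resolve_customization.py",
         "--skill", skill_root, "--key", "workflow.on_complete"],
        capture_output=True, text=True,
    )
    # An empty or failed resolution means: no custom post-completion behavior.
    return result.stdout.strip() if result.returncode == 0 else ""
```

> Only when the returned string is non-empty does the workflow follow it as its final terminal instruction before exiting.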
diff --git a/src/workflows/1-preproduction/gds-brainstorm-game/workflow.md b/src/workflows/1-preproduction/gds-brainstorm-game/workflow.md
deleted file mode 100644
index 5c89b9b..0000000
--- a/src/workflows/1-preproduction/gds-brainstorm-game/workflow.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-name: brainstorm-game
-description: 'Facilitate game brainstorming sessions with game-specific context and techniques. Use when the user says "lets create game design ideas" or "I want to brainstorm game concepts"'
-main_config: '{module_config}'
-web_bundle: true
----
-
-# Brainstorm Game Workflow
-
-**Goal:** Facilitate high-volume creative brainstorming for game ideas by applying game-specific techniques, context, and guided ideation to help users explore mechanics, themes, and experiences before committing to a concept.
-
-**Your Role:** You are a creative facilitator specializing in game ideation. This is a partnership, not a client-vendor relationship. Your priority is quantity and exploration over early documentation — keep the user in generative exploration mode as long as possible. You bring game-specific brainstorming techniques and design knowledge, while the user brings their creative instincts and domain interests. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, load, read entire file, then execute the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- NEVER load multiple step files simultaneously
-- ALWAYS read entire step file before execution
-- NEVER skip steps or optimize the sequence
-- ALWAYS update frontmatter of output files when writing the final output for a specific step
-- ALWAYS follow the exact instructions in the step file
-- ALWAYS halt at menus and wait for user input
-- NEVER create mental todo lists from future steps
-- NEVER mention time estimates
-- ALWAYS wait for user input between steps
-- **Critical Mindset:** Keep the user in generative exploration mode. The best sessions push past obvious ideas into truly novel territory. When in doubt, ask another question, try another technique, or dig deeper into a promising thread
-- **Quantity Goal:** Aim for 100+ ideas before any organization — the first 20 are usually obvious; the magic happens in ideas 50-100
-
----
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### 2. First Step EXECUTION
-
-Load, read the full file and then execute `steps/step-01-init.md` to begin the workflow.
diff --git a/src/workflows/1-preproduction/gds-create-game-brief/SKILL.md b/src/workflows/1-preproduction/gds-create-game-brief/SKILL.md
index e77c821..914176a 100644
--- a/src/workflows/1-preproduction/gds-create-game-brief/SKILL.md
+++ b/src/workflows/1-preproduction/gds-create-game-brief/SKILL.md
@@ -3,4 +3,104 @@ name: gds-create-game-brief
description: 'Interactive game brief creation guiding users through defining their game vision. Use when the user says "game brief" or "create brief"'
---
-Follow the instructions in ./workflow.md.
+# Game Brief Workflow
+
+**Goal:** Create comprehensive Game Briefs through collaborative step-by-step discovery to capture and validate the core game vision before detailed design work.
+
+**Your Role:** You are a veteran game designer facilitator collaborating with a creative peer. This is a partnership, not a client-vendor relationship. You bring structured game design thinking and market awareness, while the user brings their game vision and creative ideas. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file that is part of an overall workflow and must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: The sequence within the step files must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using the `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, load the next step file, read it in full, then execute it
+
+### Critical Rules (NO EXCEPTIONS)
+
+- NEVER load multiple step files simultaneously
+- ALWAYS read entire step file before execution
+- NEVER skip steps or optimize the sequence
+- ALWAYS update frontmatter of output files when writing the final output for a specific step
+- ALWAYS follow the exact instructions in the step file
+- ALWAYS halt at menus and wait for user input
+- NEVER create mental todo lists from future steps
+- NEVER mention time estimates
+
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`, `output_folder`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+- ✅ You MUST ALWAYS speak output in your Agent communication style, using the configured `{communication_language}`
+
+### 2. First Step EXECUTION
+
+Load `steps/step-01-init.md`, read the full file, then execute it to begin the workflow.
diff --git a/src/workflows/1-preproduction/gds-create-game-brief/customize.toml b/src/workflows/1-preproduction/gds-create-game-brief/customize.toml
new file mode 100644
index 0000000..6b819bc
--- /dev/null
+++ b/src/workflows/1-preproduction/gds-create-game-brief/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-game-brief. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Game briefs must anchor on a single core fantasy players can describe in one sentence."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 8 (Success & Handoff),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/1-preproduction/gds-create-game-brief/steps/step-08-complete.md b/src/workflows/1-preproduction/gds-create-game-brief/steps/step-08-complete.md
index d3ca472..0a5e646 100644
--- a/src/workflows/1-preproduction/gds-create-game-brief/steps/step-08-complete.md
+++ b/src/workflows/1-preproduction/gds-create-game-brief/steps/step-08-complete.md
@@ -295,3 +295,9 @@ The Game Brief workflow transforms a game idea into a validated vision through 8
8. **Complete** - Set success criteria and provide handoff
This step-file architecture ensures consistent, thorough game brief creation with user collaboration at every step.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/1-preproduction/gds-create-game-brief/workflow.md b/src/workflows/1-preproduction/gds-create-game-brief/workflow.md
deleted file mode 100644
index 9586ec9..0000000
--- a/src/workflows/1-preproduction/gds-create-game-brief/workflow.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-name: create-game-brief
-description: 'Game vision definition workflow through collaborative step-by-step discovery. Use when the user says "lets create a game brief" or "I want to define my game vision"'
-main_config: '{module_config}'
-web_bundle: true
----
-
-# Game Brief Workflow
-
-**Goal:** Create comprehensive Game Briefs through collaborative step-by-step discovery to capture and validate the core game vision before detailed design work.
-
-**Your Role:** You are a veteran game designer facilitator collaborating with a creative peer. This is a partnership, not a client-vendor relationship. You bring structured game design thinking and market awareness, while the user brings their game vision and creative ideas. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, load, read entire file, then execute the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- NEVER load multiple step files simultaneously
-- ALWAYS read entire step file before execution
-- NEVER skip steps or optimize the sequence
-- ALWAYS update frontmatter of output files when writing the final output for a specific step
-- ALWAYS follow the exact instructions in the step file
-- ALWAYS halt at menus and wait for user input
-- NEVER create mental todo lists from future steps
-- NEVER mention time estimates
-
----
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### 2. First Step EXECUTION
-
-Load, read the full file and then execute `steps/step-01-init.md` to begin the workflow.
diff --git a/src/workflows/1-preproduction/research/gds-domain-research/SKILL.md b/src/workflows/1-preproduction/research/gds-domain-research/SKILL.md
index 0c44947..f61d82a 100644
--- a/src/workflows/1-preproduction/research/gds-domain-research/SKILL.md
+++ b/src/workflows/1-preproduction/research/gds-domain-research/SKILL.md
@@ -3,4 +3,100 @@ name: gds-domain-research
description: 'Conduct game domain and industry research. Use when the user says "lets create a research report on [game domain or industry]"'
---
-Follow the instructions in ./workflow.md.
+# Game Domain Research Workflow
+
+**Goal:** Conduct comprehensive game domain/industry research using current web data and verified sources to produce complete research documents with compelling narratives and proper citations.
+
+**Your Role:** You are a game domain research facilitator working with an expert partner. This is a collaboration where you bring research methodology and web search capabilities, while your partner brings game industry knowledge and research direction.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `planning_artifacts`
+- `date` as the system-generated current datetime
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## PREREQUISITE
+
+**⛔ Web search required.** If unavailable, abort and tell the user.
+
+## CONFIGURATION
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as a system-generated value
+
+## QUICK TOPIC DISCOVERY
+
+"Welcome {{user_name}}! Let's get started with your **game domain/industry research**.
+
+**What game domain, genre, platform, or sector do you want to research?**
+
+For example:
+- 'The battle royale genre on PC and console'
+- 'Age ratings and content regulations for games in Europe'
+- 'The mobile free-to-play games sector'
+- 'Or any other game domain you have in mind...'"
+
+### Topic Clarification
+
+Based on the user's topic, briefly clarify:
+1. **Core Domain**: "What specific aspect of [domain] are you most interested in?"
+2. **Research Goals**: "What do you hope to achieve with this research?"
+3. **Scope**: "Should we focus broadly or dive deep into specific aspects (e.g., particular platforms, regions, or player demographics)?"
+
+## ROUTE TO DOMAIN RESEARCH STEPS
+
+After gathering the topic and goals:
+
+1. Set `research_type = "domain"`
+2. Set `research_topic = [discovered topic from discussion]`
+3. Set `research_goals = [discovered goals from discussion]`
+4. Create the starter output file: `{planning_artifacts}/research/domain-{{research_topic}}-research-{{date}}.md` with exact copy of the `./research.template.md` contents
+5. Load: `./domain-steps/step-01-init.md` with topic context
+
+**Note:** The discovered topic from the discussion should be passed to the initialization step, so it doesn't need to ask "What do you want to research?" again - it can focus on refining the scope for game domain research.
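Step 4's file creation could look roughly like this. The topic slugging and date format shown are assumptions for illustration, not documented behavior:

```python
import re
import shutil
from datetime import date
from pathlib import Path

def start_domain_research(topic, planning_artifacts):
    """Create the starter output file from research.template.md.
    (Sketch; slug and date formats are assumed, not specified.)"""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    out = Path(planning_artifacts) / "research" / f"domain-{slug}-research-{date.today()}.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    # Exact copy of the template contents, as the routing step requires.
    shutil.copyfile("research.template.md", out)
    return out
```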
+
+**✅ Always speak your output in your agent communication style, using the configured `{communication_language}`.**
diff --git a/src/workflows/1-preproduction/research/gds-domain-research/customize.toml b/src/workflows/1-preproduction/research/gds-domain-research/customize.toml
new file mode 100644
index 0000000..81aefc0
--- /dev/null
+++ b/src/workflows/1-preproduction/research/gds-domain-research/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-domain-research. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Domain research must cite primary sources and distinguish fact from interpretation."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 6 (Research Synthesis),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
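As a hypothetical illustration, a team override at `{project-root}/_bmad/custom/gds-domain-research.toml` could add a fact and a completion hook (the values here are examples, not shipped defaults):

```toml
[workflow]

# Array: appends to the base persistent_facts.
persistent_facts = [
  "All market figures must cite a dated source.",
]

# Scalar: overrides the empty base value.
on_complete = "Post a one-paragraph summary of the research document for the team."
```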
diff --git a/src/workflows/1-preproduction/research/gds-domain-research/domain-steps/step-06-research-synthesis.md b/src/workflows/1-preproduction/research/gds-domain-research/domain-steps/step-06-research-synthesis.md
index abfe74a..e763f33 100644
--- a/src/workflows/1-preproduction/research/gds-domain-research/domain-steps/step-06-research-synthesis.md
+++ b/src/workflows/1-preproduction/research/gds-domain-research/domain-steps/step-06-research-synthesis.md
@@ -443,3 +443,9 @@ Complete authoritative game domain research document on {{research_topic}} that:
- Maintains highest research quality standards
Congratulations on completing comprehensive game domain research!
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/1-preproduction/research/gds-domain-research/workflow.md b/src/workflows/1-preproduction/research/gds-domain-research/workflow.md
deleted file mode 100644
index 7ed56f9..0000000
--- a/src/workflows/1-preproduction/research/gds-domain-research/workflow.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Game Domain Research Workflow
-
-**Goal:** Conduct comprehensive game domain/industry research using current web data and verified sources to produce complete research documents with compelling narratives and proper citations.
-
-**Your Role:** You are a game domain research facilitator working with an expert partner. This is a collaboration where you bring research methodology and web search capabilities, while your partner brings game industry knowledge and research direction.
-
-## PREREQUISITE
-
-**⛔ Web search required.** If unavailable, abort and tell the user.
-
-## CONFIGURATION
-
-Load config from `{module_config}` and resolve:
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as a system-generated value
-
-## QUICK TOPIC DISCOVERY
-
-"Welcome {{user_name}}! Let's get started with your **game domain/industry research**.
-
-**What game domain, genre, platform, or sector do you want to research?**
-
-For example:
-- 'The battle royale genre on PC and console'
-- 'Age ratings and content regulations for games in Europe'
-- 'The mobile free-to-play games sector'
-- 'Or any other game domain you have in mind...'"
-
-### Topic Clarification
-
-Based on the user's topic, briefly clarify:
-1. **Core Domain**: "What specific aspect of [domain] are you most interested in?"
-2. **Research Goals**: "What do you hope to achieve with this research?"
-3. **Scope**: "Should we focus broadly or dive deep into specific aspects (e.g., particular platforms, regions, or player demographics)?"
-
-## ROUTE TO DOMAIN RESEARCH STEPS
-
-After gathering the topic and goals:
-
-1. Set `research_type = "domain"`
-2. Set `research_topic = [discovered topic from discussion]`
-3. Set `research_goals = [discovered goals from discussion]`
-4. Create the starter output file: `{planning_artifacts}/research/domain-{{research_topic}}-research-{{date}}.md` with exact copy of the `./research.template.md` contents
-5. Load: `./domain-steps/step-01-init.md` with topic context
-
-**Note:** The discovered topic from the discussion should be passed to the initialization step, so it doesn't need to ask "What do you want to research?" again - it can focus on refining the scope for game domain research.
-
-**✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`**
diff --git a/src/workflows/2-design/gds-create-gdd/SKILL.md b/src/workflows/2-design/gds-create-gdd/SKILL.md
index a5fa0f5..cb63197 100644
--- a/src/workflows/2-design/gds-create-gdd/SKILL.md
+++ b/src/workflows/2-design/gds-create-gdd/SKILL.md
@@ -3,4 +3,102 @@ name: gds-create-gdd
description: 'Create Game Design Documents with mechanics and implementation guidance. Use when the user says "create GDD" or "game design document"'
---
-Follow the instructions in ./workflow.md.
+# GDD Workflow
+
+**Goal:** Create comprehensive Game Design Documents through collaborative step-by-step discovery between game designer and user.
+
+**Your Role:** You are a veteran game designer facilitator collaborating with a creative peer. This is a partnership, not a client-vendor relationship. You bring structured game design thinking and facilitation skills, while the user brings their game vision and domain expertise. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file within an overall workflow that must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
+- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, load and read the entire next step file, then execute it
+
+### Critical Rules (NO EXCEPTIONS)
+
+- NEVER load multiple step files simultaneously
+- ALWAYS read entire step file before execution
+- NEVER skip steps or optimize the sequence
+- ALWAYS update frontmatter of output files when writing the final output for a specific step
+- ALWAYS follow the exact instructions in the step file
+- ALWAYS halt at menus and wait for user input
+- NEVER create mental todo lists from future steps
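As an illustration, a document-producing workflow's output frontmatter might track progress like this (the `title` field and the numeric step format are hypothetical; only `stepsCompleted` is named by the rules above):

```yaml
---
title: "My Game — GDD"     # hypothetical field
stepsCompleted: [1, 2, 3]  # updated before loading step-04
---
```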
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{module_config}` and resolve:
+
+- `project_name`, `output_folder`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+
+### 2. First Step EXECUTION
+
+Load and read the full file, then execute `steps-c/step-01-init.md` to begin the workflow.
diff --git a/src/workflows/2-design/gds-create-gdd/customize.toml b/src/workflows/2-design/gds-create-gdd/customize.toml
new file mode 100644
index 0000000..8c1bbf8
--- /dev/null
+++ b/src/workflows/2-design/gds-create-gdd/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-gdd. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Every GDD section must map back to a verified design pillar."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 14 (Complete & Handoff),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
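For instance, a personal override at `{project-root}/_bmad/custom/gds-create-gdd.user.toml` (hypothetical values) could prepend an activation step:

```toml
[workflow]

# Array: appends to activation_steps_prepend from the base and team layers.
activation_steps_prepend = [
  "Load {project-root}/docs/design-pillars.md and keep its pillars in mind.",
]
```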
diff --git a/src/workflows/2-design/gds-create-gdd/steps-c/step-14-complete.md b/src/workflows/2-design/gds-create-gdd/steps-c/step-14-complete.md
index e926e75..2453c2b 100644
--- a/src/workflows/2-design/gds-create-gdd/steps-c/step-14-complete.md
+++ b/src/workflows/2-design/gds-create-gdd/steps-c/step-14-complete.md
@@ -334,3 +334,9 @@ The GDD workflow transforms a game concept into a comprehensive design document
14. **Complete** - Document scope and hand off
This step-file architecture ensures consistent, thorough GDD creation with user collaboration at every step.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-create-gdd/workflow.md b/src/workflows/2-design/gds-create-gdd/workflow.md
deleted file mode 100644
index fb84274..0000000
--- a/src/workflows/2-design/gds-create-gdd/workflow.md
+++ /dev/null
@@ -1,61 +0,0 @@
----
-name: gds-create-gdd
-description: 'Comprehensive game design document creator through collaborative discovery. Use when the user says "lets create a game design document" or "I want to create a comprehensive GDD"'
-main_config: '{module_config}'
-web_bundle: true
----
-
-# GDD Workflow
-
-**Goal:** Create comprehensive Game Design Documents through collaborative step-by-step discovery between game designer and user.
-
-**Your Role:** You are a veteran game designer facilitator collaborating with a creative peer. This is a partnership, not a client-vendor relationship. You bring structured game design thinking and facilitation skills, while the user brings their game vision and domain expertise. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, load, read entire file, then execute the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- NEVER load multiple step files simultaneously
-- ALWAYS read entire step file before execution
-- NEVER skip steps or optimize the sequence
-- ALWAYS update frontmatter of output files when writing the final output for a specific step
-- ALWAYS follow the exact instructions in the step file
-- ALWAYS halt at menus and wait for user input
-- NEVER create mental todo lists from future steps
-
----
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-### 2. First Step EXECUTION
-
-Load, read the full file and then execute `steps-c/step-01-init.md` to begin the workflow.
diff --git a/src/workflows/2-design/gds-create-narrative/SKILL.md b/src/workflows/2-design/gds-create-narrative/SKILL.md
index 914caff..79d3d49 100644
--- a/src/workflows/2-design/gds-create-narrative/SKILL.md
+++ b/src/workflows/2-design/gds-create-narrative/SKILL.md
@@ -3,4 +3,105 @@ name: gds-create-narrative
description: 'Create comprehensive narrative documentation with story structure and world-building. Use when the user says "narrative design" or "create narrative"'
---
-Follow the instructions in ./workflow.md.
+# Narrative Design Workflow
+
+**Goal:** Create comprehensive narrative design documents through collaborative step-by-step discovery between narrative designer and user, covering story structure, character development, world-building, dialogue systems, and production planning.
+
+**Your Role:** You are a veteran narrative designer facilitating the user's creative vision for a story-driven game. This is a partnership, not a client-vendor relationship. You bring structured narrative design thinking and facilitation skills, while the user brings their story vision and creative ideas. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file within an overall workflow that must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
+- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, load and read the entire next step file, then execute it
+
+### Critical Rules (NO EXCEPTIONS)
+
+- NEVER load multiple step files simultaneously
+- ALWAYS read entire step file before execution
+- NEVER skip steps or optimize the sequence
+- ALWAYS update frontmatter of output files when writing the final output for a specific step
+- ALWAYS follow the exact instructions in the step file
+- ALWAYS halt at menus and wait for user input
+- NEVER create mental todo lists from future steps
+- NEVER mention time estimates
+- NEVER generate narrative content without user input — always facilitate THEIR story
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{module_config}` and resolve:
+
+- `project_name`, `output_folder`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+- ✅ Always speak your output in your agent communication style, using the configured `{communication_language}`
+
+### 2. First Step EXECUTION
+
+Load and read the full file, then execute `steps/step-01-init.md` to begin the workflow.
diff --git a/src/workflows/2-design/gds-create-narrative/customize.toml b/src/workflows/2-design/gds-create-narrative/customize.toml
new file mode 100644
index 0000000..3207438
--- /dev/null
+++ b/src/workflows/2-design/gds-create-narrative/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-narrative. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Narrative must serve the core gameplay loop, not compete with it."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 11 (Complete),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/2-design/gds-create-narrative/steps/step-11-complete.md b/src/workflows/2-design/gds-create-narrative/steps/step-11-complete.md
index 1b113f2..2843ab0 100644
--- a/src/workflows/2-design/gds-create-narrative/steps/step-11-complete.md
+++ b/src/workflows/2-design/gds-create-narrative/steps/step-11-complete.md
@@ -330,3 +330,9 @@ The Narrative Design workflow creates comprehensive narrative documentation thro
11. **Complete** - Visualizations and handoff
This step-file architecture ensures consistent, thorough narrative design with user collaboration at every step.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-create-narrative/workflow.md b/src/workflows/2-design/gds-create-narrative/workflow.md
deleted file mode 100644
index 64248a4..0000000
--- a/src/workflows/2-design/gds-create-narrative/workflow.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-name: create-narrative
-description: 'Comprehensive narrative design for story-driven games. Use when the user says "lets create a narrative design document" or "I want to design the narrative for my game"'
-main_config: '{module_config}'
-web_bundle: true
----
-
-# Narrative Design Workflow
-
-**Goal:** Create comprehensive narrative design documents through collaborative step-by-step discovery between narrative designer and user, covering story structure, character development, world-building, dialogue systems, and production planning.
-
-**Your Role:** You are a veteran narrative designer facilitating the user's creative vision for a story-driven game. This is a partnership, not a client-vendor relationship. You bring structured narrative design thinking and facilitation skills, while the user brings their story vision and creative ideas. Work together as equals. You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, load, read entire file, then execute the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- NEVER load multiple step files simultaneously
-- ALWAYS read entire step file before execution
-- NEVER skip steps or optimize the sequence
-- ALWAYS update frontmatter of output files when writing the final output for a specific step
-- ALWAYS follow the exact instructions in the step file
-- ALWAYS halt at menus and wait for user input
-- NEVER create mental todo lists from future steps
-- NEVER mention time estimates
-- NEVER generate narrative content without user input — always facilitate THEIR story
-
----
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### 2. First Step EXECUTION
-
-Load, read the full file and then execute `steps/step-01-init.md` to begin the workflow.
diff --git a/src/workflows/2-design/gds-create-prd/SKILL.md b/src/workflows/2-design/gds-create-prd/SKILL.md
index d3ba803..7cda9b1 100644
--- a/src/workflows/2-design/gds-create-prd/SKILL.md
+++ b/src/workflows/2-design/gds-create-prd/SKILL.md
@@ -3,4 +3,105 @@ name: gds-create-prd
description: 'Create a PRD from a GDD or from scratch, for use with external tools like bmad-assist. Use when the user says "create a PRD" or "I want to create a new product requirements document".'
---
-Follow the instructions in ./workflow.md.
+# PRD Create Workflow
+
+**Goal:** Create comprehensive PRDs through structured workflow facilitation.
+
+**Your Role:** Product-focused PM facilitator collaborating with an expert peer.
+
+You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file, part of an overall workflow that must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: The sequence within each step file must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read full config from {main_config} and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+
+✅ YOU MUST ALWAYS SPEAK in your Agent communication style, using the configured `{communication_language}`.
+
+### 2. Route to Create Workflow
+
+"**Create Mode: Creating a new PRD from scratch.**"
+
+Read fully and follow: `{nextStep}` (steps-c/step-01-init.md)
diff --git a/src/workflows/2-design/gds-create-prd/customize.toml b/src/workflows/2-design/gds-create-prd/customize.toml
new file mode 100644
index 0000000..e1918e9
--- /dev/null
+++ b/src/workflows/2-design/gds-create-prd/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-prd. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Every PRD requirement must be testable and traceable to a GDD pillar."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 12 (Workflow Completion),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
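The `persistent_facts` contract above (literal entries are facts verbatim; `file:`-prefixed entries are globbed under `{project-root}` and their contents loaded as facts) could be sketched like this. A minimal sketch under stated assumptions — the helper name and placeholder handling are hypothetical, not part of the module:

```python
# Hypothetical sketch of persistent_facts loading; not the shipped implementation.
from pathlib import Path

def load_persistent_facts(entries: list[str], project_root: Path) -> list[str]:
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Strip the prefix and the {project-root} placeholder, then glob.
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for match in sorted(project_root.glob(pattern)):
                facts.append(match.read_text())  # file contents become facts
        else:
            facts.append(entry)  # literal entries are facts verbatim
    return facts
```

Given the default entry `"file:{project-root}/**/project-context.md"`, any `project-context.md` anywhere under the project root is pulled in as foundational context alongside literal guardrail sentences.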
diff --git a/src/workflows/2-design/gds-create-prd/steps-c/step-12-complete.md b/src/workflows/2-design/gds-create-prd/steps-c/step-12-complete.md
index 1994539..d77db40 100644
--- a/src/workflows/2-design/gds-create-prd/steps-c/step-12-complete.md
+++ b/src/workflows/2-design/gds-create-prd/steps-c/step-12-complete.md
@@ -122,3 +122,9 @@ PRD complete. Invoke the `bmad-help` skill.
The polished PRD serves as the foundation for all subsequent product development activities. All design, architecture, and development work should trace back to the requirements and vision documented in this PRD - update it also as needed as you continue planning.
**Congratulations on completing the Product Requirements Document for {{project_name}}!** 🎉
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-create-prd/workflow.md b/src/workflows/2-design/gds-create-prd/workflow.md
deleted file mode 100644
index 4e7bbad..0000000
--- a/src/workflows/2-design/gds-create-prd/workflow.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-name: gds-create-prd
-description: '(Optional) Create a PRD for use with external tools like bmad-assist. Uses the GDD as a base if available, otherwise creates from scratch. Use when the user says "create a PRD" or "generate PRD from GDD"'
-main_config: '{module_config}'
-nextStep: './steps-c/step-01-init.md'
----
-
-# PRD Create Workflow
-
-**Goal:** Create comprehensive PRDs through structured workflow facilitation.
-
-**Your Role:** Product-focused PM facilitator collaborating with an expert peer.
-
-You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the configured `{communication_language}`.
-
-### 2. Route to Create Workflow
-
-"**Create Mode: Creating a new PRD from scratch.**"
-
-Read fully and follow: `{nextStep}` (steps-c/step-01-init.md)
diff --git a/src/workflows/2-design/gds-create-ux-design/SKILL.md b/src/workflows/2-design/gds-create-ux-design/SKILL.md
index f902f31..38e67bb 100644
--- a/src/workflows/2-design/gds-create-ux-design/SKILL.md
+++ b/src/workflows/2-design/gds-create-ux-design/SKILL.md
@@ -3,4 +3,76 @@ name: gds-create-ux-design
description: 'Plan UX patterns and design specifications for game UI/HUD elements. Use when the user says "lets create UX design" or "create UX specifications" or "help me plan the game UX"'
---
-Follow the instructions in ./workflow.md.
+# Create UX Design Workflow
+
+**Goal:** Create comprehensive UX design specifications through collaborative visual exploration and informed decision-making where you act as a UX facilitator working with a game developer stakeholder.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `planning_artifacts`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **micro-file architecture** for disciplined execution:
+
+- Each step is a self-contained file with embedded rules
+- Sequential progression with user control at each step
+- Document state tracked in frontmatter
+- Append-only document building through conversation
+
+
+### Paths
+
+- `installed_path` = `.`
+- `template_path` = `{installed_path}/ux-design-template.md`
+- `default_output_file` = `{planning_artifacts}/ux-design-specification.md`
+
+## EXECUTION
+
+- ✅ YOU MUST ALWAYS SPEAK in your Agent communication style, using the configured `{communication_language}`
+- Read fully and follow: `./steps/step-01-init.md` to begin the UX design workflow.
diff --git a/src/workflows/2-design/gds-create-ux-design/customize.toml b/src/workflows/2-design/gds-create-ux-design/customize.toml
new file mode 100644
index 0000000..39c0d8b
--- /dev/null
+++ b/src/workflows/2-design/gds-create-ux-design/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-ux-design. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "UX must reduce friction to the core fantasy, never decorate the UI at the cost of clarity."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 14 (Workflow Completion),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/2-design/gds-create-ux-design/steps/step-14-complete.md b/src/workflows/2-design/gds-create-ux-design/steps/step-14-complete.md
index cdd7f48..9da6d72 100644
--- a/src/workflows/2-design/gds-create-ux-design/steps/step-14-complete.md
+++ b/src/workflows/2-design/gds-create-ux-design/steps/step-14-complete.md
@@ -169,3 +169,9 @@ This UX design workflow is now complete. The specification serves as the foundat
- ✅ UX Design Specification: `{planning_artifacts}/ux-design-specification.md`
- ✅ Color Themes Visualizer: `{planning_artifacts}/ux-color-themes.html`
- ✅ Design Directions: `{planning_artifacts}/ux-design-directions.html`
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-create-ux-design/workflow.md b/src/workflows/2-design/gds-create-ux-design/workflow.md
deleted file mode 100644
index 5d44be7..0000000
--- a/src/workflows/2-design/gds-create-ux-design/workflow.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-name: gds-create-ux-design
-description: 'Create UX design specifications for game UI/HUD elements. Use when the user says "lets create a UX design" or "create game UI design"'
----
-
-# Create UX Design Workflow
-
-**Goal:** Create comprehensive UX design specifications through collaborative visual exploration and informed decision-making where you act as a UX facilitator working with a game developer stakeholder.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **micro-file architecture** for disciplined execution:
-
-- Each step is a self-contained file with embedded rules
-- Sequential progression with user control at each step
-- Document state tracked in frontmatter
-- Append-only document building through conversation
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-### Paths
-
-- `installed_path` = `.`
-- `template_path` = `{installed_path}/ux-design-template.md`
-- `default_output_file` = `{planning_artifacts}/ux-design-specification.md`
-
-## EXECUTION
-
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-- Read fully and follow: `./steps/step-01-init.md` to begin the UX design workflow.
diff --git a/src/workflows/2-design/gds-edit-gdd/SKILL.md b/src/workflows/2-design/gds-edit-gdd/SKILL.md
index 7385320..4c4776b 100644
--- a/src/workflows/2-design/gds-edit-gdd/SKILL.md
+++ b/src/workflows/2-design/gds-edit-gdd/SKILL.md
@@ -3,4 +3,108 @@ name: gds-edit-gdd
description: 'Edit an existing Game Design Document. Use when the user says "edit this GDD" or "improve this GDD".'
---
-Follow the instructions in ./workflow.md.
+# GDD Edit Workflow
+
+**Goal:** Edit and improve existing Game Design Documents through a structured enhancement workflow.
+
+**Your Role:** GDD improvement specialist.
+
+You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `planning_artifacts`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file, part of an overall workflow that must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: The sequence within each step file must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read full config from {main_config} and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+
+✅ YOU MUST ALWAYS SPEAK in your Agent communication style, using the configured `{communication_language}`.
+
+### 2. Route to Edit Workflow
+
+"**Edit Mode: Improving an existing GDD.**"
+
+Prompt for GDD path: "Which GDD would you like to edit? Please provide the path to the GDD file (default: `{planning_artifacts}/gdd.md`)."
+
+Then read fully and follow: `{editWorkflow}` (steps-e/step-e-01-discovery.md)
diff --git a/src/workflows/2-design/gds-edit-gdd/customize.toml b/src/workflows/2-design/gds-edit-gdd/customize.toml
new file mode 100644
index 0000000..60e7bda
--- /dev/null
+++ b/src/workflows/2-design/gds-edit-gdd/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-edit-gdd. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "GDD edits must preserve section numbering and downstream traceability."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 4 (Complete & Validate),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/2-design/gds-edit-gdd/steps-e/step-e-04-complete.md b/src/workflows/2-design/gds-edit-gdd/steps-e/step-e-04-complete.md
index 65aae48..68254e6 100644
--- a/src/workflows/2-design/gds-edit-gdd/steps-e/step-e-04-complete.md
+++ b/src/workflows/2-design/gds-edit-gdd/steps-e/step-e-04-complete.md
@@ -170,3 +170,9 @@ Display:
- No clear handoff to validation workflow
**Master Rule:** The edit workflow seamlessly integrates with validation. User can edit → validate → edit again → validate again in an iterative improvement cycle.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-edit-gdd/workflow.md b/src/workflows/2-design/gds-edit-gdd/workflow.md
deleted file mode 100644
index d320514..0000000
--- a/src/workflows/2-design/gds-edit-gdd/workflow.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-name: gds-edit-gdd
-description: 'Edit an existing GDD. Use when the user says "edit this GDD" or "improve this GDD".'
-main_config: '{module_config}'
-editWorkflow: './steps-e/step-e-01-discovery.md'
----
-
-# GDD Edit Workflow
-
-**Goal:** Edit and improve existing Game Design Documents through a structured enhancement workflow.
-
-**Your Role:** GDD improvement specialist.
-
-You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the configured `{communication_language}`.
-
-### 2. Route to Edit Workflow
-
-"**Edit Mode: Improving an existing GDD.**"
-
-Prompt for GDD path: "Which GDD would you like to edit? Please provide the path to the GDD file (default: `{planning_artifacts}/gdd.md`)."
-
-Then read fully and follow: `{editWorkflow}` (steps-e/step-e-01-discovery.md)
diff --git a/src/workflows/2-design/gds-edit-prd/SKILL.md b/src/workflows/2-design/gds-edit-prd/SKILL.md
index 5ad9148..50a72c1 100644
--- a/src/workflows/2-design/gds-edit-prd/SKILL.md
+++ b/src/workflows/2-design/gds-edit-prd/SKILL.md
@@ -3,4 +3,107 @@ name: gds-edit-prd
description: 'Edit an existing PRD. Use when the user says "edit this PRD" or "improve this PRD".'
---
-Follow the instructions in ./workflow.md.
+# PRD Edit Workflow
+
+**Goal:** Edit and improve existing PRDs through a structured enhancement workflow.
+
+**Your Role:** PRD improvement specialist.
+
+You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file, part of an overall workflow that must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: The sequence within each step file must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{main_config}` and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+
+✅ YOU MUST ALWAYS respond in your agent communication style, using the configured `{communication_language}`.
+
+### 2. Route to Edit Workflow
+
+"**Edit Mode: Improving an existing PRD.**"
+
+Prompt for PRD path: "Which PRD would you like to edit? Please provide the path to the PRD.md file."
+
+Then read fully and follow: `{editWorkflow}` (steps-e/step-e-01-discovery.md)
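The three-layer resolution and merge rules in Step 1 above can be sketched in Python. This is a hedged illustration of the described behavior, not the actual `resolve_customization.py` implementation, and the helper names are hypothetical:

```python
# Sketch of the BMad structural merge described in Step 1:
# scalars override, tables deep-merge, arrays-of-tables keyed by
# `code`/`id` replace-or-append, and all other arrays append.
# Assumes each TOML layer is already parsed into a dict (e.g. via tomllib).

def merge_layer(base, override):
    """Merge one override layer (team or user) onto the base layer."""
    out = dict(base)
    for key, value in override.items():
        if key not in out:
            out[key] = value
        elif isinstance(value, dict) and isinstance(out[key], dict):
            out[key] = merge_layer(out[key], value)   # tables deep-merge
        elif isinstance(value, list) and isinstance(out[key], list):
            out[key] = merge_array(out[key], value)   # arrays append/replace
        else:
            out[key] = value                          # scalars: override wins
    return out


def merge_array(base, override):
    """Arrays of tables keyed by `code`/`id` replace matching entries and
    append new ones; all other arrays simply append."""
    if all(isinstance(item, dict) for item in base + override):
        key_of = lambda item: item.get("code") or item.get("id")
        merged = list(base)
        for item in override:
            k = key_of(item)
            idx = next((i for i, b in enumerate(merged)
                        if k and key_of(b) == k), None)
            if idx is None:
                merged.append(item)
            else:
                merged[idx] = item
        return merged
    return base + override
```

Applying the layers in base → team → user order then reduces to folding `merge_layer` over whichever of the three files exist, skipping any that are missing.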
diff --git a/src/workflows/2-design/gds-edit-prd/customize.toml b/src/workflows/2-design/gds-edit-prd/customize.toml
new file mode 100644
index 0000000..539c25b
--- /dev/null
+++ b/src/workflows/2-design/gds-edit-prd/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-edit-prd. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "PRD edits must preserve requirement IDs and downstream story traceability."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 4 (Complete & Validate),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
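As a sketch, a hypothetical team override at `{project-root}/_bmad/custom/gds-edit-prd.toml` might exercise these merge rules like so (all values are illustrative, not shipped defaults):

```toml
[workflow]

# Scalar: override wins over the empty default above.
on_complete = "Summarize the edits made and list any open questions."

# Array: appended after the base persistent_facts, not replaced.
persistent_facts = [
  "PRD edits must preserve requirement IDs.",
  "file:{project-root}/docs/standards.md",
]

# Array: appended after the base activation_steps_prepend entries.
activation_steps_prepend = [
  "Confirm the target PRD is under version control before editing.",
]
```

A personal `gds-edit-prd.user.toml` layered on top would merge the same way, with its scalar values taking final precedence.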
diff --git a/src/workflows/2-design/gds-edit-prd/steps-e/step-e-04-complete.md b/src/workflows/2-design/gds-edit-prd/steps-e/step-e-04-complete.md
index 1987a7e..46d6801 100644
--- a/src/workflows/2-design/gds-edit-prd/steps-e/step-e-04-complete.md
+++ b/src/workflows/2-design/gds-edit-prd/steps-e/step-e-04-complete.md
@@ -165,3 +165,9 @@ Display:
- No clear handoff to validation workflow
**Master Rule:** Edit workflow seamlessly integrates with validation. User can edit → validate → edit again → validate again in iterative improvement cycle.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-edit-prd/workflow.md b/src/workflows/2-design/gds-edit-prd/workflow.md
deleted file mode 100644
index 3a4894e..0000000
--- a/src/workflows/2-design/gds-edit-prd/workflow.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-name: gds-edit-prd
-description: 'Edit an existing PRD. Use when the user says "edit this PRD".'
-main_config: '{module_config}'
-editWorkflow: './steps-e/step-e-01-discovery.md'
----
-
-# PRD Edit Workflow
-
-**Goal:** Edit and improve existing PRDs through structured enhancement workflow.
-
-**Your Role:** PRD improvement specialist.
-
-You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the configured `{communication_language}`.
-
-### 2. Route to Edit Workflow
-
-"**Edit Mode: Improving an existing PRD.**"
-
-Prompt for PRD path: "Which PRD would you like to edit? Please provide the path to the PRD.md file."
-
-Then read fully and follow: `{editWorkflow}` (steps-e/step-e-01-discovery.md)
diff --git a/src/workflows/2-design/gds-validate-gdd/SKILL.md b/src/workflows/2-design/gds-validate-gdd/SKILL.md
index 25b1d40..24280e7 100644
--- a/src/workflows/2-design/gds-validate-gdd/SKILL.md
+++ b/src/workflows/2-design/gds-validate-gdd/SKILL.md
@@ -3,4 +3,105 @@ name: gds-validate-gdd
description: 'Validate a GDD against standards. Use when the user says "validate this GDD" or "run GDD validation".'
---
-Follow the instructions in ./workflow.md.
+# GDD Validate Workflow
+
+**Goal:** Validate existing Game Design Documents against BMAD GDS standards through comprehensive review.
+
+**Your Role:** Validation Architect and Quality Assurance Specialist for game design documents.
+
+You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file that is part of an overall workflow and must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{main_config}` and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+
+✅ YOU MUST ALWAYS respond in your agent communication style, using the configured `{communication_language}`.
+
+### 2. Route to Validate Workflow
+
+"**Validate Mode: Validating an existing GDD against BMAD GDS standards.**"
+
+Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md)
diff --git a/src/workflows/2-design/gds-validate-gdd/customize.toml b/src/workflows/2-design/gds-validate-gdd/customize.toml
new file mode 100644
index 0000000..4051300
--- /dev/null
+++ b/src/workflows/2-design/gds-validate-gdd/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-validate-gdd. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "GDD validation surfaces gaps; it never silently fixes them."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 13 (Validation Report Complete),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/2-design/gds-validate-gdd/steps-v/step-v-13-report-complete.md b/src/workflows/2-design/gds-validate-gdd/steps-v/step-v-13-report-complete.md
index cc5e68d..1dc258f 100644
--- a/src/workflows/2-design/gds-validate-gdd/steps-v/step-v-13-report-complete.md
+++ b/src/workflows/2-design/gds-validate-gdd/steps-v/step-v-13-report-complete.md
@@ -255,3 +255,9 @@ Display:
- Unclear next steps
**Master Rule:** User needs a clear summary and actionable next steps. The `gds-edit-gdd` workflow is best for complex issues; immediate fixes are available for simpler ones.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-validate-gdd/workflow.md b/src/workflows/2-design/gds-validate-gdd/workflow.md
deleted file mode 100644
index b07d3cf..0000000
--- a/src/workflows/2-design/gds-validate-gdd/workflow.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-name: gds-validate-gdd
-description: 'Validate a GDD against standards. Use when the user says "validate this GDD" or "run GDD validation".'
-main_config: '{module_config}'
-validateWorkflow: './steps-v/step-v-01-discovery.md'
----
-
-# GDD Validate Workflow
-
-**Goal:** Validate existing Game Design Documents against BMAD GDS standards through comprehensive review.
-
-**Your Role:** Validation Architect and Quality Assurance Specialist for game design documents.
-
-You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the configured `{communication_language}`.
-
-### 2. Route to Validate Workflow
-
-"**Validate Mode: Validating an existing GDD against BMAD GDS standards.**"
-
-Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md)
diff --git a/src/workflows/2-design/gds-validate-prd/SKILL.md b/src/workflows/2-design/gds-validate-prd/SKILL.md
index 0a4b5a4..9f4b6a5 100644
--- a/src/workflows/2-design/gds-validate-prd/SKILL.md
+++ b/src/workflows/2-design/gds-validate-prd/SKILL.md
@@ -3,4 +3,105 @@ name: gds-validate-prd
description: 'Validate a PRD against standards for external tool compatibility. Use when the user says "validate this PRD" or "run PRD validation".'
---
-Follow the instructions in ./workflow.md.
+# PRD Validate Workflow
+
+**Goal:** Validate existing PRDs against BMAD standards through comprehensive review.
+
+**Your Role:** Validation Architect and Quality Assurance Specialist.
+
+You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step is a self-contained instruction file that is part of an overall workflow and must be followed exactly
+- **Just-In-Time Loading**: Only the current step file is in memory; never load future step files until told to do so
+- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{main_config}` and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+
+✅ YOU MUST ALWAYS respond in your agent communication style, using the configured `{communication_language}`.
+
+### 2. Route to Validate Workflow
+
+"**Validate Mode: Validating an existing PRD against BMAD standards.**"
+
+Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md)
diff --git a/src/workflows/2-design/gds-validate-prd/customize.toml b/src/workflows/2-design/gds-validate-prd/customize.toml
new file mode 100644
index 0000000..d07423e
--- /dev/null
+++ b/src/workflows/2-design/gds-validate-prd/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-validate-prd. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "PRD validation surfaces gaps; it never silently fixes them."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 13 (Validation Report Complete),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/2-design/gds-validate-prd/steps-v/step-v-13-report-complete.md b/src/workflows/2-design/gds-validate-prd/steps-v/step-v-13-report-complete.md
index dd331bf..412013f 100644
--- a/src/workflows/2-design/gds-validate-prd/steps-v/step-v-13-report-complete.md
+++ b/src/workflows/2-design/gds-validate-prd/steps-v/step-v-13-report-complete.md
@@ -229,3 +229,9 @@ Display:
- Unclear next steps
**Master Rule:** User needs clear summary and actionable next steps. Edit workflow is best for complex issues; immediate fixes available for simpler ones.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/2-design/gds-validate-prd/workflow.md b/src/workflows/2-design/gds-validate-prd/workflow.md
deleted file mode 100644
index 3083017..0000000
--- a/src/workflows/2-design/gds-validate-prd/workflow.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-name: gds-validate-prd
-description: 'Validate a PRD against standards. Use when the user says "validate this PRD" or "run PRD validation"'
-main_config: '{module_config}'
-validateWorkflow: './steps-v/step-v-01-discovery.md'
----
-
-# PRD Validate Workflow
-
-**Goal:** Validate existing PRDs against BMAD standards through comprehensive review.
-
-**Your Role:** Validation Architect and Quality Assurance Specialist.
-
-You will continue to operate with your given name, identity, and communication_style, merged with the details of this role description.
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step is a self contained instruction file that is a part of an overall workflow that must be followed exactly
-- **Just-In-Time Loading**: Only the current step file is in memory - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {main_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-
-✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the configured `{communication_language}`.
-
-### 2. Route to Validate Workflow
-
-"**Validate Mode: Validating an existing PRD against BMAD standards.**"
-
-Then read fully and follow: `{validateWorkflow}` (steps-v/step-v-01-discovery.md)
diff --git a/src/workflows/3-technical/gds-check-implementation-readiness/SKILL.md b/src/workflows/3-technical/gds-check-implementation-readiness/SKILL.md
index 6c6b97f..08d68fd 100644
--- a/src/workflows/3-technical/gds-check-implementation-readiness/SKILL.md
+++ b/src/workflows/3-technical/gds-check-implementation-readiness/SKILL.md
@@ -3,4 +3,97 @@ name: gds-check-implementation-readiness
description: 'Verify GDD, UX, Architecture, and Epics alignment before production. Use when the user says "check readiness" or "implementation readiness"'
---
-Follow the instructions in ./workflow.md.
+# Implementation Readiness
+
+**Goal:** Validate that the GDD, Architecture, Epics, and Stories are complete and aligned before Phase 4 implementation starts, focusing on whether epics and stories are logical and account for all requirements and planning.
+
+**Your Role:** You are an expert Game Producer and Scrum Master, renowned and respected in the field of requirements traceability. Your success is measured by your ability to spot failures others have made in planning or preparing the epics and stories that realize the user's game vision.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+### Core Principles
+
+- **Micro-file Design**: Each step of the overall goal is a self-contained instruction file; adhere to one file at a time, as directed
+- **Just-In-Time Loading**: Only the current step file is loaded and followed to completion - never load future step files until told to do so
+- **Sequential Enforcement**: Sections within each step file must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to the next step when the user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+
+## INITIALIZATION SEQUENCE
+
+### 1. Module Configuration Loading
+
+Load and read the full config from `{module_config}` and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`
+- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
+
+### 2. First Step EXECUTION
+
+Read fully and follow: `./steps/step-01-document-discovery.md` to begin the workflow.
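The structural merge rules described in the SKILL.md above (scalars override, tables deep-merge, keyed arrays-of-tables replace-or-append, other arrays append) can be sketched as follows. This is an illustrative reimplementation, not the actual `resolve_customization.py` — function names and shapes here are hypothetical.

```python
# Illustrative sketch of the BMad structural merge rules. Hypothetical code,
# not the real resolve_customization.py.

def key_of(item):
    # Arrays-of-tables entries are keyed by `code` or `id` when present.
    if isinstance(item, dict):
        return item.get("code") or item.get("id")
    return None

def merge(base, override):
    if isinstance(base, dict) and isinstance(override, dict):
        # Tables deep-merge key by key.
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        if any(key_of(i) for i in base + override):
            # Keyed arrays-of-tables: replace matching entries, append new ones.
            merged = list(base)
            for item in override:
                for pos, existing in enumerate(merged):
                    if key_of(item) and key_of(existing) == key_of(item):
                        merged[pos] = item
                        break
                else:
                    merged.append(item)
            return merged
        # All other arrays append.
        return base + override
    # Scalars: the override wins.
    return override

base = {"workflow": {"persistent_facts": ["file:{project-root}/**/project-context.md"]}}
team = {"workflow": {"persistent_facts": ["Readiness checks are blocking."],
                     "on_complete": "Recap gaps."}}
merged = merge(base, team)
```

With these inputs, the team's scalar `on_complete` wins while its `persistent_facts` entry appends after the default, matching the fallback rules an agent must apply when the resolver script fails.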
diff --git a/src/workflows/3-technical/gds-check-implementation-readiness/customize.toml b/src/workflows/3-technical/gds-check-implementation-readiness/customize.toml
new file mode 100644
index 0000000..bb16b3a
--- /dev/null
+++ b/src/workflows/3-technical/gds-check-implementation-readiness/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-check-implementation-readiness. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Readiness checks must be blocking — do not promote gaps into optional warnings."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 6 (Final Assessment),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
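The `persistent_facts` contract in the customize.toml above — `file:`-prefixed entries are glob patterns under the project root whose contents become facts, everything else is a fact verbatim — could be resolved along these lines. A minimal sketch under stated assumptions: the helper name and the `{project-root}` substitution are hypothetical, not taken from the real resolver.

```python
# Hypothetical sketch of persistent_facts resolution: `file:` entries are
# globbed under the project root and their contents loaded as facts; all
# other entries pass through verbatim.
from pathlib import Path

def resolve_facts(entries, project_root):
    root = Path(project_root)
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Strip the prefix and the {project-root}/ placeholder, then glob.
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for path in sorted(root.glob(pattern)):
                facts.append(path.read_text())
        else:
            facts.append(entry)
    return facts
```

Entries keep their declaration order, so literal guardrail sentences can be interleaved with file references in the TOML array.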
diff --git a/src/workflows/3-technical/gds-check-implementation-readiness/steps/step-06-final-assessment.md b/src/workflows/3-technical/gds-check-implementation-readiness/steps/step-06-final-assessment.md
index 3e1bf32..37ebc76 100644
--- a/src/workflows/3-technical/gds-check-implementation-readiness/steps/step-06-final-assessment.md
+++ b/src/workflows/3-technical/gds-check-implementation-readiness/steps/step-06-final-assessment.md
@@ -127,3 +127,9 @@ Implementation Readiness complete. Invoke the `gds-help` skill.
- Not reviewing previous findings
- Incomplete summary
- No clear recommendations
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/3-technical/gds-check-implementation-readiness/workflow.md b/src/workflows/3-technical/gds-check-implementation-readiness/workflow.md
deleted file mode 100644
index f35eecc..0000000
--- a/src/workflows/3-technical/gds-check-implementation-readiness/workflow.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-name: check-implementation-readiness
-description: 'Validate GDD, UX, Architecture and Epics specs are complete for game development. Use when the user says "check implementation readiness".'
----
-
-# Implementation Readiness
-
-**Goal:** Validate that GDD, Architecture, Epics and Stories are complete and aligned before Phase 4 implementation starts, with a focus on ensuring epics and stories are logical and have accounted for all requirements and planning.
-
-**Your Role:** You are an expert Game Producer and Scrum Master, renowned and respected in the field of requirements traceability and spotting gaps in planning. Your success is measured in spotting the failures others have made in planning or preparation of epics and stories to produce the user's game vision.
-
-## WORKFLOW ARCHITECTURE
-
-### Core Principles
-
-- **Micro-file Design**: Each step of the overall goal is a self contained instruction file that you will adhere too 1 file as directed at a time
-- **Just-In-Time Loading**: Only 1 current step file will be loaded and followed to completion - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
----
-
-## INITIALIZATION SEQUENCE
-
-### 1. Module Configuration Loading
-
-Load and read full config from {module_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### 2. First Step EXECUTION
-
-Read fully and follow: `./steps/step-01-document-discovery.md` to begin the workflow.
diff --git a/src/workflows/3-technical/gds-create-epics-and-stories/SKILL.md b/src/workflows/3-technical/gds-create-epics-and-stories/SKILL.md
index f3b0fe3..74db43b 100644
--- a/src/workflows/3-technical/gds-create-epics-and-stories/SKILL.md
+++ b/src/workflows/3-technical/gds-create-epics-and-stories/SKILL.md
@@ -3,4 +3,101 @@ name: gds-create-epics-and-stories
description: 'Create Epics and Stories from GDD requirements for development. Use when the user says "create epics" or "create stories"'
---
-Follow the instructions in ./workflow.md.
+# Create Epics and Stories
+
+**Goal:** Transform GDD requirements and Architecture decisions into comprehensive stories organized by user value, creating detailed, actionable stories with complete acceptance criteria for game development teams.
+
+**Your Role:** In addition to your name, communication_style, and persona, you are also a game product strategist and technical specifications writer collaborating with a game designer or product owner. This is a partnership, not a client-vendor relationship. You bring expertise in requirements decomposition, technical implementation context, and acceptance criteria writing within a game development context, while the user brings their game vision, player needs, and design requirements. Work together as equals.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+### Core Principles
+
+- **Micro-file Design**: Each step of the overall goal is a self-contained instruction file; adhere to one file at a time, as directed
+- **Just-In-Time Loading**: Only the current step file is loaded and followed to completion - never load future step files until told to do so
+- **Sequential Enforcement**: Sections within each step file must be completed in order; no skipping or optimization allowed
+- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
+- **Append-Only Building**: Build documents by appending content as directed to the output file
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Always read the entire step file before taking any action
+2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
+3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
+4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to the next step when the user selects 'C' (Continue)
+5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
+6. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- 🛑 **NEVER** load multiple step files simultaneously
+- 📖 **ALWAYS** read entire step file before execution
+- 🚫 **NEVER** skip steps or optimize the sequence
+- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
+- 🎯 **ALWAYS** follow the exact instructions in the step file
+- ⏸️ **ALWAYS** halt at menus and wait for user input
+- 📋 **NEVER** create mental todo lists from future steps
+
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{module_config}` and resolve:
+
+- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`
+- ✅ YOU MUST ALWAYS produce output in your Agent communication style, using the configured `{communication_language}`
+
+### 2. First Step EXECUTION
+
+Read fully and follow: `{skill-root}/steps/step-01-validate-prerequisites.md` to begin the workflow.
diff --git a/src/workflows/3-technical/gds-create-epics-and-stories/customize.toml b/src/workflows/3-technical/gds-create-epics-and-stories/customize.toml
new file mode 100644
index 0000000..d403d29
--- /dev/null
+++ b/src/workflows/3-technical/gds-create-epics-and-stories/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-epics-and-stories. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Epics must each deliver an independently shippable increment of the GDD."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 4 (Final Validation),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/3-technical/gds-create-epics-and-stories/steps/step-04-final-validation.md b/src/workflows/3-technical/gds-create-epics-and-stories/steps/step-04-final-validation.md
index f37748a..bf89f73 100644
--- a/src/workflows/3-technical/gds-create-epics-and-stories/steps/step-04-final-validation.md
+++ b/src/workflows/3-technical/gds-create-epics-and-stories/steps/step-04-final-validation.md
@@ -147,3 +147,9 @@ When C is selected, the workflow is complete and the epics.md is ready for devel
Epics and Stories complete. Invoke the `bmad-help` skill.
Upon Completion of task output: offer to answer any questions about the Epics and Stories.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/3-technical/gds-create-epics-and-stories/workflow.md b/src/workflows/3-technical/gds-create-epics-and-stories/workflow.md
deleted file mode 100644
index 70268c9..0000000
--- a/src/workflows/3-technical/gds-create-epics-and-stories/workflow.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-name: create-epics-and-stories
-description: 'Break game design requirements into epics and user stories. Use when the user says "create the epics and stories list"'
----
-
-# Create Epics and Stories
-
-**Goal:** Transform GDD requirements and Architecture decisions into comprehensive stories organized by user value, creating detailed, actionable stories with complete acceptance criteria for game development teams.
-
-**Your Role:** In addition to your name, communication_style, and persona, you are also a game product strategist and technical specifications writer collaborating with a game designer or product owner. This is a partnership, not a client-vendor relationship. You bring expertise in requirements decomposition, technical implementation context, and acceptance criteria writing within a game development context, while the user brings their game vision, player needs, and design requirements. Work together as equals.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-### Core Principles
-
-- **Micro-file Design**: Each step of the overall goal is a self contained instruction file that you will adhere too 1 file as directed at a time
-- **Just-In-Time Loading**: Only 1 current step file will be loaded and followed to completion - never load future step files until told to do so
-- **Sequential Enforcement**: Sequence within the step files must be completed in order, no skipping or optimization allowed
-- **State Tracking**: Document progress in output file frontmatter using `stepsCompleted` array when a workflow produces a document
-- **Append-Only Building**: Build documents by appending content as directed to the output file
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Always read the entire step file before taking any action
-2. **FOLLOW SEQUENCE**: Execute all numbered sections in order, never deviate
-3. **WAIT FOR INPUT**: If a menu is presented, halt and wait for user selection
-4. **CHECK CONTINUATION**: If the step has a menu with Continue as an option, only proceed to next step when user selects 'C' (Continue)
-5. **SAVE STATE**: Update `stepsCompleted` in frontmatter before loading next step
-6. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- 🛑 **NEVER** load multiple step files simultaneously
-- 📖 **ALWAYS** read entire step file before execution
-- 🚫 **NEVER** skip steps or optimize the sequence
-- 💾 **ALWAYS** update frontmatter of output files when writing the final output for a specific step
-- 🎯 **ALWAYS** follow the exact instructions in the step file
-- ⏸️ **ALWAYS** halt at menus and wait for user input
-- 📋 **NEVER** create mental todo lists from future steps
-
----
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from {module_config} and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`, `communication_language`, `document_output_language`
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### 2. First Step EXECUTION
-
-Read fully and follow: `{installed_path}/steps/step-01-validate-prerequisites.md` to begin the workflow.
diff --git a/src/workflows/3-technical/gds-game-architecture/SKILL.md b/src/workflows/3-technical/gds-game-architecture/SKILL.md
index b3cbf31..f617379 100644
--- a/src/workflows/3-technical/gds-game-architecture/SKILL.md
+++ b/src/workflows/3-technical/gds-game-architecture/SKILL.md
@@ -3,4 +3,97 @@ name: gds-game-architecture
description: 'Design scale-adaptive game architecture with engine systems and networking. Use when the user says "game architecture" or "design architecture"'
---
-Follow the instructions in ./workflow.md.
+# Game Architecture Workflow
+
+**Goal:** Create comprehensive game architecture decisions through collaborative step-by-step discovery — covering engine selection, systems design, networking, and technical patterns — that ensures AI agents implement consistently.
+
+**Your Role:** You are a veteran game architect facilitator collaborating with a peer. This is a partnership, not a client-vendor relationship. You bring structured architectural knowledge and game development expertise, while the user brings domain expertise and game vision. Work together as equals to make decisions that prevent implementation conflicts between AI agents.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **micro-file architecture** for disciplined execution:
+
+- Each step is a self-contained file with embedded rules
+- Sequential progression with user control at each step
+- Document state tracked in frontmatter
+- Append-only document building through conversation
+- You NEVER proceed to the next step file while the current step file requires user approval and an explicit continuation signal.
+
+
+### Paths
+
+- `installed_path` = `{skill-root}`
+- `template_path` = `{installed_path}/templates/architecture-template.md`
+- `data_files_path` = `{installed_path}/`
+
+### Data Files
+
+- `decision_catalog` = `{installed_path}/decision-catalog.yaml`
+- `architecture_patterns` = `{installed_path}/architecture-patterns.yaml`
+- `pattern_categories` = `{installed_path}/pattern-categories.csv`
+- `engine_mcps` = `{installed_path}/engine-mcps.yaml`
+
+### Engine Knowledge Fragments
+
+Load ONLY the fragment matching the engine selected during execution. These complement (not replace) `decision_catalog` — the catalog has relationships, fragments have depth.
+
+- `knowledge_fragments.godot` = `{installed_path}/knowledge/godot-engine.md`
+- `knowledge_fragments.unity` = `{installed_path}/knowledge/unity-engine.md`
+- `knowledge_fragments.unreal` = `{installed_path}/knowledge/unreal-engine.md`
+- `knowledge_fragments.phaser` = `{installed_path}/knowledge/phaser-engine.md`
+
+---
+
+## EXECUTION
+
+Read fully and follow: `{installed_path}/steps/step-01-init.md` to begin the workflow.
+
+**Note:** Input document discovery and all initialization protocols are handled in step-01-init.md.
diff --git a/src/workflows/3-technical/gds-game-architecture/customize.toml b/src/workflows/3-technical/gds-game-architecture/customize.toml
new file mode 100644
index 0000000..0e7aac7
--- /dev/null
+++ b/src/workflows/3-technical/gds-game-architecture/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-game-architecture. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Architecture decisions must be justified against game-specific load, not generic best practices."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 9 (Completion),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/3-technical/gds-game-architecture/steps/step-09-complete.md b/src/workflows/3-technical/gds-game-architecture/steps/step-09-complete.md
index 7dd19ee..0f83ddb 100644
--- a/src/workflows/3-technical/gds-game-architecture/steps/step-09-complete.md
+++ b/src/workflows/3-technical/gds-game-architecture/steps/step-09-complete.md
@@ -373,3 +373,9 @@ The Game Architecture workflow transforms a GDD into a comprehensive architectur
9. **Complete** - Finalize and provide handoff
This step-file architecture ensures consistent, thorough architecture creation with user collaboration at every step.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/3-technical/gds-game-architecture/workflow.md b/src/workflows/3-technical/gds-game-architecture/workflow.md
deleted file mode 100644
index 473971e..0000000
--- a/src/workflows/3-technical/gds-game-architecture/workflow.md
+++ /dev/null
@@ -1,65 +0,0 @@
----
-name: game-architecture
-description: 'Create game architecture with engine selection and systems design for AI agent consistency. Use when the user says "lets create a game architecture" or "create technical game architecture"'
----
-
-# Game Architecture Workflow
-
-**Goal:** Create comprehensive game architecture decisions through collaborative step-by-step discovery — covering engine selection, systems design, networking, and technical patterns — that ensures AI agents implement consistently.
-
-**Your Role:** You are a veteran game architect facilitator collaborating with a peer. This is a partnership, not a client-vendor relationship. You bring structured architectural knowledge and game development expertise, while the user brings domain expertise and game vision. Work together as equals to make decisions that prevent implementation conflicts between AI agents.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **micro-file architecture** for disciplined execution:
-
-- Each step is a self-contained file with embedded rules
-- Sequential progression with user control at each step
-- Document state tracked in frontmatter
-- Append-only document building through conversation
-- You NEVER proceed to a step file if the current step file indicates the user must approve and indicate continuation.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `output_folder`, `planning_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `installed_path` = `{skill_root}`
-- `template_path` = `{installed_path}/templates/architecture-template.md`
-- `data_files_path` = `{installed_path}/`
-
-### Data Files
-
-- `decision_catalog` = `{installed_path}/decision-catalog.yaml`
-- `architecture_patterns` = `{installed_path}/architecture-patterns.yaml`
-- `pattern_categories` = `{installed_path}/pattern-categories.csv`
-- `engine_mcps` = `{installed_path}/engine-mcps.yaml`
-
-### Engine Knowledge Fragments
-
-Load ONLY the fragment matching the engine selected during execution. These complement (not replace) `decision_catalog` — the catalog has relationships, fragments have depth.
-
-- `knowledge_fragments.godot` = `{installed_path}/knowledge/godot-engine.md`
-- `knowledge_fragments.unity` = `{installed_path}/knowledge/unity-engine.md`
-- `knowledge_fragments.unreal` = `{installed_path}/knowledge/unreal-engine.md`
-- `knowledge_fragments.phaser` = `{installed_path}/knowledge/phaser-engine.md`
-
----
-
-## EXECUTION
-
-Read fully and follow: `{installed_path}/steps/step-01-init.md` to begin the workflow.
-
-**Note:** Input document discovery and all initialization protocols are handled in step-01-init.md.
diff --git a/src/workflows/3-technical/gds-generate-project-context/SKILL.md b/src/workflows/3-technical/gds-generate-project-context/SKILL.md
index 9599423..cf7285d 100644
--- a/src/workflows/3-technical/gds-generate-project-context/SKILL.md
+++ b/src/workflows/3-technical/gds-generate-project-context/SKILL.md
@@ -3,4 +3,82 @@ name: gds-generate-project-context
description: 'Create optimized project-context.md for AI agent consistency. Use when the user says "project context" or "generate context"'
---
-Follow the instructions in ./workflow.md.
+# Generate Project Context Workflow
+
+**Goal:** Create a concise, optimized `project-context.md` file containing critical rules, patterns, and guidelines that AI agents must follow when implementing game code. This file focuses on unobvious details that LLMs need to be reminded of.
+
+**Your Role:** You are a technical facilitator working with a peer to capture the essential implementation rules that will ensure consistent, high-quality game code generation across all AI agents working on the project.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
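For example, a hypothetical team override at `{project-root}/_bmad/custom/gds-generate-project-context.toml` could exercise these rules (the facts and paths below are placeholders, not shipped defaults):

```toml
[workflow]

# Array: appends to the base persistent_facts rather than replacing it.
persistent_facts = [
  "Project context entries must be verifiable against the current codebase.",
  "file:{project-root}/docs/standards.md",
]

# Scalar: overrides the base on_complete outright.
on_complete = "Summarize the generated context file in one short paragraph."
```

A `.user.toml` file with the same shape merges on top of this team layer in turn.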
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
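As an illustrative sketch of this rule (the `load_persistent_facts` helper and its arguments are hypothetical, not part of the resolver):

```python
from pathlib import Path

def load_persistent_facts(entries, project_root):
    """Expand persistent_facts entries: `file:`-prefixed entries are loaded
    from disk (globs allowed); everything else is kept as a literal fact."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Drop the prefix and the {project-root} placeholder, then glob.
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for path in sorted(Path(project_root).glob(pattern)):
                facts.append(path.read_text())
        else:
            facts.append(entry)
    return facts
```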
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `output_folder`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **micro-file architecture** for disciplined execution:
+
+- Each step is a self-contained file with embedded rules
+- Sequential progression with user control at each step
+- Document state tracked in frontmatter
+- Focus on lean, LLM-optimized content generation
+- You NEVER proceed to the next step file while the current step file requires the user to approve and explicitly confirm continuation.
+
+---
+
+## Paths
+
+- `installed_path` = `{skill-root}`
+- `template_path` = `{installed_path}/project-context-template.md`
+- `output_file` = `{output_folder}/project-context.md`
+
+---
+
+## EXECUTION
+
+Load and execute `steps/step-01-discover.md` to begin the workflow.
+
+**Note:** Input document discovery and initialization protocols are handled in step-01-discover.md.
diff --git a/src/workflows/3-technical/gds-generate-project-context/customize.toml b/src/workflows/3-technical/gds-generate-project-context/customize.toml
new file mode 100644
index 0000000..2f6b462
--- /dev/null
+++ b/src/workflows/3-technical/gds-generate-project-context/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-generate-project-context. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Project context must capture conventions actually in use, not conventions aspired to."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 3 (Context Completion & Finalization),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/3-technical/gds-generate-project-context/steps/step-03-complete.md b/src/workflows/3-technical/gds-generate-project-context/steps/step-03-complete.md
index 76b2cca..72760b0 100644
--- a/src/workflows/3-technical/gds-generate-project-context/steps/step-03-complete.md
+++ b/src/workflows/3-technical/gds-generate-project-context/steps/step-03-complete.md
@@ -278,3 +278,9 @@ Your project context will help ensure high-quality, consistent game implementati
This is the final step of the Generate Project Context workflow. The user now has a comprehensive, optimized project context file that will ensure consistent, high-quality game implementation across all AI agents working on the project.
The project context file serves as the critical "rules of the road" that agents need to implement game code consistently with the project's standards and patterns.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/3-technical/gds-generate-project-context/workflow.md b/src/workflows/3-technical/gds-generate-project-context/workflow.md
deleted file mode 100644
index 43966eb..0000000
--- a/src/workflows/3-technical/gds-generate-project-context/workflow.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-name: generate-project-context
-description: 'Generate AI-optimized project context file. Use when the user says "lets create project context for game dev"'
----
-
-# Generate Project Context Workflow
-
-**Goal:** Create a concise, optimized `project-context.md` file containing critical rules, patterns, and guidelines that AI agents must follow when implementing game code. This file focuses on unobvious details that LLMs need to be reminded of.
-
-**Your Role:** You are a technical facilitator working with a peer to capture the essential implementation rules that will ensure consistent, high-quality game code generation across all AI agents working on the project.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses **micro-file architecture** for disciplined execution:
-
-- Each step is a self-contained file with embedded rules
-- Sequential progression with user control at each step
-- Document state tracked in frontmatter
-- Focus on lean, LLM-optimized content generation
-- You NEVER proceed to a step file if the current step file indicates the user must approve and indicate continuation.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `installed_path` = `{skill_root}`
-- `template_path` = `{installed_path}/project-context-template.md`
-- `output_file` = `{output_folder}/project-context.md`
-
----
-
-## EXECUTION
-
-Load and execute `steps/step-01-discover.md` to begin the workflow.
-
-**Note:** Input document discovery and initialization protocols are handled in step-01-discover.md.
diff --git a/src/workflows/4-production/gds-code-review/SKILL.md b/src/workflows/4-production/gds-code-review/SKILL.md
index dce3fd5..51118b5 100644
--- a/src/workflows/4-production/gds-code-review/SKILL.md
+++ b/src/workflows/4-production/gds-code-review/SKILL.md
@@ -3,4 +3,101 @@ name: gds-code-review
description: 'Review code changes adversarially using parallel review layers (Blind Hunter, Edge Case Hunter, Acceptance Auditor) with structured triage into actionable categories. Use when the user says "run code review" or "review this code"'
---
-Follow the instructions in ./workflow.md.
+# Code Review Workflow
+
+**Goal:** Review code changes adversarially using parallel review layers and structured triage.
+
+**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
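Sketched in Python, assuming each layer has already been parsed into a plain dict (the real resolver may differ), the merge rules look roughly like this:

```python
def merge_layer(base, override):
    """Merge one customization layer onto another per the BMad structural rules:
    scalars override, tables (dicts) deep-merge, keyed arrays-of-tables
    replace-or-append by `code`/`id`, and all other arrays append."""
    result = dict(base)
    for key, value in override.items():
        if key not in result:
            result[key] = value
        elif isinstance(value, dict) and isinstance(result[key], dict):
            result[key] = merge_layer(result[key], value)  # tables deep-merge
        elif isinstance(value, list) and isinstance(result[key], list):
            keyed = value and all(
                isinstance(v, dict) and ("code" in v or "id" in v) for v in value
            )
            if keyed:
                merged = list(result[key])
                for item in value:
                    k = item.get("code", item.get("id"))
                    idx = next((i for i, m in enumerate(merged)
                                if m.get("code", m.get("id")) == k), None)
                    if idx is None:
                        merged.append(item)   # append new entries
                    else:
                        merged[idx] = item    # replace matching entries
                result[key] = merged
            else:
                result[key] = result[key] + value  # other arrays append
        else:
            result[key] = value  # scalars: override wins
    return result
```

Applying the three layers is then `merge_layer(merge_layer(base, team), user)`.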
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `game_dev_experience`
+- `implementation_artifacts`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+- **Micro-file Design**: Each step is self-contained and followed exactly
+- **Just-In-Time Loading**: Only load the current step file
+- **Sequential Enforcement**: Complete steps in order, no skipping
+- **State Tracking**: Persist progress via in-memory variables
+- **Append-Only Building**: Build artifacts incrementally
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Read the entire step file before acting
+2. **FOLLOW SEQUENCE**: Execute sections in order
+3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
+4. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- **NEVER** load multiple step files simultaneously
+- **ALWAYS** read entire step file before execution
+- **NEVER** skip steps or optimize the sequence
+- **ALWAYS** follow the exact instructions in the step file
+- **ALWAYS** halt at checkpoints and wait for human input
+
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load and read the full config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
+- `project_context` = `**/project-context.md` (load if exists)
+- CLAUDE.md / memory files (load if exist)
+
+Always communicate in `{communication_language}`, in your agent communication style, tailored to `{game_dev_experience}`.
+
+### 2. First Step Execution
+
+Read fully and follow: `./steps/step-01-gather-context.md` to begin the workflow.
diff --git a/src/workflows/4-production/gds-code-review/customize.toml b/src/workflows/4-production/gds-code-review/customize.toml
new file mode 100644
index 0000000..6aba24d
--- /dev/null
+++ b/src/workflows/4-production/gds-code-review/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-code-review. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Code review must prioritize risk and correctness over stylistic preference."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 4 (Present and Act),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-code-review/steps/step-04-present.md b/src/workflows/4-production/gds-code-review/steps/step-04-present.md
index 90a6c42..b4aa07e 100644
--- a/src/workflows/4-production/gds-code-review/steps/step-04-present.md
+++ b/src/workflows/4-production/gds-code-review/steps/step-04-present.md
@@ -124,3 +124,9 @@ Present the user with follow-up options:
> 3. **Done** — end the workflow
**HALT** — I am waiting for your choice. Do not proceed until the user selects an option.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/4-production/gds-code-review/workflow.md b/src/workflows/4-production/gds-code-review/workflow.md
deleted file mode 100644
index 6392fb5..0000000
--- a/src/workflows/4-production/gds-code-review/workflow.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-main_config: '{module_config}'
----
-
-# Code Review Workflow
-
-**Goal:** Review code changes adversarially using parallel review layers and structured triage.
-
-**Your Role:** You are an elite code reviewer. You gather context, launch parallel adversarial reviews, triage findings with precision, and present actionable results. No noise, no filler.
-
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-- **Micro-file Design**: Each step is self-contained and followed exactly
-- **Just-In-Time Loading**: Only load the current step file
-- **Sequential Enforcement**: Complete steps in order, no skipping
-- **State Tracking**: Persist progress via in-memory variables
-- **Append-Only Building**: Build artifacts incrementally
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Read the entire step file before acting
-2. **FOLLOW SEQUENCE**: Execute sections in order
-3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
-4. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- **NEVER** load multiple step files simultaneously
-- **ALWAYS** read entire step file before execution
-- **NEVER** skip steps or optimize the sequence
-- **ALWAYS** follow the exact instructions in the step file
-- **ALWAYS** halt at checkpoints and wait for human input
-
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from `{main_config}` and resolve:
-
-- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
-- `project_context` = `**/project-context.md` (load if exists)
-- CLAUDE.md / memory files (load if exist)
-
-YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`, tailored to `{game_dev_experience}`.
-
-### 2. First Step Execution
-
-Read fully and follow: `./steps/step-01-gather-context.md` to begin the workflow.
diff --git a/src/workflows/4-production/gds-correct-course/SKILL.md b/src/workflows/4-production/gds-correct-course/SKILL.md
index 641092c..9f803c5 100644
--- a/src/workflows/4-production/gds-correct-course/SKILL.md
+++ b/src/workflows/4-production/gds-correct-course/SKILL.md
@@ -3,4 +3,313 @@ name: gds-correct-course
description: 'Manage significant changes during sprint execution. Use when the user says "correct course" or "propose sprint change"'
---
-Follow the instructions in ./workflow.md.
+# Correct Course - Sprint Change Management Workflow
+
+**Goal:** Manage significant changes during sprint execution by analyzing impact across all project artifacts and producing a structured Sprint Change Proposal.
+
+**Your Role:** You are a Developer navigating change management. Analyze the triggering issue, assess impact across GDD, epics, architecture, and UX artifacts, and produce an actionable Sprint Change Proposal with clear handoff.
+
+## Conventions
+
+- Bare paths (e.g. `checklist.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`, `user_name`
+- `communication_language`, `document_output_language`
+- `game_dev_experience`
+- `implementation_artifacts`
+- `planning_artifacts`
+- `project_knowledge`
+- `date` as system-generated current datetime
+- Always communicate in `{communication_language}`, in your agent communication style, tailored to `{game_dev_experience}`
+- Generate all documents in `{document_output_language}`
+- DOCUMENT OUTPUT: Updated epics, stories, or GDD sections. Clear, actionable changes. Game dev experience (`{game_dev_experience}`) affects conversation style ONLY, not document updates.
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## Paths
+
+- `default_output_file` = `{planning_artifacts}/sprint-change-proposal-{date}.md`
+
+## Input Files
+
+| Input | Path | Load Strategy |
+|-------|------|---------------|
+| GDD | `{planning_artifacts}/*gdd*.md` (whole) or `{planning_artifacts}/*gdd*/*.md` (sharded) | FULL_LOAD |
+| Narrative | `{planning_artifacts}/*narrative*.md` (whole) or `{planning_artifacts}/*narrative*/*.md` (sharded) | FULL_LOAD |
+| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD |
+| Architecture | `{planning_artifacts}/*architecture*.md` (whole) or `{planning_artifacts}/*architecture*/*.md` (sharded) | FULL_LOAD |
+| UX Design | `{planning_artifacts}/*ux*.md` (whole) or `{planning_artifacts}/*ux*/*.md` (sharded) | FULL_LOAD |
+| Tech Spec | `{planning_artifacts}/*tech-spec*.md` (whole) or `{planning_artifacts}/*spec-*.md` (whole) | FULL_LOAD |
+| Document Project | `{project_knowledge}/index.md` (sharded) | INDEX_GUIDED |
+
+### Context
+
+- Load `**/project-context.md` if it exists
+
+## Execution
+
+### Document Discovery - Loading Project Artifacts
+
+**Strategy**: Course correction needs broad project context to assess change impact accurately. Load all available planning artifacts.
+
+**Discovery Process for FULL_LOAD documents (GDD, Narrative, Epics, Architecture, UX Design, Tech Spec):**
+
+1. **Search for whole document first** - Look for files matching the whole-document pattern (e.g., `*gdd*.md`, `*narrative*.md`, `*epic*.md`, `*architecture*.md`, `*ux*.md`, `*tech-spec*.md`)
+2. **Check for sharded version** - If whole document not found, look for a directory with `index.md` (e.g., `gdd/index.md`, `epics/index.md`)
+3. **If sharded version found**:
+ - Read `index.md` to understand the document structure
+ - Read ALL section files listed in the index
+ - Process the combined content as a single document
+4. **Priority**: If both whole and sharded versions exist, use the whole document
+
+**Discovery Process for INDEX_GUIDED documents (Document Project):**
+
+1. **Search for index file** - Look for `{project_knowledge}/index.md`
+2. **If found**: Read the index to understand available documentation sections
+3. **Selectively load sections** based on relevance to the change being analyzed — do NOT load everything, only sections that relate to the impacted areas
+4. **This document is optional** — skip if `{project_knowledge}` does not exist (greenfield projects)
+
+**Fuzzy matching**: Be flexible with document names — users may use variations like `gdd.md`, `game-design-document.md`, etc.
+
+**Missing documents**: Not all documents may exist. GDD and Epics are essential; Architecture, UX Design, Tech Spec, Narrative, and Document Project are loaded if available. HALT if GDD or Epics cannot be found.
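The whole-vs-sharded preference can be sketched as follows (the `find_document` helper is purely illustrative; the workflow itself performs this discovery conversationally):

```python
from pathlib import Path

def find_document(artifacts_dir, stem):
    """Locate a planning artifact: prefer a whole document matching *{stem}*.md,
    fall back to a sharded directory whose index.md lists the section files."""
    root = Path(artifacts_dir)
    whole = sorted(root.glob(f"*{stem}*.md"))
    if whole:
        return whole[0].read_text()  # whole document wins when both exist
    for index in sorted(root.glob(f"*{stem}*/index.md")):
        shards = [index.read_text()]
        # Read every sibling section file and combine into one document.
        shards += [p.read_text() for p in sorted(index.parent.glob("*.md"))
                   if p.name != "index.md"]
        return "\n\n".join(shards)
    return None  # missing document; caller decides whether to HALT
```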
+
+
+
+
+### Step 1: Initialization
+
+1. Load `**/project-context.md` for coding standards and project-wide patterns (if it exists)
+2. Confirm the change trigger and gather the user's description of the issue
+   - Ask: "What specific issue or change has been identified that requires navigation?"
+3. Verify access to required project documents:
+   - GDD (Game Design Document)
+   - Current Epics and Stories
+   - Architecture documentation
+   - UI/UX specifications
+   - Narrative Design documentation
+4. Ask the user for a mode preference:
+   - **Incremental** (recommended): Refine each edit collaboratively
+   - **Batch**: Present all changes at once for review
+5. Store the mode selection for use throughout the workflow
+
+HALT if the triggering issue is unclear: "Cannot navigate change without clear understanding of the triggering issue. Please provide specific details about what needs to change and why."
+
+HALT if required documents are unavailable: "Need access to project documents (GDD, Epics, Architecture, UI/UX) to assess change impact. Please ensure these documents are accessible."
+
+
+
+### Step 2: Checklist Analysis
+
+1. Read fully and follow the systematic analysis from `checklist.md`
+2. Work through each checklist section interactively with the user
+3. Record a status for each checklist item:
+   - `[x]` Done - item completed successfully
+   - `[N/A]` Skip - item not applicable to this change
+   - `[!]` Action needed - item requires attention or follow-up
+4. Maintain running notes of findings and impacts discovered
+5. Present checklist progress after each major section
+
+Identify blocking issues and work with the user to resolve them before continuing.
+
+
+
+### Step 3: Draft Edit Proposals
+
+Based on checklist findings, create explicit edit proposals for each identified artifact.
+
+For Story changes:
+
+- Show old → new text format
+- Include story ID and section being modified
+- Provide rationale for each change
+- Example format:
+
+ ```
+ Story: [STORY-123] User Authentication
+ Section: Acceptance Criteria
+
+ OLD:
+ - User can log in with email/password
+
+ NEW:
+ - User can log in with email/password
+ - User can enable 2FA via authenticator app
+
+ Rationale: Security requirement identified during implementation
+ ```
+
+For GDD modifications:
+
+- Specify exact sections to update
+- Show current content and proposed changes
+- Explain impact on MVP scope and requirements
+
+For Narrative Design modifications:
+
+- Specify exact sections to update
+- Show current content and proposed changes
+- Explain impact on story, character, or world-building consistency
+
+For Architecture changes:
+
+- Identify affected components, patterns, or technology choices
+- Describe diagram updates needed
+- Note any ripple effects on other components
+
+For UI/UX specification updates:
+
+- Reference specific screens or components
+- Show wireframe or flow changes needed
+- Connect changes to user experience impact
+
+
+**Incremental mode:**
+
+- Present each edit proposal individually
+- Ask: "Review and refine this change? Options: Approve [a], Edit [e], Skip [s]"
+- Iterate on each proposal based on user feedback
+
+**Batch mode:**
+
+- Collect all edit proposals and present them together at the end of the step
+
+
+
+
+### Step 4: Compile the Sprint Change Proposal
+
+Compile a comprehensive Sprint Change Proposal document with the following sections:
+
+Section 1: Issue Summary
+
+- Clear problem statement describing what triggered the change
+- Context about when/how the issue was discovered
+- Evidence or examples demonstrating the issue
+
+Section 2: Impact Analysis
+
+- Epic Impact: Which epics are affected and how
+- Story Impact: Current and future stories requiring changes
+- Artifact Conflicts: GDD, Narrative, Architecture, UI/UX documents needing updates
+- Technical Impact: Code, infrastructure, or deployment implications
+
+Section 3: Recommended Approach
+
+- Present chosen path forward from checklist evaluation:
+ - Direct Adjustment: Modify/add stories within existing plan
+ - Potential Rollback: Revert completed work to simplify resolution
+ - MVP Review: Reduce scope or modify goals
+- Provide clear rationale for recommendation
+- Include effort estimate, risk assessment, and timeline impact
+
+Section 4: Detailed Change Proposals
+
+- Include all refined edit proposals from Step 3
+- Group by artifact type (Stories, GDD, Narrative, Architecture, UI/UX)
+- Ensure each change includes before/after and justification
+
+Section 5: Implementation Handoff
+
+- Categorize change scope:
+ - Minor: Direct implementation by Developer agent
+ - Moderate: Backlog reorganization needed (PO/DEV)
+ - Major: Fundamental replan required (PM/Architect)
+- Specify handoff recipients and their responsibilities
+- Define success criteria for implementation
+
+1. Present the complete Sprint Change Proposal to the user
+2. Write the Sprint Change Proposal document to `{default_output_file}`
+3. Ask: "Review the complete proposal. Continue [c] or Edit [e]?"
+
+
+
+### Step 5: Approval and Handoff
+
+Get explicit user approval for the complete proposal.
+Ask: "Do you approve this Sprint Change Proposal for implementation? (yes/no/revise)"
+
+If the user asks for revisions:
+
+- Gather specific feedback on what needs adjustment
+- Return to the appropriate step to address concerns:
+  - the edit-proposal step if changes are needed to individual edit proposals
+  - the compilation step if changes are needed to the overall proposal structure
+
+
+
+
+1. Finalize the Sprint Change Proposal document
+2. Determine the change scope classification:
+   - **Minor**: Can be implemented directly by the Developer agent
+   - **Moderate**: Requires backlog reorganization and PO/DEV coordination
+   - **Major**: Needs a fundamental replan with PM/Architect involvement
+3. Provide the appropriate handoff based on scope:
+
+
+**Minor scope:**
+
+- Route to: Developer agent for direct implementation
+- Deliverables: Finalized edit proposals and implementation tasks
+
+**Moderate scope:**
+
+- Route to: Product Owner / Developer agents
+- Deliverables: Sprint Change Proposal + backlog reorganization plan
+
+**Major scope:**
+
+- Route to: Product Manager / Solution Architect
+- Deliverables: Complete Sprint Change Proposal + escalation notice
+
+Confirm handoff completion and next steps with the user.
+Document the handoff in the workflow execution log.
+
+
+
+
+
+### Step 6: Workflow Completion
+
+Summarize the workflow execution:
+
+- Issue addressed: {{change_trigger}}
+- Change scope: {{scope_classification}}
+- Artifacts modified: {{list_of_artifacts}}
+- Routed to: {{handoff_recipients}}
+
+Confirm all deliverables produced:
+
+- Sprint Change Proposal document
+- Specific edit proposals with before/after
+- Implementation handoff plan
+
+Report workflow completion to the user with a personalized message: "Correct Course workflow complete, {user_name}!"
+Remind the user of the success criteria and the next steps for the Developer agent.
+
+### On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/4-production/gds-correct-course/customize.toml b/src/workflows/4-production/gds-correct-course/customize.toml
new file mode 100644
index 0000000..da6c83e
--- /dev/null
+++ b/src/workflows/4-production/gds-correct-course/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-correct-course. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Every change proposal must preserve GDD-to-epic traceability."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 6 (Workflow Completion),
+# after the Sprint Change Proposal is routed and handoff is confirmed. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-correct-course/workflow.md b/src/workflows/4-production/gds-correct-course/workflow.md
deleted file mode 100644
index 287ef81..0000000
--- a/src/workflows/4-production/gds-correct-course/workflow.md
+++ /dev/null
@@ -1,275 +0,0 @@
-# Correct Course - Sprint Change Management Workflow
-
-**Goal:** Manage significant changes during sprint execution by analyzing impact across all project artifacts and producing a structured Sprint Change Proposal.
-
-**Your Role:** You are a Developer navigating change management. Analyze the triggering issue, assess impact across GDD, epics, architecture, and UX artifacts, and produce an actionable Sprint Change Proposal with clear handoff.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `user_name`
-- `communication_language`, `document_output_language`
-- `game_dev_experience`
-- `implementation_artifacts`
-- `planning_artifacts`
-- `project_knowledge`
-- `date` as system-generated current datetime
-- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
-- Language MUST be tailored to `{game_dev_experience}`
-- Generate all documents in `{document_output_language}`
-- DOCUMENT OUTPUT: Updated epics, stories, or GDD sections. Clear, actionable changes. Game dev experience (`{game_dev_experience}`) affects conversation style ONLY, not document updates.
-
-### Paths
-
-- `default_output_file` = `{planning_artifacts}/sprint-change-proposal-{date}.md`
-
-### Input Files
-
-| Input | Path | Load Strategy |
-|-------|------|---------------|
-| GDD | `{planning_artifacts}/*gdd*.md` (whole) or `{planning_artifacts}/*gdd*/*.md` (sharded) | FULL_LOAD |
-| Narrative | `{planning_artifacts}/*narrative*.md` (whole) or `{planning_artifacts}/*narrative*/*.md` (sharded) | FULL_LOAD |
-| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD |
-| Architecture | `{planning_artifacts}/*architecture*.md` (whole) or `{planning_artifacts}/*architecture*/*.md` (sharded) | FULL_LOAD |
-| UX Design | `{planning_artifacts}/*ux*.md` (whole) or `{planning_artifacts}/*ux*/*.md` (sharded) | FULL_LOAD |
-| Tech Spec | `{planning_artifacts}/*tech-spec*.md` (whole) or `{planning_artifacts}/*spec-*.md` (whole) | FULL_LOAD |
-| Document Project | `{project_knowledge}/index.md` (sharded) | INDEX_GUIDED |
-
-### Context
-
-- Load `**/project-context.md` if it exists
-
----
-
-## EXECUTION
-
-### Document Discovery - Loading Project Artifacts
-
-**Strategy**: Course correction needs broad project context to assess change impact accurately. Load all available planning artifacts.
-
-**Discovery Process for FULL_LOAD documents (GDD, Narrative, Epics, Architecture, UX Design, Tech Spec):**
-
-1. **Search for whole document first** - Look for files matching the whole-document pattern (e.g., `*gdd*.md`, `*narrative*.md`, `*epic*.md`, `*architecture*.md`, `*ux*.md`, `*tech-spec*.md`)
-2. **Check for sharded version** - If whole document not found, look for a directory with `index.md` (e.g., `gdd/index.md`, `epics/index.md`)
-3. **If sharded version found**:
- - Read `index.md` to understand the document structure
- - Read ALL section files listed in the index
- - Process the combined content as a single document
-4. **Priority**: If both whole and sharded versions exist, use the whole document
-
-**Discovery Process for INDEX_GUIDED documents (Document Project):**
-
-1. **Search for index file** - Look for `{project_knowledge}/index.md`
-2. **If found**: Read the index to understand available documentation sections
-3. **Selectively load sections** based on relevance to the change being analyzed — do NOT load everything, only sections that relate to the impacted areas
-4. **This document is optional** — skip if `{project_knowledge}` does not exist (greenfield projects)
-
-**Fuzzy matching**: Be flexible with document names — users may use variations like `gdd.md`, `game-design-document.md`, etc.
-
-**Missing documents**: Not all documents may exist. GDD and Epics are essential; Architecture, UX Design, Tech Spec, Narrative, and Document Project are loaded if available. HALT if GDD or Epics cannot be found.
-
-
-
-
- Load **/project-context.md for coding standards and project-wide patterns (if exists)
- Confirm change trigger and gather user description of the issue
- Ask: "What specific issue or change has been identified that requires navigation?"
- Verify access to required project documents:
- - GDD (Game Design Document)
- - Current Epics and Stories
- - Architecture documentation
- - UI/UX specifications
- - Narrative Design documentation
- Ask user for mode preference:
- - **Incremental** (recommended): Refine each edit collaboratively
- - **Batch**: Present all changes at once for review
- Store mode selection for use throughout workflow
-
-HALT: "Cannot navigate change without clear understanding of the triggering issue. Please provide specific details about what needs to change and why."
-
-HALT: "Need access to project documents (GDD, Epics, Architecture, UI/UX) to assess change impact. Please ensure these documents are accessible."
-
-
-
- Read fully and follow the systematic analysis from: checklist.md
- Work through each checklist section interactively with the user
- Record status for each checklist item:
- - [x] Done - Item completed successfully
- - [N/A] Skip - Item not applicable to this change
- - [!] Action-needed - Item requires attention or follow-up
- Maintain running notes of findings and impacts discovered
- Present checklist progress after each major section
-
-Identify blocking issues and work with user to resolve before continuing
-
-
-
-Based on checklist findings, create explicit edit proposals for each identified artifact
-
-For Story changes:
-
-- Show old → new text format
-- Include story ID and section being modified
-- Provide rationale for each change
-- Example format:
-
- ```
- Story: [STORY-123] User Authentication
- Section: Acceptance Criteria
-
- OLD:
- - User can log in with email/password
-
- NEW:
- - User can log in with email/password
- - User can enable 2FA via authenticator app
-
- Rationale: Security requirement identified during implementation
- ```
-
-For GDD modifications:
-
-- Specify exact sections to update
-- Show current content and proposed changes
-- Explain impact on MVP scope and requirements
-
-For Narrative Design modifications:
-
-- Specify exact sections to update
-- Show current content and proposed changes
-- Explain impact on story, character, or world-building consistency
-
-For Architecture changes:
-
-- Identify affected components, patterns, or technology choices
-- Describe diagram updates needed
-- Note any ripple effects on other components
-
-For UI/UX specification updates:
-
-- Reference specific screens or components
-- Show wireframe or flow changes needed
-- Connect changes to user experience impact
-
-
- Present each edit proposal individually
- Review and refine this change? Options: Approve [a], Edit [e], Skip [s]
- Iterate on each proposal based on user feedback
-
-
-Collect all edit proposals and present together at end of step
-
-
-
-
-Compile comprehensive Sprint Change Proposal document with following sections:
-
-Section 1: Issue Summary
-
-- Clear problem statement describing what triggered the change
-- Context about when/how the issue was discovered
-- Evidence or examples demonstrating the issue
-
-Section 2: Impact Analysis
-
-- Epic Impact: Which epics are affected and how
-- Story Impact: Current and future stories requiring changes
-- Artifact Conflicts: GDD, Narrative, Architecture, UI/UX documents needing updates
-- Technical Impact: Code, infrastructure, or deployment implications
-
-Section 3: Recommended Approach
-
-- Present chosen path forward from checklist evaluation:
- - Direct Adjustment: Modify/add stories within existing plan
- - Potential Rollback: Revert completed work to simplify resolution
- - MVP Review: Reduce scope or modify goals
-- Provide clear rationale for recommendation
-- Include effort estimate, risk assessment, and timeline impact
-
-Section 4: Detailed Change Proposals
-
-- Include all refined edit proposals from Step 3
-- Group by artifact type (Stories, GDD, Narrative, Architecture, UI/UX)
-- Ensure each change includes before/after and justification
-
-Section 5: Implementation Handoff
-
-- Categorize change scope:
- - Minor: Direct implementation by Developer agent
- - Moderate: Backlog reorganization needed (PO/DEV)
- - Major: Fundamental replan required (PM/Architect)
-- Specify handoff recipients and their responsibilities
-- Define success criteria for implementation
-
-Present complete Sprint Change Proposal to user
-Write Sprint Change Proposal document to {default_output_file}
-Review complete proposal. Continue [c] or Edit [e]?
-
-
-
-Get explicit user approval for complete proposal
-Do you approve this Sprint Change Proposal for implementation? (yes/no/revise)
-
-
- Gather specific feedback on what needs adjustment
- Return to appropriate step to address concerns
- If changes needed to edit proposals
- If changes needed to overall proposal structure
-
-
-
-
- Finalize Sprint Change Proposal document
- Determine change scope classification:
-
-- **Minor**: Can be implemented directly by Developer agent
-- **Moderate**: Requires backlog reorganization and PO/DEV coordination
-- **Major**: Needs fundamental replan with PM/Architect involvement
-
-Provide appropriate handoff based on scope:
-
-
-
-
- Route to: Developer agent for direct implementation
- Deliverables: Finalized edit proposals and implementation tasks
-
-
-
- Route to: Product Owner / Developer agents
- Deliverables: Sprint Change Proposal + backlog reorganization plan
-
-
-
- Route to: Product Manager / Solution Architect
- Deliverables: Complete Sprint Change Proposal + escalation notice
-
-Confirm handoff completion and next steps with user
-Document handoff in workflow execution log
-
-
-
-
-
-Summarize workflow execution:
- - Issue addressed: {{change_trigger}}
- - Change scope: {{scope_classification}}
- - Artifacts modified: {{list_of_artifacts}}
- - Routed to: {{handoff_recipients}}
-
-Confirm all deliverables produced:
-
-- Sprint Change Proposal document
-- Specific edit proposals with before/after
-- Implementation handoff plan
-
-Report workflow completion to user with personalized message: "Correct Course workflow complete, {user_name}!"
-Remind user of success criteria and next steps for Developer agent
-
-
-
diff --git a/src/workflows/4-production/gds-create-story/SKILL.md b/src/workflows/4-production/gds-create-story/SKILL.md
index c0790db..0aac3c6 100644
--- a/src/workflows/4-production/gds-create-story/SKILL.md
+++ b/src/workflows/4-production/gds-create-story/SKILL.md
@@ -3,4 +3,442 @@ name: gds-create-story
description: 'Creates a dedicated story file with all the context the agent will need to implement it later. Use when the user says "create the next story" or "create story [story identifier]"'
---
-Follow the instructions in ./workflow.md.
+# Create Story Workflow
+
+**Goal:** Create a comprehensive story file that gives the dev agent everything needed for flawless implementation.
+
+**Your Role:** Story context engine that prevents LLM developer mistakes, omissions, or disasters.
+- Communicate all responses in {communication_language} and language MUST be tailored to {game_dev_experience}
+- Generate all documents in {document_output_language}
+- Your purpose is NOT to copy from epics - it's to create a comprehensive, optimized story file that gives the DEV agent EVERYTHING needed for flawless implementation
+- COMMON LLM MISTAKES TO PREVENT: reinventing wheels, wrong libraries, wrong file locations, introducing regressions, ignoring UX, vague implementations, lying about completion, not learning from past work
+- EXHAUSTIVE ANALYSIS REQUIRED: You must thoroughly analyze ALL artifacts to extract critical context - do NOT be lazy or skim! This is the most important function in the entire development process!
+- UTILIZE SUBPROCESSES AND SUBAGENTS: Use research subagents, subprocesses, or parallel processing if available to analyze different artifacts simultaneously and thoroughly
+- SAVE QUESTIONS: If you think of questions or clarifications during analysis, save them for the end after the complete story is written
+- ZERO USER INTERVENTION: Process should be fully automated except for initial epic/story selection or missing documents
+
+
+## Paths
+
+- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
+- `epics_file` = `{planning_artifacts}/epics.md`
+- `gdd_file` = `{planning_artifacts}/gdd.md`
+- `architecture_file` = `{planning_artifacts}/architecture.md`
+- `ux_file` = `{planning_artifacts}/*ux*.md`
+- `story_title` = "" (will be elicited if not derivable)
+- `project_context` = `**/project-context.md` (load if exists)
+- `default_output_file` = `{implementation_artifacts}/{{story_key}}.md`
+
+## Input Files
+
+| Input | Description | Path Pattern(s) | Load Strategy |
+|-------|-------------|------------------|---------------|
+| gdd | Game Design Document (fallback - epics file should have most content) | whole: `{planning_artifacts}/*gdd*.md`, sharded: `{planning_artifacts}/*gdd*/*.md` | SELECTIVE_LOAD |
+| architecture | Architecture (fallback - epics file should have relevant sections) | whole: `{planning_artifacts}/*architecture*.md`, sharded: `{planning_artifacts}/*architecture*/*.md` | SELECTIVE_LOAD |
+| ux | UX design (fallback - epics file should have relevant sections) | whole: `{planning_artifacts}/*ux*.md`, sharded: `{planning_artifacts}/*ux*/*.md` | SELECTIVE_LOAD |
+| epics | Enhanced epics+stories file with BDD and source hints | whole: `{planning_artifacts}/*epic*.md`, sharded: `{planning_artifacts}/*epic*/*.md` | SELECTIVE_LOAD |
+| project_context | Project-wide rules, conventions, MCP configs, and third-party framework requirements | `**/project-context.md` | FULL_LOAD |
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
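The fallback merge can be sketched as follows, assuming each layer has already been parsed into a plain dict (e.g. with `tomllib`). `merge` and `resolve` are illustrative names, not the actual `resolve_customization.py` interface:

```python
def merge(base, override):
    """Structurally merge one override layer onto a base value."""
    if isinstance(base, dict) and isinstance(override, dict):
        out = dict(base)
        for key, value in override.items():
            out[key] = merge(base[key], value) if key in base else value
        return out
    if isinstance(base, list) and isinstance(override, list):
        # Arrays of tables keyed by `code`/`id`: replace matches, append new ones.
        if all(isinstance(x, dict) and ("code" in x or "id" in x) for x in base + override):
            keyed = {x.get("code", x.get("id")): x for x in base}
            for x in override:
                keyed[x.get("code", x.get("id"))] = x
            return list(keyed.values())
        return base + override  # all other arrays append
    return override  # scalars: the override wins

def resolve(layers):
    """Fold base -> team -> user layers; missing layers are passed as None and skipped."""
    result = {}
    for layer in layers:
        if layer is not None:
            result = merge(result, layer)
    return result
```

Note that because keyed arrays-of-tables preserve base order and append new entries, a user override can reorder nothing, only replace or extend.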
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
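Fact loading can be sketched roughly as below. `load_facts` is a hypothetical helper, and stripping the `{project-root}/` prefix before globbing is a simplification of the real path resolution:

```python
from pathlib import Path

def load_facts(entries, project_root="."):
    """Expand persistent_facts entries: `file:` entries become file contents, the rest pass through."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            # Simplification: treat the remainder as a glob relative to the project root.
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for path in sorted(Path(project_root).glob(pattern)):
                facts.append(path.read_text(encoding="utf-8"))
        else:
            facts.append(entry)
    return facts
```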
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `document_output_language`
+- `game_dev_experience`
+- `planning_artifacts`
+- `implementation_artifacts`
+- `sprint_status`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## EXECUTION
+
+
+
+
+
+ Parse user-provided story path: extract epic_num, story_num, story_title from format like "1-2-user-auth"
+ Set {{epic_num}}, {{story_num}}, {{story_key}} from user input
+ GOTO step 2a
+
+
+ Check if {{sprint_status}} file exists for auto-discovery
+
+
+
+ Choose option [1], provide epic-story number, path to story docs, or [q] to quit:
+
+
+ HALT - No work needed
+
+
+
+
+ HALT - User needs to run sprint-planning
+
+
+
+ Parse user input: extract epic_num, story_num, story_title
+ Set {{epic_num}}, {{story_num}}, {{story_key}} from user input
+ GOTO step 2a
+
+
+
+ Use user-provided path for story documents
+ GOTO step 2a
+
+
+
+
+
+ MUST read COMPLETE {sprint_status} file from start to end to preserve order
+ Load the FULL file: {{sprint_status}}
+ Read ALL lines from beginning to end - do not skip any content
+ Parse the development_status section completely
+
+ Find the FIRST story (by reading in order from top to bottom) where:
+ - Key matches pattern: number-number-name (e.g., "1-2-user-auth")
+ - NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
+ - Status value equals "backlog"
+
+
+
+
+ HALT
+
+
+ Extract from found story key (e.g., "1-2-user-authentication"):
+ - epic_num: first number before dash (e.g., "1")
+ - story_num: second number after first dash (e.g., "2")
+ - story_title: remainder after second dash (e.g., "user-authentication")
+
+ Set {{story_id}} = "{{epic_num}}.{{story_num}}"
+ Store story_key for later use (e.g., "1-2-user-authentication")
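The key extraction above amounts to splitting on the first two dashes. A minimal sketch, where the regex is an assumption about the key grammar (epic keys such as `epic-1` fail the leading-digit match and are rejected):

```python
import re

def parse_story_key(key):
    """Split a story key like '1-2-user-authentication' into its parts."""
    m = re.fullmatch(r"(\d+)-(\d+)-(.+)", key)
    if not m:
        raise ValueError(f"not a story key: {key!r}")
    epic_num, story_num, story_title = m.groups()
    return {
        "epic_num": epic_num,
        "story_num": story_num,
        "story_title": story_title,
        "story_id": f"{epic_num}.{story_num}",
        "story_key": key,
    }
```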
+
+
+ Check if this is the first story in epic {{epic_num}} by looking for {{epic_num}}-1-* pattern
+
+ Load {{sprint_status}} and check epic-{{epic_num}} status
+ If epic status is "backlog" → update to "in-progress"
+ If epic status is "contexted" (legacy status) → update to "in-progress" (backward compatibility)
+ If epic status is "in-progress" → no change needed
+
+
+
+
+
+
+ HALT - Cannot proceed
+
+
+
+
+
+ HALT - Cannot proceed
+
+
+
+
+ GOTO step 2a
+
+
+
+ 🔬 EXHAUSTIVE ARTIFACT ANALYSIS - This is where you prevent future developer mistakes!
+
+
+ Read fully and follow `./discover-inputs.md` to load all input files
+ Available content: {epics_content}, {gdd_content}, {architecture_content}, {ux_content}, {project_context}
+
+
+ From {epics_content}, extract Epic {{epic_num}} complete context:
+ **EPIC ANALYSIS:**
+ - Epic objectives and business value
+ - ALL stories in this epic for cross-story context
+ - Our specific story's requirements, user story statement, acceptance criteria
+ - Technical requirements and constraints
+ - Dependencies on other stories/epics
+ - Source hints pointing to original documents
+ Extract our story ({{epic_num}}-{{story_num}}) details:
+ **STORY FOUNDATION:**
+ - User story statement (As a, I want, so that)
+ - Detailed acceptance criteria (already BDD formatted)
+ - Technical requirements specific to this story
+ - Business context and value
+ - Success criteria
+
+ Find {{previous_story_num}}: scan {implementation_artifacts} for the story file in epic {{epic_num}} with the highest story number less than {{story_num}}
+ Load previous story file: {implementation_artifacts}/{{epic_num}}-{{previous_story_num}}-*.md
+ **PREVIOUS STORY INTELLIGENCE:**
+ - Dev notes and learnings from previous story
+ - Review feedback and corrections needed
+ - Files that were created/modified and their patterns
+ - Testing approaches that worked/didn't work
+ - Problems encountered and solutions found
+ - Code patterns established
+ Extract all learnings that could impact current story implementation
+
+
+
+
+ Analyze {project_context} for story-relevant rules and constraints:
+ **PROJECT CONTEXT EXTRACTION:**
+ - Third-party frameworks and libraries required by the project
+ - MCP server configurations and integrations to use
+ - Coding conventions and patterns to follow
+ - Project-wide constraints (e.g., specific APIs, deployment targets)
+ - Any rules that apply to this story's implementation domain
+ Store extracted project rules as {{project_rules}} for embedding in the story file
+
+
+
+
+ Get last 5 commit titles to understand recent work patterns
+ Analyze 1-5 most recent commits for relevance to current story:
+ - Files created/modified
+ - Code patterns and conventions used
+ - Library dependencies added/changed
+ - Architecture decisions implemented
+ - Testing approaches used
+
+ Extract actionable insights for current story implementation
+
+
+
+
+ 🏗️ ARCHITECTURE INTELLIGENCE - Extract everything the developer MUST follow!
+ **ARCHITECTURE DOCUMENT ANALYSIS:** Systematically analyze architecture content for story-relevant requirements:
+
+
+
+ Load complete {architecture_content}
+
+
+ Load architecture index and scan all architecture files
+ **CRITICAL ARCHITECTURE EXTRACTION:** For each architecture section, determine if relevant to this story:
+ - **Technical Stack:** Languages, frameworks, libraries with versions
+ - **Code Structure:** Folder organization, naming conventions, file patterns
+ - **API Patterns:** Service structure, endpoint patterns, data contracts
+ - **Database Schemas:** Tables, relationships, constraints relevant to story
+ - **Security Requirements:** Authentication patterns, authorization rules
+ - **Performance Requirements:** Caching strategies, optimization patterns
+ - **Testing Standards:** Testing frameworks, coverage expectations, test patterns
+ - **Deployment Patterns:** Environment configurations, build processes
+ - **Integration Patterns:** External service integrations, data flows
+ Extract any story-specific requirements that the developer MUST follow
+ Identify any architectural decisions that override previous patterns
+
+
+
+ 🌐 ENSURE LATEST TECH KNOWLEDGE - Prevent outdated implementations!
+ **WEB INTELLIGENCE:** Identify specific technical areas that require latest version knowledge:
+
+
+ From architecture analysis, identify specific libraries, APIs, or frameworks
+ For each critical technology, research latest stable version and key changes:
+ - Latest API documentation and breaking changes
+ - Security vulnerabilities or updates
+ - Performance improvements or deprecations
+ - Best practices for current version
+
+ **EXTERNAL CONTEXT INCLUSION:** Include in story any critical latest information the developer needs:
+ - Specific library versions and why chosen
+ - API endpoints with parameters and authentication
+ - Recent security patches or considerations
+ - Performance optimization techniques
+ - Migration considerations if upgrading
+
+
+
+
+ 📝 CREATE ULTIMATE STORY FILE - The developer's master implementation guide!
+
+ Initialize from template.md:
+ {default_output_file}
+ story_header
+
+
+ story_requirements
+
+
+
+ developer_context_section **DEV AGENT GUARDRAILS:**
+ technical_requirements
+ architecture_compliance
+ library_framework_requirements
+
+ file_structure_requirements
+ testing_requirements
+
+
+
+ previous_story_intelligence
+
+
+
+
+ git_intelligence_summary
+
+
+
+
+ latest_tech_information
+
+
+
+
+ project_context_reference
+ Populate the Project Context Rules section with ALL extracted {{project_rules}} including:
+ - Required third-party frameworks and how they apply to this story
+ - MCP integrations the developer must use
+ - Project-wide conventions and constraints
+ - Any domain-specific rules relevant to this story's tasks
+
+
+
+
+ story_completion_status
+
+
+ Set story Status to: "ready-for-dev"
+ Add completion note: "Ultimate context engine analysis completed - comprehensive developer guide created"
+
+
+
+ Validate the newly created story file {default_output_file} against `./checklist.md` and apply any required fixes before finalizing
+ Save story document unconditionally
+
+
+
+ Update {{sprint_status}}
+ Load the FULL file and read all development_status entries
+ Find development_status key matching {{story_key}}
+ Verify current status is "backlog" (expected previous state)
+ Update development_status[{{story_key}}] = "ready-for-dev"
+ Update last_updated field to current date
+ Save file, preserving ALL comments and structure including STATUS DEFINITIONS
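One way to honor the comment-preservation requirement is to avoid a YAML round-trip entirely and rewrite only the matched status line. A sketch under that assumption; `set_story_status` is a hypothetical helper:

```python
import re

def set_story_status(yaml_text, story_key, new_status, expected="backlog"):
    """Rewrite `<story_key>: <expected>` to the new status, leaving all other lines (and comments) intact."""
    pattern = re.compile(
        rf"^(\s*{re.escape(story_key)}:\s*){re.escape(expected)}\s*$", re.M
    )
    updated, count = pattern.subn(rf"\g<1>{new_status}", yaml_text)
    if count != 1:
        raise ValueError(f"{story_key} not found with status {expected!r}")
    return updated
```

Verifying the expected previous state before writing guards against racing another workflow that already advanced the story.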
+
+
+ Report completion
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
diff --git a/src/workflows/4-production/gds-create-story/customize.toml b/src/workflows/4-production/gds-create-story/customize.toml
new file mode 100644
index 0000000..2b1b3c8
--- /dev/null
+++ b/src/workflows/4-production/gds-create-story/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-create-story. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Every story must be self-contained and implementable without additional context."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 6 (Update sprint status and finalize),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-create-story/workflow.md b/src/workflows/4-production/gds-create-story/workflow.md
deleted file mode 100644
index 2bedd48..0000000
--- a/src/workflows/4-production/gds-create-story/workflow.md
+++ /dev/null
@@ -1,400 +0,0 @@
-# Create Story Workflow
-
-**Goal:** Create a comprehensive story file that gives the dev agent everything needed for flawless implementation.
-
-**Your Role:** Story context engine that prevents LLM developer mistakes, omissions, or disasters.
-- Communicate all responses in {communication_language} and language MUST be tailored to {game_dev_experience}
-- Generate all documents in {document_output_language}
-- Your purpose is NOT to copy from epics - it's to create a comprehensive, optimized story file that gives the DEV agent EVERYTHING needed for flawless implementation
-- COMMON LLM MISTAKES TO PREVENT: reinventing wheels, wrong libraries, wrong file locations, breaking regressions, ignoring UX, vague implementations, lying about completion, not learning from past work
-- EXHAUSTIVE ANALYSIS REQUIRED: You must thoroughly analyze ALL artifacts to extract critical context - do NOT be lazy or skim! This is the most important function in the entire development process!
-- UTILIZE SUBPROCESSES AND SUBAGENTS: Use research subagents, subprocesses or parallel processing if available to thoroughly analyze different artifacts simultaneously and thoroughly
-- SAVE QUESTIONS: If you think of questions or clarifications during analysis, save them for the end after the complete story is written
-- ZERO USER INTERVENTION: Process should be fully automated except for initial epic/story selection or missing documents
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `user_name`
-- `communication_language`, `document_output_language`
-- `game_dev_experience`
-- `planning_artifacts`, `implementation_artifacts`
-- `date` as system-generated current datetime
-
-### Paths
-
-- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
-- `epics_file` = `{planning_artifacts}/epics.md`
-- `gdd_file` = `{planning_artifacts}/gdd.md`
-- `architecture_file` = `{planning_artifacts}/architecture.md`
-- `ux_file` = `{planning_artifacts}/*ux*.md`
-- `story_title` = "" (will be elicited if not derivable)
-- `project_context` = `**/project-context.md` (load if exists)
-- `default_output_file` = `{implementation_artifacts}/{{story_key}}.md`
-
-### Input Files
-
-| Input | Description | Path Pattern(s) | Load Strategy |
-|-------|-------------|------------------|---------------|
-| gdd | Game Design Document (fallback - epics file should have most content) | whole: `{planning_artifacts}/*gdd*.md`, sharded: `{planning_artifacts}/*gdd*/*.md` | SELECTIVE_LOAD |
-| architecture | Architecture (fallback - epics file should have relevant sections) | whole: `{planning_artifacts}/*architecture*.md`, sharded: `{planning_artifacts}/*architecture*/*.md` | SELECTIVE_LOAD |
-| ux | UX design (fallback - epics file should have relevant sections) | whole: `{planning_artifacts}/*ux*.md`, sharded: `{planning_artifacts}/*ux*/*.md` | SELECTIVE_LOAD |
-| epics | Enhanced epics+stories file with BDD and source hints | whole: `{planning_artifacts}/*epic*.md`, sharded: `{planning_artifacts}/*epic*/*.md` | SELECTIVE_LOAD |
-| project_context | Project-wide rules, conventions, MCP configs, and third-party framework requirements | `**/project-context.md` | FULL_LOAD |
-
----
-
-## EXECUTION
-
-
-
-
-
- Parse user-provided story path: extract epic_num, story_num, story_title from format like "1-2-user-auth"
- Set {{epic_num}}, {{story_num}}, {{story_key}} from user input
- GOTO step 2a
-
-
- Check if {{sprint_status}} file exists for auto discover
-
-
-
- Choose option [1], provide epic-story number, path to story docs, or [q] to quit:
-
-
- HALT - No work needed
-
-
-
-
- HALT - User needs to run sprint-planning
-
-
-
- Parse user input: extract epic_num, story_num, story_title
- Set {{epic_num}}, {{story_num}}, {{story_key}} from user input
- GOTO step 2a
-
-
-
- Use user-provided path for story documents
- GOTO step 2a
-
-
-
-
-
- MUST read COMPLETE {sprint_status} file from start to end to preserve order
- Load the FULL file: {{sprint_status}}
- Read ALL lines from beginning to end - do not skip any content
- Parse the development_status section completely
-
- Find the FIRST story (by reading in order from top to bottom) where:
- - Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- - NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- - Status value equals "backlog"
-
-
-
-
- HALT
-
-
- Extract from found story key (e.g., "1-2-user-authentication"):
- - epic_num: first number before dash (e.g., "1")
- - story_num: second number after first dash (e.g., "2")
- - story_title: remainder after second dash (e.g., "user-authentication")
-
- Set {{story_id}} = "{{epic_num}}.{{story_num}}"
- Store story_key for later use (e.g., "1-2-user-authentication")
-
-
- Check if this is the first story in epic {{epic_num}} by looking for {{epic_num}}-1-* pattern
-
- Load {{sprint_status}} and check epic-{{epic_num}} status
- If epic status is "backlog" → update to "in-progress"
- If epic status is "contexted" (legacy status) → update to "in-progress" (backward compatibility)
- If epic status is "in-progress" → no change needed
-
-
-
-
-
-
- HALT - Cannot proceed
-
-
-
-
-
- HALT - Cannot proceed
-
-
-
-
- GOTO step 2a
-
- Load the FULL file: {{sprint_status}}
- Read ALL lines from beginning to end - do not skip any content
- Parse the development_status section completely
-
- Find the FIRST story (by reading in order from top to bottom) where:
- - Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- - NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- - Status value equals "backlog"
-
-
-
-
- HALT
-
-
- Extract from found story key (e.g., "1-2-user-authentication"):
- - epic_num: first number before dash (e.g., "1")
- - story_num: second number after first dash (e.g., "2")
- - story_title: remainder after second dash (e.g., "user-authentication")
-
- Set {{story_id}} = "{{epic_num}}.{{story_num}}"
- Store story_key for later use (e.g., "1-2-user-authentication")
-
-
- Check if this is the first story in epic {{epic_num}} by looking for {{epic_num}}-1-* pattern
-
- Load {{sprint_status}} and check epic-{{epic_num}} status
- If epic status is "backlog" → update to "in-progress"
- If epic status is "contexted" (legacy status) → update to "in-progress" (backward compatibility)
- If epic status is "in-progress" → no change needed
-
-
-
-
-
-
- HALT - Cannot proceed
-
-
-
-
-
- HALT - Cannot proceed
-
-
-
-
- GOTO step 2a
-
-
-
- 🔬 EXHAUSTIVE ARTIFACT ANALYSIS - This is where you prevent future developer mistakes!
-
-
- Read fully and follow `./discover-inputs.md` to load all input files
- Available content: {epics_content}, {gdd_content}, {architecture_content}, {ux_content},
- {project_context}
-
-
- From {epics_content}, extract Epic {{epic_num}} complete context: **EPIC ANALYSIS:** - Epic
- objectives and business value - ALL stories in this epic for cross-story context - Our specific story's requirements, user story
- statement, acceptance criteria - Technical requirements and constraints - Dependencies on other stories/epics - Source hints pointing to
- original documents
- Extract our story ({{epic_num}}-{{story_num}}) details: **STORY FOUNDATION:** - User story statement
- (As a, I want, so that) - Detailed acceptance criteria (already BDD formatted) - Technical requirements specific to this story -
- Business context and value - Success criteria
-
- Find {{previous_story_num}}: scan {implementation_artifacts} for the story file in epic {{epic_num}} with the highest story number less than {{story_num}}
- Load previous story file: {implementation_artifacts}/{{epic_num}}-{{previous_story_num}}-*.md **PREVIOUS STORY INTELLIGENCE:** -
- Dev notes and learnings from previous story - Review feedback and corrections needed - Files that were created/modified and their
- patterns - Testing approaches that worked/didn't work - Problems encountered and solutions found - Code patterns established Extract
- all learnings that could impact current story implementation
-
-
-
-
- Analyze {project_context} for story-relevant rules and constraints:
- **PROJECT CONTEXT EXTRACTION:**
- - Third-party frameworks and libraries required by the project
- - MCP server configurations and integrations to use
- - Coding conventions and patterns to follow
- - Project-wide constraints (e.g., specific APIs, deployment targets)
- - Any rules that apply to this story's implementation domain
- Store extracted project rules as {{project_rules}} for embedding in the story file
-
-
-
-
- Get last 5 commit titles to understand recent work patterns
- Analyze 1-5 most recent commits for relevance to current story:
- - Files created/modified
- - Code patterns and conventions used
- - Library dependencies added/changed
- - Architecture decisions implemented
- - Testing approaches used
-
- Extract actionable insights for current story implementation
-
-
-
-
- 🏗️ ARCHITECTURE INTELLIGENCE - Extract everything the developer MUST follow! **ARCHITECTURE DOCUMENT ANALYSIS:** Systematically
- analyze architecture content for story-relevant requirements:
-
-
-
- Load complete {architecture_content}
-
-
- Load architecture index and scan all architecture files
- **CRITICAL ARCHITECTURE EXTRACTION:** For
- each architecture section, determine if relevant to this story: - **Technical Stack:** Languages, frameworks, libraries with
- versions - **Code Structure:** Folder organization, naming conventions, file patterns - **API Patterns:** Service structure, endpoint
- patterns, data contracts - **Database Schemas:** Tables, relationships, constraints relevant to story - **Security Requirements:**
- Authentication patterns, authorization rules - **Performance Requirements:** Caching strategies, optimization patterns - **Testing
- Standards:** Testing frameworks, coverage expectations, test patterns - **Deployment Patterns:** Environment configurations, build
- processes - **Integration Patterns:** External service integrations, data flows Extract any story-specific requirements that the
- developer MUST follow
- Identify any architectural decisions that override previous patterns
-
-
-
- 🌐 ENSURE LATEST TECH KNOWLEDGE - Prevent outdated implementations! **WEB INTELLIGENCE:** Identify specific
- technical areas that require latest version knowledge:
-
-
- From architecture analysis, identify specific libraries, APIs, or
- frameworks
- For each critical technology, research latest stable version and key changes:
- - Latest API documentation and breaking changes
- - Security vulnerabilities or updates
- - Performance improvements or deprecations
- - Best practices for current version
-
- **EXTERNAL CONTEXT INCLUSION:** Include in story any critical latest information the developer needs:
- - Specific library versions and why chosen
- - API endpoints with parameters and authentication
- - Recent security patches or considerations
- - Performance optimization techniques
- - Migration considerations if upgrading
-
-
-
-
- 📝 CREATE ULTIMATE STORY FILE - The developer's master implementation guide!
-
- Initialize from template.md:
- {default_output_file}
- story_header
-
-
- story_requirements
-
-
-
- developer_context_section **DEV AGENT GUARDRAILS:**
- technical_requirements
- architecture_compliance
- library_framework_requirements
-
- file_structure_requirements
- testing_requirements
-
-
-
- previous_story_intelligence
-
-
-
-
- git_intelligence_summary
-
-
-
-
- latest_tech_information
-
-
-
-
- project_context_reference
- Populate the Project Context Rules section with ALL extracted {{project_rules}} including:
- - Required third-party frameworks and how they apply to this story
- - MCP integrations the developer must use
- - Project-wide conventions and constraints
- - Any domain-specific rules relevant to this story's tasks
-
-
-
-
- story_completion_status
-
-
- Set story Status to: "ready-for-dev"
- Add completion note: "Ultimate
- context engine analysis completed - comprehensive developer guide created"
-
-
-
- Validate the newly created story file {default_output_file} against `./checklist.md` and apply any required fixes before finalizing
- Save story document unconditionally
-
-
-
- Update {{sprint_status}}
- Load the FULL file and read all development_status entries
- Find development_status key matching {{story_key}}
- Verify current status is "backlog" (expected previous state)
- Update development_status[{{story_key}}] = "ready-for-dev"
- Update last_updated field to current date
- Save file, preserving ALL comments and structure including STATUS DEFINITIONS
-
-
- Report completion
-
-
-
-
diff --git a/src/workflows/4-production/gds-dev-story/SKILL.md b/src/workflows/4-production/gds-dev-story/SKILL.md
index 08d97a5..34299f9 100644
--- a/src/workflows/4-production/gds-dev-story/SKILL.md
+++ b/src/workflows/4-production/gds-dev-story/SKILL.md
@@ -3,4 +3,503 @@ name: gds-dev-story
description: 'Execute story implementation following a context filled story spec file. Use when the user says "dev this story [story file]" or "implement the next story in the sprint plan"'
---
-Follow the instructions in ./workflow.md.
+# Dev Story Workflow
+
+**Goal:** Execute story implementation following a context-filled story spec file.
+
+**Your Role:** Developer implementing the story.
+- Communicate all responses in {communication_language}, and your language MUST be tailored to {game_dev_experience}
+- Generate all documents in {document_output_language}
+- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status
+- Execute ALL steps in exact order; do NOT skip steps
+- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.
+- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.
+- Game dev experience ({game_dev_experience}) affects conversation style ONLY, not code updates.
+
+
+## Paths
+
+- `story_file` = `` (explicit story path; auto-discovered if empty)
+- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
+
+## Context
+
+- `project_context` = `**/project-context.md` (load if exists)
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
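+
+As a hypothetical illustration (the file path and values below are invented, not shipped defaults), a team override such as:
+
+```toml
+# {project-root}/_bmad/custom/gds-dev-story.toml (hypothetical)
+[workflow]
+on_complete = "Summarize the completed story in docs/dev-log.md."              # scalar: override wins
+activation_steps_append = ["Confirm the target engine version with the user."] # plain array: appends
+```
+
+would replace the base `on_complete` value while appending one entry to the base `activation_steps_append` list.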
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
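+
+For example, a personal override might contribute one fact of each kind (both entries below are illustrative, not shipped defaults):
+
+```toml
+# {project-root}/_bmad/custom/gds-dev-story.user.toml (hypothetical)
+[workflow]
+persistent_facts = [
+  "Gameplay code must not allocate in the per-frame update loop.", # literal fact, carried verbatim
+  "file:{project-root}/docs/coding-standards/*.md",                # glob: file contents loaded as facts
+]
+```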
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `document_output_language`
+- `game_dev_experience`
+- `implementation_artifacts`
+- `sprint_status`
+- `date` as the system-generated current datetime
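+
+For orientation, a minimal config might look like the sketch below; the key names match the list above, but every value is a placeholder rather than a real project's settings:
+
+```yaml
+# {project-root}/_bmad/gds/config.yaml (illustrative values only)
+user_name: Sam
+communication_language: English
+document_output_language: English
+game_dev_experience: intermediate
+implementation_artifacts: docs/stories
+sprint_status: docs/stories/sprint-status.yaml
+```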
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## EXECUTION
+
+
+ Communicate all responses in {communication_language}, and your language MUST be tailored to {game_dev_experience}
+ Generate all documents in {document_output_language}
+ Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List,
+ Change Log, and Status
+ Execute ALL steps in exact order; do NOT skip steps
+ Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution
+ until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives
+ other instruction.
+ Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.
+ Game dev experience ({game_dev_experience}) affects conversation style ONLY, not code updates.
+
+
+
+ Use {{story_path}} directly
+ Read COMPLETE story file
+ Extract story_key from filename or metadata
+
+
+
+
+
+ MUST read COMPLETE sprint-status.yaml file from start to end to preserve order
+ Load the FULL file: {{sprint_status}}
+ Read ALL lines from beginning to end - do not skip any content
+ Parse the development_status section completely to understand story order
+
+ Find the FIRST story (by reading in order from top to bottom) where:
+ - Key matches pattern: number-number-name (e.g., "1-2-user-auth")
+ - NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
+ - Status value equals "ready-for-dev"
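+
+ To make those key patterns concrete, a development_status section might look like this sketch (story keys and statuses are invented for illustration):
+
+ ```yaml
+ development_status:
+   epic-1: in-progress            # epic key: skipped by the scan
+   1-1-main-menu: done
+   1-2-user-auth: ready-for-dev   # first "number-number-name" key with the target status: selected
+   epic-1-retrospective: backlog  # retrospective key: skipped by the scan
+ ```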
+
+
+
+
+ Choose option [1], [2], [3], or [4], or specify story file path:
+
+
+ HALT - Run create-story to create next story
+
+
+
+ HALT - Run validate-create-story to improve existing stories
+
+
+
+ Provide the story file path to develop:
+ Store user-provided story path as {{story_path}}
+
+
+
+
+
+ Display detailed sprint status analysis
+ HALT - User can review sprint status and provide story path
+
+
+
+ Store user-provided story path as {{story_path}}
+
+
+
+
+
+
+
+ Search {implementation_artifacts} for stories directly
+ Find stories with "ready-for-dev" status in files
+ Look for story files matching pattern: *-*-*.md
+ Read each candidate story file to check Status section
+
+
+
+ What would you like to do? Choose option [1], [2], or [3]:
+
+
+ HALT - Run create-story to create next story
+
+
+
+ HALT - Run validate-create-story to improve existing stories
+
+
+
+ It's unclear what story you want developed. Please provide the full path to the story file:
+ Store user-provided story path as {{story_path}}
+ Continue with provided story file
+
+
+
+
+ Use discovered story file and extract story_key
+
+
+
+ Store the found story_key (e.g., "1-2-user-authentication") for later status updates
+ Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md
+ Read COMPLETE story file from discovered path
+
+
+
+ Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status
+
+ Load comprehensive context from story file's Dev Notes section
+ Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications
+ Use enhanced story context to inform implementation decisions and approaches
+
+ Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks
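+
+ As a sketch, the sections above map onto a story file shaped roughly like this (headings taken from the parse list; all content invented):
+
+ ```markdown
+ # Story 1.2: User Auth
+ ## Status
+ ready-for-dev
+ ## Story
+ As a player, I want ..., so that ...
+ ## Acceptance Criteria
+ 1. Given ... When ... Then ...
+ ## Tasks/Subtasks
+ - [x] Task 1: ...
+ - [ ] Task 2: ...   <- first incomplete task
+ ## Dev Notes
+ ## Dev Agent Record
+ ## File List
+ ## Change Log
+ ```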
+
+
+ Completion sequence
+
+ HALT: "Cannot develop story without access to story file"
+ ASK user to clarify or HALT
+
+
+
+ Load all available context to inform implementation
+
+ Load {project_context} for coding standards and project-wide patterns (if exists)
+
+ Extract actionable rules from {project_context}:
+ - Required third-party frameworks and libraries (MUST use these, not alternatives)
+ - MCP server integrations to leverage during implementation
+ - Coding conventions and naming patterns
+ - Project-wide constraints and mandatory patterns
+
+ Store as {{active_project_rules}} — these override default implementation choices
+
+ Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status
+ Load comprehensive context from story file's Dev Notes section
+ Check story's "Project Context Rules" section in Dev Notes for pre-extracted project rules
+ Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications
+ Use enhanced story context AND {{active_project_rules}} to inform implementation decisions
+
+
+
+
+ Determine if this is a fresh start or continuation after code review
+
+ Check if "Senior Developer Review (AI)" section exists in the story file
+ Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks
+
+
+ Set review_continuation = true
+ Extract from "Senior Developer Review (AI)" section:
+ - Review outcome (Approve/Changes Requested/Blocked)
+ - Review date
+ - Total action items with checkboxes (count checked vs unchecked)
+ - Severity breakdown (High/Med/Low counts)
+
+ Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection
+ Store list of unchecked review items as {{pending_review_items}}
+
+
+
+
+
+ Set review_continuation = false
+ Set {{pending_review_items}} = empty
+
+
+
+
+
+
+
+ Load the FULL file: {{sprint_status}}
+ Read all development_status entries to find {{story_key}}
+ Get current status value for development_status[{{story_key}}]
+
+
+ Update development_status[{{story_key}}] = "in-progress"
+ Update last_updated field to current date
+
+
+
+
+
+
+
+
+
+
+
+ Store {{current_sprint_status}} for later use
+
+
+
+
+ Set {{current_sprint_status}} = "no-sprint-tracking"
+
+
+
+
+ FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION
+
+ Review the current task/subtask from the story file - this is your authoritative implementation guide
+ Plan implementation following red-green-refactor cycle
+
+
+ Write FAILING tests first for the task/subtask functionality
+ Confirm tests fail before implementation - this validates test correctness
+
+
+ Implement MINIMAL code to make tests pass
+ Run tests to confirm they now pass
+ Handle error conditions and edge cases as specified in task/subtask
+
+
+ Improve code structure while keeping tests green
+ Ensure code follows architecture patterns and coding standards from Dev Notes
+ Apply {{active_project_rules}} — use required frameworks, MCP integrations, and conventions from project-context.md
+
+ Document technical approach and decisions in Dev Agent Record → Implementation Plan
+
+ HALT: "Additional dependencies need user approval"
+ HALT and request guidance
+ HALT: "Cannot proceed without necessary configuration files"
+
+ NEVER implement anything not mapped to a specific task/subtask in the story file
+ NEVER proceed to next task until current task/subtask is complete AND tests pass
+ Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition
+ Do NOT propose to pause for review until Step 9 completion gates are satisfied
+
+
+
+ Create unit tests for business logic and core functionality introduced/changed by the task
+ Add integration tests for component interactions specified in story requirements
+ Include end-to-end tests for critical user flows when story requirements demand them
+ Cover edge cases and error handling scenarios identified in story Dev Notes
+
+
+
+ Determine how to run tests for this repo (infer test framework from project structure)
+ Run all existing tests to ensure no regressions
+ Run the new tests to verify implementation correctness
+ Run linting and code quality checks if configured in project
+ Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly
+ STOP and fix before continuing - identify breaking changes immediately
+ STOP and fix before continuing - ensure implementation correctness
+
+
+
+ NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING
+
+
+ Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100%
+ Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features
+ Validate that ALL acceptance criteria related to this task are satisfied
+ Run full test suite to ensure NO regressions introduced
+
+
+
+ Extract review item details (severity, description, related AC/file)
+ Add to resolution tracking list: {{resolved_review_items}}
+
+
+ Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section
+
+
+ Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description
+ Mark that action item checkbox [x] as resolved
+
+ Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"
+
+
+
+
+ ONLY THEN mark the task (and subtasks) checkbox with [x]
+ Update File List section with ALL new, modified, or deleted files (paths relative to repo root)
+ Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested
+
+
+
+ DO NOT mark task complete - fix issues first
+ HALT if unable to fix validation failures
+
+
+
+ Count total resolved review items in this session
+ Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"
+
+
+ Save the story file
+ Determine if more incomplete tasks remain
+
+ Next task
+
+
+ Completion
+
+
+
+
+ Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)
+ Run the full regression suite (do not skip)
+ Confirm File List includes every changed file
+ Execute enhanced definition-of-done validation
+ Update the story Status to: "review"
+
+
+ Validate definition-of-done checklist with essential requirements:
+ - All tasks/subtasks marked complete with [x]
+ - Implementation satisfies every Acceptance Criterion
+ - Unit tests for core functionality added/updated
+ - Integration tests for component interactions added when required
+ - End-to-end tests for critical flows added when story demands them
+ - All tests pass (no regressions, new tests successful)
+ - Code quality checks pass (linting, static analysis if configured)
+ - File List includes every new/modified/deleted file (relative paths)
+ - Dev Agent Record contains implementation notes
+ - Change Log includes summary of changes
+ - Only permitted story sections were modified
+
+
+
+
+ Load the FULL file: {sprint_status}
+ Find development_status key matching {{story_key}}
+ Verify current status is "in-progress" (expected previous state)
+ Update development_status[{{story_key}}] = "review"
+ Update last_updated field to current date
+ Save file, preserving ALL comments and structure including STATUS DEFINITIONS
+
+
+
+
+
+
+
+
+
+
+
+
+ HALT - Complete remaining tasks before marking ready for review
+ HALT - Fix regression issues before completing
+ HALT - Update File List with all changed files
+ HALT - Address DoD failures before completing
+
+
+
+ Execute the enhanced definition-of-done checklist using the validation framework
+ Prepare a concise summary in Dev Agent Record → Completion Notes
+
+ Communicate to {user_name} that story implementation is complete and ready for review
+ Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified
+ Provide the story file path and current status (now "review")
+
+ Based on {game_dev_experience}, ask if user needs any explanations about:
+ - What was implemented and how it works
+ - Why certain technical decisions were made
+ - How to test or verify the changes
+ - Any patterns, libraries, or approaches used
+ - Anything else they'd like clarified
+
+
+
+ Provide clear, contextual explanations tailored to {game_dev_experience}
+ Use examples and references to specific code when helpful
+
+
+ Once explanations are complete (or user indicates no questions), suggest logical next steps
+ Recommended next steps (flexible based on project setup):
+ - Review the implemented story and test the changes
+ - Verify all acceptance criteria are met
+ - Ensure deployment readiness if applicable
+ - Run `code-review` workflow for peer review
+ - Optional: if the Test Architect module is installed, run `/bmad:gds:workflows:automate` to expand guardrail tests
+
+
+
+
+ Suggest checking {sprint_status} to see project progress
+
+ Remain flexible - allow user to choose their own path or ask for other assistance
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
diff --git a/src/workflows/4-production/gds-dev-story/customize.toml b/src/workflows/4-production/gds-dev-story/customize.toml
new file mode 100644
index 0000000..d20fb57
--- /dev/null
+++ b/src/workflows/4-production/gds-dev-story/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-dev-story. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Stories are implemented against a frozen spec; scope creep requires a correct-course cycle."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 10 (Completion communication and user support),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-dev-story/workflow.md b/src/workflows/4-production/gds-dev-story/workflow.md
deleted file mode 100644
index 10ece12..0000000
--- a/src/workflows/4-production/gds-dev-story/workflow.md
+++ /dev/null
@@ -1,461 +0,0 @@
-# Dev Story Workflow
-
-**Goal:** Execute story implementation following a context filled story spec file.
-
-**Your Role:** Developer implementing the story.
-- Communicate all responses in {communication_language} and language MUST be tailored to {game_dev_experience}
-- Generate all documents in {document_output_language}
-- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List, Change Log, and Status
-- Execute ALL steps in exact order; do NOT skip steps
-- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives other instruction.
-- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.
-- Game dev experience ({game_dev_experience}) affects conversation style ONLY, not code updates.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `user_name`
-- `communication_language`, `document_output_language`
-- `game_dev_experience`
-- `implementation_artifacts`
-- `date` as system-generated current datetime
-
-### Paths
-
-- `story_file` = `` (explicit story path; auto-discovered if empty)
-- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
-
-### Context
-
-- `project_context` = `**/project-context.md` (load if exists)
-
----
-
-## EXECUTION
-
-
- Communicate all responses in {communication_language} and language MUST be tailored to {game_dev_experience}
- Generate all documents in {document_output_language}
- Only modify the story file in these areas: Tasks/Subtasks checkboxes, Dev Agent Record (Debug Log, Completion Notes), File List,
- Change Log, and Status
- Execute ALL steps in exact order; do NOT skip steps
- Absolutely DO NOT stop because of "milestones", "significant progress", or "session boundaries". Continue in a single execution
- until the story is COMPLETE (all ACs satisfied and all tasks/subtasks checked) UNLESS a HALT condition is triggered or the USER gives
- other instruction.
- Do NOT schedule a "next session" or request review pauses unless a HALT condition applies. Only Step 6 decides completion.
- Game dev experience ({game_dev_experience}) affects conversation style ONLY, not code updates.
-
-
-
- Use {{story_path}} directly
- Read COMPLETE story file
- Extract story_key from filename or metadata
-
-
-
-
-
- MUST read COMPLETE sprint-status.yaml file from start to end to preserve order
- Load the FULL file: {{sprint_status}}
- Read ALL lines from beginning to end - do not skip any content
- Parse the development_status section completely to understand story order
-
- Find the FIRST story (by reading in order from top to bottom) where:
- - Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- - NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- - Status value equals "ready-for-dev"
-
-
-
-
- Choose option [1], [2], [3], or [4], or specify story file path:
-
-
- HALT - Run create-story to create next story
-
-
-
- HALT - Run validate-create-story to improve existing stories
-
-
-
- Provide the story file path to develop:
- Store user-provided story path as {{story_path}}
-
-
-
-
-
- Display detailed sprint status analysis
- HALT - User can review sprint status and provide story path
-
-
-
- Store user-provided story path as {{story_path}}
-
-
-
-
-
-
-
- Search {implementation_artifacts} for stories directly
- Find stories with "ready-for-dev" status in files
- Look for story files matching pattern: *-*-*.md
- Read each candidate story file to check Status section
-
-
-
- What would you like to do? Choose option [1], [2], or [3]:
-
-
- HALT - Run create-story to create next story
-
-
-
- HALT - Run validate-create-story to improve existing stories
-
-
-
- It's unclear what story you want developed. Please provide the full path to the story file:
- Store user-provided story path as {{story_path}}
- Continue with provided story file
-
-
-
-
- Use discovered story file and extract story_key
-
-
-
- Store the found story_key (e.g., "1-2-user-authentication") for later status updates
- Find matching story file in {implementation_artifacts} using story_key pattern: {{story_key}}.md
- Read COMPLETE story file from discovered path
-
-
-
- Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status
-
- Load comprehensive context from story file's Dev Notes section
- Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications
- Use enhanced story context to inform implementation decisions and approaches
-
- Identify first incomplete task (unchecked [ ]) in Tasks/Subtasks
-
-
- Completion sequence
-
- HALT: "Cannot develop story without access to story file"
- ASK user to clarify or HALT
-
-
-
- Load all available context to inform implementation
-
- Load {project_context} for coding standards and project-wide patterns (if exists)
-
- Extract actionable rules from {project_context}:
- - Required third-party frameworks and libraries (MUST use these, not alternatives)
- - MCP server integrations to leverage during implementation
- - Coding conventions and naming patterns
- - Project-wide constraints and mandatory patterns
-
- Store as {{active_project_rules}} — these override default implementation choices
-
- Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes, Dev Agent Record, File List, Change Log, Status
- Load comprehensive context from story file's Dev Notes section
- Check story's "Project Context Rules" section in Dev Notes for pre-extracted project rules
- Extract developer guidance from Dev Notes: architecture requirements, previous learnings, technical specifications
- Use enhanced story context AND {{active_project_rules}} to inform implementation decisions
-
-
-
-
- Determine if this is a fresh start or continuation after code review
-
- Check if "Senior Developer Review (AI)" section exists in the story file
- Check if "Review Follow-ups (AI)" subsection exists under Tasks/Subtasks
-
-
- Set review_continuation = true
- Extract from "Senior Developer Review (AI)" section:
- - Review outcome (Approve/Changes Requested/Blocked)
- - Review date
- - Total action items with checkboxes (count checked vs unchecked)
- - Severity breakdown (High/Med/Low counts)
-
- Count unchecked [ ] review follow-up tasks in "Review Follow-ups (AI)" subsection
- Store list of unchecked review items as {{pending_review_items}}
-
-
-
-
-
- Set review_continuation = false
- Set {{pending_review_items}} = empty
-
-
-
-
-
-
-
- Load the FULL file: {{sprint_status}}
- Read all development_status entries to find {{story_key}}
- Get current status value for development_status[{{story_key}}]
-
-
- Update the story's status in the sprint status report to "in-progress"
- Update last_updated field to current date
-
-
-
-
-
-
-
-
-
-
-
- Store {{current_sprint_status}} for later use
-
-
-
-
- Set {{current_sprint_status}} = "no-sprint-tracking"
-
-
-
-
- FOLLOW THE STORY FILE TASKS/SUBTASKS SEQUENCE EXACTLY AS WRITTEN - NO DEVIATION
-
- Review the current task/subtask from the story file - this is your authoritative implementation guide
- Plan implementation following red-green-refactor cycle
-
-
- Write FAILING tests first for the task/subtask functionality
- Confirm tests fail before implementation - this validates test correctness
-
-
- Implement MINIMAL code to make tests pass
- Run tests to confirm they now pass
- Handle error conditions and edge cases as specified in task/subtask
-
-
- Improve code structure while keeping tests green
- Ensure code follows architecture patterns and coding standards from Dev Notes
- Apply {{active_project_rules}} — use required frameworks, MCP integrations, and conventions from project-context.md
-
- Document technical approach and decisions in Dev Agent Record → Implementation Plan
-
- HALT: "Additional dependencies need user approval"
- HALT and request guidance
- HALT: "Cannot proceed without necessary configuration files"
-
- NEVER implement anything not mapped to a specific task/subtask in the story file
- NEVER proceed to next task until current task/subtask is complete AND tests pass
- Execute continuously without pausing until all tasks/subtasks are complete or explicit HALT condition
- Do NOT propose to pause for review until Step 9 completion gates are satisfied
-
-
-
- Create unit tests for business logic and core functionality introduced/changed by the task
- Add integration tests for component interactions specified in story requirements
- Include end-to-end tests for critical user flows when story requirements demand them
- Cover edge cases and error handling scenarios identified in story Dev Notes
-
-
-
- Determine how to run tests for this repo (infer test framework from project structure)
- Run all existing tests to ensure no regressions
- Run the new tests to verify implementation correctness
- Run linting and code quality checks if configured in project
- Validate implementation meets ALL story acceptance criteria; enforce quantitative thresholds explicitly
- STOP and fix before continuing - identify breaking changes immediately
- STOP and fix before continuing - ensure implementation correctness
-
-
-
- NEVER mark a task complete unless ALL conditions are met - NO LYING OR CHEATING
-
-
- Verify ALL tests for this task/subtask ACTUALLY EXIST and PASS 100%
- Confirm implementation matches EXACTLY what the task/subtask specifies - no extra features
- Validate that ALL acceptance criteria related to this task are satisfied
- Run full test suite to ensure NO regressions introduced
-
-
-
- Extract review item details (severity, description, related AC/file)
- Add to resolution tracking list: {{resolved_review_items}}
-
-
- Mark task checkbox [x] in "Tasks/Subtasks → Review Follow-ups (AI)" section
-
-
- Find matching action item in "Senior Developer Review (AI) → Action Items" section by matching description
- Mark that action item checkbox [x] as resolved
-
- Add to Dev Agent Record → Completion Notes: "✅ Resolved review finding [{{severity}}]: {{description}}"
-
-
-
-
- ONLY THEN mark the task (and subtasks) checkbox with [x]
- Update File List section with ALL new, modified, or deleted files (paths relative to repo root)
- Add completion notes to Dev Agent Record summarizing what was ACTUALLY implemented and tested
-
-
-
- DO NOT mark task complete - fix issues first
- HALT if unable to fix validation failures
-
-
-
- Count total resolved review items in this session
- Add Change Log entry: "Addressed code review findings - {{resolved_count}} items resolved (Date: {{date}})"
-
-
- Save the story file
- Determine if more incomplete tasks remain
-
- Next task
-
-
- Completion
-
-
-
-
- Verify ALL tasks and subtasks are marked [x] (re-scan the story document now)
- Run the full regression suite (do not skip)
- Confirm File List includes every changed file
- Execute enhanced definition-of-done validation
- Update the story Status to: "review"
-
-
- Validate definition-of-done checklist with essential requirements:
- - All tasks/subtasks marked complete with [x]
- - Implementation satisfies every Acceptance Criterion
- - Unit tests for core functionality added/updated
- - Integration tests for component interactions added when required
- - End-to-end tests for critical flows added when story demands them
- - All tests pass (no regressions, new tests successful)
- - Code quality checks pass (linting, static analysis if configured)
- - File List includes every new/modified/deleted file (relative paths)
- - Dev Agent Record contains implementation notes
- - Change Log includes summary of changes
- - Only permitted story sections were modified
-
-
-
-
- Load the FULL file: {sprint_status}
- Find development_status key matching {{story_key}}
- Verify current status is "in-progress" (expected previous state)
- Update development_status[{{story_key}}] = "review"
- Update last_updated field to current date
- Save file, preserving ALL comments and structure including STATUS DEFINITIONS
-
-
-
-
-
-
-
-
-
-
-
-
- HALT - Complete remaining tasks before marking ready for review
- HALT - Fix regression issues before completing
- HALT - Update File List with all changed files
- HALT - Address DoD failures before completing
-
-
-
- Execute the enhanced definition-of-done checklist using the validation framework
- Prepare a concise summary in Dev Agent Record → Completion Notes
-
- Communicate to {user_name} that story implementation is complete and ready for review
- Summarize key accomplishments: story ID, story key, title, key changes made, tests added, files modified
- Provide the story file path and current status (now "review")
-
- Based on {game_dev_experience}, ask if user needs any explanations about:
- - What was implemented and how it works
- - Why certain technical decisions were made
- - How to test or verify the changes
- - Any patterns, libraries, or approaches used
- - Anything else they'd like clarified
-
-
-
- Provide clear, contextual explanations tailored to {game_dev_experience}
- Use examples and references to specific code when helpful
-
-
- Once explanations are complete (or user indicates no questions), suggest logical next steps
- Recommended next steps (flexible based on project setup):
- - Review the implemented story and test the changes
- - Verify all acceptance criteria are met
- - Ensure deployment readiness if applicable
- - Run `code-review` workflow for peer review
- - Optional: If Test Architect module installed, run `/bmad:gds:workflows:automate` to expand guardrail tests
-
-
-
-
- Suggest checking {sprint_status} to see project progress
-
- Remain flexible - allow user to choose their own path or ask for other assistance
-
-
-
diff --git a/src/workflows/4-production/gds-retrospective/SKILL.md b/src/workflows/4-production/gds-retrospective/SKILL.md
index 4927d0a..a4896fa 100644
--- a/src/workflows/4-production/gds-retrospective/SKILL.md
+++ b/src/workflows/4-production/gds-retrospective/SKILL.md
@@ -3,4 +3,1523 @@ name: gds-retrospective
description: 'Post-epic review to extract lessons and assess success. Use when the user says "run a retrospective" or "lets retro the epic [epic]"'
---
-Follow the instructions in ./workflow.md.
+# Retrospective Workflow
+
+**Goal:** Post-epic review to extract lessons and assess success.
+
+**Your Role:** Developer facilitating retrospective.
+- No time estimates — NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed.
+- Communicate all responses in {communication_language} and language MUST be tailored to {game_dev_experience}
+- Generate all documents in {document_output_language}
+- Document output: Retrospective analysis. Concise insights, lessons learned, action items. Game dev experience ({game_dev_experience}) affects conversation style ONLY, not retrospective content.
+- Facilitation notes:
+ - Psychological safety is paramount - NO BLAME
+ - Focus on systems, processes, and learning
+  - Everyone contributes; specific examples are preferred
+ - Action items must be achievable with clear ownership
+ - Two-part format: (1) Epic Review + (2) Next Epic Preparation
+- Party mode protocol:
+ - ALL agent dialogue MUST use format: "Name (Role): dialogue"
+ - Example: Amelia (Developer): "Let's begin..."
+ - Example: {user_name} (Project Lead): [User responds]
+ - Create natural back-and-forth with user actively participating
+ - Show disagreements, diverse perspectives, authentic team dynamics
+
+
+## Paths
+
+- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml`
+
+## Input Files
+
+| Input | Description | Path Pattern(s) | Load Strategy |
+|-------|-------------|------------------|---------------|
+| epics | The completed epic for retrospective | whole: `{planning_artifacts}/*epic*.md`, sharded_index: `{planning_artifacts}/*epic*/index.md`, sharded_single: `{planning_artifacts}/*epic*/epic-{{epic_num}}.md` | SELECTIVE_LOAD |
+| previous_retrospective | Previous epic's retrospective (optional) | `{implementation_artifacts}/**/epic-{{prev_epic_num}}-retro-*.md` | SELECTIVE_LOAD |
+| architecture | System architecture for context | whole: `{planning_artifacts}/*architecture*.md`, sharded: `{planning_artifacts}/*architecture*/*.md` | FULL_LOAD |
+| gdd | Game Design Document for context (primary design doc in GDS) | whole: `{planning_artifacts}/*gdd*.md`, sharded: `{planning_artifacts}/*gdd*/*.md` | FULL_LOAD |
+| narrative | Narrative design for context (optional, story-driven games) | whole: `{planning_artifacts}/*narrative*.md`, sharded: `{planning_artifacts}/*narrative*/*.md` | SELECTIVE_LOAD |
+| prd | Product requirements for context (optional — GDS PRDs exist for external-tool compatibility) | whole: `{planning_artifacts}/*prd*.md`, sharded: `{planning_artifacts}/*prd*/*.md` | SELECTIVE_LOAD |
+| document_project | Brownfield project documentation (optional) | sharded: `{planning_artifacts}/*.md` | INDEX_GUIDED |
+
+## Required Inputs
+
+- `agent_manifest` = `{project-root}/_bmad/_config/agent-manifest.csv`
+
+## Context
+
+- `project_context` = `**/project-context.md` (load if exists)
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
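The structural merge rules above can be sketched in Python. This is a minimal illustration of the fallback behavior, not the actual `resolve_customization.py` implementation; the function names are hypothetical, and it operates on already-parsed TOML data.

```python
# Minimal sketch of the BMad structural merge rules described above.
# Illustrative only -- not the real resolve_customization.py.

def merge(base, override):
    """Merge `override` into `base`: scalars override, tables deep-merge,
    arrays of tables keyed by 'code' or 'id' replace-or-append,
    all other arrays append."""
    result = dict(base)
    for key, value in override.items():
        if key not in result:
            result[key] = value
        elif isinstance(value, dict) and isinstance(result[key], dict):
            result[key] = merge(result[key], value)          # tables deep-merge
        elif isinstance(value, list) and isinstance(result[key], list):
            result[key] = merge_arrays(result[key], value)   # arrays
        else:
            result[key] = value                              # scalars override
    return result

def merge_arrays(base, override):
    """Arrays of tables keyed by 'code' or 'id' replace matching entries
    and append new ones; any other array simply appends."""
    if all(isinstance(e, dict) and ("code" in e or "id" in e) for e in base + override):
        keyed = {e.get("code", e.get("id")): dict(e) for e in base}
        for e in override:
            keyed[e.get("code", e.get("id"))] = dict(e)
        return list(keyed.values())
    return base + override                                   # other arrays append
```

Applying this to the three files in base → team → user order (skipping any missing file) would yield the resolved `workflow` block.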
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
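The `file:` expansion above can be sketched as follows. The function name and signature are illustrative; only the `file:` prefix convention comes from this workflow.

```python
from pathlib import Path

def load_persistent_facts(entries, project_root):
    """Expand persistent_facts entries: 'file:' entries are paths or
    globs under project_root whose contents become facts; all other
    entries are carried as facts verbatim."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            pattern = entry[len("file:"):].strip()
            for path in sorted(Path(project_root).glob(pattern)):
                if path.is_file():
                    facts.append(path.read_text(encoding="utf-8"))
        else:
            facts.append(entry)
    return facts
```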
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `document_output_language`
+- `game_dev_experience`
+- `planning_artifacts`
+- `implementation_artifacts`
+- `date` as the system-generated current datetime
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## EXECUTION
+
+
+
+
+
+Load {project_context} for project-wide patterns and conventions (if exists)
+Explain to {user_name} the epic discovery process using natural dialogue
+
+
+
+PRIORITY 1: Check {sprint_status_file} first
+
+Load the FULL file: {sprint_status_file}
+Read ALL development_status entries
+Find the highest epic number with at least one story marked "done"
+Extract epic number from keys like "epic-X-retrospective" or story keys like "X-Y-story-name"
+Set {{detected_epic}} = highest epic number found with completed stories
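The detection logic above amounts to a scan over the parsed `development_status` mapping. A sketch, assuming the YAML has already been parsed into a dict (the function name and key patterns follow the examples in this workflow):

```python
import re

def detect_epic(development_status):
    """Return the highest epic number with at least one story marked
    'done', or None. Story keys look like '1-2-user-auth'; 'epic-N'
    and 'epic-N-retrospective' keys are ignored."""
    story_key = re.compile(r"^(\d+)-\d+-[\w-]+$")
    done_epics = []
    for key, status in development_status.items():
        m = story_key.match(key)
        if m and status == "done":
            done_epics.append(int(m.group(1)))
    return max(done_epics) if done_epics else None
```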
+
+
+ Present finding to user with context
+
+
+
+WAIT for {user_name} to confirm or correct
+
+
+ Set {{epic_number}} = {{detected_epic}}
+
+
+
+ Set {{epic_number}} = user-provided number
+
+
+
+
+
+ PRIORITY 2: Ask user directly
+
+
+
+WAIT for {user_name} to provide epic number
+Set {{epic_number}} = user-provided number
+
+
+
+ PRIORITY 3: Fallback to stories folder
+
+Scan {implementation_artifacts} for highest numbered story files
+Extract epic numbers from story filenames (pattern: epic-X-Y-story-name.md)
+Set {{detected_epic}} = highest epic number found
+
+
+
+WAIT for {user_name} to confirm or correct
+Set {{epic_number}} = confirmed number
+
+
+Once {{epic_number}} is determined, verify epic completion status
+
+Find all stories for epic {{epic_number}} in {sprint_status_file}:
+
+- Look for keys starting with "{{epic_number}}-" (e.g., "1-1-", "1-2-", etc.)
+- Exclude epic key itself ("epic-{{epic_number}}")
+- Exclude retrospective key ("epic-{{epic_number}}-retrospective")
+
+
+Count total stories found for this epic
+Count stories with status = "done"
+Collect list of pending story keys (status != "done")
+Determine if complete: true if all stories are done, false otherwise
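The completion check above can be sketched over the same parsed mapping; the summary shape is illustrative:

```python
import re

def epic_completion(development_status, epic_number):
    """Summarize completion for one epic: total stories, done count,
    pending keys, and whether the epic is complete. Matching story
    keys only ('N-M-...') excludes 'epic-N' and
    'epic-N-retrospective' automatically."""
    prefix = re.compile(rf"^{epic_number}-\d+-")
    stories = {k: v for k, v in development_status.items() if prefix.match(k)}
    pending = sorted(k for k, v in stories.items() if v != "done")
    return {
        "total": len(stories),
        "done": len(stories) - len(pending),
        "pending": pending,
        "complete": len(stories) > 0 and not pending,
    }
```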
+
+
+
+
+Continue with incomplete epic? (yes/no)
+
+
+
+ HALT
+
+
+Set {{partial_retrospective}} = true
+
+
+
+
+
+
+
+
+
+
+ Load input files according to the Input Files table above. For SELECTIVE_LOAD inputs, load only the epic matching {{epic_number}}. For FULL_LOAD inputs, load the complete document. For INDEX_GUIDED inputs, check the index first and load relevant sections. After discovery, these content variables are available: {epics_content} (selective load for this epic), {architecture_content}, {gdd_content}, {narrative_content}, {prd_content}, {document_project_content}
+
+
+
+
+
+
+For each story in epic {{epic_number}}, read the complete story file from {implementation_artifacts}/{{epic_number}}-{{story_num}}-*.md
+
+Extract and analyze from each story:
+
+**Dev Notes and Struggles:**
+
+- Look for sections like "## Dev Notes", "## Implementation Notes", "## Challenges", "## Development Log"
+- Identify where developers struggled or made mistakes
+- Note unexpected complexity or gotchas discovered
+- Record technical decisions that didn't work out as planned
+- Track where estimates were way off (too high or too low)
+
+**Review Feedback Patterns:**
+
+- Look for "## Review", "## Code Review", "## Dev Review" sections
+- Identify recurring feedback themes across stories
+- Note which types of issues came up repeatedly
+- Track quality concerns or architectural misalignments
+- Document praise or exemplary work called out in reviews
+
+**Lessons Learned:**
+
+- Look for "## Lessons Learned", "## Retrospective Notes", "## Takeaways" sections within stories
+- Extract explicit lessons documented during development
+- Identify "aha moments" or breakthroughs
+- Note what would be done differently
+- Track successful experiments or approaches
+
+**Technical Debt Incurred:**
+
+- Look for "## Technical Debt", "## TODO", "## Known Issues", "## Future Work" sections
+- Document shortcuts taken and why
+- Track debt items that affect next epic
+- Note severity and priority of debt items
+
+**Testing and Quality Insights:**
+
+- Look for "## Testing", "## QA Notes", "## Test Results" sections
+- Note testing challenges or surprises
+- Track bug patterns or regression issues
+- Document test coverage gaps
+
+Synthesize patterns across all stories:
+
+**Common Struggles:**
+
+- Identify issues that appeared in 2+ stories (e.g., "3 out of 5 stories had API authentication issues")
+- Note areas where team consistently struggled
+- Track where complexity was underestimated
+
+**Recurring Review Feedback:**
+
+- Identify feedback themes (e.g., "Error handling was flagged in every review")
+- Note quality patterns (positive and negative)
+- Track areas where team improved over the course of epic
+
+**Breakthrough Moments:**
+
+- Document key discoveries (e.g., "Story 3 discovered the caching pattern we used for rest of epic")
+- Note when team velocity improved dramatically
+- Track innovative solutions worth repeating
+
+**Velocity Patterns:**
+
+- Compare relative effort per story, avoiding time-based estimates
+- Note velocity trends (e.g., "The first 2 stories proved far more complex than planned")
+- Identify which types of stories went faster/slower
+
+**Team Collaboration Highlights:**
+
+- Note moments of excellent collaboration mentioned in stories
+- Track where pair programming or mob programming was effective
+- Document effective problem-solving sessions
+
+Store this synthesis - these patterns will drive the retrospective discussion
+
+
+
+
+
+
+
+Calculate previous epic number: {{prev_epic_num}} = {{epic_number}} - 1
+
+
+ Search for previous retrospectives using pattern: {implementation_artifacts}/epic-{{prev_epic_num}}-retro-*.md
+
+
+
+
+ Read the previous retrospectives
+
+ Extract key elements:
+ - **Action items committed**: What did the team agree to improve?
+ - **Lessons learned**: What insights were captured?
+ - **Process improvements**: What changes were agreed upon?
+ - **Technical debt flagged**: What debt was documented?
+ - **Team agreements**: What commitments were made?
+ - **Preparation tasks**: What was needed for this epic?
+
+ Cross-reference with current epic execution:
+
+ **Action Item Follow-Through:**
+ - For each action item from Epic {{prev_epic_num}} retro, check if it was completed
+ - Look for evidence in current epic's story records
+ - Mark each action item: ✅ Completed, ⏳ In Progress, ❌ Not Addressed
+
+ **Lessons Applied:**
+ - For each lesson from Epic {{prev_epic_num}}, check if team applied it in Epic {{epic_number}}
+ - Look for evidence in dev notes, review feedback, or outcomes
+ - Document successes and missed opportunities
+
+ **Process Improvements Effectiveness:**
+ - For each process change agreed to in Epic {{prev_epic_num}}, assess if it helped
+ - Did the change improve velocity, quality, or team satisfaction?
+ - Should we keep, modify, or abandon the change?
+
+ **Technical Debt Status:**
+ - For each debt item from Epic {{prev_epic_num}}, check if it was addressed
+ - Did unaddressed debt cause problems in Epic {{epic_number}}?
+ - Did the debt grow or shrink?
+
+ Prepare "continuity insights" for the retrospective discussion
+
+ Identify wins where previous lessons were applied successfully:
+ - Document specific examples of applied learnings
+ - Note positive impact on Epic {{epic_number}} outcomes
+ - Celebrate team growth and improvement
+
+ Identify missed opportunities where previous lessons were ignored:
+ - Document where team repeated previous mistakes
+ - Note impact of not applying lessons (without blame)
+ - Explore barriers that prevented application
+
+
+
+
+
+
+
+Set {{first_retrospective}} = true
+
+
+
+
+
+Set {{first_retrospective}} = true
+
+
+
+
+
+
+Calculate next epic number: {{next_epic_num}} = {{epic_number}} + 1
+
+
+
+Attempt to load next epic using selective loading strategy:
+
+**Try sharded first (more specific):**
+Check if file exists: {planning_artifacts}/*epic*/epic-{{next_epic_num}}.md
+
+
+ Load {planning_artifacts}/*epic*/epic-{{next_epic_num}}.md
+ Set {{next_epic_source}} = "sharded"
+
+
+**Fallback to whole document:**
+
+Check if file exists: {planning_artifacts}/*epic*.md
+
+
+ Load entire epics document
+ Extract Epic {{next_epic_num}} section
+ Set {{next_epic_source}} = "whole"
+
+
+
+
+ Analyze next epic for:
+ - Epic title and objectives
+ - Planned stories and complexity estimates
+ - Dependencies on Epic {{epic_number}} work
+ - New technical requirements or capabilities needed
+ - Potential risks or unknowns
+ - Business goals and success criteria
+
+Identify dependencies on completed work:
+
+- What components from Epic {{epic_number}} does Epic {{next_epic_num}} rely on?
+- Are all prerequisites complete and stable?
+- Any incomplete work that creates blocking dependencies?
+
+Note potential gaps or preparation needed:
+
+- Technical setup required (infrastructure, tools, libraries)
+- Knowledge gaps to fill (research, training, spikes)
+- Refactoring needed before starting next epic
+- Documentation or specifications to create
+
+Check for technical prerequisites:
+
+- APIs or integrations that must be ready
+- Data migrations or schema changes needed
+- Testing infrastructure requirements
+- Deployment or environment setup
+
+
+
+Set {{next_epic_exists}} = true
+
+
+
+
+
+Set {{next_epic_exists}} = false
+
+
+
+
+
+
+Load agent configurations from {agent_manifest}
+Identify which agents participated in Epic {{epic_number}} based on story records
+Ensure key roles present: Product Owner, Developer (facilitating), Testing/QA, Architect
+
+
+
+WAIT for {user_name} to respond or indicate readiness
+
+
+
+
+
+
+
+Amelia (Developer) naturally turns to {user_name} to engage them in the discussion
+
+
+
+WAIT for {user_name} to respond - this is a KEY USER INTERACTION moment
+
+After {user_name} responds, have 1-2 team members react to or build on what {user_name} shared
+
+
+
+Continue facilitating natural dialogue, periodically bringing {user_name} back into the conversation
+
+After covering successes, guide the transition to challenges with care
+
+
+
+WAIT for {user_name} to respond and help facilitate the conflict resolution
+
+Use {user_name}'s response to guide the discussion toward systemic understanding rather than blame
+
+
+
+Continue the discussion, weaving in patterns discovered from the deep story analysis (Step 2)
+
+
+
+WAIT for {user_name} to share their observations
+
+Continue the retrospective discussion, creating moments where:
+
+- Team members ask {user_name} questions directly
+- {user_name}'s input shifts the discussion direction
+- Disagreements arise naturally and get resolved
+- Quieter team members are invited to contribute
+- Specific stories are referenced with real examples
+- Emotions are authentic (frustration, pride, concern, hope)
+
+
+
+
+WAIT for {user_name} to respond
+
+Use the previous retro follow-through as a learning moment about commitment and accountability
+
+
+
+
+Allow team members to add any final thoughts on the epic review
+Ensure {user_name} has opportunity to add their perspective
+
+
+
+
+
+
+
+ Skip to Step 8
+
+
+
+
+WAIT for {user_name} to share their assessment
+
+Use {user_name}'s input to guide deeper exploration of preparation needs
+
+
+
+WAIT for {user_name} to provide direction on preparation approach
+
+Create space for debate and disagreement about priorities
+
+
+
+WAIT for {user_name} to validate or adjust the preparation strategy
+
+Continue working through preparation needs across all dimensions:
+
+- Dependencies on Epic {{epic_number}} work
+- Technical setup and infrastructure
+- Knowledge gaps and research needs
+- Documentation or specification work
+- Testing infrastructure
+- Refactoring or debt reduction
+- External dependencies (APIs, integrations, etc.)
+
+For each preparation area, facilitate team discussion that:
+
+- Identifies specific needs with concrete examples
+- Estimates effort realistically based on Epic {{epic_number}} experience
+- Assigns ownership to specific agents
+- Determines criticality and timing
+- Surfaces risks of NOT doing the preparation
+- Explores parallel work opportunities
+- Brings {user_name} in for key decisions
+
+
+
+WAIT for {user_name} final validation of preparation plan
+
+
+
+
+
+
+
+Synthesize themes from Epic {{epic_number}} review discussion into actionable improvements
+
+Create specific action items with:
+
+- Clear description of the action
+- Assigned owner (specific agent or role)
+- Timeline or deadline
+- Success criteria (how we'll know it's done)
+- Category (process, technical, documentation, team, etc.)
+
+Ensure action items are SMART:
+
+- Specific: Clear and unambiguous
+- Measurable: Can verify completion
+- Achievable: Realistic given constraints
+- Relevant: Addresses real issues from retro
+- Time-bound: Has clear deadline
+
+
+
+WAIT for {user_name} to help resolve priority discussions
+
+
+
+CRITICAL ANALYSIS - Detect if discoveries require epic updates
+
+Check if any of the following are true based on retrospective discussion:
+
+- Architectural assumptions from planning proven wrong during Epic {{epic_number}}
+- Major scope changes or descoping occurred that affects next epic
+- Technical approach needs fundamental change for Epic {{next_epic_num}}
+- Dependencies discovered that Epic {{next_epic_num}} doesn't account for
+- User needs significantly different than originally understood
+- Performance/scalability concerns that affect Epic {{next_epic_num}} design
+- Security or compliance issues discovered that change approach
+- Integration assumptions proven incorrect
+- Team capacity or skill gaps more severe than planned
+- Technical debt level unsustainable without intervention
+
+
+
+
+WAIT for {user_name} to decide on how to handle the significant changes
+
+Add epic review session to critical path if user agrees
+
+
+
+
+
+
+
+
+
+
+Give each agent with assignments a moment to acknowledge their ownership
+
+Ensure {user_name} approves the complete action plan
+
+
+
+
+
+
+
+Explore testing and quality state through natural conversation
+
+
+
+WAIT for {user_name} to describe testing status
+
+
+
+WAIT for {user_name} to assess quality readiness
+
+
+
+Add testing completion to critical path
+
+
+Explore deployment and release status
+
+
+
+WAIT for {user_name} to provide deployment status
+
+
+
+
+WAIT for {user_name} to clarify deployment timeline
+
+Add deployment milestone to critical path with agreed timeline
+
+
+Explore stakeholder acceptance
+
+
+
+WAIT for {user_name} to describe stakeholder acceptance status
+
+
+
+
+WAIT for {user_name} decision
+
+Add stakeholder acceptance to critical path if user agrees
+
+
+Explore technical health and stability
+
+
+
+WAIT for {user_name} to assess codebase health
+
+
+
+
+WAIT for {user_name} decision
+
+Add stability work to preparation sprint if user agrees
+
+
+Explore unresolved blockers
+
+
+
+WAIT for {user_name} to surface any blockers
+
+
+
+
+Assign blocker resolution to appropriate agent
+Add to critical path with priority and deadline
+
+
+Synthesize the readiness assessment
+
+
+
+WAIT for {user_name} to confirm or correct the assessment
+
+
+
+
+
+
+
+
+
+WAIT for {user_name} to share final reflections
+
+
+
+Prepare to save retrospective summary document
+
+
+
+
+
+Ensure retrospectives folder exists: {implementation_artifacts}
+Create folder if it doesn't exist
+
+Generate comprehensive retrospective summary document including:
+
+- Epic summary and metrics
+- Team participants
+- Successes and strengths identified
+- Challenges and growth areas
+- Key insights and learnings
+- Previous retro follow-through analysis (if applicable)
+- Next epic preview and dependencies
+- Action items with owners and timelines
+- Preparation tasks for next epic
+- Critical path items
+- Significant discoveries and epic update recommendations (if any)
+- Readiness assessment
+- Commitments and next steps
+
+Format retrospective document as readable markdown with clear sections
+Set filename: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md
+Save retrospective document
+
+
+
+Update {sprint_status_file} to mark retrospective as completed
+
+Load the FULL file: {sprint_status_file}
+Find development_status key "epic-{{epic_number}}-retrospective"
+Verify current status (typically "optional" or "pending")
+Update development_status["epic-{{epic_number}}-retrospective"] = "done"
+Update last_updated field to current date
+Save file, preserving ALL comments and structure including STATUS DEFINITIONS
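+
+As a sketch (the epic number and dates below are hypothetical), the edit amounts to flipping one key and the timestamp while every comment survives verbatim:
+
+```yaml
+# STATUS DEFINITIONS:            <- all comments preserved unchanged
+# ...
+last_updated: 2026-04-21         # was the previous run date
+
+development_status:
+  epic-3: done
+  epic-3-retrospective: done     # was "optional"
+```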
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
+
+
+PARTY MODE REQUIRED: All agent dialogue uses "Name (Role): dialogue" format
+Amelia (Developer) maintains psychological safety throughout - no blame or judgment
+Focus on systems and processes, not individual performance
+Create authentic team dynamics: disagreements, diverse perspectives, emotions
+User ({user_name}) is active participant, not passive observer
+Encourage specific examples over general statements
+Balance celebration of wins with honest assessment of challenges
+Ensure every voice is heard - all agents contribute
+Action items must be specific, achievable, and owned
+Forward-looking mindset - how do we improve for next epic?
+Intent-based facilitation, not scripted phrases
+Deep story analysis provides rich material for discussion
+Previous retro integration creates accountability and continuity
+Significant change detection prevents epic misalignment
+Critical verification prevents starting next epic prematurely
+Document everything - retrospective insights are valuable for future reference
+Two-part structure ensures both reflection AND preparation
+
diff --git a/src/workflows/4-production/gds-retrospective/customize.toml b/src/workflows/4-production/gds-retrospective/customize.toml
new file mode 100644
index 0000000..0b2ce89
--- /dev/null
+++ b/src/workflows/4-production/gds-retrospective/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-retrospective. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Retrospectives must focus on systems and process, not individuals."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 12 (Final Summary and Handoff),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-retrospective/workflow.md b/src/workflows/4-production/gds-retrospective/workflow.md
deleted file mode 100644
index a60b1a0..0000000
--- a/src/workflows/4-production/gds-retrospective/workflow.md
+++ /dev/null
@@ -1,1483 +0,0 @@
-# Retrospective Workflow
-
-**Goal:** Post-epic review to extract lessons and assess success.
-
-**Your Role:** Developer facilitating retrospective.
-- No time estimates — NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed.
-- Communicate all responses in {communication_language} and language MUST be tailored to {game_dev_experience}
-- Generate all documents in {document_output_language}
-- Document output: Retrospective analysis. Concise insights, lessons learned, action items. Game dev experience ({game_dev_experience}) affects conversation style ONLY, not retrospective content.
-- Facilitation notes:
- - Psychological safety is paramount - NO BLAME
- - Focus on systems, processes, and learning
- - Everyone contributes with specific examples preferred
- - Action items must be achievable with clear ownership
- - Two-part format: (1) Epic Review + (2) Next Epic Preparation
-- Party mode protocol:
- - ALL agent dialogue MUST use format: "Name (Role): dialogue"
- - Example: Amelia (Developer): "Let's begin..."
- - Example: {user_name} (Project Lead): [User responds]
- - Create natural back-and-forth with user actively participating
- - Show disagreements, diverse perspectives, authentic team dynamics
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `user_name`
-- `communication_language`, `document_output_language`
-- `game_dev_experience`
-- `planning_artifacts`, `implementation_artifacts`
-- `date` as system-generated current datetime
-- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml`
-
-### Input Files
-
-| Input | Description | Path Pattern(s) | Load Strategy |
-|-------|-------------|------------------|---------------|
-| epics | The completed epic for retrospective | whole: `{planning_artifacts}/*epic*.md`, sharded_index: `{planning_artifacts}/*epic*/index.md`, sharded_single: `{planning_artifacts}/*epic*/epic-{{epic_num}}.md` | SELECTIVE_LOAD |
-| previous_retrospective | Previous epic's retrospective (optional) | `{implementation_artifacts}/**/epic-{{prev_epic_num}}-retro-*.md` | SELECTIVE_LOAD |
-| architecture | System architecture for context | whole: `{planning_artifacts}/*architecture*.md`, sharded: `{planning_artifacts}/*architecture*/*.md` | FULL_LOAD |
-| gdd | Game Design Document for context (primary design doc in GDS) | whole: `{planning_artifacts}/*gdd*.md`, sharded: `{planning_artifacts}/*gdd*/*.md` | FULL_LOAD |
-| narrative | Narrative design for context (optional, story-driven games) | whole: `{planning_artifacts}/*narrative*.md`, sharded: `{planning_artifacts}/*narrative*/*.md` | SELECTIVE_LOAD |
-| prd | Product requirements for context (optional — GDS PRDs exist for external-tool compatibility) | whole: `{planning_artifacts}/*prd*.md`, sharded: `{planning_artifacts}/*prd*/*.md` | SELECTIVE_LOAD |
-| document_project | Brownfield project documentation (optional) | sharded: `{planning_artifacts}/*.md` | INDEX_GUIDED |
-
-### Required Inputs
-
-- `agent_manifest` = `{project-root}/_bmad/_config/agent-manifest.csv`
-
-### Context
-
-- `project_context` = `**/project-context.md` (load if exists)
-
----
-
-## EXECUTION
-
-
-
-
-
-Load {project_context} for project-wide patterns and conventions (if exists)
-Explain to {user_name} the epic discovery process using natural dialogue
-
-
-
-PRIORITY 1: Check {sprint_status_file} first
-
-Load the FULL file: {sprint_status_file}
-Read ALL development_status entries
-Find the highest epic number with at least one story marked "done"
-Extract epic number from keys like "epic-X-retrospective" or story keys like "X-Y-story-name"
-Set {{detected_epic}} = highest epic number found with completed stories
-
-
- Present finding to user with context
-
-
-
-WAIT for {user_name} to confirm or correct
-
-
- Set {{epic_number}} = {{detected_epic}}
-
-
-
- Set {{epic_number}} = user-provided number
-
-
-
-
-
- PRIORITY 2: Ask user directly
-
-
-
-WAIT for {user_name} to provide epic number
-Set {{epic_number}} = user-provided number
-
-
-
- PRIORITY 3: Fallback to stories folder
-
-Scan {implementation_artifacts} for highest numbered story files
-Extract epic numbers from story filenames (pattern: epic-X-Y-story-name.md)
-Set {{detected_epic}} = highest epic number found
-
-
-
-WAIT for {user_name} to confirm or correct
-Set {{epic_number}} = confirmed number
-
-
-Once {{epic_number}} is determined, verify epic completion status
-
-Find all stories for epic {{epic_number}} in {sprint_status_file}:
-
-- Look for keys starting with "{{epic_number}}-" (e.g., "1-1-", "1-2-", etc.)
-- Exclude epic key itself ("epic-{{epic_number}}")
-- Exclude retrospective key ("epic-{{epic_number}}-retrospective")
-
-
-Count total stories found for this epic
-Count stories with status = "done"
-Collect list of pending story keys (status != "done")
-Determine if complete: true if all stories are done, false otherwise
-
-
-
-
-Continue with incomplete epic? (yes/no)
-
-
-
- HALT
-
-
-Set {{partial_retrospective}} = true
-
-
-
-
-
-
-
-
-
-
- Load input files according to the Input Files table in INITIALIZATION. For SELECTIVE_LOAD inputs, load only the epic matching {{epic_number}}. For FULL_LOAD inputs, load the complete document. For INDEX_GUIDED inputs, check the index first and load relevant sections. After discovery, these content variables are available: {epics_content} (selective load for this epic), {architecture_content}, {gdd_content}, {narrative_content}, {prd_content}, {document_project_content}
- After discovery, these content variables are available: {epics_content} (selective load for this epic), {architecture_content}, {gdd_content}, {narrative_content}, {prd_content}, {document_project_content}
-
-
-
-
-
-
-For each story in epic {{epic_number}}, read the complete story file from {implementation_artifacts}/{{epic_number}}-{{story_num}}-*.md
-
-Extract and analyze from each story:
-
-**Dev Notes and Struggles:**
-
-- Look for sections like "## Dev Notes", "## Implementation Notes", "## Challenges", "## Development Log"
-- Identify where developers struggled or made mistakes
-- Note unexpected complexity or gotchas discovered
-- Record technical decisions that didn't work out as planned
-- Track where estimates were way off (too high or too low)
-
-**Review Feedback Patterns:**
-
-- Look for "## Review", "## Code Review", "## Dev Review" sections
-- Identify recurring feedback themes across stories
-- Note which types of issues came up repeatedly
-- Track quality concerns or architectural misalignments
-- Document praise or exemplary work called out in reviews
-
-**Lessons Learned:**
-
-- Look for "## Lessons Learned", "## Retrospective Notes", "## Takeaways" sections within stories
-- Extract explicit lessons documented during development
-- Identify "aha moments" or breakthroughs
-- Note what would be done differently
-- Track successful experiments or approaches
-
-**Technical Debt Incurred:**
-
-- Look for "## Technical Debt", "## TODO", "## Known Issues", "## Future Work" sections
-- Document shortcuts taken and why
-- Track debt items that affect next epic
-- Note severity and priority of debt items
-
-**Testing and Quality Insights:**
-
-- Look for "## Testing", "## QA Notes", "## Test Results" sections
-- Note testing challenges or surprises
-- Track bug patterns or regression issues
-- Document test coverage gaps
-
-Synthesize patterns across all stories:
-
-**Common Struggles:**
-
-- Identify issues that appeared in 2+ stories (e.g., "3 out of 5 stories had API authentication issues")
-- Note areas where team consistently struggled
-- Track where complexity was underestimated
-
-**Recurring Review Feedback:**
-
-- Identify feedback themes (e.g., "Error handling was flagged in every review")
-- Note quality patterns (positive and negative)
-- Track areas where team improved over the course of epic
-
-**Breakthrough Moments:**
-
-- Document key discoveries (e.g., "Story 3 discovered the caching pattern we used for rest of epic")
-- Note when team velocity improved dramatically
-- Track innovative solutions worth repeating
-
-**Velocity Patterns:**
-
-- Calculate average completion time per story
-- Note velocity trends (e.g., "First 2 stories took 3x longer than estimated")
-- Identify which types of stories went faster/slower
-
-**Team Collaboration Highlights:**
-
-- Note moments of excellent collaboration mentioned in stories
-- Track where pair programming or mob programming was effective
-- Document effective problem-solving sessions
-
-Store this synthesis - these patterns will drive the retrospective discussion
-
-
-
-
-
-
-
-Calculate previous epic number: {{prev_epic_num}} = {{epic_number}} - 1
-
-
- Search for previous retrospectives using pattern: {implementation_artifacts}/epic-{{prev_epic_num}}-retro-*.md
-
-
-
-
- Read the previous retrospectives
-
- Extract key elements:
- - **Action items committed**: What did the team agree to improve?
- - **Lessons learned**: What insights were captured?
- - **Process improvements**: What changes were agreed upon?
- - **Technical debt flagged**: What debt was documented?
- - **Team agreements**: What commitments were made?
- - **Preparation tasks**: What was needed for this epic?
-
- Cross-reference with current epic execution:
-
- **Action Item Follow-Through:**
- - For each action item from Epic {{prev_epic_num}} retro, check if it was completed
- - Look for evidence in current epic's story records
- - Mark each action item: ✅ Completed, ⏳ In Progress, ❌ Not Addressed
-
- **Lessons Applied:**
- - For each lesson from Epic {{prev_epic_num}}, check if team applied it in Epic {{epic_number}}
- - Look for evidence in dev notes, review feedback, or outcomes
- - Document successes and missed opportunities
-
- **Process Improvements Effectiveness:**
- - For each process change agreed to in Epic {{prev_epic_num}}, assess if it helped
- - Did the change improve velocity, quality, or team satisfaction?
- - Should we keep, modify, or abandon the change?
-
- **Technical Debt Status:**
- - For each debt item from Epic {{prev_epic_num}}, check if it was addressed
- - Did unaddressed debt cause problems in Epic {{epic_number}}?
- - Did the debt grow or shrink?
-
- Prepare "continuity insights" for the retrospective discussion
-
- Identify wins where previous lessons were applied successfully:
- - Document specific examples of applied learnings
- - Note positive impact on Epic {{epic_number}} outcomes
- - Celebrate team growth and improvement
-
- Identify missed opportunities where previous lessons were ignored:
- - Document where team repeated previous mistakes
- - Note impact of not applying lessons (without blame)
- - Explore barriers that prevented application
-
-
-
-
-
-
-
-Set {{first_retrospective}} = true
-
-
-
-
-
-Set {{first_retrospective}} = true
-
-
-
-
-
-
-Calculate next epic number: {{next_epic_num}} = {{epic_number}} + 1
-
-
-
-Attempt to load next epic using selective loading strategy:
-
-**Try sharded first (more specific):**
-Check if file exists: {planning_artifacts}/epic*/epic-{{next_epic_num}}.md
-
-
- Load {planning_artifacts}/*epic*/epic-{{next_epic_num}}.md
- Set {{next_epic_source}} = "sharded"
-
-
-**Fallback to whole document:**
-
-Check if file exists: {planning_artifacts}/epic*.md
-
-
- Load entire epics document
- Extract Epic {{next_epic_num}} section
- Set {{next_epic_source}} = "whole"
-
-
-
-
- Analyze next epic for:
- - Epic title and objectives
- - Planned stories and complexity estimates
- - Dependencies on Epic {{epic_number}} work
- - New technical requirements or capabilities needed
- - Potential risks or unknowns
- - Business goals and success criteria
-
-Identify dependencies on completed work:
-
-- What components from Epic {{epic_number}} does Epic {{next_epic_num}} rely on?
-- Are all prerequisites complete and stable?
-- Any incomplete work that creates blocking dependencies?
-
-Note potential gaps or preparation needed:
-
-- Technical setup required (infrastructure, tools, libraries)
-- Knowledge gaps to fill (research, training, spikes)
-- Refactoring needed before starting next epic
-- Documentation or specifications to create
-
-Check for technical prerequisites:
-
-- APIs or integrations that must be ready
-- Data migrations or schema changes needed
-- Testing infrastructure requirements
-- Deployment or environment setup
-
-
-
-Set {{next_epic_exists}} = true
-
-
-
-
-
-Set {{next_epic_exists}} = false
-
-
-
-
-
-
-Load agent configurations from {agent_manifest}
-Identify which agents participated in Epic {{epic_number}} based on story records
-Ensure key roles present: Product Owner, Developer (facilitating), Testing/QA, Architect
-
-
-
-WAIT for {user_name} to respond or indicate readiness
-
-
-
-
-
-
-
-Amelia (Developer) naturally turns to {user_name} to engage them in the discussion
-
-
-
-WAIT for {user_name} to respond - this is a KEY USER INTERACTION moment
-
-After {user_name} responds, have 1-2 team members react to or build on what {user_name} shared
-
-
-
-Continue facilitating natural dialogue, periodically bringing {user_name} back into the conversation
-
-After covering successes, guide the transition to challenges with care
-
-
-
-WAIT for {user_name} to respond and help facilitate the conflict resolution
-
-Use {user_name}'s response to guide the discussion toward systemic understanding rather than blame
-
-
-
-Continue the discussion, weaving in patterns discovered from the deep story analysis (Step 2)
-
-
-
-WAIT for {user_name} to share their observations
-
-Continue the retrospective discussion, creating moments where:
-
-- Team members ask {user_name} questions directly
-- {user_name}'s input shifts the discussion direction
-- Disagreements arise naturally and get resolved
-- Quieter team members are invited to contribute
-- Specific stories are referenced with real examples
-- Emotions are authentic (frustration, pride, concern, hope)
-
-
-
-
-WAIT for {user_name} to respond
-
-Use the previous retro follow-through as a learning moment about commitment and accountability
-
-
-
-
-Allow team members to add any final thoughts on the epic review
-Ensure {user_name} has opportunity to add their perspective
-
-
-
-
-
-
-
- Skip to Step 8
-
-
-
-
-WAIT for {user_name} to share their assessment
-
-Use {user_name}'s input to guide deeper exploration of preparation needs
-
-
-
-WAIT for {user_name} to provide direction on preparation approach
-
-Create space for debate and disagreement about priorities
-
-
-
-WAIT for {user_name} to validate or adjust the preparation strategy
-
-Continue working through preparation needs across all dimensions:
-
-- Dependencies on Epic {{epic_number}} work
-- Technical setup and infrastructure
-- Knowledge gaps and research needs
-- Documentation or specification work
-- Testing infrastructure
-- Refactoring or debt reduction
-- External dependencies (APIs, integrations, etc.)
-
-For each preparation area, facilitate team discussion that:
-
-- Identifies specific needs with concrete examples
-- Estimates effort realistically based on Epic {{epic_number}} experience
-- Assigns ownership to specific agents
-- Determines criticality and timing
-- Surfaces risks of NOT doing the preparation
-- Explores parallel work opportunities
-- Brings {user_name} in for key decisions
-
-
-
-WAIT for {user_name} final validation of preparation plan
-
-
-
-
-
-
-
-Synthesize themes from Epic {{epic_number}} review discussion into actionable improvements
-
-Create specific action items with:
-
-- Clear description of the action
-- Assigned owner (specific agent or role)
-- Timeline or deadline
-- Success criteria (how we'll know it's done)
-- Category (process, technical, documentation, team, etc.)
-
-Ensure action items are SMART:
-
-- Specific: Clear and unambiguous
-- Measurable: Can verify completion
-- Achievable: Realistic given constraints
-- Relevant: Addresses real issues from retro
-- Time-bound: Has clear deadline
-
-
-
-WAIT for {user_name} to help resolve priority discussions
-
-
-
-CRITICAL ANALYSIS - Detect if discoveries require epic updates
-
-Check if any of the following are true based on retrospective discussion:
-
-- Architectural assumptions from planning proven wrong during Epic {{epic_number}}
-- Major scope changes or descoping occurred that affects next epic
-- Technical approach needs fundamental change for Epic {{next_epic_num}}
-- Dependencies discovered that Epic {{next_epic_num}} doesn't account for
-- User needs significantly different than originally understood
-- Performance/scalability concerns that affect Epic {{next_epic_num}} design
-- Security or compliance issues discovered that change approach
-- Integration assumptions proven incorrect
-- Team capacity or skill gaps more severe than planned
-- Technical debt level unsustainable without intervention
-
-
-
-
-WAIT for {user_name} to decide on how to handle the significant changes
-
-Add epic review session to critical path if user agrees
-
-
-
-
-
-
-
-
-
-
-Give each agent with assignments a moment to acknowledge their ownership
-
-Ensure {user_name} approves the complete action plan
-
-
-
-
-
-
-
-Explore testing and quality state through natural conversation
-
-
-
-WAIT for {user_name} to describe testing status
-
-
-
-WAIT for {user_name} to assess quality readiness
-
-
-
-Add testing completion to critical path
-
-
-Explore deployment and release status
-
-
-
-WAIT for {user_name} to provide deployment status
-
-
-
-
-WAIT for {user_name} to clarify deployment timeline
-
-Add deployment milestone to critical path with agreed timeline
-
-
-Explore stakeholder acceptance
-
-
-
-WAIT for {user_name} to describe stakeholder acceptance status
-
-
-
-
-WAIT for {user_name} decision
-
-Add stakeholder acceptance to critical path if user agrees
-
-
-Explore technical health and stability
-
-
-
-WAIT for {user_name} to assess codebase health
-
-
-
-
-WAIT for {user_name} decision
-
-Add stability work to preparation sprint if user agrees
-
-
-Explore unresolved blockers
-
-
-
-WAIT for {user_name} to surface any blockers
-
-
-
-
-Assign blocker resolution to appropriate agent
-Add to critical path with priority and deadline
-
-
-Synthesize the readiness assessment
-
-
-
-WAIT for {user_name} to confirm or correct the assessment
-
-
-
-
-
-
-
-
-
-WAIT for {user_name} to share final reflections
-
-
-
-Prepare to save retrospective summary document
-
-
-
-
-
-Ensure retrospectives folder exists: {implementation_artifacts}
-Create folder if it doesn't exist
-
-Generate comprehensive retrospective summary document including:
-
-- Epic summary and metrics
-- Team participants
-- Successes and strengths identified
-- Challenges and growth areas
-- Key insights and learnings
-- Previous retro follow-through analysis (if applicable)
-- Next epic preview and dependencies
-- Action items with owners and timelines
-- Preparation tasks for next epic
-- Critical path items
-- Significant discoveries and epic update recommendations (if any)
-- Readiness assessment
-- Commitments and next steps
-
-Format retrospective document as readable markdown with clear sections
-Set filename: {implementation_artifacts}/epic-{{epic_number}}-retro-{date}.md
-Save retrospective document
-
-
-
-Update {sprint_status_file} to mark retrospective as completed
-
-Load the FULL file: {sprint_status_file}
-Find development_status key "epic-{{epic_number}}-retrospective"
-Verify current status (typically "optional" or "pending")
-Update development_status["epic-{{epic_number}}-retrospective"] = "done"
-Update last_updated field to current date
-Save file, preserving ALL comments and structure including STATUS DEFINITIONS
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-PARTY MODE REQUIRED: All agent dialogue uses "Name (Role): dialogue" format
-Amelia (Developer) maintains psychological safety throughout - no blame or judgment
-Focus on systems and processes, not individual performance
-Create authentic team dynamics: disagreements, diverse perspectives, emotions
-User ({user_name}) is active participant, not passive observer
-Encourage specific examples over general statements
-Balance celebration of wins with honest assessment of challenges
-Ensure every voice is heard - all agents contribute
-Action items must be specific, achievable, and owned
-Forward-looking mindset - how do we improve for next epic?
-Intent-based facilitation, not scripted phrases
-Deep story analysis provides rich material for discussion
-Previous retro integration creates accountability and continuity
-Significant change detection prevents epic misalignment
-Critical verification prevents starting next epic prematurely
-Document everything - retrospective insights are valuable for future reference
-Two-part structure ensures both reflection AND preparation
-
diff --git a/src/workflows/4-production/gds-sprint-planning/SKILL.md b/src/workflows/4-production/gds-sprint-planning/SKILL.md
index 2dd62c2..6d92210 100644
--- a/src/workflows/4-production/gds-sprint-planning/SKILL.md
+++ b/src/workflows/4-production/gds-sprint-planning/SKILL.md
@@ -3,4 +3,302 @@ name: gds-sprint-planning
description: 'Generate sprint status tracking from epics. Use when the user says "run sprint planning" or "generate sprint plan"'
---
-Follow the instructions in ./workflow.md.
+# Sprint Planning Workflow
+
+**Goal:** Generate sprint status tracking from epics, detecting current story statuses and building a complete sprint-status.yaml file.
+
+**Your Role:** You are a Developer generating and maintaining sprint tracking. Parse epic files, detect story statuses, and produce a structured sprint-status.yaml.
+
+
+## Paths
+
+- `tracking_system` = `file-system`
+- `project_key` = `NOKEY`
+- `story_location` = `{implementation_artifacts}`
+- `story_location_absolute` = `{implementation_artifacts}`
+- `epics_location` = `{planning_artifacts}`
+- `epics_pattern` = `*epic*.md`
+- `status_file` = `{implementation_artifacts}/sprint-status.yaml`
+
+## Input Files
+
+| Input | Path | Load Strategy |
+|-------|------|---------------|
+| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD |
+
+## Context
+
+- `project_context` = `**/project-context.md` (load if exists)
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
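+
+A minimal worked example of those rules (all values hypothetical):
+
+```toml
+# 1. {skill-root}/customize.toml (base)
+[workflow]
+on_complete = ""
+persistent_facts = ["file:{project-root}/**/project-context.md"]
+
+# 3. {project-root}/_bmad/custom/{skill-name}.user.toml (personal)
+[workflow]
+on_complete = "Print a one-line summary of the generated plan."
+persistent_facts = ["Sprint keys always use kebab-case."]
+
+# Resolved: on_complete takes the user value (scalar override);
+# persistent_facts contains BOTH entries (plain arrays append).
+```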
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`
+- `user_name`
+- `communication_language`
+- `planning_artifacts`
+- `implementation_artifacts`
+- `date` as the system-generated current datetime
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## EXECUTION
+
+### Document Discovery - Full Epic Loading
+
+**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking.
+
+**Epic Discovery Process:**
+
+1. **Search for whole document first** - Look for `epics.md`, `gds-epics.md`, or any `*epic*.md` file
+2. **Check for sharded version** - If whole document not found, look for `epics/index.md`
+3. **If sharded version found**:
+ - Read `index.md` to understand the document structure
+ - Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.)
+ - Process all epics and their stories from the combined content
+ - This ensures complete sprint status coverage
+4. **Priority**: If both whole and sharded versions exist, use the whole document
+
+**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `gds-epics.md`, `user-stories.md`, etc.
+
+
+
+
+Load {project_context} for project-wide patterns and conventions (if exists)
+Communicate in {communication_language} with {user_name}
+Look for all files matching `{epics_pattern}` in {epics_location}
+Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files
+
+For each epic file found, extract:
+
+- Epic numbers from headers like `## Epic 1:` or `## Epic 2:`
+- Story IDs and titles from patterns like `### Story 1.1: User Authentication`
+- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title`
+
+**Story ID Conversion Rules:**
+
+- Original: `### Story 1.1: User Authentication`
+- Replace period with dash: `1-1`
+- Convert title to kebab-case: `user-authentication`
+- Final key: `1-1-user-authentication`
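+
+Two more conversions under the same rules (story titles are hypothetical):
+
+```text
+### Story 2.4: Inventory UI     ->  2-4-inventory-ui
+### Story 10.2: Boss AI Tuning  ->  10-2-boss-ai-tuning
+```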
+
+Build complete inventory of all epics and stories from all epic files
+
+
+
+For each epic found, create entries in this order:
+
+1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog`
+2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog`
+3. **Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional`
+
+**Example structure:**
+
+```yaml
+development_status:
+ epic-1: backlog
+ 1-1-user-authentication: backlog
+ 1-2-account-management: backlog
+ epic-1-retrospective: optional
+```
+
+
+
+
+For each story, detect current status by checking files:
+
+**Story file detection:**
+
+- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`)
+- If exists → upgrade status to at least `ready-for-dev`
+
+**Preservation rule:**
+
+- If `{status_file}` already exists with a more advanced status, preserve it
+- Never downgrade status (e.g., don't change `done` to `ready-for-dev`)
+
+**Status Flow Reference:**
+
+- Epic: `backlog` → `in-progress` → `done`
+- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done`
+- Retrospective: `optional` ↔ `done`
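
The preservation rule amounts to comparing positions in the status flow; a minimal sketch (names illustrative):

```python
# Story statuses in flow order; later entries are "more advanced".
STORY_ORDER = ["backlog", "ready-for-dev", "in-progress", "review", "done"]

def merge_status(detected, existing=None):
    """Keep whichever status is further along the flow; never downgrade."""
    if existing is None:
        return detected
    return max(detected, existing, key=STORY_ORDER.index)

# merge_status("ready-for-dev", "done") → "done" (existing status preserved)
```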
+
+
+
+Create or update {status_file} with:
+
+**File Structure:**
+
+```yaml
+# generated: {date}
+# last_updated: {date}
+# project: {project_name}
+# project_key: {project_key}
+# tracking_system: {tracking_system}
+# story_location: {story_location}
+
+# STATUS DEFINITIONS:
+# ==================
+# Epic Status:
+# - backlog: Epic not yet started
+# - in-progress: Epic actively being worked on
+# - done: All stories in epic completed
+#
+# Epic Status Transitions:
+# - backlog → in-progress: Automatically when first story is created (via create-story)
+# - in-progress → done: Manually when all stories reach 'done' status
+#
+# Story Status:
+# - backlog: Story only exists in epic file
+# - ready-for-dev: Story file created in stories folder
+# - in-progress: Developer actively working on implementation
+# - review: Ready for code review (via Dev's code-review workflow)
+# - done: Story completed
+#
+# Retrospective Status:
+# - optional: Can be completed but not required
+# - done: Retrospective has been completed
+#
+# WORKFLOW NOTES:
+# ===============
+# - Epic transitions to 'in-progress' automatically when first story is created
+# - Stories can be worked in parallel if team capacity allows
+# - Developer typically creates next story after previous one is 'done' to incorporate learnings
+# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended)
+
+generated: {date}
+last_updated: {date}
+project: {project_name}
+project_key: {project_key}
+tracking_system: {tracking_system}
+story_location: {story_location}
+
+development_status:
+ # All epics, stories, and retrospectives in order
+```
+
+Write the complete sprint status YAML to {status_file}
+CRITICAL: Metadata appears TWICE: once as comments (`#`) for documentation, and once as YAML `key: value` fields for parsing
+Ensure all items are ordered: epic, its stories, its retrospective, next epic...
+
+
+
+Perform validation checks:
+
+- [ ] Every epic in epic files appears in {status_file}
+- [ ] Every story in epic files appears in {status_file}
+- [ ] Every epic has a corresponding retrospective entry
+- [ ] No items in {status_file} that don't exist in epic files
+- [ ] All status values are legal (match state machine definitions)
+- [ ] File is valid YAML syntax
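
The legal-status check can be sketched as follows (a hedged illustration; the authoritative definitions are the state machines above):

```python
VALID = {
    "epic": {"backlog", "in-progress", "done"},
    "story": {"backlog", "ready-for-dev", "in-progress", "review", "done"},
    "retrospective": {"optional", "done"},
}

def kind(key):
    if key.endswith("-retrospective"):
        return "retrospective"
    return "epic" if key.startswith("epic-") else "story"

def illegal_statuses(development_status):
    """Return (key, status) pairs whose status is not legal for the key's kind."""
    return [(k, v) for k, v in development_status.items() if v not in VALID[kind(k)]]
```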
+
+Count totals:
+
+- Total epics: {{epic_count}}
+- Total stories: {{story_count}}
+- Epics in-progress: {{in_progress_count}}
+- Stories done: {{done_count}}
+
+Display completion summary to {user_name} in {communication_language}:
+
+**Sprint Status Generated Successfully**
+
+- **File Location:** {status_file}
+- **Total Epics:** {{epic_count}}
+- **Total Stories:** {{story_count}}
+- **Epics In Progress:** {{in_progress_count}}
+- **Stories Completed:** {{done_count}}
+
+**Next Steps:**
+
+1. Review the generated {status_file}
+2. Use this file to track development progress
+3. Agents will update statuses as they work
+4. Re-run this workflow to refresh auto-detected statuses
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
+
+## Additional Documentation
+
+### Status State Machine
+
+**Epic Status Flow:**
+
+```
+backlog → in-progress → done
+```
+
+- **backlog**: Epic not yet started
+- **in-progress**: Epic actively being worked on (stories being created/implemented)
+- **done**: All stories in epic completed
+
+**Story Status Flow:**
+
+```
+backlog → ready-for-dev → in-progress → review → done
+```
+
+- **backlog**: Story only exists in epic file
+- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`)
+- **in-progress**: Developer actively working
+- **review**: Ready for code review (via Dev's code-review workflow)
+- **done**: Completed
+
+**Retrospective Status:**
+
+```
+optional ↔ done
+```
+
+- **optional**: Ready to be conducted but not required
+- **done**: Finished
+
+### Guidelines
+
+1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story
+2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported
+3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows
+4. **Review Before Done**: Stories should pass through `review` before `done`
+5. **Learning Transfer**: Developer typically creates next story after previous one is `done` to incorporate learnings
diff --git a/src/workflows/4-production/gds-sprint-planning/customize.toml b/src/workflows/4-production/gds-sprint-planning/customize.toml
new file mode 100644
index 0000000..8e29a45
--- /dev/null
+++ b/src/workflows/4-production/gds-sprint-planning/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-sprint-planning. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Sprint plans must be driven by epic priority, not by whatever is loudest in the moment."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 5 (Validate and report),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-sprint-planning/workflow.md b/src/workflows/4-production/gds-sprint-planning/workflow.md
deleted file mode 100644
index 4731f34..0000000
--- a/src/workflows/4-production/gds-sprint-planning/workflow.md
+++ /dev/null
@@ -1,263 +0,0 @@
-# Sprint Planning Workflow
-
-**Goal:** Generate sprint status tracking from epics, detecting current story statuses and building a complete sprint-status.yaml file.
-
-**Your Role:** You are a Developer generating and maintaining sprint tracking. Parse epic files, detect story statuses, and produce a structured sprint-status.yaml.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `user_name`
-- `communication_language`, `document_output_language`
-- `implementation_artifacts`
-- `planning_artifacts`
-- `date` as system-generated current datetime
-- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `tracking_system` = `file-system`
-- `project_key` = `NOKEY`
-- `story_location` = `{implementation_artifacts}`
-- `story_location_absolute` = `{implementation_artifacts}`
-- `epics_location` = `{planning_artifacts}`
-- `epics_pattern` = `*epic*.md`
-- `status_file` = `{implementation_artifacts}/sprint-status.yaml`
-
-### Input Files
-
-| Input | Path | Load Strategy |
-|-------|------|---------------|
-| Epics | `{planning_artifacts}/*epic*.md` (whole) or `{planning_artifacts}/*epic*/*.md` (sharded) | FULL_LOAD |
-
-### Context
-
-- `project_context` = `**/project-context.md` (load if exists)
-
----
-
-## EXECUTION
-
-### Document Discovery - Full Epic Loading
-
-**Strategy**: Sprint planning needs ALL epics and stories to build complete status tracking.
-
-**Epic Discovery Process:**
-
-1. **Search for whole document first** - Look for `epics.md`, `gds-epics.md`, or any `*epic*.md` file
-2. **Check for sharded version** - If whole document not found, look for `epics/index.md`
-3. **If sharded version found**:
- - Read `index.md` to understand the document structure
- - Read ALL epic section files listed in the index (e.g., `epic-1.md`, `epic-2.md`, etc.)
- - Process all epics and their stories from the combined content
- - This ensures complete sprint status coverage
-4. **Priority**: If both whole and sharded versions exist, use the whole document
-
-**Fuzzy matching**: Be flexible with document names - users may use variations like `epics.md`, `gds-epics.md`, `user-stories.md`, etc.
-
-
-
-
-Load {project_context} for project-wide patterns and conventions (if exists)
-Communicate in {communication_language} with {user_name}
-Look for all files matching `{epics_pattern}` in {epics_location}
-Could be a single `epics.md` file or multiple `epic-1.md`, `epic-2.md` files
-
-For each epic file found, extract:
-
-- Epic numbers from headers like `## Epic 1:` or `## Epic 2:`
-- Story IDs and titles from patterns like `### Story 1.1: User Authentication`
-- Convert story format from `Epic.Story: Title` to kebab-case key: `epic-story-title`
-
-**Story ID Conversion Rules:**
-
-- Original: `### Story 1.1: User Authentication`
-- Replace period with dash: `1-1`
-- Convert title to kebab-case: `user-authentication`
-- Final key: `1-1-user-authentication`
-
-Build complete inventory of all epics and stories from all epic files
-
-
-
-For each epic found, create entries in this order:
-
-1. **Epic entry** - Key: `epic-{num}`, Default status: `backlog`
-2. **Story entries** - Key: `{epic}-{story}-{title}`, Default status: `backlog`
-3. **Retrospective entry** - Key: `epic-{num}-retrospective`, Default status: `optional`
-
-**Example structure:**
-
-```yaml
-development_status:
- epic-1: backlog
- 1-1-user-authentication: backlog
- 1-2-account-management: backlog
- epic-1-retrospective: optional
-```
-
-
-
-
-For each story, detect current status by checking files:
-
-**Story file detection:**
-
-- Check: `{story_location_absolute}/{story-key}.md` (e.g., `stories/1-1-user-authentication.md`)
-- If exists → upgrade status to at least `ready-for-dev`
-
-**Preservation rule:**
-
-- If existing `{status_file}` exists and has more advanced status, preserve it
-- Never downgrade status (e.g., don't change `done` to `ready-for-dev`)
-
-**Status Flow Reference:**
-
-- Epic: `backlog` → `in-progress` → `done`
-- Story: `backlog` → `ready-for-dev` → `in-progress` → `review` → `done`
-- Retrospective: `optional` ↔ `done`
-
-
-
-Create or update {status_file} with:
-
-**File Structure:**
-
-```yaml
-# generated: {date}
-# last_updated: {date}
-# project: {project_name}
-# project_key: {project_key}
-# tracking_system: {tracking_system}
-# story_location: {story_location}
-
-# STATUS DEFINITIONS:
-# ==================
-# Epic Status:
-# - backlog: Epic not yet started
-# - in-progress: Epic actively being worked on
-# - done: All stories in epic completed
-#
-# Epic Status Transitions:
-# - backlog → in-progress: Automatically when first story is created (via create-story)
-# - in-progress → done: Manually when all stories reach 'done' status
-#
-# Story Status:
-# - backlog: Story only exists in epic file
-# - ready-for-dev: Story file created in stories folder
-# - in-progress: Developer actively working on implementation
-# - review: Ready for code review (via Dev's code-review workflow)
-# - done: Story completed
-#
-# Retrospective Status:
-# - optional: Can be completed but not required
-# - done: Retrospective has been completed
-#
-# WORKFLOW NOTES:
-# ===============
-# - Epic transitions to 'in-progress' automatically when first story is created
-# - Stories can be worked in parallel if team capacity allows
-# - Developer typically creates next story after previous one is 'done' to incorporate learnings
-# - Dev moves story to 'review', then runs code-review (fresh context, different LLM recommended)
-
-generated: { date }
-last_updated: { date }
-project: { project_name }
-project_key: { project_key }
-tracking_system: { tracking_system }
-story_location: { story_location }
-
-development_status:
- # All epics, stories, and retrospectives in order
-```
-
-Write the complete sprint status YAML to {status_file}
-CRITICAL: Metadata appears TWICE - once as comments (#) for documentation, once as YAML key:value fields for parsing
-Ensure all items are ordered: epic, its stories, its retrospective, next epic...
-
-
-
-Perform validation checks:
-
-- [ ] Every epic in epic files appears in {status_file}
-- [ ] Every story in epic files appears in {status_file}
-- [ ] Every epic has a corresponding retrospective entry
-- [ ] No items in {status_file} that don't exist in epic files
-- [ ] All status values are legal (match state machine definitions)
-- [ ] File is valid YAML syntax
-
-Count totals:
-
-- Total epics: {{epic_count}}
-- Total stories: {{story_count}}
-- Epics in-progress: {{in_progress_count}}
-- Stories done: {{done_count}}
-
-Display completion summary to {user_name} in {communication_language}:
-
-**Sprint Status Generated Successfully**
-
-- **File Location:** {status_file}
-- **Total Epics:** {{epic_count}}
-- **Total Stories:** {{story_count}}
-- **Epics In Progress:** {{in_progress_count}}
-- **Stories Completed:** {{done_count}}
-
-**Next Steps:**
-
-1. Review the generated {status_file}
-2. Use this file to track development progress
-3. Agents will update statuses as they work
-4. Re-run this workflow to refresh auto-detected statuses
-
-
-
-
-
-## Additional Documentation
-
-### Status State Machine
-
-**Epic Status Flow:**
-
-```
-backlog → in-progress → done
-```
-
-- **backlog**: Epic not yet started
-- **in-progress**: Epic actively being worked on (stories being created/implemented)
-- **done**: All stories in epic completed
-
-**Story Status Flow:**
-
-```
-backlog → ready-for-dev → in-progress → review → done
-```
-
-- **backlog**: Story only exists in epic file
-- **ready-for-dev**: Story file created (e.g., `stories/1-3-plant-naming.md`)
-- **in-progress**: Developer actively working
-- **review**: Ready for code review (via Dev's code-review workflow)
-- **done**: Completed
-
-**Retrospective Status:**
-
-```
-optional ↔ done
-```
-
-- **optional**: Ready to be conducted but not required
-- **done**: Finished
-
-### Guidelines
-
-1. **Epic Activation**: Mark epic as `in-progress` when starting work on its first story
-2. **Sequential Default**: Stories are typically worked in order, but parallel work is supported
-3. **Parallel Work Supported**: Multiple stories can be `in-progress` if team capacity allows
-4. **Review Before Done**: Stories should pass through `review` before `done`
-5. **Learning Transfer**: Developer typically creates next story after previous one is `done` to incorporate learnings
diff --git a/src/workflows/4-production/gds-sprint-status/SKILL.md b/src/workflows/4-production/gds-sprint-status/SKILL.md
index 64c4988..a47c052 100644
--- a/src/workflows/4-production/gds-sprint-status/SKILL.md
+++ b/src/workflows/4-production/gds-sprint-status/SKILL.md
@@ -3,4 +3,304 @@ name: gds-sprint-status
description: 'Summarize sprint status and surface risks. Use when the user says "check sprint status" or "show sprint status"'
---
-Follow the instructions in ./workflow.md.
+# Sprint Status Workflow
+
+**Goal:** Summarize sprint status, surface risks, and recommend the next workflow action.
+
+**Your Role:** You are a Developer providing clear, actionable sprint visibility. No time estimates — focus on status, risks, and next steps.
+
+
+## Paths
+
+- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml`
+
+## Input Files
+
+| Input | Path | Load Strategy |
+|-------|------|---------------|
+| Sprint status | `{sprint_status_file}` | FULL_LOAD |
+
+## Context
+
+- `project_context` = `**/project-context.md` (load if exists)
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
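
A minimal sketch of those merge rules over plain parsed-TOML dicts (the keyed arrays-of-tables case is omitted for brevity, and `resolve_customization.py` remains the source of truth):

```python
def merge(base, override):
    """base → override merge: scalars override, tables deep-merge, arrays append."""
    out = dict(base)
    for key, val in override.items():
        cur = out.get(key)
        if isinstance(cur, dict) and isinstance(val, dict):
            out[key] = merge(cur, val)   # tables deep-merge
        elif isinstance(cur, list) and isinstance(val, list):
            out[key] = cur + val         # plain arrays append
        else:
            out[key] = val               # scalars: override wins
    return out
```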
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
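
Expanding `file:` entries might look roughly like this (a sketch under the assumption that `{project-root}` is stripped before globbing; names are illustrative):

```python
from pathlib import Path

def load_facts(entries, project_root):
    """Resolve persistent_facts: literals pass through, file: entries are globbed."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            pattern = entry[len("file:"):].replace("{project-root}/", "")
            for path in sorted(Path(project_root).glob(pattern)):
                facts.append(path.read_text())
        else:
            facts.append(entry)  # literal fact, used verbatim
    return facts
```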
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `implementation_artifacts`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## EXECUTION
+
+
+
+
+ Set mode = {{mode}} if provided by caller; otherwise mode = "interactive"
+
+
+ Jump to Step 20
+
+
+
+ Jump to Step 30
+
+
+
+ Continue to Step 1
+
+
+
+
+ Load {project_context} for project-wide patterns and conventions (if exists)
+ Try {sprint_status_file}
+
+
+ Exit workflow
+
+ Continue to Step 2
+
+
+
+ Read the FULL file: {sprint_status_file}
+ Parse fields: generated, last_updated, project, project_key, tracking_system, story_location
+ Parse development_status map. Classify keys:
+ - Epics: keys starting with "epic-" (and not ending with "-retrospective")
+ - Retrospectives: keys ending with "-retrospective"
+ - Stories: everything else (e.g., 1-2-login-form)
+ Map legacy story status "drafted" → "ready-for-dev"
+ Count story statuses: backlog, ready-for-dev, in-progress, review, done
+ Map legacy epic status "contexted" → "in-progress"
+ Count epic statuses: backlog, in-progress, done
+ Count retrospective statuses: optional, done
+
+Validate all statuses against known values:
+
+- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy)
+- Valid epic statuses: backlog, in-progress, done, contexted (legacy)
+- Valid retrospective statuses: optional, done
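
Classification, legacy mapping, and counting together can be sketched as (illustrative names):

```python
from collections import Counter

LEGACY_STORY = {"drafted": "ready-for-dev"}
LEGACY_EPIC = {"contexted": "in-progress"}

def classify(key):
    if key.endswith("-retrospective"):
        return "retrospective"
    return "epic" if key.startswith("epic-") else "story"

def tally(development_status):
    """Count statuses per kind, mapping legacy values first."""
    counts = {"epic": Counter(), "story": Counter(), "retrospective": Counter()}
    for key, status in development_status.items():
        k = classify(key)
        status = (LEGACY_EPIC if k == "epic" else LEGACY_STORY).get(status, status)
        counts[k][status] += 1
    return counts
```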
+
+
+
+ How should these be corrected?
+ {{#each invalid_entries}}
+ {{@index}}. {{key}}: "{{status}}" → [select valid status]
+ {{/each}}
+
+Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing:
+
+Update sprint-status.yaml with corrected values
+Re-parse the file with corrected statuses
+
+
+
+Detect risks:
+
+- IF any story has status "review": suggest `/bmad:gds:workflows:code-review`
+- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story
+- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:gds:workflows:create-story`
+- IF `last_updated` timestamp is more than 7 days old (if `last_updated` is missing, fall back to `generated`): warn "sprint-status.yaml may be stale"
+- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." but no "epic-5"): warn "orphaned story detected"
+- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories"
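
The staleness rule, for instance, might look like this (a sketch assuming ISO-formatted timestamps):

```python
from datetime import datetime, timedelta

def is_stale(meta, now, max_age_days=7):
    """True when the file's freshest timestamp is older than max_age_days."""
    stamp = meta.get("last_updated") or meta.get("generated")
    if stamp is None:
        return True  # no timestamps at all: treat as stale
    return now - datetime.fromisoformat(str(stamp)) > timedelta(days=max_age_days)
```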
+
+
+
+ Pick the next recommended workflow using priority:
+ When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1)
+ 1. If any story status == in-progress → recommend `dev-story` for the first in-progress story (next_agent = DEVELOPER)
+ 2. Else if any story status == review → recommend `code-review` for the first review story (next_agent = REVIEWER)
+ 3. Else if any story status == ready-for-dev → recommend `dev-story` (next_agent = DEVELOPER)
+ 4. Else if any story status == backlog → recommend `create-story` (next_agent = STORY_AUTHOR)
+ 5. Else if any retrospective status == optional → recommend `retrospective` (next_agent = FACILITATOR)
+ 6. Else → All implementation items done; congratulate the user - you both did amazing work together!
+ Store selected recommendation as: next_story_id, next_workflow_id, next_agent. Note: in the consolidated GDS Phase 4 model, all role labels (DEVELOPER, REVIEWER, STORY_AUTHOR, FACILITATOR) map to the same agent (gds-agent-game-dev / Link Freeman). The labels are kept for downstream consumers that distinguish them semantically.
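
The priority cascade plus the epic-then-story sort can be sketched as (names illustrative; rules 5-6 are left to the caller):

```python
import re

PRIORITY = [
    ("in-progress", "dev-story", "DEVELOPER"),
    ("review", "code-review", "REVIEWER"),
    ("ready-for-dev", "dev-story", "DEVELOPER"),
    ("backlog", "create-story", "STORY_AUTHOR"),
]

def sort_key(story_id):
    # Keys look like "1-2-login-form": epic number, then story number.
    epic, story = re.match(r"(\d+)-(\d+)", story_id).groups()
    return int(epic), int(story)

def recommend(stories):
    """stories: {story_key: status}. Returns (story, workflow, agent) or None."""
    for status, workflow, agent in PRIORITY:
        matching = sorted((k for k, v in stories.items() if v == status), key=sort_key)
        if matching:
            return matching[0], workflow, agent
    return None  # only retrospectives (or nothing) remain; handled by rules 5-6
```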
+
+
+
+
+
+
+
+ Pick an option:
+1) Run recommended workflow now
+2) Show all stories grouped by status
+3) Show raw sprint-status.yaml
+4) Exit
+Choice:
+
+
+
+
+
+
+
+
+
+
+ Display the full contents of {sprint_status_file}
+
+
+
+ Exit workflow
+
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
+
+
+
+
+ Load and parse {sprint_status_file} same as Step 2
+ Compute recommendation same as Step 3
+ next_workflow_id = {{next_workflow_id}}
+ next_story_id = {{next_story_id}}
+ next_agent = {{next_agent}}
+ count_backlog = {{count_backlog}}
+ count_ready = {{count_ready}}
+ count_in_progress = {{count_in_progress}}
+ count_review = {{count_review}}
+ count_done = {{count_done}}
+ epic_backlog = {{epic_backlog}}
+ epic_in_progress = {{epic_in_progress}}
+ epic_done = {{epic_done}}
+ risks = {{risks}}
+ Return to caller
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
+
+
+
+
+ Check that {sprint_status_file} exists
+
+ is_valid = false
+ error = "sprint-status.yaml missing"
+ suggestion = "Run sprint-planning to create it"
+ Return
+
+
+Read and parse {sprint_status_file}
+
+Validate required metadata fields exist: generated, project, project_key, tracking_system, story_location (last_updated is optional for backward compatibility)
+
+is_valid = false
+error = "Missing required field(s): {{missing_fields}}"
+suggestion = "Re-run sprint-planning or add missing fields manually"
+Return
+
+
+Verify development_status section exists with at least one entry
+
+is_valid = false
+error = "development_status missing or empty"
+suggestion = "Re-run sprint-planning or repair the file manually"
+Return
+
+
+Validate all status values against known valid statuses:
+
+- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted)
+- Epics: backlog, in-progress, done (legacy: contexted)
+- Retrospectives: optional, done
+
+ is_valid = false
+ error = "Invalid status values: {{invalid_entries}}"
+ suggestion = "Fix invalid statuses in sprint-status.yaml"
+ Return
+
+
+is_valid = true
+message = "sprint-status.yaml valid: metadata complete, all statuses recognized"
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
diff --git a/src/workflows/4-production/gds-sprint-status/customize.toml b/src/workflows/4-production/gds-sprint-status/customize.toml
new file mode 100644
index 0000000..8c5d136
--- /dev/null
+++ b/src/workflows/4-production/gds-sprint-status/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-sprint-status. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Sprint status must report reality — including blockers and slippage — not aspiration."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 30 (Validate sprint-status file),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/4-production/gds-sprint-status/workflow.md b/src/workflows/4-production/gds-sprint-status/workflow.md
deleted file mode 100644
index 5c1023e..0000000
--- a/src/workflows/4-production/gds-sprint-status/workflow.md
+++ /dev/null
@@ -1,262 +0,0 @@
-# Sprint Status Workflow
-
-**Goal:** Summarize sprint status, surface risks, and recommend the next workflow action.
-
-**Your Role:** You are a Developer providing clear, actionable sprint visibility. No time estimates — focus on status, risks, and next steps.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `user_name`
-- `communication_language`, `document_output_language`
-- `implementation_artifacts`
-- `date` as system-generated current datetime
-- YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `sprint_status_file` = `{implementation_artifacts}/sprint-status.yaml`
-
-### Input Files
-
-| Input | Path | Load Strategy |
-|-------|------|---------------|
-| Sprint status | `{sprint_status_file}` | FULL_LOAD |
-
-### Context
-
-- `project_context` = `**/project-context.md` (load if exists)
-
----
-
-## EXECUTION
-
-
-
-
- Set mode = {{mode}} if provided by caller; otherwise mode = "interactive"
-
-
- Jump to Step 20
-
-
-
- Jump to Step 30
-
-
-
- Continue to Step 1
-
-
-
-
- Load {project_context} for project-wide patterns and conventions (if exists)
- Try {sprint_status_file}
-
-
- Exit workflow
-
- Continue to Step 2
-
-
-
- Read the FULL file: {sprint_status_file}
- Parse fields: generated, last_updated, project, project_key, tracking_system, story_location
- Parse development_status map. Classify keys:
- - Epics: keys starting with "epic-" (and not ending with "-retrospective")
- - Retrospectives: keys ending with "-retrospective"
- - Stories: everything else (e.g., 1-2-login-form)
- Map legacy story status "drafted" → "ready-for-dev"
- Count story statuses: backlog, ready-for-dev, in-progress, review, done
- Map legacy epic status "contexted" → "in-progress"
- Count epic statuses: backlog, in-progress, done
- Count retrospective statuses: optional, done
-
-Validate all statuses against known values:
-
-- Valid story statuses: backlog, ready-for-dev, in-progress, review, done, drafted (legacy)
-- Valid epic statuses: backlog, in-progress, done, contexted (legacy)
-- Valid retrospective statuses: optional, done
-
-
-
- How should these be corrected?
- {{#each invalid_entries}}
- {{@index}}. {{key}}: "{{status}}" → [select valid status]
- {{/each}}
-
-Enter corrections (e.g., "1=in-progress, 2=backlog") or "skip" to continue without fixing:
-
-Update sprint-status.yaml with corrected values
-Re-parse the file with corrected statuses
-
-
-
-Detect risks:
-
-- IF any story has status "review": suggest `/bmad:gds:workflows:code-review`
-- IF any story has status "in-progress" AND no stories have status "ready-for-dev": recommend staying focused on active story
-- IF all epics have status "backlog" AND no stories have status "ready-for-dev": prompt `/bmad:gds:workflows:create-story`
-- IF `last_updated` timestamp is more than 7 days old (or `last_updated` is missing, fall back to `generated`): warn "sprint-status.yaml may be stale"
-- IF any story key doesn't match an epic pattern (e.g., story "5-1-..." but no "epic-5"): warn "orphaned story detected"
-- IF any epic has status in-progress but has no associated stories: warn "in-progress epic has no stories"
-
-
-
- Pick the next recommended workflow using priority:
- When selecting "first" story: sort by epic number, then story number (e.g., 1-1 before 1-2 before 2-1)
- 1. If any story status == in-progress → recommend `dev-story` for the first in-progress story (next_agent = DEVELOPER)
- 2. Else if any story status == review → recommend `code-review` for the first review story (next_agent = REVIEWER)
- 3. Else if any story status == ready-for-dev → recommend `dev-story` (next_agent = DEVELOPER)
- 4. Else if any story status == backlog → recommend `create-story` (next_agent = STORY_AUTHOR)
- 5. Else if any retrospective status == optional → recommend `retrospective` (next_agent = FACILITATOR)
- 6. Else → All implementation items done; congratulate the user - you both did amazing work together!
- Store selected recommendation as: next_story_id, next_workflow_id, next_agent. Note: in the consolidated GDS Phase 4 model, all role labels (DEVELOPER, REVIEWER, STORY_AUTHOR, FACILITATOR) map to the same agent (gds-agent-game-dev / Link Freeman). The labels are kept for downstream consumers that distinguish them semantically.
-
-
-
-
-
-
-
- Pick an option:
-1) Run recommended workflow now
-2) Show all stories grouped by status
-3) Show raw sprint-status.yaml
-4) Exit
-Choice:
-
-
-
-
-
-
-
-
-
-
- Display the full contents of {sprint_status_file}
-
-
-
- Exit workflow
-
-
-
-
-
-
-
-
- Load and parse {sprint_status_file} same as Step 2
- Compute recommendation same as Step 3
- next_workflow_id = {{next_workflow_id}}
- next_story_id = {{next_story_id}}
- next_agent = {{next_agent}}
- count_backlog = {{count_backlog}}
- count_ready = {{count_ready}}
- count_in_progress = {{count_in_progress}}
- count_review = {{count_review}}
- count_done = {{count_done}}
- epic_backlog = {{epic_backlog}}
- epic_in_progress = {{epic_in_progress}}
- epic_done = {{epic_done}}
- risks = {{risks}}
- Return to caller
-
-
-
-
-
-
-
- Check that {sprint_status_file} exists
-
- is_valid = false
- error = "sprint-status.yaml missing"
- suggestion = "Run sprint-planning to create it"
- Return
-
-
-Read and parse {sprint_status_file}
-
-Validate required metadata fields exist: generated, project, project_key, tracking_system, story_location (last_updated is optional for backward compatibility)
-
-is_valid = false
-error = "Missing required field(s): {{missing_fields}}"
-suggestion = "Re-run sprint-planning or add missing fields manually"
-Return
-
-
-Verify development_status section exists with at least one entry
-
-is_valid = false
-error = "development_status missing or empty"
-suggestion = "Re-run sprint-planning or repair the file manually"
-Return
-
-
-Validate all status values against known valid statuses:
-
-- Stories: backlog, ready-for-dev, in-progress, review, done (legacy: drafted)
-- Epics: backlog, in-progress, done (legacy: contexted)
-- Retrospectives: optional, done
-
- is_valid = false
- error = "Invalid status values: {{invalid_entries}}"
- suggestion = "Fix invalid statuses in sprint-status.yaml"
- Return
-
-
-is_valid = true
-message = "sprint-status.yaml valid: metadata complete, all statuses recognized"
-
-
-
diff --git a/src/workflows/gametest/gds-e2e-scaffold/SKILL.md b/src/workflows/gametest/gds-e2e-scaffold/SKILL.md
index 27306a7..69a24d4 100644
--- a/src/workflows/gametest/gds-e2e-scaffold/SKILL.md
+++ b/src/workflows/gametest/gds-e2e-scaffold/SKILL.md
@@ -3,4 +3,302 @@ name: gds-e2e-scaffold
description: 'Scaffold end-to-end testing infrastructure. Use when the user says "e2e scaffold" or "set up e2e testing"'
---
-Follow the instructions in ./workflow.md.
+# E2E Test Infrastructure Scaffold Workflow
+
+**Goal:** Scaffold complete E2E testing infrastructure for an existing game project — creating the foundation required for reliable, maintainable end-to-end tests: test fixtures, scenario builders, input simulators, and async assertion utilities, all tailored to the project's specific architecture.
+
+**Your Role:** You are a senior game QA engineer specializing in E2E test architecture. E2E tests validate complete player journeys. Without proper infrastructure, they become brittle nightmares. Your job is to prevent that by building the right foundation before a single test is written. Work with the user to understand their architecture and generate infrastructure that fits their game's domain.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
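These structural rules can be sketched in Python (a minimal illustration of the merge semantics, not the actual `resolve_customization.py` implementation):

```python
def merge(base, override):
    """Structural merge sketch: scalars override, tables (dicts) deep-merge,
    arrays of tables keyed by 'code'/'id' replace-or-append, other arrays append."""
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        keyed = bool(base + override) and all(
            isinstance(item, dict) and ("code" in item or "id" in item)
            for item in base + override
        )
        if keyed:
            result = list(base)
            for item in override:
                key = item.get("code", item.get("id"))
                index = next((i for i, existing in enumerate(result)
                              if existing.get("code", existing.get("id")) == key), None)
                if index is None:
                    result.append(item)      # append new entry
                else:
                    result[index] = item     # replace matching entry
            return result
        return base + override               # other arrays append
    return override                          # scalars: override wins
```

Base, team, and user files would then be folded left to right: `merge(merge(base, team), user)`.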
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
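A sketch of how the `file:` expansion could work (assumed glob semantics; entry strings as described above):

```python
import glob

def load_persistent_facts(entries, project_root):
    """Return facts: 'file:' entries are expanded as globs under the
    project root and their file contents loaded; others pass through verbatim."""
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            pattern = entry[len("file:"):].replace("{project-root}", project_root)
            for path in sorted(glob.glob(pattern, recursive=True)):
                with open(path, encoding="utf-8") as handle:
                    facts.append(handle.read())
        else:
            facts.append(entry)
    return facts
```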
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses an **inline workflow pattern** for autonomous execution:
+
+- Steps execute sequentially with critical architecture analysis upfront
+- Engine detection and domain discovery drive all generated code
+- All infrastructure files are written to disk as they are generated
+- A working example test proves the infrastructure functions correctly
+
+### Triggers
+
+- `ES`
+- `e2e-scaffold`
+- `scaffold e2e`
+- `e2e infrastructure`
+- `setup e2e`
+
+### Preflight Requirements
+
+**Critical:** Verify these requirements before proceeding. If any fail, HALT and guide the user.
+
+- Test framework already initialized (run `test-framework` workflow first)
+- Game has identifiable state manager class
+- Main gameplay scene exists and is functional
+- No existing E2E infrastructure (check for `Tests/PlayMode/E2E/` or engine equivalent)
+
+
+### Paths
+
+- `installed_path` = `{skill-root}`
+- `validation` = `{installed_path}/checklist.md`
+
+### Inputs (Collect from User or Auto-Detect)
+
+| Input | Description | Required | Default |
+|-------|-------------|----------|---------|
+| `game_state_class` | Primary game state manager class name | Yes | — |
+| `main_scene` | Scene name where core gameplay occurs | Yes | — |
+| `input_system` | Input system in use | No | `auto-detect` |
+
+### Knowledge Fragments
+
+Load `{installed_path}/knowledge/e2e-testing.md` before proceeding. Load the engine-specific fragment after detection in Step 1:
+
+- Unity: `{installed_path}/knowledge/unity-testing.md`
+- Unreal: `{installed_path}/knowledge/unreal-testing.md`
+- Godot: `{installed_path}/knowledge/godot-testing.md`
+
+---
+
+## EXECUTION
+
+
+
+
+ Detect Game Engine by checking for engine-specific project files:
+ - Unity: `Assets/`, `ProjectSettings/`, `*.unity` scenes
+ - Unreal: `*.uproject`, `Source/`, `Config/DefaultEngine.ini`
+ - Godot: `project.godot`, `*.tscn`, `*.gd` files
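The marker checks above translate to something like this sketch (first match wins; `None` means ask the user):

```python
import os
from glob import glob

def detect_engine(project_root):
    """Detect the game engine from marker files; None if no match."""
    if os.path.isdir(os.path.join(project_root, "Assets")) and \
       os.path.isdir(os.path.join(project_root, "ProjectSettings")):
        return "unity"
    if glob(os.path.join(project_root, "*.uproject")):
        return "unreal"
    if os.path.isfile(os.path.join(project_root, "project.godot")):
        return "godot"
    return None
```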
+
+ Load the appropriate engine-specific knowledge fragment
+
+ Identify core systems:
+ 1. Game State Manager — the primary class holding game state. Look for: `GameManager`, `GameStateManager`, `GameController`, `GameMode`. Note: initialization method, ready state property, save/load methods
+ 2. Input Handling — Unity New Input System vs Legacy, Unreal Enhanced Input vs Legacy, Godot built-in Input, or custom abstraction layer
+ 3. Event/Messaging System — event bus, C# events/delegates, UnityEvents, Godot Signals
+ 4. Scene Structure — main gameplay scene name, loading approach (additive/single), bootstrap/initialization flow
+
+
+ Identify domain concepts for the ScenarioBuilder:
+ - Primary Entities: units, players, items, enemies, etc.
+ - State Machine States: turn phases, game modes, player states
+ - Spatial System: grid/hex positions, world coordinates, regions
+ - Resources: currency, health, mana, ammunition, etc.
+
+
+ Check existing test structure. If `Tests/PlayMode/E2E/` (or engine equivalent) already exists, HALT and ask user how to proceed.
+
+
+
+ Create the E2E directory structure:
+```
+Tests/PlayMode/E2E/ (Unity)
+├── E2E.asmdef
+├── Infrastructure/
+│ ├── GameE2ETestFixture.cs
+│ ├── ScenarioBuilder.cs
+│ ├── InputSimulator.cs
+│ └── AsyncAssert.cs
+├── Scenarios/
+│ └── (empty - user will add tests here)
+├── TestData/
+│ └── (empty - user will add fixtures here)
+└── README.md
+```
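Creating that skeleton is straightforward; a sketch (the `.gitkeep` placeholders are an assumption, used here to keep the intentionally empty folders under version control):

```python
import os

def scaffold_e2e_dirs(root):
    """Create the E2E directory skeleton; .gitkeep files hold the
    intentionally empty Scenarios/ and TestData/ folders."""
    for sub in ("Infrastructure", "Scenarios", "TestData"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    for empty in ("Scenarios", "TestData"):
        open(os.path.join(root, empty, ".gitkeep"), "w").close()
    return sorted(os.listdir(root))
```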
+
+
+
+
+ Generate Assembly Definition `E2E.asmdef`:
+```json
+{
+ "name": "E2E",
+ "rootNamespace": "{ProjectNamespace}.Tests.E2E",
+ "references": ["{GameAssemblyName}", "Unity.InputSystem", "Unity.InputSystem.TestFramework"],
+ "includePlatforms": [],
+ "excludePlatforms": [],
+ "allowUnsafeCode": false,
+ "overrideReferences": true,
+ "precompiledReferences": ["nunit.framework.dll", "UnityEngine.TestRunner.dll", "UnityEditor.TestRunner.dll"],
+ "autoReferenced": false,
+ "defineConstraints": ["UNITY_INCLUDE_TESTS"],
+ "versionDefines": [],
+ "noEngineReferences": false
+}
+```
+ Replace `{ProjectNamespace}` with detected project namespace and `{GameAssemblyName}` with main game assembly. Include `Unity.InputSystem` references only if Input System package detected.
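The substitution itself is a plain string replacement over the template (sketch; function name is illustrative):

```python
def fill_asmdef(template, project_namespace, game_assembly):
    """Replace the two asmdef template placeholders with detected values."""
    return (template
            .replace("{ProjectNamespace}", project_namespace)
            .replace("{GameAssemblyName}", game_assembly))
```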
+
+
+ Generate `GameE2ETestFixture.cs` base class. Customize placeholders:
+ - `{Namespace}` = detected project namespace
+ - `{MainSceneName}` = detected main gameplay scene
+ - `{GameStateClass}` = identified game state manager class
+ - `{IsReadyProperty}` = property indicating game is initialized (e.g., `IsReady`, `IsInitialized`)
+ The fixture must handle: scene loading/unloading, game ready state waiting, access to GameState/Input/Scenario, cleanup guarantees, and failure screenshot capture.
+
+ Generate `ScenarioBuilder.cs` with fluent API. Analyze the game's domain model from Step 1 and add 3-5 concrete configuration methods based on identified entities. Include `FromSaveFile(string fileName)` as base method. Add domain-specific methods in the `#region State Configuration` block with TODO comments documenting the pattern.
+
+ Generate `InputSimulator.cs`. If New Input System detected:
+ - `ClickWorldPosition(Vector3)`, `ClickScreenPosition(Vector2)`, `ClickButton(string)`, `DragFromTo(Vector3, Vector3, float)` using `InputState.Change` and `StateEvent.From`
+ - `PressKey(Key)`, `HoldKey(Key, float)` for keyboard
+ - `Reset()` and `RefreshCamera()` utility methods
+ If Legacy Input detected, generate simpler version using UI event triggering.
+
+ Generate `AsyncAssert.cs` static utility class with:
+ - `WaitUntil(Func, string, float)` — core wait-for-condition
+ - `WaitUntilVerbose(...)` — with periodic debug logging
+ - `WaitForValue(...)` — wait for specific value (exact equality)
+ - `WaitForValueApprox(...)` — float/double with tolerance
+ - `WaitForValueNot(...)` — wait for value to change
+ - `WaitForNotNull(...)` and `WaitForUnityObject(...)`
+ - `AssertNeverTrue(...)` — assert something doesn't happen
+ - `WaitFrames(int)` and `WaitForPhysics(int)` utility methods
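The core wait-for-condition contract, shown engine-agnostically in Python (a sketch; the generated C# version would be coroutine-based rather than blocking):

```python
import time

def wait_until(condition, message, timeout=5.0, poll_interval=0.05):
    """Poll a zero-argument condition until it returns True, or raise
    an AssertionError carrying the caller's message on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(poll_interval)
    raise AssertionError(f"Timed out after {timeout}s waiting for: {message}")
```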
+
+
+
+
+
+ Generate equivalent infrastructure files under `Source/{ProjectName}/Tests/E2E/`:
+ - `GameE2ETestBase.h/.cpp` — base test class
+ - `ScenarioBuilder.h/.cpp` — fluent scenario configuration
+ - `InputSimulator.h/.cpp` — input abstraction
+ - `AsyncAssert.h` — wait-for-condition utilities
+ - `{ProjectName}E2ETests.Build.cs` — build configuration
+
+
+
+
+
+ Generate equivalent infrastructure files under `tests/e2e/infrastructure/`:
+ - `game_e2e_test_fixture.gd`
+ - `scenario_builder.gd`
+ - `input_simulator.gd`
+ - `async_assert.gd`
+
+
+
+ Write all infrastructure files to disk
+
+
+
+ Create a working E2E test that exercises the infrastructure and proves it works
+
+
+ Generate `ExampleE2ETest.cs` with three tests:
+ 1. `Infrastructure_GameLoadsAndReachesReadyState` — verifies base fixture, GameState, Input, Scenario are all non-null and game reaches ready state
+ 2. `Infrastructure_InputSimulatorCanClickButtons` — demonstrates input simulation pattern with commented example for the user to customize
+ 3. `Infrastructure_ScenarioBuilderCanConfigureState` — demonstrates ScenarioBuilder usage with commented domain-specific example
+ Apply `[Category("E2E")]` attribute to the class.
+
+
+
+
+ Generate equivalent example test in the engine-appropriate location and language, covering the same three verification patterns
+
+
+ Write example test file to disk
+
+
+
+ Create `README.md` in the E2E root directory covering:
+ - Quick Start (inherit from GameE2ETestFixture, use Scenario/Input/AsyncAssert)
+ - Example test using Given-When-Then structure
+ - Component documentation for GameE2ETestFixture, ScenarioBuilder, InputSimulator, AsyncAssert
+ - Directory structure explanation
+ - Running tests (Editor UI and command line)
+ - Best practices (wait for conditions not time, one journey per test, descriptive assertions)
+ - Extension guide (adding Scenario methods, adding Input methods)
+ - Troubleshooting table for common issues
+
+ Write README.md to disk
+
+
+
+ Load and apply `{validation}` checklist to verify all deliverables are complete
+ Present a summary to the user:
+
+```markdown
+## E2E Infrastructure Scaffold Complete
+
+**Engine**: {Unity | Unreal | Godot}
+**Version**: {detected_version}
+
+### Files Created
+
+[Directory tree of all created files]
+
+### Configuration
+
+| Setting | Value |
+|---------|-------|
+| Game State Class | `{GameStateClass}` |
+| Main Scene | `{MainSceneName}` |
+| Input System | `{InputSystemType}` |
+| Ready Property | `{IsReadyProperty}` |
+
+### Customization Required
+
+1. ScenarioBuilder: Add domain-specific setup methods for your game entities
+2. InputSimulator: Add game-specific input methods (e.g., hex clicking, gesture shortcuts)
+3. ExampleE2ETest: Modify example tests to use your actual UI elements
+
+### Next Steps
+
+1. Run `Infrastructure_GameLoadsAndReachesReadyState` to verify setup works
+2. Extend `ScenarioBuilder` with your domain methods
+3. Extend `InputSimulator` with game-specific input helpers
+4. Use `test-design` workflow to identify E2E scenarios
+5. Use `automate` workflow to generate E2E tests from scenarios
+```
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final instruction before exiting.
+
+
+
diff --git a/src/workflows/gametest/gds-e2e-scaffold/customize.toml b/src/workflows/gametest/gds-e2e-scaffold/customize.toml
new file mode 100644
index 0000000..fb7e94c
--- /dev/null
+++ b/src/workflows/gametest/gds-e2e-scaffold/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-e2e-scaffold. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "E2E scaffolding must favor deterministic, reproducible runs over clever shortcuts."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 5 (Output Summary),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gametest/gds-e2e-scaffold/workflow.md b/src/workflows/gametest/gds-e2e-scaffold/workflow.md
deleted file mode 100644
index 7f8aebf..0000000
--- a/src/workflows/gametest/gds-e2e-scaffold/workflow.md
+++ /dev/null
@@ -1,270 +0,0 @@
----
-name: e2e-scaffold
-description: 'E2E testing infrastructure scaffolder. Use when the user says "lets scaffold e2e testing infrastructure for game project" or "setup e2e" or "e2e infrastructure"'
-main_config: '{module_config}'
----
-
-# E2E Test Infrastructure Scaffold Workflow
-
-**Goal:** Scaffold complete E2E testing infrastructure for an existing game project — creating the foundation required for reliable, maintainable end-to-end tests: test fixtures, scenario builders, input simulators, and async assertion utilities, all tailored to the project's specific architecture.
-
-**Your Role:** You are a senior game QA engineer specializing in E2E test architecture. E2E tests validate complete player journeys. Without proper infrastructure, they become brittle nightmares. Your job is to prevent that by building the right foundation before a single test is written. Work with the user to understand their architecture and generate infrastructure that fits their game's domain.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses an **inline workflow pattern** for autonomous execution:
-
-- Steps execute sequentially with critical architecture analysis upfront
-- Engine detection and domain discovery drive all generated code
-- All infrastructure files are written to disk as they are generated
-- A working example test proves the infrastructure functions correctly
-
-### Triggers
-
-- `ES`
-- `e2e-scaffold`
-- `scaffold e2e`
-- `e2e infrastructure`
-- `setup e2e`
-
-### Preflight Requirements
-
-**Critical:** Verify these requirements before proceeding. If any fail, HALT and guide the user.
-
-- Test framework already initialized (run `test-framework` workflow first)
-- Game has identifiable state manager class
-- Main gameplay scene exists and is functional
-- No existing E2E infrastructure (check for `Tests/PlayMode/E2E/` or engine equivalent)
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `installed_path` = `{skill_root}`
-- `validation` = `{installed_path}/checklist.md`
-
-### Inputs (Collect from User or Auto-Detect)
-
-| Input | Description | Required | Default |
-|-------|-------------|----------|---------|
-| `game_state_class` | Primary game state manager class name | Yes | — |
-| `main_scene` | Scene name where core gameplay occurs | Yes | — |
-| `input_system` | Input system in use | No | `auto-detect` |
-
-### Knowledge Fragments
-
-Load `{installed_path}/knowledge/e2e-testing.md` before proceeding. Load the engine-specific fragment after detection in Step 1:
-
-- Unity: `{installed_path}/knowledge/unity-testing.md`
-- Unreal: `{installed_path}/knowledge/unreal-testing.md`
-- Godot: `{installed_path}/knowledge/godot-testing.md`
-
----
-
-## EXECUTION
-
-
-
-
- Detect Game Engine by checking for engine-specific project files:
- - Unity: `Assets/`, `ProjectSettings/`, `*.unity` scenes
- - Unreal: `*.uproject`, `Source/`, `Config/DefaultEngine.ini`
- - Godot: `project.godot`, `*.tscn`, `*.gd` files
-
- Load the appropriate engine-specific knowledge fragment
-
- Identify core systems:
- 1. Game State Manager — the primary class holding game state. Look for: `GameManager`, `GameStateManager`, `GameController`, `GameMode`. Note: initialization method, ready state property, save/load methods
- 2. Input Handling — Unity New Input System vs Legacy, Unreal Enhanced Input vs Legacy, Godot built-in Input, or custom abstraction layer
- 3. Event/Messaging System — event bus, C# events/delegates, UnityEvents, Godot Signals
- 4. Scene Structure — main gameplay scene name, loading approach (additive/single), bootstrap/initialization flow
-
-
- Identify domain concepts for the ScenarioBuilder:
- - Primary Entities: units, players, items, enemies, etc.
- - State Machine States: turn phases, game modes, player states
- - Spatial System: grid/hex positions, world coordinates, regions
- - Resources: currency, health, mana, ammunition, etc.
-
-
- Check existing test structure. If `Tests/PlayMode/E2E/` (or engine equivalent) already exists, HALT and ask user how to proceed.
-
-
-
- Create the E2E directory structure:
-```
-Tests/PlayMode/E2E/ (Unity)
-├── E2E.asmdef
-├── Infrastructure/
-│ ├── GameE2ETestFixture.cs
-│ ├── ScenarioBuilder.cs
-│ ├── InputSimulator.cs
-│ └── AsyncAssert.cs
-├── Scenarios/
-│ └── (empty - user will add tests here)
-├── TestData/
-│ └── (empty - user will add fixtures here)
-└── README.md
-```
-
-
-
-
- Generate Assembly Definition `E2E.asmdef`:
-```json
-{
- "name": "E2E",
- "rootNamespace": "{ProjectNamespace}.Tests.E2E",
- "references": ["{GameAssemblyName}", "Unity.InputSystem", "Unity.InputSystem.TestFramework"],
- "includePlatforms": [],
- "excludePlatforms": [],
- "allowUnsafeCode": false,
- "overrideReferences": true,
- "precompiledReferences": ["nunit.framework.dll", "UnityEngine.TestRunner.dll", "UnityEditor.TestRunner.dll"],
- "autoReferenced": false,
- "defineConstraints": ["UNITY_INCLUDE_TESTS"],
- "versionDefines": [],
- "noEngineReferences": false
-}
-```
- Replace `{ProjectNamespace}` with detected project namespace and `{GameAssemblyName}` with main game assembly. Include `Unity.InputSystem` references only if Input System package detected.
-
-
- Generate `GameE2ETestFixture.cs` base class. Customize placeholders:
- - `{Namespace}` = detected project namespace
- - `{MainSceneName}` = detected main gameplay scene
- - `{GameStateClass}` = identified game state manager class
- - `{IsReadyProperty}` = property indicating game is initialized (e.g., `IsReady`, `IsInitialized`)
- The fixture must handle: scene loading/unloading, game ready state waiting, access to GameState/Input/Scenario, cleanup guarantees, and failure screenshot capture.
-
- Generate `ScenarioBuilder.cs` with fluent API. Analyze the game's domain model from Step 1 and add 3-5 concrete configuration methods based on identified entities. Include `FromSaveFile(string fileName)` as base method. Add domain-specific methods in the `#region State Configuration` block with TODO comments documenting the pattern.
-
- Generate `InputSimulator.cs`. If New Input System detected:
- - `ClickWorldPosition(Vector3)`, `ClickScreenPosition(Vector2)`, `ClickButton(string)`, `DragFromTo(Vector3, Vector3, float)` using `InputState.Change` and `StateEvent.From`
- - `PressKey(Key)`, `HoldKey(Key, float)` for keyboard
- - `Reset()` and `RefreshCamera()` utility methods
- If Legacy Input detected, generate simpler version using UI event triggering.
-
- Generate `AsyncAssert.cs` static utility class with:
- - `WaitUntil(Func, string, float)` — core wait-for-condition
- - `WaitUntilVerbose(...)` — with periodic debug logging
- - `WaitForValue(...)` — wait for specific value (exact equality)
- - `WaitForValueApprox(...)` — float/double with tolerance
- - `WaitForValueNot(...)` — wait for value to change
- - `WaitForNotNull(...)` and `WaitForUnityObject(...)`
- - `AssertNeverTrue(...)` — assert something doesn't happen
- - `WaitFrames(int)` and `WaitForPhysics(int)` utility methods
-
-
-
-
-
- Generate equivalent infrastructure files under `Source/{ProjectName}/Tests/E2E/`:
- - `GameE2ETestBase.h/.cpp` — base test class
- - `ScenarioBuilder.h/.cpp` — fluent scenario configuration
- - `InputSimulator.h/.cpp` — input abstraction
- - `AsyncAssert.h` — wait-for-condition utilities
- - `{ProjectName}E2ETests.Build.cs` — build configuration
-
-
-
-
-
- Generate equivalent infrastructure files under `tests/e2e/infrastructure/`:
- - `game_e2e_test_fixture.gd`
- - `scenario_builder.gd`
- - `input_simulator.gd`
- - `async_assert.gd`
-
-
-
- Write all infrastructure files to disk
-
-
-
- Create a working E2E test that exercises the infrastructure and proves it works
-
-
- Generate `ExampleE2ETest.cs` with three tests:
- 1. `Infrastructure_GameLoadsAndReachesReadyState` — verifies base fixture, GameState, Input, Scenario are all non-null and game reaches ready state
- 2. `Infrastructure_InputSimulatorCanClickButtons` — demonstrates input simulation pattern with commented example for the user to customize
- 3. `Infrastructure_ScenarioBuilderCanConfigureState` — demonstrates ScenarioBuilder usage with commented domain-specific example
- Apply `[Category("E2E")]` attribute to the class.
-
-
-
-
- Generate equivalent example test in the engine-appropriate location and language, covering the same three verification patterns
-
-
- Write example test file to disk
-
-
-
- Create `README.md` in the E2E root directory covering:
- - Quick Start (inherit from GameE2ETestFixture, use Scenario/Input/AsyncAssert)
- - Example test using Given-When-Then structure
- - Component documentation for GameE2ETestFixture, ScenarioBuilder, InputSimulator, AsyncAssert
- - Directory structure explanation
- - Running tests (Editor UI and command line)
- - Best practices (wait for conditions not time, one journey per test, descriptive assertions)
- - Extension guide (adding Scenario methods, adding Input methods)
- - Troubleshooting table for common issues
-
- Write README.md to disk
-
-
-
- Load and apply `{validation}` checklist to verify all deliverables are complete
- Present a summary to the user:
-
-```markdown
-## E2E Infrastructure Scaffold Complete
-
-**Engine**: {Unity | Unreal | Godot}
-**Version**: {detected_version}
-
-### Files Created
-
-[Directory tree of all created files]
-
-### Configuration
-
-| Setting | Value |
-|---------|-------|
-| Game State Class | `{GameStateClass}` |
-| Main Scene | `{MainSceneName}` |
-| Input System | `{InputSystemType}` |
-| Ready Property | `{IsReadyProperty}` |
-
-### Customization Required
-
-1. ScenarioBuilder: Add domain-specific setup methods for your game entities
-2. InputSimulator: Add game-specific input methods (e.g., hex clicking, gesture shortcuts)
-3. ExampleE2ETest: Modify example tests to use your actual UI elements
-
-### Next Steps
-
-1. Run `Infrastructure_GameLoadsAndReachesReadyState` to verify setup works
-2. Extend `ScenarioBuilder` with your domain methods
-3. Extend `InputSimulator` with game-specific input helpers
-4. Use `test-design` workflow to identify E2E scenarios
-5. Use `automate` workflow to generate E2E tests from scenarios
-```
-
-
-
-
diff --git a/src/workflows/gametest/gds-performance-test/SKILL.md b/src/workflows/gametest/gds-performance-test/SKILL.md
index f6a0f31..2524e37 100644
--- a/src/workflows/gametest/gds-performance-test/SKILL.md
+++ b/src/workflows/gametest/gds-performance-test/SKILL.md
@@ -3,4 +3,378 @@ name: gds-performance-test
description: 'Design game performance testing strategy. Use when the user says "performance test" or "benchmark"'
---
-Follow the instructions in ./workflow.md.
+# Performance Testing Strategy Workflow
+
+**Goal:** Design a comprehensive performance testing strategy covering frame rate, memory usage, loading times, and platform-specific requirements. Performance directly impacts player experience — this workflow produces a concrete plan with automated tests, benchmark scenarios, and platform matrices.
+
+**Your Role:** You are a senior game performance engineer and QA strategist. Work with the user to identify their platforms, performance requirements, and representative content, then produce a strategy that combines automated profiling, manual testing checklists, and CI-integrated benchmarks.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`
+- `user_name`
+- `communication_language`
+- `output_folder`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses an **inline workflow pattern** for autonomous execution:
+
+- Steps execute sequentially, building toward a complete performance test plan document
+- Platform detection and target configuration drive all subsequent decisions
+- The final deliverable is a comprehensive Performance Test Plan document
+- Knowledge base reference: `knowledge/performance-testing.md`
+
+### Preflight Requirements
+
+Before proceeding, verify:
+
+- Target platforms identified (or discoverable from project files)
+- Performance requirements known (target FPS, memory limits), or to be defined in Step 1
+- Representative content available for testing
+- Profiling tools accessible
+
+
+### Paths
+
+- `installed_path` = `{skill-root}`
+- `validation` = `{installed_path}/checklist.md`
+- `template` = `{installed_path}/performance-template.md`
+- `default_output_file` = `{output_folder}/performance-test-plan.md`
+
+### Variables
+
+- `target_fps` = `60` (configurable per platform in Step 1)
+- `target_platform` = `auto` (options: `auto`, `pc`, `console`, `mobile`)
+- `game_engine` = `auto` (options: `auto`, `unity`, `unreal`, `godot`)
+
+---
+
+## EXECUTION
+
+
+
+
+ Detect game engine and target platforms from project files. If ambiguous, ask the user.
+ Establish frame rate targets per platform:
+
+| Platform | Target FPS | Minimum FPS | Notes |
+| ----------------- | ---------- | ----------- | ------------------ |
+| PC (High) | 60+ | 30 | Uncapped option |
+| PC (Low) | 30 | 30 | Scalable settings |
+| PS5/Xbox Series X | 60 | 60 | Performance mode |
+| PS4/Xbox One | 30 | 30 | Locked |
+| Switch Docked | 30 | 30 | Stable |
+| Switch Handheld | 30 | 25 | Power saving |
+| Mobile (High) | 60 | 30 | Device dependent |
+| Mobile (Standard) | 30 | 30 | Thermal throttling |
+
+ Filter this table to the user's actual target platforms. Adjust targets based on game genre and user input.
+
+ Establish memory budgets per target platform:
+
+| Platform | Total RAM | Game Budget | Notes |
+| ------------- | --------- | ----------- | ------------------- |
+| PC (Min spec) | 8 GB | 4 GB | Leave room for OS |
+| PS5 | 16 GB | 12 GB | Unified memory |
+| Xbox Series X | 16 GB | 13 GB | With Smart Delivery |
+| Switch | 4 GB | 2.5 GB | Tight constraints |
+| Mobile | 4-6 GB | 1.5-2 GB | Background apps |
+
+
+ Establish loading time targets:
+
+| Scenario | Target | Maximum |
+| ------------ | ------ | ------- |
+| Initial boot | < 10s | 30s |
+| Level load | < 15s | 30s |
+| Fast travel | < 5s | 10s |
+| Respawn | < 3s | 5s |
+
+ Adjust based on genre (e.g., fast travel may not apply to linear games).
+
+
+
+
+ Define stress test scenarios for frame rate validation:
+```
+SCENARIO: Maximum Entity Count
+ GIVEN game level with normal enemy spawn
+ WHEN enemy count reaches 50+
+ THEN frame rate stays above minimum
+ AND no visual artifacts
+ AND audio doesn't stutter
+
+SCENARIO: Particle System Stress
+ GIVEN combat with multiple effects
+ WHEN 20+ particle systems active
+ THEN frame rate degradation < 20%
+ AND memory allocation stable
+
+SCENARIO: Draw Call Stress
+ GIVEN level with maximum visible geometry
+ WHEN camera shows worst-case view
+ THEN frame rate stays above minimum
+ AND no hitching or stuttering
+```
+
+ Define memory test scenarios:
+```
+SCENARIO: Extended Play Session
+ GIVEN game running for 4+ hours
+ WHEN normal gameplay occurs
+ THEN memory usage remains stable
+ AND no memory leaks detected
+ AND no crash from fragmentation
+
+SCENARIO: Level Transition
+ GIVEN player completes level
+ WHEN transitioning to new level
+ THEN previous level fully unloaded
+ AND memory baseline returns
+ AND no cumulative growth
+```
+
+ Define loading test scenarios:
+```
+SCENARIO: Cold Boot
+ GIVEN game not in memory
+ WHEN launching game
+ THEN reaches interactive state in < target
+ AND loading feedback shown
+ AND no apparent hang
+
+SCENARIO: Save/Load Performance
+ GIVEN large save file (max progress)
+ WHEN loading save
+ THEN completes in < target
+ AND no corruption
+ AND gameplay resumes smoothly
+```
+
+ Adapt scenario details to match the specific game type and identified systems.
+
+### Step 3: Define Test Methodology
+
+ Generate automated performance test code for the detected engine.
+
+
+ Generate Unity automated test examples. A minimal sketch using the Unity Test Framework (the scene name is illustrative):
+```csharp
+[UnityTest]
+public IEnumerator Performance_CombatScene_MaintainsFPS()
+{
+    // Assumes a "CombatStressTest" scene is included in Build Settings.
+    yield return SceneManager.LoadSceneAsync("CombatStressTest");
+
+    // Sample average FPS over a 30-second window.
+    int frames = 0;
+    float elapsed = 0f;
+    while (elapsed < 30f)
+    {
+        yield return null;
+        elapsed += Time.unscaledDeltaTime;
+        frames++;
+    }
+
+    Assert.Greater(frames / elapsed, 30f, "Average FPS should stay above 30");
+}
+```
+
+
+
+
+ Generate Unreal Automation test examples (the actor class and spawn count are illustrative):
+```cpp
+bool FPerformanceTest::RunTest(const FString& Parameters)
+{
+    // Time a burst of 100 spawns and compare against the 30 FPS frame budget.
+    UWorld* World = AutomationCommon::GetAnyGameWorld();
+    const double StartTime = FPlatformTime::Seconds();
+    for (int32 Index = 0; Index < 100; ++Index)
+    {
+        World->SpawnActor<AStaticMeshActor>();
+    }
+    const double ElapsedMs = (FPlatformTime::Seconds() - StartTime) * 1000.0;
+    TestTrue(TEXT("100 spawns fit within one 30 FPS frame (33.3 ms)"), ElapsedMs < 33.3);
+    return true;
+}
+```
+
+
+
+
+ Generate Godot benchmark test examples (assumes GUT for `assert_lt` and a preloaded `stress_entity` scene):
+```gdscript
+func test_performance_entity_stress():
+    var frame_times = []
+    # Spawn the stress entities, then sample process time over 300 frames.
+    for i in range(100):
+        add_child(stress_entity.instantiate())
+    for frame in range(300):
+        await get_tree().process_frame
+        frame_times.append(Performance.get_monitor(Performance.TIME_PROCESS))
+    var avg_frame_time = frame_times.reduce(func(a, b): return a + b) / frame_times.size()
+    assert_lt(avg_frame_time, 0.033, "Average frame time under 33 ms (30 FPS)")
+```
+
+
+
+ Define manual profiling checklists:
+
+**CPU Profiling**
+- [ ] Identify hotspots using engine profiler
+- [ ] Check GC frequency and allocation pressure
+- [ ] Verify multithreading usage and thread contention
+
+**GPU Profiling**
+- [ ] Draw call count at target scenes
+- [ ] Overdraw analysis on complex areas
+- [ ] Shader complexity assessment
+
+**Memory Profiling**
+- [ ] Heap allocation patterns over a 30-minute session
+- [ ] Asset memory usage by category
+- [ ] Leak detection across multiple level loads
+
+
+### Step 4: Define the Benchmark Suite
+
+ Define the benchmark levels and their purpose:
+
+| Benchmark | Purpose | Duration |
+| --------------- | ------------------------ | -------- |
+| Combat Stress | Max entities, effects | 60s |
+| Open World | Draw distance, streaming | 120s |
+| Menu Navigation | UI performance | 30s |
+| Save/Load | Persistence performance | 30s |
+
+ Adapt benchmark names and durations to match the actual game content.
+
+ Define baseline capture process:
+ 1. Run benchmarks on reference hardware (document hardware specs)
+ 2. Record baseline metrics (avg FPS, P95 frame time, peak memory)
+ 3. Set regression thresholds (e.g., 10% FPS degradation = fail, 5% memory growth = fail)
+ 4. Integrate benchmarks into CI pipeline as gated checks
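+
+ The regression thresholds above can be enforced by a small gate script; a sketch (metric names and limits are illustrative):
+
+```python
+def check_regressions(baseline, current, max_fps_drop_pct=10.0, max_mem_growth_pct=5.0):
+    """Return a list of human-readable failures; an empty list means the gate passes."""
+    failures = []
+    fps_drop = (baseline["avg_fps"] - current["avg_fps"]) / baseline["avg_fps"] * 100
+    if fps_drop > max_fps_drop_pct:
+        failures.append(f"FPS dropped {fps_drop:.1f}% (limit {max_fps_drop_pct}%)")
+    mem_growth = (current["peak_memory_mb"] - baseline["peak_memory_mb"]) / baseline["peak_memory_mb"] * 100
+    if mem_growth > max_mem_growth_pct:
+        failures.append(f"Memory grew {mem_growth:.1f}% (limit {max_mem_growth_pct}%)")
+    return failures
+
+baseline = {"avg_fps": 60.0, "peak_memory_mb": 2048.0}
+current = {"avg_fps": 52.0, "peak_memory_mb": 2100.0}
+print(check_regressions(baseline, current))  # the 13.3% FPS drop fails the gate
+```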
+
+
+### Step 5: Define the Platform Matrix
+
+ Define platform-specific testing requirements for each target platform.
+
+
+ PC testing requirements:
+ - Test across min/recommended hardware specs
+ - Verify quality settings (Low/Medium/High/Ultra) all perform within budget
+ - Check VRAM usage at each quality tier
+ - Test at multiple resolutions (1080p, 1440p, 4K)
+
+
+
+
+ Console testing requirements:
+ - Test in both Performance and Quality modes if applicable
+ - Verify thermal throttling behavior during extended sessions
+ - Check suspend/resume impact on frame rate and memory
+ - Test with varying storage speeds (internal SSD vs extended storage)
+
+
+
+
+ Mobile testing requirements:
+ - Test on low/mid/high tier representative devices
+ - Monitor thermal throttling onset time and severity
+ - Measure battery drain per hour of gameplay
+ - Test with background apps consuming memory
+
+
+
+### Step 6: Generate the Performance Test Plan
+
+ Load `{template}` and use it as the structural foundation for the output document.
+ Compile all information from Steps 1-5 into a comprehensive Performance Test Plan at `{default_output_file}` with this structure:
+
+```markdown
+# Performance Test Plan: {project_name}
+
+## Performance Targets
+[FPS tables filtered to target platforms]
+[Memory budget tables]
+[Loading time targets]
+
+## Test Scenarios
+
+### Frame Rate Tests
+[Stress test scenarios from Step 2]
+
+### Memory Tests
+[Extended play and leak detection scenarios]
+
+### Loading Tests
+[Boot, level load, save/load scenarios]
+
+## Methodology
+
+### Automated Tests
+[Engine-specific code examples]
+[CI integration instructions]
+
+### Manual Profiling
+[Checklists from Step 3]
+[Tools to use per engine]
+
+## Benchmark Suite
+[Benchmark definitions from Step 4]
+[Baseline capture process]
+[Regression thresholds]
+
+## Platform Matrix
+[Platform-specific requirements from Step 5]
+
+## Regression Criteria
+[Quantified thresholds: FPS drop %, memory growth %, load time delta]
+[CI gate configuration]
+
+## Schedule
+[When performance tests run: nightly, per-sprint, pre-release]
+[Who reviews results and owns regressions]
+```
+
+ Load and apply the `{validation}` checklist to verify all deliverables are complete.
+ Present a summary of what was produced and the recommended next steps to the user.
+ Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`. If the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
diff --git a/src/workflows/gametest/gds-performance-test/customize.toml b/src/workflows/gametest/gds-performance-test/customize.toml
new file mode 100644
index 0000000..bb31611
--- /dev/null
+++ b/src/workflows/gametest/gds-performance-test/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-performance-test. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Performance budgets must be measurable and tied to player-perceived quality."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 6 (Generate Performance Test Plan),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gametest/gds-performance-test/workflow.md b/src/workflows/gametest/gds-performance-test/workflow.md
deleted file mode 100644
index 0671409..0000000
--- a/src/workflows/gametest/gds-performance-test/workflow.md
+++ /dev/null
@@ -1,344 +0,0 @@
----
-name: gametest-performance
-description: 'Performance test strategy designer. Use when the user says "lets create a performance test plan" or "design game performance testing strategy"'
-main_config: '{module_config}'
----
-
-# Performance Testing Strategy Workflow
-
-**Goal:** Design a comprehensive performance testing strategy covering frame rate, memory usage, loading times, and platform-specific requirements. Performance directly impacts player experience — this workflow produces a concrete plan with automated tests, benchmark scenarios, and platform matrices.
-
-**Your Role:** You are a senior game performance engineer and QA strategist. Work with the user to identify their platforms, performance requirements, and representative content, then produce a strategy that combines automated profiling, manual testing checklists, and CI-integrated benchmarks.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses an **inline workflow pattern** for autonomous execution:
-
-- Steps execute sequentially, building toward a complete performance test plan document
-- Platform detection and target configuration drive all subsequent decisions
-- The final deliverable is a comprehensive Performance Test Plan document
-- Knowledge base reference: `knowledge/performance-testing.md`
-
-### Preflight Requirements
-
-Before proceeding, verify:
-
-- Target platforms identified (or discoverable from project files)
-- Performance requirements known (target FPS, memory limits), or to be defined in Step 1
-- Representative content available for testing
-- Profiling tools accessible
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `installed_path` = `{skill_root}`
-- `validation` = `{installed_path}/checklist.md`
-- `template` = `{installed_path}/performance-template.md`
-- `default_output_file` = `{output_folder}/performance-test-plan.md`
-
-### Variables
-
-- `target_fps` = `60` (configurable per platform in Step 1)
-- `target_platform` = `auto` (options: `auto`, `pc`, `console`, `mobile`)
-- `game_engine` = `auto` (options: `auto`, `unity`, `unreal`, `godot`)
-
----
-
-## EXECUTION
-
-
-
-
- Detect game engine and target platforms from project files. If ambiguous, ask the user.
- Establish frame rate targets per platform:
-
-| Platform | Target FPS | Minimum FPS | Notes |
-| ----------------- | ---------- | ----------- | ------------------ |
-| PC (High) | 60+ | 30 | Uncapped option |
-| PC (Low) | 30 | 30 | Scalable settings |
-| PS5/Xbox X | 60 | 60 | Performance mode |
-| PS4/Xbox One | 30 | 30 | Locked |
-| Switch Docked | 30 | 30 | Stable |
-| Switch Handheld | 30 | 25 | Power saving |
-| Mobile (High) | 60 | 30 | Device dependent |
-| Mobile (Standard) | 30 | 30 | Thermal throttling |
-
- Filter this table to the user's actual target platforms. Adjust targets based on game genre and user input.
-
- Establish memory budgets per target platform:
-
-| Platform | Total RAM | Game Budget | Notes |
-| ------------- | --------- | ----------- | ------------------- |
-| PC (Min spec) | 8 GB | 4 GB | Leave room for OS |
-| PS5 | 16 GB | 12 GB | Unified memory |
-| Xbox Series X | 16 GB | 13 GB | With Smart Delivery |
-| Switch | 4 GB | 2.5 GB | Tight constraints |
-| Mobile | 4-6 GB | 1.5-2 GB | Background apps |
-
-
- Establish loading time targets:
-
-| Scenario | Target | Maximum |
-| ------------ | ------ | ------- |
-| Initial boot | < 10s | 30s |
-| Level load | < 15s | 30s |
-| Fast travel | < 5s | 10s |
-| Respawn | < 3s | 5s |
-
- Adjust based on genre (e.g., fast travel may not apply to linear games).
-
-
-
-
- Define stress test scenarios for frame rate validation:
-```
-SCENARIO: Maximum Entity Count
- GIVEN game level with normal enemy spawn
- WHEN enemy count reaches 50+
- THEN frame rate stays above minimum
- AND no visual artifacts
- AND audio doesn't stutter
-
-SCENARIO: Particle System Stress
- GIVEN combat with multiple effects
- WHEN 20+ particle systems active
- THEN frame rate degradation < 20%
- AND memory allocation stable
-
-SCENARIO: Draw Call Stress
- GIVEN level with maximum visible geometry
- WHEN camera shows worst-case view
- THEN frame rate stays above minimum
- AND no hitching or stuttering
-```
-
- Define memory test scenarios:
-```
-SCENARIO: Extended Play Session
- GIVEN game running for 4+ hours
- WHEN normal gameplay occurs
- THEN memory usage remains stable
- AND no memory leaks detected
- AND no crash from fragmentation
-
-SCENARIO: Level Transition
- GIVEN player completes level
- WHEN transitioning to new level
- THEN previous level fully unloaded
- AND memory baseline returns
- AND no cumulative growth
-```
-
- Define loading test scenarios:
-```
-SCENARIO: Cold Boot
- GIVEN game not in memory
- WHEN launching game
- THEN reaches interactive state in < target
- AND loading feedback shown
- AND no apparent hang
-
-SCENARIO: Save/Load Performance
- GIVEN large save file (max progress)
- WHEN loading save
- THEN completes in < target
- AND no corruption
- AND gameplay resumes smoothly
-```
-
- Adapt scenario details to match the specific game type and identified systems
-
-
-
- Generate automated performance test code for the detected engine
-
-
- Generate Unity Performance Test Runner examples:
-```csharp
-[UnityTest]
-public IEnumerator Performance_CombatScene_MaintainsFPS()
-{
- using (Measure.ProfilerMarkers(new[] { "Main Thread" }))
- {
- SceneManager.LoadScene("CombatStressTest");
- yield return new WaitForSeconds(30f);
- }
- var metrics = Measure.Custom(new SampleGroupDefinition("FPS"));
- Assert.Greater(metrics.Median, 30, "FPS should stay above 30");
-}
-```
-
-
-
-
- Generate Unreal Automation test examples:
-```cpp
-bool FPerformanceTest::RunTest(const FString& Parameters)
-{
- float StartTime = FPlatformTime::Seconds();
- for (int i = 0; i < 100; i++)
- GetWorld()->SpawnActor();
- float FrameTime = FApp::GetDeltaTime();
- TestTrue("Frame time under budget", FrameTime < 0.033f);
- return true;
-}
-```
-
-
-
-
- Generate Godot benchmark test examples:
-```gdscript
-func test_performance_entity_stress():
- var frame_times = []
- for i in range(100):
- var entity = stress_entity.instantiate()
- add_child(entity)
- for i in range(300):
- await get_tree().process_frame
- frame_times.append(Performance.get_monitor(Performance.TIME_PROCESS))
- var avg_frame_time = frame_times.reduce(func(a, b): return a + b) / frame_times.size()
- assert_lt(avg_frame_time, 0.033, "Average frame time under 33ms (30 FPS)")
-```
-
-
-
- Define manual profiling checklists:
-
-**CPU Profiling**
-- [ ] Identify hotspots using engine profiler
-- [ ] Check GC frequency and allocation pressure
-- [ ] Verify multithreading usage and thread contention
-
-**GPU Profiling**
-- [ ] Draw call count at target scenes
-- [ ] Overdraw analysis on complex areas
-- [ ] Shader complexity assessment
-
-**Memory Profiling**
-- [ ] Heap allocation patterns over a 30-minute session
-- [ ] Asset memory usage by category
-- [ ] Leak detection across multiple level loads
-
-
-
-
- Define the benchmark levels and their purpose:
-
-| Benchmark | Purpose | Duration |
-| --------------- | ------------------------ | -------- |
-| Combat Stress | Max entities, effects | 60s |
-| Open World | Draw distance, streaming | 120s |
-| Menu Navigation | UI performance | 30s |
-| Save/Load | Persistence performance | 30s |
-
- Adapt benchmark names and durations to match the actual game content.
-
- Define baseline capture process:
- 1. Run benchmarks on reference hardware (document hardware specs)
- 2. Record baseline metrics (avg FPS, P95 frame time, peak memory)
- 3. Set regression thresholds (e.g., 10% FPS degradation = fail, 5% memory growth = fail)
- 4. Integrate benchmarks into CI pipeline as gated checks
-
-
-
-
- Define platform-specific testing requirements for each target platform
-
-
- PC testing requirements:
- - Test across min/recommended hardware specs
- - Verify quality settings (Low/Medium/High/Ultra) all perform within budget
- - Check VRAM usage at each quality tier
- - Test at multiple resolutions (1080p, 1440p, 4K)
-
-
-
-
- Console testing requirements:
- - Test in both Performance and Quality modes if applicable
- - Verify thermal throttling behavior during extended sessions
- - Check suspend/resume impact on frame rate and memory
- - Test with varying storage speeds (internal SSD vs extended storage)
-
-
-
-
- Mobile testing requirements:
- - Test on low/mid/high tier representative devices
- - Monitor thermal throttling onset time and severity
- - Measure battery drain per hour of gameplay
- - Test with background apps consuming memory
-
-
-
-
-
- Load `{template}` and use it as the structural foundation for the output document
- Compile all information from Steps 1-5 into a comprehensive Performance Test Plan at `{default_output_file}` with this structure:
-
-```markdown
-# Performance Test Plan: {project_name}
-
-## Performance Targets
-[FPS tables filtered to target platforms]
-[Memory budget tables]
-[Loading time targets]
-
-## Test Scenarios
-
-### Frame Rate Tests
-[Stress test scenarios from Step 2]
-
-### Memory Tests
-[Extended play and leak detection scenarios]
-
-### Loading Tests
-[Boot, level load, save/load scenarios]
-
-## Methodology
-
-### Automated Tests
-[Engine-specific code examples]
-[CI integration instructions]
-
-### Manual Profiling
-[Checklists from Step 3]
-[Tools to use per engine]
-
-## Benchmark Suite
-[Benchmark definitions from Step 4]
-[Baseline capture process]
-[Regression thresholds]
-
-## Platform Matrix
-[Platform-specific requirements from Step 5]
-
-## Regression Criteria
-[Quantified thresholds: FPS drop %, memory growth %, load time delta]
-[CI gate configuration]
-
-## Schedule
-[When performance tests run: nightly, per-sprint, pre-release]
-[Who reviews results and owns regressions]
-```
-
- Load and apply `{validation}` checklist to verify all deliverables are complete
- Present a summary of what was produced and the recommended next steps to the user
-
-
-
diff --git a/src/workflows/gametest/gds-playtest-plan/SKILL.md b/src/workflows/gametest/gds-playtest-plan/SKILL.md
index 4488e01..f192b46 100644
--- a/src/workflows/gametest/gds-playtest-plan/SKILL.md
+++ b/src/workflows/gametest/gds-playtest-plan/SKILL.md
@@ -3,4 +3,391 @@ name: gds-playtest-plan
description: 'Create structured playtesting plans for user feedback. Use when the user says "playtest plan" or "playtesting"'
---
-Follow the instructions in ./workflow.md.
+# Playtest Planning
+
+**Workflow ID**: `gds-playtest-plan`
+**Version**: 1.0 (BMad v6)
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
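+
+The structural merge described above can be sketched as follows (a simplified model for reference, not the resolver's actual code):
+
+```python
+def merge(base, override):
+    """Scalars override, tables deep-merge, keyed arrays-of-tables
+    replace matching items and append new ones, other arrays append."""
+    if isinstance(base, dict) and isinstance(override, dict):
+        merged = dict(base)
+        for key, value in override.items():
+            merged[key] = merge(base[key], value) if key in base else value
+        return merged
+    if isinstance(base, list) and isinstance(override, list):
+        items = base + override
+        if items and all(isinstance(x, dict) and ("code" in x or "id" in x) for x in items):
+            key_of = lambda x: x.get("code", x.get("id"))
+            merged = list(base)
+            for item in override:
+                hits = [i for i, e in enumerate(merged) if key_of(e) == key_of(item)]
+                if hits:
+                    merged[hits[0]] = item
+                else:
+                    merged.append(item)
+            return merged
+        return base + override  # plain arrays append
+    return override  # scalars: override wins
+```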
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
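+
+One way to model how these entries resolve (the helper name and layout are illustrative, not the actual loader):
+
+```python
+import glob
+
+def load_persistent_facts(entries, project_root):
+    """`file:` entries expand as globs under the project root and contribute
+    each matched file's contents; every other entry is a fact verbatim."""
+    facts = []
+    for entry in entries:
+        if entry.startswith("file:"):
+            pattern = entry[len("file:"):].replace("{project-root}", project_root)
+            for path in sorted(glob.glob(pattern, recursive=True)):
+                with open(path, encoding="utf-8") as f:
+                    facts.append(f.read())
+        else:
+            facts.append(entry)
+    return facts
+```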
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `output_folder`
+- `date` as the system-generated current datetime
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## Goal
+
+Create structured playtesting sessions to validate gameplay, gather user feedback, and identify issues that automated testing cannot catch. Playtesting validates "feel" and player experience.
+
+## Role
+
+You are a Game QA Specialist with expertise in designing and facilitating playtesting sessions. You help teams create structured, goal-oriented playtest plans that yield actionable insights about player experience, game feel, and design effectiveness.
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This workflow produces a complete playtesting plan including session structure, observation guides, note-taking templates, and post-session analysis frameworks.
+
+**Primary Output**: `{output_folder}/playtest-plan.md`
+
+**Supporting Components**:
+- Validation: `{installed_path}/checklist.md`
+- Template: `{installed_path}/playtest-template.md`
+- Knowledge Base: `knowledge/playtesting.md`
+
+**Input Files** (auto-located):
+- GDD: `{output_folder}/*gdd*.md` or `{output_folder}/*gdd*/*.md` — game mechanics to validate
+- Game Brief: `{output_folder}/*brief*.md` — core pillars
+
+---
+
+## INITIALIZATION
+
+Load and resolve configuration from `{module_config}`:
+
+```yaml
+output_folder: {from config}
+user_name: {from config}
+communication_language: {from config}
+document_output_language: {from config}
+game_dev_experience: {from config}
+date: {system-generated}
+```
+
+Resolve workflow variables:
+```yaml
+playtest_type: "internal" # internal | external | focused
+session_duration: 60 # minutes
+participant_count: 5
+```
+
+Having already greeted the user during activation, confirm the playtest type and scope before proceeding.
+
+---
+
+## EXECUTION
+
+### Preflight Requirements
+
+Verify before proceeding:
+- Playable build available
+- Test objectives defined
+- Participant criteria known
+
+---
+
+### Step 1: Define Playtest Objectives
+
+Ask the user (or infer from GDD/game brief):
+
+1. **What are we testing?**
+ - Core gameplay loop
+ - Specific feature
+ - Difficulty curve
+ - Tutorial effectiveness
+ - Overall experience
+
+2. **What decisions will this inform?**
+ - Design changes
+ - Difficulty tuning
+ - Feature prioritization
+ - Ship/no-ship decision
+
+3. **What metrics will we collect?**
+ - Completion rates
+ - Time-on-task
+ - Failure points
+ - Player sentiment
+
+---
+
+### Step 2: Choose Playtest Type
+
+Present options and confirm with user:
+
+#### Internal Playtest
+**Best for**: Early validation, bug finding, quick iterations
+
+| Aspect | Details |
+| ------------ | ------------------------- |
+| Participants | Team members, other teams |
+| Duration | 30-60 minutes |
+| Frequency | Weekly or per-milestone |
+| Setup | Minimal, informal |
+
+#### External Playtest
+**Best for**: Unbiased feedback, market validation
+
+| Aspect | Details |
+| ------------ | --------------------------------- |
+| Participants | Target audience, external testers |
+| Duration | 1-2 hours |
+| Frequency | Monthly or milestone |
+| Setup | Formal, NDA if needed |
+
+#### Focused Playtest
+**Best for**: Specific feature validation
+
+| Aspect | Details |
+| ------------ | ---------------------------- |
+| Participants | Selected for specific traits |
+| Duration | 20-45 minutes |
+| Frequency | As needed |
+| Setup | Specific build/scenario |
+
+---
+
+### Step 3: Create Session Structure
+
+#### Pre-Session (10-15 min)
+
+1. **Welcome & Context**
+ - Brief game description (no spoilers)
+ - Session goals (what we're testing)
+ - Comfort check (breaks, questions)
+
+2. **Consent & Setup**
+ - Recording consent (if applicable)
+ - Controller/input preferences
+ - Any accessibility needs
+
+3. **Instructions**
+ - "Play as you normally would"
+ - "Think aloud if comfortable"
+ - "There are no wrong answers"
+
+#### Gameplay Session (30-90 min)
+
+1. **Observation Focus Areas**
+ - Where do players get stuck?
+ - What do they try first?
+ - What surprises them?
+ - Where do they express frustration/joy?
+
+2. **Note-Taking Template**
+
+ ```
+ [TIME] [LOCATION] [OBSERVATION] [PLAYER REACTION]
+ 0:05 Tutorial Skipped help text Seemed impatient
+ 0:12 Combat Died to first enemy Frustrated, retried
+ ```
+
+3. **Intervention Rules**
+ - Let players struggle (within reason)
+ - Note when you want to help
+ - Only intervene for:
+ - Critical bugs
+ - Genuine distress
+ - Session time running out
+
+#### Post-Session (10-20 min)
+
+1. **Immediate Reactions**
+ - "What was your overall impression?"
+ - "What stood out most?"
+ - "Would you play again?"
+
+2. **Specific Questions**
+ - Feature-specific feedback
+ - Difficulty perception
+ - Clarity of objectives
+
+3. **Open Feedback**
+ - "Anything else?"
+ - "Questions for us?"
+
+---
+
+### Step 4: Create Observation Guide
+
+| Category | Signals | Record |
+| ----------- | ------------------------------------- | ------------------ |
+| Confusion | Pausing, wandering, repeating actions | Location, duration |
+| Frustration | Sighing, repeated failures, quitting | Cause, frequency |
+| Engagement | Leaning in, exclaiming, continuing | Features that work |
+| Boredom | Checking phone, disengaging | Drop-off points |
+
+**Quantitative Metrics**:
+- Time to complete tutorial
+- Deaths per section
+- Items/features discovered
+- Session duration
+- Completion rate
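+
+Aggregating those quantitative metrics across participants can be sketched as follows (the session records are illustrative):
+
+```python
+# Each record summarizes one participant's session.
+sessions = [
+    {"completed": True,  "deaths": 3, "minutes": 58},
+    {"completed": False, "deaths": 9, "minutes": 41},
+    {"completed": True,  "deaths": 5, "minutes": 62},
+]
+completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
+avg_deaths = sum(s["deaths"] for s in sessions) / len(sessions)
+avg_minutes = sum(s["minutes"] for s in sessions) / len(sessions)
+print(f"Completion rate: {completion_rate:.0%}, avg deaths: {avg_deaths:.1f}, avg session: {avg_minutes:.0f} min")
+```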
+
+---
+
+### Step 5: Generate Playtest Plan Document
+
+Write `{output_folder}/playtest-plan.md` using the `playtest-template.md` structure:
+
+```markdown
+# Playtest Plan: {Build/Feature Name}
+
+## Overview
+
+- Build version: {version}
+- Session date(s): {dates}
+- Objective: {primary goal}
+
+## Participant Criteria
+
+- Target: {player type}
+- Experience: {gaming background}
+- Count: {number}
+
+## Session Structure
+
+### Pre-Session (15 min)
+
+- Welcome and consent
+- Setup and preferences
+- Brief instructions
+
+### Gameplay (60 min)
+
+- Free play / guided tasks
+- Observation focus: {areas}
+- Intervention threshold: {criteria}
+
+### Post-Session (15 min)
+
+- Immediate reactions
+- Structured questions
+- Open feedback
+
+## Observation Guide
+
+{observation_template}
+
+## Data Collection
+
+- Recording: {yes/no}
+- Notes template: {attached}
+- Metrics: {list}
+
+## Team Roles
+
+- Facilitator: {name}
+- Note-taker: {name}
+- Technical support: {name}
+
+## Post-Playtest Analysis
+
+- Session debrief: {date}
+- Report due: {date}
+- Action items review: {date}
+```
+
+---
+
+### Step 6: Post-Playtest Analysis Framework
+
+Include in the plan document:
+
+#### Synthesize Findings
+
+1. **Pattern Identification**
+ - What issues appeared multiple times?
+ - What worked consistently well?
+
+2. **Severity Assessment**
+ - Critical: Blocks progression
+ - Major: Significantly impacts experience
+ - Minor: Noticeable but manageable
+
+3. **Recommendations**
+ - Immediate fixes
+ - Design considerations
+ - Further investigation needed
+
+#### Report Template
+
+```markdown
+## Playtest Report: {Session}
+
+### Summary
+
+- Participants: {count}
+- Completion rate: {%}
+- Overall sentiment: {positive/mixed/negative}
+
+### Key Findings
+
+1. {Finding with evidence}
+2. {Finding with evidence}
+
+### Recommendations
+
+| Issue | Severity | Recommendation | Priority |
+| ------- | -------- | -------------- | -------- |
+| {issue} | {sev} | {rec} | {P0-P3} |
+
+### Quotes
+
+> "{Notable player quote}" - Participant {N}
+
+### Next Steps
+
+1. {action item}
+2. {action item}
+```
+
+---
+
+## Deliverables
+
+1. **Playtest Plan Document** — Session structure and logistics
+2. **Observation Guide** — What to watch for
+3. **Note-Taking Template** — Standardized recording
+4. **Report Template** — Post-session analysis format
+
+---
+
+## Validation
+
+Refer to `checklist.md` for validation criteria.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/gametest/gds-playtest-plan/customize.toml b/src/workflows/gametest/gds-playtest-plan/customize.toml
new file mode 100644
index 0000000..4885eeb
--- /dev/null
+++ b/src/workflows/gametest/gds-playtest-plan/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-playtest-plan. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Playtest plans must capture hypotheses up front so results stay falsifiable."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed at the end of the workflow — after the final
+# deliverables, output summary, and validation references are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gametest/gds-playtest-plan/workflow.md b/src/workflows/gametest/gds-playtest-plan/workflow.md
deleted file mode 100644
index bf7eae9..0000000
--- a/src/workflows/gametest/gds-playtest-plan/workflow.md
+++ /dev/null
@@ -1,353 +0,0 @@
----
-name: gametest-playtest-plan
-description: 'Playtest session planner. Use when the user says "lets create a playtest plan" or "I want to schedule gameplay testing"'
-main_config: '{module_config}'
-tags:
- - qa
- - playtesting
- - user-research
- - design-validation
- - feedback
-execution_hints:
- interactive: true
- autonomous: false
- iterative: true
----
-
-# Playtest Planning
-
-**Workflow ID**: `gds-playtest-plan`
-**Version**: 1.0 (BMad v6)
-
-## Goal
-
-Create structured playtesting sessions to validate gameplay, gather user feedback, and identify issues that automated testing cannot catch. Playtesting validates "feel" and player experience.
-
-## Role
-
-You are a Game QA Specialist with expertise in designing and facilitating playtesting sessions. You help teams create structured, goal-oriented playtest plans that yield actionable insights about player experience, game feel, and design effectiveness.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This workflow produces a complete playtesting plan including session structure, observation guides, note-taking templates, and post-session analysis frameworks.
-
-**Primary Output**: `{output_folder}/playtest-plan.md`
-
-**Supporting Components**:
-- Validation: `{installed_path}/checklist.md`
-- Template: `{installed_path}/playtest-template.md`
-- Knowledge Base: `knowledge/playtesting.md`
-
-**Input Files** (auto-located):
-- GDD: `{output_folder}/*gdd*.md` or `{output_folder}/*gdd*/*.md` — game mechanics to validate
-- Game Brief: `{output_folder}/*brief*.md` — core pillars
-
----
-
-## INITIALIZATION
-
-Load and resolve configuration from `{module_config}`:
-
-```yaml
-output_folder: {from config}
-user_name: {from config}
-communication_language: {from config}
-document_output_language: {from config}
-game_dev_experience: {from config}
-date: {system-generated}
-```
-
-Resolve workflow variables:
-```yaml
-playtest_type: "internal" # internal | external | focused
-session_duration: 60 # minutes
-participant_count: 5
-```
-
-Greet the user by name (`user_name`) and confirm the playtest type and scope before proceeding.
-
----
-
-## EXECUTION
-
-### Preflight Requirements
-
-Verify before proceeding:
-- Playable build available
-- Test objectives defined
-- Participant criteria known
-
----
-
-### Step 1: Define Playtest Objectives
-
-Ask the user (or infer from GDD/game brief):
-
-1. **What are we testing?**
- - Core gameplay loop
- - Specific feature
- - Difficulty curve
- - Tutorial effectiveness
- - Overall experience
-
-2. **What decisions will this inform?**
- - Design changes
- - Difficulty tuning
- - Feature prioritization
- - Ship/no-ship decision
-
-3. **What metrics will we collect?**
- - Completion rates
- - Time-on-task
- - Failure points
- - Player sentiment
-
----
-
-### Step 2: Choose Playtest Type
-
-Present options and confirm with user:
-
-#### Internal Playtest
-**Best for**: Early validation, bug finding, quick iterations
-
-| Aspect | Details |
-| ------------ | ------------------------- |
-| Participants | Team members, other teams |
-| Duration | 30-60 minutes |
-| Frequency | Weekly or per-milestone |
-| Setup | Minimal, informal |
-
-#### External Playtest
-**Best for**: Unbiased feedback, market validation
-
-| Aspect | Details |
-| ------------ | --------------------------------- |
-| Participants | Target audience, external testers |
-| Duration | 1-2 hours |
-| Frequency | Monthly or milestone |
-| Setup | Formal, NDA if needed |
-
-#### Focused Playtest
-**Best for**: Specific feature validation
-
-| Aspect | Details |
-| ------------ | ---------------------------- |
-| Participants | Selected for specific traits |
-| Duration | 20-45 minutes |
-| Frequency | As needed |
-| Setup | Specific build/scenario |
-
----
-
-### Step 3: Create Session Structure
-
-#### Pre-Session (10-15 min)
-
-1. **Welcome & Context**
- - Brief game description (no spoilers)
- - Session goals (what we're testing)
- - Comfort check (breaks, questions)
-
-2. **Consent & Setup**
- - Recording consent (if applicable)
- - Controller/input preferences
- - Any accessibility needs
-
-3. **Instructions**
- - "Play as you normally would"
- - "Think aloud if comfortable"
- - "There are no wrong answers"
-
-#### Gameplay Session (30-90 min)
-
-1. **Observation Focus Areas**
- - Where do players get stuck?
- - What do they try first?
- - What surprises them?
- - Where do they express frustration/joy?
-
-2. **Note-Taking Template**
-
- ```
- [TIME] [LOCATION] [OBSERVATION] [PLAYER REACTION]
- 0:05 Tutorial Skipped help text Seemed impatient
- 0:12 Combat Died to first enemy Frustrated, retried
- ```
-
-3. **Intervention Rules**
- - Let players struggle (within reason)
- - Note when you want to help
- - Only intervene for:
- - Critical bugs
- - Genuine distress
- - Session time running out
-
-#### Post-Session (10-20 min)
-
-1. **Immediate Reactions**
- - "What was your overall impression?"
- - "What stood out most?"
- - "Would you play again?"
-
-2. **Specific Questions**
- - Feature-specific feedback
- - Difficulty perception
- - Clarity of objectives
-
-3. **Open Feedback**
- - "Anything else?"
- - "Questions for us?"
-
----
-
-### Step 4: Create Observation Guide
-
-| Category | Signals | Record |
-| ----------- | ------------------------------------- | ------------------ |
-| Confusion | Pausing, wandering, repeating actions | Location, duration |
-| Frustration | Sighing, repeated failures, quitting | Cause, frequency |
-| Engagement | Leaning in, exclaiming, continuing | Features that work |
-| Boredom | Checking phone, disengaging | Drop-off points |
-
-**Quantitative Metrics**:
-- Time to complete tutorial
-- Deaths per section
-- Items/features discovered
-- Session duration
-- Completion rate
-
----
-
-### Step 5: Generate Playtest Plan Document
-
-Write `{output_folder}/playtest-plan.md` using the `playtest-template.md` structure:
-
-```markdown
-# Playtest Plan: {Build/Feature Name}
-
-## Overview
-
-- Build version: {version}
-- Session date(s): {dates}
-- Objective: {primary goal}
-
-## Participant Criteria
-
-- Target: {player type}
-- Experience: {gaming background}
-- Count: {number}
-
-## Session Structure
-
-### Pre-Session (15 min)
-
-- Welcome and consent
-- Setup and preferences
-- Brief instructions
-
-### Gameplay (60 min)
-
-- Free play / guided tasks
-- Observation focus: {areas}
-- Intervention threshold: {criteria}
-
-### Post-Session (15 min)
-
-- Immediate reactions
-- Structured questions
-- Open feedback
-
-## Observation Guide
-
-{observation_template}
-
-## Data Collection
-
-- Recording: {yes/no}
-- Notes template: {attached}
-- Metrics: {list}
-
-## Team Roles
-
-- Facilitator: {name}
-- Note-taker: {name}
-- Technical support: {name}
-
-## Post-Playtest Analysis
-
-- Session debrief: {date}
-- Report due: {date}
-- Action items review: {date}
-```
-
----
-
-### Step 6: Post-Playtest Analysis Framework
-
-Include in the plan document:
-
-#### Synthesize Findings
-
-1. **Pattern Identification**
- - What issues appeared multiple times?
- - What worked consistently well?
-
-2. **Severity Assessment**
- - Critical: Blocks progression
- - Major: Significantly impacts experience
- - Minor: Noticeable but manageable
-
-3. **Recommendations**
- - Immediate fixes
- - Design considerations
- - Further investigation needed
-
-#### Report Template
-
-```markdown
-## Playtest Report: {Session}
-
-### Summary
-
-- Participants: {count}
-- Completion rate: {%}
-- Overall sentiment: {positive/mixed/negative}
-
-### Key Findings
-
-1. {Finding with evidence}
-2. {Finding with evidence}
-
-### Recommendations
-
-| Issue | Severity | Recommendation | Priority |
-| ------- | -------- | -------------- | -------- |
-| {issue} | {sev} | {rec} | {P0-P3} |
-
-### Quotes
-
-> "{Notable player quote}" - Participant {N}
-
-### Next Steps
-
-1. {action item}
-2. {action item}
-```
-
----
-
-## Deliverables
-
-1. **Playtest Plan Document** — Session structure and logistics
-2. **Observation Guide** — What to watch for
-3. **Note-Taking Template** — Standardized recording
-4. **Report Template** — Post-session analysis format
-
----
-
-## Validation
-
-Refer to `checklist.md` for validation criteria.
diff --git a/src/workflows/gametest/gds-test-automate/SKILL.md b/src/workflows/gametest/gds-test-automate/SKILL.md
index a46115e..c7e050f 100644
--- a/src/workflows/gametest/gds-test-automate/SKILL.md
+++ b/src/workflows/gametest/gds-test-automate/SKILL.md
@@ -3,4 +3,387 @@ name: gds-test-automate
description: 'Generate automated game tests for gameplay systems. Use when the user says "automate tests" or "generate tests"'
---
-Follow the instructions in ./workflow.md.
+# Game Test Automation Workflow
+
+**Goal:** Generate automated test code for game projects based on test design scenarios or by analyzing existing game code. Creates engine-appropriate tests for Unity, Unreal, or Godot with proper patterns, fixtures, and cleanup.
+
+**Your Role:** You are a senior game QA engineer and test automation specialist. Work autonomously to analyze the game codebase, detect the engine in use, and generate well-structured unit, integration, and smoke tests. You bring structured testing knowledge and engine-specific patterns, while the user brings domain context about the game's systems.
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
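The merge rules above can be sketched in Python. This is an illustrative sketch of the resolver's structural merge, not the actual `resolve_customization.py` implementation:

```python
def merge(base, override):
    # Structural merge: scalars override, tables deep-merge, arrays of
    # tables keyed by `code`/`id` replace-or-append, other arrays append.
    if isinstance(base, dict) and isinstance(override, dict):
        out = dict(base)
        for key, value in override.items():
            out[key] = merge(base[key], value) if key in base else value
        return out
    if isinstance(base, list) and isinstance(override, list):
        keyed = base and all(
            isinstance(x, dict) and ("code" in x or "id" in x)
            for x in base + override
        )
        if keyed:
            keyfn = lambda d: d.get("code") or d.get("id")
            out = list(base)
            index = {keyfn(d): i for i, d in enumerate(out)}
            for item in override:
                k = keyfn(item)
                if k in index:
                    out[index[k]] = item  # replace matching entry
                else:
                    out.append(item)      # append new entry
            return out
        return base + override            # plain arrays append
    return override                       # scalar: override wins
```

Applied file-by-file in base → team → user order, later layers win on scalars while list-valued settings accumulate.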
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
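As a hedged illustration of the fact-loading rule above, resolution might look like this (the `load_persistent_facts` helper is hypothetical, not part of the skill):

```python
import glob
import os

def load_persistent_facts(entries, project_root):
    # Literal entries pass through verbatim; `file:` entries expand
    # globs under the project root and load each file's contents.
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            pattern = entry[len("file:"):].replace("{project-root}", project_root)
            for path in sorted(glob.glob(pattern, recursive=True)):
                with open(path) as f:
                    facts.append(f.read())
        else:
            facts.append(entry)
    return facts
```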
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `output_folder`
+- `date` as the system-generated current datetime
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## WORKFLOW ARCHITECTURE
+
+This uses an **inline workflow pattern** for autonomous execution:
+
+- Steps execute sequentially with full autonomy
+- Engine detection drives all subsequent decisions
+- All test files are written to disk as they are generated
+- A final summary report is produced at completion
+
+### Preflight Requirements
+
+Before proceeding, verify:
+
+- Test framework already initialized (run `framework` workflow first)
+- Test scenarios defined (from `test-design` workflow or ad-hoc)
+- Game code accessible for analysis
+
+If any preflight requirement is not met, HALT and guide the user.
+
+### Paths
+
+- `installed_path` = `{skill-root}`
+- `validation` = `{installed_path}/checklist.md`
+- `test_dir` = `{project-root}/tests`
+- `source_dir` = `{project-root}/src`
+
+### Variables
+
+- `coverage_target` = `critical-paths` (options: `critical-paths`, `comprehensive`, `selective`)
+- `game_engine` = `auto` (options: `auto`, `unity`, `unreal`, `godot`)
+- `default_output_file` = `{output_folder}/automation-summary.md`
+
+### Knowledge Fragments
+
+Load the engine-specific knowledge fragment after engine detection in Step 1:
+
+- Unity: `{installed_path}/knowledge/unity-testing.md`
+- Unreal: `{installed_path}/knowledge/unreal-testing.md`
+- Godot: `{installed_path}/knowledge/godot-testing.md`
+- E2E patterns: `{installed_path}/knowledge/e2e-testing.md`
+
+---
+
+## EXECUTION
+
+### Step 1: Detect Engine and Analyze Codebase
+
+ Detect Game Engine by checking for engine-specific project files:
+ - Unity: `Assets/`, `ProjectSettings/`, `*.unity` scenes
+ - Unreal: `*.uproject`, `Source/`, `Config/DefaultEngine.ini`
+ - Godot: `project.godot`, `*.tscn`, `*.gd` files
+
+ Load the appropriate engine-specific knowledge fragment
+ Identify testable systems in the codebase:
+ - Pure logic classes (calculators, managers)
+ - State machines (AI, gameplay)
+ - Data structures (inventory, save data)
+
+ Locate existing tests:
+ - Find test directory structure
+ - Identify test patterns already in use
+ - Check for test helpers/fixtures
+
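A minimal sketch of the detection rules above, assuming detection probes for marker files (the `detect_engine` helper is illustrative, not part of the skill):

```python
from pathlib import Path

def detect_engine(root: Path) -> str:
    """Probe for engine-specific marker files, mirroring the rules above."""
    if (root / "project.godot").exists():
        return "godot"
    if any(root.glob("*.uproject")):
        return "unreal"
    if (root / "ProjectSettings").is_dir() and (root / "Assets").is_dir():
        return "unity"
    return "unknown"
```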
+### Step 2: Generate Unit Tests
+
+ For each identified testable system, generate a test file using the appropriate engine template below
+
+
+
+ Generate NUnit test fixtures following this pattern:
+```csharp
+using NUnit.Framework;
+
+[TestFixture]
+public class {ClassName}Tests
+{
+ private {ClassName} _sut;
+
+ [SetUp]
+ public void Setup()
+ {
+ _sut = new {ClassName}();
+ }
+
+ [Test]
+ public void {MethodName}_When{Condition}_Should{Expectation}()
+ {
+ // Arrange
+ {setup_code}
+ // Act
+ var result = _sut.{MethodName}({parameters});
+ // Assert
+ Assert.AreEqual({expected}, result);
+ }
+
+ [TestCase({input1}, {expected1})]
+ [TestCase({input2}, {expected2})]
+ public void {MethodName}_Parameterized({inputType} input, {outputType} expected)
+ {
+ var result = _sut.{MethodName}(input);
+ Assert.AreEqual(expected, result);
+ }
+}
+```
+
+
+
+
+
+ Generate Automation Test macros following this pattern:
+```cpp
+#include "Misc/AutomationTest.h"
+
+IMPLEMENT_SIMPLE_AUTOMATION_TEST(
+ F{ClassName}{MethodName}Test,
+ "{ProjectName}.{Category}.{TestName}",
+ EAutomationTestFlags::ApplicationContextMask |
+ EAutomationTestFlags::ProductFilter
+)
+
+bool F{ClassName}{MethodName}Test::RunTest(const FString& Parameters)
+{
+ // Arrange
+ {setup_code}
+ // Act
+ auto Result = {ClassName}::{MethodName}({parameters});
+ // Assert
+ TestEqual("{assertion_message}", Result, {expected});
+ return true;
+}
+```
+
+
+
+
+
+ Generate GUT test files following this pattern:
+```gdscript
+extends GutTest
+
+var _sut: {ClassName}
+
+func before_each():
+ _sut = {ClassName}.new()
+
+func after_each():
+ _sut.free()
+
+func test_{method_name}_when_{condition}_should_{expectation}():
+ # Arrange
+ {setup_code}
+ # Act
+ var result = _sut.{method_name}({parameters})
+ # Assert
+ assert_eq(result, {expected}, "{assertion_message}")
+
+func test_{method_name}_parameterized():
+ var test_cases = [
+ {"input": {input1}, "expected": {expected1}},
+ {"input": {input2}, "expected": {expected2}}
+ ]
+ for tc in test_cases:
+ var result = _sut.{method_name}(tc.input)
+ assert_eq(result, tc.expected)
+```
+
+
+
+ Write each generated unit test file to the appropriate location under `{test_dir}/unit/`
+
+### Step 3: Generate Integration and E2E Tests
+
+ Generate scene/level integration tests using the appropriate engine template
+
+
+ Generate Unity Play Mode integration tests:
+```csharp
+[UnityTest]
+public IEnumerator {SceneName}_Loads_WithoutErrors()
+{
+ SceneManager.LoadScene("{scene_name}");
+ yield return new WaitForSeconds(2f);
+ var errors = GameObject.FindObjectsOfType<{ErrorComponentType}>()
+ .Where(e => e.HasErrors);
+ Assert.IsEmpty(errors, "Scene should load without errors");
+}
+```
+
+
+
+
+ Generate Unreal Functional Test actors:
+```cpp
+void A{TestName}::StartTest()
+{
+ Super::StartTest();
+ {setup}
+ if ({condition})
+ FinishTest(EFunctionalTestResult::Succeeded, "{message}");
+ else
+ FinishTest(EFunctionalTestResult::Failed, "{failure_message}");
+}
+```
+
+
+
+
+ Generate Godot integration tests:
+```gdscript
+func test_{feature}_integration():
+ var scene = load("res://scenes/{scene}.tscn").instantiate()
+ add_child(scene)
+ await get_tree().process_frame
+ {test_code}
+ scene.queue_free()
+```
+
+
+
+ Write each generated integration test file to `{test_dir}/integration/`
+
+#### Scaffold E2E Infrastructure
+
+ Before generating E2E tests, scaffold the required infrastructure components:
+ 1. Test Fixture Base Class — scene loading/unloading, game ready state waiting, common service access, cleanup guarantees
+ 2. Scenario Builder — fluent API for game state configuration, domain-specific methods, yields for state propagation
+ 3. Input Simulator — click/drag abstractions, button press simulation, keyboard input queuing
+ 4. Async Assertions — WaitUntil with timeout and message, WaitForEvent for event-driven flows, WaitForState for state machine transitions
+
+ Generate the GameE2ETestFixture base class using this template:
+```csharp
+public abstract class GameE2ETestFixture
+{
+ protected {GameStateClass} GameState;
+ protected {InputSimulatorClass} Input;
+ protected {ScenarioBuilderClass} Scenario;
+
+ [UnitySetUp]
+ public IEnumerator BaseSetUp()
+ {
+ yield return LoadScene("{main_scene}");
+ GameState = Object.FindFirstObjectByType<{GameStateClass}>();
+ Input = new {InputSimulatorClass}();
+ Scenario = new {ScenarioBuilderClass}(GameState);
+ yield return WaitForReady();
+ }
+}
+```
+
+ Write infrastructure files to `{test_dir}/e2e/infrastructure/` or the engine-appropriate equivalent
+ After scaffolding infrastructure, proceed to generate actual E2E tests
+
+### Step 4: Generate Smoke Tests
+
+ Create critical path tests that run on every build, covering:
+ 1. Game launches without crash
+ 2. Main menu is navigable
+ 3. New game starts successfully
+ 4. Core gameplay loop executes
+ 5. Save/load works
+
+ Generate engine-appropriate smoke tests, for example (Unity):
+```csharp
+[UnityTest, Timeout(60000)]
+public IEnumerator Smoke_NewGame_StartsSuccessfully()
+{
+ SceneManager.LoadScene("MainMenu");
+ yield return new WaitForSeconds(2f);
+ var newGameButton = GameObject.Find("NewGameButton");
+ newGameButton.GetComponent<Button>().onClick.Invoke();
+ yield return new WaitForSeconds(2f);
+ Assert.AreNotEqual("MainMenu", SceneManager.GetActiveScene().name);
+}
+```
+
+ Write smoke tests to `{test_dir}/smoke/`
+
+
+ Ensure generated tests do NOT:
+ - Test engine functionality instead of game logic
+ - Rely on hard-coded waits as the primary synchronization mechanism (use signals/events)
+ - Depend on execution order
+ - Skip cleanup in teardown
+
+### Step 5: Generate Test Report
+
+ After all test files have been written, create an automation summary at `{default_output_file}` using this structure:
+
+```markdown
+## Automation Summary
+
+**Engine**: {Unity | Unreal | Godot}
+**Tests Generated**: {count}
+**Date**: {date}
+
+### Test Distribution
+
+| Type | Count | Coverage |
+| ----------- | ----- | ------------- |
+| Unit Tests | {n} | {systems} |
+| Integration | {n} | {features} |
+| Smoke Tests | {n} | Critical path |
+
+### Files Created
+
+- `tests/unit/{file1}.{ext}`
+- `tests/integration/{file2}.{ext}`
+- `tests/smoke/{file3}.{ext}`
+
+### Next Steps
+
+1. Review generated tests
+2. Fill in test-specific logic where placeholders remain
+3. Run tests to verify they pass
+4. Add to CI pipeline
+```
+
+ Load and apply `{validation}` checklist to verify all deliverables are complete
+ Present the automation summary to the user
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+
+
diff --git a/src/workflows/gametest/gds-test-automate/customize.toml b/src/workflows/gametest/gds-test-automate/customize.toml
new file mode 100644
index 0000000..23fde1f
--- /dev/null
+++ b/src/workflows/gametest/gds-test-automate/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-test-automate. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Automated tests must fail loudly on regression and never on flake."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 5 (Generate Test Report),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
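For illustration, a hypothetical team override at `{project-root}/_bmad/custom/gds-test-automate.toml` could look like the following; per the merge rules above, the fact appends to the default list while `on_complete` overrides the default scalar:

```toml
[workflow]

# Appended to the default persistent_facts (arrays append).
persistent_facts = [
  "Generated tests must compile against the project's pinned engine version.",
]

# Scalar: override wins.
on_complete = "Summarize any systems left untested and suggest follow-up coverage."
```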
diff --git a/src/workflows/gametest/gds-test-automate/workflow.md b/src/workflows/gametest/gds-test-automate/workflow.md
deleted file mode 100644
index 51e2cf7..0000000
--- a/src/workflows/gametest/gds-test-automate/workflow.md
+++ /dev/null
@@ -1,353 +0,0 @@
----
-name: gametest-automate
-description: 'Automated test scenario generator. Use when the user says "I want to create automated game tests" or "Generate test scenarios for Unity Unreal or Godot"'
-main_config: '{module_config}'
----
-
-# Game Test Automation Workflow
-
-**Goal:** Generate automated test code for game projects based on test design scenarios or by analyzing existing game code. Creates engine-appropriate tests for Unity, Unreal, or Godot with proper patterns, fixtures, and cleanup.
-
-**Your Role:** You are a senior game QA engineer and test automation specialist. Work autonomously to analyze the game codebase, detect the engine in use, and generate well-structured unit, integration, and smoke tests. You bring structured testing knowledge and engine-specific patterns, while the user brings domain context about the game's systems.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This uses an **inline workflow pattern** for autonomous execution:
-
-- Steps execute sequentially with full autonomy
-- Engine detection drives all subsequent decisions
-- All test files are written to disk as they are generated
-- A final summary report is produced at completion
-
-### Preflight Requirements
-
-Before proceeding, verify:
-
-- Test framework already initialized (run `framework` workflow first)
-- Test scenarios defined (from `test-design` workflow or ad-hoc)
-- Game code accessible for analysis
-
-If any preflight requirement is not met, HALT and guide the user.
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_name`, `output_folder`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- ✅ YOU MUST ALWAYS SPEAK OUTPUT In your Agent communication style with the config `{communication_language}`
-
-### Paths
-
-- `installed_path` = `{skill_root}`
-- `validation` = `{installed_path}/checklist.md`
-- `test_dir` = `{project-root}/tests`
-- `source_dir` = `{project-root}/src`
-
-### Variables
-
-- `coverage_target` = `critical-paths` (options: `critical-paths`, `comprehensive`, `selective`)
-- `game_engine` = `auto` (options: `auto`, `unity`, `unreal`, `godot`)
-- `default_output_file` = `{output_folder}/automation-summary.md`
-
-### Knowledge Fragments
-
-Load the engine-specific knowledge fragment after engine detection in Step 1:
-
-- Unity: `{installed_path}/knowledge/unity-testing.md`
-- Unreal: `{installed_path}/knowledge/unreal-testing.md`
-- Godot: `{installed_path}/knowledge/godot-testing.md`
-- E2E patterns: `{installed_path}/knowledge/e2e-testing.md`
-
----
-
-## EXECUTION
-
-
-
-
- Detect Game Engine by checking for engine-specific project files:
- - Unity: `Assets/`, `ProjectSettings/`, `*.unity` scenes
- - Unreal: `*.uproject`, `Source/`, `Config/DefaultEngine.ini`
- - Godot: `project.godot`, `*.tscn`, `*.gd` files
-
- Load the appropriate engine-specific knowledge fragment
- Identify testable systems in the codebase:
- - Pure logic classes (calculators, managers)
- - State machines (AI, gameplay)
- - Data structures (inventory, save data)
-
- Locate existing tests:
- - Find test directory structure
- - Identify test patterns already in use
- - Check for test helpers/fixtures
-
-
-
-
- For each identified testable system, generate a test file using the appropriate engine template below
-
-
-
- Generate NUnit test fixtures following this pattern:
-```csharp
-using NUnit.Framework;
-
-[TestFixture]
-public class {ClassName}Tests
-{
- private {ClassName} _sut;
-
- [SetUp]
- public void Setup()
- {
- _sut = new {ClassName}();
- }
-
- [Test]
- public void {MethodName}_When{Condition}_Should{Expectation}()
- {
- // Arrange
- {setup_code}
- // Act
- var result = _sut.{MethodName}({parameters});
- // Assert
- Assert.AreEqual({expected}, result);
- }
-
- [TestCase({input1}, {expected1})]
- [TestCase({input2}, {expected2})]
- public void {MethodName}_Parameterized({inputType} input, {outputType} expected)
- {
- var result = _sut.{MethodName}(input);
- Assert.AreEqual(expected, result);
- }
-}
-```
-
-
-
-
-
- Generate Automation Test macros following this pattern:
-```cpp
-#include "Misc/AutomationTest.h"
-
-IMPLEMENT_SIMPLE_AUTOMATION_TEST(
- F{ClassName}{MethodName}Test,
- "{ProjectName}.{Category}.{TestName}",
- EAutomationTestFlags::ApplicationContextMask |
- EAutomationTestFlags::ProductFilter
-)
-
-bool F{ClassName}{MethodName}Test::RunTest(const FString& Parameters)
-{
- // Arrange
- {setup_code}
- // Act
- auto Result = {ClassName}::{MethodName}({parameters});
- // Assert
- TestEqual("{assertion_message}", Result, {expected});
- return true;
-}
-```
-
-
-
-
-
- Generate GUT test files following this pattern:
-```gdscript
-extends GutTest
-
-var _sut: {ClassName}
-
-func before_each():
- _sut = {ClassName}.new()
-
-func after_each():
- _sut.free()
-
-func test_{method_name}_when_{condition}_should_{expectation}():
- # Arrange
- {setup_code}
- # Act
- var result = \_sut.{method_name}({parameters})
- # Assert
- assert_eq(result, {expected}, "{assertion_message}")
-
-func test_{method_name}_parameterized():
- var test_cases = [
- {"input": {input1}, "expected": {expected1}},
- {"input": {input2}, "expected": {expected2}}
- ]
- for tc in test_cases:
- var result = \_sut.{method_name}(tc.input)
- assert_eq(result, tc.expected)
-```
-
-
-
- Write each generated unit test file to the appropriate location under `{test_dir}/unit/`
-
-
-
- Generate scene/level integration tests using the appropriate engine template
-
-
- Generate Unity Play Mode integration tests:
-```csharp
-[UnityTest]
-public IEnumerator {SceneName}_Loads_WithoutErrors()
-{
- SceneManager.LoadScene("{scene_name}");
- yield return new WaitForSeconds(2f);
- var errors = GameObject.FindObjectsOfType()
- .Where(e => e.HasErrors);
- Assert.IsEmpty(errors, "Scene should load without errors");
-}
-```
-
-
-
-
- Generate Unreal Functional Test actors:
-```cpp
-void A{TestName}::StartTest()
-{
- Super::StartTest();
- {setup}
- if ({condition})
- FinishTest(EFunctionalTestResult::Succeeded, "{message}");
- else
- FinishTest(EFunctionalTestResult::Failed, "{failure_message}");
-}
-```
-
-
-
-
- Generate Godot integration tests:
-```gdscript
-func test_{feature}_integration():
- var scene = load("res://scenes/{scene}.tscn").instantiate()
- add_child(scene)
- await get_tree().process_frame
- {test_code}
- scene.queue_free()
-```
-
-
-
- Write each generated integration test file to `{test_dir}/integration/`
-
-
-
- Before generating E2E tests, scaffold the required infrastructure components:
- 1. Test Fixture Base Class — scene loading/unloading, game ready state waiting, common service access, cleanup guarantees
- 2. Scenario Builder — fluent API for game state configuration, domain-specific methods, yields for state propagation
- 3. Input Simulator — click/drag abstractions, button press simulation, keyboard input queuing
- 4. Async Assertions — WaitUntil with timeout and message, WaitForEvent for event-driven flows, WaitForState for state machine transitions
-
- Generate the GameE2ETestFixture base class using this template:
-```csharp
-public abstract class GameE2ETestFixture
-{
- protected {GameStateClass} GameState;
- protected {InputSimulatorClass} Input;
- protected {ScenarioBuilderClass} Scenario;
-
- [UnitySetUp]
- public IEnumerator BaseSetUp()
- {
- yield return LoadScene("{main_scene}");
- GameState = Object.FindFirstObjectByType<{GameStateClass}>();
- Input = new {InputSimulatorClass}();
- Scenario = new {ScenarioBuilderClass}(GameState);
- yield return WaitForReady();
- }
-}
-```
-
- Write infrastructure files to `{test_dir}/e2e/infrastructure/` or the engine-appropriate equivalent
- After scaffolding infrastructure, proceed to generate actual E2E tests
-
-
-
- Create critical path tests that run on every build, covering:
- 1. Game launches without crash
- 2. Main menu is navigable
- 3. New game starts successfully
- 4. Core gameplay loop executes
- 5. Save/load works
-
- Generate engine-appropriate smoke tests, for example (Unity):
-```csharp
-[UnityTest, Timeout(60000)]
-public IEnumerator Smoke_NewGame_StartsSuccessfully()
-{
- SceneManager.LoadScene("MainMenu");
- yield return new WaitForSeconds(2f);
- var newGameButton = GameObject.Find("NewGameButton");
- newGameButton.GetComponent
- Write smoke tests to `{test_dir}/smoke/`
-
-
- Ensure generated tests do NOT:
- - Test engine functionality (not game logic)
- - Use hard-coded waits as primary sync (use signals/events)
- - Depend on execution order
- - Lack cleanup in teardown
-
-
-
-
- After all test files have been written, create an automation summary at `{default_output_file}` using this structure:
-
-```markdown
-## Automation Summary
-
-**Engine**: {Unity | Unreal | Godot}
-**Tests Generated**: {count}
-**Date**: {date}
-
-### Test Distribution
-
-| Type | Count | Coverage |
-| ----------- | ----- | ------------- |
-| Unit Tests | {n} | {systems} |
-| Integration | {n} | {features} |
-| Smoke Tests | {n} | Critical path |
-
-### Files Created
-
-- `tests/unit/{file1}.{ext}`
-- `tests/integration/{file2}.{ext}`
-- `tests/smoke/{file3}.{ext}`
-
-### Next Steps
-
-1. Review generated tests
-2. Fill in test-specific logic where placeholders remain
-3. Run tests to verify they pass
-4. Add to CI pipeline
-```
-
- Load and apply `{validation}` checklist to verify all deliverables are complete
- Present the automation summary to the user
-
-
-
diff --git a/src/workflows/gametest/gds-test-design/SKILL.md b/src/workflows/gametest/gds-test-design/SKILL.md
index 40ee993..43d46d8 100644
--- a/src/workflows/gametest/gds-test-design/SKILL.md
+++ b/src/workflows/gametest/gds-test-design/SKILL.md
@@ -3,4 +3,426 @@ name: gds-test-design
description: 'Create comprehensive game test scenarios. Use when the user says "test design" or "design tests"'
---
-Follow the instructions in ./workflow.md.
+# Game Test Design
+
+**Workflow ID**: `gds-test-design`
+**Version**: 1.0 (BMad v6)
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
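+
+These merge rules amount to a small recursive structural merge. A minimal Python sketch (illustrative only; `resolve_customization.py` remains authoritative):
+
+```python
+def merge(base, override):
+    """Structurally merge two parsed TOML fragments per the rules above."""
+    if isinstance(base, dict) and isinstance(override, dict):
+        # Tables deep-merge key by key.
+        out = dict(base)
+        for key, val in override.items():
+            out[key] = merge(base[key], val) if key in base else val
+        return out
+    if isinstance(base, list) and isinstance(override, list):
+        both = base + override
+        if both and all(isinstance(x, dict) and ("code" in x or "id" in x) for x in both):
+            # Arrays of tables keyed by `code`/`id`: replace matches, append new.
+            keyed = {x.get("code", x.get("id")): i for i, x in enumerate(base)}
+            out = list(base)
+            for item in override:
+                k = item.get("code", item.get("id"))
+                if k in keyed:
+                    out[keyed[k]] = item
+                else:
+                    out.append(item)
+            return out
+        return base + override  # all other arrays append
+    return override  # scalars (and type mismatches): override wins
+```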
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
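+
+A `file:` entry can be expanded with a simple glob pass. A minimal sketch (assumed behavior; the resolver's exact semantics may differ):
+
+```python
+import glob
+
+def load_persistent_facts(entries, project_root):
+    """Expand `file:` entries into file contents; pass literal facts through."""
+    facts = []
+    for entry in entries:
+        if entry.startswith("file:"):
+            pattern = entry[len("file:"):].replace("{project-root}", project_root)
+            for path in sorted(glob.glob(pattern, recursive=True)):
+                with open(path, encoding="utf-8") as f:
+                    facts.append(f.read())
+        else:
+            facts.append(entry)
+    return facts
+```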
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`
+- `user_name`
+- `communication_language`
+- `output_folder`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## Goal
+
+Create comprehensive test scenarios for game projects, covering gameplay mechanics, progression systems, multiplayer functionality, and platform requirements. This workflow produces a prioritized test plan based on risk assessment and player impact.
+
+## Role
+
+You are a Game QA Engineer specializing in test design. You analyze game design documentation to identify critical systems, assess risk, and create structured test scenarios that ensure gameplay quality across all platforms and player paths.
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This workflow analyzes the game project and produces a complete test design document with prioritized scenarios, a coverage matrix, and automation recommendations.
+
+**Primary Output**: `{output_folder}/game-test-design.md`
+
+**Supporting Components**:
+- Validation: `{installed_path}/checklist.md`
+- Template: `{installed_path}/test-design-template.md`
+
+**Knowledge Base References**:
+- `knowledge/playtesting.md`
+- `knowledge/save-testing.md`
+- `knowledge/multiplayer-testing.md`
+- `knowledge/certification-testing.md`
+- `knowledge/e2e-testing.md`
+- `knowledge/test-priorities.md`
+
+---
+
+## INITIALIZATION
+
+Load and resolve configuration from `{module_config}`:
+
+```yaml
+output_folder: {from config}
+user_name: {from config}
+communication_language: {from config}
+document_output_language: {from config}
+game_dev_experience: {from config}
+date: {system-generated}
+```
+
+Resolve workflow variables:
+```yaml
+design_level: "full" # full | targeted | minimal
+focus_area: "auto" # auto | gameplay | progression | multiplayer | performance
+```
+
+Search the project for game design documentation before proceeding.
+
+---
+
+## EXECUTION
+
+### Preflight Requirements
+
+Verify before proceeding:
+- Game design documentation available (GDD, feature specs)
+- Understanding of target platforms
+- Knowledge of core gameplay loop
+
+---
+
+### Step 1: Gather Context
+
+#### Actions
+
+1. **Read Game Design Documentation**
+ - Locate GDD or game-design.md
+ - Identify core mechanics and features
+ - Note target platforms and certification requirements
+
+2. **Identify Critical Systems**
+ - Core gameplay loop
+ - Progression/save systems
+ - Multiplayer (if applicable)
+ - Monetization (if applicable)
+
+3. **Assess Risk Areas**
+ - Player-facing features (highest priority)
+ - Data persistence (save/load)
+ - Platform certification requirements
+ - Performance-critical paths
+
+---
+
+### Step 2: Define Test Categories
+
+#### Core Gameplay Testing
+
+**Knowledge Base Reference**: `knowledge/playtesting.md`
+
+| Category | Focus | Priority |
+| ------------------ | -------------------------- | -------- |
+| Core Loop | Primary mechanic execution | P0 |
+| Combat/Interaction | Hit detection, feedback | P0 |
+| Movement | Physics, collision, feel | P0 |
+| UI/UX | Menu navigation, HUD | P1 |
+| Audio | Sound triggers, music | P2 |
+
+#### Progression Testing
+
+**Knowledge Base Reference**: `knowledge/save-testing.md`
+
+| Category | Focus | Priority |
+| ------------ | ------------------ | -------- |
+| Save/Load | Data persistence | P0 |
+| Unlocks | Content gating | P1 |
+| Economy | Currency, rewards | P1 |
+| Achievements | Trigger conditions | P2 |
+
+#### Multiplayer Testing (if applicable)
+
+**Knowledge Base Reference**: `knowledge/multiplayer-testing.md`
+
+| Category | Focus | Priority |
+| --------------- | ------------------- | -------- |
+| Connectivity | Join/leave handling | P0 |
+| Synchronization | State consistency | P0 |
+| Latency | Degraded network | P1 |
+| Matchmaking | Player grouping | P1 |
+
+#### Platform Testing
+
+**Knowledge Base Reference**: `knowledge/certification-testing.md`
+
+| Category | Focus | Priority |
+| ------------- | ------------------- | -------- |
+| Certification | TRC/XR requirements | P0 |
+| Input | Controller support | P0 |
+| Performance | FPS, loading times | P1 |
+| Accessibility | Assist features | P1 |
+
+#### E2E Journey Testing
+
+**Knowledge Base Reference**: `knowledge/e2e-testing.md`
+
+| Category            | Focus                         | Priority |
+| ------------------- | ----------------------------- | -------- |
+| Core Loop           | Complete gameplay cycle       | P0       |
+| Turn Lifecycle      | Full turn from start to end   | P0       |
+| Save/Load Roundtrip | Save → quit → load → resume   | P0       |
+| Scene Transitions   | Menu → Game → Back            | P1       |
+| Win/Lose Paths      | Victory and defeat conditions | P1       |
+
+---
+
+### Step 3: Create Test Scenarios
+
+#### Scenario Format
+
+For each critical feature, create scenarios using this format:
+
+```
+SCENARIO: [Descriptive Name]
+ GIVEN [Initial state/preconditions]
+ WHEN [Action taken]
+ THEN [Expected outcome]
+ PRIORITY: P0/P1/P2/P3
+ CATEGORY: [gameplay/progression/multiplayer/platform]
+```
+
+#### Example Scenarios
+
+**Gameplay - Combat**
+
+```
+SCENARIO: Basic Attack Hits Enemy
+ GIVEN player is within attack range of enemy
+ AND enemy has 100 health
+ WHEN player performs basic attack
+ THEN enemy receives damage
+ AND damage feedback plays (visual + audio)
+ AND enemy health decreases
+ PRIORITY: P0
+ CATEGORY: gameplay
+```
+
+**Progression - Save System**
+
+```
+SCENARIO: Save Preserves Player Progress
+ GIVEN player has 500 gold and 3 items
+ AND player is at checkpoint
+ WHEN game saves
+ AND game is reloaded
+ THEN player has 500 gold
+ AND player has same 3 items
+ AND player is at same checkpoint
+ PRIORITY: P0
+ CATEGORY: progression
+```
+
+**Multiplayer - Network Degradation**
+
+```
+SCENARIO: Gameplay Under High Latency
+ GIVEN 2 players in session
+ AND network latency is 200ms
+ WHEN Player 1 attacks Player 2
+ THEN damage is applied correctly
+ AND positions remain synchronized
+ AND no desync occurs
+ PRIORITY: P1
+ CATEGORY: multiplayer
+```
+
+#### E2E Scenario Format
+
+For player journey tests, use this extended format:
+
+```
+E2E SCENARIO: [Player Journey Name]
+ GIVEN [Initial game state - use ScenarioBuilder terms]
+ WHEN [Sequence of player actions]
+ THEN [Observable outcomes]
+ TIMEOUT: [Expected max duration in seconds]
+ PRIORITY: P0/P1
+ CATEGORY: e2e
+ INFRASTRUCTURE: [Required fixtures/builders]
+```
+
+**Example E2E Scenario**:
+
+```
+E2E SCENARIO: Complete Combat Encounter
+ GIVEN game loaded with player unit adjacent to enemy
+ AND player unit has full health and actions
+ WHEN player selects unit
+ AND player clicks attack on enemy
+ AND player confirms attack
+ AND attack animation completes
+ AND enemy responds (if alive)
+ THEN enemy health is reduced OR enemy is defeated
+ AND turn state advances appropriately
+ AND UI reflects new state
+ TIMEOUT: 15
+ PRIORITY: P0
+ CATEGORY: e2e
+ INFRASTRUCTURE: ScenarioBuilder, InputSimulator, AsyncAssert
+```
+
+---
+
+### Step 4: Prioritize Test Coverage
+
+**Knowledge Base Reference**: `knowledge/test-priorities.md`
+
+| Priority | Criteria       | Unit | Integration | E2E         | Manual    |
+| -------- | -------------- | ---- | ----------- | ----------- | --------- |
+| P0       | Ship blockers  | 100% | 80%         | Core flows  | Smoke     |
+| P1       | Major features | 90%  | 70%         | Happy paths | Full      |
+| P2       | Secondary      | 80%  | 50%         | -           | Targeted  |
+| P3       | Edge cases     | 60%  | -           | -           | As needed |
+
+**Risk-Based Ordering**:
+
+1. **Critical Path** — Main gameplay loop
+2. **Data Integrity** — Save/load, progression
+3. **Platform Requirements** — Certification items
+4. **User Experience** — Feel, polish, accessibility
+
+---
+
+### Step 5: Generate Test Design Document
+
+Write `{output_folder}/game-test-design.md` using the `test-design-template.md` structure:
+
+```markdown
+# Game Test Design: [Project Name]
+
+## Overview
+
+- Game type and core mechanics
+- Target platforms
+- Test scope and objectives
+
+## Risk Assessment
+
+- High-risk areas identified
+- Mitigation strategies
+
+## Test Categories
+
+### Gameplay Tests
+
+[Scenarios...]
+
+### Progression Tests
+
+[Scenarios...]
+
+### Multiplayer Tests (if applicable)
+
+[Scenarios...]
+
+### Platform Tests
+
+[Scenarios...]
+
+## Coverage Matrix
+
+| Feature | P0 | P1 | P2 | P3 |
+| ------- | --- | --- | --- | --- |
+| Combat | 5 | 10 | 8 | 4 |
+| ... | | | | |
+
+## Automation Strategy
+
+- Unit test candidates
+- Integration test candidates
+- Manual-only scenarios
+
+## Next Steps
+
+1. Implement P0 tests
+2. Set up CI integration
+3. Plan playtesting sessions
+```
+
+---
+
+## Deliverables
+
+1. **Test Design Document** — `{output_folder}/game-test-design.md`
+2. **Scenario List** — Prioritized test scenarios
+3. **Coverage Matrix** — Feature vs priority breakdown
+4. **Automation Recommendations** — What to automate vs manual test
+
+---
+
+## Output Summary
+
+After completing, provide:
+
+```markdown
+## Test Design Complete
+
+**Project**: {project_name}
+**Scenarios Created**: {count}
+**Priority Breakdown**:
+
+- P0 (Critical): {p0_count}
+- P1 (High): {p1_count}
+- P2 (Medium): {p2_count}
+- P3 (Low): {p3_count}
+
+**Focus Areas Covered**:
+
+- Core Gameplay
+- Progression/Save
+- Platform Requirements
+- {Multiplayer if applicable}
+
+**Next Steps**:
+
+1. Review scenarios with team
+2. Use `automate` workflow to generate test code
+3. Use `playtest-plan` for manual testing sessions
+```
+
+---
+
+## Validation
+
+Refer to `checklist.md` for validation criteria.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/gametest/gds-test-design/customize.toml b/src/workflows/gametest/gds-test-design/customize.toml
new file mode 100644
index 0000000..70b38e8
--- /dev/null
+++ b/src/workflows/gametest/gds-test-design/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-test-design. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Test design must begin from acceptance criteria, not from implementation details."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed at the end of the workflow — after the final
+# deliverables, output summary, and validation references are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gametest/gds-test-design/workflow.md b/src/workflows/gametest/gds-test-design/workflow.md
deleted file mode 100644
index a1ffa4a..0000000
--- a/src/workflows/gametest/gds-test-design/workflow.md
+++ /dev/null
@@ -1,388 +0,0 @@
----
-name: gametest-test-design
-description: 'Game test scenario creator. Use when the user says "lets create game test scenarios"'
-main_config: '{module_config}'
-tags:
- - qa
- - planning
- - game-testing
- - risk-assessment
- - coverage
-execution_hints:
- interactive: false
- autonomous: true
- iterative: true
----
-
-# Game Test Design
-
-**Workflow ID**: `gds-test-design`
-**Version**: 1.0 (BMad v6)
-
-## Goal
-
-Create comprehensive test scenarios for game projects, covering gameplay mechanics, progression systems, multiplayer functionality, and platform requirements. This workflow produces a prioritized test plan based on risk assessment and player impact.
-
-## Role
-
-You are a Game QA Engineer specializing in test design. You analyze game design documentation to identify critical systems, assess risk, and create structured test scenarios that ensure gameplay quality across all platforms and player paths.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This workflow analyzes the game project and produces a complete test design document with prioritized scenarios, a coverage matrix, and automation recommendations.
-
-**Primary Output**: `{output_folder}/game-test-design.md`
-
-**Supporting Components**:
-- Validation: `{installed_path}/checklist.md`
-- Template: `{installed_path}/test-design-template.md`
-
-**Knowledge Base References**:
-- `knowledge/playtesting.md`
-- `knowledge/save-testing.md`
-- `knowledge/multiplayer-testing.md`
-- `knowledge/certification-testing.md`
-- `knowledge/e2e-testing.md`
-- `knowledge/test-priorities.md`
-
----
-
-## INITIALIZATION
-
-Load and resolve configuration from `{module_config}`:
-
-```yaml
-output_folder: {from config}
-user_name: {from config}
-communication_language: {from config}
-document_output_language: {from config}
-game_dev_experience: {from config}
-date: {system-generated}
-```
-
-Resolve workflow variables:
-```yaml
-design_level: "full" # full | targeted | minimal
-focus_area: "auto" # auto | gameplay | progression | multiplayer | performance
-```
-
-Search the project for game design documentation before proceeding.
-
----
-
-## EXECUTION
-
-### Preflight Requirements
-
-Verify before proceeding:
-- Game design documentation available (GDD, feature specs)
-- Understanding of target platforms
-- Knowledge of core gameplay loop
-
----
-
-### Step 1: Gather Context
-
-#### Actions
-
-1. **Read Game Design Documentation**
- - Locate GDD or game-design.md
- - Identify core mechanics and features
- - Note target platforms and certification requirements
-
-2. **Identify Critical Systems**
- - Core gameplay loop
- - Progression/save systems
- - Multiplayer (if applicable)
- - Monetization (if applicable)
-
-3. **Assess Risk Areas**
- - Player-facing features (highest priority)
- - Data persistence (save/load)
- - Platform certification requirements
- - Performance-critical paths
-
----
-
-### Step 2: Define Test Categories
-
-#### Core Gameplay Testing
-
-**Knowledge Base Reference**: `knowledge/playtesting.md`
-
-| Category | Focus | Priority |
-| ------------------ | -------------------------- | -------- |
-| Core Loop | Primary mechanic execution | P0 |
-| Combat/Interaction | Hit detection, feedback | P0 |
-| Movement | Physics, collision, feel | P0 |
-| UI/UX | Menu navigation, HUD | P1 |
-| Audio | Sound triggers, music | P2 |
-
-#### Progression Testing
-
-**Knowledge Base Reference**: `knowledge/save-testing.md`
-
-| Category | Focus | Priority |
-| ------------ | ------------------ | -------- |
-| Save/Load | Data persistence | P0 |
-| Unlocks | Content gating | P1 |
-| Economy | Currency, rewards | P1 |
-| Achievements | Trigger conditions | P2 |
-
-#### Multiplayer Testing (if applicable)
-
-**Knowledge Base Reference**: `knowledge/multiplayer-testing.md`
-
-| Category | Focus | Priority |
-| --------------- | ------------------- | -------- |
-| Connectivity | Join/leave handling | P0 |
-| Synchronization | State consistency | P0 |
-| Latency | Degraded network | P1 |
-| Matchmaking | Player grouping | P1 |
-
-#### Platform Testing
-
-**Knowledge Base Reference**: `knowledge/certification-testing.md`
-
-| Category | Focus | Priority |
-| ------------- | ------------------- | -------- |
-| Certification | TRC/XR requirements | P0 |
-| Input | Controller support | P0 |
-| Performance | FPS, loading times | P1 |
-| Accessibility | Assist features | P1 |
-
-#### E2E Journey Testing
-
-**Knowledge Base Reference**: `knowledge/e2e-testing.md`
-
-| Category | Focus | Priority |
-| ------------------ | --------------------------- | -------- |
-| Core Loop | Complete gameplay cycle | P0 |
-| Turn Lifecycle | Full turn from start to end | P0 |
-| Save/Load Roundtrip| Save → quit → load → resume | P0 |
-| Scene Transitions | Menu → Game → Back | P1 |
-| Win/Lose Paths | Victory and defeat conditions| P1 |
-
----
-
-### Step 3: Create Test Scenarios
-
-#### Scenario Format
-
-For each critical feature, create scenarios using this format:
-
-```
-SCENARIO: [Descriptive Name]
- GIVEN [Initial state/preconditions]
- WHEN [Action taken]
- THEN [Expected outcome]
- PRIORITY: P0/P1/P2/P3
- CATEGORY: [gameplay/progression/multiplayer/platform]
-```
-
-#### Example Scenarios
-
-**Gameplay - Combat**
-
-```
-SCENARIO: Basic Attack Hits Enemy
- GIVEN player is within attack range of enemy
- AND enemy has 100 health
- WHEN player performs basic attack
- THEN enemy receives damage
- AND damage feedback plays (visual + audio)
- AND enemy health decreases
- PRIORITY: P0
- CATEGORY: gameplay
-```
-
-**Progression - Save System**
-
-```
-SCENARIO: Save Preserves Player Progress
- GIVEN player has 500 gold and 3 items
- AND player is at checkpoint
- WHEN game saves
- AND game is reloaded
- THEN player has 500 gold
- AND player has same 3 items
- AND player is at same checkpoint
- PRIORITY: P0
- CATEGORY: progression
-```
-
-**Multiplayer - Network Degradation**
-
-```
-SCENARIO: Gameplay Under High Latency
- GIVEN 2 players in session
- AND network latency is 200ms
- WHEN Player 1 attacks Player 2
- THEN damage is applied correctly
- AND positions remain synchronized
- AND no desync occurs
- PRIORITY: P1
- CATEGORY: multiplayer
-```
-
-#### E2E Scenario Format
-
-For player journey tests, use this extended format:
-
-```
-E2E SCENARIO: [Player Journey Name]
- GIVEN [Initial game state - use ScenarioBuilder terms]
- WHEN [Sequence of player actions]
- THEN [Observable outcomes]
- TIMEOUT: [Expected max duration in seconds]
- PRIORITY: P0/P1
- CATEGORY: e2e
- INFRASTRUCTURE: [Required fixtures/builders]
-```
-
-**Example E2E Scenario**:
-
-```
-E2E SCENARIO: Complete Combat Encounter
- GIVEN game loaded with player unit adjacent to enemy
- AND player unit has full health and actions
- WHEN player selects unit
- AND player clicks attack on enemy
- AND player confirms attack
- AND attack animation completes
- AND enemy responds (if alive)
- THEN enemy health is reduced OR enemy is defeated
- AND turn state advances appropriately
- AND UI reflects new state
- TIMEOUT: 15
- PRIORITY: P0
- CATEGORY: e2e
- INFRASTRUCTURE: ScenarioBuilder, InputSimulator, AsyncAssert
-```
-
----
-
-### Step 4: Prioritize Test Coverage
-
-**Knowledge Base Reference**: `knowledge/test-priorities.md`
-
-| Priority | Criteria | Unit | Integration | E2E | Manual |
-| -------- | --------------- | ---- | ----------- | ---------- | --------- |
-| P0 | Ship blockers | 100% | 80% | Core flows | Smoke |
-| P1 | Major features | 90% | 70% | Happy paths| Full |
-| P2 | Secondary | 80% | 50% | - | Targeted |
-| P3 | Edge cases | 60% | - | - | As needed |
-
-**Risk-Based Ordering**:
-
-1. **Critical Path** — Main gameplay loop
-2. **Data Integrity** — Save/load, progression
-3. **Platform Requirements** — Certification items
-4. **User Experience** — Feel, polish, accessibility
-
----
-
-### Step 5: Generate Test Design Document
-
-Write `{output_folder}/game-test-design.md` using the `test-design-template.md` structure:
-
-```markdown
-# Game Test Design: [Project Name]
-
-## Overview
-
-- Game type and core mechanics
-- Target platforms
-- Test scope and objectives
-
-## Risk Assessment
-
-- High-risk areas identified
-- Mitigation strategies
-
-## Test Categories
-
-### Gameplay Tests
-
-[Scenarios...]
-
-### Progression Tests
-
-[Scenarios...]
-
-### Multiplayer Tests (if applicable)
-
-[Scenarios...]
-
-### Platform Tests
-
-[Scenarios...]
-
-## Coverage Matrix
-
-| Feature | P0 | P1 | P2 | P3 |
-| ------- | --- | --- | --- | --- |
-| Combat | 5 | 10 | 8 | 4 |
-| ... | | | | |
-
-## Automation Strategy
-
-- Unit test candidates
-- Integration test candidates
-- Manual-only scenarios
-
-## Next Steps
-
-1. Implement P0 tests
-2. Set up CI integration
-3. Plan playtesting sessions
-```
-
----
-
-## Deliverables
-
-1. **Test Design Document** — `{output_folder}/game-test-design.md`
-2. **Scenario List** — Prioritized test scenarios
-3. **Coverage Matrix** — Feature vs priority breakdown
-4. **Automation Recommendations** — What to automate vs manual test
-
----
-
-## Output Summary
-
-After completing, provide:
-
-```markdown
-## Test Design Complete
-
-**Project**: {project_name}
-**Scenarios Created**: {count}
-**Priority Breakdown**:
-
-- P0 (Critical): {p0_count}
-- P1 (High): {p1_count}
-- P2 (Medium): {p2_count}
-- P3 (Low): {p3_count}
-
-**Focus Areas Covered**:
-
-- Core Gameplay
-- Progression/Save
-- Platform Requirements
-- {Multiplayer if applicable}
-
-**Next Steps**:
-
-1. Review scenarios with team
-2. Use `automate` workflow to generate test code
-3. Use `playtest-plan` for manual testing sessions
-```
-
----
-
-## Validation
-
-Refer to `checklist.md` for validation criteria.
diff --git a/src/workflows/gametest/gds-test-framework/SKILL.md b/src/workflows/gametest/gds-test-framework/SKILL.md
index 7e6271b..8ad683e 100644
--- a/src/workflows/gametest/gds-test-framework/SKILL.md
+++ b/src/workflows/gametest/gds-test-framework/SKILL.md
@@ -3,4 +3,440 @@ name: gds-test-framework
description: 'Initialize game test framework for Unity, Unreal, or Godot. Use when the user says "test framework" or "set up testing"'
---
-Follow the instructions in ./workflow.md.
+# Game Test Framework Setup
+
+**Workflow ID**: `gds-test-framework`
+**Version**: 1.0 (BMad v6)
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## Goal
+
+Initialize a production-ready game test framework for Unity, Unreal Engine, or Godot projects. This workflow scaffolds the complete testing infrastructure including unit tests, integration tests, and play mode tests appropriate for the detected game engine.
+
+## Role
+
+You are a Game QA Architect specializing in test infrastructure. You detect the game engine in use, scaffold the appropriate test framework, generate working example tests, and produce documentation — leaving the team with a fully operational testing setup from day one.
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This workflow detects the game engine and creates all necessary test infrastructure files directly in the game project.
+
+**Primary Output**: `{test_dir}/README.md`
+
+**Supporting Components**:
+- Validation: `{installed_path}/checklist.md`
+
+**Knowledge Base References**:
+- `knowledge/unity-testing.md`
+- `knowledge/unreal-testing.md`
+- `knowledge/godot-testing.md`
+
+---
+
+## INITIALIZATION
+
+Load and resolve configuration from `{module_config}`:
+
+```yaml
+output_folder: {from config}
+user_name: {from config}
+communication_language: {from config}
+document_output_language: {from config}
+game_dev_experience: {from config}
+date: {system-generated}
+```
+
+Resolve workflow variables:
+```yaml
+test_dir: "{project-root}/tests" # Root test directory
+game_engine: "auto" # auto | unity | unreal | godot
+test_framework: "auto" # auto | gut (godot) | unity-test-framework | unreal-automation
+```
+
+---
+
+## EXECUTION
+
+### Preflight Requirements
+
+**Critical:** Verify these requirements before proceeding. If any fail, HALT and notify the user.
+
+- Game project exists with identifiable engine
+- No test framework already configured (check for existing test directories)
+- Project structure is accessible
+
+---
+
+### Step 1: Detect Game Engine
+
+#### Actions
+
+1. **Identify Engine Type**
+
+ Look for engine-specific files:
+ - **Unity**: `Assets/`, `ProjectSettings/ProjectSettings.asset`, `*.unity` scene files
+ - **Unreal**: `*.uproject`, `Source/`, `Config/DefaultEngine.ini`
+ - **Godot**: `project.godot`, `*.tscn`, `*.gd` files
+
+2. **Verify Engine Version**
+ - Unity: Check `ProjectSettings/ProjectVersion.txt`
+ - Unreal: Check `*.uproject` file for `EngineAssociation`
+ - Godot: Check `project.godot` for `config_version`
+
+3. **Check for Existing Test Framework**
+ - Unity: Check for `Tests/` folder, `*.Tests.asmdef`
+ - Unreal: Check for `Tests/` in Source, `*Tests.Build.cs`
+ - Godot: Check for `tests/` folder, GUT plugin in `addons/gut/`
+
+**Halt Condition:** If existing framework detected, offer upgrade path or HALT.
+
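+The checks above reduce to a handful of marker-file probes. A minimal Python sketch (assumed heuristics, not part of the workflow tooling):
+
+```python
+from pathlib import Path
+
+def detect_engine(root):
+    """Return 'godot', 'unreal', 'unity', or None based on marker files."""
+    root = Path(root)
+    if (root / "project.godot").exists():
+        return "godot"
+    if any(root.glob("*.uproject")):
+        return "unreal"
+    if (root / "Assets").is_dir() and (root / "ProjectSettings").is_dir():
+        return "unity"
+    return None
+```
+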
+---
+
+### Step 2: Scaffold Framework
+
+#### Unity Test Framework
+
+**Knowledge Base Reference**: `knowledge/unity-testing.md`
+
+1. **Create Directory Structure**
+
+ ```
+ Assets/
+ ├── Tests/
+ │ ├── EditMode/
+ │ │ ├── EditModeTests.asmdef
+ │ │ └── ExampleEditModeTest.cs
+ │ └── PlayMode/
+ │ ├── PlayModeTests.asmdef
+ │ └── ExamplePlayModeTest.cs
+ ```
+
+2. **Generate Assembly Definitions**
+
+ `EditModeTests.asmdef`:
+
+ ```json
+ {
+ "name": "EditModeTests",
+      "references": ["{GameAssembly}"],
+ "includePlatforms": ["Editor"],
+ "defineConstraints": ["UNITY_INCLUDE_TESTS"],
+ "optionalUnityReferences": ["TestAssemblies"]
+ }
+ ```
+
+ `PlayModeTests.asmdef`:
+
+ ```json
+ {
+ "name": "PlayModeTests",
+      "references": ["{GameAssembly}"],
+ "includePlatforms": [],
+ "defineConstraints": ["UNITY_INCLUDE_TESTS"],
+ "optionalUnityReferences": ["TestAssemblies"]
+ }
+ ```
+
+3. **Generate Sample Tests**
+
+ Edit Mode test example:
+
+ ```csharp
+ using NUnit.Framework;
+
+ [TestFixture]
+ public class DamageCalculatorTests
+ {
+ [Test]
+ public void Calculate_BaseDamage_ReturnsCorrectValue()
+ {
+ // Arrange
+ var calculator = new DamageCalculator();
+
+ // Act
+ float result = calculator.Calculate(100f, 1f);
+
+ // Assert
+ Assert.AreEqual(100f, result);
+ }
+ }
+ ```
+
+ Play Mode test example:
+
+ ```csharp
+ using System.Collections;
+ using NUnit.Framework;
+ using UnityEngine;
+ using UnityEngine.TestTools;
+
+ public class PlayerMovementTests
+ {
+ [UnityTest]
+ public IEnumerator Player_WhenInputApplied_Moves()
+ {
+ // Arrange
+ var playerGO = new GameObject("Player");
+            // PlayerController is a placeholder for your movement component
+            var controller = playerGO.AddComponent<PlayerController>();
+
+ // Act
+ controller.SetMoveInput(Vector2.right);
+ yield return new WaitForSeconds(0.5f);
+
+ // Assert
+ Assert.Greater(playerGO.transform.position.x, 0f);
+
+ // Cleanup
+ Object.Destroy(playerGO);
+ }
+ }
+ ```
+
+---
+
+#### Unreal Engine Automation
+
+**Knowledge Base Reference**: `knowledge/unreal-testing.md`
+
+1. **Create Directory Structure**
+
+ ```
+ Source/
+    ├── {GameModule}/
+    │   └── ...
+    └── {GameModule}Tests/
+        ├── {GameModule}Tests.Build.cs
+ └── Private/
+ ├── DamageCalculationTests.cpp
+ └── PlayerCombatTests.cpp
+ ```
+
+2. **Generate Module Build File**
+
+    `{GameModule}Tests.Build.cs`:
+
+ ```csharp
+ using UnrealBuildTool;
+
+    public class {GameModule}Tests : ModuleRules
+    {
+        public {GameModule}Tests(ReadOnlyTargetRules Target) : base(Target)
+ {
+ PCHUsage = ModuleRules.PCHUsageMode.UseExplicitOrSharedPCHs;
+
+ PublicDependencyModuleNames.AddRange(new string[] {
+ "Core",
+ "CoreUObject",
+ "Engine",
+            "{GameModule}"
+ });
+
+ PrivateDependencyModuleNames.AddRange(new string[] {
+ "AutomationController"
+ });
+ }
+ }
+ ```
+
+3. **Generate Sample Tests**
+
+ ```cpp
+ #include "Misc/AutomationTest.h"
+
+ IMPLEMENT_SIMPLE_AUTOMATION_TEST(
+ FDamageCalculationTest,
+        "{GameModule}.Combat.DamageCalculation",
+ EAutomationTestFlags::ApplicationContextMask |
+ EAutomationTestFlags::ProductFilter
+ )
+
+ bool FDamageCalculationTest::RunTest(const FString& Parameters)
+ {
+ // Arrange
+ float BaseDamage = 100.f;
+ float CritMultiplier = 2.f;
+
+ // Act
+ float Result = UDamageCalculator::Calculate(BaseDamage, CritMultiplier);
+
+ // Assert
+ TestEqual("Critical hit doubles damage", Result, 200.f);
+
+ return true;
+ }
+ ```
+
+---
+
+#### Godot GUT Framework
+
+**Knowledge Base Reference**: `knowledge/godot-testing.md`
+
+1. **Create Directory Structure**
+
+ ```
+ project/
+ ├── addons/
+ │ └── gut/ (plugin files)
+ ├── tests/
+ │ ├── unit/
+ │ │ └── test_damage_calculator.gd
+ │ └── integration/
+ │ └── test_player_combat.gd
+ └── gut_config.json
+ ```
+
+2. **Generate GUT Configuration**
+
+ `gut_config.json`:
+
+ ```json
+ {
+ "dirs": ["res://tests/"],
+ "include_subdirs": true,
+ "prefix": "test_",
+ "suffix": ".gd",
+ "should_exit": true,
+ "should_exit_on_success": true,
+ "log_level": 1,
+ "junit_xml_file": "results.xml"
+ }
+ ```
+
+3. **Generate Sample Tests**
+
+ `tests/unit/test_damage_calculator.gd`:
+
+ ```gdscript
+ extends GutTest
+
+ var calculator: DamageCalculator
+
+ func before_each():
+ calculator = DamageCalculator.new()
+
+ func after_each():
+ calculator.free()
+
+ func test_calculate_base_damage():
+ var result = calculator.calculate(100.0, 1.0)
+ assert_eq(result, 100.0, "Base damage should equal input")
+
+ func test_calculate_critical_hit():
+ var result = calculator.calculate(100.0, 2.0)
+ assert_eq(result, 200.0, "Critical hit should double damage")
+ ```
+
+---
+
+### Step 3: Generate Documentation
+
+Create `tests/README.md` with:
+
+- Test framework overview for the detected engine
+- Directory structure explanation
+- Running tests locally
+- CI integration commands
+- Best practices for game testing
+- Links to knowledge base fragments
+
+---
+
+### Step 4: Deliverables
+
+#### Primary Artifacts Created
+
+1. **Directory Structure** — Engine-appropriate test folders
+2. **Configuration Files** — Framework-specific config (asmdef, Build.cs, gut_config.json)
+3. **Sample Tests** — Working examples for unit and integration tests
+4. **Documentation** — `tests/README.md`
+
+---
+
+## Output Summary
+
+After completing this workflow, provide a summary:
+
+```markdown
+## Game Test Framework Scaffold Complete
+
+**Engine Detected**: {Unity | Unreal | Godot}
+**Framework**: {Unity Test Framework | Unreal Automation | GUT}
+
+**Artifacts Created**:
+
+- Test directory structure
+- Framework configuration
+- Sample unit tests
+- Sample integration/play mode tests
+- Documentation
+
+**Next Steps**:
+
+1. Review sample tests and adapt to your game
+2. Run initial tests to verify setup
+3. Use `test-design` workflow to plan comprehensive test coverage
+4. Use `automate` workflow to generate additional tests
+
+**Knowledge Base References Applied**:
+
+- {engine}-testing.md
+- qa-automation.md
+- test-priorities.md
+```
+
+---
+
+## Validation
+
+Refer to `checklist.md` for comprehensive validation criteria.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/gametest/gds-test-framework/customize.toml b/src/workflows/gametest/gds-test-framework/customize.toml
new file mode 100644
index 0000000..1759e22
--- /dev/null
+++ b/src/workflows/gametest/gds-test-framework/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-test-framework. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Test frameworks must be chosen to serve the team's workflow, not resume signals."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed at the end of the workflow — after the final
+# deliverables, output summary, and validation references are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gametest/gds-test-framework/workflow.md b/src/workflows/gametest/gds-test-framework/workflow.md
deleted file mode 100644
index 8468496..0000000
--- a/src/workflows/gametest/gds-test-framework/workflow.md
+++ /dev/null
@@ -1,404 +0,0 @@
----
-name: gametest-framework
-description: 'Game test framework initializer. Use when the user says "lets create a test framework" or "initialize game testing infrastructure"'
-main_config: '{module_config}'
-tags:
- - qa
- - setup
- - game-testing
- - framework
- - initialization
-execution_hints:
- interactive: false
- autonomous: true
- iterative: true
----
-
-# Game Test Framework Setup
-
-**Workflow ID**: `gds-test-framework`
-**Version**: 1.0 (BMad v6)
-
-## Goal
-
-Initialize a production-ready game test framework for Unity, Unreal Engine, or Godot projects. This workflow scaffolds the complete testing infrastructure including unit tests, integration tests, and play mode tests appropriate for the detected game engine.
-
-## Role
-
-You are a Game QA Architect specializing in test infrastructure. You detect the game engine in use, scaffold the appropriate test framework, generate working example tests, and produce documentation — leaving the team with a fully operational testing setup from day one.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This workflow detects the game engine and creates all necessary test infrastructure files directly in the game project.
-
-**Primary Output**: `{test_dir}/README.md`
-
-**Supporting Components**:
-- Validation: `{installed_path}/checklist.md`
-
-**Knowledge Base References**:
-- `knowledge/unity-testing.md`
-- `knowledge/unreal-testing.md`
-- `knowledge/godot-testing.md`
-
----
-
-## INITIALIZATION
-
-Load and resolve configuration from `{module_config}`:
-
-```yaml
-output_folder: {from config}
-user_name: {from config}
-communication_language: {from config}
-document_output_language: {from config}
-game_dev_experience: {from config}
-date: {system-generated}
-```
-
-Resolve workflow variables:
-```yaml
-test_dir: "{project-root}/tests" # Root test directory
-game_engine: "auto" # auto | unity | unreal | godot
-test_framework: "auto" # auto | gut (godot) | unity-test-framework | unreal-automation
-```
-
----
-
-## EXECUTION
-
-### Preflight Requirements
-
-**Critical:** Verify these requirements before proceeding. If any fail, HALT and notify the user.
-
-- Game project exists with identifiable engine
-- No test framework already configured (check for existing test directories)
-- Project structure is accessible
-
----
-
-### Step 1: Detect Game Engine
-
-#### Actions
-
-1. **Identify Engine Type**
-
- Look for engine-specific files:
- - **Unity**: `Assets/`, `ProjectSettings/ProjectSettings.asset`, `*.unity` scene files
- - **Unreal**: `*.uproject`, `Source/`, `Config/DefaultEngine.ini`
- - **Godot**: `project.godot`, `*.tscn`, `*.gd` files
-
-2. **Verify Engine Version**
- - Unity: Check `ProjectSettings/ProjectVersion.txt`
- - Unreal: Check `*.uproject` file for `EngineAssociation`
- - Godot: Check `project.godot` for `config_version`
-
-3. **Check for Existing Test Framework**
- - Unity: Check for `Tests/` folder, `*.Tests.asmdef`
- - Unreal: Check for `Tests/` in Source, `*Tests.Build.cs`
- - Godot: Check for `tests/` folder, GUT plugin in `addons/gut/`
-
-**Halt Condition:** If existing framework detected, offer upgrade path or HALT.
-
----
-
-### Step 2: Scaffold Framework
-
-#### Unity Test Framework
-
-**Knowledge Base Reference**: `knowledge/unity-testing.md`
-
-1. **Create Directory Structure**
-
- ```
- Assets/
- ├── Tests/
- │ ├── EditMode/
- │ │ ├── EditModeTests.asmdef
- │ │ └── ExampleEditModeTest.cs
- │ └── PlayMode/
- │ ├── PlayModeTests.asmdef
- │ └── ExamplePlayModeTest.cs
- ```
-
-2. **Generate Assembly Definitions**
-
- `EditModeTests.asmdef`:
-
- ```json
- {
- "name": "EditModeTests",
- "references": ["YourGameAssembly"],
- "includePlatforms": ["Editor"],
- "defineConstraints": ["UNITY_INCLUDE_TESTS"],
- "optionalUnityReferences": ["TestAssemblies"]
- }
- ```
-
- `PlayModeTests.asmdef`:
-
- ```json
- {
- "name": "PlayModeTests",
- "references": ["YourGameAssembly"],
- "includePlatforms": [],
- "defineConstraints": ["UNITY_INCLUDE_TESTS"],
- "optionalUnityReferences": ["TestAssemblies"]
- }
- ```
-
-3. **Generate Sample Tests**
-
- Edit Mode test example:
-
- ```csharp
- using NUnit.Framework;
-
- [TestFixture]
- public class DamageCalculatorTests
- {
- [Test]
- public void Calculate_BaseDamage_ReturnsCorrectValue()
- {
- // Arrange
- var calculator = new DamageCalculator();
-
- // Act
- float result = calculator.Calculate(100f, 1f);
-
- // Assert
- Assert.AreEqual(100f, result);
- }
- }
- ```
-
- Play Mode test example:
-
- ```csharp
- using System.Collections;
- using NUnit.Framework;
- using UnityEngine;
- using UnityEngine.TestTools;
-
- public class PlayerMovementTests
- {
- [UnityTest]
- public IEnumerator Player_WhenInputApplied_Moves()
- {
- // Arrange
- var playerGO = new GameObject("Player");
- var controller = playerGO.AddComponent<PlayerController>();
-
- // Act
- controller.SetMoveInput(Vector2.right);
- yield return new WaitForSeconds(0.5f);
-
- // Assert
- Assert.Greater(playerGO.transform.position.x, 0f);
-
- // Cleanup
- Object.Destroy(playerGO);
- }
- }
- ```
-
----
-
-#### Unreal Engine Automation
-
-**Knowledge Base Reference**: `knowledge/unreal-testing.md`
-
-1. **Create Directory Structure**
-
- ```
- Source/
- ├── YourGameModule/ (your runtime module; placeholder name)
- │ └── ...
- └── Tests/
- ├── Tests.Build.cs
- └── Private/
- ├── DamageCalculationTests.cpp
- └── PlayerCombatTests.cpp
- ```
-
-2. **Generate Module Build File**
-
- `Tests.Build.cs`:
-
- ```csharp
- using UnrealBuildTool;
-
- public class Tests : ModuleRules
- {
- public Tests(ReadOnlyTargetRules Target) : base(Target)
- {
- PCHUsage = ModuleRules.PCHUsageMode.UseExplicitOrSharedPCHs;
-
- PublicDependencyModuleNames.AddRange(new string[] {
- "Core",
- "CoreUObject",
- "Engine",
- "YourGameModule" // replace with your game's runtime module name
- });
-
- PrivateDependencyModuleNames.AddRange(new string[] {
- "AutomationController"
- });
- }
- }
- ```
-
-3. **Generate Sample Tests**
-
- ```cpp
- #include "Misc/AutomationTest.h"
-
- IMPLEMENT_SIMPLE_AUTOMATION_TEST(
- FDamageCalculationTest,
- "YourGame.Combat.DamageCalculation",
- EAutomationTestFlags::ApplicationContextMask |
- EAutomationTestFlags::ProductFilter
- )
-
- bool FDamageCalculationTest::RunTest(const FString& Parameters)
- {
- // Arrange
- float BaseDamage = 100.f;
- float CritMultiplier = 2.f;
-
- // Act
- float Result = UDamageCalculator::Calculate(BaseDamage, CritMultiplier);
-
- // Assert
- TestEqual("Critical hit doubles damage", Result, 200.f);
-
- return true;
- }
- ```
-
----
-
-#### Godot GUT Framework
-
-**Knowledge Base Reference**: `knowledge/godot-testing.md`
-
-1. **Create Directory Structure**
-
- ```
- project/
- ├── addons/
- │ └── gut/ (plugin files)
- ├── tests/
- │ ├── unit/
- │ │ └── test_damage_calculator.gd
- │ └── integration/
- │ └── test_player_combat.gd
- └── gut_config.json
- ```
-
-2. **Generate GUT Configuration**
-
- `gut_config.json`:
-
- ```json
- {
- "dirs": ["res://tests/"],
- "include_subdirs": true,
- "prefix": "test_",
- "suffix": ".gd",
- "should_exit": true,
- "should_exit_on_success": true,
- "log_level": 1,
- "junit_xml_file": "results.xml"
- }
- ```
-
-3. **Generate Sample Tests**
-
- `tests/unit/test_damage_calculator.gd`:
-
- ```gdscript
- extends GutTest
-
- var calculator: DamageCalculator
-
- func before_each():
- calculator = DamageCalculator.new()
-
- func after_each():
- calculator.free()
-
- func test_calculate_base_damage():
- var result = calculator.calculate(100.0, 1.0)
- assert_eq(result, 100.0, "Base damage should equal input")
-
- func test_calculate_critical_hit():
- var result = calculator.calculate(100.0, 2.0)
- assert_eq(result, 200.0, "Critical hit should double damage")
- ```
-
----
-
-### Step 3: Generate Documentation
-
-Create `tests/README.md` with:
-
-- Test framework overview for the detected engine
-- Directory structure explanation
-- Running tests locally
-- CI integration commands
-- Best practices for game testing
-- Links to knowledge base fragments
-
----
-
-### Step 4: Deliverables
-
-#### Primary Artifacts Created
-
-1. **Directory Structure** — Engine-appropriate test folders
-2. **Configuration Files** — Framework-specific config (asmdef, Build.cs, gut_config.json)
-3. **Sample Tests** — Working examples for unit and integration tests
-4. **Documentation** — `tests/README.md`
-
----
-
-## Output Summary
-
-After completing this workflow, provide a summary:
-
-```markdown
-## Game Test Framework Scaffold Complete
-
-**Engine Detected**: {Unity | Unreal | Godot}
-**Framework**: {Unity Test Framework | Unreal Automation | GUT}
-
-**Artifacts Created**:
-
-- Test directory structure
-- Framework configuration
-- Sample unit tests
-- Sample integration/play mode tests
-- Documentation
-
-**Next Steps**:
-
-1. Review sample tests and adapt to your game
-2. Run initial tests to verify setup
-3. Use `test-design` workflow to plan comprehensive test coverage
-4. Use `automate` workflow to generate additional tests
-
-**Knowledge Base References Applied**:
-
-- {engine}-testing.md
-- qa-automation.md
-- test-priorities.md
-```
-
----
-
-## Validation
-
-Refer to `checklist.md` for comprehensive validation criteria.
diff --git a/src/workflows/gametest/gds-test-review/SKILL.md b/src/workflows/gametest/gds-test-review/SKILL.md
index 023ae74..648fde2 100644
--- a/src/workflows/gametest/gds-test-review/SKILL.md
+++ b/src/workflows/gametest/gds-test-review/SKILL.md
@@ -3,4 +3,361 @@ name: gds-test-review
description: 'Review test quality and coverage. Use when the user says "test review" or "review tests"'
---
-Follow the instructions in ./workflow.md.
+# Test Review
+
+**Workflow ID**: `gds-test-review`
+**Version**: 1.0 (BMad v6)
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
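
The merge semantics above can be sketched in a few lines of Python. This is a simplified model for illustration only; the actual `resolve_customization.py` implementation may differ:

```python
# Simplified model of the BMad structural merge rules described above.
# Illustrative only; the real resolve_customization.py may differ.

def merge(base, override):
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = merge(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(override, list):
        items = base + override
        # Arrays of tables keyed by `code`/`id`: replace matches, append new.
        if items and all(isinstance(i, dict) and ("code" in i or "id" in i) for i in items):
            keyed = {i.get("code", i.get("id")): dict(i) for i in base}
            for i in override:
                keyed[i.get("code", i.get("id"))] = dict(i)
            return list(keyed.values())
        return base + override  # all other arrays: append
    return override  # scalars: override wins

base = {
    "on_complete": "",
    "persistent_facts": ["file:{project-root}/**/project-context.md"],
    "menu": [{"code": "a", "label": "Audit"}],
}
user = {
    "on_complete": "write a run log",
    "persistent_facts": ["Weigh coverage of risk, not lines."],
    "menu": [{"code": "a", "label": "Deep audit"}, {"code": "b", "label": "Brief"}],
}
resolved = merge(base, user)
```

A user file that sets `on_complete` therefore replaces the base scalar, while its `persistent_facts` entries accumulate on top of the base list.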
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
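
A minimal sketch of that fact-loading step, assuming entries are either literal strings or `file:` globs (the real resolver may differ in details such as ordering and error handling):

```python
# Sketch of the persistent-facts loading step. Entries prefixed `file:`
# are treated as glob patterns under the project root; everything else
# is carried verbatim. Details (ordering, error handling) are assumed.

import glob

def load_persistent_facts(entries, project_root):
    facts = []
    for entry in entries:
        if entry.startswith("file:"):
            pattern = entry[len("file:"):].replace("{project-root}", project_root)
            for path in sorted(glob.glob(pattern, recursive=True)):
                with open(path, encoding="utf-8") as handle:
                    facts.append(handle.read())
        else:
            facts.append(entry)  # literal fact, kept verbatim
    return facts
```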
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `output_folder`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## Goal
+
+Review existing test suite quality, identify coverage gaps, and recommend improvements. Regular test review prevents test rot and maintains test value over time.
+
+## Role
+
+You are a Game QA Lead with expertise in test suite analysis. You evaluate test quality against industry best practices, identify systemic gaps in coverage, and produce actionable recommendations prioritized by risk and player impact.
+
+---
+
+## WORKFLOW ARCHITECTURE
+
+This workflow analyzes the existing test suite and produces a comprehensive review report with prioritized action items.
+
+**Primary Output**: `{output_folder}/test-review-report.md`
+
+**Supporting Components**:
+- Validation: `{installed_path}/checklist.md`
+- Template: `{installed_path}/test-review-template.md`
+
+**Knowledge Base References**:
+- `knowledge/regression-testing.md`
+- `knowledge/test-priorities.md`
+
+---
+
+## INITIALIZATION
+
+Load and resolve configuration from `{module_config}`:
+
+```yaml
+output_folder: {from config}
+user_name: {from config}
+communication_language: {from config}
+document_output_language: {from config}
+game_dev_experience: {from config}
+date: {system-generated}
+```
+
+Resolve workflow variables:
+```yaml
+review_scope: "full" # full | targeted | quick
+game_engine: "auto" # auto | unity | unreal | godot
+```
+
+Search the project for existing test files and results before proceeding.
+
+---
+
+## EXECUTION
+
+### Preflight Requirements
+
+Verify before proceeding:
+- Test suite exists (some tests to review)
+- Access to test execution results
+- Understanding of game features
+
+---
+
+### Step 1: Gather Test Suite Metrics
+
+#### Actions
+
+1. **Count Tests by Type**
+
+ | Type | Count | Pass Rate | Avg Duration |
+ | -------------------- | ----- | --------- | ------------ |
+ | Unit | | | |
+ | Integration | | | |
+ | Play Mode/Functional | | | |
+ | Performance | | | |
+ | **Total** | | | |
+
+2. **Analyze Test Results**
+ - Recent pass rate (last 10 runs)
+ - Flaky tests (inconsistent results)
+ - Slow tests (> 30s individual)
+ - Disabled/skipped tests
+
+3. **Map Coverage**
+ - Features with tests
+ - Features without tests
+ - Critical paths covered
+
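The flaky- and slow-test checks in action 2 can be sketched as a small pass over recent run history. The history format here is hypothetical; adapt it to whatever your framework's reports provide:

```python
# Illustrative triage of recent results: a test is flagged flaky when its
# last ten runs mix passes and failures, and slow past a 30s threshold.
# The history format is hypothetical; adapt to your framework's reports.

def triage(history, slow_threshold_s=30.0):
    """history maps test name -> [(passed, duration_s), ...], oldest first."""
    flaky, slow = [], []
    for name, runs in history.items():
        recent = runs[-10:]
        if {passed for passed, _ in recent} == {True, False}:
            flaky.append(name)
        if max(duration for _, duration in recent) > slow_threshold_s:
            slow.append(name)
    return flaky, slow

history = {
    "test_save_load": [(True, 1.0)] * 8 + [(False, 1.2), (True, 1.1)],
    "test_boot_sequence": [(True, 45.0)] * 10,
    "test_damage": [(True, 0.2)] * 10,
}
flaky, slow = triage(history)
```

Feeding this the last ten CI runs yields the "Flaky Tests" and "Slow Tests" appendix lists directly.
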
+---
+
+### Step 2: Assess Test Quality
+
+#### Quality Criteria
+
+For each test, evaluate:
+
+| Criterion | Good | Bad |
+| ----------------- | -------------------------------- | ---------------------------- |
+| **Deterministic** | Same input = same result | Flaky, timing-dependent |
+| **Isolated** | No shared state | Tests affect each other |
+| **Fast** | < 5s (unit), < 30s (integration) | Minutes per test |
+| **Readable** | Clear intent, good names | Cryptic, no comments |
+| **Maintained** | Up-to-date, passing | Disabled, stale |
+| **Valuable** | Tests real behavior | Tests implementation details |
+
+#### Anti-Pattern Detection
+
+Look for these common issues:
+
+```
+Hard-coded waits:
+ await Task.Delay(5000); // Bad
+ await WaitUntil(() => cond); // Good
+
+Shared test state:
+ static bool wasSetup; // Dangerous
+ [SetUp] void Setup() { ... } // Good
+
+Testing private implementation:
+ var result = obj.GetPrivateField(); // Bad
+ var result = obj.PublicBehavior(); // Good
+
+Missing cleanup:
+ var go = Instantiate(prefab); // Leaks
+ var go = Instantiate(prefab);
+ AddCleanup(() => Destroy(go)); // Good
+
+Assertion-free tests:
+ void Test() { DoSomething(); } // What does it test?
+ void Test() { DoSomething(); Assert.That(...); } // Clear
+```
+
+---
+
+### Step 3: Identify Coverage Gaps
+
+#### Critical Areas to Verify
+
+| Area | P0 Coverage | P1 Coverage | Gap? |
+| ------------- | ----------- | ----------- | ---- |
+| Core Loop | | | |
+| Save/Load | | | |
+| Progression | | | |
+| Combat | | | |
+| UI/Menus | | | |
+| Multiplayer | | | |
+| Platform Cert | | | |
+
+#### Gap Identification Process
+
+1. List all game features
+2. Check if each feature has tests
+3. Assess test depth (happy path only vs edge cases)
+4. Prioritize gaps by risk
+
+---
+
+### Step 4: Review Test Infrastructure
+
+#### Framework Health
+
+- [ ] Tests run in CI
+- [ ] Results are visible to team
+- [ ] Failures block deployments
+- [ ] Test data is versioned
+- [ ] Fixtures are reusable
+- [ ] Helpers reduce duplication
+
+#### Maintenance Burden
+
+- How often do tests need updates?
+- Are updates proportional to code changes?
+- Do refactors break tests unnecessarily?
+
+---
+
+### Step 5: Generate Recommendations
+
+#### Priority Matrix
+
+| Finding | Severity | Effort | Recommendation |
+| --------- | -------------- | -------------- | -------------- |
+| {finding} | {High/Med/Low} | {High/Med/Low} | {action} |
+
+#### Common Recommendations
+
+**For Flaky Tests**:
+- Replace `Thread.Sleep` with explicit waits
+- Add proper synchronization
+- Isolate test state
+
+**For Slow Tests**:
+- Move to nightly builds
+- Optimize test setup
+- Mock expensive dependencies
+
+**For Coverage Gaps**:
+- Prioritize P0/P1 features
+- Add smoke tests first
+- Use test-design workflow
+
+**For Maintenance Issues**:
+- Refactor common patterns
+- Create test utilities
+- Improve documentation
+
+---
+
+### Step 6: Generate Test Review Report
+
+Write `{output_folder}/test-review-report.md` using the `test-review-template.md` structure:
+
+```markdown
+# Test Review Report: {Project Name}
+
+## Executive Summary
+
+- Overall health: {Good/Needs Work/Critical}
+- Key findings: {3-5 bullet points}
+- Recommended actions: {prioritized list}
+
+## Metrics
+
+### Test Suite Statistics
+
+[Tables from Step 1]
+
+### Recent History
+
+[Pass rates, trends]
+
+## Quality Assessment
+
+### Strengths
+
+- {What's working well}
+
+### Issues Found
+
+| Issue | Severity | Tests Affected | Fix |
+| ----- | -------- | -------------- | --- |
+| | | | |
+
+## Coverage Analysis
+
+### Current Coverage
+
+[Feature coverage table]
+
+### Critical Gaps
+
+[Prioritized list of missing coverage]
+
+## Recommendations
+
+### Immediate (This Sprint)
+
+1. {Fix critical issues}
+
+### Short-term (This Milestone)
+
+1. {Address major gaps}
+
+### Long-term (Ongoing)
+
+1. {Improve infrastructure}
+
+## Appendix
+
+### Flaky Tests
+
+[List with failure patterns]
+
+### Slow Tests
+
+[List with durations]
+
+### Disabled Tests
+
+[List with reasons]
+```
+
+---
+
+## Review Frequency
+
+| Review Type | Frequency | Scope | Owner |
+| ----------- | --------- | ----------------------- | --------- |
+| Quick Check | Weekly | Pass rates, flaky tests | QA |
+| Full Review | Monthly | Coverage, quality | Tech Lead |
+| Deep Dive   | Quarterly | Infrastructure, strategy | Team      |
+
+---
+
+## Deliverables
+
+1. **Test Review Report** — Comprehensive analysis
+2. **Action Items** — Prioritized improvements
+3. **Coverage Matrix** — Visual gap identification
+4. **Technical Debt List** — Tests needing refactor
+
+---
+
+## Validation
+
+Refer to `checklist.md` for validation criteria.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/gametest/gds-test-review/customize.toml b/src/workflows/gametest/gds-test-review/customize.toml
new file mode 100644
index 0000000..d18ef57
--- /dev/null
+++ b/src/workflows/gametest/gds-test-review/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-test-review. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Test reviews must evaluate coverage of risk, not just coverage of lines."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed at the end of the workflow — after the final
+# deliverables, output summary, and validation references are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gametest/gds-test-review/workflow.md b/src/workflows/gametest/gds-test-review/workflow.md
deleted file mode 100644
index caae145..0000000
--- a/src/workflows/gametest/gds-test-review/workflow.md
+++ /dev/null
@@ -1,324 +0,0 @@
----
-name: gametest-test-review
-description: 'Game test coverage reviewer. Use when the user says "lets review game tests" or "I want to evaluate test coverage"'
-main_config: '{module_config}'
-tags:
- - qa
- - review
- - coverage
- - quality
- - maintenance
-execution_hints:
- interactive: false
- autonomous: true
- iterative: true
----
-
-# Test Review
-
-**Workflow ID**: `gds-test-review`
-**Version**: 1.0 (BMad v6)
-
-## Goal
-
-Review existing test suite quality, identify coverage gaps, and recommend improvements. Regular test review prevents test rot and maintains test value over time.
-
-## Role
-
-You are a Game QA Lead with expertise in test suite analysis. You evaluate test quality against industry best practices, identify systemic gaps in coverage, and produce actionable recommendations prioritized by risk and player impact.
-
----
-
-## WORKFLOW ARCHITECTURE
-
-This workflow analyzes the existing test suite and produces a comprehensive review report with prioritized action items.
-
-**Primary Output**: `{output_folder}/test-review-report.md`
-
-**Supporting Components**:
-- Validation: `{installed_path}/checklist.md`
-- Template: `{installed_path}/test-review-template.md`
-
-**Knowledge Base References**:
-- `knowledge/regression-testing.md`
-- `knowledge/test-priorities.md`
-
----
-
-## INITIALIZATION
-
-Load and resolve configuration from `{module_config}`:
-
-```yaml
-output_folder: {from config}
-user_name: {from config}
-communication_language: {from config}
-document_output_language: {from config}
-game_dev_experience: {from config}
-date: {system-generated}
-```
-
-Resolve workflow variables:
-```yaml
-review_scope: "full" # full | targeted | quick
-game_engine: "auto" # auto | unity | unreal | godot
-```
-
-Search the project for existing test files and results before proceeding.
-
----
-
-## EXECUTION
-
-### Preflight Requirements
-
-Verify before proceeding:
-- Test suite exists (some tests to review)
-- Access to test execution results
-- Understanding of game features
-
----
-
-### Step 1: Gather Test Suite Metrics
-
-#### Actions
-
-1. **Count Tests by Type**
-
- | Type | Count | Pass Rate | Avg Duration |
- | -------------------- | ----- | --------- | ------------ |
- | Unit | | | |
- | Integration | | | |
- | Play Mode/Functional | | | |
- | Performance | | | |
- | **Total** | | | |
-
-2. **Analyze Test Results**
- - Recent pass rate (last 10 runs)
- - Flaky tests (inconsistent results)
- - Slow tests (> 30s individual)
- - Disabled/skipped tests
-
-3. **Map Coverage**
- - Features with tests
- - Features without tests
- - Critical paths covered
-
----
-
-### Step 2: Assess Test Quality
-
-#### Quality Criteria
-
-For each test, evaluate:
-
-| Criterion | Good | Bad |
-| ----------------- | -------------------------------- | ---------------------------- |
-| **Deterministic** | Same input = same result | Flaky, timing-dependent |
-| **Isolated** | No shared state | Tests affect each other |
-| **Fast** | < 5s (unit), < 30s (integration) | Minutes per test |
-| **Readable** | Clear intent, good names | Cryptic, no comments |
-| **Maintained** | Up-to-date, passing | Disabled, stale |
-| **Valuable** | Tests real behavior | Tests implementation details |
-
-#### Anti-Pattern Detection
-
-Look for these common issues:
-
-```
-Hard-coded waits:
- await Task.Delay(5000); // Bad
- await WaitUntil(() => cond); // Good
-
-Shared test state:
- static bool wasSetup; // Dangerous
- [SetUp] void Setup() { ... } // Good
-
-Testing private implementation:
- var result = obj.GetPrivateField(); // Bad
- var result = obj.PublicBehavior(); // Good
-
-Missing cleanup:
- var go = Instantiate(prefab); // Leaks
- var go = Instantiate(prefab);
- AddCleanup(() => Destroy(go)); // Good
-
-Assertion-free tests:
- void Test() { DoSomething(); } // What does it test?
- void Test() { DoSomething(); Assert.That(...); } // Clear
-```
-
----
-
-### Step 3: Identify Coverage Gaps
-
-#### Critical Areas to Verify
-
-| Area | P0 Coverage | P1 Coverage | Gap? |
-| ------------- | ----------- | ----------- | ---- |
-| Core Loop | | | |
-| Save/Load | | | |
-| Progression | | | |
-| Combat | | | |
-| UI/Menus | | | |
-| Multiplayer | | | |
-| Platform Cert | | | |
-
-#### Gap Identification Process
-
-1. List all game features
-2. Check if each feature has tests
-3. Assess test depth (happy path only vs edge cases)
-4. Prioritize gaps by risk
-
----
-
-### Step 4: Review Test Infrastructure
-
-#### Framework Health
-
-- [ ] Tests run in CI
-- [ ] Results are visible to team
-- [ ] Failures block deployments
-- [ ] Test data is versioned
-- [ ] Fixtures are reusable
-- [ ] Helpers reduce duplication
-
-#### Maintenance Burden
-
-- How often do tests need updates?
-- Are updates proportional to code changes?
-- Do refactors break tests unnecessarily?
-
----
-
-### Step 5: Generate Recommendations
-
-#### Priority Matrix
-
-| Finding | Severity | Effort | Recommendation |
-| --------- | -------------- | -------------- | -------------- |
-| {finding} | {High/Med/Low} | {High/Med/Low} | {action} |
-
-#### Common Recommendations
-
-**For Flaky Tests**:
-- Replace `Thread.Sleep` with explicit waits
-- Add proper synchronization
-- Isolate test state
-
-**For Slow Tests**:
-- Move to nightly builds
-- Optimize test setup
-- Mock expensive dependencies
-
-**For Coverage Gaps**:
-- Prioritize P0/P1 features
-- Add smoke tests first
-- Use test-design workflow
-
-**For Maintenance Issues**:
-- Refactor common patterns
-- Create test utilities
-- Improve documentation
-
----
-
-### Step 6: Generate Test Review Report
-
-Write `{output_folder}/test-review-report.md` using the `test-review-template.md` structure:
-
-```markdown
-# Test Review Report: {Project Name}
-
-## Executive Summary
-
-- Overall health: {Good/Needs Work/Critical}
-- Key findings: {3-5 bullet points}
-- Recommended actions: {prioritized list}
-
-## Metrics
-
-### Test Suite Statistics
-
-[Tables from Step 1]
-
-### Recent History
-
-[Pass rates, trends]
-
-## Quality Assessment
-
-### Strengths
-
-- {What's working well}
-
-### Issues Found
-
-| Issue | Severity | Tests Affected | Fix |
-| ----- | -------- | -------------- | --- |
-| | | | |
-
-## Coverage Analysis
-
-### Current Coverage
-
-[Feature coverage table]
-
-### Critical Gaps
-
-[Prioritized list of missing coverage]
-
-## Recommendations
-
-### Immediate (This Sprint)
-
-1. {Fix critical issues}
-
-### Short-term (This Milestone)
-
-1. {Address major gaps}
-
-### Long-term (Ongoing)
-
-1. {Improve infrastructure}
-
-## Appendix
-
-### Flaky Tests
-
-[List with failure patterns]
-
-### Slow Tests
-
-[List with durations]
-
-### Disabled Tests
-
-[List with reasons]
-```
-
----
-
-## Review Frequency
-
-| Review Type | Frequency | Scope | Owner |
-| ----------- | --------- | ----------------------- | --------- |
-| Quick Check | Weekly | Pass rates, flaky tests | QA |
-| Full Review | Monthly | Coverage, quality | Tech Lead |
-| Deep Dive | Quarterly | Infrastructure, strategy| Team |
-
----
-
-## Deliverables
-
-1. **Test Review Report** — Comprehensive analysis
-2. **Action Items** — Prioritized improvements
-3. **Coverage Matrix** — Visual gap identification
-4. **Technical Debt List** — Tests needing refactor
-
----
-
-## Validation
-
-Refer to `checklist.md` for validation criteria.
diff --git a/src/workflows/gds-document-project/SKILL.md b/src/workflows/gds-document-project/SKILL.md
index 574554e..d9be59e 100644
--- a/src/workflows/gds-document-project/SKILL.md
+++ b/src/workflows/gds-document-project/SKILL.md
@@ -3,4 +3,69 @@ name: gds-document-project
description: 'Analyze existing game projects to produce useful documentation. Use when the user says "document project" or "generate docs"'
---
-Follow the instructions in ./workflow.md.
+# Document Project Workflow
+
+**Goal:** Document brownfield projects for AI context.
+
+**Your Role:** Project documentation specialist.
+- Communicate all responses in {communication_language}
+
+
+## Paths
+
+- `installed_path` = `{skill-root}`
+- `instructions` = `{installed_path}/instructions.md`
+- `validation` = `{installed_path}/checklist.md`
+- `documentation_requirements_csv` = `{installed_path}/documentation-requirements.csv`
+
+---
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
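+
+As a hedged illustration (hypothetical values, not shipped defaults), a team override such as:
+
+```toml
+# {project-root}/_bmad/custom/gds-document-project.toml
+[workflow]
+# scalar: this value wins over the base customize.toml
+on_complete = "Summarize the generated docs in one paragraph."
+# plain array: this entry is appended after the base entries
+persistent_facts = ["file:{project-root}/docs/standards.md"]
+```
+
+would replace the base `on_complete` value while appending its `persistent_facts` entry after the defaults.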
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## EXECUTION
+
+Read fully and follow: `{installed_path}/instructions.md`
diff --git a/src/workflows/gds-document-project/customize.toml b/src/workflows/gds-document-project/customize.toml
new file mode 100644
index 0000000..5e5a290
--- /dev/null
+++ b/src/workflows/gds-document-project/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-document-project. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Brownfield documentation must reflect the codebase as it is, not as it should be."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches Step 4 (Update status and complete),
+# after the final outputs are produced. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gds-document-project/instructions.md b/src/workflows/gds-document-project/instructions.md
index 2456ce5..a32411d 100644
--- a/src/workflows/gds-document-project/instructions.md
+++ b/src/workflows/gds-document-project/instructions.md
@@ -215,7 +215,7 @@ Since no workflow is in progress:
- Or run `workflow-init` to create a workflow path and get guided next steps
{{/if}}
-
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
diff --git a/src/workflows/gds-document-project/workflow.md b/src/workflows/gds-document-project/workflow.md
deleted file mode 100644
index 276e9d2..0000000
--- a/src/workflows/gds-document-project/workflow.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-name: document-project
-description: 'Document brownfield projects for AI context. Use when the user says "document this project" or "generate project docs"'
----
-
-# Document Project Workflow
-
-**Goal:** Document brownfield projects for AI context.
-
-**Your Role:** Project documentation specialist.
-- Communicate all responses in {communication_language}
-
----
-
-## INITIALIZATION
-
-### Configuration Loading
-
-Load config from `{module_config}` and resolve:
-
-- `project_knowledge`
-- `user_name`
-- `communication_language`
-- `document_output_language`
-- `game_dev_experience`
-- `date` as system-generated current datetime
-
-### Paths
-
-- `installed_path` = `{skill_root}`
-- `instructions` = `{installed_path}/instructions.md`
-- `validation` = `{installed_path}/checklist.md`
-- `documentation_requirements_csv` = `{installed_path}/documentation-requirements.csv`
-
----
-
-## EXECUTION
-
-Read fully and follow: `{installed_path}/instructions.md`
diff --git a/src/workflows/gds-document-project/workflows/deep-dive-instructions.md b/src/workflows/gds-document-project/workflows/deep-dive-instructions.md
index c88dfb0..7f04202 100644
--- a/src/workflows/gds-document-project/workflows/deep-dive-instructions.md
+++ b/src/workflows/gds-document-project/workflows/deep-dive-instructions.md
@@ -290,6 +290,7 @@ These comprehensive docs are now ready for:
Thank you for using the document-project workflow!
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
+Exit workflow
diff --git a/src/workflows/gds-document-project/workflows/full-scan-instructions.md b/src/workflows/gds-document-project/workflows/full-scan-instructions.md
index 1340f75..28f10e1 100644
--- a/src/workflows/gds-document-project/workflows/full-scan-instructions.md
+++ b/src/workflows/gds-document-project/workflows/full-scan-instructions.md
@@ -1103,4 +1103,6 @@ When ready to plan new features, run the PRD workflow and provide this index as
Display: "State file saved: {{output_folder}}/project-scan-report.json"
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete` — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.
+
diff --git a/src/workflows/gds-quick-flow/gds-quick-dev/SKILL.md b/src/workflows/gds-quick-flow/gds-quick-dev/SKILL.md
index 5d0a1b9..7da67ff 100644
--- a/src/workflows/gds-quick-flow/gds-quick-dev/SKILL.md
+++ b/src/workflows/gds-quick-flow/gds-quick-dev/SKILL.md
@@ -3,4 +3,121 @@ name: gds-quick-dev
description: 'Implements any user intent, requirement, story, bug fix or change request by producing clean working code artifacts that follow the project''s existing architecture, patterns and conventions. Use when the user wants to build, fix, tweak, refactor, add or modify any code, component or feature.'
---
-Follow the instructions in ./workflow.md.
+# Quick Dev New Preview Workflow
+
+**Goal:** Turn user intent into a hardened, reviewable artifact.
+
+**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions.
+
+## Conventions
+
+- Bare paths (e.g. `template.md`) resolve from the skill root.
+- `{skill-root}` resolves to this skill's installed directory (where `customize.toml` lives).
+- `{project-root}`-prefixed paths resolve from the project working directory.
+- `{skill-name}` resolves to the skill directory's basename.
+
+## On Activation
+
+### Step 1: Resolve the Workflow Block
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow`
+
+**If the script fails**, resolve the `workflow` block yourself by reading these three files in base → team → user order and applying the same structural merge rules as the resolver:
+
+1. `{skill-root}/customize.toml` — defaults
+2. `{project-root}/_bmad/custom/{skill-name}.toml` — team overrides
+3. `{project-root}/_bmad/custom/{skill-name}.user.toml` — personal overrides
+
+Any missing file is skipped. Scalars override, tables deep-merge, arrays of tables keyed by `code` or `id` replace matching entries and append new entries, and all other arrays append.
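+
+For example, a personal override (hypothetical, illustrative only) can add an activation step without touching team defaults:
+
+```toml
+# {project-root}/_bmad/custom/gds-quick-dev.user.toml
+[workflow]
+# plain array: appended after any base and team entries, never replacing them
+activation_steps_prepend = ["Load {project-root}/docs/local-conventions.md before anything else."]
+```
+
+Because `activation_steps_prepend` is a plain array, the entry appends rather than overriding.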
+
+### Step 2: Execute Prepend Steps
+
+Execute each entry in `{workflow.activation_steps_prepend}` in order before proceeding.
+
+### Step 3: Load Persistent Facts
+
+Treat every entry in `{workflow.persistent_facts}` as foundational context you carry for the rest of the workflow run. Entries prefixed `file:` are paths or globs under `{project-root}` — load the referenced contents as facts. All other entries are facts verbatim.
+
+### Step 4: Load Config
+
+Load config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `user_name`
+- `communication_language`
+- `implementation_artifacts`
+
+### Step 5: Greet the User
+
+Greet `{user_name}`, speaking in `{communication_language}`.
+
+### Step 6: Execute Append Steps
+
+Execute each entry in `{workflow.activation_steps_append}` in order.
+
+Activation is complete. Begin the workflow below.
+
+## READY FOR DEVELOPMENT STANDARD
+
+A specification is "Ready for Development" when:
+
+- **Actionable**: Every task has a file path and specific action.
+- **Logical**: Tasks ordered by dependency.
+- **Testable**: All ACs use Given/When/Then.
+- **Complete**: No placeholders or TBDs.
+
+
+## SCOPE STANDARD
+
+A specification should target a **single user-facing goal** within **900–1600 tokens**:
+
+- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal.
+ - Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard"
+ - Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry"
+- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents.
+- **Neither limit is a gate.** Both are proposals with user override.
+
+
+## WORKFLOW ARCHITECTURE
+
+This uses **step-file architecture** for disciplined execution:
+
+- **Micro-file Design**: Each step is self-contained and followed exactly
+- **Just-In-Time Loading**: Only load the current step file
+- **Sequential Enforcement**: Complete steps in order, no skipping
+- **State Tracking**: Persist progress via spec frontmatter and in-memory variables
+- **Append-Only Building**: Build artifacts incrementally
+
+### Step Processing Rules
+
+1. **READ COMPLETELY**: Read the entire step file before acting
+2. **FOLLOW SEQUENCE**: Execute sections in order
+3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
+4. **LOAD NEXT**: When directed, read fully and follow the next step file
+
+### Critical Rules (NO EXCEPTIONS)
+
+- **NEVER** load multiple step files simultaneously
+- **ALWAYS** read entire step file before execution
+- **NEVER** skip steps or optimize the sequence
+- **ALWAYS** follow the exact instructions in the step file
+- **ALWAYS** halt at checkpoints and wait for human input
+
+
+## INITIALIZATION SEQUENCE
+
+### 1. Configuration Loading
+
+Load the full config from `{project-root}/_bmad/gds/config.yaml` and resolve:
+
+- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
+- `communication_language`, `document_output_language`, `game_dev_experience`
+- `date` as system-generated current datetime
+- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
+- `project_context` = `**/project-context.md` (load if exists)
+- CLAUDE.md / memory files (load if exist)
+
+YOU MUST ALWAYS communicate output in your agent communication style, using the configured `{communication_language}`.
+
+### 2. First Step Execution
+
+Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow.
diff --git a/src/workflows/gds-quick-flow/gds-quick-dev/customize.toml b/src/workflows/gds-quick-flow/gds-quick-dev/customize.toml
new file mode 100644
index 0000000..1fa64db
--- /dev/null
+++ b/src/workflows/gds-quick-flow/gds-quick-dev/customize.toml
@@ -0,0 +1,41 @@
+# DO NOT EDIT -- overwritten on every update.
+#
+# Workflow customization surface for gds-quick-dev. Mirrors the
+# agent customization shape under the [workflow] namespace.
+
+[workflow]
+
+# --- Configurable below. Overrides merge per BMad structural rules: ---
+# scalars: override wins • arrays (persistent_facts, activation_steps_*): append
+# arrays-of-tables with `code`/`id`: replace matching items, append new ones.
+
+# Steps to run before the standard activation (config load, greet).
+# Overrides append. Use for pre-flight loads, compliance checks, etc.
+
+activation_steps_prepend = []
+
+# Steps to run after greet but before the workflow begins.
+# Overrides append. Use for context-heavy setup that should happen
+# once the user has been acknowledged.
+
+activation_steps_append = []
+
+# Persistent facts the workflow keeps in mind for the whole run
+# (standards, compliance constraints, stylistic guardrails).
+# Distinct from the runtime memory sidecar — these are static context
+# loaded on activation. Overrides append.
+#
+# Each entry is either:
+# - a literal sentence, e.g. "Quick dev work must follow the project's existing architecture and conventions, not invent new ones."
+# - a file reference prefixed with `file:`, e.g. "file:{project-root}/docs/standards.md"
+# (glob patterns are supported; the file's contents are loaded and treated as facts).
+
+persistent_facts = [
+ "file:{project-root}/**/project-context.md",
+]
+
+# Scalar: executed when the workflow reaches step-05-present.md (final present/review
+# step), after the artifact is produced and reviewed. Override wins.
+# Leave empty for no custom post-completion behavior.
+
+on_complete = ""
diff --git a/src/workflows/gds-quick-flow/gds-quick-dev/step-05-present.md b/src/workflows/gds-quick-flow/gds-quick-dev/step-05-present.md
index 6b1a150..5efe961 100644
--- a/src/workflows/gds-quick-flow/gds-quick-dev/step-05-present.md
+++ b/src/workflows/gds-quick-flow/gds-quick-dev/step-05-present.md
@@ -70,3 +70,9 @@ Display summary of your work to the user, including the commit hash if one was c
- Offer to push and/or create a pull request.
Workflow complete.
+
+## On Complete
+
+Run: `python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete`
+
+If the resolved `workflow.on_complete` is non-empty, follow it as the final terminal instruction before exiting.
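+
+As a sketch (hypothetical value), a team could set this hook via a team override file:
+
+```toml
+# {project-root}/_bmad/custom/gds-quick-dev.toml
+[workflow]
+# scalar: override wins over the empty default in customize.toml
+on_complete = "Post the commit hash and artifact summary to the team changelog."
+```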
diff --git a/src/workflows/gds-quick-flow/gds-quick-dev/workflow.md b/src/workflows/gds-quick-flow/gds-quick-dev/workflow.md
deleted file mode 100644
index f82d1a3..0000000
--- a/src/workflows/gds-quick-flow/gds-quick-dev/workflow.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-main_config: '{module_config}'
----
-
-# Quick Dev New Preview Workflow
-
-**Goal:** Turn user intent into a hardened, reviewable artifact.
-
-**CRITICAL:** If a step says "read fully and follow step-XX", you read and follow step-XX. No exceptions.
-
-
-## READY FOR DEVELOPMENT STANDARD
-
-A specification is "Ready for Development" when:
-
-- **Actionable**: Every task has a file path and specific action.
-- **Logical**: Tasks ordered by dependency.
-- **Testable**: All ACs use Given/When/Then.
-- **Complete**: No placeholders or TBDs.
-
-
-## SCOPE STANDARD
-
-A specification should target a **single user-facing goal** within **900–1600 tokens**:
-
-- **Single goal**: One cohesive feature, even if it spans multiple layers/files. Multi-goal means >=2 **top-level independent shippable deliverables** — each could be reviewed, tested, and merged as a separate PR without breaking the others. Never count surface verbs, "and" conjunctions, or noun phrases. Never split cross-layer implementation details inside one user goal.
- - Split: "add dark mode toggle AND refactor auth to JWT AND build admin dashboard"
- - Don't split: "add validation and display errors" / "support drag-and-drop AND paste AND retry"
-- **900–1600 tokens**: Optimal range for LLM consumption. Below 900 risks ambiguity; above 1600 risks context-rot in implementation agents.
-- **Neither limit is a gate.** Both are proposals with user override.
-
-
-## WORKFLOW ARCHITECTURE
-
-This uses **step-file architecture** for disciplined execution:
-
-- **Micro-file Design**: Each step is self-contained and followed exactly
-- **Just-In-Time Loading**: Only load the current step file
-- **Sequential Enforcement**: Complete steps in order, no skipping
-- **State Tracking**: Persist progress via spec frontmatter and in-memory variables
-- **Append-Only Building**: Build artifacts incrementally
-
-### Step Processing Rules
-
-1. **READ COMPLETELY**: Read the entire step file before acting
-2. **FOLLOW SEQUENCE**: Execute sections in order
-3. **WAIT FOR INPUT**: Halt at checkpoints and wait for human
-4. **LOAD NEXT**: When directed, read fully and follow the next step file
-
-### Critical Rules (NO EXCEPTIONS)
-
-- **NEVER** load multiple step files simultaneously
-- **ALWAYS** read entire step file before execution
-- **NEVER** skip steps or optimize the sequence
-- **ALWAYS** follow the exact instructions in the step file
-- **ALWAYS** halt at checkpoints and wait for human input
-
-
-## INITIALIZATION SEQUENCE
-
-### 1. Configuration Loading
-
-Load and read full config from `{main_config}` and resolve:
-
-- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
-- `communication_language`, `document_output_language`, `game_dev_experience`
-- `date` as system-generated current datetime
-- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
-- `project_context` = `**/project-context.md` (load if exists)
-- CLAUDE.md / memory files (load if exist)
-
-YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the config `{communication_language}`.
-
-### 2. First Step Execution
-
-Read fully and follow: `./step-01-clarify-and-route.md` to begin the workflow.