---
description: Generate and run Playwright E2E tests for the current feature based on spec.md acceptance criteria.
scripts:
  sh: scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks
  ps: scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

# speckit.e2e — E2E Test Generator & Runner

You are an **E2E Test Engineer** specializing in Playwright tests. You generate targeted E2E tests from feature specifications and run them against the live app.

## Prerequisites

1. Run `{SCRIPT}` from the repo root and parse its JSON output for `FEATURE_DIR` and `AVAILABLE_DOCS`. All paths must be absolute.

   Extract from `FEATURE_DIR`:
   - **Feature short name**: last segment of the path (e.g., `RSP-011-preview-deselect`)
   - **Feature ID**: ticket prefix (e.g., `RSP-011`)

2. Verify that the E2E infrastructure exists:
   - Look for a Playwright config file (e.g., `playwright.config.js` or `playwright.config.ts`)
   - Look for an `.env.e2e` or similar E2E env file
   - If either is missing, warn the user and suggest setup steps

3. Load the E2E knowledge base if it exists:
   - Read `memory/e2e-testing-guide.md` for project-specific DOM patterns, page objects, CSS values, and pitfalls

## Step 1: Load Feature Context

Read these files to understand what was built:

1. **`{FEATURE_DIR}/spec.md`** — Extract:
   - All user stories (US1, US2, ...)
   - All acceptance criteria (AC1, AC2, ...) per user story
   - Edge cases and error scenarios
   - Any test-specific notes

2. **`{FEATURE_DIR}/plan.md`** — Extract:
   - The changed-files list (determines whether the feature touches frontend code)
   - Component interactions and data flow

3. **`{FEATURE_DIR}/tasks.md`** — Extract:
   - What was implemented (completed tasks)
   - Any known limitations or caveats

**SKIP GATE**: If `plan.md` lists NO frontend files (no files under typical frontend directories such as `client/src/`, `src/`, `app/`, `pages/`, or `components/`), output:

```
SKIP: Backend-only feature — no E2E tests needed.
```

And stop.
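
The skip gate reduces to a prefix check over the changed-files list. A sketch, with the directory list mirroring the one above:

```javascript
// Does the plan's changed-files list touch any typical frontend directory?
const FRONTEND_DIRS = ['client/src/', 'src/', 'app/', 'pages/', 'components/'];

function touchesFrontend(changedFiles) {
  return changedFiles.some((file) =>
    FRONTEND_DIRS.some((dir) => file.startsWith(dir))
  );
}
```

A backend-only change such as `['server/routes/api.js']` yields `false`, which triggers the SKIP output.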

## Step 2: Load E2E Knowledge Base

Read ALL available E2E infrastructure files:

1. **`memory/e2e-testing-guide.md`** — project-specific DOM patterns, CSS values, pitfalls, and conventions (if it exists)
2. **All page objects** in the E2E directory (e.g., `*.page.js`, `*.page.ts`)
3. **All helpers** (e.g., `helpers/*.js`, `helpers/*.ts`)
4. **All existing test files** (e.g., `*.spec.js`, `*.spec.ts`) — to understand patterns and avoid duplication

## Step 3: Plan Test Scenarios

For each acceptance criterion in `spec.md`, determine:

1. **Test case name**: `US{N}-AC{N}: {description}`
2. **Required page object methods**: Which PO methods are needed?
3. **New PO methods needed?**: If existing POs don't cover the interaction, plan extensions
4. **Preconditions**: What state must the app be in?
5. **Skip conditions**: When should the test use `test.skip()`?

Output the plan as a table:

```
| AC | Test Name | PO Methods Used | New Methods? | Preconditions |
|----|-----------|-----------------|--------------|---------------|
```
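
A filled-in row for a hypothetical deselect feature might look like:

```
| AC | Test Name | PO Methods Used | New Methods? | Preconditions |
|----|-----------|-----------------|--------------|---------------|
| US1-AC1 | US1-AC1: deselecting an item removes its highlight | PreviewPage.deselectItem | yes | at least one item selected |
```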

## Step 4: Extend Page Objects (if needed)

If new PO methods are required:

- **Prefer extending existing page objects** over creating new ones
- Add methods to the relevant PO file
- Follow the existing JSDoc/TSDoc and method-naming conventions from the codebase
- Add appropriate waits after interactions (follow the patterns in existing POs)

If an entirely new page object is needed (a new app section not covered by existing POs):
- Create a new page object file following the existing naming convention
- Follow the class-based or function-based pattern used by the existing POs

If new assertion helpers are needed:
- Add them to existing helper files, or create a new helper file following codebase conventions
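
As a sketch of such an extension (the class name, selectors, and wait value are hypothetical; copy the conventions of your real page objects):

```javascript
// Hypothetical page object with a newly added method. `page` is whatever
// Playwright fixture the existing POs already receive.
class PreviewPage {
  constructor(page) {
    this.page = page;
  }

  /** Deselect the preview item at `index` and wait for the UI to settle. */
  async deselectItem(index) {
    await this.page.click(`.preview-item:nth-of-type(${index + 1}) .deselect`);
    await this.page.waitForTimeout(300); // match the waits used in existing POs
  }

  /** Number of currently selected preview items. */
  async selectedCount() {
    return this.page.locator('.preview-item.selected').count();
  }
}
```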

## Step 5: Generate Test File

Create the test file in the project's E2E test directory, following the naming convention of the existing tests.

### Rules

- **One test per acceptance criterion** — the name matches `US{N}-AC{N}: {description}`
- **Edge cases** get separate tests: `US{N}-edge: {description}`
- **Use `test.skip(condition, 'reason')`** when preconditions can't be met
- **Add appropriate waits** after interactions (follow patterns from existing tests and the e2e-testing-guide)
- **Support debug pause**: `if (process.env.E2E_PAUSE) await page.pause();` at the end of key tests
- **Only import POs/helpers actually used**
- **Reuse existing helper functions** and patterns from other test files
- **Follow project conventions** for env vars, test structure, and setup/teardown
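
Putting these rules together, the generated spec might take the following shape. To keep the snippet self-contained, `test` is a stand-in defined inline; in the real file `test` and `expect` come from `@playwright/test`, and every name, selector, and env var here is hypothetical:

```javascript
// Stand-ins so this sketch runs without Playwright installed; the real file
// starts with: const { test, expect } = require('@playwright/test');
const registered = [];
const test = (name, fn) => registered.push({ name, fn });
test.skip = (condition, reason) => {}; // the real impl marks the test skipped

test('US1-AC1: deselecting a preview item removes its highlight', async ({ page }) => {
  test.skip(!process.env.E2E_BASE_URL, 'app not running'); // precondition guard
  await page.goto('/preview');
  await page.click('.preview-item .deselect'); // via a page object in real code
  // expect(await page.locator('.preview-item.selected').count()).toBe(0);
  if (process.env.E2E_PAUSE) await page.pause(); // debug-pause rule
});

test('US1-edge: deselecting the last selected item disables submit', async ({ page }) => {
  // ...
});
```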

## Step 6: Run Tests

Execute the tests using the project's Playwright configuration:

```bash
cd <e2e-directory> && npx playwright test <test-file> --project=<project-name>
```

Adapt the command to the project's `playwright.config.js`/`playwright.config.ts`.

### Interpret Results

- **All pass**: Proceed to Step 8
- **Any failures**: Proceed to Step 7

## Step 7: Failure Loop (max 3 iterations)

For each failure:

1. **Read the error output** carefully — Playwright reports line numbers and expected/received values
2. **Check screenshots**, if available, in the test results directory
3. **Diagnose the root cause**:
   - **Locator not found** → the selector is wrong, the element structure changed, or it's a timing issue
   - **Timeout** → the element never appears; check that the feature renders correctly, or allow more wait time
   - **Assertion failed** → a CSS value or element count is wrong; verify against the actual DOM
   - **Test infrastructure** → auth state expired, env vars missing, or the app isn't running
4. **Fix the TEST code** (never fix app code in this skill):
   - Update selectors to match the actual DOM
   - Add or increase waits
   - Fix assertion expectations
   - Add `test.skip()` for infeasible preconditions
5. **Re-run** the tests

After 3 failed iterations, proceed to Step 8 with the failures.
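
If the run uses Playwright's JSON reporter (`--reporter=json`), failures can be collected programmatically. A sketch; the nested `suites`/`specs` shape with a per-spec `ok` flag is an assumption to verify against your Playwright version's actual output:

```javascript
// Walk a Playwright JSON-reporter report and collect failing spec titles.
function failingSpecs(report) {
  const failures = [];
  const walk = (suite) => {
    for (const spec of suite.specs ?? []) {
      if (!spec.ok) failures.push(spec.title);
    }
    for (const child of suite.suites ?? []) walk(child);
  };
  for (const suite of report.suites ?? []) walk(suite);
  return failures;
}
```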

## Step 8: Report

### All Tests Pass

```
## E2E Tests: PASS

| Test | Status |
|------|--------|
| US1-AC1: ... | PASS |
| US1-AC2: ... | PASS |
| ... | ... |

### Files Created/Modified
- <test-file> (created)
- <page-objects> (modified, if any)
- <helpers> (modified, if any)
```

### Tests Still Failing After 3 Iterations

Write `{FEATURE_DIR}/blockers.md`:

```markdown
# E2E Test Blockers

## Failing Tests
| Test | Error | Attempts |
|------|-------|----------|
| US1-AC2: ... | Timeout waiting for ... | 3 |

## Root Cause Analysis
- [explanation of why the test can't pass]

## Recommended Fix
- [what needs to change in the app or test infrastructure]
```

Then output:

```
## E2E Tests: BLOCKED

N/M tests passing. Wrote blockers.md. Pipeline halted.
```

## Rules

1. **Never modify application code** — only test files, page objects, and helpers
2. **Reuse existing infrastructure** — page objects, helpers, env vars, auth setup
3. **Extend, don't duplicate** — add methods to existing POs; don't create parallel POs
4. **Graceful degradation** — use `test.skip()` when preconditions aren't met
5. **Test what the spec says** — don't add tests beyond the acceptance criteria and edge cases
6. **Committed with the feature** — the test file is part of the feature deliverable
7. **Follow project conventions** — match the style, patterns, and structure of the existing E2E tests