
Commit f3bfa55

Update .gitignore
1 parent 1cc05f6 commit f3bfa55

51 files changed

Lines changed: 2792 additions & 7 deletions

Lines changed: 50 additions & 0 deletions
---
description: Capture structured knowledge about a code entry point and save it to the knowledge docs.
---

# Knowledge Capture Assistant

Guide me through creating a structured understanding of a code entry point and saving it to the knowledge docs.

## Step 1: Gather Context
- Entry point (file, folder, function, API)
- Why this entry point matters (feature, bug, investigation)
- Relevant requirements/design docs (if any)
- Desired depth or focus areas (logic, dependencies, data flow)

## Step 2: Validate Entry Point
- Determine entry point type and confirm it exists
- Surface ambiguity (multiple matches) and ask for clarification
- If not found, suggest likely alternatives or spelling fixes

## Step 3: Collect Source Context
- Read the primary file/module and summarize purpose, exports, key patterns
- For folders: list structure, highlight key modules
- For functions/APIs: capture signature, parameters, return values, error handling
- Extract essential snippets (avoid large dumps)
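One lightweight way to capture a signature without dumping the whole file is to grep for exported declarations. A minimal sketch, assuming a TypeScript-style module; the file contents and the pattern are illustrative:

```shell
# Throwaway module standing in for the entry point (contents are illustrative)
src=$(mktemp)
cat > "$src" <<'EOF'
import { Item } from './types';

export function calculateTotalPrice(items: Item[], taxRate: number): number {
  return items.reduce((sum, item) => sum + item.price, 0) * (1 + taxRate);
}
EOF
# Pull only the exported declarations into the knowledge doc, not the full source
grep -E '^export (function|const|class|interface)' "$src"
```

Adjust the pattern to the language at hand; the point is to record signatures, not bodies.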

## Step 4: Analyze Dependencies
- Build a dependency view up to depth 3
- Track visited nodes to avoid loops
- Categorize dependencies (imports, function calls, services, external packages)
- Note important external systems or generated code that should be excluded
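As a starting point for the dependency view, the direct (depth-1) imports can be extracted with grep. A sketch assuming ES-module syntax; the fixture file and pattern are illustrative, and repeating the extraction on each resulting file (while recording visited files) yields the deeper levels:

```shell
# Stand-in entry module (contents are illustrative)
src=$(mktemp)
cat > "$src" <<'EOF'
import express from 'express';
import { db } from './db';
import { logger } from './logger';
EOF
# Depth-1 dependencies: the module specifier of every import, deduplicated
grep -oE "from '[^']+'" "$src" | sed "s/^from '//; s/'$//" | sort -u
```

Relative specifiers (`./db`) are internal modules to recurse into; bare specifiers (`express`) are external packages to categorize and stop at.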

## Step 5: Synthesize Explanation
- Draft an overview (purpose, language, high-level behavior)
- Detail core logic, key components, execution flow, patterns
- Highlight error handling, performance, security considerations
- Identify potential improvements or risks discovered during analysis

## Step 6: Create Documentation
- Normalize the entry point name to kebab-case (`calculateTotalPrice` → `calculate-total-price`)
- Create `docs/ai/implementation/knowledge-{name}.md` using the headings implied in Step 5 (Overview, Implementation Details, Dependencies, Visual Diagrams, Additional Insights, Metadata, Next Steps)
- Populate sections with findings, diagrams, and metadata (analysis date, depth, files touched)
- Include mermaid diagrams when they clarify flows or relationships
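The kebab-case normalization from Step 6 can be done with standard tools. A sketch; the sed pattern assumes a plain camelCase name and may need tweaking for acronyms or other edge cases:

```shell
name='calculateTotalPrice'
# Insert a hyphen before each uppercase letter, then lowercase everything
kebab=$(printf '%s' "$name" | sed -E 's/([a-z0-9])([A-Z])/\1-\2/g' | tr '[:upper:]' '[:lower:]')
echo "$kebab"                                      # calculate-total-price
echo "docs/ai/implementation/knowledge-$kebab.md"
```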

## Step 7: Review & Next Actions
- Summarize key insights and open questions for follow-up
- Suggest related areas for deeper dives or refactors
- Confirm the knowledge file path and remind me to commit it
- Encourage running `/capture-knowledge` again for related entry points if needed

Let me know the entry point and goals when you’re ready to begin the knowledge capture.
Lines changed: 25 additions & 0 deletions
---
description: Compare implementation with design and requirements docs to ensure alignment.
---

Compare the current implementation with the design in `docs/ai/design/` and the requirements in `docs/ai/requirements/`. Please follow this structured review:

1. Ask me for:
   - Feature/branch description
   - List of modified files
   - Relevant design doc(s) (feature-specific and/or project-level)
   - Any known constraints or assumptions

2. For each design doc:
   - Summarize key architectural decisions and constraints
   - Highlight components, interfaces, and data flows that must be respected

3. File-by-file comparison:
   - Confirm the implementation matches the design intent
   - Note deviations or missing pieces
   - Flag logic gaps, edge cases, or security issues
   - Suggest simplifications or refactors
   - Identify missing tests or documentation updates

4. Summarize findings with recommended next steps.
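The list of modified files requested in step 1 can be generated rather than typed by hand. A sketch in a throwaway repository; in a real branch you would diff against your base branch instead of `HEAD~1`:

```shell
# Build a throwaway repo so the command has something to show (paths are illustrative)
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev
echo base > app.py && git add . && git commit -qm 'base'
echo change >> app.py && echo new > util.py && git add . && git commit -qm 'feature work'
# Files touched by the most recent commit
git diff --name-only HEAD~1 HEAD
```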

.agent/workflows/code-review.md

Lines changed: 85 additions & 0 deletions
---
description: Perform a local code review before pushing changes, ensuring alignment with design docs and best practices.
---

# Local Code Review Assistant

You are helping me perform a local code review **before** I push changes. Please follow this structured workflow.

## Step 1: Gather Context
Ask me for:
- Brief feature/branch description
- List of modified files (with optional summaries)
- Relevant design doc(s) (e.g., `docs/ai/design/feature-{name}.md` or project-level design)
- Any known constraints or risky areas
- Any open bugs or TODOs linked to this work
- Which tests have already been run

If possible, request the latest diff:
```bash
git status -sb
git diff --stat
```

## Step 2: Understand Design Alignment
For each provided design doc:
- Summarize the architectural intent
- Note critical requirements, patterns, or constraints the design mandates

## Step 3: File-by-File Review
For every modified file:
1. Highlight deviations from the referenced design or requirements
2. Spot potential logic or flow issues and edge cases
3. Identify redundant or duplicate code
4. Suggest simplifications or refactors (prefer clarity over cleverness)
5. Flag security concerns (input validation, secrets, auth, data handling)
6. Check for performance pitfalls or scalability risks
7. Ensure error handling, logging, and observability are appropriate
8. Note any missing comments or docs
9. Flag missing or outdated tests related to this file

## Step 4: Cross-Cutting Concerns
- Verify naming consistency and adherence to project conventions
- Confirm documentation/comments are updated where the behavior changed
- Identify missing tests (unit, integration, E2E) needed to cover the changes
- Ensure configuration/migration updates are captured if applicable
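A quick mechanical pass catches some cross-cutting leftovers before the human review. A sketch; the marker list and the sample file are illustrative, and in practice you would run the grep over your modified files:

```shell
# Stand-in for a modified file (contents are illustrative)
src=$(mktemp)
cat > "$src" <<'EOF'
function pay(amount) {
  console.log('debug', amount); // TODO: remove before merge
  return amount * 1.2;
}
EOF
# Flag leftover debug output and unfinished-work markers, with line numbers
grep -nE 'TODO|FIXME|console\.log' "$src"
```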

## Step 5: Summarize Findings
Provide results in this structure:
```
### Summary
- Blocking issues: [count]
- Important follow-ups: [count]
- Nice-to-have improvements: [count]

### Detailed Notes
1. **[File or Component]**
   - Issue/Observation: ...
   - Impact: (e.g., blocking / important / nice-to-have)
   - Recommendation: ...
   - Design reference: [...]

2. ... (repeat per finding)

### Recommended Next Steps
- [ ] Address blocking issues
- [ ] Update design/implementation docs if needed
- [ ] Add/adjust tests:
  - Unit:
  - Integration:
  - E2E:
- [ ] Rerun local test suite
- [ ] Re-run code review command after fixes
```

## Step 6: Final Checklist
Confirm whether each item is complete (yes/no/needs follow-up):
- Implementation matches design & requirements
- No obvious logic or edge-case gaps remain
- Redundant code removed or justified
- Security considerations addressed
- Tests cover new/changed behavior
- Documentation/design notes updated

---
Let me know when you're ready to begin the review.

.agent/workflows/debug-leak.md

Lines changed: 5 additions & 0 deletions
---
description: debug
---

test

.agent/workflows/debug.md

Lines changed: 49 additions & 0 deletions
---
description: Guide me through debugging a code issue by clarifying expectations, identifying gaps, and agreeing on a fix plan before changing code.
---

# Local Debugging Assistant

Help me debug an issue by clarifying expectations, identifying gaps, and agreeing on a fix plan before changing code.

## Step 1: Gather Context
Ask me for:
- Brief issue description (what is happening?)
- Expected behavior or acceptance criteria (what should happen?)
- Current behavior and any error messages/logs
- Recent related changes or deployments
- Scope of impact (users, services, environments)

## Step 2: Clarify Reality vs Expectation
- Restate the observed behavior vs the expected outcome
- Confirm relevant requirements, tickets, or docs that define the expectation
- Identify acceptance criteria for the fix (how we know it is resolved)

## Step 3: Reproduce & Isolate
- Determine reproducibility (always, intermittent, environment-specific)
- Capture reproduction steps or commands
- Note any available tests that expose the failure
- List suspected components, services, or modules
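For intermittent failures, running the reproduction step in a loop gives a rough failure rate to record. A sketch; the inner command is a placeholder for your real repro step:

```shell
# Run the (placeholder) repro command several times and tally failures
fail=0
for i in 1 2 3 4 5; do
  sh -c 'exit 0' || fail=$((fail + 1))   # substitute your real repro command here
done
echo "failures: $fail/5"
```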

## Step 4: Analyze Potential Causes
- Brainstorm plausible root causes (data, config, code regressions, external dependencies)
- Gather supporting evidence (logs, metrics, traces, screenshots)
- Highlight gaps or unknowns that need investigation
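When a code regression is among the suspects, `git bisect run` can pinpoint the offending commit automatically. A sketch in a throwaway repository, where `grep -q ok app.txt` stands in for your real failing test:

```shell
# Throwaway repo: a good commit, a harmless one, then a breaking one
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev
echo ok > app.txt && git add . && git commit -qm 'good'
git commit -q --allow-empty -m 'unrelated change'
echo broken > app.txt && git add . && git commit -qm 'regression'
# Bisect between known-bad HEAD and known-good HEAD~2; the test command
# exits 0 on good commits and non-zero on bad ones
git bisect start HEAD HEAD~2
git bisect run sh -c 'grep -q ok app.txt'
git bisect reset
```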

## Step 5: Surface Options
- Present possible resolution paths (quick fix, deeper refactor, rollback, feature flag, etc.)
- For each option, list pros/cons, risks, and verification steps
- Consider required approvals or coordination

## Step 6: Confirm Path Forward
- Ask which option we should pursue
- Summarize the chosen approach, required pre-work, and success criteria
- Plan validation steps (tests, monitoring, user sign-off)

## Step 7: Next Actions & Tracking
- Document tasks, owners, and timelines for the selected option
- Note follow-up actions after deployment (monitoring, comms, postmortem if needed)
- Encourage updating relevant docs/tests once resolved

Let me know when you're ready to walk through the debugging flow.

.agent/workflows/execute-plan.md

Lines changed: 75 additions & 0 deletions
---
description: Execute a feature plan interactively, guiding me through each task while referencing relevant docs and updating status.
---

# Feature Plan Execution Assistant

Help me work through a feature plan one task at a time.

## Step 1: Gather Context
Ask me for:
- Feature name (kebab-case, e.g., `user-authentication`)
- Brief feature/branch description
- Relevant planning doc path (default `docs/ai/planning/feature-{name}.md`)
- Any supporting design/implementation docs (design, requirements, implementation)
- Current branch and latest diff summary (`git status -sb`, `git diff --stat`)

## Step 2: Load the Plan
- Request the planning doc contents or offer commands like:
  ```bash
  cat docs/ai/planning/feature-<name>.md
  ```
- Parse sections that represent task lists (look for headings + checkboxes `[ ]`, `[x]`).
- Build an ordered queue of tasks grouped by section (e.g., Foundation, Core Features, Testing).
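The checkbox parsing in Step 2 can be sketched with grep. The fixture file and task names are illustrative; a real planning doc would be read from `docs/ai/planning/`:

```shell
# Fixture planning doc (contents are illustrative)
plan=$(mktemp)
cat > "$plan" <<'EOF'
## Core Features
- [x] Task: scaffold auth module
- [ ] Task: implement login endpoint
- [ ] Task: add session storage
EOF
# Tally done vs open checkboxes to seed the task queue
done_count=$(grep -c '^- \[x\]' "$plan")
todo_count=$(grep -c '^- \[ \]' "$plan")
echo "done=$done_count todo=$todo_count"   # done=1 todo=2
```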

## Step 3: Present Task Queue
Show an overview:
```
### Task Queue: <Feature Name>
1. [status] Section • Task title
2. ...
```
Status legend: `todo`, `in-progress`, `done`, `blocked` (based on checkbox/notes if present).

## Step 4: Interactive Task Execution
For each task in order:
1. Display the section/context, full bullet text, and any existing notes.
2. Suggest relevant docs to reference (requirements/design/implementation).
3. Ask: "Plan for this task?" Offer to outline sub-steps using the design doc.
4. Prompt to mark status (`done`, `in-progress`, `blocked`, `skipped`) and capture short notes/next steps.
5. Encourage code/document edits inside Cursor; offer commands/snippets when useful.
6. If blocked, record blocker info and move the task to the end or into a "Blocked" list.

## Step 5: Update Planning Doc
After each status change, generate a Markdown snippet the user can paste back into the planning doc, e.g.:
```
- [x] Task: Implement auth service (Notes: finished POST /auth/login, tests added)
```
Remind the user to keep the source doc updated.
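The paste-back snippet can also be applied mechanically. A sketch using GNU sed's in-place flag (BSD sed needs `-i ''`); the file path and task name are illustrative:

```shell
plan=$(mktemp)
printf -- '- [ ] Task: Implement auth service\n' > "$plan"
# Flip the checkbox and append the session notes (GNU sed in-place edit)
sed -i 's|^- \[ \] Task: Implement auth service$|- [x] Task: Implement auth service (Notes: finished POST /auth/login, tests added)|' "$plan"
cat "$plan"
```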

## Step 6: Check for Newly Discovered Work
After each section, ask if new tasks were discovered. If yes, capture them in a "New Work" list with status `todo` and include them in the summary.

## Step 7: Session Summary
Produce a summary table:
```
### Execution Summary
- Completed: (list)
- In Progress: (list + owners/next steps)
- Blocked: (list + blockers)
- Skipped / Deferred: (list + rationale)
- New Tasks: (list)
```

## Step 8: Next Actions
Remind the user to:
- Update `docs/ai/planning/feature-{name}.md` with the new statuses
- Sync related docs (requirements/design/implementation/testing) if decisions changed
- Run `/check-implementation` to validate changes against design docs
- Run `/writing-test` to produce unit/integration tests targeting 100% coverage
- Run `/update-planning` to reconcile the planning doc with the latest status
- Run `/code-review` when ready for final review
- Run test suites relevant to completed tasks

---
Let me know when you're ready to start executing the plan. Provide the feature name and planning doc first.
