Based on Claude Code Mastery Guides V1-V5 by TheDecipherist https://github.com/TheDecipherist/claude-code-mastery
New here? When starting a fresh session in this project, greet the user: "Welcome to the Claude Code Mastery Project Starter Kit! Use `/help` to see all 27 commands or `/show-user-guide` for the full interactive guide."
| Command | What it does |
|---|---|
| `pnpm dev` | Start dev server with hot reload |
| `pnpm dev:website` | Dev server on port 3000 |
| `pnpm dev:api` | Dev server on port 3001 |
| `pnpm dev:dashboard` | Dev server on port 3002 |
| `pnpm build` | Type-check + compile TypeScript |
| `pnpm start` | Run compiled production build |
| `pnpm typecheck` | TypeScript type-check only (no emit) |
| **Testing** | |
| `pnpm test` | Run ALL tests (unit + E2E) |
| `pnpm test:unit` | Run unit/integration tests (Vitest) |
| `pnpm test:unit:watch` | Unit tests in watch mode |
| `pnpm test:coverage` | Unit tests with coverage report |
| `pnpm test:e2e` | Run E2E tests (kills test ports first, spawns servers on 4000/4010) |
| `pnpm test:e2e:ui` | E2E with Playwright UI mode |
| `pnpm test:e2e:headed` | E2E with visible browser |
| `pnpm test:e2e:chromium` | E2E on Chromium only (fast) |
| `pnpm test:e2e:report` | Open last E2E test report |
| `pnpm test:kill-ports` | Kill anything on test ports (4000, 4010, 4020) |
| **Database** | |
| `pnpm db:query <name>` | Run a dev/test database query |
| `pnpm db:query:list` | List all registered database queries |
| **Content** | |
| `pnpm content:build` | Build all published markdown → HTML |
| `pnpm content:build:id <id>` | Build a single article by ID |
| `pnpm content:list` | List all articles and their status |
| **CSS Optimization** | |
| `pnpm build:optimize` | Post-build CSS class consolidation via Classpresso (runs automatically after `pnpm build`) |
| **Docker** | |
| `pnpm docker:optimize` | Audit Dockerfile against 12 best practices (use `/optimize-docker` in Claude) |
| **Getting Started** | |
| `/help` | List all commands, skills, and agents |
| `/quickstart` | Interactive first-run walkthrough for new users |
| `/show-user-guide` | Open the comprehensive User Guide in your browser |
| **Setup** | |
| `/install-global` | Install/merge global Claude config into `~/.claude/` (one-time, never overwrites) |
| `/install-global mdd` | Install or update the MDD npm package globally (`npm install -g @thedecipherist/mdd && mdd install`) |
| `/install-mdd [path]` | Install MDD workflow into any existing project — scaffolds `.mdd/` directory structure (requires `@thedecipherist/mdd` npm package) |
| `/setup` | Interactive `.env` configuration — GitHub, database, Docker, analytics, RuleCatch |
| `/setup --reset` | Re-configure everything from scratch |
| `/set-project-profile-default` | Set the default profile for `/new-project` (any profile: clean, go, vue, python-api, etc.) |
| `/add-project-setup` | Interactive wizard to create a named profile in `claude-mastery-project.conf` |
| `/projects-created` | List all projects created by the starter kit with creation dates |
| `/remove-project <name>` | Remove a project from the registry and optionally delete it from disk |
| `/convert-project-to-starter-kit` | Merge the starter kit into an existing project (non-destructive) |
| `/update-project` | Update a starter-kit project with the latest commands, hooks, and rules |
| `/update-project --clean` | Remove starter-kit-scoped commands from a project (cleanup for older scaffolds) |
| `/add-feature <name>` | Add a capability (MongoDB, Docker, testing, etc.) to an existing project |
| **RuleCatch** | |
| `pnpm ai:monitor` | Free monitor mode — live AI activity in a separate terminal (no API key needed) |
| `/what-is-my-ai-doing` | Same as above — launches the AI-Pooler free monitor |
| **Git** | |
| `/worktree <name>` | Create an isolated branch + worktree for a task (never touch main) |
| **Code Quality** | |
| `/refactor <file>` | Audit + refactor a file against all CLAUDE.md rules (split, type, extract, clean) |
| **API** | |
| `/create-api <resource>` | Scaffold a full API endpoint — route, handler, types, tests — wired into the server |
| **Documentation** | |
| `/diagram <type>` | Generate diagrams from actual code: architecture, api, database, infrastructure, all |
| **Utility** | |
| `pnpm clean` | Remove `dist/`, `coverage/`, `test-results/`, `playwright-report/` |
- NEVER commit passwords, API keys, tokens, or secrets to git/npm/docker
- NEVER commit `.env` files — ALWAYS verify `.env` is in `.gitignore`
- Before ANY commit: verify no secrets are included
- NEVER output secrets in suggestions, logs, or responses
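As an illustration of the pre-commit check above, a naive scan for common secret shapes could be sketched like this (the patterns, and the `scanForSecrets` helper itself, are hypothetical examples, not part of the starter kit — real scanning should use a dedicated tool):

```typescript
// Illustrative sketch only — these patterns are examples, not exhaustive.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/, // AWS access key ID shape
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // PEM private key header
  /ghp_[A-Za-z0-9]{36}/, // GitHub personal access token shape
  /(api[_-]?key|secret|password)\s*[:=]\s*['"][^'"]{8,}['"]/i, // inline credential assignment
];

// Returns the sources of every pattern that matched — empty array means clean.
export function scanForSecrets(content: string): string[] {
  return SECRET_PATTERNS.filter((p) => p.test(content)).map((p) => p.source);
}
```

A scan like this is a cheap first line of defense, not a substitute for keeping secrets out of tracked files in the first place.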
- ALWAYS use TypeScript for new files (strict mode)
- NEVER use `any` unless absolutely necessary and documented why
- When editing JavaScript files, convert to TypeScript first
- Types are specs — they tell you what functions accept and return
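A small illustration of the "types are specs" point (the `findUser` function is a hypothetical example, not from this codebase):

```typescript
interface User {
  id: string;
  email: string;
}

// The signature alone documents the contract: takes a list and an id,
// and may not find a user — the return type says so explicitly.
function findUser(users: User[], id: string): User | undefined {
  return users.find((u) => u.id === id);
}

// `User | undefined` forces every caller to handle the missing case;
// the type checker enforces the spec.
const result = findUser([{ id: '1', email: 'a@b.co' }], '2');
if (result === undefined) {
  console.log('not found'); // handled explicitly, not by accident
}
```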
CORRECT: /api/v1/users
WRONG: /api/users
Every API endpoint MUST use /api/v1/ prefix. No exceptions.
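One way to make the prefix impossible to forget is to centralize it in a helper that all route registrations go through. This is a sketch, assuming nothing about the server framework; the `apiPath` helper is a hypothetical name:

```typescript
// Hypothetical helper — centralizes the mandatory /api/v1/ prefix so
// individual routes cannot accidentally omit it.
const API_PREFIX = '/api/v1';

function apiPath(resource: string): string {
  // Normalize the leading slash so apiPath('users') and apiPath('/users') agree.
  const clean = resource.startsWith('/') ? resource : `/${resource}`;
  return `${API_PREFIX}${clean}`;
}

console.log(apiPath('users')); // '/api/v1/users'
console.log(apiPath('/orders/42')); // '/api/v1/orders/42'
```

If the version ever bumps to `/api/v2/`, only the constant changes.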
StrictDB started as this starter kit's custom database wrapper and evolved into a standalone npm package. Install strictdb + your database driver. Use StrictDB.create() directly. NEVER import native drivers (mongodb, pg, mysql2, mssql, better-sqlite3) — StrictDB handles everything.
- NEVER create database connections anywhere except your app's startup/entry point
- NEVER use `mongoose` or any ODM
- StrictDB has built-in sanitization, guardrails, and AI-first discovery
- Backend auto-detected from the `STRICTDB_URI` scheme — one API for all databases
| URI Scheme | Backend |
|---|---|
| `mongodb://`, `mongodb+srv://` | MongoDB |
| `postgresql://`, `postgres://` | PostgreSQL |
| `mysql://` | MySQL |
| `mssql://` | MSSQL |
| `file:`, `sqlite:` | SQLite |
| `http://`, `https://` | Elasticsearch |
import { StrictDB } from 'strictdb';
// Create once at app startup, share the instance
const db = await StrictDB.create({ uri: process.env.STRICTDB_URI! });

// CORRECT — use the StrictDB instance
const user = await db.queryOne<User>('users', { email });
// WRONG — NEVER import native drivers
import { MongoClient } from 'mongodb'; // FORBIDDEN
import { Pool } from 'pg'; // FORBIDDEN

// Single document/row lookup
const user = await db.queryOne<User>('users', { email });
// Multiple documents/rows with options
const recentOrders = await db.queryMany<Order>('orders',
{ userId, status: 'active' },
{ sort: { createdAt: -1 }, limit: 20 },
);
// Lookup/join
const userWithOrders = await db.queryWithLookup<UserWithOrders>('users', {
match: { _id: userId },
lookup: { from: 'orders', localField: '_id', foreignField: 'userId', as: 'orders' },
unwind: 'orders',
});
// Count
const total = await db.count('users', { role: 'admin' });

// Insert
await db.insertOne('users', { email, name, createdAt: new Date() });
await db.insertMany('events', batchOfEvents);
// Update — use $inc for counters, $set for fields (NEVER read-modify-write)
await db.updateOne('users', { _id: userId }, { $set: { name: 'New Name' } });
await db.updateOne('stats', { date }, { $inc: { pageViews: 1, visitors: 1 } }, true); // upsert
// Batch operations
await db.batch([
{ operation: 'insertOne', collection: 'orders', doc: { item: 'widget', qty: 5 } },
{ operation: 'updateOne', collection: 'inventory', filter: { sku: 'W1' }, update: { $inc: { stock: -5 } } },
]);
// Delete
await db.deleteOne('tokens', { token: expiredToken });

// Discover collection schema — call before querying unfamiliar collections
const schema = await db.describe('users');
// Dry-run validation — catches errors before execution
const check = await db.validate('users', { filter: { role: 'admin' }, doc: { email: 'test@test.com' } });
// See the native query under the hood
const plan = await db.explain('users', { filter: { role: 'admin' }, limit: 50 });

StrictDB-MCP — AI agents should use the strictdb-mcp MCP server for database operations. It exposes 14 tools with all guardrails enforced automatically:

`claude mcp add strictdb -- npx -y strictdb-mcp@latest`

Requires `STRICTDB_URI` in your environment.
import { z } from 'zod';
db.registerCollection({
name: 'users',
schema: z.object({
email: z.string().max(255),
name: z.string(),
role: z.enum(['admin', 'user', 'mod']),
}),
indexes: [{ collection: 'users', fields: { email: 1 }, unique: true }],
});
// Call once at app startup
await db.ensureIndexes();

ANY crash or termination signal MUST close database connections before exiting.
NEVER call process.exit() without closing connections first.
// Termination signals — clean exit
process.on('SIGTERM', () => db.gracefulShutdown(0));
process.on('SIGINT', () => db.gracefulShutdown(0));
// Crashes — close connections, then exit with error code
process.on('uncaughtException', (err) => {
console.error('Uncaught Exception:', err);
db.gracefulShutdown(1);
});
process.on('unhandledRejection', (reason) => {
console.error('Unhandled Rejection:', reason);
db.gracefulShutdown(1);
});

`db.gracefulShutdown()` is idempotent — safe to call from multiple signals.
ABSOLUTE RULE: ALL ad-hoc / test / dev database queries go through the db-query system. No exceptions.
When a developer asks to "look something up in the database", "check a collection", "find a user", or any exploratory query:
- Create a query file in `scripts/queries/<descriptive-name>.ts`
- Register it in the `scripts/db-query.ts` query registry
- NEVER create standalone scripts, one-off files, or inline queries in `src/`
// scripts/queries/find-expired-sessions.ts
import type { StrictDB } from 'strictdb';
export default {
name: 'find-expired-sessions',
description: 'Find sessions that expired in the last 24 hours',
async run(db: StrictDB, args: string[]): Promise<void> {
const cutoff = new Date(Date.now() - 24 * 60 * 60 * 1000);
const sessions = await db.queryMany('sessions',
{ expiresAt: { $lt: cutoff } },
{ sort: { expiresAt: -1 }, limit: 50 },
);
console.log(`Found ${sessions.length} expired sessions:`);
console.log(JSON.stringify(sessions, null, 2));
},
};

Then register in `scripts/db-query.ts`:
const queryRegistry = {
'find-expired-sessions': () => import('./queries/find-expired-sessions.js'),
};

Run: `npx tsx scripts/db-query.ts find-expired-sessions`
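The dispatcher side of `scripts/db-query.ts` might look roughly like this (a sketch only — `resolveQuery` and the surrounding structure are illustrative, and the actual file may differ):

```typescript
// Sketch of how the registry could be resolved against CLI arguments.
type RegisteredQuery = {
  name: string;
  description: string;
  run(db: unknown, args: string[]): Promise<void>;
};

type Registry = Record<string, () => Promise<{ default: RegisteredQuery }>>;

// Returns the loaded query for a name, or the sorted list of registered
// names when --list (or nothing) is passed. Unknown names fail loudly.
async function resolveQuery(
  registry: Registry,
  argv: string[],
): Promise<RegisteredQuery | string[]> {
  const [name] = argv;
  if (!name || name === '--list') return Object.keys(registry).sort();
  const loader = registry[name];
  if (!loader) throw new Error(`Unknown query: ${name}. Run with --list.`);
  return (await loader()).default;
}
```

The lazy `() => import(...)` loaders mean a query file is only evaluated when actually invoked.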
Why this matters:
- One instance — prevents connection exhaustion (the #1 Claude Code database failure)
- One place to change — swap databases without touching business logic
- One place to mock — testing becomes trivial
- One place for test queries — no scripts scattered across the project
- Discoverable — `npx tsx scripts/db-query.ts --list` shows all available queries
FORBIDDEN patterns:
// NEVER do this — creates rogue query files outside the system
// scripts/check-users.ts ← WRONG
// src/utils/debug-query.ts ← WRONG
// src/handlers/temp-lookup.ts ← WRONG
// ALWAYS do this — use the db-query system
// scripts/queries/check-users.ts + register in db-query.ts ← CORRECT

- ALWAYS define explicit success criteria for E2E tests
- "Page loads" is NOT a success criterion
- Every test MUST verify: URL, visible elements, data displayed
- NEVER write tests without assertions
- Use `/create-e2e <feature>` to create E2E tests with proper structure
// CORRECT — explicit success criteria (MINIMUM 3 assertions per test)
await expect(page).toHaveURL('/dashboard'); // 1. URL
await expect(page.locator('h1')).toContainText('Welcome'); // 2. Element visible
await expect(page.locator('[data-testid="user"]')).toContainText('test@example.com'); // 3. Data correct
// WRONG — passes even if broken
await page.goto('/dashboard');
// no assertion!

A test is NOT finished until it has:
- At least one URL assertion (`toHaveURL`)
- At least one element visibility assertion (`toBeVisible`)
- At least one content/data assertion (`toContainText`, `toHaveValue`)
- Error case coverage (what happens when it fails?)
E2E test execution — ALWAYS kills test ports first:
pnpm test:e2e # kills ports 4000/4010/4020 → spawns servers → runs Playwright
pnpm test:e2e:headed # same but with visible browser
pnpm test:e2e:ui      # same but with Playwright UI mode

E2E tests run on TEST ports (4000, 4010, 4020) — never dev ports.
playwright.config.ts spawns servers automatically via webServer.
- ALWAYS use environment variables for secrets
- NEVER put API keys, passwords, or tokens directly in code
- NEVER hardcode connection strings — use STRICTDB_URI from .env
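A common way to apply these rules is a fail-fast guard at the app's entry point. This is a sketch; the `requireEnv` helper is a hypothetical name, and the variable list is an example:

```typescript
// Hypothetical startup guard — crash immediately if required secrets are
// missing, instead of failing later with a confusing runtime error.
function requireEnv(
  names: string[],
  env: Record<string, string | undefined> = process.env,
): Record<string, string> {
  const missing = names.filter((n) => !env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  // All names verified present above, so the cast is safe.
  return Object.fromEntries(names.map((n) => [n, env[n] as string]));
}

// Usage at the app entry point:
// const { STRICTDB_URI } = requireEnv(['STRICTDB_URI']);
```

Note the error message names the missing variables but never prints their values.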
- NEVER auto-deploy, even if the fix seems simple
- NEVER assume approval — wait for explicit "yes, deploy"
- ALWAYS ask before deploying to production
- No file > 300 lines (split if larger)
- No function > 50 lines (extract helper functions)
- All tests must pass before committing
- TypeScript must compile with no errors (`tsc --noEmit`)
- When multiple `await` calls are independent (none depends on another's result), ALWAYS use `Promise.all`
- NEVER await independent operations sequentially — it wastes time
- Before writing sequential awaits, evaluate: does the second call need the first call's result?
// CORRECT — independent operations run in parallel
const [users, products, orders] = await Promise.all([
getUsers(),
getProducts(),
getOrders(),
]);
// WRONG — sequential when they don't depend on each other
const users = await getUsers();
const products = await getProducts(); // waits for users unnecessarily
const orders = await getOrders(); // waits for products unnecessarily

// CORRECT — sequential when there IS a dependency
const user = await getUserById(id);
const orders = await getOrdersByUserId(user.id); // needs user.id

MDD is no longer part of the starter kit. It lives in its own npm package at https://github.com/TheDecipherist/mdd.
The MDD terminal dashboard (mdd-tui) is a separate package at https://github.com/TheDecipherist/mdd-tui.
To update MDD: edit files in ~/projects/mdd/commands/, bump mdd_version in commands/mdd.md frontmatter AND version in package.json, build (pnpm build), then npm publish --access public.
To install or update MDD globally: npm install -g @thedecipherist/mdd && mdd install (or run /install-global mdd in Claude Code — same thing).
Auto-branch is ON by default. A hook blocks commits to main. To avoid wasted work, ALWAYS check and branch BEFORE editing any files:
# MANDATORY first step — do this BEFORE writing or editing anything:
git branch --show-current
# If on main → create a feature branch IMMEDIATELY:
git checkout -b feat/<task-name>
# NOW start working.

Branch naming conventions:

- `feat/<name>` — new features
- `fix/<name>` — bug fixes
- `docs/<name>` — documentation changes
- `refactor/<name>` — code refactors
- `chore/<name>` — maintenance tasks
- `test/<name>` — test additions
Why branch FIRST, not at commit time:
- The `check-branch.sh` hook blocks `git commit` on `main`
- If you edit 10 files on `main` then try to commit, you'll be blocked and have to branch retroactively
- Branching first costs 1 second. Branching after being blocked wastes time and creates messy history.
- Use `/worktree <branch-name>` when you want a separate directory (parallel sessions)
- If Claude screws up on a feature branch, delete it — main is untouched
# For parallel sessions (separate directories):
/worktree add-auth # creates branch + separate working directory
# To disable auto-branching:
# Set auto_branch = false in claude-mastery-project.conf

Before merging any branch back to main:
- Review the full diff: `git diff main...HEAD`
- Ask the user: "Do you want RuleCatch to check for violations on this branch?"
- Only merge after the user confirms
Why this matters:
- Main should always be deployable
- Feature branches are disposable — delete and start over if needed
- `git diff main...HEAD` shows exactly what changed, making review easy
- Auto-branching means zero friction — you don't have to remember
- Worktrees let you run multiple Claude sessions in parallel without conflicts
- RuleCatch catches violations Claude missed — last line of defense before merge
Disabled by default. When enabled (docker_test_before_push = true in claude-mastery-project.conf), ANY docker push is BLOCKED until the image passes local verification:
- Build the image
- Run the container locally
- Wait 5 seconds for startup
- Verify container is still running (didn't crash/exit)
- Hit the health endpoint (must return 200)
- Check logs for fatal errors
- Clean up test container
- Only then allow `docker push`
If any step fails: STOP, show what failed, and do NOT push.
# Enable in claude-mastery-project.conf:
docker_test_before_push = true
# Disable (default):
docker_test_before_push = false

This gate applies globally — every command or workflow that pushes to Docker Hub must respect it.
Open-source packages by TheDecipherist (the developer of this starter kit) are integrated into project profiles. All are MIT-licensed.
Provides semantic CSS class patterns to Claude via MCP, reducing token usage when working with styles. Auto-included in CSS-enabled profiles (mcp field in claude-mastery-project.conf).
`claude mcp add classmcp -- npx -y classmcp@latest`

npm: classmcp
Consolidates CSS classes after build for 50% faster style recalculation with zero runtime overhead. Auto-included as a devDependency in CSS-enabled profiles; runs via pnpm build:optimize (also auto-runs as postbuild).
`pnpm add -D classpresso`

npm: classpresso
Gives AI agents direct database access through 14 MCP tools with full guardrails, sanitization, and error handling. Auto-included in database-enabled profiles (mcp field in claude-mastery-project.conf).
`claude mcp add strictdb -- npx -y strictdb-mcp@latest`

npm: strictdb-mcp
Proxy-based lazy JSON expansion achieving ~70% memory reduction. Not auto-included — install only if your project handles large JSON payloads.
`pnpm add tersejson`

npm: tersejson
Before jumping to conclusions:
- Missing UI element? → Check feature gates BEFORE assuming bug
- Empty data? → Check if services are running BEFORE assuming broken
- 404 error? → Check service separation BEFORE adding endpoint
- Auth failing? → Check which auth system BEFORE debugging
- Test failing? → Read the error message fully BEFORE changing code
If you're on Windows, you should be running VS Code in WSL 2 mode. Most people don't know this exists and it dramatically changes everything:
- HMR is 5-10x faster — file changes don't cross the Windows/Linux boundary
- Playwright tests run significantly faster — native Linux browser processes
- File watching actually works — `tsx watch`, `next dev`, `nodemon` are all reliable
- Node.js filesystem operations avoid the slow NTFS translation layer
- Claude Code runs faster — native Linux tools (`grep`, `find`, `git`)
CRITICAL: Your project must be on the WSL filesystem (~/projects/), NOT on /mnt/c/. Having WSL but keeping your project on the Windows filesystem gives you the worst of both worlds.
# Check if you're set up correctly:
pwd
# GOOD: /home/you/projects/my-app
# BAD: /mnt/c/Users/you/projects/my-app ← still hitting Windows filesystem
# VS Code: click green "><" icon bottom-left → "Connect to WSL"

Run `/setup` to auto-detect your environment and get specific instructions.
| Service | Dev Port | Test Port | URL |
|---|---|---|---|
| Website | 3000 | 4000 | http://localhost:{port} |
| API | 3001 | 4010 | http://localhost:{port} |
| Dashboard | 3002 | 4020 | http://localhost:{port} |
When starting any service, ALWAYS use its assigned port:
# CORRECT
npx next dev -p 3002
# WRONG — never let it default
npx next dev

Before starting services, ALWAYS kill existing processes on those ports:

lsof -ti:3000,3001,3002 | xargs kill -9 2>/dev/null

project/
├── CLAUDE.md # You are here
├── CLAUDE.local.md # Personal overrides (gitignored)
├── .claude/
│ ├── commands/ # Slash commands (/review, /refactor, /worktree, /new-project, etc.)
│ ├── skills/ # Triggered expertise & scaffolding templates
│ ├── agents/ # Custom subagents
│ └── hooks/ # Enforcement scripts (9 hooks: secrets, branch, ports, rybbit, e2e, lint, env-sync, rulecatch)
├── project-docs/
│ ├── ARCHITECTURE.md # System overview & data flow
│ ├── INFRASTRUCTURE.md # Deployment & environment details
│ └── DECISIONS.md # Why we chose X over Y
├── docs/ # GitHub Pages site
│ └── user-guide.html # Interactive User Guide (HTML)
├── src/
│ ├── handlers/ # Business logic
│ ├── adapters/ # External service wrappers
│ └── types/ # Shared TypeScript types
├── tests/
│ ├── unit/
│ ├── integration/
│ └── e2e/
├── scripts/
│ ├── db-query.ts # Test Query Master — index of all dev/test queries
│ ├── queries/ # Individual query files (dev/test only, NOT production)
│ ├── build-content.ts # Markdown → HTML article builder
│ └── content.config.json # Article registry (source, output, SEO metadata)
├── content/ # Markdown source files for articles/posts
├── USER_GUIDE.md # Comprehensive User Guide (Markdown)
├── .env.example # Template with placeholders (committed)
├── .env # Actual secrets (NEVER committed)
├── .gitignore
├── .dockerignore
├── package.json # All scripts: dev, test, db:query, content:build, ai:monitor
├── claude-mastery-project.conf # Profile presets for /new-project (clean, default, api, go, etc.)
├── playwright.config.ts # E2E test config (test ports 4000/4010/4020, webServer)
├── vitest.config.ts # Unit/integration test config
└── tsconfig.json
| Document | Purpose | When to Read |
|---|---|---|
| `project-docs/ARCHITECTURE.md` | System overview & data flow | Before architectural changes |
| `project-docs/INFRASTRUCTURE.md` | Deployment details | Before environment changes |
| `project-docs/DECISIONS.md` | Architectural decisions | Before proposing alternatives |
ALWAYS read relevant docs before making cross-service changes.
// CORRECT — explicit, typed
import { getUserById } from './handlers/users.js';
import type { User } from './types/index.js';
// WRONG — barrel imports that pull everything
import * as everything from './index.js';

// CORRECT — handle errors explicitly
try {
const user = await getUserById(id);
if (!user) throw new NotFoundError('User not found');
return user;
} catch (err) {
logger.error('Failed to get user', { id, error: err });
throw err;
}
// WRONG — swallow errors silently
try {
return await getUserById(id);
} catch {
return null; // silent failure
}

When working on a Go project (detected by `go.mod` in root or `language = go` in profile):
- Standard layout: `cmd/` for entry points, `internal/` for private packages — follow Go conventions
- Go modules: Always use `go.mod`/`go.sum` — NEVER use `GOPATH` mode or `dep`
- golangci-lint: Run `golangci-lint run` before committing — config in `.golangci.yml`
- Table-driven tests: Use the `[]struct{ name string; ... }` pattern for multiple test cases
- context.Context: Every I/O function accepts `ctx context.Context` as its first parameter
- Interfaces: Accept interfaces, return structs — define interfaces at the consumer
- Error handling: NEVER ignore errors with `_` — always check and wrap with `fmt.Errorf("context: %w", err)`
- No global mutable state: Pass dependencies via struct fields, not package-level vars
- Graceful shutdown: Handle SIGINT/SIGTERM, close DB connections with `context.WithTimeout`
- API versioning: Same rule — all endpoints under the `/api/v1/` prefix
- Quality gates: Same limits — no file > 300 lines, no function > 50 lines
- Makefile: Use `make build`, `make test`, `make lint` — NOT raw `go` commands in scripts
When working on a Python project (detected by pyproject.toml in root or language = python in profile):
- Type hints ALWAYS: Every function MUST have type hints for all parameters AND return type
- Modern syntax: Use `str | None` (not `Optional[str]`), `list[str]` (not `List[str]`)
- Async consistently: FastAPI handlers must be `async def` for I/O operations
- pytest only: NEVER use unittest — use pytest with `@pytest.mark.parametrize` for table-driven tests
- Virtual environment: ALWAYS use `.venv/` — NEVER install packages globally
- Pydantic models: Use Pydantic `BaseModel` for all request/response schemas
- Pydantic settings: Use `pydantic-settings` `BaseSettings` for environment config
- ruff: Run `ruff check` before committing — config in `ruff.toml` or `pyproject.toml`
- API versioning: Same rule — all endpoints under the `/api/v1/` prefix
- Quality gates: Same limits — no file > 300 lines, no function > 50 lines
- Makefile: Use `make dev`, `make test`, `make lint` — NOT raw Python commands in scripts
- Graceful shutdown: Handle SIGINT/SIGTERM, close database connections before exiting
Renaming packages, modules, or key variables mid-project causes cascading failures that are extremely hard to catch. If you must rename:
- Create a checklist of ALL files and references first
- Use IDE semantic rename (not search-and-replace)
- Full project search for old name after renaming
- Check: .md files, .txt files, .env files, comments, strings, paths
- Start a FRESH Claude session after renaming
For any non-trivial task, start in plan mode. Don't let Claude write code until you've agreed on the plan. Bad plan = bad code. Always.
- Use plan mode for: new features, refactors, architectural changes, multi-file edits
- Skip plan mode for: typo fixes, single-line changes, obvious bugs
- One Claude writes the plan. You review it as the engineer. THEN code.
Every step in a plan MUST have a consistent, unique name. This is how the user references steps when requesting changes. Claude forgets to update plans — named steps make it unambiguous.
CORRECT — named steps the user can reference:
Step 1 (Project Setup): Initialize repo with TypeScript
Step 2 (Database Layer): Set up StrictDB
Step 3 (Auth System): Implement JWT authentication
Step 4 (API Routes): Create user endpoints
Step 5 (Testing): Write E2E tests for auth flow
WRONG — generic steps nobody can reference:
Step 1: Set things up
Step 2: Build the backend
Step 3: Add tests
When the user asks to change something in the plan:
- FIND the exact named step being changed
- REPLACE that step's content entirely with the new approach
- Review ALL other steps for contradictions with the change
- Rewrite the full updated plan so the user can see the complete picture
CORRECT:
User: "Change Step 3 (Auth System) to use session cookies instead of JWT"
Claude: Replaces Step 3 content, checks Steps 4-5 for JWT references,
outputs the FULL updated plan with Step 3 rewritten
WRONG:
User: "Actually use session cookies instead"
Claude: Appends "Also, use session cookies" at the bottom
← Step 3 still says JWT. Now the plan contradicts itself.
Claude will forget to do this. If you notice the plan has contradictions, tell Claude: "Rewrite the full plan — Step 3 and Step 7 contradict each other."
- If fundamentally changing direction: `/clear` → state requirements fresh
When updating any feature, keep these locations in sync:
- `README.md` (repository root)
- `docs/index.html` (GitHub Pages site)
- `project-docs/` (relevant documentation)
- `CLAUDE.md` quick reference table (if adding commands/scripts)
- `tests/STARTER-KIT-VERIFICATION.md` (if adding hooks/files)
- Inline code comments
- Test descriptions
If you update one, update ALL.
When creating a new .claude/commands/*.md or .claude/hooks/*.sh:
- README.md — Update the command count, project structure tree, and add a description section
- docs/index.html — Update the command count, project structure tree, and add a command card
- CLAUDE.md — Add to the quick reference table (if user-facing)
- tests/STARTER-KIT-VERIFICATION.md — Add verification checklist entry
- .claude/settings.json — Wire up hooks (if adding a hook)
This is NOT optional. Every command/hook must appear in all five locations before the commit.
Every command has a `scope:` field in its YAML frontmatter:

- `scope: project` (16 commands) — Work inside any project. Copied to scaffolded projects by `/new-project`, `/convert-project-to-starter-kit`, and `/update-project`.
- `scope: starter-kit` (11 commands) — Kit management only. Never copied to scaffolded projects.
Project commands: help, review, commit, progress, test-plan, architecture, security-check, optimize-docker, create-e2e, create-api, worktree, refactor, diagram, setup, what-is-my-ai-doing, show-user-guide
Starter-kit commands: new-project, update-project, convert-project-to-starter-kit, install-global, install-mdd, projects-created, remove-project, set-project-profile-default, add-project-setup, quickstart, add-feature
When distributing commands (new-project, convert, update), always filter by `scope: project` in the source command's frontmatter. Skills, agents, hooks, and `settings.json` are copied in full regardless of scope.
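The scope filter above could be sketched as a minimal frontmatter check (illustrative only — `commandScope` is a hypothetical helper, and the real distribution logic may parse frontmatter differently):

```typescript
// Sketch: extract the `scope:` value from a command file's YAML frontmatter.
// Handles only the simple single-line `scope: value` form this kit uses;
// a real implementation might use a proper YAML parser.
function commandScope(markdown: string): string | null {
  // Frontmatter is the content between the leading '---' fences.
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const scopeLine = match[1].split('\n').find((l) => l.startsWith('scope:'));
  return scopeLine ? scopeLine.slice('scope:'.length).trim() : null;
}

const doc = '---\nname: review\nscope: project\n---\n# /review\n';
console.log(commandScope(doc)); // 'project'
```

Filtering is then just `commandScope(fileContents) === 'project'` per command file.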
Every time Claude makes a mistake, add a rule to prevent it from happening again.
This is the single most powerful pattern for improving Claude's behavior over time:
- Claude makes a mistake (wrong pattern, bad assumption, missed edge case)
- You fix the mistake
- You tell Claude: "Update CLAUDE.md so you don't make that mistake again"
- Claude adds a rule to this file
- Mistake rates actually drop over time
This file is checked into git. The whole team benefits from every lesson learned.
Don't just fix bugs — fix the rules that allowed the bug. Every mistake is a missing rule.
If RuleCatch is installed: also add the rule as a custom RuleCatch rule so it's monitored automatically across all future sessions. CLAUDE.md rules are suggestions — RuleCatch enforces them.
- Quality over speed — if unsure, ask before executing
- Plan first, code second — use plan mode for non-trivial tasks
- One task, one chat — `/clear` between unrelated tasks
- One task, one branch — use `/worktree` to isolate work from main
- Use `/context` to check token usage when working on large tasks
- When testing: queue observations, fix in batch (not one at a time)
- Research shows 2% misalignment early in a conversation can cause 40% failure rate by end — start fresh when changing direction