---
name: risk-assess
description: Analyze a repository to assess vibe-coding risk per module. Detects modules, scans code patterns, asks targeted questions, and writes structured assessment to CLAUDE.md.
disable-model-invocation: true
---

# /risk-assess — Interactive Risk Assessment

Assess the vibe-coding risk level of the current repository by scanning code patterns,
detecting modules, and interactively confirming uncertain dimensions with the user.

**Reference model:** Read `.claude/skills/shared/risk-model.md` before proceeding.
It contains the dimension definitions, scoring tables, grep patterns, module detection
strategy, mitigation signals, and the required output format. Do NOT duplicate those
patterns inline — always read them from the shared file at runtime.

---

## Step 1 — Module Detection

Detect the modules (independently assessable units) in this repository.

### 1a. Check Workspace Configs (confidence 0.9)

Look for these files in the repo root and parse the relevant field:

| Config File | Field to Parse |
|-------------|---------------|
| `pnpm-workspace.yaml` | `packages:` array |
| `package.json` (root) | `"workspaces"` field |
| `lerna.json` | `"packages"` array |
| `Cargo.toml` (root) | `[workspace] members` |
| `settings.gradle` / `settings.gradle.kts` | `include(...)` calls |
| `pom.xml` (root) | `<modules>` elements |
| `go.work` | `use (...)` directives |

Resolve any glob patterns to actual directories.
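
For illustration only, a minimal sketch of this step for the root `package.json` `"workspaces"` case (TypeScript; `detectNpmWorkspaces` and `resolveWorkspaceGlobs` are hypothetical helper names, and only the simple `dir/*` glob form is handled — the other config formats follow the same read-parse-resolve shape):

```typescript
import { existsSync, readFileSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";

// Hypothetical helper: expand simple "dir/*" globs against the filesystem.
// Real workspace globs can be richer (negations, "**"), so treat this as a sketch.
function resolveWorkspaceGlobs(root: string, patterns: string[]): string[] {
  const dirs: string[] = [];
  for (const pattern of patterns) {
    if (pattern.endsWith("/*")) {
      const base = join(root, pattern.slice(0, -2));
      if (!existsSync(base)) continue;
      for (const entry of readdirSync(base)) {
        const full = join(base, entry);
        if (statSync(full).isDirectory()) dirs.push(full);
      }
    } else if (existsSync(join(root, pattern))) {
      dirs.push(join(root, pattern));
    }
  }
  return dirs;
}

// Workspaces declared in the root package.json ("workspaces" is either an array
// or an object with a "packages" array).
function detectNpmWorkspaces(root: string): string[] {
  const pkgPath = join(root, "package.json");
  if (!existsSync(pkgPath)) return [];
  const pkg = JSON.parse(readFileSync(pkgPath, "utf8"));
  const patterns: string[] = Array.isArray(pkg.workspaces)
    ? pkg.workspaces
    : pkg.workspaces?.packages ?? [];
  return resolveWorkspaceGlobs(root, patterns);
}
```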

### 1b. Conventional Directories (confidence 0.6-0.8)

If no workspace config is found, check for these directory patterns (a detection sketch follows the list):

- `packages/*/package.json` — JS/TS monorepo
- `apps/*/` with a build config — application packages
- `services/*/Dockerfile` — microservices
- `frontend/` + `backend/` — client/server split
- `src/client/` + `src/server/` — co-located client/server
- `docker-compose.yml` with multiple `build:` entries — multi-service
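
A sketch of the conventional-layout checks (TypeScript; `detectConventionalModules` is a hypothetical name and the heuristics are deliberately partial):

```typescript
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

// Partial sketch: covers two of the conventional layouts listed above.
function detectConventionalModules(root: string): { path: string; reason: string }[] {
  const modules: { path: string; reason: string }[] = [];

  // packages/*/package.json — JS/TS monorepo
  const packagesDir = join(root, "packages");
  if (existsSync(packagesDir)) {
    for (const entry of readdirSync(packagesDir)) {
      if (existsSync(join(packagesDir, entry, "package.json"))) {
        modules.push({ path: join("packages", entry), reason: "packages/*/package.json" });
      }
    }
  }

  // frontend/ + backend/ or src/client/ + src/server/ — client/server split
  for (const pair of [["frontend", "backend"], ["src/client", "src/server"]]) {
    if (pair.every((p) => existsSync(join(root, p)))) {
      pair.forEach((p) => modules.push({ path: p, reason: "client/server split" }));
    }
  }

  return modules;
}
```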

### 1c. Fallback

If neither workspace configs nor conventional patterns are found,
treat the entire repository as a single module.

### 1d. Present and Confirm

Present the discovered modules to the user:

```
Detected modules:
  1. {module-name} ({path}) — detected via {method}
  2. ...

Does this look correct? Should I add, remove, or rename any modules?
```

Wait for user confirmation before proceeding.

---

## Step 2 — Auto-Scan per Module

For each confirmed module, run automated detection for every dimension.
Use the grep patterns defined in `.claude/skills/shared/risk-model.md`.

### 2a. Language Detection

Count files by extension within the module directory. Use the extension-to-score
mapping from the shared risk model. The module's language score is the **maximum**
score across all detected languages. Weigh the evidence by file count: if only 1-2
files of a high-score language exist among hundreds of low-score files, note this.

Report:
```
Language scan for {module}:
  .ts/.tsx: 42 files (score 1)
  .js/.jsx: 8 files (score 2)
  → Auto-detected score: 2 (Dynamically typed) — JS files present
```
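
A minimal sketch of the counting and scoring (TypeScript; the `EXTENSION_SCORES` table here is a placeholder — the real mapping lives in the shared risk model):

```typescript
import { readdirSync } from "node:fs";
import { extname, join } from "node:path";

// Placeholder mapping — read the real extension-to-score table from the shared model.
const EXTENSION_SCORES: Record<string, number> = { ".ts": 1, ".tsx": 1, ".js": 2, ".jsx": 2 };

function languageScore(moduleDir: string): { counts: Record<string, number>; score: number } {
  const counts: Record<string, number> = {};
  const walk = (dir: string): void => {
    for (const entry of readdirSync(dir, { withFileTypes: true })) {
      if (entry.name === "node_modules" || entry.name.startsWith(".")) continue;
      const full = join(dir, entry.name);
      if (entry.isDirectory()) {
        walk(full);
      } else {
        const ext = extname(entry.name);
        counts[ext] = (counts[ext] ?? 0) + 1;
      }
    }
  };
  walk(moduleDir);

  // Maximum score over detected extensions; per-extension counts are kept so the
  // report can flag cases where the high-score language is only a handful of files.
  const score = Math.max(
    0,
    ...Object.keys(counts)
      .filter((ext) => ext in EXTENSION_SCORES)
      .map((ext) => EXTENSION_SCORES[ext]),
  );
  return { counts, score };
}
```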

### 2b. Code Type Detection

Search for patterns in the shared risk model, starting from the highest score (4)
and working down. Stop at the first match level that has significant hits.

Report each match with the file and line:
```
Code Type scan for {module}:
  Auth/Security patterns (score 4):
    src/auth/login.ts:15 — matches "authenticate"
    src/middleware/csrf.ts:3 — matches "csrf"
  API/DB patterns (score 3):
    src/routes/users.ts:8 — matches "app.get("
  → Auto-detected score: 4 (Auth/Security/Crypto)
```
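
A sketch of the highest-score-first scan (TypeScript; the regexes below are placeholders — the real pattern lists must be read from the shared risk model at runtime):

```typescript
type Match = { file: string; line: number; pattern: string };

// Placeholder pattern levels, highest score first — substitute the shared-model patterns.
const CODE_TYPE_LEVELS: { score: number; patterns: RegExp[] }[] = [
  { score: 4, patterns: [/authenticate/i, /csrf/i, /bcrypt/i] },
  { score: 3, patterns: [/app\.(get|post|put|delete)\(/, /SELECT\s+.+\s+FROM/i] },
  { score: 2, patterns: [/validate/i, /parse/i] },
];

function scanCodeType(files: { path: string; text: string }[]): { score: number; matches: Match[] } {
  for (const level of CODE_TYPE_LEVELS) {
    const matches: Match[] = [];
    for (const file of files) {
      file.text.split("\n").forEach((lineText, i) => {
        for (const pattern of level.patterns) {
          if (pattern.test(lineText)) {
            matches.push({ file: file.path, line: i + 1, pattern: String(pattern) });
          }
        }
      });
    }
    // Stop at the first (highest-score) level that has hits.
    if (matches.length > 0) return { score: level.score, matches };
  }
  return { score: 1, matches: [] }; // assumed default when nothing matches — confirm against the shared model
}
```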

### 2c. Data Sensitivity Detection

Search for data sensitivity patterns from the shared risk model, starting from
score 4 (PHI/PCI) down to score 2 (General PII).

Report matches with evidence:
```
Data Sensitivity scan for {module}:
  PHI/PCI patterns (score 4): no matches
  Sensitive PII patterns (score 3): no matches
  General PII patterns (score 2):
    src/models/user.ts:5 — matches "email"
    src/models/user.ts:7 — matches "phone_number"
  → Auto-detected score: 2 (General PII)
```

### 2d. Deployment Hints

Search for deployment/regulatory patterns from the shared risk model.
Also check for:
- `Dockerfile`, `docker-compose.yml` — containerized deployment
- `.github/workflows/`, `Jenkinsfile`, `.gitlab-ci.yml` — CI/CD presence
- `kubernetes/`, `k8s/`, `helm/` — orchestrated deployment

This dimension has low auto-detection confidence (0.2-0.5).
Note findings but flag that user confirmation is required.

### 2e. Blast Radius Hints

Blast radius is nearly impossible to auto-detect. Note any hints:
- Number of downstream dependents (if library)
- Presence of health/safety keywords
- Scale indicators (load balancer configs, horizontal scaling)

Flag that user confirmation is **required**.

---

## Step 3 — Interactive Confirmation

Process modules **one at a time**. For each module, present the auto-scan
results and ask the user to confirm or adjust.

### 3a. High-Confidence Dimensions

For dimensions with high confidence (language, codeType when score >= 3):

```
Language: Auto-detected score 2 (Dynamically typed)
  Evidence: 42 .ts files (score 1), 8 .js files (score 2)
  → Accept score 2? [Y/n]
```

### 3b. Low-Confidence Dimensions (ALWAYS ask)

For `deployment` and `blastRadius` (and any dimension with low confidence),
present a multiple-choice question:

```
Deployment context for {module}:
  Auto-detected hints: Dockerfile found, GitHub Actions CI
  No regulatory keywords detected.

  What best describes the deployment context?
  [0] Personal / Prototype — local tools, learning projects
  [1] Internal tool — company-internal dashboards
  [2] Public-facing app — SaaS, public APIs, mobile apps
  [3] Regulated system — HIPAA, PCI-DSS, SOC2, GDPR-critical
  [4] Safety-critical — avionics, medical devices, automotive

  Suggested: [2] (containerized with CI/CD, no regulatory signals)
```

```
Blast Radius for {module}:
  What is the worst realistic impact of a bug in this module?
  [0] Cosmetic / Tech debt — UI glitches, code smell
  [1] Performance / DoS — slowdowns, service unavailability
  [2] Data loss (recoverable) — lost data restorable from backups
  [3] Systemic breach — unrecoverable data exposure
  [4] Safety (life & limb) — physical harm, loss of life
```

### 3c. Data Sensitivity Confirmation

If data sensitivity was auto-detected at score >= 2, confirm with the user:

```
Data Sensitivity: Auto-detected score 2 (General PII)
  Evidence: email, phone_number fields in user model
  Could there be higher-sensitivity data not detected by the pattern scan?
  [Keep 2] / [Upgrade to 3: Sensitive PII] / [Upgrade to 4: PHI/PCI]
```

---

## Step 4 — Tier Calculation and Output

### 4a. Calculate Tier

```
Tier = max(codeType, language, deployment, data, blastRadius)
Mapping: max <= 1 → Tier 1, max <= 2 → Tier 2, max <= 3 → Tier 3, max = 4 → Tier 4
```
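
For illustration, the same calculation as a small TypeScript sketch (field names mirror the formula above; ties go to the first dimension listed):

```typescript
type Scores = {
  codeType: number;
  language: number;
  deployment: number;
  data: number;
  blastRadius: number;
};

// Tier = max of the five dimension scores, floored at Tier 1.
function calculateTier(scores: Scores): { tier: number; determinedBy: string } {
  const entries = Object.entries(scores) as [keyof Scores, number][];
  const [determinedBy, max] = entries.reduce((best, next) => (next[1] > best[1] ? next : best));
  return { tier: Math.min(Math.max(max, 1), 4), determinedBy };
}

// calculateTier({ codeType: 3, language: 2, deployment: 2, data: 2, blastRadius: 1 })
// → { tier: 3, determinedBy: "codeType" }
```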

Present the result:
```
{module} Risk Assessment:
  Code Type: 3 (API / DB Queries)
  Language: 2 (Dynamically typed)
  Deployment: 2 (Public-facing app)
  Data Sensitivity: 2 (General PII)
  Blast Radius: 1 (Performance / DoS)

  → Tier 3 — determined by Code Type = 3
```

### 4b. Scan Existing Mitigations

Before writing, scan for mitigation signals as listed in the shared risk model.
Check for config files and CI workflow steps that indicate existing mitigations.

### 4c. Check for Existing Assessment

Before writing to CLAUDE.md:
- Check if CLAUDE.md already contains a `## Risk Radar Assessment` section
- If it does, ask the user: "CLAUDE.md already contains a risk assessment. Overwrite it?"
- If the user declines, skip writing

### 4d. Write to CLAUDE.md

Use the exact output format from `.claude/skills/shared/risk-model.md` under
"CLAUDE.md Output Format". Write:

1. The assessment header with timestamp
2. Per-module dimension table with scores, levels, and evidence
3. Tier result with determining dimension
4. Per-module mitigation status table

Insert or replace the `## Risk Radar Assessment` section in CLAUDE.md.
Preserve all other existing content in CLAUDE.md.
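
A minimal sketch of this preserve-and-replace step (TypeScript; `writeAssessment` is a hypothetical name, and the overwrite confirmation from 4c is assumed to have already happened):

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// The existing assessment is taken to run from "## Risk Radar Assessment" to the next
// top-level "## " heading (or the end of the file); everything else is preserved.
function writeAssessment(claudeMdPath: string, assessmentSection: string): void {
  const existing = existsSync(claudeMdPath) ? readFileSync(claudeMdPath, "utf8") : "";
  const start = existing.indexOf("## Risk Radar Assessment");

  let updated: string;
  if (start === -1) {
    // No previous assessment: append as a new section.
    updated = (existing.trimEnd() + "\n\n" + assessmentSection.trimEnd() + "\n").trimStart();
  } else {
    const next = existing.indexOf("\n## ", start);
    const end = next === -1 ? existing.length : next + 1;
    updated = existing.slice(0, start) + assessmentSection.trimEnd() + "\n\n" + existing.slice(end);
  }

  writeFileSync(claudeMdPath, updated);
}
```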

---

## Important Guidelines

- **One module at a time**: Complete the full assess-confirm cycle for each module
  before moving to the next. Do not overwhelm the user with all modules at once.
- **Show evidence**: Always explain WHY a score was auto-detected. Include file
  paths and matched patterns.
- **Respect the shared model**: Read patterns from `.claude/skills/shared/risk-model.md`
  at runtime. Do not hardcode pattern lists.
- **Tiers are cumulative**: When listing mitigations, include all tiers up to and
  including the assessed tier.
- **Be conservative**: When uncertain, suggest the higher (more cautious) score and
  let the user downgrade if appropriate.
- **Timestamp**: Use the current date in YYYY-MM-DD format in the output header.