docs: add LLM Runtime Integration section to sidebar (Phase 4)
New docs section explains the cross-cutting modifier:
- Build-time vs runtime distinction
- L0-L4 escalation ladder with risks per level
- Hard tier multiplier (L3 → Tier 3, L4 → Tier 4)
- Why the built-in mitigation catalog is insufficient from L3 on,
with links to OWASP LLM Top 10, Palo Alto SHIELD, Aikido VCAL,
Google SAIF
Placed after "mitigations" and before "references" in DE + EN.
Refs #20
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
src/i18n.js: 38 additions, 0 deletions
@@ -353,6 +353,25 @@ Dieses Framework bietet eine https://github.com/LLM-Coding/Semantic-Anchors?tab=
 *Probabilistisch* (lila) — Findet vieles, aber nicht alles. AI Code Review, Property-Based Testing, Fuzzing. Erhöht die Erkennungsrate, bietet aber keine Garantie.
 
 *Organisatorisch* (orange) — Braucht Menschen, skaliert am schlechtesten. Deshalb erst ab Tier 2/3 eingeplant, und dort gezielt auf die riskantesten Änderungen fokussiert.`,
+},
+{
+id: "llmRuntime",
+title: "LLM Runtime Integration",
+content: `Der Risk Radar bewertet primär den *geschriebenen Code*. Viele moderne Systeme nutzen LLMs aber auch *zur Laufzeit* — von einfacher Klassifikation bis zu agentic Systemen, die Code autonom ausführen. Diese Runtime-Nutzung bringt qualitativ andere Risiken mit sich als LLM-generierter Code und wird über den cross-cutting Modifier *LLM Runtime Integration* abgebildet.
+
+*Build-Time vs. Runtime* — LLM-Code ist ein Build-Time-Problem (Mitigation durch Linter, Review, SAST). LLM-Runtime ist ein Operational-Problem (Mitigation durch Sandboxing, Tool-Whitelists, Output-Filter, Prompt-Injection-Detection). Beide müssen gemeinsam betrachtet werden.
+
+*Die Eskalationsleiter:*
+
+* *L0 — Kein LLM:* Klassische Software ohne LLM zur Laufzeit.
⋯
+*Tier-Kopplung (harter Multiplier):* L3 erzwingt mindestens Tier 3, L4 mindestens Tier 4 — unabhängig von den fünf Code-Dimensionen. Ein Coding-Agent, der \`rm -rf\` ausführen könnte, ist per Definition safety-critical, selbst wenn der Blast Radius oberflächlich klein wirkt.
+
+*Ab L3 gilt: unser Mitigations-Katalog reicht nicht.* Die hier gelisteten Maßnahmen decken Build-Time-Risiken ab. Für Prompt Injection, Tool Sandboxing, Agentic Guardrails und Runtime-Monitoring verweisen wir auf spezialisierte Frameworks: https://owasp.org/www-project-top-10-for-large-language-model-applications/[OWASP LLM Top 10], https://unit42.paloaltonetworks.com/securing-vibe-coding-tools/[Palo Alto SHIELD], https://www.aikido.dev/blog/vibe-coding-security[Aikido VCAL], https://saif.google/secure-ai-framework[Google SAIF]. Diese Tools sind in ihrer Domäne reifer — der Radar ordnet nur ein und verweist weiter.`,
 },
 {
 id: "references",
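The operational mitigations the new section names for L3/L4 systems (tool whitelists, sandboxing) can be sketched minimally. All names below (`ALLOWED_TOOLS`, `guardToolCall`) are illustrative assumptions, not code from this repository:

```typescript
// Sketch of a runtime-side tool whitelist, one of the operational
// mitigations the section lists. Names are hypothetical, not from the repo.

const ALLOWED_TOOLS = new Set(["read_file", "search_docs"]); // no shell access granted

interface ToolCall {
  name: string;
  args: Record<string, string>;
}

// Reject any tool the agent was not explicitly granted: an L3/L4 agent
// should never be able to reach a destructive tool in the first place.
function guardToolCall(call: ToolCall): boolean {
  return ALLOWED_TOOLS.has(call.name);
}

console.log(guardToolCall({ name: "read_file", args: { path: "README.md" } })); // true
console.log(guardToolCall({ name: "run_shell", args: { cmd: "rm -rf /" } }));   // false
```

The point of the whitelist is that it operates at runtime, after the code is written, which is exactly the build-time vs. runtime distinction the section draws.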
@@ -754,6 +773,25 @@ This framework provides a https://github.com/LLM-Coding/Semantic-Anchors?tab=rea
 *Probabilistic* (purple) — Finds many issues but not all. AI code review, property-based testing, fuzzing. Increases detection rate but offers no guarantee.
 
 *Organizational* (orange) — Requires humans, scales worst. Therefore only introduced from Tier 2/3 onward, focused on the riskiest changes.`,
+},
+{
+id: "llmRuntime",
+title: "LLM Runtime Integration",
+content: `The Risk Radar primarily assesses the *code being written*. However, many modern systems also use LLMs *at runtime* — from simple classification to agentic systems that execute code autonomously. This runtime use carries qualitatively different risks than LLM-generated code and is captured by the cross-cutting *LLM Runtime Integration* modifier.
+
+*Build-time vs. runtime* — LLM code is a build-time problem (mitigated by linters, review, SAST). LLM runtime is an operational problem (mitigated by sandboxing, tool whitelists, output filters, prompt injection detection). Both must be considered together.
+
+*The escalation ladder:*
+
+* *L0 — No LLM:* Classical software, no LLM at runtime.
⋯
+*Tier coupling (hard multiplier):* L3 forces at least Tier 3, L4 forces at least Tier 4 — independent of the five code dimensions. A coding agent that could run \`rm -rf\` is by definition safety-critical, even if the surface-level blast radius seems small.
+
+*From L3 onward, our mitigation catalog is insufficient.* The measures listed here cover build-time risks. For prompt injection, tool sandboxing, agentic guardrails, and runtime monitoring, we defer to specialized frameworks: https://owasp.org/www-project-top-10-for-large-language-model-applications/[OWASP LLM Top 10], https://unit42.paloaltonetworks.com/securing-vibe-coding-tools/[Palo Alto SHIELD], https://www.aikido.dev/blog/vibe-coding-security[Aikido VCAL], https://saif.google/secure-ai-framework[Google SAIF]. These tools are more mature in their domain — the radar merely classifies and refers onward.`,
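The hard tier multiplier the new section describes (L3 forces at least Tier 3, L4 at least Tier 4, regardless of the five code dimensions) can be expressed as a small function. This is a sketch under assumed names (`RuntimeLevel`, `minTierFor`, `effectiveTier`); the actual radar implementation may differ:

```typescript
// Sketch of the "hard multiplier" tier coupling. Hypothetical names,
// not code from src/i18n.js or the radar itself.

type RuntimeLevel = 0 | 1 | 2 | 3 | 4; // L0 (no LLM) through L4 (autonomous agent)

// L3 imposes a floor of Tier 3, L4 a floor of Tier 4; L0–L2 impose none.
function minTierFor(level: RuntimeLevel): number {
  if (level >= 4) return 4;
  if (level === 3) return 3;
  return 0;
}

// The effective tier is the maximum of what the five code dimensions
// suggest and the floor imposed by the runtime level: a hard multiplier,
// not a weighted average.
function effectiveTier(codeTier: number, level: RuntimeLevel): number {
  return Math.max(codeTier, minTierFor(level));
}

// A coding agent (L4) lands in Tier 4 even if the code dimensions
// alone would only suggest Tier 1:
console.log(effectiveTier(1, 4)); // 4
console.log(effectiveTier(2, 0)); // 2
```

Modeling the coupling as `Math.max` rather than an additive bump matches the text: the runtime level sets a floor that the code dimensions cannot argue down.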