AI compliance baseline and governance structure #293
📝 Walkthrough

Adds repository AI governance: introduces normative changes.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 3 passed
🧹 Nitpick comments (5)
AI_COMPLIANCE.md (1)
48-50: Optional: Vary sentence structure for readability.

Three consecutive sentences begin with "MUST" (lines 48-50). While technically correct for normative requirements, slightly varying the structure could improve readability without weakening the requirements.
✍️ Optional rewording to reduce repetition
```diff
 ### 4.2 AI Agents
-- MUST operate as assistants, never as autonomous approvers.
-- MUST treat generated code and generated commands as untrusted by default.
-- MUST stop and escalate when requested actions exceed authority or risk gates.
+- MUST operate as assistants, never as autonomous approvers.
+- Generated code and commands MUST be treated as untrusted by default.
+- When requested actions exceed authority or risk gates, MUST stop and escalate.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI_COMPLIANCE.md` around lines 48 - 50, Reword the three consecutive normative sentences that all start with "MUST"—specifically the lines containing "MUST operate as assistants, never as autonomous approvers.", "MUST treat generated code and generated commands as untrusted by default.", and "MUST stop and escalate when requested actions exceed authority or risk gates."—to vary sentence openings for readability while preserving their normative force; for example, convert one or two into constructions like "Assistants must...", "Treat generated code and commands as untrusted by default.", or "When requested actions exceed authority or risk gates, stop and escalate." Ensure the requirement keywords remain clear and the obligations are unchanged.

AGENTS.md (1)
16-20: Consider adding explicit boot sequence guidance.

While the precedence model is clearly documented, AI_COMPLIANCE.md section 12 specifies a required boot sequence (read AGENTS.md first, then AI_COMPLIANCE.md). Adding this to the governance section could help agents understand the intended reading order.
📖 Optional enhancement to clarify boot sequence
```diff
 Source of truth and precedence:
+- At session start, read this file (`AGENTS.md`) first, then `AI_COMPLIANCE.md`.
 - `AI_COMPLIANCE.md` is the normative policy baseline for AI-assisted work.
 - `AGENTS.md` provides operational context and repository-specific workflows.
 - When guidance conflicts, follow the precedence model defined in `AI_COMPLIANCE.md`.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 16 - 20, Add an explicit boot sequence note to the governance/precedence section in AGENTS.md stating the required reading order: read AGENTS.md first for operational context, then AI_COMPLIANCE.md for the normative policy baseline (so the precedence model still applies), and reference AI_COMPLIANCE.md section 12 as the source of this boot-sequence requirement; update the paragraph around the existing "Source of truth and precedence" text to include this single-line instruction so agents know to load AGENTS.md before AI_COMPLIANCE.md.

plans/ai-governance-structure-optimization.md (2)
1-39: Enhance plan structure to include required traceability elements.

Similar to the rollout plan, this plan is missing traceability elements defined in AI_COMPLIANCE.md section 9. The plan should include risk class, tooling used, validation outcomes, and follow-up owner.
📋 Suggested enhancement to add missing traceability elements
```diff
 # AI Governance Structure Optimization
 ## Summary
 Review and tighten TKO's AI governance structure so decisions are easier to apply under delivery pressure and evidence is consistent across contributors and agents.
+## Risk Class
+`HIGH` — Changes to governance documents themselves are classified as HIGH risk per AI_COMPLIANCE.md section 6.3 and require explicit maintainer approval.
+
 ## Goals
 - Restructure `AI_COMPLIANCE.md` into clearer policy sections with explicit decision rights and approval gates.
 - Add a practical risk-to-evidence matrix tied to current TKO workflows.
 - Add explicit exception and incident handling rules with ownership and expiry.
 - Align `AGENTS.md` with the compliance baseline to reduce duplicated or drifting guidance.
 ## Non-Goals
 - No automation or CI enforcement changes in this iteration.
 - No changes to release mechanics, build tooling behavior, or package runtime.
 - No policy expansion outside repository governance and engineering controls.
 ## Implementation
 1. Normalize policy language (`MUST` / `SHOULD` / `MAY`) and clarify precedence.
 2. Define role accountability and approval authority for `HIGH` risk changes.
 3. Add risk model with explicit TKO high-risk path mapping.
 4. Add control gates and evidence requirements per risk class.
 5. Add exception workflow (owner, expiry, compensating controls).
 6. Add incident runbook for leakage/prompt-injection/supply-chain events.
 7. Update `AGENTS.md` governance section to point to the stricter baseline.
 ## Verification
 - Manual consistency check between `AI_COMPLIANCE.md` and `AGENTS.md`.
 - Confirm references map to real TKO paths and commands.
 - Confirm plan/evidence expectations remain compatible with existing `plans/` workflow.
 ## Deliverables
 - Updated `AI_COMPLIANCE.md`
 - Updated `AGENTS.md` governance section
 - This plan entry in `plans/`
+
+## AI Evidence
+- Risk class: `HIGH` (governance document changes)
+- Files changed: `AI_COMPLIANCE.md` (new), `AGENTS.md` (governance section added), `plans/ai-compliance-governance-rollout.md`, `plans/ai-governance-structure-optimization.md`
+- Tools/commands: Manual authoring; cross-document consistency verification
+- Validation: Markdown validated; precedence model verified; boot sequence confirmed; risk paths mapped to real TKO directories
+- Result: Comprehensive governance baseline with clear roles, risk tiers, and operational controls
+- Follow-up owner: TKO maintainers (quarterly review cycle per AI_COMPLIANCE.md section 13)
```

Based on learnings, plans for substantial AI-assisted changes should include objective, risk class, files changed, tooling used, validation evidence, and follow-up owner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plans/ai-governance-structure-optimization.md` around lines 1 - 39, Update this plan to include the traceability fields required by AI_COMPLIANCE.md section 9: add a "Risk class" (e.g., LOW/MEDIUM/HIGH), "Files changed" listing affected docs (referencing AI_COMPLIANCE.md and AGENTS.md), "Tooling used" (tools/versions used for edits/validation), "Validation outcomes" (summary of checks performed and their results), and a "Follow-up owner" with a clear expiry/next-review date; ensure the AGENTS.md governance section reference is updated to point to the new baseline and include these traceability items in the plan header and Implementation steps (items 1–7) so reviewers can verify compliance.
22-26: Optional: Reduce repetitive sentence beginnings for better readability.

Lines 22-26 begin four consecutive sentences with "Add" or "Define". Consider varying the sentence structure slightly.
✍️ Optional rewording to reduce repetition
```diff
 ## Implementation
 1. Normalize policy language (`MUST` / `SHOULD` / `MAY`) and clarify precedence.
 2. Define role accountability and approval authority for `HIGH` risk changes.
-3. Add risk model with explicit TKO high-risk path mapping.
-4. Add control gates and evidence requirements per risk class.
-5. Add exception workflow (owner, expiry, compensating controls).
-6. Add incident runbook for leakage/prompt-injection/supply-chain events.
+3. Create risk model with explicit TKO high-risk path mapping.
+4. Establish control gates and evidence requirements per risk class.
+5. Document exception workflow (owner, expiry, compensating controls).
+6. Provide incident runbook for leakage/prompt-injection/supply-chain events.
 7. Update `AGENTS.md` governance section to point to the stricter baseline.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plans/ai-governance-structure-optimization.md` around lines 22 - 26, The list items 1–5 repeat sentence starts ("Normalize", "Define", "Add") which reduces readability; reword those bullets to vary sentence openings and flow while preserving meaning—for example, change item 2 to "Establish role accountability and approval authority for `HIGH` risk changes", item 3 to "Introduce a risk model that maps explicit TKO high‑risk paths", item 4 to "Specify control gates and required evidence per risk class", and item 5 to "Create an exception workflow (owner, expiry, compensating controls)"; apply similar small tweaks to other bullets so each begins with a different verb or phrase.

plans/ai-compliance-governance-rollout.md (1)
1-23: Enhance plan structure to include required traceability elements.

The plan is missing several elements that are recommended for substantial AI-assisted changes. According to the governance baseline being introduced in this PR (AI_COMPLIANCE.md section 9), plans should include:
- Risk class (HIGH/MEDIUM/LOW)
- Tooling/commands used
- Validation outcomes and results
- Follow-up owner
📋 Suggested enhancement to add missing traceability elements
```diff
 # AI Compliance Governance Rollout
 ## Summary
 Introduce a repository-specific AI compliance baseline for TKO and wire it into `AGENTS.md` so all agents and contributors follow consistent governance controls.
+## Risk Class
+`LOW` — Documentation-only change with no runtime, build, or CI behavior modifications.
+
 ## Goals
 - Add a `AI_COMPLIANCE.md` in project root
 - Update `AGENTS.md` with a clear mandatory governance section.
 - Keep guidance practical for current TKO workflows (`make`, Karma, changesets, release CI).
 ## Non-Goals
 - No runtime code or build-system behavior changes.
 - No CI policy automation in this step.
 ## Implementation
 1. Define compliance baseline with scope, precedence, risk tiers, controls, and incident handling.
 2. Add mandatory boot/read order and operational controls to `AGENTS.md`.
 3. Ensure high-risk TKO areas are explicitly mapped to approval requirements.
 ## Verification
 - Confirm both files are valid markdown and present at repository root.
 - Check AGENTS content references `AI_COMPLIANCE.md` and governance state files.
 - Validate guidance matches existing TKO constraints (zero runtime deps, release workflow, docs verification flow).
+
+## AI Evidence
+- Risk class: `LOW`
+- Files changed: `AI_COMPLIANCE.md` (new), `AGENTS.md` (updated), `plans/ai-compliance-governance-rollout.md`, `plans/ai-governance-structure-optimization.md`
+- Tools/commands: Manual authoring and review; markdown validation
+- Validation: Markdown syntax verified; cross-references between documents checked for consistency
+- Result: Governance baseline established with clear precedence, boot sequence, and operational controls
+- Follow-up owner: TKO maintainers (quarterly review per AI_COMPLIANCE.md section 13)
```

Based on learnings, plans for substantial AI-assisted changes should include objective, risk class, files changed, tooling used, validation evidence, and follow-up owner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plans/ai-compliance-governance-rollout.md` around lines 1 - 23, The plan is missing required traceability elements (risk class, tooling, validation, owner) per the new governance baseline §9; update the AI Compliance Rollout plan to add a "Risk Class" (HIGH/MEDIUM/LOW), a "Tooling/Commands" list (e.g., make, Karma, changesets, release CI commands), a "Validation & Results" section that records concrete verification outcomes (e.g., markdown presence checks, AGENTS.md references, CI/documentation checks) and their pass/fail details, and a "Follow-up Owner" with an assignee and cadence; ensure these are included alongside existing sections (Summary, Goals, Implementation, Verification) and reference AI_COMPLIANCE.md and AGENTS.md where appropriate.
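As an aside, the Verification bullets in the suggested plan (both files present at the repository root, and AGENTS.md referencing `AI_COMPLIANCE.md`) are mechanical enough to script. A minimal sketch, assuming the files sit at the repository root; `check_governance` is a hypothetical helper, not part of this PR:

```python
from pathlib import Path

def check_governance(root: Path) -> list[str]:
    """Return problems found with the governance docs under root.

    Hypothetical helper mirroring the Verification bullets: the two
    governance files exist, and AGENTS.md references AI_COMPLIANCE.md.
    """
    problems = []
    for name in ("AI_COMPLIANCE.md", "AGENTS.md"):
        if not (root / name).is_file():
            problems.append(f"missing {name}")
    agents = root / "AGENTS.md"
    if agents.is_file() and "AI_COMPLIANCE.md" not in agents.read_text():
        problems.append("AGENTS.md does not reference AI_COMPLIANCE.md")
    return problems
```

A check like this could back the "Validation & Results" section the review asks for, turning a manual pass/fail note into a repeatable command.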
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@AGENTS.md`:
- Around line 16-20: Add an explicit boot sequence note to the
governance/precedence section in AGENTS.md stating the required reading order:
read AGENTS.md first for operational context, then AI_COMPLIANCE.md for the
normative policy baseline (so the precedence model still applies), and reference
AI_COMPLIANCE.md section 12 as the source of this boot-sequence requirement;
update the paragraph around the existing "Source of truth and precedence" text
to include this single-line instruction so agents know to load AGENTS.md before
AI_COMPLIANCE.md.
In `@AI_COMPLIANCE.md`:
- Around line 48-50: Reword the three consecutive normative sentences that all
start with "MUST"—specifically the lines containing "MUST operate as assistants,
never as autonomous approvers.", "MUST treat generated code and generated
commands as untrusted by default.", and "MUST stop and escalate when requested
actions exceed authority or risk gates."—to vary sentence openings for
readability while preserving their normative force; for example, convert one or
two into constructions like "Assistants must...", "Treat generated code and
commands as untrusted by default.", or "When requested actions exceed authority
or risk gates, stop and escalate." Ensure the requirement keywords remain clear
and the obligations are unchanged.
In `@plans/ai-compliance-governance-rollout.md`:
- Around line 1-23: The plan is missing required traceability elements (risk
class, tooling, validation, owner) per the new governance baseline §9; update
the AI Compliance Rollout plan to add a "Risk Class" (HIGH/MEDIUM/LOW), a
"Tooling/Commands" list (e.g., make, Karma, changesets, release CI commands), a
"Validation & Results" section that records concrete verification outcomes
(e.g., markdown presence checks, AGENTS.md references, CI/documentation checks)
and their pass/fail details, and a "Follow-up Owner" with an assignee and
cadence; ensure these are included alongside existing sections (Summary, Goals,
Implementation, Verification) and reference AI_COMPLIANCE.md and AGENTS.md where
appropriate.
In `@plans/ai-governance-structure-optimization.md`:
- Around line 1-39: Update this plan to include the traceability fields required
by AI_COMPLIANCE.md section 9: add a "Risk class" (e.g., LOW/MEDIUM/HIGH),
"Files changed" listing affected docs (referencing AI_COMPLIANCE.md and
AGENTS.md), "Tooling used" (tools/versions used for edits/validation),
"Validation outcomes" (summary of checks performed and their results), and a
"Follow-up owner" with a clear expiry/next-review date; ensure the AGENTS.md
governance section reference is updated to point to the new baseline and include
these traceability items in the plan header and Implementation steps (items 1–7)
so reviewers can verify compliance.
- Around line 22-26: The list items 1–5 repeat sentence starts ("Normalize",
"Define", "Add") which reduces readability; reword those bullets to vary
sentence openings and flow while preserving meaning—for example, change item 2
to "Establish role accountability and approval authority for `HIGH` risk
changes", item 3 to "Introduce a risk model that maps explicit TKO high‑risk
paths", item 4 to "Specify control gates and required evidence per risk class",
and item 5 to "Create an exception workflow (owner, expiry, compensating
controls)"; apply similar small tweaks to other bullets so each begins with a
different verb or phrase.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 70e760a7-75c5-4fab-ba77-2982d84e290d
📒 Files selected for processing (4)
- AGENTS.md
- AI_COMPLIANCE.md
- plans/ai-compliance-governance-rollout.md
- plans/ai-governance-structure-optimization.md
♻️ Duplicate comments (1)
AI_COMPLIANCE.md (1)
51-52: ⚠️ Potential issue | 🔴 Critical

Incomplete sentence and missing line break.
Line 51 ends with an incomplete directive: "MUST consider code quality, human readability and interface compatibility " without finishing the thought or adding proper punctuation. Additionally, section 4.3 starts immediately at line 52 without a blank line separator.
🔧 Suggested fix
Complete the sentence and add proper spacing:
```diff
-- MUST consider code quality, human readability and interface compatibility
-### 4.3 Security and Quality Owners
+- MUST consider code quality, human readability, and interface compatibility when generating changes.
+
+### 4.3 Security and Quality Owners
```

Alternatively, if this bullet point should be removed (as suggested in past review comments), then:

```diff
-- MUST consider code quality, human readability and interface compatibility
+
 ### 4.3 Security and Quality Owners
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI_COMPLIANCE.md` around lines 51 - 52, Finish the incomplete directive that starts with "MUST consider code quality, human readability and interface compatibility" by completing the sentence with clear guidance (e.g., "MUST consider code quality, human readability, and interface compatibility when making changes to APIs and libraries.") and add a blank line before the "### 4.3 Security and Quality Owners" heading to separate sections; alternatively, if this bullet is intended to be removed per prior reviews, delete the entire line containing that directive instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@AI_COMPLIANCE.md`:
- Around line 51-52: Finish the incomplete directive that starts with "MUST
consider code quality, human readability and interface compatibility" by
completing the sentence with clear guidance (e.g., "MUST consider code quality,
human readability, and interface compatibility when making changes to APIs and
libraries.") and add a blank line before the "### 4.3 Security and Quality
Owners" heading to separate sections; alternatively, if this bullet is intended
to be removed per prior reviews, delete the entire line containing that
directive instead.
Actionable comments posted: 2
♻️ Duplicate comments (1)
AI_COMPLIANCE.md (1)
51-51: ⚠️ Potential issue | 🔴 Critical

Complete the normative statement.
Line 51 is grammatically incomplete: "MUST consider code quality, human readability and interface compatibility" lacks a completing action. For a MUST statement to be enforceable in a normative baseline, it needs to specify what AI agents must do with these considerations (e.g., "MUST consider code quality, human readability, and interface compatibility when generating or proposing changes" or "MUST prioritize...").
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI_COMPLIANCE.md` at line 51, The normative statement "MUST consider code quality, human readability and interface compatibility" is incomplete; edit that sentence so it specifies the required action (e.g., "MUST consider code quality, human readability, and interface compatibility when generating or proposing changes to code or interfaces") to make the requirement enforceable and clear.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AI_COMPLIANCE.md`:
- Line 71: Fix the grammar in the guideline sentence "AI tooling SHOULD verified
before use" by inserting the missing verb "be" so it reads "AI tooling SHOULD be
verified before use"; locate the exact sentence in AI_COMPLIANCE.md (the line
containing "AI tooling SHOULD verified before use") and update it accordingly,
preserving casing and the "SHOULD" RFC-style keyword.
- Around line 202-203: Replace the ambiguous instruction "If required governance
files are missing, initialize minimal valid state files before proceeding with
implementation work." with a clear escalation rule: instruct agents to stop work
and escalate to maintainers when required governance files are missing, and
explicitly reference Section 4.2 ("MUST operate as assistants, never as
autonomous approvers") to forbid agents from generating or auto-initializing
normative governance documents; update the sentence to read something like "If
required governance files are missing, stop work and escalate to
maintainers—agents must not create or modify governance documents autonomously
(see Section 4.2)."
---
Duplicate comments:
In `@AI_COMPLIANCE.md`:
- Line 51: The normative statement "MUST consider code quality, human
readability and interface compatibility" is incomplete; edit that sentence so it
specifies the required action (e.g., "MUST consider code quality, human
readability, and interface compatibility when generating or proposing changes to
code or interfaces") to make the requirement enforceable and clear.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
🧹 Nitpick comments (1)
AI_COMPLIANCE.md (1)
30-36: Clarify precedence wording to avoid governance ambiguity.

Current order puts `AGENTS.md` above `AI_COMPLIANCE.md`, while this file is described as the normative baseline. Consider explicitly stating that compliance/security controls in `AI_COMPLIANCE.md` override `AGENTS.md` in conflicts (or invert those two entries) to prevent interpretation drift.

Based on learnings: Follow AI governance documents: `AI_COMPLIANCE.md` is the normative policy baseline, `AGENTS.md` provides operational context, and conflicts are resolved per the precedence model in `AI_COMPLIANCE.md`.

Suggested wording tweak
```diff
-3. `AGENTS.md`
-4. This file (`AI_COMPLIANCE.md`)
+3. This file (`AI_COMPLIANCE.md`) for compliance/security controls
+4. `AGENTS.md` for operational workflow context
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI_COMPLIANCE.md` around lines 30 - 36, Update the precedence wording so there is no ambiguity: explicitly state in AI_COMPLIANCE.md that its compliance/security controls are the normative baseline and override any conflicting guidance in AGENTS.md (or invert the list order so AI_COMPLIANCE.md appears before AGENTS.md); reference the files by name (AI_COMPLIANCE.md, AGENTS.md) and add a single authoritative sentence such as “In case of conflict, the rules in AI_COMPLIANCE.md take precedence over AGENTS.md” so governance resolution is unambiguous.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@AI_COMPLIANCE.md`:
- Around line 30-36: Update the precedence wording so there is no ambiguity:
explicitly state in AI_COMPLIANCE.md that its compliance/security controls are
the normative baseline and override any conflicting guidance in AGENTS.md (or
invert the list order so AI_COMPLIANCE.md appears before AGENTS.md); reference
the files by name (AI_COMPLIANCE.md, AGENTS.md) and add a single authoritative
sentence such as “In case of conflict, the rules in AI_COMPLIANCE.md take
precedence over AGENTS.md” so governance resolution is unambiguous.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f8672551-9067-4011-8e80-8b232be94924
📒 Files selected for processing (4)
- AGENTS.md
- AI_COMPLIANCE.md
- plans/ai-compliance-governance-rollout.md
- plans/ai-governance-structure-optimization.md
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Line 125: Update the documentation line that currently reads "types/ #
Typescript typings" to correct the language name: change "Typescript" to
"TypeScript" so the comment becomes "types/ # TypeScript typings";
locate the line containing the "types/" entry or the phrase "Typescript typings"
in the AGENTS.md content and apply this single-word capitalization fix.
- Around line 17-21: The current paragraph in AGENTS.md incorrectly implies
AI_COMPLIANCE.md is the top precedence; update the wording to match the
precedence model in AI_COMPLIANCE.md Section 3 by stating that contributors
should follow the precedence model defined in AI_COMPLIANCE.md (which may place
AGENTS.md above other docs) when guidance conflicts, and remove or rephrase the
line that asserts AI_COMPLIANCE.md is the normative top source so the two
documents are consistent (reference AGENTS.md and AI_COMPLIANCE.md in the
updated sentence).
In `@AI_GLOSSARY.md`:
- Line 37: The TOC entry "Governance & Process Terms" points to a non-existent
anchor `#governance--process-terms`; fix it by either adding a matching section
heading exactly "Governance & Process Terms" (so the generated anchor becomes
governance--process-terms) or by updating the TOC link to match the actual
heading used in the document; locate the TOC line "20. [Governance & Process
Terms](`#governance--process-terms`)" and reconcile it with the corresponding
section title (or create that section) so the anchor resolves.
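The `governance--process-terms` anchor comes from GitHub's slug rule: lowercase the heading, drop punctuation, and turn each space into a hyphen, so the stripped `&` leaves a double hyphen behind. A rough sketch of that rule (an approximation for illustration, not GitHub's exact implementation):

```python
import re

def github_anchor(heading: str) -> str:
    # Approximate GitHub's heading-to-anchor rule: lowercase,
    # strip punctuation, then replace spaces with hyphens.
    slug = heading.strip().lower()
    slug = re.sub(r"[^\w\- ]", "", slug)  # '&' and other punctuation vanish
    return slug.replace(" ", "-")

print(github_anchor("Governance & Process Terms"))  # governance--process-terms
```

A helper like this makes it easy to see why the TOC link and the heading must agree character for character, including the doubled hyphen.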
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bdb85391-bdb7-4cf2-9fb6-3e63adff7703
📒 Files selected for processing (9)
- AGENTS.md
- AI_COMPLIANCE.md
- AI_GLOSSARY.md
- plans/agent-verified-behaviors.md
- plans/ai-compliance-governance-rollout.md
- plans/ai-governance-structure-optimization.md
- plans/jsx-playground.md
- plans/trusted-publishing.md
- plans/tsx-doc-examples-rollout.md
✅ Files skipped from review due to trivial changes (4)
- plans/trusted-publishing.md
- plans/jsx-playground.md
- plans/tsx-doc-examples-rollout.md
- plans/agent-verified-behaviors.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
AGENTS.md (1)
25-26: Clarify accepted risk-class values in AGENTS.md for consistent plans.

`risk class` is required, but the allowed values are not stated here. Adding `HIGH / MEDIUM / LOW` inline will reduce reviewer interpretation variance.

✏️ Suggested wording
```diff
-- Add or update a plan in `plans/` with objective, risk class, planned changes and steps,
+- Add or update a plan in `plans/` with objective, risk class (`HIGH` / `MEDIUM` / `LOW`), planned changes and steps,
   tooling used, validation evidence, and any follow-up owner.
```

Also applies to: 182-183
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 25 - 26, Update the AGENTS.md line that instructs adding "risk class" to plans so it explicitly lists the accepted values (e.g., "risk class: HIGH / MEDIUM / LOW") to ensure consistency; modify the sentence mentioning "risk class" (the instruction that currently reads about objective, risk class, planned changes and steps) and the duplicate occurrence around lines 182-183 to include the allowed enums "HIGH / MEDIUM / LOW" inline.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AGENTS.md`:
- Around line 181-183: Replace the ambiguous "should" in the sentence
"Significant changes should have a plan file in `plans/`" with explicit
mandatory language for AI-assisted work and add a concrete requirement: for
substantial AI-assisted changes, require adding/updating a plan in `plans/` that
documents objective, risk class, planned changes, step-by-step implementation
and verification steps, tooling used, validation evidence, and a named follow-up
owner; ensure this new wording aligns with the existing section that treats
substantial AI-assisted changes as mandatory so there is no ambiguity.
---
Nitpick comments:
In `@AGENTS.md`:
- Around line 25-26: Update the AGENTS.md line that instructs adding "risk
class" to plans so it explicitly lists the accepted values (e.g., "risk class:
HIGH / MEDIUM / LOW") to ensure consistency; modify the sentence mentioning
"risk class" (the instruction that currently reads about objective, risk class,
planned changes and steps) and the duplicate occurrence around lines 182-183 to
include the allowed enums "HIGH / MEDIUM / LOW" inline.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
```diff
 Significant changes should have a plan file in `plans/` before implementation
-begins. Plans document the context, approach, and verification steps. Review
+begins. Plans document the context, approach, risk class, and verification steps. Review
 existing plans in that directory for format examples.
```
🛠️ Refactor suggestion | 🟠 Major
Strengthen plan requirement language to avoid policy ambiguity.
Line 181 says significant changes "should" have a plan, while Lines 23-26 frame substantial AI-assisted changes as mandatory. Consider making this section explicitly mandatory for the AI-assisted scope to avoid interpretation drift.
✏️ Suggested wording
```diff
-Significant changes should have a plan file in `plans/` before implementation
-begins. Plans document the context, approach, risk class, and verification steps. Review
+For substantial AI-assisted changes, add or update a plan file in `plans/`
+before implementation begins. Plans document the context, approach, risk class,
+and verification steps. Review
 existing plans in that directory for format examples.
```

Based on learnings: For substantial AI-assisted changes, add or update a plan in plans/ documenting objective, risk class, planned changes, steps, tooling used, validation evidence, and follow-up owner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AGENTS.md` around lines 181 - 183, Replace the ambiguous "should" in the
sentence "Significant changes should have a plan file in `plans/`" with explicit
mandatory language for AI-assisted work and add a concrete requirement: for
substantial AI-assisted changes, require adding/updating a plan in `plans/` that
documents objective, risk class, planned changes, step-by-step implementation
and verification steps, tooling used, validation evidence, and a named follow-up
owner; ensure this new wording aligns with the existing section that treats
substantial AI-assisted changes as mandatory so there is no ambiguity.
🧹 Nitpick comments (1)
skills/plan-creation/SKILL.md (1)
38-41: Use consistent risk adjective style in example bullets.

These lines use `HIGH risk` / `LOW risk`, while other sections use `HIGH-risk` / `LOW-risk`. Standardizing improves policy readability and searchability.

✏️ Suggested edit
```diff
-- `plans/trusted-publishing.md` — HIGH risk release and publishing changes
-- `plans/ai-governance-structure-optimization.md` — HIGH risk governance and approval model changes
+- `plans/trusted-publishing.md` — HIGH-risk release and publishing changes
+- `plans/ai-governance-structure-optimization.md` — HIGH-risk governance and approval model changes
 - `plans/agent-verified-behaviors.md` — LOW risk generated docs/reference work
-- `plans/tsx-doc-examples-rollout.md` — LOW risk docs convention rollout
+- `plans/tsx-doc-examples-rollout.md` — LOW-risk docs convention rollout
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@skills/plan-creation/SKILL.md` around lines 38 - 41, Update the risk adjective styling in the example bullets so they match the rest of the document: change occurrences of "HIGH risk" and "LOW risk" in the bullets referencing plans/trusted-publishing.md, plans/ai-governance-structure-optimization.md, plans/agent-verified-behaviors.md, and plans/tsx-doc-examples-rollout.md to "HIGH-risk" and "LOW-risk" respectively to ensure consistent hyphenation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@skills/plan-creation/SKILL.md`:
- Around line 38-41: Update the risk adjective styling in the example bullets so
they match the rest of the document: change occurrences of "HIGH risk" and "LOW
risk" in the bullets referencing plans/trusted-publishing.md,
plans/ai-governance-structure-optimization.md,
plans/agent-verified-behaviors.md, and plans/tsx-doc-examples-rollout.md to
"HIGH-risk" and "LOW-risk" respectively to ensure consistent hyphenation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 88dd65d1-fca5-4a55-947f-48768c84a8df
📒 Files selected for processing (3)
- AGENTS.md
- skills/plan-creation/SKILL.md
- skills/plan-creation/assets/plan-template.md
✅ Files skipped from review due to trivial changes (1)
- skills/plan-creation/assets/plan-template.md
Remove AI_COMPLIANCE.md, AI_GLOSSARY.md, skills/plan-creation/, governance-specific plans, and the governance section from AGENTS.md.

Thank you @phillipc for the effort here — the intent to bring structure to AI-assisted work is appreciated. We should have given clearer guidance earlier about what the project needs.

The issue is scope: 1790 lines of compliance policy, glossary, and approval matrices add overhead that doesn't match TKO's scale as a small open-source project. Every AI agent session was forced to read ~1400 extra lines before starting work, and the 4-layer precedence hierarchy complicated what should be a single reference file.

The glossary content is valuable — we'll extract the TKO-specific parts into agents/glossary.md for llms.txt. The useful AGENTS.md updates (package layout, build targets) were already preserved in the Vitest migration (#303).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Revert AI governance framework (PR #293)
Add AI compliance baseline and governance structure to enhance AI-assisted workflows
- `AI_COMPLIANCE.md` as a mandatory policy baseline for AI-assisted work.
- `AGENTS.md` with a clear governance section outlining compliance and operational controls.

Summary by CodeRabbit