Supply-chain monitoring (V1):
- installer/upstreams.yaml + installer/manifest.py: declarative manifest
pinning every npm/brew/curl/mcp/pypi upstream by exact version + sha256
- installer/steps/dependencies.py + prerequisites.py: every install_*
reads the manifest; --ignore-scripts denied by default; sha256-verified
curl-pipe helper with soft-pin behavior for vendor-managed endpoints;
Homebrew tap verification before brew install
- .mcp.json + pilot/.mcp.json: every npx-launched MCP server pinned
- install.sh + launcher/build.py: bootstrap surface pinned (uv install
sha256-verified; --with rich==/certifi==/PyYAML==/cryptography==)
- scripts/check_manifest_drift.py: regex drift checker (PR + nightly CI)
- .github/workflows/supply-chain.yml: schema validation + drift gate
- .github/renovate.json: bump PRs against manifest
- release.yml + release-dev.yml: supply-chain-gate via Checks API
- docs/security/AUDIT.md: audit how-to + soft-pin SLA + Renovate gap
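The core of such a sha256-verified curl-pipe helper can be sketched as follows. This is a minimal illustration only, not the actual installer code — the function name, signature, and soft-pin warning format are all assumptions:

```python
import hashlib
import sys


def verify_pinned_digest(payload: bytes, expected_sha256: str, soft_pin: bool = False) -> bool:
    """Compare downloaded bytes against a manifest-pinned digest.

    Hard pins fail closed on any mismatch. Soft pins (vendor-managed
    endpoints that re-sign their install scripts frequently) warn and
    proceed so installs aren't blocked, while still surfacing the
    drift for audit.
    """
    actual = hashlib.sha256(payload).hexdigest()
    if actual == expected_sha256:
        return True
    if soft_pin:
        print(
            f"warning: soft-pin drift: expected {expected_sha256}, got {actual}",
            file=sys.stderr,
        )
        return True
    return False
```

In the real flow the payload would be curl'd to a temp file, checked like this, and only then piped to bash — never executed before the digest comparison.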
Credential scanning hooks (concurrent work):
- pilot/hooks/credential_scanner.py + secret_scanner.py + allow_tags.py
- pilot/rules/security-credentials.md
- Hooks registered in pilot/hooks/hooks.json
Console settings + spec/fix workflow updates:
- console: Settings panel additions, settings routes, useSettings hook
- pilot/skills/{fix,spec-plan,spec-verify}: improved review flow
- pilot/agents/{changes,spec}-review-codex.md: Codex prompt templates
- launcher/model_config.py: model config refresh
- pilot/hooks/spec_stop_guard.py: tighter stop-guard
Tests: 2163 pass (513 installer including 24 manifest + 14 drift +
9 curl-pipe-verify + 6 brew tap-verification + 11 precache pinning)
Caution: Review failed — pull request was closed or merged during review.

Note: Reviews paused — it looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Walkthrough

This PR centralizes upstream pins in a validated YAML manifest, refactors installer and prerequisite steps to use manifest-driven, hash-verified curl/npm helpers, adds a manifest drift checker, introduces a supply-chain CI workflow, gates release workflows on the supply-chain check, configures Renovate to manage manifest updates, and adds extensive unit tests.

Changes: Supply-Chain Manifest, Validation & Installer Refactoring
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Claude finished @maxritter's task in 2m 45s — View job

PR Review: Supply-Chain Monitoring + Credential Scanning + Console Settings
Overall: This is a well-designed, defence-in-depth PR. The manifest system is solid, the curl-pipe-with-hash-verify helper is correct, and the credential scanner shows real care (allow-tag trust model, signal-alarm guard, BOM-aware reads, staged-diff scanning). Three issues stand out: one is critically ironic for a supply-chain PR, one is a genuine pin-bypass, and one is a minor drift gap.

Critical

1.
- scripts/check_manifest_drift.py: pass encoding='utf-8' to every Path.read_text() call. CI runners with non-UTF-8 locale defaults (LANG=C) crashed on the first non-ASCII byte (UnicodeDecodeError).
- .github/workflows/supply-chain.yml: trigger on push to main + dev, not just pull_request, so the release-dev/release workflows that start on the same push can find the check via the Checks API.
- release.yml + release-dev.yml: poll the supply-chain check for up to 12 min instead of the original one-shot lookup. Both workflows begin at the same time on a push, so the gate must wait for the supply-chain run to land before deciding.
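The polling behaviour in the last bullet can be sketched language-agnostically. A hypothetical Python helper follows — the real gate lives in github-script, and these names and defaults are assumptions:

```python
import time


def wait_for_check(fetch_conclusion, timeout_s=720, interval_s=15, sleep=time.sleep):
    """Poll until a check run reaches a terminal conclusion or the deadline passes.

    fetch_conclusion returns None while the check is queued/in_progress
    and a string ('success', 'failure', ...) once it completes.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        conclusion = fetch_conclusion()
        if conclusion is not None:
            return conclusion
        if time.monotonic() >= deadline:
            return "timed_out"  # fail closed: no conclusion within the window
        sleep(interval_s)
```

The 720-second default mirrors the 12-minute window mentioned above; failing closed on timeout keeps the gate safe when the supply-chain run never lands.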
Claude finished @maxritter's task in 2m 57s — View job

Incremental Review — push
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
installer/steps/dependencies.py (1)

140-147: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — The NVM fallback still installs a floating Node 22.x.

This path is now manifest-driven for the NVM installer itself, but nvm install 22 still resolves whatever 22.x is current at runtime. Any machine that falls back to this code path bypasses the manifest's exact Node pin and can drift between installs.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In installer/steps/dependencies.py around lines 140-147: The fallback NVM installation currently runs a floating "nvm install 22" which can drift; instead obtain the exact Node pin from the manifest used by the installer and use that value when building nvm_cmd. Replace the hardcoded '22' in the nvm_cmd string with the manifest-driven Node version (e.g. fetch a pinned version via the same manifest utilities used by _curl_pipe_from_manifest or a new helper like _get_manifest_node_version()) and then call _run_bash_with_retry with that pinned version so _get_nvm_source_cmd, _curl_pipe_from_manifest and _run_bash_with_retry continue to be used but the install uses the manifest exact pin rather than the floating "22".

installer/tests/unit/steps/test_dependencies.py (1)
1456-1465: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win — Missing subprocess.run mock makes this test environment-dependent.

All other install_golangci_lint tests that exercise the Go-present path (lines 1408 and 1437) patch subprocess.run to return a fake GOPATH. This test mocks command_exists to return True for "go" — so the function proceeds to resolve GOPATH via subprocess.run(["go", "env", "GOPATH"]) — but subprocess.run is not patched here. In CI runners without Go this raises FileNotFoundError (or produces empty/unexpected stdout), making the test flaky.

🛠 Proposed fix

+@patch("installer.steps.dependencies.subprocess.run")
 @patch("installer.steps.dependencies._curl_pipe_from_manifest", return_value=False)
 @patch("installer.steps.dependencies.command_exists", side_effect=lambda cmd: cmd == "go")
 @patch("installer.steps.dependencies._is_golangci_lint_installed", return_value=False)
-def test_install_golangci_lint_returns_false_on_failure(self, mock_check, mock_cmd, mock_helper):
+def test_install_golangci_lint_returns_false_on_failure(self, mock_check, mock_cmd, mock_helper, mock_sub):
     """install_golangci_lint returns False when manifest-pinned install fails."""
     from installer.steps.dependencies import install_golangci_lint
+    mock_sub.return_value = MagicMock(returncode=0, stdout="/home/runner/go\n")
     result = install_golangci_lint()
     assert result is False

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In installer/tests/unit/steps/test_dependencies.py around lines 1456-1465, The test test_install_golangci_lint_returns_false_on_failure mocks command_exists to return True for "go" but does not mock subprocess.run, making it environment-dependent; update the test to patch installer.steps.dependencies.subprocess.run (or subprocess.run used by install_golangci_lint) to return a fake CompletedProcess with stdout containing a plausible GOPATH (e.g., b"/fake/gopath\n") so the code path that calls subprocess.run(["go","env","GOPATH"]) does not invoke the real go binary; keep existing mocks (_curl_pipe_from_manifest, _is_golangci_lint_installed) and ensure the patched subprocess.run is passed into the test via @patch to avoid external dependencies.
🧹 Nitpick comments (4)
.github/workflows/supply-chain.yml (2)
10-14: ⚡ Quick win — checks: write appears unused; narrow to least-privilege.

Nothing in this workflow creates or updates a check run — the gate-side query lives in release.yml. Consider dropping checks: write (or downgrading to read if nothing reads either) to keep the token minimal.

As per coding guidelines: "Review workflows for security best practices and proper permissions."

🔒️ Proposed fix

 permissions:
   contents: read
   issues: write  # nightly cron opens GitHub issues on findings
   pull-requests: read
-  checks: write

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In .github/workflows/supply-chain.yml around lines 10-14, The workflow grants unnecessary "checks: write" permission; update the permissions block in the supply-chain workflow by removing "checks: write" (or change it to "checks: read" if any read is needed) so the job uses least-privilege. Edit the permissions section where "permissions:" and the keys "contents", "issues", "pull-requests", and "checks" are defined and ensure "checks" is not set to write.
43-53: ⚡ Quick win — Nightly failure issues will accumulate without de-duplication.

A persistent failure opens a fresh issue every 24h (title is timestamped to the day). Consider searching for an existing open security+supply-chain issue first and either commenting on it or skipping creation, so a sustained outage produces one tracked thread instead of N stale duplicates.

♻️ Sketch

 script: |
+  const existing = await github.rest.issues.listForRepo({
+    ...context.repo, state: 'open',
+    labels: 'security,supply-chain', per_page: 1,
+  });
+  if (existing.data.length > 0) {
+    await github.rest.issues.createComment({
+      ...context.repo, issue_number: existing.data[0].number,
+      body: `Recurring failure on ${context.sha}: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
+    });
+    return;
+  }
   await github.rest.issues.create({
     ...context.repo,
     title: `Supply-chain: nightly scan failed (${new Date().toISOString().slice(0,10)})`,
     body: `Nightly supply-chain scan failed for ${context.sha}. See workflow run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
     labels: ['security', 'supply-chain']
   });

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In .github/workflows/supply-chain.yml around lines 43-53, The workflow step "Open issue on nightly failure" currently always calls github.rest.issues.create which creates a new issue each day; change it to first query existing open issues using github.rest.issues.listForRepo filtered by labels ['security','supply-chain'] (and optionally by title containing "Supply-chain: nightly scan failed"), and if an open match exists call github.rest.issues.createComment on that issue instead of creating a new one, otherwise proceed to github.rest.issues.create as before; update the script block to use github.rest.issues.listForRepo, github.rest.issues.createComment and github.rest.issues.create accordingly and ensure you handle pagination/zero-results.

.github/workflows/release.yml (1)
78-87: 💤 Low value — Consider handling the "check is still in-progress" case explicitly.

sc.conclusion is null while a check is in_progress or queued, which the current code reports as ${sc?.conclusion ?? 'missing'}. If release.yml is triggered shortly after the commit lands (e.g., manual dispatch on a fresh commit), the supply-chain check may not yet have a conclusion and the gate will fail with "missing/null". That's actually the safe default, but a short retry/wait or a clearer message distinguishing in-progress vs. truly missing/failed would reduce false-negative release blocks during normal merges.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In .github/workflows/release.yml around lines 78-87, The release gate currently treats a null sc.conclusion as "missing", which conflates in_progress/queued with absent checks; update the logic around github.rest.checks.listForRef and the sc lookup so that if sc exists but sc.conclusion is null you either (a) perform a short retry loop with a small delay and re-fetch checks up to N times before failing, or (b) emit a clearer core.setFailed message that distinguishes "in_progress/queued" (sc.status) from truly missing/failed; use the existence of sc, sc.status and sc.conclusion in the decision branch that currently sets core.setFailed for supply-chain to implement one of these behaviors.

installer/tests/unit/steps/test_dependencies.py (1)
286: 💤 Low value — tmp_path: Path fixture parameter is accepted but never used in three tests.

test_runs_when_hash_matches, test_hard_pin_mismatch_returns_false, and test_soft_pin_mismatch_proceeds all declare tmp_path but rely on the _fake_run_bash helper (which extracts the temp path from the curl command string) rather than using tmp_path directly. Remove the parameter or replace the path-extraction logic with tmp_path-rooted mocks to align with the intent.

Also applies to: lines 320, 346
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@installer/tests/unit/steps/test_dependencies.py` at line 286, The three tests test_runs_when_hash_matches, test_hard_pin_mismatch_returns_false, and test_soft_pin_mismatch_proceeds declare tmp_path but never use it; either remove the unused tmp_path parameter from those test signatures or modify the tests and the helper _fake_run_bash so they use tmp_path explicitly (e.g., build the curl/file paths from tmp_path and assert against those paths instead of parsing them out of the curl command string). Locate and update the tests and the _fake_run_bash helper to consistently use the tmp_path fixture (or remove the fixture from the three test functions) so the temp directory comes from the pytest fixture rather than string extraction.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/release-dev.yml:
- Around line 15-36: The current supply-chain-gate step does a single
checks.listForRef lookup and fails if the 'supply-chain' run is missing or not
'success'; change it to poll the checks API until the check run reaches a
terminal state or a timeout instead of immediately failing. In the
actions/github-script block (the script using github.rest.checks.listForRef and
the sc variable), implement a loop with a short exponential/backoff delay that
re-calls checks.listForRef and re-finds r => r.name === 'supply-chain' until
sc.status === 'completed' (or sc.conclusion is one of the terminal conclusions)
or until a configurable timeout is reached; only call core.setFailed if the poll
ends with a completed non-success conclusion or the overall timeout elapses;
alternatively (if you prefer not to poll) restrict this job’s triggers to events
where the 'supply-chain' check is guaranteed to run so the initial lookup is
valid.
In @.github/workflows/release.yml:
- Line 75: Replace the floating tag "uses: actions/github-script@v7" with a full
commit SHA pin to match the repo's existing pattern; locate the workflow step
referencing actions/github-script (the line containing "uses:
actions/github-script@v7") and change it to the equivalent "uses:
actions/github-script@<full-commit-sha>" and add the same style version comment
used across the file so the action is pinned to a specific commit and consistent
with other steps (e.g., include the commit SHA and short human-readable version
comment).
In @.github/workflows/supply-chain.yml:
- Around line 22-46: The workflow uses floating action tags actions/checkout@v4,
actions/setup-python@v5, and actions/github-script@v7 which should be pinned to
full commit SHAs to mitigate supply-chain risks; replace each uses entry with
the corresponding full-length commit SHA for that tag (e.g.,
actions/checkout@<full-commit-sha>) and add a trailing comment with the
human-readable tag/version (e.g., "# v4") for reference, ensuring you fetch the
canonical SHAs from the official GitHub release/tag pages or ls-remote before
committing.
In `@installer/manifest.py`:
- Around line 133-135: The code currently uses raw.get("entries") or [] which
treats invalid values (like "" or {}) as missing; instead, first detect whether
the "entries" key exists on raw and only default to an empty list when the key
is absent. Replace the raw_entries assignment with logic: if "entries" not in
raw then set raw_entries = [] else set raw_entries = raw["entries"]; then keep
the existing type check and raise ManifestError("'entries' must be a list") if
raw_entries is not a list. This uses the raw_entries variable,
raw.get/raw["entries"] access, and the existing ManifestError to enforce the
schema.
In `@installer/steps/dependencies.py`:
- Around line 233-255: The soft-pin sha mismatch stores a warning in
_thread_local.last_retry_stderr but is never surfaced on success, so soft_pin
drift proceeds silently; when soft_pin is true and you set
_thread_local.last_retry_stderr = msg, also emit or persist that warning on the
success path (e.g., log to stderr or attach it to the successful result) before
returning from this routine so callers see the audit reminder—update the branch
around the sha mismatch handling to ensure the message is logged/emitted
(reference _thread_local.last_retry_stderr, soft_pin, the call to
_run_bash_with_retry, and opts) rather than only kept for failure paths.
In `@installer/tests/unit/steps/test_dependencies.py`:
- Around line 1505-1507: The test currently assumes every mocked call passed
timeout as a keyword which can raise KeyError; update the list comprehension
that builds timeouts from mock_run.call_args_list to safely read timeout (e.g.
use call.kwargs.get("timeout") and include call.args inspection only if needed)
and filter out None so positional timeouts won't crash the test; keep references
to the test's mock_run.call_args_list and the
install_pbt_tools/_run_bash_with_retry call pattern so the assertion checks
compare UV_TOOL_INSTALL_TIMEOUT and GLOBAL_NPM_INSTALL_TIMEOUT against the
collected non-None timeouts.
In `@installer/tests/unit/test_drift_check.py`:
- Around line 162-177: The tests only scan installer/steps/*.py and .mcp.json
files but omit other paths listed in SCAN_FILES (notably install.sh and
launcher/build.py); update the unit tests (e.g., extend
test_installer_steps_clean_after_tasks_2_through_5 or add a new test) to iterate
the full SCAN_FILES set from the codebase and call drift_module.scan_file (and
drift_module.cross_reference_mcp where applicable) for each path, asserting no
findings so regressions in install.sh and launcher/build.py are caught by these
tests.
In `@installer/upstreams.yaml`:
- Around line 216-245: The three brew monitor entries (ids pnpm-brew, bun-brew,
uv-brew) are missing the auto_upgrade: false flag causing installed formulas to
drift from the pinned version; update each entry (pnpm-brew, bun-brew, uv-brew)
to include auto_upgrade: false to match the other monitor-pinned brew entries,
or if there is an intentional reason to exempt any of them, add a short comment
on that entry documenting why it is exempt instead of adding the flag.
In `@scripts/check_manifest_drift.py`:
- Around line 101-111: The helper _is_npm_install_pkg_pinned is misclassifying
non-exact specs like pkg@beta, pkg@next, or pkg@^1.2.3 as pinned because it only
checks for the presence of "@"; change it to require an exact semver (no range
operators or tags). Update _is_npm_install_pkg_pinned to: skip known flags and
other "--" args as before; for scoped names (pkg startswith "@") split off the
package name and take the part after the final "@" (if any); for non-scoped
names take the part after the final "@"; then verify that the version part is a
strict numeric semver (e.g., matches the exact X.Y.Z format, optionally with a
valid numeric pre-release/build per semver rules) and reject ranges/tags/leading
operators like ^ ~ > < = latest beta next; if you prefer, call into a semver
parser/validator (e.g., semver.parseExact) instead of a regex to decide
exactness.
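The exactness check this prompt asks for can be sketched like so — a hypothetical helper, not the repo's actual _is_npm_install_pkg_pinned; the regex and scoped-name handling are assumptions:

```python
import re

# Exact X.Y.Z, optionally with semver pre-release/build metadata;
# rejects range operators (^ ~ > < =) and dist-tags (latest, beta, next).
_EXACT_SEMVER = re.compile(r"^\d+\.\d+\.\d+(?:-[0-9A-Za-z.-]+)?(?:\+[0-9A-Za-z.-]+)?$")


def is_exact_npm_pin(spec: str) -> bool:
    """True only for specs like pkg@1.2.3 or @scope/pkg@1.2.3-rc.1."""
    if spec.startswith("@"):  # scoped: @scope/name[@version]
        rest = spec[1:]
        if "@" not in rest:
            return False  # @scope/name with no version at all
        version = rest.rsplit("@", 1)[1]
    else:
        if "@" not in spec:
            return False
        version = spec.rsplit("@", 1)[1]
    return bool(_EXACT_SEMVER.match(version))
```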
- Around line 83-92: Catch UnicodeDecodeError in _iter_lines(path: Path) and
convert it into a deterministic Finding instead of letting the exception abort
the run: wrap path.read_text() in a try/except UnicodeDecodeError, construct a
Finding (use the existing Finding class/API and include path and the decode
error text in the message), emit or register that Finding so the check records
it, and return an empty list (or otherwise a safe list of _LineCtx) so
processing continues; update the caller if necessary to ensure emitted Findings
are collected and surfaced.
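The decode-error handling this prompt describes can be sketched as follows — a hypothetical reader, not the checker's actual _iter_lines, with the Finding simplified to a string:

```python
from pathlib import Path


def read_lines_or_finding(path: Path):
    """Read a scan target as UTF-8, converting undecodable bytes into a
    deterministic finding instead of aborting the whole drift check."""
    try:
        text = path.read_text(encoding="utf-8")
    except UnicodeDecodeError as exc:
        finding = f"{path}: not valid UTF-8 ({exc.reason} at byte {exc.start})"
        return [], finding
    return text.splitlines(), None
```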
---
Outside diff comments:
In `@installer/steps/dependencies.py`:
- Around line 140-147: The fallback NVM installation currently runs a floating
"nvm install 22" which can drift; instead obtain the exact Node pin from the
manifest used by the installer and use that value when building nvm_cmd. Replace
the hardcoded '22' in the nvm_cmd string with the manifest-driven Node version
(e.g. fetch a pinned version via the same manifest utilities used by
_curl_pipe_from_manifest or a new helper like _get_manifest_node_version()) and
then call _run_bash_with_retry with that pinned version so _get_nvm_source_cmd,
_curl_pipe_from_manifest and _run_bash_with_retry continue to be used but the
install uses the manifest exact pin rather than the floating "22".
In `@installer/tests/unit/steps/test_dependencies.py`:
- Around line 1456-1465: The test
test_install_golangci_lint_returns_false_on_failure mocks command_exists to
return True for "go" but does not mock subprocess.run, making it
environment-dependent; update the test to patch
installer.steps.dependencies.subprocess.run (or subprocess.run used by
install_golangci_lint) to return a fake CompletedProcess with stdout containing
a plausible GOPATH (e.g., b"/fake/gopath\n") so the code path that calls
subprocess.run(["go","env","GOPATH"]) does not invoke the real go binary; keep
existing mocks (_curl_pipe_from_manifest, _is_golangci_lint_installed) and
ensure the patched subprocess.run is passed into the test via `@patch` to avoid
external dependencies.
---
Nitpick comments:
In @.github/workflows/release.yml:
- Around line 78-87: The release gate currently treats a null sc.conclusion as
"missing", which conflates in_progress/queued with absent checks; update the
logic around github.rest.checks.listForRef and the sc lookup so that if sc
exists but sc.conclusion is null you either (a) perform a short retry loop with
a small delay and re-fetch checks up to N times before failing, or (b) emit a
clearer core.setFailed message that distinguishes "in_progress/queued"
(sc.status) from truly missing/failed; use the existence of sc, sc.status and
sc.conclusion in the decision branch that currently sets core.setFailed for
supply-chain to implement one of these behaviors.
In @.github/workflows/supply-chain.yml:
- Around line 10-14: The workflow grants unnecessary "checks: write" permission;
update the permissions block in the supply-chain workflow by removing "checks:
write" (or change it to "checks: read" if any read is needed) so the job uses
least-privilege. Edit the permissions section where "permissions:" and the keys
"contents", "issues", "pull-requests", and "checks" are defined and ensure
"checks" is not set to write.
- Around line 43-53: The workflow step "Open issue on nightly failure" currently
always calls github.rest.issues.create which creates a new issue each day;
change it to first query existing open issues using
github.rest.issues.listForRepo filtered by labels ['security','supply-chain']
(and optionally by title containing "Supply-chain: nightly scan failed"), and if
an open match exists call github.rest.issues.createComment on that issue instead
of creating a new one, otherwise proceed to github.rest.issues.create as before;
update the script block to use github.rest.issues.listForRepo,
github.rest.issues.createComment and github.rest.issues.create accordingly and
ensure you handle pagination/zero-results.
In `@installer/tests/unit/steps/test_dependencies.py`:
- Line 286: The three tests test_runs_when_hash_matches,
test_hard_pin_mismatch_returns_false, and test_soft_pin_mismatch_proceeds
declare tmp_path but never use it; either remove the unused tmp_path parameter
from those test signatures or modify the tests and the helper _fake_run_bash so
they use tmp_path explicitly (e.g., build the curl/file paths from tmp_path and
assert against those paths instead of parsing them out of the curl command
string). Locate and update the tests and the _fake_run_bash helper to
consistently use the tmp_path fixture (or remove the fixture from the three test
functions) so the temp directory comes from the pytest fixture rather than
string extraction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7de6a10c-4f07-4b76-95c5-f681db5214a3
⛔ Files ignored due to path filters (36)

- console/src/services/worker/http/routes/SettingsRoutes.ts is excluded by !console/**
- console/src/ui/viewer/hooks/useSettings.ts is excluded by !console/**
- console/src/ui/viewer/views/Settings/index.tsx is excluded by !console/**
- console/tests/settings-routes.test.ts is excluded by !console/**
- launcher/build.py is excluded by !launcher/**
- launcher/model_config.py is excluded by !launcher/**
- launcher/tests/unit/test_model_config.py is excluded by !launcher/**
- pilot/.mcp.json is excluded by !pilot/**
- pilot/agents/changes-review-codex.md is excluded by !pilot/**
- pilot/agents/spec-review-codex.md is excluded by !pilot/**
- pilot/hooks/_lib/allow_tags.py is excluded by !pilot/**
- pilot/hooks/_lib/secret_scanner.py is excluded by !pilot/**
- pilot/hooks/credential_scanner.py is excluded by !pilot/**
- pilot/hooks/hooks.json is excluded by !pilot/**
- pilot/hooks/spec_stop_guard.py is excluded by !pilot/**
- pilot/hooks/tests/test_allow_tags.py is excluded by !pilot/**
- pilot/hooks/tests/test_credential_scanner.py is excluded by !pilot/**
- pilot/hooks/tests/test_secret_scanner.py is excluded by !pilot/**
- pilot/hooks/tests/test_spec_stop_guard.py is excluded by !pilot/**
- pilot/rules/development-practices.md is excluded by !pilot/**
- pilot/rules/security-credentials.md is excluded by !pilot/**
- pilot/scripts/mcp-server.cjs is excluded by !pilot/**
- pilot/scripts/worker-service.cjs is excluded by !pilot/**
- pilot/skills/benchmark/tests/test_isolation.py is excluded by !pilot/**
- pilot/skills/fix/orchestrator.md is excluded by !pilot/**
- pilot/skills/fix/steps/01-investigate.md is excluded by !pilot/**
- pilot/skills/fix/steps/03-fix.md is excluded by !pilot/**
- pilot/skills/fix/steps/06-finalise.md is excluded by !pilot/**
- pilot/skills/spec-plan/steps/11-plan-verification.md is excluded by !pilot/**
- pilot/skills/spec-verify/steps/04-launch-review.md is excluded by !pilot/**
- pilot/skills/spec-verify/steps/06-feature-parity.md is excluded by !pilot/**
- pilot/skills/spec-verify/steps/07-collect-results.md is excluded by !pilot/**
- pilot/skills/spec-verify/steps/13-e2e.md is excluded by !pilot/**
- pilot/skills/spec-verify/steps/19-update-status.md is excluded by !pilot/**
- pilot/ui/viewer-bundle.js is excluded by !pilot/**
- uv.lock is excluded by !**/*.lock
📒 Files selected for processing (17)

- .github/renovate.json
- .github/workflows/release-dev.yml
- .github/workflows/release.yml
- .github/workflows/supply-chain.yml
- docs/security/AUDIT.md
- install.sh
- installer/manifest.py
- installer/steps/dependencies.py
- installer/steps/prerequisites.py
- installer/tests/unit/steps/test_dependencies.py
- installer/tests/unit/steps/test_dependencies_playwright.py
- installer/tests/unit/steps/test_prerequisites.py
- installer/tests/unit/test_drift_check.py
- installer/tests/unit/test_manifest.py
- installer/upstreams.yaml
- pyproject.toml
- scripts/check_manifest_drift.py
timeouts = [call.kwargs["timeout"] for call in mock_run.call_args_list]
assert UV_TOOL_INSTALL_TIMEOUT in timeouts
assert GLOBAL_NPM_INSTALL_TIMEOUT in timeouts
call.kwargs["timeout"] raises KeyError if timeout is passed positionally.
If any _run_bash_with_retry call inside install_pbt_tools passes timeout as a positional argument, call.kwargs will not contain the key and the list comprehension raises KeyError, producing a confusing test error instead of a clear assertion failure.
🛠 Proposed fix
-timeouts = [call.kwargs["timeout"] for call in mock_run.call_args_list]
+timeouts = [
+ call.kwargs.get("timeout") if call.kwargs.get("timeout") is not None
+ else (call.args[1] if len(call.args) > 1 else None)
+ for call in mock_run.call_args_list
+]

Or simply use call.kwargs.get("timeout") and filter out None before the assertions.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
timeouts = [
    call.kwargs.get("timeout") if call.kwargs.get("timeout") is not None
    else (call.args[1] if len(call.args) > 1 else None)
    for call in mock_run.call_args_list
]
assert UV_TOOL_INSTALL_TIMEOUT in timeouts
assert GLOBAL_NPM_INSTALL_TIMEOUT in timeouts
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@installer/tests/unit/steps/test_dependencies.py` around lines 1505 - 1507,
The test currently assumes every mocked call passed timeout as a keyword which
can raise KeyError; update the list comprehension that builds timeouts from
mock_run.call_args_list to safely read timeout (e.g. use
call.kwargs.get("timeout") and include call.args inspection only if needed) and
filter out None so positional timeouts won't crash the test; keep references to
the test's mock_run.call_args_list and the
install_pbt_tools/_run_bash_with_retry call pattern so the assertion checks
compare UV_TOOL_INSTALL_TIMEOUT and GLOBAL_NPM_INSTALL_TIMEOUT against the
collected non-None timeouts.
- id: pnpm-brew
  name: pnpm
  source_type: brew
  source_url: https://github.com/pnpm/pnpm
  version: "9.15.0"
  pin_kind: monitor
  brew_formula: pnpm
  brew_tap: homebrew/core
  last_audited: "2026-05-07"

- id: bun-brew
  name: bun
  source_type: brew
  source_url: https://github.com/oven-sh/bun
  version: "1.1.40"
  pin_kind: monitor
  brew_formula: bun
  # bun ships from oven-sh/bun tap (Pilot pre-taps it); intentional non-core origin.
  brew_tap: oven-sh/bun
  last_audited: "2026-05-07"

- id: uv-brew
  name: uv (Homebrew install path)
  source_type: brew
  source_url: https://github.com/astral-sh/uv
  version: "0.5.10"
  pin_kind: monitor
  brew_formula: uv
  brew_tap: homebrew/core
  last_audited: "2026-05-07"
Missing auto_upgrade: false on three monitor-pinned brew entries.
pnpm-brew, bun-brew, and uv-brew omit auto_upgrade: false while every other monitor-pinned brew entry sets it (git, gh, python@3.12, node@22, nvm, go, gopls, jq, ripgrep, rtk). Per the comment at lines 154-159, this flag is what keeps the manifest version equal to the installed version under routine brew upgrade; without it these three formulas can silently drift off the pinned version, which the drift checker won't catch since it inspects code references, not installed brew state.
Either set the flag to align with the rest, or document why these are intentionally exempt.
🛡️ Proposed fix
- id: pnpm-brew
name: pnpm
source_type: brew
source_url: https://github.com/pnpm/pnpm
version: "9.15.0"
pin_kind: monitor
+ auto_upgrade: false
brew_formula: pnpm
brew_tap: homebrew/core
last_audited: "2026-05-07"
- id: bun-brew
name: bun
source_type: brew
source_url: https://github.com/oven-sh/bun
version: "1.1.40"
pin_kind: monitor
+ auto_upgrade: false
brew_formula: bun
# bun ships from oven-sh/bun tap (Pilot pre-taps it); intentional non-core origin.
brew_tap: oven-sh/bun
last_audited: "2026-05-07"
- id: uv-brew
name: uv (Homebrew install path)
source_type: brew
source_url: https://github.com/astral-sh/uv
version: "0.5.10"
pin_kind: monitor
+ auto_upgrade: false
brew_formula: uv
brew_tap: homebrew/core
last_audited: "2026-05-07"
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@installer/upstreams.yaml` around lines 216 - 245, The three brew monitor
entries (ids pnpm-brew, bun-brew, uv-brew) are missing the auto_upgrade: false
flag causing installed formulas to drift from the pinned version; update each
entry (pnpm-brew, bun-brew, uv-brew) to include auto_upgrade: false to match the
other monitor-pinned brew entries, or if there is an intentional reason to
exempt any of them, add a short comment on that entry documenting why it is
exempt instead of adding the flag.
GitHub's secret-scanning alerts (different/broader than push-protection) flagged AWS canonical example, GCP API-key shape, and OpenAI project-key shape on commit 77193e3. The values are clearly synthetic test fixtures for the hook's own pattern detector — same intent as the existing sendgrid/mailgun/stripe/github-pat tests that already use string concatenation. Splits every literal that matches a real-secret regex: AWS access key, AWS session token, GCP API key, GitLab PAT, Slack token, OpenAI project key, JWT, and generic-secret api_key string. Runtime values unchanged; all 41 secret-scanner tests pass.
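The split-literal technique mentioned above can be illustrated with two synthetic fixtures. Both values here are illustrative: the AWS string is the public documentation example key, and the Slack string is a made-up token shape, not a real credential.

```python
# Concatenation keeps each fragment too short to match a real-secret regex
# in source form, while the scanner under test still sees the joined value.
fake_aws_key = "AKIA" + "IOSFODNN7" + "EXAMPLE"  # AWS docs' canonical example key
fake_slack_token = "xoxb-" + "123456789012" + "-not-a-real-token"

assert fake_aws_key.startswith("AKIA") and len(fake_aws_key) == 20
assert fake_slack_token.startswith("xoxb-")
```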
Claude finished @maxritter's task in 2m 2s —— View job Incremental Review — push
Actionable comments posted: 2
♻️ Duplicate comments (3)
.github/workflows/release.yml (1)
77-77: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`actions/github-script` is still floating on `@v7` in a release-blocking job. Line 77 should be pinned to an immutable commit SHA.

As per coding guidelines, `.github/workflows/**`: Review workflows for security best practices and proper permissions.

🔧 Suggested fix

- - uses: actions/github-script@v7
+ - uses: actions/github-script@<full-commit-sha> # v7

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.github/workflows/release.yml at line 77, Replace the floating action reference "uses: actions/github-script@v7" with an immutable commit SHA: locate the workflow step containing uses: actions/github-script@v7 and change it to the corresponding pinned commit (actions/github-script@<commit-sha>) by looking up the latest stable release commit on the actions/github-script repository and using that SHA so the release job is pinned to an immutable reference.

scripts/check_manifest_drift.py (2)
85-90: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Silent read/decode failures currently bypass drift enforcement.

On Line 89 and Line 90, unreadable files are dropped by returning no line contexts, so the scan can pass without flagging that the file was never analyzed.

🔧 Suggested fix

-def _iter_lines(path: Path) -> list[_LineCtx]:
+def _iter_lines(path: Path) -> tuple[list[_LineCtx], str | None]:
     out: list[_LineCtx] = []
     try:
         # Explicit UTF-8: pathlib.Path.read_text() uses the locale default, which
         # is not always UTF-8 on CI runners (depends on $LANG / $LC_ALL).
         text = path.read_text(encoding="utf-8")
-    except (OSError, UnicodeDecodeError):
-        return out
+    except (OSError, UnicodeDecodeError) as exc:
+        return out, f"could not read file as utf-8: {exc}"
     for i, line in enumerate(text.splitlines(), start=1):
         has_noqa, justified = _parse_noqa(line)
         out.append(_LineCtx(text=line, line_no=i, has_noqa=has_noqa, noqa_justified=justified))
-    return out
+    return out, None

 def _scan_python_or_shell(path: Path) -> list[Finding]:
     findings: list[Finding] = []
     is_shell = path.suffix == ".sh" or path.name.endswith(".sh")
-    for ctx in _iter_lines(path):
+    lines, read_error = _iter_lines(path)
+    if read_error:
+        findings.append(Finding(file=path, line=0, message=read_error))
+        return findings
+    for ctx in lines:

Also applies to: 119-120
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@scripts/check_manifest_drift.py` around lines 85 - 90, The current except block around path.read_text silently returns an empty 'out' when OSError/UnicodeDecodeError occurs (path.read_text), which lets unreadable files bypass drift checks; change the handler to capture the exception (e) and surface it instead of returning silently — either re-raise a descriptive exception or append a sentinel error context to 'out' (e.g. an entry indicating the file path and error message) so the drift scan records the unreadable file; apply the same change to the similar except at lines handling the second read (the other path.read_text branch around lines 119-120).
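A simplified standalone sketch of the behavior this comment asks for follows; `Finding`, `iter_lines`, and `scan` here are trimmed-down stand-ins for the script's real types, not the actual implementation.

```python
from __future__ import annotations

from dataclasses import dataclass
from pathlib import Path


@dataclass
class Finding:
    file: str
    line: int
    message: str


def iter_lines(path: Path) -> tuple[list[str], str | None]:
    """Return (lines, error): a read/decode failure is reported, not swallowed."""
    try:
        text = path.read_text(encoding="utf-8")
    except (OSError, UnicodeDecodeError) as exc:
        return [], f"could not read file as utf-8: {exc}"
    return text.splitlines(), None


def scan(path: Path) -> list[Finding]:
    lines, error = iter_lines(path)
    if error is not None:
        # Surface the unreadable file as a finding so the scan cannot
        # silently pass over it.
        return [Finding(file=str(path), line=0, message=error)]
    return []  # real per-line rule checks would go here


findings = scan(Path("definitely/not/a/real/file.py"))
assert len(findings) == 1
assert findings[0].message.startswith("could not read")
```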
103-114: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Pinned npm spec check is still too permissive for exact-version policy.

Line 112 and Line 113 only check for `@`, so `pkg@latest`, `pkg@next`, and `pkg@^1.2.3` are treated as pinned.

🔧 Suggested fix

+EXACT_SEMVER_RE = re.compile(
+    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
+    r"(?:-[0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*)?"
+    r"(?:\+[0-9A-Za-z-]+(?:\.[0-9A-Za-z-]+)*)?$"
+)
+
 def _is_npm_install_pkg_pinned(pkg_arg: str) -> bool:
     """`pkg@1.2.3` is pinned; `pkg`, `@scope/pkg`, `pkg@latest` are not."""
     if pkg_arg in {"-g", "--force", "--ignore-scripts", "--no-audit", "--no-fund"}:
         return True  # flag, not a package
     if pkg_arg.startswith("--"):
         return True
-    if pkg_arg.startswith("@"):
-        # `@scope/pkg` or `@scope/pkg@version`
-        rest = pkg_arg[1:]
-        return "@" in rest
-    return "@" in pkg_arg
+    name, sep, version = pkg_arg.rpartition("@")
+    if not sep:
+        return False
+    if pkg_arg.startswith("@") and "/" not in name:
+        # e.g. "@scope/pkg" with no version can rpartition at leading '@'
+        return False
+    return bool(EXACT_SEMVER_RE.fullmatch(version))

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@scripts/check_manifest_drift.py` around lines 103 - 114, _is_npm_install_pkg_pinned currently treats any package string containing "@" as pinned, which incorrectly classifies ranges like "pkg@latest", "pkg@next", or "pkg@^1.2.3" as pinned; change the logic in _is_npm_install_pkg_pinned to treat a package as pinned only when the part after the final "@" is an exact semver (e.g. 1.2.3); specifically, skip flags as before, find the version token as the substring after the last "@" (handle scoped names by ignoring the initial "@scope/"), and validate it against a strict semver regex (for example /^\d+\.\d+\.\d+(-[0-9A-Za-z-.]+)?(\+[0-9A-Za-z-.]+)?$/) rejecting values like "latest", "next", ranges with ^, ~, >=, <=, >, <, x, or wildcards; return True only when that regex matches.
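The strict-semver rule from the suggested fix can be exercised as a tiny standalone function; the package names below are arbitrary examples, not entries from the manifest.

```python
import re

# Only name@X.Y.Z (optionally with a pre-release/build suffix) counts as pinned.
EXACT_SEMVER = re.compile(
    r"^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)"
    r"(?:-[0-9A-Za-z.-]+)?(?:\+[0-9A-Za-z.-]+)?$"
)


def is_pinned(spec: str) -> bool:
    name, sep, version = spec.rpartition("@")
    if not sep or not name:  # bare "pkg", or "@scope/pkg" splitting at the leading "@"
        return False
    return EXACT_SEMVER.fullmatch(version) is not None


assert is_pinned("typescript@5.4.2")
assert is_pinned("@types/node@22.5.0")
assert not is_pinned("pkg@latest")
assert not is_pinned("pkg@next")
assert not is_pinned("pkg@^1.2.3")
assert not is_pinned("@scope/pkg")
```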
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/release-dev.yml:
- Line 25: The workflow step currently uses the mutable tag
"actions/github-script@v7", which should be pinned to a specific commit SHA like
the other actions; update the usage of actions/github-script (the entry
referenced as actions/github-script@v7) to the corresponding commit SHA for the
v7 release and replace the tag with that SHA to ensure consistency and reduce
supply-chain risk.
In `@scripts/check_manifest_drift.py`:
- Around line 201-223: The code builds monitored_urls from manifest.entries
using e.source_url but later compares package names (pkg) against that set;
instead create a set of manifest IDs for npm entries (e.g. monitored_ids = {e.id
for e in manifest.entries if e.source_type == "npm"}) and compare pkg to that
set (or normalize pkg to the same ID form used in manifest) before appending the
Finding; update references to monitored_urls → monitored_ids and keep the
existing pkg extraction and findings logic.
---
Duplicate comments:
In @.github/workflows/release.yml:
- Line 77: Replace the floating action reference "uses:
actions/github-script@v7" with an immutable commit SHA: locate the workflow step
containing uses: actions/github-script@v7 and change it to the corresponding
pinned commit (actions/github-script@<commit-sha>) by looking up the latest
stable release commit on the actions/github-script repository and using that SHA
so the release job is pinned to an immutable reference.
In `@scripts/check_manifest_drift.py`:
- Around line 85-90: The current except block around path.read_text silently
returns an empty 'out' when OSError/UnicodeDecodeError occurs (path.read_text),
which lets unreadable files bypass drift checks; change the handler to capture
the exception (e) and surface it instead of returning silently — either re-raise
a descriptive exception or append a sentinel error context to 'out' (e.g. an
entry indicating the file path and error message) so the drift scan records the
unreadable file; apply the same change to the similar except at lines handling
the second read (the other path.read_text branch around lines 119-120).
- Around line 103-114: _is_npm_install_pkg_pinned currently treats any package
string containing "@" as pinned, which incorrectly classifies ranges like
"pkg@latest", "pkg@next", or "pkg@^1.2.3" as pinned; change the logic in
_is_npm_install_pkg_pinned to treat a package as pinned only when the part after
the final "@" is an exact semver (e.g. 1.2.3); specifically, skip flags as
before, find the version token as the substring after the last "@" (handle
scoped names by ignoring the initial "@scope/"), and validate it against a
strict semver regex (for example
/^\d+\.\d+\.\d+(-[0-9A-Za-z-.]+)?(\+[0-9A-Za-z-.]+)?$/) rejecting values like
"latest", "next", ranges with ^, ~, >=, <=, >, <, x, or wildcards; return True
only when that regex matches.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8625362b-4b79-4630-a8d7-a57ebbde2a2b
⛔ Files ignored due to path filters (1)
pilot/hooks/tests/test_secret_scanner.py is excluded by !pilot/**
📒 Files selected for processing (4)
- .github/workflows/release-dev.yml
- .github/workflows/release.yml
- .github/workflows/supply-chain.yml
- scripts/check_manifest_drift.py
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/supply-chain.yml
    monitored_urls = {e.source_url for e in manifest.entries if e.source_type == "npm"}
    servers = doc.get("mcpServers") or {}
    for name, cfg in servers.items():
        if cfg.get("command") != "npx":
            continue
        for a in cfg.get("args") or []:
            if not isinstance(a, str):
                continue
            if a.startswith("-") or a.startswith("http"):
                continue
            # Strip the @<version>; keep @scope/ prefix.
            if a.startswith("@"):
                rest = a[1:]
                pkg = "@" + rest.split("@", 1)[0]
            else:
                pkg = a.split("@", 1)[0]
            if pkg and pkg not in monitored_urls:
                findings.append(
                    Finding(
                        file=path,
                        line=0,
                        message=f"{name}: pinned package {pkg!r} is not in manifest (unmonitored)",
                    )
                )
MCP cross-reference appears to compare package names against source_url instead of manifest IDs.
On Line 201 and Line 217, `pkg` is checked against `source_url` values. That conflicts with the file contract at Lines 11-13 ("resolve to manifest entry by id"), and can misclassify monitored packages.
🔧 Suggested fix
- monitored_urls = {e.source_url for e in manifest.entries if e.source_type == "npm"}
+ monitored_ids = {e.id for e in manifest.entries if e.source_type == "npm"}
@@
- if pkg and pkg not in monitored_urls:
+ manifest_id = pkg.split("/", 1)[-1] if pkg.startswith("@") else pkg
+ if manifest_id and manifest_id not in monitored_ids:
findings.append(
Finding(
file=path,
line=0,
- message=f"{name}: pinned package {pkg!r} is not in manifest (unmonitored)",
+ message=f"{name}: pinned package {pkg!r} is not in manifest (unmonitored)",
)
)🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scripts/check_manifest_drift.py` around lines 201 - 223, The code builds
monitored_urls from manifest.entries using e.source_url but later compares
package names (pkg) against that set; instead create a set of manifest IDs for
npm entries (e.g. monitored_ids = {e.id for e in manifest.entries if
e.source_type == "npm"}) and compare pkg to that set (or normalize pkg to the
same ID form used in manifest) before appending the Finding; update references
to monitored_urls → monitored_ids and keep the existing pkg extraction and
findings logic.
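The `@<version>`-stripping logic in the excerpt can be lifted into a small standalone helper; the server and package names in the assertions are illustrative.

```python
def npx_package_name(arg: str) -> str:
    """Strip a trailing @<version> from an npx argument, keeping any @scope/ prefix."""
    if arg.startswith("@"):
        # Split on the first "@" that follows the scope marker.
        return "@" + arg[1:].split("@", 1)[0]
    return arg.split("@", 1)[0]


assert npx_package_name("@example/server-tools@1.2.3") == "@example/server-tools"
assert npx_package_name("some-server@0.4.0") == "some-server"
assert npx_package_name("plain-pkg") == "plain-pkg"
```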
Bot review findings (10 fixes from triage):

1. SHA-pin floating GitHub Actions in supply-chain.yml + supply-chain-gate jobs (release.yml + release-dev.yml). Meta-irony in a supply-chain PR.
   - actions/checkout@34e1148 (v4)
   - actions/setup-python@a26af69 (v5)
   - actions/github-script@f28e40c (v7)
2. Pin zod via manifest entry; _fix_npx_peer_dependencies now reads from the manifest and uses --ignore-scripts (was unversioned npm install).
3. manifest.py: reject non-list 'entries' values explicitly instead of coercing {} or '' to empty list (was silent schema bypass).
4. _curl_pipe_with_hash_verify: surface soft-pin warning on the success path via stderr so re-pin reminders aren't buried (was set-and-forget on _thread_local.last_retry_stderr).
5. check_manifest_drift._iter_lines: convert UnicodeDecodeError into a deterministic Finding instead of aborting the whole run.
6. _is_npm_install_pkg_pinned: require strict semver (X.Y.Z[+pre]) — no longer accepts pkg@beta, pkg@^1.2.3, pkg@latest as 'pinned'.
7. test_drift_check: extend clean-repo audit to cover install.sh and launcher/build.py (the bootstrap surface), not just installer/steps/. Plus regression tests for strict-semver + UTF-8 finding behaviors.
8. is_shell suffix check: drop redundant 'path.suffix == ".sh"' clause (strict subset of 'path.name.endswith(".sh")').
9. supply-chain.yml: scope 'issues: write' to the supply-chain job instead of granting it workflow-wide.

UI:
- Settings page: move Console + Security side-by-side in a 2-col grid, both placed before Spec Workflow Review Agents. Removes the duplicate Security sub-section from inside Spec Workflow.

Cleanup:
- Drop docs/security/AUDIT.md (V1 supply-chain dropped Scorecard, Socket, branch-protection — the AUDIT doc no longer describes anything not already in upstreams.yaml comments).
- Remove AUDIT.md references from upstreams.yaml header, dependencies.py install_claude_code() docstring, release.yml gate comment, renovate.json packageRule description.
Tests: 1078 Python pass (5 new drift tests for strict-semver + UTF-8-finding + bootstrap-surface coverage); 1485 console tests pass; drift checker green on full SCAN_FILES set.
Claude finished @maxritter's task in 3m 26s —— View job Incremental Review — push
| Severity | Location | What was fixed |
|---|---|---|
| Critical | supply-chain.yml:30,34,44 | actions/checkout, actions/setup-python, actions/github-script all now SHA-pinned (@34e114876b..., @a26af69be9..., @f28e40c7f3...) |
| High | release.yml:77, release-dev.yml:25 | supply-chain-gate actions/github-script@v7 now SHA-pinned (@f28e40c7f3...) |
| High | dependencies.py:1106–1115 | npm install zod now uses manifest_get("zod") with a versioned spec; zod@4.4.3 entry added to upstreams.yaml |
| Medium | manifest.py:133–139 | raw.get("entries") or [] coercion fixed — invalid values like {} now correctly raise ManifestError |
| Medium | check_manifest_drift.py:87–110 | UnicodeDecodeError now emits a deterministic Finding instead of aborting the run; _iter_lines returns a (lines, finding) tuple |
| Medium | check_manifest_drift.py:119–136 | _is_npm_install_pkg_pinned now requires strict numeric semver via _EXACT_SEMVER_RE; @beta, @next, @^1.2.3 correctly rejected |
| Medium | supply-chain.yml:20–25 | issues: write moved to job-level with explanatory comment; workflow-level token no longer over-grants write on PR runs |
| Medium | dependencies.py:247–251 | Soft-pin drift warning now print(msg, file=sys.stderr) on the success path so re-pin reminders are visible |
| Low | check_manifest_drift.py:141 | Redundant path.suffix == ".sh" clause removed from is_shell check |
| Low | test_drift_check.py:199–226 | Clean-repo tests now cover full SCAN_FILES surface (test_bootstrap_surface_clean + test_full_scan_surface_clean) |
Still open (one line each — no new details)
- Medium — `dependencies.py:147` — `nvm install 22` still hardcoded; not sourced from manifest `node-22-brew` entry
- Low — `upstreams.yaml:215–244` — `pnpm-brew`, `bun-brew`, `uv-brew` still missing `auto_upgrade: false`
- Low — `dependencies.py:321` — `replace("  ", " ")` double-space collapse not fully general (`" ".join(filter(None, parts))` would be cleaner)
- Low — `prerequisites.py:57–90` — tap-pollution mismatch logs only "Could not install {pkg}" with no expected-vs-actual tap detail
- Low — `manifest.py:151` — module-level `_cached` has no reset mechanism; tests that mock the manifest path can pollute subsequent test cases in the same process
- Low — `supply-chain.yml:15` — `checks: write` at workflow level appears unused (no step creates or updates check runs)
- Low — `test_dependencies.py:1511` — `call.kwargs["timeout"]` raises `KeyError` if `timeout` was ever passed positionally; `call.kwargs.get("timeout")` is safer
- Low — `test_dependencies.py:1465` — `test_install_golangci_lint_returns_false_on_failure` still missing `subprocess.run` mock; environment-dependent when `go` is absent from `PATH`
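The double-space point in the open items above can be shown in a few lines; the command string is a made-up example.

```python
cmd = "npm  install   -g  pnpm"  # accidental runs of spaces

# A single replace pass only halves each run, so a 3-space run survives as 2.
collapsed_once = cmd.replace("  ", " ")
# split/filter/join collapses any run of spaces in one pass.
fully_collapsed = " ".join(filter(None, cmd.split(" ")))

assert collapsed_once == "npm install  -g pnpm"
assert fully_collapsed == "npm install -g pnpm"
```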
…d.py crashed scan) Without GIT_CRYPT_KEY-based decryption, actions/checkout pulls launcher/** and console/** as encrypted blobs whose first non-ASCII byte (0xd1, position 12) trips the new UnicodeDecodeError-as-Finding path and fails the supply-chain check. Mirror the existing release.yml pattern: install git-crypt + run setup-git-crypt.sh before the drift checker. The drift checker still emits a Finding for any genuinely-broken-utf8 file outside git-crypt scope; encrypted files now reach the scan as plaintext like every developer's local checkout sees them.
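The decode failure described above is easy to reproduce in isolation; the byte string here is a fabricated stand-in for an encrypted blob, chosen so the first invalid byte sits at offset 12.

```python
# 12 ASCII bytes, then 0xD1 (a UTF-8 lead byte) followed by a non-continuation
# byte, which makes a strict utf-8 decode fail at exactly position 12.
blob = b"header-bytes" + bytes([0xD1]) + b"more-ciphertext"

try:
    blob.decode("utf-8")
    decoded = True
except UnicodeDecodeError as exc:
    decoded = False
    assert exc.start == 12  # byte offset reported by the decoder

assert decoded is False
```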
Claude finished @maxritter's task in 3m 2s —— View job Incremental Review — push
Actionable comments posted: 1
♻️ Duplicate comments (1)
installer/upstreams.yaml (1)
215-294: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Inconsistent `auto_upgrade: false` across `pin_kind: monitor` brew entries — silent version drift.

`auto_upgrade: false` is documented (lines 154–159) as the mechanism that keeps the installed brew version equal to the manifest `version` under routine `brew upgrade`. It's set on `git-brew`, `gh-brew`, `python-3-12-brew`, `node-22-brew`, and `nvm-brew`, but missing on the remaining 8 monitor brew entries: `pnpm-brew`, `bun-brew`, `uv-brew`, `go-brew`, `gopls-brew`, `jq-brew`, `ripgrep-brew`, `rtk-brew`. Without the flag, these formulas can silently drift off the pinned `version` on any `brew upgrade`, and the drift checker won't catch it (it scans code references, not installed brew state) — defeating the purpose of `pin_kind: monitor` for those entries.

Either set the flag on the 8 missing entries to align with the rest, or document why each is intentionally exempt (the `bun-brew` comment about non-core tap is unrelated to upgrade behavior).

🛡️ Proposed fix

  - id: pnpm-brew
    name: pnpm
    source_type: brew
    source_url: https://github.com/pnpm/pnpm
    version: "9.15.0"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: pnpm
    brew_tap: homebrew/core
    last_audited: "2026-05-07"
  - id: bun-brew
    name: bun
    source_type: brew
    source_url: https://github.com/oven-sh/bun
    version: "1.1.40"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: bun
    # bun ships from oven-sh/bun tap (Pilot pre-taps it); intentional non-core origin.
    brew_tap: oven-sh/bun
    last_audited: "2026-05-07"
  - id: uv-brew
    name: uv (Homebrew install path)
    source_type: brew
    source_url: https://github.com/astral-sh/uv
    version: "0.5.10"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: uv
    brew_tap: homebrew/core
    last_audited: "2026-05-07"
  - id: go-brew
    name: Go
    source_type: brew
    source_url: https://github.com/golang/go
    version: "1.23.4"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: go
    brew_tap: homebrew/core
    last_audited: "2026-05-07"
  - id: gopls-brew
    name: gopls
    source_type: brew
    source_url: https://github.com/golang/tools
    version: "0.17.0"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: gopls
    brew_tap: homebrew/core
    last_audited: "2026-05-07"
  - id: jq-brew
    name: jq
    source_type: brew
    source_url: https://github.com/jqlang/jq
    version: "1.7.1"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: jq
    brew_tap: homebrew/core
    last_audited: "2026-05-07"
  - id: ripgrep-brew
    name: ripgrep
    source_type: brew
    source_url: https://github.com/BurntSushi/ripgrep
    version: "14.1.1"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: ripgrep
    brew_tap: homebrew/core
    last_audited: "2026-05-07"
  - id: rtk-brew
    name: rtk (Homebrew install path)
    source_type: brew
    source_url: https://github.com/rtk-ai/rtk
    version: "0.1.0"
    pin_kind: monitor
+   auto_upgrade: false
    brew_formula: rtk
    brew_tap: homebrew/core
    last_audited: "2026-05-07"

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@installer/upstreams.yaml` around lines 215 - 294, The brew entries with pin_kind: monitor (ids pnpm-brew, bun-brew, uv-brew, go-brew, gopls-brew, jq-brew, ripgrep-brew, rtk-brew) are missing auto_upgrade: false which allows silent version drift; add auto_upgrade: false to each of those entries to ensure installed formulæ remain pinned to the manifest version, or alternatively add a clear comment next to each id explaining why it is intentionally exempt from auto_upgrade (so reviewers know this is deliberate).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/release-dev.yml:
- Around line 15-23: The Supply-Chain Gate job (supply-chain-gate) lacks a
job-level timeout, so add a timeout-minutes setting under the job definition to
guard against a hung github.rest.checks.listForRef call; specifically, update
the supply-chain-gate job configuration to include a reasonable timeout-minutes
value (e.g., 15) so the runner is forcibly cancelled if the polling loop blocks
indefinitely.
---
Duplicate comments:
In `@installer/upstreams.yaml`:
- Around line 215-294: The brew entries with pin_kind: monitor (ids pnpm-brew,
bun-brew, uv-brew, go-brew, gopls-brew, jq-brew, ripgrep-brew, rtk-brew) are
missing auto_upgrade: false which allows silent version drift; add auto_upgrade:
false to each of those entries to ensure installed formulæ remain pinned to the
manifest version, or alternatively add a clear comment next to each id
explaining why it is intentionally exempt from auto_upgrade (so reviewers know
this is deliberate).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 62c5a24f-f803-49e6-9bb4-b8a2d25d4b02
⛔ Files ignored due to path filters (2)
- console/src/ui/viewer/views/Settings/index.tsx is excluded by !console/**
- pilot/ui/viewer-bundle.js is excluded by !pilot/**
📒 Files selected for processing (11)
- .devcontainer/devcontainer-lock.json
- .github/renovate.json
- .github/workflows/release-dev.yml
- .github/workflows/release.yml
- .github/workflows/supply-chain.yml
- installer/manifest.py
- installer/steps/dependencies.py
- installer/tests/unit/steps/test_dependencies.py
- installer/tests/unit/test_drift_check.py
- installer/upstreams.yaml
- scripts/check_manifest_drift.py
✅ Files skipped from review due to trivial changes (3)
- .devcontainer/devcontainer-lock.json
- installer/manifest.py
- .github/renovate.json
🚧 Files skipped from review as they are similar to previous changes (3)
- .github/workflows/supply-chain.yml
- .github/workflows/release.yml
- installer/tests/unit/test_drift_check.py
  supply-chain-gate:
    # Cross-workflow `needs:` cannot link to supply-chain.yml. Poll the Checks
    # API for the `supply-chain` check on this SHA — both workflows start on
    # the same push, so we wait up to 12 min for it to complete.
    name: Supply-Chain Gate
    permissions:
      contents: read
      checks: read
    runs-on: ubuntu-latest
Add timeout-minutes to guard against hung API calls.
The script enforces a 12-minute programmatic deadline, but the job itself carries the 6-hour GitHub default. A single github.rest.checks.listForRef call that hangs indefinitely (e.g., during a GitHub API degradation event) would bypass the Date.now() deadline guard — it is only evaluated at the top of the loop, after the awaited call returns. Adding a job-level timeout ensures the runner is always released.
🛡️ Proposed fix
name: Supply-Chain Gate
+ timeout-minutes: 15
permissions:
contents: read
checks: read🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.github/workflows/release-dev.yml around lines 15 - 23, The Supply-Chain
Gate job (supply-chain-gate) lacks a job-level timeout, so add a timeout-minutes
setting under the job definition to guard against a hung
github.rest.checks.listForRef call; specifically, update the supply-chain-gate
job configuration to include a reasonable timeout-minutes value (e.g., 15) so
the runner is forcibly cancelled if the polling loop blocks indefinitely.
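The failure mode this comment describes, a deadline that is only evaluated between awaited calls, can be sketched in a few lines. `list_checks` is a stand-in for the Checks API call, not the workflow's actual script.

```python
import time


def wait_for_check(list_checks, deadline_s: float = 720.0, poll_s: float = 0.0) -> bool:
    """Poll until the check completes or the deadline passes."""
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:  # deadline checked only here
        status = list_checks()  # if this single call hangs, the loop never re-checks
        if status == "completed":
            return True
        time.sleep(poll_s)
    return False


assert wait_for_check(lambda: "completed") is True
assert wait_for_check(lambda: "queued", deadline_s=0.05) is False
```

A job-level `timeout-minutes` bounds the whole job even when one awaited call never returns, which a loop like this cannot do for itself.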
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/supply-chain.yml:
- Around line 63-71: The issue creation can fail if the 'security' or
'supply-chain' labels don't exist; change the logic so github.rest.issues.create
is called without the labels field (use the same title/body from context and run
info), then call github.rest.issues.addLabels to add ['security','supply-chain']
inside a try/catch and ignore or log errors from addLabels so the alert step
won't fail the workflow if labels are missing; reference
github.rest.issues.create and github.rest.issues.addLabels and the labels array
when implementing this defensive flow.
- Around line 12-25: The workflow-level permissions block currently declares
"checks: write" and "pull-requests: read" but is shadowed by the job-level
permissions in the supply-chain job; update the permissions so the effective job
permissions include any needed rights: either remove "checks: write" and
"pull-requests: read" from the top-level permissions to avoid misleading
over‑scope, or move those entries into the job's permissions under the
supply-chain job (jobs -> supply-chain -> permissions) so the job actually
receives "checks: write" and/or "pull-requests: read" as required.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b4ccc9cf-205c-4f65-b212-2ac2650df736
📒 Files selected for processing (1)
.github/workflows/supply-chain.yml
permissions:
  contents: read
  pull-requests: read
  checks: write

jobs:
  supply-chain:
    name: supply-chain
    permissions:
      contents: read
      # `issues: write` is scoped to this job and only used by the nightly
      # cron's failure-handler step. PR + workflow_dispatch invocations don't
      # need it, so granting at the workflow level was over-scoped.
      issues: write
`checks: write` at workflow level is shadowed by the job-level permissions block and never granted.
In GitHub Actions, when a job defines its own `permissions` block (lines 20–25), it completely replaces the workflow-level permissions for that job. Since this is the only job in the workflow, the workflow-level `checks: write` (line 15) is never actually applied to any job. If `checks: write` is not needed, remove it from the workflow-level block to reduce the apparent permission surface. If it is needed (e.g., for a future step or a downstream workflow that checks for a passing check run), move it into the job-level block.
Similarly, `pull-requests: read` at the workflow level (line 14) is also shadowed.
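The replace-not-merge semantics are easy to demonstrate with a minimal standalone workflow (illustrative only, not this repo's file):

```yaml
# Minimal illustration of GitHub Actions permission shadowing.
permissions:          # workflow-level defaults
  contents: read
  checks: write       # never granted below — the job block replaces this wholesale
jobs:
  only-job:
    runs-on: ubuntu-latest
    permissions:      # replaces (does not merge with) the workflow-level block
      contents: read
      issues: write   # effective permissions: contents:read + issues:write only
    steps:
      - run: echo "checks: write is NOT available to this job"
```

Workflow-level `permissions` only apply to jobs that declare no `permissions` block of their own.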
🔧 Proposed fix — move `checks: write` to the job block (or drop it if unused)

```diff
 permissions:
   contents: read
-  pull-requests: read
-  checks: write
+  # workflow-level fallback only; actual permissions are set per-job below
 jobs:
   supply-chain:
     name: supply-chain
     permissions:
       contents: read
       issues: write
+      checks: write  # add only if a step actually needs it
```
```yaml
      uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7
      with:
        script: |
          await github.rest.issues.create({
            ...context.repo,
            title: `Supply-chain: nightly scan failed (${new Date().toISOString().slice(0,10)})`,
            body: `Nightly supply-chain scan failed for ${context.sha}. See workflow run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
            labels: ['security', 'supply-chain']
          });
```
Issue creation will fail with HTTP 422 if the `security` or `supply-chain` labels don't exist in the repository.
The `github.rest.issues.create` call passes `labels: ['security', 'supply-chain']` directly. If either label is absent from the repo, the GitHub API returns a 422 and the step errors out — ironically causing a double-failure on nightly runs (the real failure plus a failed alert step). The safest fix is to omit `labels` here and add them with a separate `addLabels` call wrapped in a try/catch, or ensure the labels are pre-created as part of repo setup.
🛡️ Proposed fix — defensive label handling

```diff
       script: |
+        const LABELS = ['security', 'supply-chain'];
+        for (const label of LABELS) {
+          try {
+            await github.rest.issues.getLabel({ ...context.repo, name: label });
+          } catch {
+            await github.rest.issues.createLabel({ ...context.repo, name: label, color: 'e11d48' });
+          }
+        }
         await github.rest.issues.create({
           ...context.repo,
           title: `Supply-chain: nightly scan failed (${new Date().toISOString().slice(0,10)})`,
           body: `Nightly supply-chain scan failed for ${context.sha}. See workflow run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
-          labels: ['security', 'supply-chain']
+          labels: LABELS
         });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      uses: actions/github-script@f28e40c7f34bde8b3046d885e986cb6290c5673b # v7
      with:
        script: |
          const LABELS = ['security', 'supply-chain'];
          for (const label of LABELS) {
            try {
              await github.rest.issues.getLabel({ ...context.repo, name: label });
            } catch {
              await github.rest.issues.createLabel({ ...context.repo, name: label, color: 'e11d48' });
            }
          }
          await github.rest.issues.create({
            ...context.repo,
            title: `Supply-chain: nightly scan failed (${new Date().toISOString().slice(0,10)})`,
            body: `Nightly supply-chain scan failed for ${context.sha}. See workflow run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}`,
            labels: LABELS
          });
```
Max-plan model defaulting (the actual install bug):
- launcher/model_config.py: DEFAULT_MODEL_CONFIG had spec-implement and
spec-verify defaulting to 'sonnet' with extendedContext: True globally,
which on a Max plan resolves to sonnet[1m] — but Max does NOT include
Sonnet 1M, so the skills failed at runtime. The v9 migration was supposed
to flip non-Max users to sonnet, but it assumed the default was 'opus'
and only triggered on existing config.json files (fresh installs skipped
the entire migration runner because the file didn't exist yet).
- Reverted DEFAULT_MODEL_CONFIG.spec-implement and spec-verify to 'opus'.
- installer/steps/config_migration.py: when config.json is missing, treat
it as a fresh install with raw={} so all migrations (including v9) run
against a synthetic empty config and the resulting subscription-aware
defaults get written. Max users now land on opus[1m] (works); non-Max
users get sonnet[1m] via v9 (also works).
- Added regression tests covering both branches.
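The fresh-install branch above can be sketched as a minimal migration runner. This is a hedged illustration under assumed names — `run_migrations`, `migrate_v9`, and the `models` key are hypothetical, not the repo's actual API:

```python
import json
from pathlib import Path


def migrate_v9(raw: dict, is_max_plan: bool) -> dict:
    # v9 (illustrative): non-Max plans get 'sonnet' for the two spec skills;
    # Max keeps the 'opus' default, which resolves to opus[1m] and works.
    if not is_max_plan:
        models = raw.setdefault("models", {})
        for skill in ("spec-implement", "spec-verify"):
            models[skill] = "sonnet"
    return raw


def run_migrations(config_path: Path, is_max_plan: bool) -> dict:
    # The fix: a missing config.json is treated as a fresh install, so
    # migrations run against a synthetic empty config instead of being
    # skipped entirely (the old behavior that caused the bug).
    raw = json.loads(config_path.read_text()) if config_path.exists() else {}
    raw = migrate_v9(raw, is_max_plan)
    config_path.write_text(json.dumps(raw, indent=2))
    return raw
```

The key line is the `{}` fallback: with it, version-gated migrations see a fresh install as "everything needs applying" and the subscription-aware defaults get written on first run.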
Settings UI fixes:
- console: fix Worker URL truncation in the new 2-col Console+Security
layout. Card body switched from a single horizontal flex-wrap row to a
vertical stack: row 1 has label/input/reset/save (Save now uses ml-auto
to right-align), row 2 shows the resolved URL with break-all so the
full `http://localhost:N` always renders, row 3 is the range hint.
- Both sections now use `flex flex-col` + `flex-1` inside an
`items-stretch` grid so the Console card height matches the Security
toggle box.
Renovate noise reduction (62 projected PRs → ~10-15):
- major: enabled false — breaking-change PRs go to the dashboard only;
evaluate manually instead of getting auto-PR'd.
- pinDigests: false — actions stay SHA-pinned at the version we set;
no perpetual digest-bump PRs against the workflows we just hardened.
- Group all minor/patch devDependencies into a single weekly PR per
workspace, instead of one PR per package.
- Disable docs/site and docs/docusaurus entirely — marketing site is
not on the supply-chain audit path.
- Enable dependencyDashboard explicitly so the surfaced-but-unbumped
list (majors, ignored docs deps) is visible.
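Put together, those settings might look like the following in `renovate.json` — an illustrative sketch using documented Renovate options, not the repo's actual file:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "dependencyDashboard": true,
  "major": { "enabled": false },
  "pinDigests": false,
  "packageRules": [
    {
      "matchDepTypes": ["devDependencies"],
      "matchUpdateTypes": ["minor", "patch"],
      "groupName": "devDependencies (non-major)",
      "schedule": ["before 6am on monday"]
    },
    {
      "matchFileNames": ["docs/site/**", "docs/docusaurus/**"],
      "enabled": false
    }
  ]
}
```

With `major.enabled: false`, breaking-change updates still appear on the dependency dashboard but never open PRs; `pinDigests: false` stops Renovate from chasing digest bumps on already SHA-pinned actions.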
Tests: 2168 pass.
Claude finished @maxritter's task in 4m 34s (Incremental Review — push)
🎉 This PR is included in version 8.7.0 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
Summary
Lands three concurrent threads that have been accumulating on `dev`.

Supply-chain monitoring details

- `installer/upstreams.yaml` (37 entries) + `installer/manifest.py` — single source of truth for npm/brew/curl/mcp/pypi pins; loader/validator/dataclass with module-level cache (`cached_load()`).
- `installer/steps/dependencies.py` + `prerequisites.py`: `install_*` reads its package + version from the manifest. `--ignore-scripts` denied by default; `better-sqlite3` is the only allow with documented justification.
- `_curl_pipe_with_hash_verify` helper — sha256-verified curl-then-execute, owner-only temp file (0o600), cleanup on success and mismatch paths, soft-pin warning path for vendor-managed endpoints (claude.ai/install.sh).
- `_verify_homebrew_tap` rejects formulas from unexpected taps before `brew install`.
- `.mcp.json` + `pilot/.mcp.json` — every npx-launched MCP server pinned `@<version>`.
- `install.sh` + `launcher/build.py` — bootstrap surface: `uv` installer sha256-verified, every `--with` pinned (`rich==14.2.0`, `certifi==2026.4.22`, `PyYAML==6.0.3`, `cryptography==46.0.6`).
- `scripts/check_manifest_drift.py` — regex drift checker (6 files, MCP cross-ref, `# noqa: drift-check` requires non-empty justification).
- `.github/workflows/supply-chain.yml` — schema validation + drift gate on PR + nightly cron + manual dispatch. Lean 53-line workflow; nightly opens a `security` + `supply-chain` issue on failure.
- `.github/renovate.json` — manifest-aware customManagers for npm/brew/curl/pypi; high-risk packages get individual PRs; non-GitHub curl entries get a `needs-manual-bump` label.
- `release.yml` + `release-dev.yml` — `supply-chain-gate` job uses the GitHub Checks API to block release tags when the supply-chain check is red for the SHA.
- `docs/security/AUDIT.md` — audit log how-to + soft-pin SLA + Renovate gap + V1 non-goals.

V1 explicit non-goals (intentionally cut from the original plan to minimize GitHub-side ops): OpenSSF Scorecard scoring loop, Socket.dev workflow step, branch-protection required check. The release-time gate is sufficient for a single-maintainer repo.
Tests
2163 Python tests pass (513 installer including 24 new manifest + 14 drift + 9 curl-pipe-verify + 6 brew tap-verification + 11 precache pinning).
Notes for reviewers / bots
- … `--with PyYAML==`, which would have broken `python -m installer` on a fresh box (the manifest module imports yaml at module load time).
- `pilot/hooks/tests/test_secret_scanner.py` was rewritten as string concatenation so GitHub push-protection doesn't false-positive on the synthetic placeholder.

Summary by CodeRabbit
New Features
Bug Fixes / Hardening
Chores
Tests