diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md index 9ab031e8eaf..d253f147f81 100644 --- a/.github/copilot-instructions.md +++ b/.github/copilot-instructions.md @@ -25,10 +25,23 @@ azldev.toml # Root config — includes distro/ and base │ ├── distro.toml # Includes all *.distro.toml │ ├── fedora.distro.toml # Fedora: dist-git URIs, lookaside, version branches │ └── mock/ # Mock build environment configs +├── specs/ # Rendered specs (generated by `azldev comp render`) +│ └── // # Per-component: spec, patches, scripts (no source tarballs) └── external/schemas/ └── azldev.schema.json # Authoritative schema for all TOML config files ``` +## Rendered Specs (`specs/`) + +The `specs/` directory (as specified by `rendered-specs-dir` config) contains rendered spec files generated by `azldev comp render`. These are the final specs with all overlays applied — ready for check-in. After adding or modifying components/overlays, re-render: + +```bash +azldev comp render -p # Single component +azldev comp render -a # All components (slow) +``` + +To inspect what a component's spec looks like after overlays, read `specs///.spec` directly — no need to run `prep-sources` just to view the result. Use `prep-sources` when you need the full source tree (tarballs) or want to diff pre/post overlay output for debugging. + ## Key Concepts **Components** = unit of packaging (→ one or more RPMs). Spec sources: upstream (default, from Fedora dist-git), local, or pinned upstream. See [`comp-toml.instructions.md`](instructions/comp-toml.instructions.md#spec-source-types) for syntax. @@ -52,13 +65,15 @@ Run all commands from the repo root (where `azldev.toml` lives). 
If the terminal | Add a component | `azldev comp add` | | Build a component | `azldev comp build -p -q` | | Build chain (auto-publish to local repo) | `azldev comp build --local-repo-with-publish ./base/out -p -p -q` | +| Render all specs for check-in | `azldev comp render -a` | +| Render a single component | `azldev comp render -p ` | | Prepare sources (apply overlays) | `azldev comp prep-sources -p --force -o -q` | | Prepare sources (skip overlays) | `azldev comp prep-sources -p --skip-overlays --force -o -q` | | Build, keep env on failure | `azldev comp build -p --preserve-buildenv on-failure -q` | | List images | `azldev image list` | | Build an image | `azldev image build` | | Boot an image in QEMU | `azldev image boot` | -| Dump resolved config | `azldev config dump -q -O json` | +| Dump resolved config | `azldev config dump -q -f json` | | Advanced commands (like mock shell) | `azldev adv --help` (hidden from normal help) | ## Repository Hygiene Rules diff --git a/.github/instructions/pr-check-workflows.instructions.md b/.github/instructions/pr-check-workflows.instructions.md new file mode 100644 index 00000000000..92102bdbead --- /dev/null +++ b/.github/instructions/pr-check-workflows.instructions.md @@ -0,0 +1,109 @@ +--- +applyTo: ".github/workflows/**" +description: ALWAYS review these instructions when reading or modifying PR check workflows, or any scripts referenced by the workflows. +--- + +# PR Check Workflow Guidelines + +## Fork-PR-safe pattern: stub + reusable + +Problem: `pull_request` triggers on fork PRs run without secrets and with a read-only token. `pull_request_target` runs with write access but checks out the BASE ref by default — easy to footgun into RCE if you then check out PR code with full privileges. + +Pattern: + +1. **Stub workflow** on the default branch — triggered by `pull_request_target`, guards on repo owner, calls the reusable workflow. This is the only file GitHub will load from the base branch, so it locks the entrypoint. 
+2. **Reusable workflow** (`workflow_call`) holds the real logic. Lives on the PR branch, so contributors can iterate on it. +3. Stub passes `pull-requests: write` / `contents: read` only. Reusable declares its own minimum permissions. + +Never check out PR code into a privileged job and then execute it on the host. Either: +- Run untrusted code inside a container with no secrets mounted, **or** +- Keep the privileged job read-only (lint, comment-post) and isolate code execution to a separate unprivileged job. + +## Prefer in-container for anything that executes PR code + +If the check builds, renders, or runs PR code, do the whole thing inside the build container. Mock, a critical component of many azldev workflows, requires many privileges to run successfully and is not available on Ubuntu, the default runner image for GitHub Actions. + +- Mount the PR checkout read-only when possible; if writes are needed (e.g. `git add -N`), mount rw but don't leak host paths or secrets. +- Produce **all outputs** (reports, patches, diffs) inside the container and write them to a bind-mounted output dir. Host-side steps then only read these artifacts (JSON report, patch files, etc.). +- This eliminates a huge class of config-driven git RCE vectors (`core.fsmonitor`, `core.sshCommand`, hook files, etc.) because the host never runs git against PR-controlled config. + +### Container config + +The shared runner image is [`.github/workflows/containers/azldev-runner.Dockerfile`](../workflows/containers/azldev-runner.Dockerfile). It's a minimal Azure Linux base with `mock`, `git`, `python3`, `sudo`, and `azldev` itself (installed to `/usr/local/bin` during image build) — enough to run any `azldev` subcommand. Reuse it rather than building a per-check image; add extras via a derived `FROM localhost/azldev-runner` stage if a check genuinely needs more. + +`azldev` is baked in via `go install …/azldev@main` during image build.
The pin lives in the Dockerfile so it can be reviewed and bumped deliberately. Image build context is `.github/workflows/containers/` only — keep it that way so the build can never see PR-controlled files. + +Build it with the caller's UID so bind-mounted writes don't end up root-owned: + +```yaml +- name: Build azldev runner + run: | + docker build \ + --build-arg UID=$(id -u) \ + -t localhost/azldev-runner \ + -f .github/workflows/containers/azldev-runner.Dockerfile \ + .github/workflows/containers/ +``` + +#### Bind-mount conventions + +| Mount | Mode | Purpose | +| ----- | ---- | ------- | +| `pr-head/` → `/workdir` | rw | PR checkout. rw because `azldev` writes to `specs/`, `base/build/`, etc. | +| `/` → `/output` | rw | Trusted-shape outputs (JSON reports, patches, ...) the container produces for the host to consume after the run. | +| `.github/workflows/scripts/` → `/scripts` | ro | Helper scripts from the trusted base checkout. | + +#### Sandbox flags (minimum viable for `mock`) + +```yaml +docker run --rm \ + --cap-add=SYS_ADMIN \ + --security-opt seccomp=unconfined \ + --security-opt apparmor=unconfined \ + ... +``` + +Why each one is needed: + +- **`--cap-add=SYS_ADMIN`** — `mock` sets up mount namespaces for its chroot. Without this you get `mount … exit status 32` during chroot init. +- **`--security-opt seccomp=unconfined`** — `mock` uses syscalls (`unshare`, `pivot_root`, etc.) that Docker's default seccomp profile blocks. +- **`--security-opt apparmor=unconfined`** — `ubuntu-latest` ships the `docker-default` AppArmor profile, which blocks `mount -t tmpfs` on paths under `/var/lib/mock` **even with `SYS_ADMIN` granted**. This is the confusing one; symptom is the same `exit status 32` after seccomp is already unconfined. + +Avoid `--privileged` — it grants every capability and removes cgroup restrictions, which is a much bigger blast radius than the three flags above. 
+ +`--security-opt no-new-privileges` would be nice but `mock`'s `userhelper` needs setuid, which that flag blocks. + +#### Running commands in the container + +Use `bash -eu -o pipefail -c '…'` as the entrypoint invocation so a failure inside the heredoc actually fails the step: + +```yaml +localhost/azldev-runner \ + bash -eu -o pipefail -c ' + azldev component render -q -a --clean-stale -O json > /output/render.json + python3 /scripts/check_rendered_specs.py \ + --specs-dir "$(azldev config dump -q -f json | jq -r .project.renderedSpecsDir)" \ + --report /output/report.json \ + --patch /output/rendered-specs.patch + ' +``` + +Use single-quotes around the `-c` payload so host-side `${{ … }}` interpolation doesn't leak into the container script. If you need to pass a host value in, use `-e VAR=…` and reference `"$VAR"` inside — same script-injection concern as any other shell step. + +## Shell hardening in workflow steps + +- Start every multi-line `run:` with `set -euo pipefail`. +- Quote **every** expansion involving a workflow input, matrix value, or file path: `"${VAR}"`, not `$VAR`. +- Never interpolate `${{ github.event.pull_request.* }}` directly into a shell script — assign to an `env:` var first, then reference as `"$VAR"`. Direct interpolation is a classic script-injection vector. +- For paths that must stay inside the repo, resolve with `realpath -m` and verify they start with the repo root prefix before use. + +## Markdown / HTML injection in PR comments + +- Escape any PR-controlled string (file paths, error messages) before dropping into Markdown. +- Prefer code spans (`` `path` ``) or fenced blocks for anything path-like. + +## zizmor / pedantic linting + +Workflows are linted with `zizmor --pedantic`. + +Use `# zizmor: ignore[]` comments as an absolute last resort, and provide a comprehensive justification for why the rule is being ignored. 
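The path-containment rule above (resolve with `realpath -m`, then verify the repo-root prefix) can be sketched as a small helper. This is an illustrative function, not an existing script in this repo:

```shell
# Hedged sketch of the path-containment check: resolve both paths with
# realpath -m, then require the candidate to sit strictly under the root.
# The function name and interface are illustrative.
require_in_repo() {
  local repo_root resolved
  repo_root="$(realpath -m -- "$1")"
  resolved="$(realpath -m -- "$2")"
  case "$resolved" in
    "$repo_root"/*) printf '%s\n' "$resolved" ;;  # safe: print normalized path
    *) echo "refusing path outside repo: $2" >&2; return 1 ;;
  esac
}
```

Note that `realpath -m` normalizes `..` segments without requiring the path to exist, which is why the prefix check must run on the *resolved* value, never the raw input.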
diff --git a/.github/instructions/rendered-specs.instructions.md b/.github/instructions/rendered-specs.instructions.md new file mode 100644 index 00000000000..0dfd02c769a --- /dev/null +++ b/.github/instructions/rendered-specs.instructions.md @@ -0,0 +1,32 @@ +--- +applyTo: "specs/**/*" +description: ALWAYS refer to this when working with rendered spec files (`*.spec`) in the `specs/` directory. +--- + +# Rendered Spec Files (`specs/*.spec`) + +## What are rendered specs? + +Rendered specs are generated by the `azldev comp render` command based on the component definitions and overlays. They are output to the `specs/` directory (as specified by `rendered-specs-dir` config) and should not be edited directly. + +They are meant to be consumed by downstream processes (e.g., build pipelines) and are the source of truth for all subsequent steps. Any changes to the spec should be made via the component definition and overlays, not by editing the rendered spec. + +## Changing a rendered spec + +To change a rendered spec, modify the component's `.comp.toml` and/or its overlays. Then re-run the render command to regenerate the spec: + +```bash +# VERY SLOW - Re-render all specs, removes any stale specs that are no longer defined in the components +azldev comp render -a --clean-stale -O json +``` + +```bash +# Small set, will NOT remove stale specs, faster for iterative development +azldev comp render -p -p -O json +``` + +```bash +# Custom output directory, useful for debugging. When not using the automatically configured spec directory, --force is +# required to delete and re-create the output folders if they already exist. 
+azldev comp render -p -O json -o ./base/build/work/scratch/rendered-specs --force +``` diff --git a/.github/instructions/spec.instructions.md b/.github/instructions/spec.instructions.md index d8156a34946..7751d8b3fa1 100644 --- a/.github/instructions/spec.instructions.md +++ b/.github/instructions/spec.instructions.md @@ -1,5 +1,6 @@ --- -applyTo: "**/*.spec" +applyTo: "base/**/*.spec" +description: Read this when working with spec files (`*.spec`) that are hand-maintained in the Azure Linux repo (not rendered). --- # RPM Spec Files (`*.spec`) diff --git a/.github/prompts/azl-add-component.prompt.md b/.github/prompts/azl-add-component.prompt.md index 2638ada1395..5a5564c1556 100644 --- a/.github/prompts/azl-add-component.prompt.md +++ b/.github/prompts/azl-add-component.prompt.md @@ -19,6 +19,6 @@ Follow the workflow in the [skill-add-component skill](../skills/skill-add-compo - Needs overlays or customizations → create `${input:component_name}/${input:component_name}.comp.toml` - Needs extensive changes overlays can't handle → forked local spec (**last resort**, requires explicit user sign-off) 6. Add overlays with meaningful `description` fields explaining *why* each change is needed -7. Validate: `azldev comp prep-sources -p ${input:component_name} --force -o base/build/work/scratch/${input:component_name}-post -q` (with overlays) and diff against the skip-overlays output +7. Render and verify: `azldev comp render -p ${input:component_name}` and inspect `specs/` (as specified by `rendered-specs-dir` config) output. For deeper debugging, diff pre/post overlay output with `prep-sources`. 8. Build: `azldev comp build -p ${input:component_name} -q` 9. 
Smoke-test the built RPMs in a mock chroot diff --git a/.github/prompts/azl-debug-component.prompt.md b/.github/prompts/azl-debug-component.prompt.md index 3a139173696..a983427cf61 100644 --- a/.github/prompts/azl-debug-component.prompt.md +++ b/.github/prompts/azl-debug-component.prompt.md @@ -27,7 +27,13 @@ First, determine the error category: - Follow the `skill-mock` workflow: install RPMs in a mock chroot, verify contents, check dependencies - Common causes: missing Requires, wrong file paths, permission issues -**When in doubt**, start with a `prep-sources` pre/post diff to determine if the issue is overlay-related: +**When in doubt**, start with a render to determine if the issue is overlay-related: + +```bash +azldev comp render -p ${input:component_name} +``` + +If `render` fails, the issue is overlay-related (category 1). For deeper debugging, diff pre/post overlay output: ```bash azldev comp prep-sources -p ${input:component_name} --skip-overlays -o base/build/work/scratch/${input:component_name}-pre --force @@ -35,7 +41,7 @@ azldev comp prep-sources -p ${input:component_name} -o base/build/work/scratch/$ diff -r base/build/work/scratch/${input:component_name}-pre base/build/work/scratch/${input:component_name}-post ``` -If `prep-sources` itself fails, the issue is overlay-related (category 1). If it succeeds but `comp build` fails, it's a build issue (category 2). +If both render and `prep-sources` succeed but `comp build` fails, it's a build issue (category 2). ## Fix diff --git a/.github/prompts/azl-update-component.prompt.md b/.github/prompts/azl-update-component.prompt.md index 25c7c1248d6..bbab2bf3d46 100644 --- a/.github/prompts/azl-update-component.prompt.md +++ b/.github/prompts/azl-update-component.prompt.md @@ -18,6 +18,10 @@ Use structural patterns from [comp-toml.instructions.md](../instructions/comp-to - `config` — build config changes (`build.defines`, `build.without`) 3. **Apply changes** to the `.comp.toml` file 4. 
**Verify overlays still apply:** + ```bash + azldev comp render -p ${input:component_name} + ``` + Inspect `specs/` (as specified by `rendered-specs-dir` config) output. For deeper debugging, diff pre/post overlay output: ```bash azldev comp prep-sources -p ${input:component_name} --skip-overlays -o base/build/work/scratch/${input:component_name}-pre --force azldev comp prep-sources -p ${input:component_name} -o base/build/work/scratch/${input:component_name}-post --force diff --git a/.github/skills/skill-add-component/SKILL.md b/.github/skills/skill-add-component/SKILL.md index bcf83e16802..6d4314a1232 100644 --- a/.github/skills/skill-add-component/SKILL.md +++ b/.github/skills/skill-add-component/SKILL.md @@ -69,6 +69,16 @@ Overlays are vastly preferable to maintaining a forked spec, they get automatic ## Validate +After adding overlays or customizations, render the spec to verify: + +```bash +azldev comp render -p +# Inspect the result +cat specs///.spec +``` + +For deeper debugging (diffing pre/post overlay output with full sources): + > Use a temp dir for `prep-sources` output. Use `--force` to overwrite an existing output dir. `prep-sources -o ` writes to a user-specified directory (NOT `base/out/` — that's for `comp build` output). @@ -77,9 +87,11 @@ Overlays are vastly preferable to maintaining a forked spec, they get automatic azldev comp prep-sources -p --skip-overlays --force -o base/build/work/scratch/-pre -q azldev comp prep-sources -p --force -o base/build/work/scratch/-post -q diff -r base/build/work/scratch/-pre base/build/work/scratch/-post +``` +```bash # Test build (RPMs land in base/out/ per project.toml output-dir) azldev comp build -p -q ``` -For testing the built RPMs, see the [`skill-mock`](../skill-mock/SKILL.md) skill. New components always need a smoke-test. For the full inner loop cycle (investigate → modify → verify → build → test → inspect), see [`skill-build-component`](../skill-build-component/SKILL.md). 
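For reviewing the overlay effect offline, the pre/post `prep-sources` trees can be captured as a single patch file. A hedged sketch (helper name and paths are illustrative, not part of azldev):

```shell
# Hedged sketch: turn the pre/post prep-sources trees into one reviewable
# patch file. Pass the two scratch dirs and an output path.
capture_overlay_patch() {
  local pre="$1" post="$2" out="$3"
  # diff exits 1 when the trees differ; that's the expected case, not an error
  diff -ruN "$pre" "$post" > "$out" || true
  echo "wrote $(grep -c '^+++ ' "$out") changed file(s) to $out"
}
```

Usage would be something like `capture_overlay_patch base/build/work/scratch/pre base/build/work/scratch/post overlay-effect.patch`; the resulting patch is easier to attach to a PR discussion than a raw `diff -r` transcript.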
+For testing the built RPMs, see the [`skill-mock`](../skill-mock/SKILL.md) skill. New components always need a smoke-test. For the full inner loop cycle (investigate → modify → render → build → test → inspect), see [`skill-build-component`](../skill-build-component/SKILL.md). diff --git a/.github/skills/skill-build-component/SKILL.md b/.github/skills/skill-build-component/SKILL.md index 79ced9db2c3..1c039d998f5 100644 --- a/.github/skills/skill-build-component/SKILL.md +++ b/.github/skills/skill-build-component/SKILL.md @@ -49,26 +49,40 @@ Build foundational packages first (e.g., `azurelinux-rpm-config`), then dependen The standard cycle for investigating, modifying, and verifying components: ``` -investigate → modify → verify → build → test → inspect +investigate → modify → render → build → test → inspect ``` | Step | Command | What to check | |------|---------|---------------| -| **Investigate** | `prep-sources --skip-overlays --force -o base/build/work/scratch/-pre` | Upstream spec/sources as-is | -| **Compare** | `prep-sources --force -o base/build/work/scratch/-post` + `diff -r ...-pre ...-post` | Current overlay effect | +| **Investigate** | Read `specs///.spec` or `prep-sources --skip-overlays --force -o base/build/work/scratch/-pre` | Upstream spec/sources as-is | +| **Compare** | `prep-sources --force -o base/build/work/scratch/-post` + `diff -r ...-pre ...-post` | Current overlay effect (deep debug) | | **Modify** | Edit `*.comp.toml` (overlays, defines, without) | — | -| **Verify** | `prep-sources --force -o base/build/work/scratch/-post` | Overlay applies cleanly | +| **Verify** | `comp render -p ` + inspect `specs///` | Overlay applies cleanly (fast path) | | **Build** | `comp build -p ` | RPMs appear in `base/out/` | | **Test** | `adv mock shell --add-package base/out/*.rpm` | Package installs, binary runs, basic functionality works | | **Inspect** | `comp build --preserve-buildenv always` + `adv mock shell` | BUILDROOT contents, file lists | +> 
**Prefer `comp render` for quick verification.** It's faster than `prep-sources` since it skips downloading source tarballs. Use `prep-sources` when you need the full source tree or want to diff pre/post overlay output for debugging. + > Use a temp dir for `prep-sources` output. Use `--force` to overwrite an existing output dir. > Package builds are often very long, so adjust command timeouts accordingly when using shell tools to run builds, or use background mode if available. ## Debugging Build Failures -### 1. Diff sources pre/post overlay +### 1. Render and inspect the spec + +The fastest way to verify overlays applied correctly: + +```bash +azldev comp render -p +# Inspect the result +cat specs///.spec +``` + +### 2. Diff sources pre/post overlay (deep debug) + +When you need to understand exactly what upstream provides vs. what overlays change: ```bash azldev comp prep-sources -p --skip-overlays --force -o base/build/work/scratch/-pre -q @@ -78,14 +92,14 @@ diff -r base/build/work/scratch/-pre base/build/work/scratch/-post This reveals whether overlays apply as intended or whether upstream changed. -### 2. Preserve build environment on failure +### 3. Preserve build environment on failure ```bash azldev comp build -p --preserve-buildenv on-failure -q # Use `always` to inspect even successful builds ``` -### 3. Enter mock shell (deep debug) +### 4. Enter mock shell (deep debug) For testing built RPMs or inspecting the chroot, see the [`skill-mock`](../skill-mock/SKILL.md) skill. Quick reference: diff --git a/.github/skills/skill-fix-overlay/SKILL.md b/.github/skills/skill-fix-overlay/SKILL.md index 707a40d6867..70f801255d8 100644 --- a/.github/skills/skill-fix-overlay/SKILL.md +++ b/.github/skills/skill-fix-overlay/SKILL.md @@ -7,7 +7,21 @@ description: "[Skill] Diagnose and fix overlay issues in Azure Linux components. ## Diagnosis Workflow -### 1. Reproduce and inspect +### 1. 
Render and inspect + +The fastest way to check if overlays apply cleanly: + +```bash +azldev comp render -p +# Inspect the result +cat specs///.spec +``` + +If `render` fails, the error message will identify which overlay failed and why. + +### 2. Diff pre/post overlay (deep debug) + +When you need to understand exactly what upstream provides vs. what overlays change: > Use a temp dir for `prep-sources` output. Use `--force` to overwrite an existing output dir. @@ -19,9 +33,7 @@ azldev comp prep-sources -p --force -o base/build/work/scratch/-pos diff -r base/build/work/scratch/-pre base/build/work/scratch/-post ``` -If `prep-sources` fails, the error message will identify which overlay failed and why. - -### 2. Inspect the upstream spec/sources +### 3. Inspect the upstream spec/sources Look at the pre-overlay output dir — this is what the overlay is trying to modify. Common root cause: upstream changed and the overlay's assumptions no longer hold. @@ -61,5 +73,5 @@ For overlay type reference (all 12 types with key fields), see [`comp-toml.instr - **Test incrementally.** Apply one overlay at a time and verify with `prep-sources`. Debugging 10 overlays at once is painful. - **Minimize overlays.** Each is a potential failure point. Prefer the smallest delta from upstream. - **Verify in chroot.** If overlays apply but the build still fails, use [`skill-mock`](../skill-mock/SKILL.md) to inspect the build environment. -- **Follow the inner loop.** The full cycle is: investigate → modify → verify → build → test → inspect. See [`skill-build-component`](../skill-build-component/SKILL.md) for details. +- **Follow the inner loop.** The full cycle is: investigate → modify → render → build → test → inspect. See [`skill-build-component`](../skill-build-component/SKILL.md) for details. - **Smoke-test after fixing overlays.** A clean apply and successful build don't guarantee working RPMs. See [`skill-mock`](../skill-mock/SKILL.md). 
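When diffing spec trees, changelog timestamp lines (e.g. `* Wed Apr 08 2026 azldev ...`) regenerate on every render and can drown out real changes. A hedged filter in that spirit; the pattern below is an approximation, not the exact regex the CI check uses:

```shell
# Hedged sketch: drop azldev-style changelog header lines from a diff so
# only substantive spec changes remain. The day/month pattern is illustrative.
filter_changelog_noise() {
  grep -vE '^[+-]\* (Mon|Tue|Wed|Thu|Fri|Sat|Sun) [A-Z][a-z]{2} [0-9]{2} [0-9]{4} ' || true
}
```

Typical use: `diff -ru base/build/work/scratch/pre base/build/work/scratch/post | filter_changelog_noise`.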
diff --git a/.github/workflows/check-rendered-specs-stub.yml b/.github/workflows/check-rendered-specs-stub.yml new file mode 100644 index 00000000000..b2678558f0b --- /dev/null +++ b/.github/workflows/check-rendered-specs-stub.yml @@ -0,0 +1,43 @@ +# Stub workflow — A copy of this workflow must live on the default branch (3.0) so that the +# pull_request_target event can trigger it with access to GITHUB_TOKEN (pull-requests: write). +# It delegates all real work to the reusable template on tomls/base/main. +# +# This two-stage design lets fork PRs trigger the check safely: the stub runs in the +# context of the default branch (with write token), but the reusable workflow checks out +# the PR's data files (TOML configs, specs) into a separate directory — never mixing +# untrusted code with execution context. +# +# The stub must exist on the default branch because pull_request_target always runs +# workflows from there. The reusable workflow on tomls/base/main has the actual scripts, +# container setup, and rendering logic. +name: Check Rendered Specs + +# pull_request_target gives us a GITHUB_TOKEN with pull-requests: write even for fork PRs. +# The stub itself runs NO code from the PR — it only delegates to a trusted reusable +# workflow pinned to tomls/base/main, which checks out PR data (not code) into an +# isolated subdirectory. +on: # zizmor: ignore[dangerous-triggers] + pull_request_target: + branches: + - tomls/base/main + +permissions: {} + +concurrency: + group: render-check-${{ github.event.pull_request.number }} + cancel-in-progress: true + +jobs: + check: + # Prevent forks from running a stale/vulnerable copy of this stub with Actions enabled + if: github.repository == 'microsoft/azurelinux' + # Intentionally branch-pinned so the reusable workflow picks up updates automatically. 
+ uses: microsoft/azurelinux/.github/workflows/check-rendered-specs.yml@tomls/base/main # zizmor: ignore[unpinned-uses] + permissions: + contents: read + pull-requests: write # Post/update/delete drift comments on PRs + with: + pr-head-sha: ${{ github.event.pull_request.head.sha }} + pr-head-repo: ${{ github.event.pull_request.head.repo.full_name }} + pr-number: ${{ github.event.pull_request.number }} + repo: ${{ github.repository }} diff --git a/.github/workflows/check-rendered-specs.yml b/.github/workflows/check-rendered-specs.yml new file mode 100644 index 00000000000..03f07f67819 --- /dev/null +++ b/.github/workflows/check-rendered-specs.yml @@ -0,0 +1,200 @@ +# Reusable workflow — renders specs from a PR and checks for drift. +# +# Called by the stub on the default branch (check-rendered-specs-stub.yml) via +# pull_request_target. The stub provides the PR details; this workflow does all +# the real work: +# 1. Checks out base branch (trusted tools/scripts) +# 2. Checks out PR head into pr-head/ (untrusted data — TOML configs, specs) +# 3. Renders specs inside a privileged container using azldev -C pr-head/ +# 4. Checks for drift (compares rendered output against PR's committed specs) +# 5. Posts a PR comment with results + downloadable fix patch +# +# Security: the PR checkout is data-only. We never execute code from the PR — +# azldev is installed from upstream, scripts come from the base branch checkout. +name: "Check Rendered Specs" + +on: + workflow_call: + inputs: + pr-head-sha: + required: true + type: string + pr-head-repo: + required: true + type: string + pr-number: + required: true + type: string + repo: + required: true + type: string + +permissions: {} + +# Belt-and-suspenders: the stub also sets concurrency, but keep it here so the +# contract survives refactoring / direct invocation. +concurrency: + group: render-check-${{ inputs.repo }}-${{ inputs.pr-number }} + cancel-in-progress: true + +jobs: + # Render PR's specs + check for drift. 
Runs PR-derived data through the + # container; deliberately has NO pull-requests write (and no secrets beyond + # the default read-only token) so that even if the container somehow leaks + # into the host, it can't touch the PR. All output flows to the next job via + # artifacts + job outputs only. + render: + name: Render + drift check + runs-on: ubuntu-latest + timeout-minutes: 60 + permissions: + contents: read + outputs: + patch-url: ${{ steps.upload-patch.outputs.artifact-url }} + steps: + # --- Trusted base branch (tools, scripts, container config) --- + - name: Checkout base (trusted) + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + repository: ${{ inputs.repo }} + ref: tomls/base/main + persist-credentials: false + + # --- PR head (untrusted data — TOML configs, overlays, specs) --- + - name: Checkout PR head (data) + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + repository: ${{ inputs.pr-head-repo }} + ref: ${{ inputs.pr-head-sha }} + path: pr-head + fetch-depth: 0 + persist-credentials: false + + - name: Build azldev runner container + run: | + docker build \ + --build-arg UID=$(id -u) \ + -t localhost/azldev-runner \ + -f .github/workflows/containers/azldev-runner.Dockerfile \ + .github/workflows/containers/ + + # Render + drift-check run entirely inside the container. The host never + # invokes git against PR data: all git operations happen in the sandboxed + # environment, and the host only reads trusted-shape outputs + # (report.json, rendered-specs.patch) from a dedicated output volume. + # This dodges the whole class of poisoned-.git/config attacks + # (diff.external, diff drivers, filter drivers, hooks, etc.). 
+ # + # Sandbox knobs: + # --cap-add=SYS_ADMIN mock needs mount namespaces for chroot + # seccomp=unconfined mock uses syscalls filtered by the default profile + # apparmor=unconfined ubuntu-latest ships docker-default AppArmor which + # blocks `mount -t tmpfs` on paths under /var/lib/mock + # even with SYS_ADMIN granted + # We still avoid --privileged (broader blast radius). + # --security-opt no-new-privileges would be nice but mock's userhelper + # requires setuid, which that flag blocks. + - name: Render + check for drift + id: check + continue-on-error: true # TODO: flip off once check stabilizes (see PR #16674) + env: + WORKSPACE: ${{ github.workspace }} + run: | + set -euo pipefail + mkdir -p "$WORKSPACE/render-output" + docker run --rm \ + --cap-add=SYS_ADMIN \ + --security-opt seccomp=unconfined \ + --security-opt apparmor=unconfined \ + -v "$WORKSPACE/pr-head:/workdir" \ + -v "$WORKSPACE/render-output:/output" \ + -v "$WORKSPACE/.github/workflows/scripts:/scripts:ro" \ + localhost/azldev-runner \ + bash -eu -o pipefail -c ' + azldev component render -q -a --clean-stale -O json \ + > /output/render-output.json + SPECS_DIR=$(azldev config dump -q -f json \ + | python3 -c "import json,sys; print(json.load(sys.stdin)[\"project\"][\"renderedSpecsDir\"])") + python3 /scripts/check_rendered_specs.py \ + --specs-dir "$SPECS_DIR" \ + --report /output/render-check-report.json \ + --patch /output/rendered-specs.patch + ' + + # Dual upload: `archive: false` (v7 feature) gives browser users a direct + # download of the raw patch (artifact name is derived from the filename, + # so `name:` is ignored here — that's why we need the second upload). + # The zipped `rendered-specs-patch` artifact is for `gh run download`, + # which only works with named artifacts. 
+ - name: Upload fix patch (unzipped, for browser download) + id: upload-patch + if: hashFiles('render-output/rendered-specs.patch') != '' + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + with: + path: render-output/rendered-specs.patch + archive: false + + # See: https://github.com/cli/cli/issues/13012 for why this is needed. + - name: Upload fix patch (zipped, for gh run download) + if: hashFiles('render-output/rendered-specs.patch') != '' + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + with: + name: rendered-specs-patch + path: render-output/rendered-specs.patch + + - name: Upload render output + if: always() + uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0 + with: + name: render-output + path: | + render-output/render-output.json + render-output/render-check-report.json + + # Post the PR comment. Runs in a separate job so the `pull-requests: write` + # token is only granted to a job that does NOT execute any PR-derived code: + # it only checks out the trusted base branch (for the poster script) and + # consumes the trusted-shape report artifact from the render job. 
+ comment: + name: Post drift comment + needs: render + if: always() && needs.render.result != 'cancelled' + runs-on: ubuntu-latest + timeout-minutes: 5 + permissions: + contents: read + pull-requests: write # Post/update/delete drift comments on PRs + steps: + - name: Checkout base (trusted) + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + repository: ${{ inputs.repo }} + ref: tomls/base/main + persist-credentials: false + + - name: Set up Python + uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5 + with: + python-version: "3.12" + + - name: Download render report + uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0 + with: + name: render-output + path: render-output + + - name: Post PR comment + continue-on-error: true + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + PR_REPO: ${{ inputs.repo }} + PR_NUMBER: ${{ inputs.pr-number }} + PATCH_URL: ${{ needs.render.outputs.patch-url }} + RUN_ID: ${{ github.run_id }} + run: | + python .github/workflows/scripts/post_render_comment.py \ + --repo "$PR_REPO" \ + --pr "$PR_NUMBER" \ + --report render-output/render-check-report.json \ + --artifacts-url "$PATCH_URL" \ + --run-id "$RUN_ID" diff --git a/.github/workflows/containers/azldev-runner.Dockerfile b/.github/workflows/containers/azldev-runner.Dockerfile new file mode 100644 index 00000000000..4738fff5fd1 --- /dev/null +++ b/.github/workflows/containers/azldev-runner.Dockerfile @@ -0,0 +1,50 @@ +FROM mcr.microsoft.com/azurelinux/base/core:3.0 + +# Generic azldev runner image for CI PR checks. Provides the toolchain +# required to run arbitrary `azldev` subcommands (render, build, ...) +# against an untrusted PR checkout. 
+# +# Callers are expected to bind-mount: +# /workdir : PR checkout (typically rw — azldev writes specs/ and build/) +# /output : trusted-shape outputs produced by the container (ro on host) +# /scripts : trusted helper scripts from the base branch (ro) +# +# `azldev` is baked into the image (installed to /usr/local/bin) so callers +# don't need to set up Go or bind-mount a GOPATH. +# +# Kept intentionally minimal — anything that isn't needed by every azldev +# workflow should be added by the caller (e.g. via a derived image) rather +# than baked in here. +# build-essential + openssl/symcrypt/symcrypt-openssl: required by Microsoft +# Go's default `systemcrypto` GOEXPERIMENT (cgo at build time, system crypto +# libs at run time). See: +# https://github.com/microsoft/go/blob/microsoft/main/eng/doc/MigrationGuide.md +RUN tdnf -y install \ + build-essential \ + ca-certificates \ + git \ + golang \ + mock \ + mock-rpmautospec \ + openssl \ + python3 \ + shadow-utils \ + sudo \ + symcrypt \ + symcrypt-openssl \ + && tdnf clean all + +# TODO: pin to a tagged release once azure-linux-dev-tools cuts one. +# `@main` is a moving target — fine while azldev is pre-1.0 and we want +# CI to track upstream, but we should swap to `@vX.Y.Z` (and bump it +# deliberately) once the tool stabilizes. ADO #18834 +RUN GOBIN=/usr/local/bin go install \ + github.com/microsoft/azure-linux-dev-tools/cmd/azldev@main \ + && rm -rf /root/go /root/.cache + +ARG UID=1000 + +RUN useradd -u "${UID}" -G mock -m builduser + +USER builduser +WORKDIR /workdir diff --git a/.github/workflows/scripts/check_rendered_specs.py b/.github/workflows/scripts/check_rendered_specs.py index 58a05b2cb0a..d450be1ff9e 100755 --- a/.github/workflows/scripts/check_rendered_specs.py +++ b/.github/workflows/scripts/check_rendered_specs.py @@ -1,24 +1,20 @@ #!/usr/bin/env python3 """ -Check rendered specs for drift and (optionally) post a PR comment. +Check rendered specs for drift. 
-Compares the committed specs tree against the working tree (after -`azldev component render -a` has been run) and reports meaningful differences, -filtering out changelog-timestamp noise. +Runs inside the render container: compares the committed specs tree against +the working tree (after `azldev component render -a` has been run) and writes +a JSON report and a git patch to the mounted output volume. The host never +invokes git against PR data, so this script runs in a trusted environment +and doesn't need host-side hardening against poisoned .git/config. Usage: - # Just check (local dev, CI without comment posting): - python check_rendered_specs.py --specs-dir specs - - # Check and post/update a PR comment: - python check_rendered_specs.py --specs-dir specs --repo owner/repo --pr 123 + python3 check_rendered_specs.py --specs-dir specs + python3 check_rendered_specs.py --specs-dir specs --report report.json --patch fix.patch Exit codes: 0 — specs are up to date (timestamp-only noise filtered) 1 — real diffs, extra files, or missing files detected - -Environment: - GH_TOKEN — required when --repo/--pr are given """ from __future__ import annotations @@ -26,7 +22,6 @@ import argparse import difflib import json -import os import re import subprocess import sys @@ -36,16 +31,14 @@ # Constants # --------------------------------------------------------------------------- -COMMENT_MARKER = "" -MAX_INLINE_DIFFS = 10 -MAX_FILE_LIST = 50 # cap extra/missing file lists in comment -MAX_COMMENT_CHARS = 60_000 # GH limit is 65 535; leave headroom -MAX_STEP_SUMMARY = 1_000_000 # GH step summary limit is 1024 KiB - # Matches the azldev-generated changelog line. -# e.g. "* Wed Apr 08 2026 azldev <> - 1.0-1" +# Tightly coupled to the format emitted by azldev's render pipeline; if azldev +# ever changes the emitted form (email suffix, version tag, different +# whitespace) this regex must be updated or every spec will look like drift. 
+# Owner: azure-linux-dev-tools (cmd that emits "* azldev " lines).
+# e.g. "* Wed Apr 08 2026 azldev - 1.0-1"
 _CHANGELOG_DATE_RE = re.compile(
-    r"^\* [A-Z][a-z]{2} [A-Z][a-z]{2} [0-9]{2} [0-9]{4} azldev "
+    r"^\* [A-Z][a-z]{2} [A-Z][a-z]{2} [0-9]{2} [0-9]{4} azldev\b"
 )
 
 # ---------------------------------------------------------------------------
@@ -53,16 +46,44 @@
 # ---------------------------------------------------------------------------
 
 
-def _git(*args: str) -> str:
-    """Run a git command and return stdout."""
-    return subprocess.run(
-        ["git", *args], capture_output=True, text=True, check=True
-    ).stdout
+def _git_bytes(*args: str) -> bytes:
+    """Run a git command and return stdout (bytes)."""
+    return subprocess.run(["git", *args], capture_output=True, check=True).stdout
+
+
+def _git_lines_z(*args: str) -> list[str]:
+    """Run a git command with -z-compatible args and return NULL-split lines."""
+    out = _git_bytes(*args)
+    return [p.decode("utf-8") for p in out.split(b"\x00") if p]
 
 
-def _git_lines(*args: str) -> list[str]:
-    """Run a git command and return non-empty output lines."""
-    return [line for line in _git(*args).splitlines() if line]
+
+def _resolve_head_blobs(paths: list[str]) -> dict[str, str]:
+    """Map each path to its HEAD blob SHA, or '' if not present in HEAD.
+
+    Resolves paths via `git ls-tree -z HEAD -- <paths>` rather than the
+    `git show HEAD:<path>` rev-parse syntax. The latter chokes on perfectly
+    legal filenames containing `:` (interpreted as the rev/path separator)
+    and leading `-` (parsed as options); `--` after `ls-tree` makes the path
+    list unambiguous, and `-z` keeps newline-bearing filenames intact.
+ """ + if not paths: + return {} + raw = _git_bytes("ls-tree", "-z", "HEAD", "--", *paths).decode("utf-8") + out: dict[str, str] = {p: "" for p in paths} + for entry in raw.split("\0"): + if not entry: + continue + # Format: " \t" + meta, _, path = entry.partition("\t") + try: + _, kind, sha = meta.split(" ") + except ValueError: + continue + if kind != "blob": + # Submodule (commit) or tree — not a regular file we can diff. + continue + out[path] = sha + return out # --------------------------------------------------------------------------- @@ -75,7 +96,7 @@ def normalize_changelog_date(text: str) -> str: out: list[str] = [] for line in text.splitlines(keepends=True): if _CHANGELOG_DATE_RE.match(line): - line = _CHANGELOG_DATE_RE.sub("* DATEPLACEHOLDER azldev ", line) + line = _CHANGELOG_DATE_RE.sub("* DATEPLACEHOLDER azldev", line) out.append(line) return "".join(out) @@ -100,11 +121,24 @@ def classify_changes(specs_dir: Path) -> tuple[list[str], list[str], list[str]]: The three lists are disjoint: changed contains only modified files, missing contains only deleted files, and extra contains untracked files. + + Untracked enumeration deliberately does NOT honor `.gitignore` — a + malicious PR could otherwise commit a `.gitignore` under specs_dir to + hide newly rendered files and make the check green. We also drop any + `.gitignore`/`.gitattributes` files found under specs_dir since they + have no business in a rendered-output tree. """ sd = str(specs_dir) - changed = _git_lines("diff", "--diff-filter=M", "--name-only", "--", sd) - extra = _git_lines("ls-files", "--others", "--exclude-standard", "--", sd) - missing = _git_lines("ls-files", "--deleted", "--", sd) + changed = _git_lines_z("diff", "-z", "--diff-filter=M", "--name-only", "--", sd) + # No --exclude-standard: list ALL untracked files so a PR-committed + # .gitignore under specs_dir can't hide drift. 
+    extra_raw = _git_lines_z("ls-files", "-z", "--others", "--", sd)
+    missing = _git_lines_z("ls-files", "-z", "--deleted", "--", sd)
+    # Filter out .gitignore / .gitattributes anywhere under specs_dir — they
+    # shouldn't be in rendered output and shouldn't influence our enumeration.
+    extra = [
+        p for p in extra_raw if Path(p).name not in (".gitignore", ".gitattributes")
+    ]
     return changed, extra, missing
 
 
@@ -115,18 +149,65 @@ def filter_timestamp_noise(changed_files: list[str], specs_dir: Path) -> list[di
     (patches, GPG keys, etc.) are always treated as real changes.
     """
     real_diffs: list[dict] = []
+    # Resolve all HEAD blob hashes up front so we can fetch each file's
+    # committed contents by hash (`git cat-file blob <sha>`) instead of by
+    # rev-parse string (`git show HEAD:<path>`). The hash form sidesteps a
+    # whole class of path-parsing pitfalls — colons, leading dashes, etc.
+    head_blobs = _resolve_head_blobs(changed_files)
     for path_str in changed_files:
         file_path = Path(path_str)
         is_spec = file_path.suffix == ".spec"
 
-        # Read committed version — use bytes to handle binary files
+        # Symlinks in a rendered-output tree are suspicious (could point
+        # anywhere on the runner's filesystem). Flag and skip content reads.
+        if file_path.is_symlink():
+            print(
+                f"::warning::Symlink in rendered-specs tree, treated as drift: {path_str}",
+                file=sys.stderr,
+            )
+            real_diffs.append(
+                {
+                    "path": path_str,
+                    "component": component_from_path(path_str),
+                    "diff": f"Symlink {path_str} — refusing to follow",
+                }
+            )
+            continue
+
+        # Read committed version via blob hash (path-safe). If the path
+        # didn't resolve to a blob in HEAD, treat it as drift rather than
+        # silently dropping it — git diff said it changed, so something
+        # really is going on.
+ sha = head_blobs.get(path_str, "") + if not sha: + print( + f"::warning::could not resolve HEAD blob for {path_str}; " + "treating as drift", + file=sys.stderr, + ) + real_diffs.append( + { + "path": path_str, + "component": component_from_path(path_str), + "diff": f"{path_str} changed but HEAD blob unresolved", + } + ) + continue try: - committed_bytes = subprocess.run( - ["git", "show", f"HEAD:{path_str}"], - capture_output=True, - check=True, - ).stdout - except subprocess.CalledProcessError: + committed_bytes = _git_bytes("cat-file", "blob", sha) + except subprocess.CalledProcessError as exc: + print( + f"::warning::git cat-file blob {sha} ({path_str}) failed: {exc}; " + "treating as drift", + file=sys.stderr, + ) + real_diffs.append( + { + "path": path_str, + "component": component_from_path(path_str), + "diff": f"{path_str} changed but HEAD content unreadable", + } + ) continue # Try to decode as UTF-8; if it fails, it's binary — always a real diff @@ -142,9 +223,26 @@ def filter_timestamp_noise(changed_files: list[str], specs_dir: Path) -> list[di ) continue + # Read working tree; use O_NOFOLLOW-equivalent guard above (is_symlink), + # and strict decode so true binaries route through the binary branch. try: - working = file_path.read_text(encoding="utf-8", errors="replace") + working_bytes = file_path.read_bytes() except FileNotFoundError: + print( + f"::warning::working tree file missing during read: {path_str}", + file=sys.stderr, + ) + continue + try: + working = working_bytes.decode("utf-8") + except UnicodeDecodeError: + real_diffs.append( + { + "path": path_str, + "component": component_from_path(path_str), + "diff": f"Binary file {path_str} differs", + } + ) continue if is_spec: @@ -154,13 +252,17 @@ def filter_timestamp_noise(changed_files: list[str], specs_dir: Path) -> list[di norm_committed = committed norm_working = working + # Equality check on the *normalised* text filters out timestamp-only + # drift (the whole point of this function). 
If the normalised + # versions match, skip. if norm_committed == norm_working: continue + # Use the original diff for display purposes. udiff = "".join( difflib.unified_diff( - norm_committed.splitlines(keepends=True), - norm_working.splitlines(keepends=True), + committed.splitlines(keepends=True), + working.splitlines(keepends=True), fromfile=f"committed/{path_str}", tofile=f"rendered/{path_str}", ) @@ -216,264 +318,114 @@ def _unique_components(items: list[dict]) -> list[str]: return out +# NOTE: _unique_components and _render_command are duplicated in post_render_comment.py def _render_command(components: list[str], use_all: bool = False) -> str: if use_all or len(components) > 30: - return "azldev component render -a" + return "azldev component render -a --clean-stale" return f"azldev component render {' '.join(components)}" -def format_comment( - report: dict, - artifacts_url: str | None = None, - run_id: str | None = None, - repo: str | None = None, -) -> str: - content_diffs = report.get("content_diffs", []) - extra_files = report.get("extra_files", []) - missing_files = report.get("missing_files", []) - - n_diff = len(content_diffs) - n_extra = len(extra_files) - n_missing = len(missing_files) - total = n_diff + n_extra + n_missing - - if total == 0: - return f"{COMMENT_MARKER}\n## ✅ Rendered specs are up to date\n" - - all_comps: list[str] = sorted( - set(_unique_components(content_diffs) + _unique_components(missing_files)) - ) - use_all = bool(extra_files) - remediation_cmd = _render_command([] if use_all else all_comps, use_all=use_all) - - lines: list[str] = [ - COMMENT_MARKER, - "## ❌ Rendered specs are out of date", - "", - "🚧🚧🚧🚧🚧", - "", - "> [!WARNING]", - ">", - "> **Disregard this comment.**", - ">", - "> Spec rendering is still under development and checked-in specs", - "> should not be updated in PRs yet.", - "> Please ignore this comment for now unless you are actively", - "> working on the render pipeline.", - "", - "🚧🚧🚧🚧🚧", - "", - "**FIX:** 
— run this and commit the result:", - "", - f"```bash\n{remediation_cmd}\n```", - "", - ] - - if artifacts_url: - lines.append(f"Or [download the fix patch]({artifacts_url}) and apply it:") - lines.append("") - if run_id and repo: - lines.append( - "```bash\n" - f"gh run download {run_id} -R {repo} -n rendered-specs-patch\n" - "git apply rendered-specs.patch\n" - "```" - ) - else: - lines.append("```bash\ngit apply rendered-specs.patch\n```") - lines.append("") - - lines.extend( - [ - "| Category | Count |", - "|----------|-------|", - f"| Content diffs | {n_diff} |", - f"| Extra files (untracked) | {n_extra} |", - f"| Missing files (deleted) | {n_missing} |", - "", - ] - ) - - if content_diffs: - lines.append("### Content diffs") - lines.append("") - shown = 0 - body_so_far = len("\n".join(lines)) - for item in content_diffs: - if shown >= MAX_INLINE_DIFFS: - remaining = n_diff - shown - lines.append( - f"*… and {remaining} more file(s). " - "Run the remediation command above to see all changes.*" - ) - lines.append("") - break - path = item["path"] - diff_text = item.get("diff", "") - block = ( - "
\n" - f"{path}\n\n" - f"```diff\n{diff_text}\n```\n\n" - "
\n" - ) - if body_so_far + len(block) > MAX_COMMENT_CHARS - 2000: - remaining = n_diff - shown - lines.append( - f"*… and {remaining} more file(s) — comment size limit reached. " - "Run the remediation command above to see all changes.*" - ) - lines.append("") - break - lines.append(block) - body_so_far += len(block) - shown += 1 - - if extra_files: - lines.append("### Files to add") - lines.append("") - lines.append( - "These files are produced by `azldev component render` but are " - "missing from your branch. Add them." - ) - lines.append("") - for item in extra_files[:MAX_FILE_LIST]: - lines.append(f"- `{item['path']}`") - if len(extra_files) > MAX_FILE_LIST: - lines.append(f"\n*… and {len(extra_files) - MAX_FILE_LIST} more file(s).*") - lines.append("") - - if missing_files: - lines.append("### Files to remove") - lines.append("") - lines.append( - "These files are in your branch but are not produced by render. " - "Remove them." - ) - lines.append("") - for item in missing_files[:MAX_FILE_LIST]: - lines.append(f"- `{item['path']}`") - if len(missing_files) > MAX_FILE_LIST: - lines.append( - f"\n*… and {len(missing_files) - MAX_FILE_LIST} more file(s).*" - ) - lines.append("") - - return "\n".join(lines) - - def generate_patch( content_diffs: list[dict], extra_files: list[str], missing_files: list[str], + specs_dir: Path, ) -> bytes: """Generate a git patch covering all detected drift. Uses `git add -N` to mark untracked (extra) files as intent-to-add, - then runs `git diff` on the specific affected files to capture - modified, new, and deleted files in one clean patch. + then runs `git diff` scoped to `specs_dir` to capture modified, + new, and deleted files in one clean patch. + + Scaling + path-safety notes: + * `git add -N` / `git reset` receive the exact extra-file list via + `--pathspec-from-file=- --pathspec-file-nul` (NUL-separated stdin). 
+ NUL separators match the rest of this script (`-z` on every reading + side) and are the only delimiter that's safe for arbitrary paths — + filenames may legally contain newlines, which would otherwise be + split into bogus pathspec entries. This also avoids both ARG_MAX + limits and the "pathspec file outside working tree" check that + `git` does on on-disk pathspec files. + * `git diff` does *not* support `--pathspec-from-file` (verified + on git 2.45 — only `add`/`reset`/`commit`/`checkout`/`restore` + do). Instead of batching, we scope the diff to `specs_dir` with a + single positional pathspec. Render only touches files under that + directory, so this captures exactly the same drift regardless of + file count — scales cleanly to 10k+ files. """ - paths = [d["path"] for d in content_diffs] + extra_files + missing_files - if not paths: + if not (content_diffs or extra_files or missing_files): return b"" - # Mark untracked files as intent-to-add so git diff includes them + # NUL-separated stdin for --pathspec-file-nul. See docstring for why. + extra_stdin = ( + b"\x00".join(p.encode("utf-8") for p in extra_files) + b"\x00" + if extra_files + else b"" + ) + + # Mark untracked files as intent-to-add so `git diff` picks them up as + # "new file" entries instead of silently skipping them. 
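The NUL-separated stdin contract described in the docstring above is easy to sanity-check in isolation. The `git` invocation is shown only as a comment, since running it needs a real repository; the payload construction and round trip are real:

```python
# Sketch: the NUL-joined stdin payload generate_patch feeds to
# `git add -N --pathspec-from-file=- --pathspec-file-nul`. Newlines in
# filenames survive because NUL, not "\n", is the delimiter.
paths = ["specs/a/a.spec", "specs/odd\nname.spec", "specs/-dash.spec"]
extra_stdin = b"\x00".join(p.encode("utf-8") for p in paths) + b"\x00"

# In a real repo this payload would be passed as:
# subprocess.run(
#     ["git", "add", "-N", "--pathspec-from-file=-", "--pathspec-file-nul"],
#     input=extra_stdin, check=True, capture_output=True,
# )

# Round-trip the payload the same way _git_lines_z splits -z output:
roundtrip = [p.decode("utf-8") for p in extra_stdin.split(b"\x00") if p]
```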
if extra_files: try: subprocess.run( - ["git", "add", "-N", "--", *extra_files], + [ + "git", + "add", + "-N", + "--pathspec-from-file=-", + "--pathspec-file-nul", + ], + input=extra_stdin, check=True, capture_output=True, ) - except subprocess.CalledProcessError: - pass + except subprocess.CalledProcessError as exc: + stderr = (exc.stderr or b"").decode("utf-8", errors="replace").strip() + print( + f"::warning::git add -N failed: exit={exc.returncode}: {stderr}", + file=sys.stderr, + ) try: - result = subprocess.run( - ["git", "diff", "--", *paths], - capture_output=True, - check=True, - ) - patch = result.stdout - except subprocess.CalledProcessError: - patch = b"" - - # Undo the intent-to-add so we don't leave index dirty - if extra_files: try: - subprocess.run( - ["git", "reset", "--", *extra_files], - check=True, + result = subprocess.run( + ["git", "diff", "--", str(specs_dir)], capture_output=True, + check=True, ) - except subprocess.CalledProcessError: - pass - - return patch - - -# --------------------------------------------------------------------------- -# GitHub comment posting -# --------------------------------------------------------------------------- - - -def _gh(*args: str) -> str: - return subprocess.run( - ["gh", *args], capture_output=True, text=True, check=True - ).stdout.strip() - - -def find_existing_comment(repo: str, pr: str) -> str | None: - try: - output = _gh( - "api", - "--paginate", - f"/repos/{repo}/issues/{pr}/comments", - "--jq", - f'.[] | select(.body | contains("{COMMENT_MARKER}")) | .id', - ) - except subprocess.CalledProcessError: - return None - comment_id = output.split("\n")[0].strip() if output else None - return comment_id or None - - -def post_or_update_comment(repo: str, pr: str, body: str) -> None: - existing_id = find_existing_comment(repo, pr) - - # Write body to a temp file to avoid ARG_MAX limits - body_path = Path("render-check-comment.md") - body_path.write_text(body, encoding="utf-8") - try: - if existing_id: - 
print(f"Updating existing comment {existing_id}") - _gh( - "api", - "--method", - "PATCH", - f"/repos/{repo}/issues/comments/{existing_id}", - "-F", - f"body=@{body_path}", + patch = result.stdout + except subprocess.CalledProcessError as exc: + stderr = (exc.stderr or b"").decode("utf-8", errors="replace").strip() + print( + f"::warning::git diff failed while generating patch: " + f"exit={exc.returncode}: {stderr}", + file=sys.stderr, ) - else: - print("Creating new comment") - _gh("pr", "comment", pr, "--repo", repo, "--body-file", str(body_path)) + patch = b"" finally: - body_path.unlink(missing_ok=True) - + # Undo the intent-to-add so the index is left the way we found it. + if extra_files: + try: + subprocess.run( + [ + "git", + "reset", + "--pathspec-from-file=-", + "--pathspec-file-nul", + ], + input=extra_stdin, + check=True, + capture_output=True, + ) + except subprocess.CalledProcessError as exc: + stderr = (exc.stderr or b"").decode("utf-8", errors="replace").strip() + print( + f"::warning::git reset (undo intent-to-add) failed: " + f"exit={exc.returncode}: {stderr}", + file=sys.stderr, + ) -def delete_comment_if_exists(repo: str, pr: str) -> None: - existing_id = find_existing_comment(repo, pr) - if existing_id: - print(f"Deleting stale comment {existing_id}") - try: - _gh( - "api", - "--method", - "DELETE", - f"/repos/{repo}/issues/comments/{existing_id}", - ) - except subprocess.CalledProcessError: - print("Warning: failed to delete stale comment", file=sys.stderr) + return patch # --------------------------------------------------------------------------- @@ -483,7 +435,7 @@ def delete_comment_if_exists(repo: str, pr: str) -> None: def main() -> int: parser = argparse.ArgumentParser( - description="Check rendered specs for drift and optionally post a PR comment." + description="Check rendered specs for drift. Outputs a JSON report and optional patch." 
) parser.add_argument( "--specs-dir", @@ -497,30 +449,14 @@ def main() -> int: default=None, help="Write JSON report to this path", ) - parser.add_argument("--repo", default=None, help="GitHub repo (owner/repo)") - parser.add_argument("--pr", default=None, help="PR number") parser.add_argument( "--patch", type=Path, default=None, - help="Write a .patch file for real content diffs", - ) - parser.add_argument( - "--artifacts-url", - default=None, - help="URL to the workflow run artifacts (linked in PR comment)", - ) - parser.add_argument( - "--run-id", - default=None, - help="GitHub Actions run ID (for gh run download command in PR comment)", + help="Write a .patch file for all detected drift", ) args = parser.parse_args() - if bool(args.repo) != bool(args.pr): - print("Error: --repo and --pr must be provided together", file=sys.stderr) - return 1 - specs_dir = args.specs_dir # 1. Classify changes @@ -544,44 +480,15 @@ def main() -> int: total = len(content_diffs) + len(extra) + len(missing) - # 3b. Generate patch for all drift + # 4. Generate patch for all drift if args.patch and total > 0: - patch_content = generate_patch(content_diffs, extra, missing) + patch_content = generate_patch(content_diffs, extra, missing, specs_dir) if patch_content: args.patch.parent.mkdir(parents=True, exist_ok=True) args.patch.write_bytes(patch_content) print(f"Patch written to {args.patch}") - # 4. Post comment or clean up (best-effort — exit code is authoritative) - if args.repo and args.pr: - try: - if total == 0: - delete_comment_if_exists(args.repo, args.pr) - else: - body = format_comment( - report, - artifacts_url=args.artifacts_url, - run_id=args.run_id, - repo=args.repo, - ) - post_or_update_comment(args.repo, args.pr, body) - except (subprocess.CalledProcessError, OSError) as exc: - print(f"Warning: failed to post/update PR comment: {exc}", file=sys.stderr) - - # 5. 
Write to GitHub step summary if available - summary_file = os.environ.get("GITHUB_STEP_SUMMARY") - if summary_file and total > 0: - summary_body = format_comment( - report, artifacts_url=args.artifacts_url, run_id=args.run_id, repo=args.repo - ) - if len(summary_body) <= MAX_STEP_SUMMARY: - with open(summary_file, "a", encoding="utf-8") as sf: - sf.write(summary_body) - sf.write("\n") - else: - print("Warning: step summary too large, skipping", file=sys.stderr) - - # 6. Print summary and exit + # 5. Print summary and exit if total == 0: print("All rendered specs are up to date (timestamp-only noise filtered).") return 0 @@ -596,8 +503,8 @@ def main() -> int: + _unique_components(report.get("missing_files", [])) ) ) - if extra: - print("Remediation: azldev component render -a") + if extra or missing: + print(f"Remediation: {_render_command([], use_all=True)}") elif all_comps: print(f"Remediation: {_render_command(all_comps)}") @@ -605,4 +512,4 @@ def main() -> int: if __name__ == "__main__": - sys.exit(main()) \ No newline at end of file + sys.exit(main()) diff --git a/.github/workflows/scripts/post_render_comment.py b/.github/workflows/scripts/post_render_comment.py new file mode 100644 index 00000000000..139426a9a1b --- /dev/null +++ b/.github/workflows/scripts/post_render_comment.py @@ -0,0 +1,453 @@ +#!/usr/bin/env python3 +""" +Post (or update/delete) a PR comment with rendered-spec drift results. + +Reads the JSON report produced by check_rendered_specs.py and posts a +formatted comment on the PR. Designed to run in a workflow_run context +where the base repo's GITHUB_TOKEN is available (needed for fork PRs). + +Usage: + python post_render_comment.py \\ + --report render-check-report.json \\ + --repo owner/repo \\ + --pr 123 \\ + --artifacts-url https://... 
\\ + --run-id 12345 + +Exit codes: + 0 — comment posted/updated/deleted successfully + 1 — error reading report or missing arguments + +Environment: + GH_TOKEN — required for GitHub API calls +""" + +from __future__ import annotations + +import argparse +import json +import os +import re +import subprocess +import sys +import tempfile +from pathlib import Path + +# --------------------------------------------------------------------------- +# Constants +# --------------------------------------------------------------------------- + +COMMENT_MARKER = "" +MAX_INLINE_DIFFS = 10 +MAX_FILE_LIST = 50 +MAX_COMMENT_CHARS = 60_000 +# Safety margin under MAX_COMMENT_CHARS — leaves room for the trailing +# "... N more" summary + section footers that are appended after the budget +# check trips. +COMMENT_BUDGET_MARGIN = 2000 +# Hard cap on any individual displayed path. Even though _SAFE_PATH_RE filters +# characters, a fork PR can still create validly-named paths near PATH_MAX; +# without a length cap, 50 of those can blow past GitHub's 65_536-char comment +# limit and cause the whole post to fail (which was silent before). +MAX_DISPLAY_PATH_LEN = 200 + +# Author of comments we own. Only comments from this user are eligible for +# update/delete — prevents hijacking a PR-author comment that happens to +# contain our marker. +BOT_AUTHOR = "github-actions[bot]" + +# Bare-integer validator for comment IDs returned from the GitHub API. +_ID_RE = re.compile(r"^[0-9]+$") + +# Paths rendered into markdown must not break code spans or introduce HTML. +# Rendered-spec paths are well-known and conservative; anything else is +# replaced with a placeholder rather than trusted. +_SAFE_PATH_RE = re.compile(r"^[A-Za-z0-9._/\-]+$") + + +def _safe_path(path: str) -> str: + """Return a markdown-safe, length-bounded rendering of `path`. + + Rendered-spec paths are expected to be ASCII-ish component/file names. 
Anything with backticks, angle brackets, whitespace, or other markdown
+    metacharacters is replaced with a placeholder so an attacker-controlled
+    filename can't inject HTML or break out of a code span. Paths longer
+    than ``MAX_DISPLAY_PATH_LEN`` are truncated with an ellipsis marker so
+    a fork PR can't push the total comment size past the GitHub API limit
+    via pathologically long (but otherwise valid) filenames.
+    """
+    if not _SAFE_PATH_RE.match(path):
+        return "<unsafe path>"
+    if len(path) > MAX_DISPLAY_PATH_LEN:
+        keep = MAX_DISPLAY_PATH_LEN - 20
+        return f"{path[:keep]}...<{len(path) - keep} chars truncated>"
+    return path
+
+
+def _fence_for(text: str) -> str:
+    """Pick a backtick fence longer than any run of backticks in `text`."""
+    longest = max((len(m.group(0)) for m in re.finditer(r"`+", text)), default=0)
+    return "`" * max(3, longest + 1)
+
+
+# ---------------------------------------------------------------------------
+# Comment formatting
+# ---------------------------------------------------------------------------
+
+
+# NOTE: _render_command is duplicated in check_rendered_specs.py
+def _render_command(components: list[str], use_all: bool = False) -> str:
+    if use_all or len(components) > 30:
+        return "azldev component render -a --clean-stale"
+    return f"azldev component render {' '.join(components)}"
+
+
+def format_comment(
+    report: dict,
+    artifacts_url: str | None = None,
+    run_id: str | None = None,
+    repo: str | None = None,
+) -> str:
+    content_diffs = report.get("content_diffs", [])
+    extra_files = report.get("extra_files", [])
+    missing_files = report.get("missing_files", [])
+
+    n_diff = len(content_diffs)
+    n_extra = len(extra_files)
+    n_missing = len(missing_files)
+
+    all_comps: list[str] = sorted(
+        {item["component"] for item in content_diffs + missing_files}
+    )
+    use_all = bool(extra_files) or bool(missing_files)
+    remediation_cmd = _render_command([] if use_all else all_comps, use_all=use_all)
+
+    lines: list[str] = [
COMMENT_MARKER, + "## ❌ Rendered specs are out of date", + "", + "🚧🚧🚧🚧🚧", + "", + "> [!WARNING]", + ">", + "> **Disregard this comment.**", + ">", + "> Spec rendering is still under development and checked-in specs", + "> should not be updated in PRs yet.", + "> Please ignore this comment for now unless you are actively", + "> working on the render pipeline.", + "", + "🚧🚧🚧🚧🚧", + "", + "**FIX:** — run this and commit the result:", + "", + "```bash", + remediation_cmd, + "```", + "", + ] + + if artifacts_url: + lines.append(f"Or [download the fix patch]({artifacts_url}) and apply it:") + lines.append("") + if run_id and repo: + lines.extend( + [ + "```bash", + f"gh run download {run_id} -R {repo} -n rendered-specs-patch", + "git apply rendered-specs.patch", + "```", + ] + ) + else: + lines.extend( + [ + "```bash", + "git apply rendered-specs.patch", + "```", + ] + ) + lines.append("") + + lines.extend( + [ + "| Category | Count |", + "|----------|-------|", + f"| Content diffs | {n_diff} |", + f"| Extra files (untracked) | {n_extra} |", + f"| Missing files (deleted) | {n_missing} |", + "", + ] + ) + + # Running total of comment body size. Every section that appends + # PR-controlled content (paths, diff bodies) must check this budget + # before appending and bail out with a "... and N more" summary once it + # gets close to the GitHub API's 65_536-char comment limit. A comment + # rejected for being too large is effectively invisible (the post step + # has continue-on-error: true), so a fork PR author could otherwise + # suppress the drift warning by spamming long or numerous paths. + body_so_far = len("\n".join(lines)) + budget_cap = MAX_COMMENT_CHARS - COMMENT_BUDGET_MARGIN + + if content_diffs: + lines.append("### Content diffs") + lines.append("") + shown = 0 + for item in content_diffs: + if shown >= MAX_INLINE_DIFFS: + remaining = n_diff - shown + lines.append( + f"*… and {remaining} more file(s). 
" + "Run the remediation command above to see all changes.*" + ) + lines.append("") + break + path = _safe_path(item["path"]) + diff_text = item.get("diff", "") + fence = _fence_for(diff_text) + # Emit fixed raw HTML for the collapsible wrapper (`
<details>` and
+            # `</details>`), but keep attacker-controlled content in markdown
+            # code formatting: the path is rendered as code in the summary, and
+            # the diff body is inside a dynamically chosen fence longer than any
+            # backtick run in the diff text.
+            block = (
+                "<details>\n"
+                f"<summary>`{path}`</summary>\n\n"
+                f"{fence}diff\n{diff_text}\n{fence}\n\n"
+                "</details>
\n" + ) + if body_so_far + len(block) > budget_cap: + remaining = n_diff - shown + lines.append( + f"*… and {remaining} more file(s) — comment size limit reached. " + "Run the remediation command above to see all changes.*" + ) + lines.append("") + break + lines.append(block) + body_so_far += len(block) + shown += 1 + + def _append_file_list( + header: str, + description: str, + items: list[dict], + ) -> None: + """Append a bulleted file list, enforcing the shared comment budget. + + Stops early once the cumulative body size gets near the GitHub + limit, so a fork PR can't suppress the warning by producing either + very long paths or a huge number of them. + """ + nonlocal body_so_far + lines.append(header) + lines.append("") + lines.append(description) + lines.append("") + shown = 0 + truncated_for_size = False + for item in items[:MAX_FILE_LIST]: + entry = f"- `{_safe_path(item['path'])}`" + # +1 for the newline added by the final "\n".join(lines). + if body_so_far + len(entry) + 1 > budget_cap: + truncated_for_size = True + break + lines.append(entry) + body_so_far += len(entry) + 1 + shown += 1 + if truncated_for_size: + remaining = len(items) - shown + note = ( + f"\n*… and {remaining} more file(s) — comment size limit reached. " + "Run the remediation command above to see all changes.*" + ) + elif len(items) > MAX_FILE_LIST: + remaining = len(items) - MAX_FILE_LIST + note = f"\n*… and {remaining} more file(s).*" + else: + note = None + if note is not None: + lines.append(note) + body_so_far += len(note) + 1 + lines.append("") + body_so_far += 1 + + if extra_files: + _append_file_list( + "### Files to add", + "These files are produced by `azldev component render` but are " + "missing from your branch. Add them.", + extra_files, + ) + + if missing_files: + _append_file_list( + "### Files to remove", + "These files are in your branch but are not produced by render. 
" + "Remove them.", + missing_files, + ) + + return "\n".join(lines) + + +# --------------------------------------------------------------------------- +# GitHub comment posting +# --------------------------------------------------------------------------- + + +def _gh(*args: str) -> str: + return subprocess.run( + ["gh", *args], capture_output=True, text=True, check=True + ).stdout.strip() + + +def find_existing_comments(repo: str, pr: str) -> list[str]: + """Return IDs of all comments authored by the bot that carry our marker. + + Filtering by author prevents a PR author from posing as our comment + (they could write a body containing the marker and get their comment + edited/deleted by the bot). Returns all matches so stale duplicates + (from past bugs or races) can be cleaned up. + """ + try: + output = _gh( + "api", + "--paginate", + f"/repos/{repo}/issues/{pr}/comments", + "--jq", + ( + f'.[] | select(.user.login == "{BOT_AUTHOR}") ' + f'| select(.body | contains("{COMMENT_MARKER}")) ' + "| .id" + ), + ) + except subprocess.CalledProcessError: + return [] + ids = [line.strip() for line in output.splitlines() if line.strip()] + # Validate IDs are bare integers before interpolating into API URLs. + return [i for i in ids if _ID_RE.match(i)] + + +def post_or_update_comment(repo: str, pr: str, body: str) -> None: + existing_ids = find_existing_comments(repo, pr) + fd, body_path = tempfile.mkstemp(prefix="render-check-comment-", suffix=".md") + try: + with os.fdopen(fd, "w") as f: + f.write(body) + if existing_ids: + # Update the first, delete the rest (shouldn't normally exist). 
+            primary, *stale = existing_ids
+            print(f"Updating existing comment {primary}")
+            _gh(
+                "api",
+                "--method",
+                "PATCH",
+                f"/repos/{repo}/issues/comments/{primary}",
+                "-F",
+                f"body=@{body_path}",
+            )
+            for stale_id in stale:
+                print(f"Deleting stale duplicate comment {stale_id}")
+                try:
+                    _gh(
+                        "api",
+                        "--method",
+                        "DELETE",
+                        f"/repos/{repo}/issues/comments/{stale_id}",
+                    )
+                except subprocess.CalledProcessError:
+                    print(
+                        f"Warning: failed to delete stale comment {stale_id}",
+                        file=sys.stderr,
+                    )
+        else:
+            print("Creating new comment")
+            _gh("pr", "comment", pr, "--repo", repo, "--body-file", body_path)
+    finally:
+        Path(body_path).unlink(missing_ok=True)
+
+
+def delete_comment_if_exists(repo: str, pr: str) -> None:
+    for existing_id in find_existing_comments(repo, pr):
+        print(f"Deleting stale comment {existing_id}")
+        try:
+            _gh(
+                "api",
+                "--method",
+                "DELETE",
+                f"/repos/{repo}/issues/comments/{existing_id}",
+            )
+        except subprocess.CalledProcessError:
+            print(
+                f"Warning: failed to delete stale comment {existing_id}",
+                file=sys.stderr,
+            )
+
+
+# ---------------------------------------------------------------------------
+# Main
+# ---------------------------------------------------------------------------
+
+
+def main() -> int:
+    parser = argparse.ArgumentParser(
+        description="Post rendered-spec drift results as a PR comment."
+    )
+    parser.add_argument(
+        "--report",
+        type=Path,
+        required=True,
+        help="Path to the JSON report from check_rendered_specs.py",
+    )
+    parser.add_argument("--repo", required=True, help="GitHub repo (owner/repo)")
+    parser.add_argument("--pr", required=True, help="PR number")
+    parser.add_argument(
+        "--artifacts-url", default=None, help="Direct URL to patch artifact"
+    )
+    parser.add_argument("--run-id", default=None, help="GitHub Actions run ID")
+    args = parser.parse_args()
+
+    try:
+        with open(args.report, encoding="utf-8") as f:
+            report = json.load(f)
+    except (FileNotFoundError, json.JSONDecodeError) as exc:
+        print(f"Error reading report: {exc}", file=sys.stderr)
+        return 1
+
+    total = (
+        len(report.get("content_diffs", []))
+        + len(report.get("extra_files", []))
+        + len(report.get("missing_files", []))
+    )
+
+    body: str | None = None
+    try:
+        if total == 0:
+            delete_comment_if_exists(args.repo, args.pr)
+        else:
+            body = format_comment(
+                report,
+                artifacts_url=args.artifacts_url,
+                run_id=args.run_id,
+                repo=args.repo,
+            )
+            post_or_update_comment(args.repo, args.pr, body)
+    except (subprocess.CalledProcessError, OSError) as exc:
+        print(f"Warning: failed to post/update PR comment: {exc}", file=sys.stderr)
+
+    # Write to the GitHub step summary if available.
+    summary_file = os.environ.get("GITHUB_STEP_SUMMARY")
+    if summary_file and body:
+        max_summary = 1_000_000  # GH step summary limit is 1024 KiB
+        summary = body[:max_summary]
+        with open(summary_file, "a", encoding="utf-8") as sf:
+            sf.write(summary)
+            sf.write("\n")
+
+    return 0
+
+
+if __name__ == "__main__":
+    sys.exit(main())
diff --git a/.gitignore b/.gitignore
index 248c6910cfd..b33e83f3803 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,3 +4,4 @@ __pycache__/
 .spec_review/
 spec_review_kb.md
 .env
+.hyenas/
diff --git a/AGENTS.md b/AGENTS.md
index 20e6aaa18a9..74bf02b3026 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -9,7 +9,7 @@ For project context and architecture, see [`.github/copilot-instructions.md`](.g
 ### Examples of changes that should trigger a final test prior to sign-off (not an exhaustive list)
 
 - Version bumps or pinning a new upstream version
-- Adding, modifying, or removing overlays (trivial edits may only require `prep-sources` verification, but when in doubt, do a full build + smoke-test)
+- Adding, modifying, or removing overlays (trivial edits may only require `render` verification, but when in doubt, do a full build + smoke-test)
 - Changing build config (`build.defines`, `build.with`, `build.without`)
 - Modifying local spec files or source files (again, trivial edits may not require a full rebuild, but when in doubt, test)
 - Adding a new component (first build)
@@ -33,9 +33,11 @@ Do NOT skip testing for changes that affect RPM output. Do NOT tell the user "th
 
 - Always run `azldev comp list -p <component> -q -O json` before modifying a component.
 - Prefer overlays over forking/local specs when customizing upstream packages.
-- Use `azldev comp prep-sources -p <component> --force -o <dir> -q` to verify overlays apply cleanly before building. Always use `--force` to overwrite an existing output dir, `rm -rf` requires user confirmation which is disruptive.
-- Follow the inner loop cycle: investigate → modify → verify → build → test → inspect. See [`skill-build-component`](.github/skills/skill-build-component/SKILL.md).
-  - Note: Use your best judgement, some packages are VERY slow to build (e.g., `kernel`), in those cases you may want to do multiple iterations of investigate → modify → verify with `prep-sources` before doing a full build + test.
+- After modifying overlays or component config, re-render with `azldev comp render -p <component>` and inspect `specs/<component>/<version>/` to verify the result. This is the fastest verification path.
+  - Note: Changing a global snapshot time may affect all components that depend on it, potentially causing widespread rebuilds. A full re-render is time-consuming, but can be done with `azldev comp render -a --clean-stale`.
+- Use `prep-sources` for deeper debugging: run `azldev comp prep-sources -p <component> --skip-overlays --force -o <dir> -q` and `azldev comp prep-sources -p <component> --force -o <dir> -q`, then diff the pre/post-overlay output when you need to understand what upstream provides vs. what overlays change. Always use `--force` to overwrite an existing output dir; `rm -rf` requires user confirmation, which is disruptive.
+- Follow the inner loop cycle: investigate → modify → render → build → test → inspect. See [`skill-build-component`](.github/skills/skill-build-component/SKILL.md).
+  - Note: Use your best judgement: some packages are VERY slow to build (e.g., `kernel`); in those cases you may want to do multiple iterations of investigate → modify → verify with `render` before doing a full build + test.
 - `prep-sources -o <dir>` output is ad-hoc (user-chosen dir). `comp build` output goes to project-configured dirs (`base/out/`, `base/build/`). Don't conflate them.
 - For temporary files, ensure they are all placed inside the project's defined work directory (`azldev config dump -q -f json 2>&1 | grep 'workDir'`). Example commands use `base/build/work/scratch/`, and all temp directories should be inside it unless there's a specific reason not to be.
 
diff --git a/README.md b/README.md
index 24febcf7ef3..6f43c885e6d 100644
--- a/README.md
+++ b/README.md
@@ -4,6 +4,25 @@
 This branch contains:
 
 * [Distro-wide configuration](distro/)
 * [Base project: components, images, etc.](base/)
+* [Rendered specs](specs/)
+
+## Getting Started
+
+### Install azldev
+
+The [`azldev`](https://github.com/microsoft/azure-linux-dev-tools) CLI tool drives all component, image, and build workflows. Install it from source (requires Go):
+
+```bash
+go install github.com/microsoft/azure-linux-dev-tools/cmd/azldev@main
+```
+
+> **Note:** azldev is still in active development; using the latest commit from the `main` branch is recommended for the most up-to-date features and fixes.
+
+### Render specs
+
+The `specs/` directory (as specified by the `rendered-specs-dir` config) contains "rendered" spec files created by `azldev`. They are a read-only snapshot of the final spec files after all overlays and modifications have been applied, and the canonical source for what will be built and packaged.
+
+They can be updated at any time by running `azldev component render -a`; a single component can be rendered with `azldev component render <component>`.
 
 ## AI-Assisted Development (VSCode + GitHub Copilot CLI)
@@ -96,9 +115,9 @@ scripts/batch-triage/triage.sh --results /path/to/results.json 'only triage one
 
 Requirements:
 
-- Docker (buildx recommended)
-- GitHub auth ([copilot env var](https://docs.github.com/en/copilot/how-tos/copilot-cli/set-up-copilot-cli/authenticate-copilot-cli), `copilot` logged in, or `gh` logged in)
-- a `.env` file (see above).
+* Docker (buildx recommended)
+* GitHub auth ([copilot env var](https://docs.github.com/en/copilot/how-tos/copilot-cli/set-up-copilot-cli/authenticate-copilot-cli), `copilot` logged in, or `gh` logged in)
+* a `.env` file (see above).
 
 Output lands in `out/triage/`.
diff --git a/base/project.toml b/base/project.toml
index 4280409241d..c65b89dd8b1 100644
--- a/base/project.toml
+++ b/base/project.toml
@@ -10,3 +10,4 @@ log-dir = 'build/logs'
 work-dir = 'build/work'
 output-dir = 'out'
 default-distro = { name = "azurelinux" }
+rendered-specs-dir = '../specs'