fix(bb): cache Emscripten .js / .worker.mjs siblings of .wasm targets (into #22815) #23111

Closed · AztecBot wants to merge 253 commits
Conversation
…source Adds a generation script that parses the AztecNode and AztecNodeAdmin interfaces to produce the API reference doc, ensuring it stays in sync with the actual RPC surface. Fixes missing endpoints (getCheckpointedBlockNumber, getCheckpointsDataForEpoch, etc.) and removes stale methods.
…de API reference generation
Extracted into #22647 so the doc corrections can land independently.
…or improved API discoverability
Wire the DocsGPT stream API directly behind an Aztec-branded React component. Remove the @cookbookdev/docsbot integration.
- Add expand/collapse toggle and raise widget z-index so the panel renders above the fixed navbar and the in-page Copy page button.
- Clip body overflow-x to prevent the hero motif from triggering a horizontal scrollbar.
- Strip "Powered by DocsGPT" from the header.
…tec logo
- Move the NPS feedback card above the Ask AI launcher by raising its bottom offset (110px desktop / 100px mobile) so the two no longer stack on top of each other.
- Swap the inline stepped-diamond SVG in the symbol and label button variants for the real Aztec_Symbol_Dark.png / Aztec Symbol_Light.png site assets.
Swap the custom inline-code tokenizer for react-markdown with a styled components map (inline/block code, links, lists, headings, blockquote, hr, table) matching the widget's ink/parchment palette. Drop the inline streaming cursor now that responses render as block-level markdown; the existing "Thinking ..." eyebrow is the in-flight affordance.
Add remark-gfm so markdown tables (and other GitHub-flavored markdown) render. Also preprocess the streamed response to re-insert row newlines around the table separator and at " | |" row boundaries, since the API frequently returns tables collapsed onto a single line.
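The row-restoration preprocessing described above can be sketched like this. This is a hedged approximation: the function name and the exact regexes are assumptions, not the widget's actual implementation, which may use different heuristics.

```typescript
// Sketch only: restore the row newlines that the API sometimes strips when it
// returns a GFM table collapsed onto a single line.
function restoreTableNewlines(text: string): string {
  return (
    text
      // Re-insert newlines around a separator row like "| --- | --- |".
      .replace(/\s*(\|(?:\s*:?-{3,}:?\s*\|)+)\s*/g, '\n$1\n')
      // Break rows that run together at a " | |" boundary. Note this would
      // also split a genuinely empty cell; it is a heuristic for collapsed
      // API output, not a general-purpose markdown fixer.
      .replace(/\|\s+\|/g, '|\n|')
  );
}
```

Running the sketch on a collapsed table such as `| a | b | | --- | --- | | 1 | 2 |` yields the three-line form remark-gfm expects.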
Wire prism-react-renderer (already a site dependency) into the widget's markdown components. Inline code keeps the chartreuse tint; fenced blocks render via prism with vsDark (ink theme) or github (parchment theme), picking up the language from the ```lang fence. The default <pre> override is a passthrough since CodeBlock emits its own <pre>.
Auto-scroll now only fires when messages.length changes (i.e. the user sends a new question). Token-by-token stream updates no longer yank the viewport to the bottom while the user is reading earlier content.
…eference

    # Conflicts:
    #   .claude/skills/release-network-docs/SKILL.md
    #   docs/docs-operate/operators/reference/node-api-reference.md
    #   docs/network_versioned_docs/version-v4.1.3/operators/reference/node-api-reference.md
…arity and accuracy
This env var wasn't being passed through to the pods
Implements the new JSON RPC API proposal for blocks and checkpoints. Next steps:
- Remove the methods flagged as deprecated
- Align the internal archiver API (and potentially storage) with this new external node API so we can get rid of the block response adapters
- Review the chain tips structure (in particular the `proposedCheckpoint`)
- Review the checkpoint-returning APIs to decide when we return proposed vs mined checkpoints

Fixes A-974
## Motivation
Under today's pipelining schedule, attestations for slot N's checkpoint trail into slot N+1 by `blockDuration + p2pPropagationTime` (~8s with defaults), which delays the L1 `propose` tx and pushes it toward the 3rd L1 block of the target slot. Shifting the broadcast ~8s earlier lets attestations finish by the build-slot boundary so the proposer can submit to L1 immediately at the target-slot start.

## Approach
Updates the pipelined timing model to reserve assembly, round-trip p2p, and last-block re-execution at the *end* of the build slot (rather than leaking into the target slot). Proposal broadcast now fires at ~T = slotDuration − timeReservedAtEnd inside the build slot, so attestations arrive before the boundary. Receiver acceptance windows are tightened accordingly: `proposalWindowIntoTargetSlot` drops from `p2pPropagationTime` to 0 (only clock-disparity grace remains) and `attestationWindowIntoTargetSlot` shrinks from `slotDuration − l1PublishingTime` (60s at defaults) to `2·p2pPropagationTime` (4s). No new p2p validation path is needed: the main `messageSlot === targetSlot || messageSlot === nextSlot` check already covers the steady state under pipelining.

## Changes
- **stdlib (timetable)**: `PipelinedCheckpointTimingModel.timeReservedAtEnd` is now `checkpointAssembleTime + 2·p2pPropagationTime + blockDuration`; `pipeliningAttestationGracePeriod` drops to 0 under pipelining. `proposalWindowIntoTargetSlot` drops to 0 and `attestationWindowIntoTargetSlot` shrinks to `2·p2pPropagationTime`.
- **p2p (msg_validators)**: Tightens `PipeliningWindow.acceptsProposal` / `acceptsAttestation` via the new timing-model values; `isWithinPipeliningWindow` JSDoc is refreshed to describe the straggler-acceptance role explicitly.
- **sequencer-client (README)**: Rewrites the pipelining section to describe the new schedule (broadcast inside build slot, attestations in hand by slot boundary, L1 submission at target-slot boundary) and refreshes the worked example.
- **tests (stdlib, p2p, sequencer-client)**: Updates pipelined timing / window-size assertions and tightens acceptance tests for the shorter straggler windows.

## Rollout notes
The tightened straggler windows mean a receiver running this change will reject stale-schedule proposals that arrive more than 500ms into the target slot and stale-schedule attestations that arrive more than 4.5s into the target slot. Expect to roll this out across validators together (the merge-train/spartan flow already batches validator upgrades) rather than mixing old and new schedules on a live network.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
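The arithmetic behind the revised schedule can be checked with a small sketch. The duration values below are hypothetical placeholders chosen to be consistent with the defaults the PR states (2·p2pPropagationTime = 4s, blockDuration + p2pPropagationTime ≈ 8s); `slotDuration` and `checkpointAssembleTime` are invented for illustration, not the network's actual configuration.

```typescript
interface TimingParams {
  slotDuration: number;           // seconds; hypothetical value below
  blockDuration: number;          // blockDuration + p2pPropagationTime ~ 8s per the PR
  p2pPropagationTime: number;     // 2 * p2pPropagationTime = 4s per the PR
  checkpointAssembleTime: number; // hypothetical value below
}

// New model: reserve assembly + round-trip p2p + last-block re-execution
// at the END of the build slot.
function timeReservedAtEnd(p: TimingParams): number {
  return p.checkpointAssembleTime + 2 * p.p2pPropagationTime + p.blockDuration;
}

// Proposal broadcast fires at ~T = slotDuration - timeReservedAtEnd inside
// the build slot, so attestations land before the slot boundary.
function proposalBroadcastTime(p: TimingParams): number {
  return p.slotDuration - timeReservedAtEnd(p);
}

const params: TimingParams = {
  slotDuration: 72,          // hypothetical
  blockDuration: 6,
  p2pPropagationTime: 2,
  checkpointAssembleTime: 4, // hypothetical
};
```

At these placeholder values the proposer reserves 14s at the end of the build slot and broadcasts at T = 58s, and the attestation window into the target slot is 2 · 2 = 4s, matching the figure quoted above.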
Updates the archiver data store model so that we can account for more than a single proposed checkpoint. Under pipelining, it's possible we receive the proposal for the next checkpoint before we've managed to sync the previous one from L1, leaving two checkpoints unmined. Also fixes `addProposedCheckpoint` in the archiver so that it goes through the queue, instead of potentially injecting it in the middle of an L1 sync. Next step is to review the archiver and node APIs that return checkpoints, and unify whether we return proposed checkpoints alongside mined checkpoints or not. Related to #22781. Fixes A-910
Address review feedback that index.jsx was monolithic. Pure mechanical extraction: index.jsx 1329 -> 155 lines, with Panel, Hero, Message, LauncherButton, Icons, markdown, streamAnswer, and theme as siblings. No functional or styling changes.
## Summary
- Adds the `docsgpt-react` widget to the docs site as a Docusaurus client module
- Provides an AI assistant chat widget for developer questions about Aztec protocol
- Configured with Aztec branding (dark theme, Aztec colors and avatar)

## Test plan
- [ ] Run `yarn start` in `docs/` and verify the widget appears on the page
- [ ] Click the widget button and confirm the chat interface opens
- [ ] Ask a question and verify responses are returned from the DocsGPT backend
- [ ] Update `apiHost` to point to production before merging
Initial exploration of multi-app init/inner kernels. Currently just adds some basic lib utils and tests demonstrating feasibility of naively extending the current hard-coded single app pattern. --------- Co-authored-by: federicobarbacovi <171914500+federicobarbacovi@users.noreply.github.com>
…23058)

## Summary
Third in the series fixing search after #22861. Previous PRs (#23042, #23049) successfully indexed 14,773 records from 2,222 aztec-nr-api pages, but **users still don't see those records in the dropdown search**.

## Root cause
The `docusaurus-theme-search-typesense` package ANDs a contextual filter into every search query:

```ts
// docusaurus-theme-search-typesense/src/client/useTypesenseContextualFacetFilters.ts
const languageFilter = `language:=${locale}`;
const tagsFilter = `docusaurus_tag:=[${tags.join(',')}]`;
return [languageFilter, tagsFilter].filter(Boolean).join(' && ');
```

The api-nr records have neither `language` nor `docusaurus_tag` set, because the typesense-docsearch-scraper only stamps those onto records scraped from docusaurus pages (it reads `<meta name="docsearch:docusaurus_tag" content="...">` tags). Rustdoc-style nargo-doc pages don't emit those metas, so every api-nr record is missing the fields the theme filters on and is filtered out of every dropdown query.

## Fix
Three coordinated changes:

### 1. `extra_attributes` on the api-nr start_url (`docs/typesense.config.json`)
Stamp every api-nr record with the attributes the theme expects:

```json
"extra_attributes": {
  "language": "en",
  "docusaurus_tag": [
    "docs-participate-current",
    "docs-root-current",
    "default"
  ]
}
```

These cover the three unversioned plugin contexts. The two versioned ones (`docs-developer-${version}` and `docs-network-${version}`) are appended dynamically by the workflow (see change 3 below) so the static config doesn't go stale on version bumps. Typesense's `docusaurus_tag:=[<context-tag>]` matches if the record's array contains the context tag, so the api-nr records will satisfy the filter from any plugin context.

### 2. `field_definitions` schema override (`docs/typesense.config.json`)
The scraper's default schema (`scraper/src/typesense_helper.py` v0.11.0) declares the wildcard `.*_tag` as `string`, so sending an array for `docusaurus_tag` would be rejected at import time. `field_definitions` overrides this — but it REPLACES the entire default schema rather than merging, so the full default field list is reproduced verbatim with one targeted change: `docusaurus_tag` is added with type `string*` (accepts both string and array) before the `.*_tag` wildcard. Existing docusaurus records continue to work because they pass `docusaurus_tag` as a single string from a meta tag, and `string*` accepts that too.

### 3. Derive versioned tags at scrape time (`.github/workflows/docs-typesense.yml`)
Read `developer_version_config.json` and `network_version_config.json`, build the `docs-developer-${mainnet}`, `docs-developer-${testnet}`, `docs-network-${mainnet}`, `docs-network-${testnet}` strings (dropping empty/duplicates), and use `jq` to append them to the api-nr start_url's `docusaurus_tag` array before passing the config to the scraper. This way the static JSON never holds version-specific values that need manual updating. The workflow also switches to `set -euo pipefail` so a `jq` derivation failure aborts the run rather than feeding an empty config to docker.

## Caveats
- Existing 14,773 api-nr records in the production collection are stale until the next scraper run rewrites them. The scraper alias-swaps to a fresh collection on each run, so no manual purge is needed.

## Test plan
- [ ] Manually dispatch the `Docs Scraper` workflow on this branch via `workflow_dispatch`.
- [ ] Confirm the scraper run reports `Nb hits` ≈ 27,000 (no regression in record count).
- [ ] Confirm no schema-validation errors in the run log.
- [ ] Confirm the workflow log echoes the derived `docusaurus_tag` values matching the current docs versions (e.g. `docs-developer-v4.2.0`, `docs-network-v4.2.0`).
- [ ] After merge, search docs.aztec.network from the homepage, /developers/, and /operate/ for an Aztec.nr identifier (e.g. `ContractClassId`, `balance_set`, `compute_log_tag`) and confirm API reference pages appear in the dropdown in all three contexts.
Drops the `artifact` field from `SimulationOverrides.contracts` entries. Simulation now resolves the override-instance's `currentContractClassId` through PXE's locally registered classes — callers register the target artifact via `pxe.registerContractClass(...)` once, then construct an instance with the desired `currentContractClassId` to drive dispatch.

In-tree account-stub flows (`cli-wallet`, `embedded_wallet`, `test_wallet`) migrate to the pre-register pattern: pre-register the stub class and bump `currentContractClassId` on the override instance. `proxied_contract_data_source` drops its `getFunctionArtifact*` overrides — function lookup falls through to the regular `ContractStore`. The proxy now only overrides `getContractInstance`.

## Test plan
- All existing simulator unit tests pass against the refactored proxy.
- E2E account-stub flows (kernelless-override simulation mode) continue to work via the pre-register pattern.
Remove the chonk_bench as the risk of using it to make decisions is greater than the utility it provides.
Extends `SimulationOverrides` with a `publicStorage` field, plumbed
through `.simulate({ overrides })` on aztec.js → wallet → PXE →
`AztecNode.simulatePublicCalls`. Each entry writes a `(contract, slot,
value)` into the public-data tree of the ephemeral world-state fork
before the tx runs; real chain state is untouched.
```typescript
const result = await contract.methods.read_balance(account).simulate({
  overrides: {
    publicStorage: [{ contract: contract.address, slot: BALANCE_SLOT, value: new Fr(1_000_000n) }],
  },
});
```
## Test plan
- Unit tests in
`aztec-node/src/aztec-node/public_data_overrides.test.ts`.
- E2E `e2e_avm_simulator` `publicDataOverrides` describe block.
- `migration_notes.md` and `how_to_test.md` updated.
…ct upgrades) (#22932)

Adds `wallet.registerContractClass(artifact)` (a thin pass-through to PXE) so external callers can register a new class artifact locally before passing an instance override (next PR) that targets it. Without this, PXE-side ACIR dispatch can't resolve private functions of the override's class.

`ContractClassesCapability` gains a `canRegister?: boolean` flag so wallets can grant or deny the new method via the capability manifest, matching the existing `ContractsCapability.canRegister` pattern.

With `fastForwardContractUpdate` (next PR in stack) plus this method, a single `.simulate({ overrides })` covers both private and public function calls on an upgraded contract:

```typescript
await wallet.registerContractClass(UpdatedContract.artifact);
// and then upcoming work:
const overrides = await fastForwardContractUpdate({ instanceAddress, newClassId, node });
await updatedContract.methods.set_private_value().simulate({ from, overrides });
```
…g SIGABRT (#23077)

Fix BatchMergeTests/*.TooManySubtablesFails by clamping the degree-check loop in the base prover.

## Summary
Unblocks the [Nightly Debug Build](https://github.com/AztecProtocol/aztec-packages/actions/runs/25539203050) on `next`. With `_GLIBCXX_DEBUG` enabled, `BatchMergeTests/2.TooManySubtablesFails` aborts (exit 134) with:

```
Error: attempt to subscript container with out-of-bounds index 40, but container only holds 40 elements.
```

## Root cause
`TooManySubtablesFails` uses `BB_DISABLE_ASSERTS()` so it can drive the verifier with `N = max_subtables + 1` subtables. With BB asserts demoted to warnings, control flow falls past the `BB_ASSERT_LTE(N, M, ...)` guard at the top of `BatchMergeProver::construct_proof` and reaches `compute_degree_check_polynomial`, which iterates over `flattened_columns.size()` (sized for `N`) but indexes `degree_check_challenges` (sized for `(M+1) * NUM_WIRES`). In Release the over-read returns garbage that the verifier rejects — exactly what the test wants. In Debug, libstdc++'s `_GLIBCXX_DEBUG` (set by the `debug` preset's `CXXFLAGS`) bounds-checks `std::vector::operator[]` and traps. `BB_DISABLE_ASSERTS()` cannot suppress this — the bounds check is libstdc++'s, not bb's.

## Why the previously-merged fix didn't fix anything
PR #23019 (already merged into `merge-train/barretenberg` via the open #23025) was *titled* the same as this PR but only modified `TweakableBatchMergeProver::construct_proof`. The actual `TooManySubtablesFails` test path uses `prove_and_verify` with the default `fault_mode = FaultMode::NONE`, which routes through `BatchMergeProver` (the base class), not `TweakableBatchMergeProver`. I confirmed this in this container by building `goblin_tests` against `origin/merge-train/barretenberg` HEAD with the debug preset — the same OOB still aborts. The merge train cannot deliver this fix without a real prover-side change.

## Fix
Clamp the loop to `min(flattened_columns.size(), degree_check_challenges.size())` inside `BatchMergeProver::compute_degree_check_polynomial`. Normal paths are unaffected because both sizes equal `(M+1) * NUM_WIRES` whenever `N <= M` (the assert holds). The misuse path now produces a partial degree-check polynomial that the verifier still rejects in both Debug and Release — same observable behaviour, no UB. This is the same one-file change as the open draft #22976; opening this fresh PR against current `next` so it doesn't sit silently in draft.

## Verification
Reproduced inside the ClaudeBox container on `f23aa82c52` (current `next` HEAD) with the same flags as the nightly:

```bash
cd barretenberg/cpp
HOME=/tmp cmake --preset debug -DAVM_TRANSPILER_LIB=
cd build-debug && ninja goblin_tests
./bin/goblin_tests --gtest_filter='*TooManySubtablesFails*'
```

- Pre-fix: aborts on `BatchMergeTests/2.TooManySubtablesFails` with the OOB shown above (exit 134). Confirmed identical abort on `origin/merge-train/barretenberg` HEAD.
- Post-fix: 2 passed, 2 skipped (the native-curve cases self-skip). Full `goblin_tests` post-fix: **68 passed / 7 skipped / 0 failed** (342s).

## Related PRs
- #22937 (merged) — fixed the prior abort site (`gemini_masking_poly` virtual size).
- #23019 (merged into merge-train/barretenberg via open #23025) — same title, but the diff only modified `TweakableBatchMergeProver`, which is unused on this test path; it is a no-op for the nightly.
- #22976 (open draft) — same one-file prover clamp as this PR, against an older base.

Detailed analysis: https://gist.github.com/AztecBot/7be72c96a1d3d18458dce92a828116a2
ClaudeBox log: https://claudebox.work/s/8ad866e315acbe92?run=1

---------

Co-authored-by: federicobarbacovi <171914500+federicobarbacovi@users.noreply.github.com>
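The shape of the clamp fix can be illustrated with a small sketch. This is shown in TypeScript for brevity — the real change is C++ inside `BatchMergeProver::compute_degree_check_polynomial`, and the names and number types below are illustrative stand-ins, not the actual bb code.

```typescript
// Illustrative model of the fix: `columns` plays the role of flattened_columns
// (sized for N subtables) and `challenges` plays degree_check_challenges
// (sized for (M+1) * NUM_WIRES).
function degreeCheckTerms(columns: number[], challenges: number[]): number[] {
  // Clamp to the shorter array: on the disabled-assert misuse path (N > M),
  // columns is longer than challenges and unclamped indexing reads OOB.
  const bound = Math.min(columns.length, challenges.length);
  const terms: number[] = [];
  for (let i = 0; i < bound; i++) {
    terms.push(columns[i] * challenges[i]);
  }
  return terms;
}
```

On the normal path both lengths agree, so the clamp is a no-op; on the misuse path it yields a partial result that a verifier still rejects, without undefined behaviour.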
#23058 tried to make aztec-nr-api records visible in the docusaurus search dropdown by stamping `docusaurus_tag` as an array of plugin context tags, paired with a `field_definitions` schema override that declared `docusaurus_tag` as `string*` (string-or-array). In practice the override doesn't take effect: every api-nr document import is rejected by Typesense with `Field 'docusaurus_tag' must be a string.` The CI guard added in #23042 (MIN_HITS=5000) didn't trip because the ~12k non-api docs still passed.

The fix is much smaller. The docusaurus theme's contextual filter unconditionally prepends the constant `default` (DEFAULT_SEARCH_TAG in docusaurus-theme-common) to every dropdown query's `docusaurus_tag` list. So a single scalar value of `"default"` on api-nr records satisfies the filter from every plugin context, and no schema override is needed: the scraper's default `.*_tag: string` accepts the scalar cleanly.

Changes:
- `docs/typesense.config.json`: drop `field_definitions`; collapse api-nr `extra_attributes.docusaurus_tag` to scalar `"default"`.
- `.github/workflows/docs-typesense.yml`: drop the jq mutation that derived versioned tags (no longer needed). Add a post-index curl smoke check that searches the live alias for `docusaurus_tag:=[default]&&language:=en` and fails the run if fewer than MIN_API_HITS=1000 records are visible. No existing docusaurus page carries the `"default"` tag (each one is stamped with its plugin-context tag from the docsearch meta), so this count is effectively the count of indexed api-nr records.
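The theme-side behaviour that makes the scalar `"default"` tag sufficient can be sketched as follows. This is a simplified model of the contextual-filter construction quoted elsewhere in this thread, not the package source itself.

```typescript
// Simplified model of the docusaurus theme's contextual search filter.
const DEFAULT_SEARCH_TAG = 'default';

function contextualFilter(locale: string, docsContextTags: string[]): string {
  // docusaurus-theme-common always prepends the constant 'default' tag.
  const tags = [DEFAULT_SEARCH_TAG, ...docsContextTags];
  const languageFilter = `language:=${locale}`;
  const tagsFilter = `docusaurus_tag:=[${tags.join(',')}]`;
  return [languageFilter, tagsFilter].filter(Boolean).join(' && ');
}
```

Because `docusaurus_tag:=[...]` matches when a record's tag equals any member of the list, a record stamped with the single scalar `default` satisfies the filter from every plugin context.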
Keep the post-index curl as informational logging, but don't fail the run on api-nr count. Hard-coded thresholds don't auto-adjust as aztec-nr content size changes, and the count is more useful as a debugging signal than a CI gate.
…3097)

## Summary
Fourth in the series fixing search after #22861. After #23058 merged, the production index still has **0 records under `aztec-nr-api/mainnet/...`**. Confirmed by querying the live Typesense collection directly (`filter_by:url:=https://docs.aztec.network/aztec-nr-api/mainnet/*` returns `found: 0`) and by inspecting the most recent scraper run logs.

## Root cause
The schema override added by #23058 doesn't take effect. Every api-nr document import is rejected by Typesense with HTTP 400 `Field 'docusaurus_tag' must be a string.`, even though `custom_settings.field_definitions` lists an explicit `{ "name": "docusaurus_tag", "type": "string*" }` ahead of the wildcard `.*_tag: string`. Per the Typesense docs an explicit field should win over a regex pattern field, but in practice the wildcard's `string` type appears to be what's enforced. The CI guard from #23042 (`MIN_HITS=5000`) didn't trip because the ~12k non-api docs still passed.

## Fix
#23058 over-engineered the solution. Reading the docusaurus theme:

```ts
// docusaurus-theme-common/src/utils/searchUtils.ts
export const DEFAULT_SEARCH_TAG = 'default';
```

```ts
// docusaurus-theme-common/src/index.ts
const tags = [DEFAULT_SEARCH_TAG, ...docsTags];
return {locale: i18n.currentLocale, tags};
```

…the theme unconditionally prepends `'default'` to the `docusaurus_tag` filter on every dropdown query, in every plugin context. So api-nr records only need the single scalar value `"default"` to satisfy the filter from anywhere on the docs site. No array, no schema surgery, no version-specific tag derivation.

Three changes:

### 1. `docs/typesense.config.json`
Drop the `custom_settings.field_definitions` override entirely (the scraper's default schema with `.*_tag: string` accepts scalar string values cleanly), and collapse the api-nr `extra_attributes.docusaurus_tag` to scalar `"default"`.

### 2. `.github/workflows/docs-typesense.yml` — remove jq mutation
The jq block that derived versioned tags is no longer needed. The scraper now reads `docs/typesense.config.json` verbatim.

### 3. `.github/workflows/docs-typesense.yml` — log api-nr visibility post-index
After the scraper completes its alias swap, curl the live `aztec-docs` alias for `docusaurus_tag:=[default]&&language:=en` and log the count. No existing docusaurus page carries the `"default"` tag (each is stamped with its plugin-context tag, e.g. `docs-developer-v4.2.0`, from the `<meta name="docsearch:docusaurus_tag">` tag), so this count is effectively the count of indexed api-nr records — and the filter mirrors what the theme actually sends. Informational only; not gated by a threshold.

## Behavior change
api-nr records will now appear in the search dropdown from every plugin context (developer, network, root, participate) and every doc version (mainnet, testnet, nightly), because they're stamped with the always-prepended `"default"` tag rather than version-specific tags. Today we only generate `aztec-nr-api/mainnet/`, so a user browsing testnet developer docs would see mainnet aztec-nr API links in their dropdown. Probably desirable (an aztec-nr API symbol is the same regardless of which doc version you're reading), but a behavior change vs the (non-functional) #23058 attempt.

## Caveat
api-nr visibility now depends on the docusaurus theme's `DEFAULT_SEARCH_TAG = 'default'` invariant. If a future caller ever issues a search query that doesn't include `'default'` in the tag list (e.g. a custom search page bypassing `useContextualSearchFilters`), api-nr records would silently disappear from that surface.

## Test plan
- [ ] Manually dispatch the `Docs Scraper` workflow via `workflow_dispatch` on this branch.
- [ ] Confirm the run logs `Indexed N records (threshold: 5000)` with N >> 5000.
- [ ] Confirm the run logs `api-nr records visible under docusaurus_tag:=[default]: M` with M well above zero (#23049 indexed 14,773 api-nr records before the schema rejection started silently dropping them, so we expect a similar count).
- [ ] Confirm no `Field 'docusaurus_tag' must be a string.` 400s in the scraper output.
- [ ] After merge, search docs.aztec.network from the homepage, /developers/, /network/, and /participate/ for an Aztec.nr identifier (e.g. `ContractClassId`, `balance_set`, `compute_log_tag`, `address_note`) and confirm API reference pages appear in the dropdown in all four contexts.
More efficient Poseidon2 thanks to quad compression trick --------- Co-authored-by: ledwards2225 <l.edwards.d@gmail.com> Co-authored-by: notnotraju <raju@aztec.foundation>
BEGIN_COMMIT_OVERRIDE
fix(bb): clamp BatchMergeProver degree-check loop to fix nightly debug SIGABRT (#23019)
feat: extend databus with 2 more cols (#23010)
feat: n1 apps (#22974)
chore: remove chonk bench once and for all (#23067)
fix(bb): clamp BatchMergeProver degree-check loop to fix nightly debug SIGABRT (#23077)
feat!: optimized Poseidon2 (#22652)
END_COMMIT_OVERRIDE
…m targets

preset_cache_paths only matched targets by name plus native sibling extensions (.exe, .node, lib*.a). Emscripten emits a .js loader and a .worker.mjs pthread worker alongside every .wasm exec target as a unit, so cache_upload missed them and a wasm-threads cache hit restored barretenberg.wasm without the matching barretenberg.js. bb-ts/scripts/copy_wasm.sh then unconditionally copies cpp/build-wasm-threads/bin/barretenberg.js into dest/<flavor>/barretenberg_wasm/, which fast-mode CI hits before any cache miss can rebuild the cpp side, producing:

    cp: cannot stat '../cpp/build-wasm-threads/bin/barretenberg.js': No such file or directory

Extend preset_cache_paths so any target ending in .wasm also picks up the $stem.js and $stem.worker.mjs siblings from the same bin/ directory.
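The extended matching rule can be sketched as follows. This is TypeScript for illustration only — the actual change is bash inside `bootstrap.sh`'s `preset_cache_paths` — and the helper models just the new `.wasm` sibling rule, not the existing name/.exe/.node/lib*.a matching.

```typescript
// Sketch of the sibling-expansion rule for Emscripten .wasm exec targets.
function wasmCachePaths(target: string, binDir = 'bin'): string[] {
  const paths = [`${binDir}/${target}`];
  if (target.endsWith('.wasm')) {
    // Emscripten emits these two siblings alongside every .wasm exec target.
    const stem = target.slice(0, -'.wasm'.length);
    paths.push(`${binDir}/${stem}.js`, `${binDir}/${stem}.worker.mjs`);
  }
  return paths;
}
```

With this rule a wasm-threads cache entry carries barretenberg.js and barretenberg.worker.mjs together with barretenberg.wasm, so a cache hit restores everything copy_wasm.sh expects.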
## Summary
Tiny CI fix targeted directly at #22815's head branch (`claudebox/a62d1521c38e34c3-3`). Merging this PR updates #22815's branch in place without rewriting history, so #22815 picks up the cache fix immediately and re-runs CI.

This is the workaround for a session-side push guard: the claudebox bot session that produced this commit is restricted to `cb/`-prefixed branches, so it could not directly push to `claudebox/a62d1521c38e34c3-3`. Routing the patch as a PR-into-PR is the non-destructive equivalent.

## What it changes
`barretenberg/cpp/bootstrap.sh::preset_cache_paths` only matched a target by name plus three native sibling extensions (`.exe`, `.node`, `lib*.a`). The wasm-threads preset on #22815 (Emscripten) emits, for each `.wasm` exec target, three sibling files as a unit:

- `barretenberg.wasm` (the module)
- `barretenberg.js` (the loader / glue)
- `barretenberg.worker.mjs` (pthread worker, threads preset only)

plus the existing `barretenberg.wasm.gz`, which the targets list already names. The matcher caught only `barretenberg.wasm` and `barretenberg.wasm.gz`, so `cache_upload` silently dropped the `.js` loader and `.worker.mjs`. On a subsequent cache hit, `barretenberg/ts/scripts/copy_wasm.sh` then unconditionally copies `cpp/build-wasm-threads/bin/barretenberg.js` into `dest/<flavor>/barretenberg_wasm/` and fails with:

    cp: cannot stat '../cpp/build-wasm-threads/bin/barretenberg.js': No such file or directory

This is the exact `bb-ts` failure on #22815's most recent CI run (http://ci.aztec-labs.com/067043b7ad90da31).

## Fix
Extend `preset_cache_paths`: for any target whose name ends in `.wasm`, also pick up `<stem>.js` and `<stem>.worker.mjs` from the same `bin/` directory.

## Test plan
- On `claudebox/a62d1521c38e34c3-3`, `bb-ts` no longer trips on the missing `barretenberg.js` after a wasm-threads cache hit.

## Note on `check-squashed`
Still red on #22815 because the PR has 15 commits. That's a separate hygiene check — squash on merge or add `ci-no-squash`.