chore: merge next into merge-train/fairies#23021
Merged
⚠️ **This PR includes #22870. Reviewers should review only the first commit.** ⚠️

## Motivation

Consolidates the block-related lookup surface on `L2BlockSource` from ~17 narrow methods returning ~9 different shapes down to 4 methods returning 2 shapes (`L2Block` and `BlockData`). Replaces the per-shape getters with discriminated query objects that carry both the lookup discriminant and a single `onlyCheckpointed` filter, removing the parallel `Checkpointed*` API and the throwaway wrapper types.

Additionally, this refactor centralizes block-zero handling in the archiver and threads the dynamic initial header through every component that previously hard-coded the constant, eliminating the divergence and removing the special-case branches in callers.

## Approach

`L2BlockSource` exposes 4 methods that take query objects:

```ts
getBlock(query: BlockQuery): Promise<L2Block | undefined>
getBlocks(query: BlocksQuery): Promise<L2Block[]>
getBlockData(query: BlockQuery): Promise<BlockData | undefined>
getBlocksData(query: BlocksQuery): Promise<BlockData[]>

type BlockQuery = ({number} | {hash} | {archive}) & { onlyCheckpointed?: boolean }
type BlocksQuery = ({from, limit} | {epoch}) & { onlyCheckpointed?: boolean }
```

The on-disk format is unchanged — the archiver already stored block metadata, tx bodies, and per-checkpoint L1/attestation data in separate LMDB maps; `CheckpointedL2Block` was only an in-memory join produced at read time.

**Includes changes from #22870.**

## API surface change

### Methods removed from `L2BlockSource`

`getL2Block`, `getL2BlockByHash`, `getL2BlockByArchive`, `getCheckpointedBlock`, `getCheckpointedBlockByHash`, `getCheckpointedBlockByArchive`, `getCheckpointedBlocks`, `getCheckpointedBlocksForEpoch`, `getCheckpointedBlockHeadersForEpoch`, `getBlock(number)`, `getBlocks(from, limit)`, `getBlockData(number)`, `getBlockDataByArchive`, `getBlockDataWithCheckpointContext`, `getBlockHeader`, `getBlockHeaderByHash`, `getBlockHeaderByArchive`.
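As a rough illustration of how a caller-side dispatch on the query discriminant might look, here is a minimal synchronous sketch. The shapes are simplified stand-ins (plain `number`/`string` fields instead of the real `L2Block`, block-hash, and archive types), and the real methods are async:

```typescript
// Sketch only: simplified stand-ins for the real BlockQuery / L2Block shapes.
type BlockQuery = ({ number: number } | { hash: string } | { archive: string }) & {
  onlyCheckpointed?: boolean;
};

interface Block {
  number: number;
  hash: string;
  archive: string;
  checkpointed: boolean;
}

function getBlock(blocks: Block[], query: BlockQuery): Block | undefined {
  // Narrow on the discriminant: number, hash, or archive.
  const match = blocks.find(b =>
    'number' in query ? b.number === query.number :
    'hash' in query ? b.hash === query.hash :
    b.archive === query.archive,
  );
  if (!match) return undefined;
  // The single onlyCheckpointed filter replaces the parallel Checkpointed* API.
  return query.onlyCheckpointed && !match.checkpointed ? undefined : match;
}
```

The same query object thus expresses every lookup shape that previously needed a dedicated getter.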
### Types deleted

`CheckpointedL2Block`, `BlockDataWithCheckpointContext` — both removed entirely (file + schema + re-exports). Callers that previously read `.l1` / `.attestations` off these now do `getBlockData(...)` followed by `getCheckpointData(blockData.checkpointNumber)` and read those fields off `CheckpointData`.

### Types added

`BlockQuery`, `BlocksQuery` (and matching Zod schemas) on `L2BlockSource`. No new domain types — `L2Block`, `BlockData`, `BlockHeader` are unchanged.

### AztecNode public RPC

Method names are preserved (`getBlock`, `getBlockHeader`, `getCheckpointedBlocks`, etc. — bodies delegate internally to the new `L2BlockSource` methods). One wire-level change: the `AztecNode.getCheckpointedBlocks` element type goes `CheckpointedL2Block[]` → `BlockResponse[]`, forced by the type deletion. Older RPC clients that parse the old shape will need to update.

## Changes

- **stdlib**: `BlockQuery` / `BlocksQuery` types + Zod schemas next to `L2BlockSource`. `CheckpointedL2Block` file deleted; `BlockDataWithCheckpointContext` removed from `block_data.ts`. `ArchiverApiSchema` and `MockArchiver` shrunk; new `it()` blocks cover each query discriminant. `L2BlockStream` migrated.
- **archiver**: `BlockStore` consolidates to four query-object reads plus iterators. `data_source_base.ts` adds `resolveBlocksQuery` that translates `{ epoch }` → `{ from, limit }` (returns `null` for empty epochs so callers short-circuit to `[]`). Mocks honor `onlyCheckpointed`.
- **aztec-node**: `server.ts` keeps the public RPC method names but delegates to the new query methods. `getCheckpointedBlocks` adds a per-call `Map<CheckpointNumber, CheckpointData>` cache to avoid an N+1.
- **consumer migrations**: `world-state`, `txe`, `p2p` block-txs handler, `validator-client` (`validator.ts`, `proposal_handler.ts`), `pxe` block-stream source (honors `onlyCheckpointed` via `node.getL2Tips`), `prover-node`, `sequencer-client`, `telemetry-client`, `aztec/testing`, `L2BlockStream` in stdlib.
- **tests**: per-package mocks updated for the new shapes; new test covers `getBlocks({ epoch })` empty-epoch returning `[]`.
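The `{ epoch }` → `{ from, limit }` translation in the archiver can be sketched roughly as below. The epoch-bounds lookup is a hypothetical stand-in (the real `resolveBlocksQuery` in `data_source_base.ts` derives the range from stored epoch data), and the shapes are simplified:

```typescript
// Simplified stand-ins for the real query shapes.
type BlocksQuery = ({ from: number; limit: number } | { epoch: number }) & {
  onlyCheckpointed?: boolean;
};
type BlockRange = { from: number; limit: number; onlyCheckpointed?: boolean };

// epochBounds is a hypothetical callback standing in for the archiver's
// stored epoch data; it returns the first/last block numbers of an epoch.
function resolveBlocksQuery(
  query: BlocksQuery,
  epochBounds: (epoch: number) => { start: number; end: number } | undefined,
): BlockRange | null {
  if ('epoch' in query) {
    const bounds = epochBounds(query.epoch);
    // Empty epoch: return null so callers short-circuit to [].
    if (!bounds || bounds.end < bounds.start) return null;
    return {
      from: bounds.start,
      limit: bounds.end - bounds.start + 1,
      onlyCheckpointed: query.onlyCheckpointed,
    };
  }
  // { from, limit } queries pass through unchanged.
  return query;
}
```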
…2935) Updates the custom no-non-primitive-in-collections rule to allow branded primitive types like `BlockNumber` as keys in `Map`s or `Set`s.
End-to-end test for the "missed L1 publish" scenario under proposer pipelining.

Each of 4 nodes holds exactly one validator key. We pick four consecutive slots (slotZero, slotOne, slotTwo, slotThree) such that the proposers for slotOne, slotTwo, and slotThree are three distinct validators, then warp to one L1 block before slotZero begins. The proposer for slotOne is configured to skip its L1 publish.

With pipelining, the proposer for slot N+1 builds and gossips its checkpoint during slot N, then publishes that checkpoint to L1 during slot N+1. So gossip-driven `proposed` chain advances arrive one slot earlier than the L1-driven `checkpointed` advance.

Expected behavior:

- During slotZero, the pipelined proposer for slotOne gossips its build → every node's `proposed` tip advances to a block at slotOne.
- During slotOne, the pipelined proposer for slotTwo gossips on top of the slotOne proposal → `proposed` advances to a block at slotTwo. Meanwhile the proposer for slotOne attempts L1 publish but is configured to skip it, so no checkpoint lands.
- When slotOne ends with no checkpoint mined, every node's archiver prunes the uncheckpointed slotOne and slotTwo blocks; we verify rollback via the prune event. We then re-enable publishing on the formerly suppressed node so recovery can proceed.
- During slotTwo, the pipelined proposer for slotThree builds on top of the (now genesis) checkpointed tip → `proposed` advances again.
- During slotThree, that pipelined work is published → `checkpointed` finally advances.
`USE_XX_HASH` was never true in production; discv5 already validates via `checkCompressedComponentVersion`. Removes xxhash/`toBufferBE` from `versioning.ts` and tests the string format only.
Closes a slashing-soundness gap in the checkpoint attestation pool: two
different checkpoint proposals (or attestations) at the same slot with
identical archive root were considered equal in the attestation pool,
since the pool keyed on `archive`. So we'd never slash for
`DUPLICATE_PROPOSAL` / `DUPLICATE_ATTESTATION`.
There were two scenarios where two different proposals could have the
same archive root: a malicious or buggy node that sent two proposals with
the same root but different content (i.e. an archive root that doesn't
follow from the payload, i.e. an invalid one), or a malicious or buggy
node that sent two equal proposals but with different
`feeAssetPriceModifier`, which is not covered by the archive root.
Fixes A-1013
## Pool changes
- **Dedup by payload hash, not by archive.** Each equivocation position
has two stores: a main store that keeps the *first* full entry seen at
that position, and a parallel multimap that tracks the *set of distinct
signed-payload hashes* seen there. A second distinct hash arriving at
the same position bumps the multimap count to 2, which trips `tryAdd*`'s
`count` and lets libp2p fire its duplicate callback /
`WANT_TO_SLASH_EVENT`. Bytes of the equivocating payload are not
retained.
- `attestationPerSlotAndSigner` (full) +
`attestationHashesPerSlotAndSigner` (hashes), keyed by `(slot, signer)`.
- `checkpointProposalPerSlot` (full) + `checkpointProposalHashesPerSlot`
(hashes), keyed by slot.
- `blockProposalPerSlotAndIndex` (full) +
`blockProposalHashesPerSlotAndIndex` (hashes), keyed by `(slot,
indexWithinCheckpoint)`.
- Hash is `keccak256(getPayloadToSign())`, **never over the signature**,
so non-deterministic ECDSA re-signs of the same payload do not look like
equivocation.
- `CheckpointProposal.getPayloadHash()` hashes through the
`ConsensusPayload` form so a checkpoint proposal's hash matches the
hashes of the attestations that signed it (proposers and attesters sign
different byte layouts of the same logical content).
- Secondary `blockProposalSlotAndIndexPerArchive` index keeps
`getBlockProposalByArchive(archive)` (used by the block-txs req/resp
protocol) resolving by archive root without a wire-protocol change. The
lookup now validates that the stored proposal's archive matches the
requested one and warns + returns `undefined` on mismatch.
- On-disk kv-store map names are unchanged; only the in-memory field
names and the *value format* (now `0x`-prefixed payload-hash hex) are
new.
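The two-store-per-position scheme above can be sketched roughly as follows. Names are illustrative, sha256 stands in for the keccak256 used by `getPayloadHash()`, and plain strings stand in for the real signed-payload bytes:

```typescript
import { createHash } from 'node:crypto';

// Sketch: per equivocation position, keep the first full entry seen plus the
// set of distinct payload hashes; a second distinct hash signals equivocation.
type Entry = { payload: string; signature: string };

class EquivocationStore {
  private first = new Map<string, Entry>(); // main store: first full entry
  private hashes = new Map<string, Set<string>>(); // multimap: distinct payload hashes

  /** Returns true when this add reveals a second distinct payload at `position`. */
  tryAdd(position: string, entry: Entry): boolean {
    // Hash the payload only, never the signature, so non-deterministic
    // re-signs of the same payload are not mistaken for equivocation.
    const h = createHash('sha256').update(entry.payload).digest('hex');
    if (!this.first.has(position)) this.first.set(position, entry);
    const set = this.hashes.get(position) ?? new Set<string>();
    set.add(h);
    this.hashes.set(position, set);
    return set.size >= 2; // trips the duplicate callback / WANT_TO_SLASH path
  }
}
```

Note that the bytes of the equivocating payload are not retained, matching the description above: only its hash lands in the multimap.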
## Branded payload-hash types
- `CheckpointProposalHash` and `BlockProposalHash` introduced as
`Branded<0x${string}>` in `foundation/branded-types`, so the two cannot
be confused at the TS type level.
- `CheckpointAttestation.getPayloadHash()` returns
`CheckpointProposalHash` (the attestation and its proposal sign the same
payload).
- `generateP2PMessageIdentifier()` on `CheckpointProposal` /
`CheckpointAttestation` / `BlockProposal` now derives from the **same
bytes** as `getPayloadHash()` (returning `Buffer32` rather than the
`0x`-string), so libp2p's gossip dedup identity and the attestation
pool's dedup identity agree. A shared `getPayloadHashBuffer()` helper on
`BlockProposal` avoids double-hashing.
## Handler cache (validator-client)
- `ProposalHandler.lastCheckpointValidationResult` now keys by
`CheckpointProposalHash` instead of `(archive, slot)`. Without this fix,
two proposals at the same slot+archive with a differing
`feeAssetPriceModifier` would have shared a cached validation result and
the second proposal would have skipped re-validation.
## API renames
- `AttestationPool.getCheckpointProposal(slot)` — was
`getCheckpointProposal(archive)`.
- `AttestationPool.getBlockProposalByArchive(archive)` — was
`getBlockProposal(archive)`; now validates the resolved proposal's
archive matches.
- `AttestationPool.getCheckpointAttestationsForSlotAndProposal(slot,
proposalPayloadHash)` — was `(slot, archive)`.
- `P2PApi.getCheckpointAttestationsForSlot(slot, proposalPayloadHash?)`.
- Sentinel stores `proposalPayloadHash` alongside archive; validator
client passes `proposal.getPayloadHash()` to filter attestations.
## Tests
- New pool-level test verifies
same-archive-different-`feeAssetPriceModifier` is recognised as an
equivocation.
- New libp2p_service tests verify the equivocation surfaces all the way
to the slash callback for checkpoint proposals, attestations (incl.
negative test for two distinct signers), and block proposals.
- New proposal_handler test verifies the cache is not shared across
proposals that differ only on `feeAssetPriceModifier`.
Notify Slack users directly in Grafana alerts
BEGIN_COMMIT_OVERRIDE
refactor(archiver)!: simplify L2BlockSource block lookups (#22809)
chore(lint): allow branded primitive types as keys in collections (#22935)
test(e2e): test missed l1 publishing under pipelining (#22926)
fix: dedup attestation pool by payload hash (#22871)
chore: notify slack users directly (#22944)
END_COMMIT_OVERRIDE
Two small mem optimizations targeting alloc-heavy slices of `Chonk::accumulate`:

**Skip zero-init for fully-overwritten polynomials**

- `PartiallyEvaluatedMultivariates`: `partially_evaluate` writes every cell in `[0, desired_size)` before any read; the virtual tail past `desired_size` is served by `SharedShiftedVirtualZeroesArray`'s implicit zeros.
- ProverInstance sigma/id: `compute_honk_style_permutation_lagrange_polynomials_from_mapping` writes every cell in the active range; the virtual tail outside it remains implicitly zero.

Adds an overload `Polynomial::shiftable(size, virtual_size, DontZeroMemory)` that mirrors the existing `shiftable()` factory but leaves the backing memory uninitialized.

**Reserve copy_cycle vectors**

Adds a counting pre-pass over all blocks that tallies copy-cycle sizes per real-variable index, then `reserve()`s each `copy_cycles[i]` once before the serial concat in phase 1.5. Eliminates the amortized realloc cost across the concat.

## Perf (3-run native median, transfer_1+sponsored_fpc, 16 threads)

| Operation | Before (ms) | After (ms) | Delta |
|-------------------------------------------------|------------:|-----------:|---------:|
| fill_copy_cycles | 62.66 | 13.76 | -78.05% |
| allocate_permutation_argument_polynomials | 13.96 | 1.81 | -87.06% |
| Polynomial::zero_init (system-wide) | 549.71 | 213.84 | -61.10% |
| Polynomial::Polynomial(size_t,size_t,size_t) | 163.46 | 81.26 | -50.29% |
| Chonk::accumulate (warm aggregate) | 2881.23 | 2835.98 | -1.57% |
| Chonk::prove | 2119.53 | 2088.02 | -1.49% |
| ChonkAPI::prove (total wall) | 5834.66 | 5757.96 | -1.31% |
…e rename

- Add setup-local.sh for running against a locally-built sandbox (no Docker)
- Make artifact directory configurable via --artifacts-dir CLI arg
- Rename bridge.mjs -> wallet-bridge.mjs to avoid ambiguity
- Wait for port release after kill in setup-local.sh
- Document local vs Docker artifact path distinction in README

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Added setup-local.sh for running against a locally-built sandbox, along with some small related improvements. Tested on mainframe off the current next.
Implements the infrastructure required to support multiple apps per kernel. This PR generalises the databus-related infrastructure, and also generalises the mocking of IVC state, the mock circuit producers, and the public-inputs structure, so that we can easily transition to kernels processing multiple apps.

---------

Co-authored-by: ledwards2225 <l.edwards.d@gmail.com>
#22982) The 6 boolean pairs (`non_revertible_append_X`, `revertible_append_X`) for X in {note_hash, nullifier, l2_l1_msg} in `TX_PHASE_SPEC_MAP` were perfectly correlated with `is_revertible`: in every phase where one of them is set, `is_revertible` already determines which side.

Replace each pair with a single `sel_append_X` selector and let `is_revertible` carry the revertibility bit. This removes 3 precomputed columns and 3 committed columns in the tx trace, shrinks the `#[READ_PHASE_SPEC]` lookup tuple, and simplifies the `sel_try_X_append` / `is_tree_insert_phase` / `SEL_CAN_EMIT_X` expressions.

In `#[NOTE_HASH_APPEND]`, the `sel_unique` flag (previously fed by `sel_revertible_append_note_hash`) is now `is_revertible` directly: on the two rows where the lookup is gated on, the two values agree by construction, and `is_revertible` is the more direct semantic for the "make this note hash unique with a nonce" decision.

Updates the hardcoded precomputed VK commitments in `avm_fixed_vk.hpp` and the corresponding `vk_hash`; new values obtained from `AvmFixedVKTests.FixedVKCommitments`.
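The redundancy argument can be sketched as a lossless round-trip, under the PR's stated invariant that at most one selector of each pair is set per phase and `is_revertible` determines which side (TypeScript used for illustration; the real change is in the AVM trace spec):

```typescript
// Old representation: one boolean pair per append kind.
type OldSpec = { nonRevertibleAppend: boolean; revertibleAppend: boolean };
// New representation: a single append selector; is_revertible carries the side.
type NewSpec = { selAppend: boolean; isRevertible: boolean };

// Collapsing the pair loses nothing, because which side was set is already
// implied by is_revertible in every phase where either selector is on.
function toNew(old: OldSpec, isRevertible: boolean): NewSpec {
  return { selAppend: old.nonRevertibleAppend || old.revertibleAppend, isRevertible };
}

// Expanding back: is_revertible picks the side.
function toOld(spec: NewSpec): OldSpec {
  return {
    nonRevertibleAppend: spec.selAppend && !spec.isRevertible,
    revertibleAppend: spec.selAppend && spec.isRevertible,
  };
}
```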
## Summary

Removes dead code from `p2p/src/versioning.ts`:

- `USE_XX_HASH` was never `true` outside tests; production ENRs always used the compressed string from `compressComponentVersions`.
- Peer discovery already validates with `checkCompressedComponentVersion` in `discV5_service.ts`; `checkAztecEnrVersion` was only used from tests.
- Drops `xxhash-wasm` / `toBufferBE` from this module (gossip `encoding.ts` still uses xxhash for message IDs).

Related to [A-766](https://linear.app/aztec-labs/issue/A-766/audit-97-enr-version-detection-uses-string-prefix-matching-fragile)
Two improvements:

1. `add_scaled_batch` was iterating over all polys to be batched and processing indices based on the range of the destination poly (the biggest of the polys to be batched). This PR adds a skipping condition that speeds up the function: we only iterate over the indices of the poly being batched.
2. Write a bespoke `add_batch_scaled` for use in the AVM with dynamic allocation of threads: each thread picks up the next available poly and works on it. This makes optimal use of the fact that many polys in the AVM are small.

Link to AVM bulk test: http://ci.aztec-labs.com/1df80aa9b6ae0088. The PCS component is `446ms`, down from ~`600ms`.
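The skipping condition in improvement 1 can be sketched as below (TypeScript for illustration; plain numbers stand in for field elements, and the real code operates on `Polynomial` spans in C++):

```typescript
// Accumulate each source polynomial scaled into dest, iterating only over
// each source's own length rather than the destination's (largest) length.
function addScaledBatch(dest: number[], polys: number[][], scalars: number[]): void {
  for (let p = 0; p < polys.length; p++) {
    const poly = polys[p];
    const s = scalars[p];
    // Skip the implicit-zero tail: only walk the indices this poly actually has.
    for (let i = 0; i < poly.length; i++) {
      dest[i] += s * poly[i];
    }
  }
}
```

For many short polys batched into one long destination, this avoids touching the zero tail of every short poly on every pass, which is where the reported savings come from.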
BEGIN_COMMIT_OVERRIDE refactor(avm)!: consolidate revertible/non-revertible append selectors (#22982) END_COMMIT_OVERRIDE
Automated update of Noir submodule to latest nightly. **Current**: unknown **New**: nightly-2026-05-05 [View changes in noir-lang/noir](noir-lang/noir@20391fd...nightly-2026-05-05)
Added bug bounty to Security.md
…2937)

## Summary

Nightly debug build was failing again because `OinkProver::commit_to_masking_poly` allocates `gemini_masking_poly` with size `max_end_index()`. When that value is odd, sumcheck's pairwise iteration over `(edge_idx, edge_idx + 1)` reads one element past the polynomial — `compute_effective_round_size` itself rounds the iteration bound up to even, but the masking polynomial wasn't padded to match. In release builds this is silent UB; in debug, the bounds-checked accessor inside `Polynomial::operator[]` trips and the build fails very early (~2 minutes wall-clock, consistent with the failing job duration).

Fix: round the masking polynomial's allocation up to even at construction so the layout matches what sumcheck assumes. The change only ever enlarges the polynomial by at most one element and never affects non-ZK flavors (gated by `flavor_has_gemini_masking<Flavor>()`).

## Recurrence

This is the **fifth** independent claudebox session converging on this exact patch over the past four days; the prior four (`claudebox/fix-bb-debug-nightly` 2026-05-01, `claudebox/fix-nightly-bb-debug-build` 2026-05-03, PR #22918 2026-05-04, plus an earlier branch) were not merged, so the bug recurred each night. The diff here is byte-for-byte identical to the closed PR #22918.

Most recent visible failed run: https://github.com/AztecProtocol/aztec-packages/actions/runs/25303760883

## Verification

- `cmake --preset debug` configures cleanly.
- `ninja ultra_honk_tests` builds cleanly in the `debug` preset.
- `./bin/ultra_honk_tests --gtest_filter='*Gemini*:*ZK*:*Mask*'` — **28 passed, 6 skipped** (the 6 skips are flavor-gated, not regressions).
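The fix amounts to rounding the allocation size up to the next even number, sketched here in TypeScript for illustration (the real one-element pad lives in the C++ polynomial construction):

```typescript
// Sumcheck's pairwise (edge_idx, edge_idx + 1) iteration assumes an even
// length, so the masking polynomial's allocation is rounded up to even.
// Adds at most one element; even sizes are unchanged.
function roundUpToEven(size: number): number {
  return size + (size & 1);
}
```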
Detailed analysis: https://gist.github.com/AztecBot/97de8e254a5df2dad90f895be7d28f08

ClaudeBox log: https://claudebox.work/s/bd19299558fa14bc?run=1

---------

Co-authored-by: sergei iakovenko <105737703+iakovenkos@users.noreply.github.com>
Co-authored-by: iakovenkos <sergey.s.yakovenko@gmail.com>
…th changes (#23014) Adds a "Prover.toml Fixtures" subsection to `barretenberg/cpp/CLAUDE.md` documenting the `AZTEC_GENERATE_TEST_DATA=1 FAKE_PROOFS=1 yarn workspace @aztec/end-to-end test full.test` regen step that proof-length changes (e.g. `CHONK_PROOF_LENGTH` bumps) require. Without it, `nargo execute` fails type-checking on every protocol circuit using `ChonkProofData`.
Thunkar approved these changes May 7, 2026
Same single conflict as PR #22991, recurring because new commits landed on `next` (e.g. #23011, p2p versioning, AVM tx changes) before fairies merged forward.

Conflict in `yarn-project/pxe/src/pxe.ts` around the PXE store-construction block — `merge-train/fairies` uses the `openPxeStores(store, initialBlockHash)` factory; `next` still has the inline `new AddressStore / ... / new L2TipsKVStore(store, 'pxe', initialBlockHash)` pattern. Resolution keeps the factory; `initialBlockHash` is already threaded through it.

No other files needed in this round — the previous round's `schema_tests.ts` follow-up is already on the branch. All five `new L2TipsKVStore(...)` callers in yarn-project pass the three required args.

Details: https://gist.github.com/AztecBot/cee6d74ae6e439f71f16de4ef05e5a12
CI will validate the merge.
ClaudeBox log: https://claudebox.work/s/8a13135d95b87e96?run=3