Commit 498b961

test(e2e): fix race in 'proposer invalidates multiple checkpoints' (#23259)
Addresses a config-timing race in `epochs_invalidate_block.parallel.test.ts > "proposer invalidates multiple checkpoints"` that caused intermittent CI failures with `expect(validCount).toBeLessThan(quorum)` (e.g. 5/6 attestations when quorum=5).

## The race

The test reads `currentSlot` via `monitor.run()` right after waiting for the first checkpoint to land; that read can land anywhere within the current L2 slot, including near its end. It then computes `badSlot1 = currentSlot + 2` and races to push malicious config (`skipCollectingAttestations: true`, …) to that slot's proposer via `await node.setConfig({...})`.

`CheckpointProposalJob` is constructed with `this.config` passed by reference (`sequencer-client/src/sequencer/sequencer.ts:559`), and `Sequencer.updateConfig` reassigns `this.config = merge(...)` rather than mutating, so a job built before `setConfig` lands keeps the old config object.

Under proposer pipelining (`PROPOSER_PIPELINING_SLOT_OFFSET = 1`, `epoch-cache/src/epoch_cache.ts:26`), the job for `badSlot1` is built during the last L1 slot of L2 slot `badSlot1 - 1`. With 32s L2 slots and 8s L1 slots, that is ~24s into the previous L2 slot, so if `currentSlot` was read late, badSlot1's proposer can snapshot the old config before our `setConfig` round-trip completes.

## Fix

- Wait for an L2 slot boundary (`monitor.waitUntilNextL2Slot()`) before reading `currentSlot`, so we start from the beginning of a slot rather than wherever we happened to land.
- Bump the gap from `+2/+3` to `+3/+4` for a second slot of margin. The cost is up to one additional L2 slot of test runtime in the worst case; the existing 8-slot wait window for both checkpoints still fits.
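The reference-capture hazard described above can be illustrated with a minimal TypeScript sketch. This is not the actual sequencer code; the `Config` shape, class, and method bodies are simplified stand-ins that only mirror the relevant pattern (`updateConfig` reassigns a new object, while a job captures the old one by reference):

```typescript
// Minimal sketch (NOT the real sequencer) of why reassigning `this.config`
// loses updates for consumers that captured the previous object by reference.
type Config = { skipCollectingAttestations: boolean };

class Sequencer {
  config: Config = { skipCollectingAttestations: false };

  // Mirrors `this.config = merge(...)`: builds a NEW object instead of mutating in place.
  updateConfig(patch: Partial<Config>): void {
    this.config = { ...this.config, ...patch };
  }

  buildJob() {
    // The job snapshots the current config object by reference at construction time.
    const snapshot = this.config;
    return { sees: () => snapshot.skipCollectingAttestations };
  }
}

const seq = new Sequencer();
const job = seq.buildJob(); // job built BEFORE the config update lands
seq.updateConfig({ skipCollectingAttestations: true });
console.log(job.sees()); // false: the job still holds the old config object
console.log(seq.config.skipCollectingAttestations); // true: later jobs would see the update
```

This is why pushing config "in time" matters in the test: once the pipelined `CheckpointProposalJob` for `badSlot1` has been constructed, a subsequent `setConfig` can no longer affect it.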
1 parent 1eaca4b commit 498b961

1 file changed

Lines changed: 10 additions & 3 deletions

yarn-project/end-to-end/src/e2e_epochs/epochs_invalidate_block.parallel.test.ts

```diff
@@ -378,12 +378,19 @@ describe('e2e_epochs/epochs_invalidate_block', () => {
     // Wait for at least one checkpoint to be mined so that any in-progress slot has completed
     const initialCheckpointNumber = (await nodes[0].getChainTips()).checkpointed.checkpoint.number;
     await test.waitUntilCheckpointNumber(CheckpointNumber(initialCheckpointNumber + 1), test.L2_SLOT_DURATION_IN_S * 4);
+
+    // Align to the start of an L2 slot before computing the bad slots, so we have a generous
+    // buffer to push the malicious config to badSlot1's proposer before it snapshots its config
+    // into a new CheckpointProposalJob. Under proposer pipelining, that job is built during the
+    // last L1 slot of the previous L2 slot (when getEpochAndSlotInNextL1Slot first returns the
+    // proposer's target slot), so the practical window is somewhat less than a full L2 slot.
+    await test.monitor.waitUntilNextL2Slot();
     const { l2SlotNumber: currentSlot } = await test.monitor.run();
     logger.warn(`First checkpoint mined, current slot is ${currentSlot}`);

-    // Pick the next two slots after the current one, with a 1-slot gap to account for pipelining
-    const badSlot1 = SlotNumber.add(currentSlot, 2);
-    const badSlot2 = SlotNumber.add(currentSlot, 3);
+    // Pick the next two slots with a 2-slot gap to account for pipelining plus a margin
+    const badSlot1 = SlotNumber.add(currentSlot, 3);
+    const badSlot2 = SlotNumber.add(currentSlot, 4);
     const badSlots = [badSlot1, badSlot2];
     const badProposers = await Promise.all(badSlots.map(s => test.epochCache.getProposerAttesterAddressInSlot(s)));
```
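The timing margin the fix buys can be checked with back-of-the-envelope arithmetic using the numbers quoted in the commit message (32s L2 slots, 8s L1 slots, pipelining offset 1). The helper below is an illustrative sketch, not project code:

```typescript
// Sketch: seconds available between reading `currentSlot` and the moment the
// pipelined proposer job for `currentSlot + gap` is built. Constants are taken
// from the commit message; the function itself is illustrative, not real code.
const L2_SLOT_S = 32;
const L1_SLOT_S = 8;

function secondsUntilJobBuild(readOffsetIntoSlot: number, gap: number): number {
  // The job for the target slot is built during the LAST L1 slot of the
  // preceding L2 slot, i.e. (L2_SLOT_S - L1_SLOT_S) seconds into that slot.
  const buildTime = (gap - 1) * L2_SLOT_S + (L2_SLOT_S - L1_SLOT_S);
  return buildTime - readOffsetIntoSlot;
}

// Before the fix: `currentSlot` read anywhere in the slot, gap of 2.
console.log(secondsUntilJobBuild(31, 2)); // near-worst case: 25s for the setConfig round-trip
// After the fix: read at a slot boundary, gap of 3.
console.log(secondsUntilJobBuild(0, 3)); // 88s of margin
```

Under these assumptions the fix grows the worst-case window from roughly two L1 slots to nearly three full L2 slots, which comfortably absorbs CI scheduling jitter.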
