p2p/sentry, rpc: restore eth/71 server-side dispatch (EIP-7928 / EIP-8159 Amsterdam)#20893
Open
(Phase 5a) Subscribes to GET_BLOCK_ACCESS_LISTS_71 and routes it to a new handler that answers with BlockAccessLists sourced from rawdb via the handler added in Phase 3. After this commit, two erigon nodes running with the eth/71-aware stack can complete the request/response round trip at the wire level: node A sends GetBlockAccessLists; node B decodes, looks up the BALs, and replies with BlockAccessLists positionally aligned.

Changes in p2p/sentry/sentry_multi_client/sentry_multi_client.go:

- RecvUploadMessageLoop subscribes to the new request MessageId (GET_BLOCK_ACCESS_LISTS_71) alongside the existing GetBlockBodies / GetReceipts subscriptions.
- New method getBlockAccessLists71 mirrors getBlockBodies66: decode the eth/66 request-id envelope, open a read-only tx, call the Phase 3 handler (eth.AnswerGetBlockAccessListsQuery), encode the reply as BlockAccessListsPacket66 with the matching request id, and send via sentry.SendMessageById to BLOCK_ACCESS_LISTS_71.
- Placeholder blockAccessLists71 (no-op) is wired for inbound responses so the sentry routing table doesn't error. The full response path — request-id matching, keccak256 validation against the header's BlockAccessListHash, bad-peer scoring, and writing to rawdb — lives in the client fetcher landing next (Phase 5b).
- handleInboundMessage switch now routes both new MessageIds.

Tested: short tests pass in p2p/sentry, p2p/sentry/libsentry, and p2p/sentry/sentry_multi_client; make lint clean; make erigon builds.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
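The server path in getBlockAccessLists71 can be illustrated with a simplified, self-contained Go sketch. The packet structs and in-memory lookup below are hypothetical stand-ins for the real types in Erigon's p2p/protocols/eth package (which use RLP encoding and a database tx); the point is the two invariants the commit describes — echoing the request id and keeping the reply positionally aligned with the requested hashes:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the eth/66 request-id envelope
// and the BAL packets; the real types live in p2p/protocols/eth.
type GetBlockAccessListsPacket66 struct {
	RequestId uint64
	Hashes    [][32]byte
}

type BlockAccessListsPacket66 struct {
	RequestId        uint64
	BlockAccessLists [][]byte // RLP-encoded BALs, aligned with Hashes
}

// answerGetBlockAccessLists sketches the Phase 3 handler's contract: for each
// requested hash, look up the stored BAL bytes; unknown hashes yield a nil
// entry so the response stays positionally aligned with the request.
func answerGetBlockAccessLists(db map[[32]byte][]byte, req GetBlockAccessListsPacket66) BlockAccessListsPacket66 {
	resp := BlockAccessListsPacket66{RequestId: req.RequestId}
	for _, h := range req.Hashes {
		resp.BlockAccessLists = append(resp.BlockAccessLists, db[h]) // nil if unknown
	}
	return resp
}

func main() {
	db := map[[32]byte][]byte{
		{0x01}: []byte("bal-1"),
	}
	req := GetBlockAccessListsPacket66{RequestId: 42, Hashes: [][32]byte{{0x01}, {0x02}}}
	resp := answerGetBlockAccessLists(db, req)
	fmt.Println(resp.RequestId, len(resp.BlockAccessLists), string(resp.BlockAccessLists[0]))
}
```

In the real code the reply is then RLP-encoded and sent via the sentry's SendMessageById to BLOCK_ACCESS_LISTS_71.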
Adds three small dev/test tools used to discover and verify the missing eth/71 server-side wiring (server handlers, dispatch, and inbound subscription) restored in this PR's first commit:

- debug_getRawBlockAccessList(blockHash) JSON-RPC method: returns the RLP-encoded BlockAccessList bytes this node has stored for a block (exactly what the eth/71 server-side handler returns to peers).
- cmd/bal-test: dump / delete / compare BAL entries in chaindata, used to drive the BAL downloader fetch loop and verify the bytes refetched from peers byte-match what we stored locally from execution.
- cmd/bal-scan: walk the kv.BlockAccessList table, print (block,hash,len) per entry — used to confirm what's actually persisted after prune.

This is Amsterdam-fork (EIP-7928 / EIP-8159) functionality.
What
Restores the eth/71 server-side wiring that should have been part of #20794 (handler + sentry dispatch). The squash-merge of #20794 dropped the `multi_client.go` changes — only the eth/protocols handler (`AnswerGetBlockAccessListsQuery`) and the sentry gRPC server bits made it in. As a result, main today negotiates eth/71 capability with peers but silently drops every inbound `GetBlockAccessLists` message at the dispatcher, so peers requesting our BAL data time out and we never serve a single response.

This PR restores those bits in a single, small change, as originally intended by the dev branch commit `8eed8fe7bf` ("Phase 5a"):

- `MultiClient.RecvUploadMessageLoop` — subscribe to inbound eth/71 `GetBlockAccessListsMsg` requests so the sentry actually pumps them into our handler.
- `MultiClient.handleInboundMessage` — add the `GET_BLOCK_ACCESS_LISTS_71` and `BLOCK_ACCESS_LISTS_71` cases to the dispatch switch.
- `MultiClient.getBlockAccessLists71` / `blockAccessLists71` — the in-process handler functions that read from rawdb and reply (server side) and route inbound responses to the in-flight fetcher (client side).

This is Amsterdam-fork (EIP-7928 / EIP-8159) functionality — only blocks at or after the Amsterdam timestamp commit to a `BlockAccessListHash`, and only those blocks have BAL bytes for peers to request.

How it was found
While running PR #20795 (the BAL fetcher/downloader, which is the client-side counterpart) on bal-devnet-3, the BAL downloader's scan correctly identified missing BALs and fired `GetBlockAccessLists` requests at peers, but every request timed out — peers (Besu, Nimbus, our own erigon) were never delivering responses, and our own node was never replying when peers asked us for BALs. Adding the dispatch wiring in this PR resolves both directions: with this fix applied to both of two erigon nodes peering on bal-devnet-3, the BAL downloader successfully fetched ~328 BALs from a Besu peer and from our other erigon in a single run.

The new tooling in the second commit was the means of finding and confirming the bug:

- `debug_getRawBlockAccessList(blockHash)` JSON-RPC method (in the `debug` namespace, available with `--http.api=debug`): returns the raw RLP bytes this node has stored for the given block — exactly what the server-side eth/71 handler hands back to peers. Useful for byte-level cross-client comparison of BAL bytes.
- `cmd/bal-test` — dump / delete / compare BAL entries directly in `chaindata` (requires erigon stopped). Lets you snapshot a known-good set of BALs from one node, delete the same blocks on a peer, restart, watch the BAL downloader refetch from the wider devnet, and assert byte-equality against the snapshot.
- `cmd/bal-scan` — walk the `kv.BlockAccessList` table and emit one `(block, hash, len)` line per entry within an optional range. Used to confirm what's actually persisted after prune (and how it intersects with what the downloader claims to have stored).
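The snapshot/delete/refetch workflow ends in a byte-equality assertion. A minimal self-contained Go sketch of that compare step, assuming a simplified map-of-block-number shape for the snapshot and the refetched entries (the real cmd/bal-test reads both sides out of chaindata):

```go
package main

import (
	"bytes"
	"fmt"
)

// compareBALs sketches cmd/bal-test's compare step: for every entry in the
// known-good snapshot, classify the refetched copy as byte-identical,
// mismatched, or missing entirely.
func compareBALs(snapshot, refetched map[uint64][]byte) (match, mismatch, missing int) {
	for block, want := range snapshot {
		got, ok := refetched[block]
		switch {
		case !ok:
			missing++
		case bytes.Equal(want, got):
			match++
		default:
			mismatch++
		}
	}
	return
}

func main() {
	snap := map[uint64][]byte{1: []byte("a"), 2: []byte("b"), 3: []byte("c")}
	refetched := map[uint64][]byte{1: []byte("a"), 2: []byte("B")}
	m, mm, miss := compareBALs(snap, refetched)
	fmt.Printf("match=%d mismatch=%d missing=%d\n", m, mm, miss) // match=1 mismatch=1 missing=1
}
```

A run where everything refetched byte-matches the snapshot (mismatch=0, missing=0) is the success criterion described above.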
Verification

- `make lint` — clean
- `make erigon` — clean
- `go test -short -count=1 ./p2p/sentry/sentry_multi_client/... ./p2p/protocols/eth/... ./rpc/jsonrpc/...` — passes

With the fix, peers negotiate eth/71 and our handler answers — without it, every fetch times out and `[bal-downloader]` only logs `fetch failed err="bal: fetch timed out waiting for peer response"`.

Out of scope
This PR only restores the server-side wiring missing on main. The client-side counterpart (BAL fetcher + downloader + `BlockAccessListsMsg` response subscription on `RecvMessageLoop`) is in #20795 and lands separately.