backport: partial bitcoin#25717, #25960, #25963, #25978, #26012, #26172, #26184, #26355, #26387, #26954, #30761 (anti-DOS headers sync)#7318
Conversation
✅ Review complete (commit 6db8fdf)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6b8d56c43e
```cpp
    // We'll rely on headers having valid proof-of-work further down, as an
    // anti-DoS criteria (note: this check is required before passing any
    // headers into HeadersSyncState).
    if (!CheckHeadersPoW(headers, m_chainparams.GetConsensus(), pfrom)) {
```
Defer full PoW scan until after known-header fast path
Running CheckHeadersPoW() before any block-index lookups makes every valid headers message pay a full PoW verification cost, even when all headers are already known ancestors and would otherwise be fast-path no-ops. A peer can repeatedly send old 2000-header batches and force repeated hash checks without triggering misbehavior, creating a CPU DoS vector that this path previously avoided by letting ProcessNewBlockHeaders() short-circuit duplicates.
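The suggested ordering can be sketched abstractly. Here `process_headers`, the dict-shaped headers, and `check_pow` are hypothetical stand-ins for the real net_processing types, not the actual API:

```python
def process_headers(headers, known_hashes, check_pow):
    """Check the known-header fast path before paying the full PoW-scan cost.

    headers: list of dicts with a 'hash' key (stand-in for CBlockHeader);
    known_hashes: set of already-indexed block hashes;
    check_pow: callable modelling per-header PoW verification.
    """
    # Fast path first: a batch of already-known ancestors is a no-op, so it
    # should not trigger a full PoW verification pass.
    if all(h["hash"] in known_hashes for h in headers):
        return "duplicate"
    # Only batches containing unknown headers pay the hash-check cost.
    for h in headers:
        if not check_pow(h):
            return "invalid-pow"
    return "accepted"
```

With this ordering, a repeated old 2000-header batch short-circuits at the set lookup instead of re-hashing every header.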
Walkthrough
This PR implements a two-phase header synchronization (PRESYNC and REDOWNLOAD) with per-peer HeadersSyncState, adds CompressedHeader and a packed-bit bitdeque for commitments, introduces anti-DoS difficulty checks (PermittedDifficultyTransition) and min_pow_checked gating, updates validation to report BLOCK_HEADER_LOW_WORK, refactors locator helpers, wires network logic to aggregate/report presync progress, updates UI SyncType and Qt signals, and adds/adapts tests and fuzz targets.

Sequence diagram:
```mermaid
sequenceDiagram
    participant Peer
    participant PeerManager
    participant HeadersSyncState
    participant ChainstateManager
    participant UI
    Peer->>PeerManager: send HEADERS (moved headers, uses_compressed)
    PeerManager->>HeadersSyncState: ProcessNextHeaders(received_headers, full_message)
    HeadersSyncState->>HeadersSyncState: PRESYNC -> REDOWNLOAD (commitments, buffer)
    HeadersSyncState->>ChainstateManager: ProcessNewBlockHeaders(min_pow_checked)
    ChainstateManager->>UI: ReportHeadersPresync(work, height, timestamp)
    UI->>PeerManager: notify presync progress (signals)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/validation.cpp (1)
4309-4319: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Move the low-work gate ahead of the chainlock-conflict insertion.
This branch still calls `AddToBlockIndex(..., BLOCK_CONFLICT_CHAINLOCK)` before the new `min_pow_checked` rejection runs. That means low-work headers can still be persisted on chainlocked heights, which defeats the "don't save low-work chains" part of this backport.
Suggested fix:
```diff
-        if (ActiveChainstate().m_chain_helper->HasConflictingChainLock(pindexPrev->nHeight + 1, hash)) {
+        if (!min_pow_checked) {
+            LogPrint(BCLog::VALIDATION, "%s: not adding new block header %s, missing anti-dos proof-of-work validation\n", __func__, hash.ToString());
+            return state.Invalid(BlockValidationResult::BLOCK_HEADER_LOW_WORK, "too-little-chainwork");
+        }
+
+        if (ActiveChainstate().m_chain_helper->HasConflictingChainLock(pindexPrev->nHeight + 1, hash)) {
             if (miSelf == m_blockman.m_block_index.end()) {
                 m_blockman.AddToBlockIndex(block, hash, m_best_header, BLOCK_CONFLICT_CHAINLOCK);
             }
             LogPrintf("ERROR: %s: header %s conflicts with chainlock\n", __func__, hash.ToString());
             return state.Invalid(BlockValidationResult::BLOCK_CHAINLOCK, "bad-chainlock");
         }
     }
-    if (!min_pow_checked) {
-        LogPrint(BCLog::VALIDATION, "%s: not adding new block header %s, missing anti-dos proof-of-work validation\n", __func__, hash.ToString());
-        return state.Invalid(BlockValidationResult::BLOCK_HEADER_LOW_WORK, "too-little-chainwork");
-    }
     CBlockIndex* pindex{m_blockman.AddToBlockIndex(block, hash, m_best_header)};
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/validation.cpp` around lines 4309 - 4319, The code currently calls m_blockman.AddToBlockIndex(..., BLOCK_CONFLICT_CHAINLOCK) when ActiveChainstate().m_chain_helper->HasConflictingChainLock(...) is true before checking min_pow_checked, allowing low-work headers to be persisted; move the low-work gate so that the check if (!min_pow_checked) happens before any call to m_blockman.AddToBlockIndex or persisting/logging for chainlock conflicts. Concretely, in the block that checks HasConflictingChainLock, first verify min_pow_checked (or return state.Invalid(BlockValidationResult::BLOCK_HEADER_LOW_WORK, "too-little-chainwork")) and only if min_pow_checked is true proceed to call m_blockman.AddToBlockIndex(block, hash, m_best_header, BLOCK_CONFLICT_CHAINLOCK) and the subsequent LogPrintf/state.Invalid for BLOCK_CHAINLOCK.
🧹 Nitpick comments (8)
test/functional/rpc_blockchain.py (1)
473-473: 💤 Low value
Consider clarifying the comment about fork position.
The comment states "choose something vaguely near our tip," but at this point in the test flow, `current_height` is approximately 207, making `fork_height` around 107, roughly halfway through the chain rather than near the tip. Consider updating the comment to reflect that this chooses a fork point well below the current tip, or adjust the offset if proximity to the tip is actually desired for the test scenario.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/functional/rpc_blockchain.py` at line 473, The inline comment for the fork selection is misleading: update the comment near the assignment to fork_height (using current_height) to accurately state that subtracting 100 selects a fork point well below the current tip (about halfway for current_height ≈ 207), or if the test actually needs a fork near the tip, change the offset (e.g., use a smaller value like 10 or 20) and update the comment accordingly; locate the fork_height = current_height - 100 line and either edit the numeric offset or reword the comment to reflect the intended fork proximity.
test/functional/p2p_invalid_messages.py (1)
56-69: ⚡ Quick win
Call `test_oversized_headers2_msg()` from `run_test()` to keep coverage complete.
`test_oversized_headers2_msg()` is defined but still not exercised in the test sequence.
Suggested fix:
```diff
         self.test_oversized_getdata_msg()
         self.test_oversized_headers_msg()
+        self.test_oversized_headers2_msg()
         self.test_invalid_pow_headers_msg()
         self.test_noncontinuous_headers_msg()
         self.test_resource_exhaustion()
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/functional/p2p_invalid_messages.py` around lines 56 - 69, run_test() currently calls test_oversized_headers_msg() but never invokes the defined test_oversized_headers2_msg(); add a call to test_oversized_headers2_msg() in the run_test() sequence (for example immediately after test_oversized_headers_msg()) so the test is executed and coverage remains complete, ensuring the method name test_oversized_headers2_msg is used exactly as defined.
src/util/bitdeque.h (1)
8-14: ⚡ Quick win
Include all directly used STL headers in this public header.
`bitdeque.h` uses symbols from `<algorithm>`, `<iterator>`, `<type_traits>`, and `<utility>` but does not include them directly. Relying on transitive includes is brittle across toolchains/stdlib updates.
Proposed diff:
```diff
+#include <algorithm>
 #include <bitset>
 #include <cstddef>
 #include <deque>
+#include <iterator>
 #include <limits>
 #include <stdexcept>
 #include <tuple>
+#include <type_traits>
+#include <utility>
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/util/bitdeque.h` around lines 8 - 14, The public header src/util/bitdeque.h currently relies on transitive includes for several STL facilities; add direct includes for <algorithm>, <iterator>, <type_traits>, and <utility> at the top of bitdeque.h so symbols used throughout the BitDeque implementation (e.g., iterator utilities, type traits and move/utility helpers) are guaranteed available across toolchains; update the include block alongside the existing <bitset>, <cstddef>, <deque>, <limits>, and <stdexcept> lines.
src/qt/clientmodel.h (1)
145-145: ⚡ Quick win
Consider renaming the `SyncType header` parameter for clarity.
The parameter is named `header` but has type `SyncType`, which can represent header presync, header sync, or block sync. The name `header` is misleading since it suggests a boolean or header-specific value. Consider renaming to `synctype` to match the type and improve readability at call sites.
Suggested parameter rename:
```diff
-    void numBlocksChanged(int count, const QDateTime& blockDate, const QString& blockHash, double nVerificationProgress, SyncType header, SynchronizationState sync_state);
+    void numBlocksChanged(int count, const QDateTime& blockDate, const QString& blockHash, double nVerificationProgress, SyncType synctype, SynchronizationState sync_state);
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/qt/clientmodel.h` at line 145, The parameter name `header` in the numBlocksChanged declaration is misleading given its type SyncType; rename the parameter to `synctype` in the declaration of numBlocksChanged and update the corresponding definition/implementation and all call sites to use the new name (e.g., change signatures and any references inside ClientModel::numBlocksChanged and places that call numBlocksChanged to pass/inspect `synctype`), ensuring compilation and consistency with the SyncType semantics.
test/functional/p2p_headers_sync_with_minchainwork.py (3)
149-149: ⚡ Quick win
Rename unused loop variable.
The loop variable `i` is not used within the loop body. Per Python conventions, rename it to `_` to indicate it's intentionally unused.
♻️ Proposed fix:
```diff
-        for i in range(2000):
+        for _ in range(2000):
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/functional/p2p_headers_sync_with_minchainwork.py` at line 149, The loop variable in the for loop "for i in range(2000):" is unused; rename the loop variable from i to _ (i.e., "for _ in range(2000):") to follow Python conventions for intentionally unused variables and make the intent clear in the test file (test/functional/p2p_headers_sync_with_minchainwork.py).
159-162: 💤 Low value
Document the disabled assertion.
The test documents that the `presynced_headers` assertion is currently disabled and explains why. This is appropriate for a known limitation in the functional test environment that doesn't affect real-world operation (as confirmed by manual testing on testnet). Consider tracking this TODO in a separate issue if not already covered.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/functional/p2p_headers_sync_with_minchainwork.py` around lines 159 - 162, Update the TODO comment around the disabled assertion for presynced_headers (the line with node.getpeerinfo()[0]['presynced_headers']) to clearly state why the assertion is disabled, under what conditions the discrepancy occurs (functional test environment vs real testnet), and note that manual RPC/GUI checks show correct behavior; also add a brief action item to track this TODO in the issue tracker (reference an issue or create one) so it isn't forgotten.
24-24: ⚡ Quick win
Remove unused import.
The `assert_equal` import is not used in this test file.
🧹 Proposed fix:
```diff
-from test_framework.util import assert_equal
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/functional/p2p_headers_sync_with_minchainwork.py` at line 24, Remove the unused import "assert_equal" from the top of the test file (the from test_framework.util import assert_equal line) since it is not referenced anywhere in p2p_headers_sync_with_minchainwork.py; this keeps imports clean and avoids linter warnings.
src/test/headers_sync_chainwork_tests.cpp (1)
55-56: ⚡ Quick win
Remove unnecessary `return` statement.
The explicit `return;` at the end of a void function is redundant.
♻️ Proposed fix:
```diff
         prev_time = next_header.nTime;
     }
-    return;
 }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/test/headers_sync_chainwork_tests.cpp` around lines 55 - 56, Remove the redundant explicit "return;" at the end of the void test function in headers_sync_chainwork_tests.cpp (the function that currently ends with "return;") — simply delete that statement so the function ends naturally without an unnecessary return.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/headerssync.cpp`:
- Around line 148-160: The loop only verifies headers[0] links to
m_last_header_received, but later headers in the same batch are not checked for
internal continuity, allowing an unlinked chain to be accepted; fix by verifying
chain continuity for each header before calling ValidateAndProcessSingleHeader:
for the first header ensure headers[0].hashPrevBlock ==
m_last_header_received.GetHash(), and for every subsequent header ensure
headers[i].hashPrevBlock == headers[i-1].GetHash() (or maintain a running
prev_hash variable) and abort/return false on any discontinuity so that
m_minimum_required_work/presync logic cannot be advanced by unlinked headers
(adjust the same logic referenced around ValidateAndProcessSingleHeader and the
presync handling).
- Line 43: The computation that sets m_max_commitments can produce a negative
intermediate (because NodeClock::now() -
NodeSeconds{chain_start->GetMedianTimePast()} may be < -MAX_FUTURE_BLOCK_TIME)
which, when assigned to the uint64_t m_max_commitments, will wrap to a huge
value; clamp the elapsed/ticks value to zero before using it in the formula:
compute the tick/elapsed value between NodeClock::now() and
NodeSeconds{chain_start->GetMedianTimePast()}, apply a max(0, elapsed -
MAX_FUTURE_BLOCK_TIME) (or otherwise ensure the signed result is non-negative),
then use that non-negative value in the 6*(...)/HEADER_COMMITMENT_PERIOD
calculation to set m_max_commitments so the later check against
m_header_commitments.size() behaves correctly.
In `@src/net_processing.cpp`:
- Around line 3215-3226: The GetAntiDoSWorkThreshold function currently takes
cs_main internally which can deadlock because callers in the CMPCTBLOCK and
BLOCK paths already hold cs_main; change the design so the core implementation
requires cs_main be held (e.g. rename to GetAntiDoSWorkThreshold_Locked or
assert LOCK held) and remove the internal LOCK(cs_main), and provide a thin
unlocked wrapper (e.g. GetAntiDoSWorkThreshold that acquires cs_main and calls
the locked implementation) for callers that do not already hold the lock; update
CMPCTBLOCK/BLOCK callers to call the locked implementation directly (or assert
they hold cs_main) to avoid double-locking. Ensure the same change is applied to
the other occurrences mentioned (around the other listed ranges).
- Around line 1905-1908: When erasing entries from m_headers_presync_stats
(protected by m_headers_presync_mutex), also set m_headers_presync_should_signal
so ReportHeadersPresync() will emit a clearing update (presync=false) to the UI;
specifically, inside the same LOCK(m_headers_presync_mutex) blocks that call
m_headers_presync_stats.erase(nodeid) (the three teardown paths that currently
only erase), set m_headers_presync_should_signal = true before releasing the
lock so the presync clearing is signalled for the best presync peer that
disconnected/finished.
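The locked/unlocked split suggested for GetAntiDoSWorkThreshold can be modelled in a short Python sketch; the function names, the work arithmetic, and the threading.Lock standing in for cs_main are illustrative only:

```python
import threading

cs_main = threading.Lock()  # stand-in for the global cs_main mutex

def get_antidos_work_threshold_locked(tip_work, buffer_work):
    """Core implementation; the caller must already hold cs_main."""
    assert cs_main.locked(), "cs_main must be held by the caller"
    # Hypothetical threshold computation for illustration only.
    return max(tip_work - buffer_work, 0)

def get_antidos_work_threshold(tip_work, buffer_work):
    """Thin wrapper for callers that do not already hold the lock."""
    with cs_main:
        return get_antidos_work_threshold_locked(tip_work, buffer_work)
```

Callers that are already inside cs_main (the CMPCTBLOCK/BLOCK paths) invoke the `_locked` variant directly; everyone else goes through the wrapper, so the lock is never taken twice on one call path.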
In `@src/pow.cpp`:
- Around line 257-303: PermittedDifficultyTransition currently allows bounded
difficulty changes for all pre-KGW heights; enforce exact equality for
non-retarget heights by adding an early check in PermittedDifficultyTransition:
if height < params.nPowKGWHeight and the height is not a retarget boundary (i.e.
height % params.DifficultyAdjustmentInterval() != 0) then require new_nbits ==
old_nbits and return false if they differ; otherwise continue with the existing
bounding logic. Reference PermittedDifficultyTransition, params.nPowKGWHeight,
old_nbits, new_nbits, and params.DifficultyAdjustmentInterval() to locate where
to add this check.
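The early-equality check described above can be sketched as follows; the KGW height and interval values are placeholders rather than Dash's consensus parameters, and the bounded-change branch is elided:

```python
def permitted_difficulty_transition(height, old_nbits, new_nbits,
                                    kgw_height=15200, adjustment_interval=2016):
    # Pre-KGW and off a retarget boundary: difficulty must not change at all.
    if height < kgw_height and height % adjustment_interval != 0:
        return new_nbits == old_nbits
    # On retarget boundaries (or post-KGW) the existing bounded-change
    # logic would apply; this sketch accepts unconditionally there.
    return True
```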
In `@src/test/headers_sync_chainwork_tests.cpp`:
- Line 44: The statement "CBlockHeader& next_header = headers.back();;" contains
a redundant semicolon; remove the extra trailing semicolon so the line reads
"CBlockHeader& next_header = headers.back();" (i.e., keep the declaration and a
single semicolon) to eliminate the syntax/style issue in the test function where
next_header is defined.
In `@src/test/pow_tests.cpp`:
- Around line 77-79: The test's assertion is inverted: it intends to verify that
increasing nbits is NOT permitted but currently asserts true; update the check
that calls PermittedDifficultyTransition(chainParams->GetConsensus(),
blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits) so it expects
false (e.g., negate the call or use BOOST_CHECK_EQUAL(false, ...)) where
invalid_nbits = expected_nbits+1 is used, keeping the same inputs
(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits).
In `@src/validation.cpp`:
- Around line 4394-4399: The presync progress math in validation.cpp can produce
negative blocks_left and a divide-by-zero when timestamp is ahead of GetTime();
update the initial_download path around symbols initial_download, blocks_left,
progress, GetTime(), timestamp and GetConsensus().nPowTargetSpacing to clamp
blocks_left to a non-negative value (e.g. max(0, computed_blocks_left)) and
guard the progress calculation so you never divide by zero (if height +
blocks_left == 0 set progress to 0.0), and finally clamp the computed progress
into [0.0, 100.0] before calling LogPrintf.
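The clamping described above amounts to something like the following sketch; the function and parameter names are illustrative, and 150 seconds is used as a stand-in target spacing:

```python
def presync_progress_percent(height, timestamp, now, target_spacing=150):
    # A peer-supplied timestamp ahead of our clock must not yield a
    # negative estimate of remaining blocks.
    blocks_left = max(0, (now - timestamp) // target_spacing)
    total = height + blocks_left
    if total == 0:
        return 0.0  # guard the division when no headers are known yet
    # Clamp into [0, 100] before logging/reporting.
    return min(100.0, max(0.0, 100.0 * height / total))
```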
---
Outside diff comments:
In `@src/validation.cpp`:
- Around line 4309-4319: The code currently calls
m_blockman.AddToBlockIndex(..., BLOCK_CONFLICT_CHAINLOCK) when
ActiveChainstate().m_chain_helper->HasConflictingChainLock(...) is true before
checking min_pow_checked, allowing low-work headers to be persisted; move the
low-work gate so that the check if (!min_pow_checked) happens before any call to
m_blockman.AddToBlockIndex or persisting/logging for chainlock conflicts.
Concretely, in the block that checks HasConflictingChainLock, first verify
min_pow_checked (or return
state.Invalid(BlockValidationResult::BLOCK_HEADER_LOW_WORK,
"too-little-chainwork")) and only if min_pow_checked is true proceed to call
m_blockman.AddToBlockIndex(block, hash, m_best_header, BLOCK_CONFLICT_CHAINLOCK)
and the subsequent LogPrintf/state.Invalid for BLOCK_CHAINLOCK.
---
Nitpick comments:
In `@src/qt/clientmodel.h`:
- Line 145: The parameter name `header` in the numBlocksChanged declaration is
misleading given its type SyncType; rename the parameter to `synctype` in the
declaration of numBlocksChanged and update the corresponding
definition/implementation and all call sites to use the new name (e.g., change
signatures and any references inside ClientModel::numBlocksChanged and places
that call numBlocksChanged to pass/inspect `synctype`), ensuring compilation and
consistency with the SyncType semantics.
In `@src/test/headers_sync_chainwork_tests.cpp`:
- Around line 55-56: Remove the redundant explicit "return;" at the end of the
void test function in headers_sync_chainwork_tests.cpp (the function that
currently ends with "return;") — simply delete that statement so the function
ends naturally without an unnecessary return.
In `@src/util/bitdeque.h`:
- Around line 8-14: The public header src/util/bitdeque.h currently relies on
transitive includes for several STL facilities; add direct includes for
<algorithm>, <iterator>, <type_traits>, and <utility> at the top of bitdeque.h
so symbols used throughout the BitDeque implementation (e.g., iterator
utilities, type traits and move/utility helpers) are guaranteed available across
toolchains; update the include block alongside the existing <bitset>, <cstddef>,
<deque>, <limits>, and <stdexcept> lines.
In `@test/functional/p2p_headers_sync_with_minchainwork.py`:
- Line 149: The loop variable in the for loop "for i in range(2000):" is unused;
rename the loop variable from i to _ (i.e., "for _ in range(2000):") to follow
Python conventions for intentionally unused variables and make the intent clear
in the test file (test/functional/p2p_headers_sync_with_minchainwork.py).
- Around line 159-162: Update the TODO comment around the disabled assertion for
presynced_headers (the line with node.getpeerinfo()[0]['presynced_headers']) to
clearly state why the assertion is disabled, under what conditions the
discrepancy occurs (functional test environment vs real testnet), and note that
manual RPC/GUI checks show correct behavior; also add a brief action item to
track this TODO in the issue tracker (reference an issue or create one) so it
isn't forgotten.
- Line 24: Remove the unused import "assert_equal" from the top of the test file
(the from test_framework.util import assert_equal line) since it is not
referenced anywhere in p2p_headers_sync_with_minchainwork.py; this keeps imports
clean and avoids linter warnings.
In `@test/functional/p2p_invalid_messages.py`:
- Around line 56-69: run_test() currently calls test_oversized_headers_msg() but
never invokes the defined test_oversized_headers2_msg(); add a call to
test_oversized_headers2_msg() in the run_test() sequence (for example
immediately after test_oversized_headers_msg()) so the test is executed and
coverage remains complete, ensuring the method name test_oversized_headers2_msg
is used exactly as defined.
In `@test/functional/rpc_blockchain.py`:
- Line 473: The inline comment for the fork selection is misleading: update the
comment near the assignment to fork_height (using current_height) to accurately
state that subtracting 100 selects a fork point well below the current tip
(about halfway for current_height ≈ 207), or if the test actually needs a fork
near the tip, change the offset (e.g., use a smaller value like 10 or 20) and
update the comment accordingly; locate the fork_height = current_height - 100
line and either edit the numeric offset or reword the comment to reflect the
intended fork proximity.
📒 Files selected for processing (59)
src/Makefile.am, src/Makefile.test.include, src/chain.cpp, src/chain.h, src/consensus/validation.h, src/headerssync.cpp, src/headerssync.h, src/index/base.cpp, src/interfaces/node.h, src/net_processing.cpp, src/net_processing.h, src/node/interface_ui.cpp, src/node/interface_ui.h, src/node/interfaces.cpp, src/pow.cpp, src/pow.h, src/primitives/block.h, src/qt/bitcoin.cpp, src/qt/bitcoingui.cpp, src/qt/bitcoingui.h, src/qt/clientmodel.cpp, src/qt/clientmodel.h, src/qt/informationwidget.cpp, src/qt/informationwidget.h, src/qt/modaloverlay.cpp, src/qt/modaloverlay.h, src/qt/rpcconsole.cpp, src/qt/rpcconsole.h, src/qt/sendcoinsdialog.cpp, src/qt/sendcoinsdialog.h, src/rpc/mining.cpp, src/rpc/net.cpp, src/test/blockfilter_index_tests.cpp, src/test/coinstatsindex_tests.cpp, src/test/evo_deterministicmns_tests.cpp, src/test/fuzz/bitdeque.cpp, src/test/fuzz/pow.cpp, src/test/fuzz/utxo_snapshot.cpp, src/test/headers_sync_chainwork_tests.cpp, src/test/miner_tests.cpp, src/test/pow_tests.cpp, src/test/skiplist_tests.cpp, src/test/util/mining.cpp, src/test/util/setup_common.cpp, src/test/util_tests.cpp, src/test/validation_block_tests.cpp, src/test/validation_chainstate_tests.cpp, src/util/bitdeque.h, src/validation.cpp, src/validation.h, test/functional/feature_block.py, test/functional/p2p_compactblocks.py, test/functional/p2p_dos_header_tree.py, test/functional/p2p_headers_sync_with_minchainwork.py, test/functional/p2p_invalid_messages.py, test/functional/p2p_unrequested_blocks.py, test/functional/rpc_blockchain.py, test/functional/rpc_net.py, test/functional/test_runner.py
```cpp
    // exceeds this bound, because it's not possible for a consensus-valid
    // chain to be longer than this (at the current time -- in the future we
    // could try again, if necessary, to sync a longer chain).
    m_max_commitments = 6*(Ticks<std::chrono::seconds>(NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
```
Clamp negative elapsed time before computing m_max_commitments.
If the local clock is behind chain_start->GetMedianTimePast() by more than MAX_FUTURE_BLOCK_TIME, the expression evaluates to a negative value. Assigning this negative int64_t to the uint64_t member m_max_commitments causes an integer wrap-around to an extremely large positive value, effectively removing the memory bound that is checked against m_header_commitments.size() at line 198.
Proposed fix:
```diff
-    m_max_commitments = 6*(Ticks<std::chrono::seconds>(NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
+    const auto elapsed = Ticks<std::chrono::seconds>(
+        NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}});
+    const auto bounded_elapsed = std::max<int64_t>(0, elapsed + MAX_FUTURE_BLOCK_TIME);
+    m_max_commitments = 6 * bounded_elapsed / HEADER_COMMITMENT_PERIOD;
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/headerssync.cpp` at line 43, The computation that sets m_max_commitments
can produce a negative intermediate (because NodeClock::now() -
NodeSeconds{chain_start->GetMedianTimePast()} may be < -MAX_FUTURE_BLOCK_TIME)
which, when assigned to the uint64_t m_max_commitments, will wrap to a huge
value; clamp the elapsed/ticks value to zero before using it in the formula:
compute the tick/elapsed value between NodeClock::now() and
NodeSeconds{chain_start->GetMedianTimePast()}, apply a max(0, elapsed -
MAX_FUTURE_BLOCK_TIME) (or otherwise ensure the signed result is non-negative),
then use that non-negative value in the 6*(...)/HEADER_COMMITMENT_PERIOD
calculation to set m_max_commitments so the later check against
m_header_commitments.size() behaves correctly.
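The wrap-around described above is easy to reproduce in isolation. This Python sketch models the C++ int64-to-uint64 conversion with a 64-bit mask; the constants are stand-ins, not the real consensus values, and the division here happens to be exact so Python's floor division matches C++ truncation:

```python
UINT64_MASK = (1 << 64) - 1
MAX_FUTURE_BLOCK_TIME = 2 * 60 * 60   # stand-in: 2 hours
HEADER_COMMITMENT_PERIOD = 600        # stand-in commitment period

def max_commitments(elapsed_seconds, clamp):
    """Model m_max_commitments; elapsed_seconds is now() minus the chain
    start's median time past, and may be negative."""
    adjusted = elapsed_seconds + MAX_FUTURE_BLOCK_TIME
    if clamp:
        adjusted = max(0, adjusted)
    # Assigning the signed result to uint64_t wraps modulo 2^64 in C++;
    # the mask reproduces that behaviour.
    return (6 * adjusted // HEADER_COMMITMENT_PERIOD) & UINT64_MASK
```

With the local clock three hours behind the chain start, the unclamped value wraps to nearly 2^64, removing the memory bound; the clamped value stays at zero.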
```cpp
    if (headers[0].hashPrevBlock != m_last_header_received.GetHash()) {
        // Somehow our peer gave us a header that doesn't connect.
        // This might be benign -- perhaps our peer reorged away from the chain
        // they were on. Give up on this sync for now (likely we will start a
        // new sync with a new starting point).
        LogPrint(BCLog::NET, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n", m_id, m_current_height);
        return false;
    }

    // If it does connect, (minimally) validate and occasionally store
    // commitments.
    for (const auto& hdr : headers) {
        if (!ValidateAndProcessSingleHeader(hdr)) {
```
Validate chain continuity for every PRESYNC header.
Line 148 only proves headers[0] connects to the previous batch. Every later header in the same batch can be disjoint, because ValidateAndProcessSingleHeader() never checks current.hashPrevBlock against m_last_header_received before adding work and commitments. That lets a peer reach m_minimum_required_work with a mostly unlinked sequence and weakens the presync DoS filter.
🛠️ Proposed fix:
```diff
 bool HeadersSyncState::ValidateAndProcessSingleHeader(const CBlockHeader& current)
 {
     Assume(m_download_state == State::PRESYNC);
     if (m_download_state != State::PRESYNC) return false;
-    int next_height = m_current_height + 1;
+    const int next_height = m_current_height + 1;
+    if (current.hashPrevBlock != m_last_header_received.GetHash()) {
+        LogPrint(BCLog::NET,
+            "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n",
+            m_id, next_height);
+        return false;
+    }

     // Verify that the difficulty isn't growing too fast; an adversary with
     // limited hashing capability has a greater chance of producing a high
```

Also applies to: 177-210
🧰 Tools
🪛 Cppcheck (2.20.0)
[error] 148-148: Buffer is accessed out of bounds
(bufferAccessOutOfBounds)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/headerssync.cpp` around lines 148 - 160, The loop only verifies
headers[0] links to m_last_header_received, but later headers in the same batch
are not checked for internal continuity, allowing an unlinked chain to be
accepted; fix by verifying chain continuity for each header before calling
ValidateAndProcessSingleHeader: for the first header ensure
headers[0].hashPrevBlock == m_last_header_received.GetHash(), and for every
subsequent header ensure headers[i].hashPrevBlock == headers[i-1].GetHash() (or
maintain a running prev_hash variable) and abort/return false on any
discontinuity so that m_minimum_required_work/presync logic cannot be advanced
by unlinked headers (adjust the same logic referenced around
ValidateAndProcessSingleHeader and the presync handling).
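The continuity rule this finding asks for (first header links to the prior tip, every later header links to its predecessor) can be sketched with a toy header type. `ToyHeader` and `BatchIsContinuous` are illustrative, not the real `CBlockHeader`/`HeadersSyncState` API:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy header: carries its own id and its parent's id (unlike the real
// CBlockHeader, where the hash is computed rather than stored).
struct ToyHeader {
    uint64_t hash;
    uint64_t prev_hash;
};

// A batch is continuous only if the first header links to the previous tip
// and every later header links to its immediate predecessor -- the property
// the review asks PRESYNC to enforce per header, not just for headers[0].
bool BatchIsContinuous(uint64_t last_tip_hash, const std::vector<ToyHeader>& batch)
{
    uint64_t prev = last_tip_hash;
    for (const auto& hdr : batch) {
        if (hdr.prev_hash != prev) return false; // disjoint header: abort sync
        prev = hdr.hash;                         // advance the running tip
    }
    return !batch.empty();
}
```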
| { | ||
| LOCK(m_headers_presync_mutex); | ||
| m_headers_presync_stats.erase(nodeid); | ||
| } |
There was a problem hiding this comment.
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== ReportHeadersPresync definitions and consumers ==="
rg -n -C3 '\bReportHeadersPresync\b|\bpresync_height\b|\bNotifyHeaderTip\b.*presync' src test
echo
echo "=== net_processing presync signal/erase paths ==="
rg -n -C2 'm_headers_presync_stats\.erase|m_headers_presync_should_signal|ReportHeadersPresync' src/net_processing.cpp
```

Repository: dashpay/dash
Length of output: 6953
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== All NotifyHeaderTip calls (check for presync=false or clearing mechanism) ==="
rg -n 'NotifyHeaderTip' src --type cpp -A2 -B2
echo
echo "=== Full ReportHeadersPresync implementation ==="
sed -n '4378,4410p' src/validation.cpp
echo
echo "=== Check for any periodic clearing or timeout mechanisms ==="
rg -n 'presync.*clear|presync.*reset|presync.*timeout|m_last_presync' src --type cpp
```

Repository: dashpay/dash
Length of output: 13432
Signal a clearing update when the best presync peer finishes, disconnects, or leaves phase 1.
Presync progress reporting relies on ReportHeadersPresync() being called to notify validation/UI with presync=true, and the clearing mechanism (presync=false) only triggers when actual headers are accepted into the chain via NotifyHeaderTip() in the static validation function. If the best presync peer disconnects or finishes without sending headers that reach validation, the UI never receives a clearing update and presync progress stays stale. This can happen when a peer announces headers in phase 1 but disconnects before those headers are validated, or when phase 1 ends without triggering a validated header.
The three teardown paths at lines 1905–1908, 3311–3316, and 3582–3585 all erase stats without setting m_headers_presync_should_signal, so no clearing signal is sent.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/net_processing.cpp` around lines 1905 - 1908, When erasing entries from
m_headers_presync_stats (protected by m_headers_presync_mutex), also set
m_headers_presync_should_signal so ReportHeadersPresync() will emit a clearing
update (presync=false) to the UI; specifically, inside the same
LOCK(m_headers_presync_mutex) blocks that call
m_headers_presync_stats.erase(nodeid) (the three teardown paths that currently
only erase), set m_headers_presync_should_signal = true before releasing the
lock so the presync clearing is signalled for the best presync peer that
disconnected/finished.
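The erase-and-flag teardown this prompt describes can be sketched with a plain `std::mutex`. The struct and member names are illustrative, not the actual `PeerManagerImpl` fields:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <mutex>

// Minimal sketch: erase the per-peer presync stats and raise the signal flag
// under the same lock, so a clearing (presync=false) update is emitted later.
struct PresyncTracker {
    std::mutex m_mutex;
    std::map<int64_t, int> m_stats; // nodeid -> reported presync height
    bool m_should_signal{false};

    void RemovePeer(int64_t nodeid)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_stats.erase(nodeid) > 0) {
            // Flag before releasing the lock so the next signalling pass
            // emits a clearing update for the departed peer.
            m_should_signal = true;
        }
    }
};
```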
```cpp
arith_uint256 PeerManagerImpl::GetAntiDoSWorkThreshold()
{
    arith_uint256 near_chaintip_work = 0;
    LOCK(cs_main);
    if (m_chainman.ActiveChain().Tip() != nullptr) {
        const CBlockIndex *tip = m_chainman.ActiveChain().Tip();
        // Use a 144 block buffer, so that we'll accept headers that fork from
        // near our tip.
        near_chaintip_work = tip->nChainWork - std::min<arith_uint256>(144*GetBlockProof(*tip), tip->nChainWork);
    }
    return std::max(near_chaintip_work, arith_uint256(nMinimumChainWork));
}
```
Avoid taking cs_main inside GetAntiDoSWorkThreshold().
This helper locks cs_main, but both the CMPCTBLOCK and BLOCK paths call it while already holding cs_main. That will self-deadlock as soon as either low-work check executes. Make the helper require cs_main and use a separate unlocked wrapper for callers that need it outside the lock, or move those threshold reads out of the locked regions.
Also applies to: 5203-5207, 5556-5560
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/net_processing.cpp` around lines 3215 - 3226, The GetAntiDoSWorkThreshold
function currently takes cs_main internally which can deadlock because callers
in the CMPCTBLOCK and BLOCK paths already hold cs_main; change the design so the
core implementation requires cs_main be held (e.g. rename to
GetAntiDoSWorkThreshold_Locked or assert LOCK held) and remove the internal
LOCK(cs_main), and provide a thin unlocked wrapper (e.g. GetAntiDoSWorkThreshold
that acquires cs_main and calls the locked implementation) for callers that do
not already hold the lock; update CMPCTBLOCK/BLOCK callers to call the locked
implementation directly (or assert they hold cs_main) to avoid double-locking.
Ensure the same change is applied to the other occurrences mentioned (around the
other listed ranges).
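The locked/unlocked split suggested here is a standard pattern; a generic sketch follows. `ThresholdHolder` and its methods are illustrative, and real Bitcoin Core code would use `cs_main` with `EXCLUSIVE_LOCKS_REQUIRED`/`AssertLockHeld` annotations rather than a plain `std::mutex`:

```cpp
#include <cassert>
#include <mutex>

class ThresholdHolder {
public:
    void SetThreshold(int v)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_threshold = v;
    }

    // Core implementation: caller must already hold m_mutex (in real code
    // this contract is enforced with lock annotations, not comments).
    int GetThresholdLocked() const { return m_threshold; }

    // Thin wrapper for callers that do not hold the lock. Never call this
    // while already holding m_mutex: re-acquiring a non-recursive mutex
    // self-deadlocks, which is exactly the bug the review flags.
    int GetThreshold()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return GetThresholdLocked();
    }

private:
    std::mutex m_mutex; // stands in for cs_main
    int m_threshold{0};
};
```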
```cpp
bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits)
{
    if (params.fPowAllowMinDifficultyBlocks) return true;

    // Per-block retargeting (KGW + DGW): no useful per-pair bound.
    if (height >= params.nPowKGWHeight) return true;

    int64_t smallest_timespan = params.nPowTargetTimespan/4;
    int64_t largest_timespan = params.nPowTargetTimespan*4;

    const arith_uint256 pow_limit = UintToArith256(params.powLimit);
    arith_uint256 observed_new_target;
    observed_new_target.SetCompact(new_nbits);

    // Calculate the largest difficulty value possible:
    arith_uint256 largest_difficulty_target;
    largest_difficulty_target.SetCompact(old_nbits);
    largest_difficulty_target *= largest_timespan;
    largest_difficulty_target /= params.nPowTargetTimespan;

    if (largest_difficulty_target > pow_limit) {
        largest_difficulty_target = pow_limit;
    }

    // Round and then compare this new calculated value to what is
    // observed.
    arith_uint256 maximum_new_target;
    maximum_new_target.SetCompact(largest_difficulty_target.GetCompact());
    if (maximum_new_target < observed_new_target) return false;

    // Calculate the smallest difficulty value possible:
    arith_uint256 smallest_difficulty_target;
    smallest_difficulty_target.SetCompact(old_nbits);
    smallest_difficulty_target *= smallest_timespan;
    smallest_difficulty_target /= params.nPowTargetTimespan;

    if (smallest_difficulty_target > pow_limit) {
        smallest_difficulty_target = pow_limit;
    }

    // Round and then compare this new calculated value to what is
    // observed.
    arith_uint256 minimum_new_target;
    minimum_new_target.SetCompact(smallest_difficulty_target.GetCompact());
    if (minimum_new_target > observed_new_target) return false;
    return true;
}
```
Enforce exact nBits on non-retarget heights in the pre-KGW path.
This accepts bounded transitions at every pre-KGW height, but outside retarget boundaries the expected rule is equality (new_nbits == old_nbits). As written, the heuristic is materially weaker than intended for header spam filtering.
Suggested fix

```diff
 bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits)
 {
     if (params.fPowAllowMinDifficultyBlocks) return true;
     // Per-block retargeting (KGW + DGW): no useful per-pair bound.
     if (height >= params.nPowKGWHeight) return true;
+
+    // Bitcoin-style retargeting: difficulty must remain unchanged between
+    // adjustment boundaries.
+    if (height % params.DifficultyAdjustmentInterval() != 0) {
+        return new_nbits == old_nbits;
+    }
     int64_t smallest_timespan = params.nPowTargetTimespan/4;
     int64_t largest_timespan = params.nPowTargetTimespan*4;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/pow.cpp` around lines 257 - 303, PermittedDifficultyTransition currently
allows bounded difficulty changes for all pre-KGW heights; enforce exact
equality for non-retarget heights by adding an early check in
PermittedDifficultyTransition: if height < params.nPowKGWHeight and the height
is not a retarget boundary (i.e. height % params.DifficultyAdjustmentInterval()
!= 0) then require new_nbits == old_nbits and return false if they differ;
otherwise continue with the existing bounding logic. Reference
PermittedDifficultyTransition, params.nPowKGWHeight, old_nbits, new_nbits, and
params.DifficultyAdjustmentInterval() to locate where to add this check.
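The factor-of-4 timespan bound at the heart of `PermittedDifficultyTransition` can be illustrated with plain integers instead of `arith_uint256` compact encoding. `TransitionWithinBounds` is a toy model that ignores nBits rounding and the `pow_limit` clamp:

```cpp
#include <cassert>
#include <cstdint>

// At a retarget boundary the new target may move by at most a factor of 4 in
// either direction relative to the old target, because the retarget timespan
// is clamped to [target_timespan/4, target_timespan*4].
bool TransitionWithinBounds(uint64_t old_target, uint64_t new_target,
                            int64_t target_timespan)
{
    const int64_t smallest_timespan = target_timespan / 4;
    const int64_t largest_timespan = target_timespan * 4;

    // Largest (easiest) target allowed: old target scaled by the max timespan.
    const uint64_t max_new = old_target * largest_timespan / target_timespan;
    // Smallest (hardest) target allowed: old target scaled by the min timespan.
    const uint64_t min_new = old_target * smallest_timespan / target_timespan;

    return new_target >= min_new && new_target <= max_new;
}
```

For an old target of 1000 and a 600-second timespan, anything in [250, 4000] passes and anything outside is rejected, which is the same shape of filter the real function applies to headers it cannot yet validate against the chain.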
```cpp
// Test that increasing nbits further would not be a PermittedDifficultyTransition.
unsigned int invalid_nbits = expected_nbits+1;
BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits));
```
Test logic is inverted - checks opposite of what the comment states.
The comment says "Test that increasing nbits further would not be a PermittedDifficultyTransition," but the code uses BOOST_CHECK(PermittedDifficultyTransition(...)) which asserts that the function returns true. To test that invalid_nbits is not permitted, use the negated check.
🐛 Proposed fix
```diff
-    BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits));
+    BOOST_CHECK(!PermittedDifficultyTransition(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits));
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
// Test that increasing nbits further would not be a PermittedDifficultyTransition.
unsigned int invalid_nbits = expected_nbits+1;
BOOST_CHECK(!PermittedDifficultyTransition(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits));
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/test/pow_tests.cpp` around lines 77 - 79, The test's assertion is
inverted: it intends to verify that increasing nbits is NOT permitted but
currently asserts true; update the check that calls
PermittedDifficultyTransition(chainParams->GetConsensus(),
blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits) so it expects
false (e.g., negate the call or use BOOST_CHECK_EQUAL(false, ...)) where
invalid_nbits = expected_nbits+1 is used, keeping the same inputs
(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits).
```cpp
bool initial_download = chainstate.IsInitialBlockDownload();
uiInterface.NotifyHeaderTip(GetSynchronizationState(initial_download), height, timestamp, /*presync=*/true);
if (initial_download) {
    const int64_t blocks_left{(GetTime() - timestamp) / GetConsensus().nPowTargetSpacing};
    const double progress{100.0 * height / (height + blocks_left)};
    LogPrintf("Pre-synchronizing blockheaders, height: %d (~%.2f%%)\n", height, progress);
```
Clamp presync progress math before logging.
timestamp is still presync data here, so it can be ahead of local time. That makes blocks_left negative and can produce bogus percentages or a zero denominator in the log path.
Suggested fix

```diff
     bool initial_download = chainstate.IsInitialBlockDownload();
     uiInterface.NotifyHeaderTip(GetSynchronizationState(initial_download), height, timestamp, /*presync=*/true);
     if (initial_download) {
-        const int64_t blocks_left{(GetTime() - timestamp) / GetConsensus().nPowTargetSpacing};
-        const double progress{100.0 * height / (height + blocks_left)};
+        const int64_t age_secs{std::max<int64_t>(0, GetTime() - timestamp)};
+        const int64_t blocks_left{age_secs / GetConsensus().nPowTargetSpacing};
+        const int64_t denom{std::max<int64_t>(1, height + blocks_left)};
+        const double progress{100.0 * height / denom};
         LogPrintf("Pre-synchronizing blockheaders, height: %d (~%.2f%%)\n", height, progress);
     }
 }
```
}🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/validation.cpp` around lines 4394 - 4399, The presync progress math in
validation.cpp can produce negative blocks_left and a divide-by-zero when
timestamp is ahead of GetTime(); update the initial_download path around symbols
initial_download, blocks_left, progress, GetTime(), timestamp and
GetConsensus().nPowTargetSpacing to clamp blocks_left to a non-negative value
(e.g. max(0, computed_blocks_left)) and guard the progress calculation so you
never divide by zero (if height + blocks_left == 0 set progress to 0.0), and
finally clamp the computed progress into [0.0, 100.0] before calling LogPrintf.
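The clamped computation from the suggested fix can be extracted into a standalone sketch. `PresyncProgressPercent` is illustrative; the real code computes this inline in validation.cpp:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Inputs mirror GetTime(), the header timestamp, nPowTargetSpacing and the
// presync height. Future timestamps clamp to zero age, the denominator is
// kept strictly positive, and the result is capped at 100%.
double PresyncProgressPercent(int64_t now, int64_t header_time, int64_t spacing,
                              int64_t height)
{
    const int64_t age_secs = std::max<int64_t>(0, now - header_time); // future timestamp -> 0
    const int64_t blocks_left = age_secs / spacing;
    const int64_t denom = std::max<int64_t>(1, height + blocks_left); // avoid divide-by-zero
    return std::min(100.0, 100.0 * height / denom);
}
```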
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/headerssync.cpp (2)
159-163: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Enforce per-header chain continuity during PRESYNC.
Only the first header in a batch is checked against the prior tip. Later headers in the same message are not required to link to their immediate predecessor, so disjoint headers can still advance work/commitments.
Proposed fix

```diff
 bool HeadersSyncState::ValidateAndProcessSingleHeader(const CBlockHeader& current)
 {
     Assume(m_download_state == State::PRESYNC);
     if (m_download_state != State::PRESYNC) return false;
-    int next_height = m_current_height + 1;
+    const int next_height = m_current_height + 1;
+    if (current.hashPrevBlock != m_last_header_received.GetHash()) {
+        LogPrint(BCLog::NET, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n", m_id, next_height);
+        return false;
+    }
```

Also applies to: 177-210
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/headerssync.cpp` around lines 159 - 163, The PRESYNC loop currently calls ValidateAndProcessSingleHeader(hdr) for each header but only the first header is checked against the prior tip, allowing later headers in the same batch to be disjoint; modify the loop in headers processing (the for-loop using ValidateAndProcessSingleHeader and the similar logic in the 177-210 block) to enforce per-header chain continuity by validating that each header's prev_block_hash equals the hash of the immediate predecessor header (or the prior tip for the first header) before calling ValidateAndProcessSingleHeader, and update the running "previous hash" value to the accepted header's hash as you iterate so every header must link to its predecessor within the batch.
43-43: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Clamp negative elapsed time before assigning `m_max_commitments`.
`Ticks(...) + MAX_FUTURE_BLOCK_TIME` can be negative; assigning that to the unsigned commitment bound can wrap to a huge value and effectively disable the memory cap.
Proposed fix

```diff
-    m_max_commitments = 6*(Ticks<std::chrono::seconds>(NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
+    const int64_t elapsed = Ticks<std::chrono::seconds>(
+        NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}});
+    const int64_t bounded = std::max<int64_t>(0, elapsed + MAX_FUTURE_BLOCK_TIME);
+    m_max_commitments = 6 * bounded / HEADER_COMMITMENT_PERIOD;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/headerssync.cpp` at line 43, The expression used to compute m_max_commitments can be negative before being assigned to an unsigned type and wrap to a huge value; compute the elapsed ticks = Ticks<std::chrono::seconds>(NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}), add MAX_FUTURE_BLOCK_TIME, clamp that sum to >= 0 (e.g. use std::max(sum, decltype(sum)(0))) and only then divide by HEADER_COMMITMENT_PERIOD and multiply by 6 to assign to m_max_commitments; ensure you clamp the intermediate signed value rather than the final unsigned assignment to avoid wraparound.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 8089d480-7f65-45f8-ac9a-d7d8ccc9c6f4
📒 Files selected for processing (59)
- src/Makefile.am
- src/Makefile.test.include
- src/chain.cpp
- src/chain.h
- src/consensus/validation.h
- src/headerssync.cpp
- src/headerssync.h
- src/index/base.cpp
- src/interfaces/node.h
- src/net_processing.cpp
- src/net_processing.h
- src/node/interface_ui.cpp
- src/node/interface_ui.h
- src/node/interfaces.cpp
- src/pow.cpp
- src/pow.h
- src/primitives/block.h
- src/qt/bitcoin.cpp
- src/qt/bitcoingui.cpp
- src/qt/bitcoingui.h
- src/qt/clientmodel.cpp
- src/qt/clientmodel.h
- src/qt/informationwidget.cpp
- src/qt/informationwidget.h
- src/qt/modaloverlay.cpp
- src/qt/modaloverlay.h
- src/qt/rpcconsole.cpp
- src/qt/rpcconsole.h
- src/qt/sendcoinsdialog.cpp
- src/qt/sendcoinsdialog.h
- src/rpc/mining.cpp
- src/rpc/net.cpp
- src/test/blockfilter_index_tests.cpp
- src/test/coinstatsindex_tests.cpp
- src/test/evo_deterministicmns_tests.cpp
- src/test/fuzz/bitdeque.cpp
- src/test/fuzz/pow.cpp
- src/test/fuzz/utxo_snapshot.cpp
- src/test/headers_sync_chainwork_tests.cpp
- src/test/miner_tests.cpp
- src/test/pow_tests.cpp
- src/test/skiplist_tests.cpp
- src/test/util/mining.cpp
- src/test/util/setup_common.cpp
- src/test/util_tests.cpp
- src/test/validation_block_tests.cpp
- src/test/validation_chainstate_tests.cpp
- src/util/bitdeque.h
- src/validation.cpp
- src/validation.h
- test/functional/feature_block.py
- test/functional/p2p_compactblocks.py
- test/functional/p2p_dos_header_tree.py
- test/functional/p2p_headers_sync_with_minchainwork.py
- test/functional/p2p_invalid_messages.py
- test/functional/p2p_unrequested_blocks.py
- test/functional/rpc_blockchain.py
- test/functional/rpc_net.py
- test/functional/test_runner.py
✅ Files skipped from review due to trivial changes (4)
- test/functional/rpc_net.py
- src/index/base.cpp
- src/Makefile.am
- src/test/util_tests.cpp
🚧 Files skipped from review as they are similar to previous changes (49)
- src/Makefile.test.include
- src/pow.h
- src/test/skiplist_tests.cpp
- src/primitives/block.h
- src/qt/rpcconsole.cpp
- src/test/coinstatsindex_tests.cpp
- src/test/util/setup_common.cpp
- test/functional/p2p_dos_header_tree.py
- src/test/fuzz/utxo_snapshot.cpp
- src/test/miner_tests.cpp
- src/node/interface_ui.h
- test/functional/p2p_unrequested_blocks.py
- src/qt/informationwidget.cpp
- src/chain.cpp
- src/test/fuzz/pow.cpp
- src/test/pow_tests.cpp
- src/interfaces/node.h
- test/functional/feature_block.py
- src/qt/rpcconsole.h
- src/qt/informationwidget.h
- src/qt/bitcoingui.h
- src/rpc/mining.cpp
- src/net_processing.h
- src/qt/clientmodel.cpp
- src/qt/sendcoinsdialog.cpp
- test/functional/test_runner.py
- src/qt/sendcoinsdialog.h
- src/test/evo_deterministicmns_tests.cpp
- src/consensus/validation.h
- src/test/validation_block_tests.cpp
- test/functional/p2p_compactblocks.py
- src/qt/bitcoin.cpp
- src/test/validation_chainstate_tests.cpp
- src/pow.cpp
- src/node/interfaces.cpp
- src/headerssync.h
- src/test/blockfilter_index_tests.cpp
- src/test/util/mining.cpp
- src/qt/modaloverlay.h
- src/qt/clientmodel.h
- test/functional/p2p_invalid_messages.py
- src/qt/modaloverlay.cpp
- src/chain.h
- src/qt/bitcoingui.cpp
- src/node/interface_ui.cpp
- src/rpc/net.cpp
- src/validation.cpp
- src/util/bitdeque.h
- src/net_processing.cpp
```cpp
// operator[] const
if (!cdeq.empty()) {
    size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
    assert(deq[pos] == bitdeq[pos]);
}
```
Use const containers in the operator[] const check.
This block currently reads from deq/bitdeq, so it doesn’t actually validate const operator[] behavior.
Suggested patch
```diff
 [&] {
     // operator[] const
     if (!cdeq.empty()) {
         size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
-        assert(deq[pos] == bitdeq[pos]);
+        assert(cdeq[pos] == cbitdeq[pos]);
     }
 },
```

📝 Committable suggestion
},📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```cpp
// operator[] const
if (!cdeq.empty()) {
    size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
    assert(cdeq[pos] == cbitdeq[pos]);
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/test/fuzz/bitdeque.cpp` around lines 322 - 326, The test is checking
operator[] but indexes non-const containers (deq/bitdeq) so it doesn't validate
the const operator[]. Change the check to read from the const views instead (use
cdeq and a const alias for bitdeq, e.g. cbitdeq) and assert that cdeq[pos] ==
cbitdeq[pos]; ensure the block still uses provider.ConsumeIntegralInRange with
cdeq.size() and indexes the const containers to exercise operator[] const.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ef1d7426a5
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```cpp
// We'll rely on headers having valid proof-of-work further down, as an
// anti-DoS criteria (note: this check is required before passing any
// headers into HeadersSyncState).
if (!CheckHeadersPoW(headers, m_chainparams.GetConsensus(), pfrom)) {
```
Skip PoW scan for already-known ancestor headers
ProcessHeadersMessage now calls CheckHeadersPoW(headers, ...) before any block-index lookup, so even a batch of 2000 headers that are all already-known ancestors incurs full proof-of-work verification every time. In this path, ProcessNewBlockHeaders() would otherwise short-circuit known headers cheaply, so a peer can repeatedly send old full batches and force unnecessary CPU work without tripping misbehavior scoring. Consider moving the expensive PoW loop after the known-ancestor fast path (or adding an early duplicate/ancestor check) so repeated historical headers remain near no-op.
Useful? React with 👍 / 👎.
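A sketch of the duplicate fast path the review suggests, with hashes simplified to integers and a set standing in for the block index. `AllHeadersKnown` is hypothetical, not an existing Core function:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Cheap pre-check run before the expensive per-header PoW scan: if every
// header in the batch is already in the block index, the batch can take the
// known-ancestor fast path, so repeated historical batches stay a near no-op.
bool AllHeadersKnown(const std::unordered_set<uint64_t>& block_index,
                     const std::vector<uint64_t>& header_hashes)
{
    for (const auto& h : header_hashes) {
        if (block_index.find(h) == block_index.end()) return false;
    }
    return !header_hashes.empty();
}
```

A set lookup per header is far cheaper than a hash-function evaluation per header, which is the asymmetry the comment relies on.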
♻️ Duplicate comments (3)
src/test/fuzz/bitdeque.cpp (1)
322-326: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Use const containers in the `operator[] const` check.
This block still indexes mutable containers, so it doesn't validate const `operator[]`.
Suggested patch

```diff
 [&] {
     // operator[] const
     if (!cdeq.empty()) {
         size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
-        assert(deq[pos] == bitdeq[pos]);
+        assert(cdeq[pos] == cbitdeq[pos]);
     }
 },
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/test/fuzz/bitdeque.cpp` around lines 322 - 326, The test is still indexing mutable containers in the "operator[] const" check; make and use const-qualified references (or const copies) for the containers before the assertion so the const overload is exercised — e.g., create const auto& cdeq_const = cdeq (and const auto& deq_const = deq, const auto& bitdeq_const = bitdeq) and then use cdeq_const.empty(), consume a pos in range [0, cdeq_const.size()-1], and assert deq_const[pos] == bitdeq_const[pos] to validate the const operator[] overloads.

src/headerssync.cpp (2)
43-43: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Clamp negative elapsed time before assigning `m_max_commitments`.
Line 43 can become negative; converting that to unsigned wraps to a huge value and effectively disables the commitment bound.
Proposed fix

```diff
-    m_max_commitments = 6*(Ticks<std::chrono::seconds>(NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
+    const int64_t elapsed = Ticks<std::chrono::seconds>(
+        NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}});
+    const int64_t bounded = std::max<int64_t>(0, elapsed + MAX_FUTURE_BLOCK_TIME);
+    m_max_commitments = 6 * static_cast<uint64_t>(bounded) / HEADER_COMMITMENT_PERIOD;
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/headerssync.cpp` at line 43, The computation assigning m_max_commitments can produce a negative elapsed duration that, when converted to an unsigned via Ticks<...>, wraps to a huge value; clamp the elapsed time to non-negative before converting: compute elapsed = NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}, then replace it with max(elapsed, NodeSeconds{0}) (or equivalent) before passing into Ticks and the rest of the expression using MAX_FUTURE_BLOCK_TIME and HEADER_COMMITMENT_PERIOD so m_max_commitments is always a sensible non-negative value.
159-163: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Enforce per-header chain continuity during PRESYNC.
Only the first header in a batch is link-checked today; later headers can be disjoint while still advancing work/commitments. Add a continuity check per header in `ValidateAndProcessSingleHeader`.
Proposed fix

```diff
 bool HeadersSyncState::ValidateAndProcessSingleHeader(const CBlockHeader& current)
 {
     Assume(m_download_state == State::PRESYNC);
     if (m_download_state != State::PRESYNC) return false;
-    int next_height = m_current_height + 1;
+    const int next_height = m_current_height + 1;
+    if (current.hashPrevBlock != m_last_header_received.GetHash()) {
+        LogPrint(BCLog::NET, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n",
+            m_id, next_height);
+        return false;
+    }
```

Also applies to: 177-210
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/headerssync.cpp` around lines 159 - 163, The loop currently calls ValidateAndProcessSingleHeader(hdr) for each header but only the first header is checked for chain continuity; update ValidateAndProcessSingleHeader to enforce that each hdr.prev_hash (or equivalent parent identifier) matches the hash of the last-accepted header/tip before processing this header, returning false if mismatched so the loop (and the other loop region noted around 177-210) cannot advance on disjoint headers; locate and use the module's canonical tip/last-processed header accessor (e.g., the in-file variable or function that returns the current last header) inside ValidateAndProcessSingleHeader to perform this per-header continuity check and fail fast on mismatch.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 36acccee-6d46-4cd5-984a-ff9caa5b246c
📒 Files selected for processing (59)
src/Makefile.am, src/Makefile.test.include, src/chain.cpp, src/chain.h, src/consensus/validation.h, src/headerssync.cpp, src/headerssync.h, src/index/base.cpp, src/interfaces/node.h, src/net_processing.cpp, src/net_processing.h, src/node/interface_ui.cpp, src/node/interface_ui.h, src/node/interfaces.cpp, src/pow.cpp, src/pow.h, src/primitives/block.h, src/qt/bitcoin.cpp, src/qt/bitcoingui.cpp, src/qt/bitcoingui.h, src/qt/clientmodel.cpp, src/qt/clientmodel.h, src/qt/informationwidget.cpp, src/qt/informationwidget.h, src/qt/modaloverlay.cpp, src/qt/modaloverlay.h, src/qt/rpcconsole.cpp, src/qt/rpcconsole.h, src/qt/sendcoinsdialog.cpp, src/qt/sendcoinsdialog.h, src/rpc/mining.cpp, src/rpc/net.cpp, src/test/blockfilter_index_tests.cpp, src/test/coinstatsindex_tests.cpp, src/test/evo_deterministicmns_tests.cpp, src/test/fuzz/bitdeque.cpp, src/test/fuzz/pow.cpp, src/test/fuzz/utxo_snapshot.cpp, src/test/headers_sync_chainwork_tests.cpp, src/test/miner_tests.cpp, src/test/pow_tests.cpp, src/test/skiplist_tests.cpp, src/test/util/mining.cpp, src/test/util/setup_common.cpp, src/test/util_tests.cpp, src/test/validation_block_tests.cpp, src/test/validation_chainstate_tests.cpp, src/util/bitdeque.h, src/validation.cpp, src/validation.h, test/functional/feature_block.py, test/functional/p2p_compactblocks.py, test/functional/p2p_dos_header_tree.py, test/functional/p2p_headers_sync_with_minchainwork.py, test/functional/p2p_invalid_messages.py, test/functional/p2p_unrequested_blocks.py, test/functional/rpc_blockchain.py, test/functional/rpc_net.py, test/functional/test_runner.py
✅ Files skipped from review due to trivial changes (5)
- src/rpc/net.cpp
- src/Makefile.am
- test/functional/rpc_net.py
- src/qt/rpcconsole.h
- src/test/util_tests.cpp
🚧 Files skipped from review as they are similar to previous changes (50)
- src/consensus/validation.h
- src/node/interface_ui.h
- test/functional/test_runner.py
- test/functional/p2p_dos_header_tree.py
- src/primitives/block.h
- src/pow.h
- src/test/fuzz/utxo_snapshot.cpp
- src/net_processing.h
- src/test/validation_chainstate_tests.cpp
- src/test/miner_tests.cpp
- src/qt/sendcoinsdialog.cpp
- src/qt/clientmodel.h
- src/interfaces/node.h
- src/qt/modaloverlay.cpp
- src/test/util/mining.cpp
- src/chain.h
- src/node/interface_ui.cpp
- src/Makefile.test.include
- src/index/base.cpp
- test/functional/rpc_blockchain.py
- src/qt/bitcoingui.h
- test/functional/feature_block.py
- src/qt/modaloverlay.h
- src/test/skiplist_tests.cpp
- src/chain.cpp
- src/qt/informationwidget.cpp
- src/test/fuzz/pow.cpp
- src/node/interfaces.cpp
- src/test/util/setup_common.cpp
- src/qt/informationwidget.h
- src/test/blockfilter_index_tests.cpp
- src/pow.cpp
- src/test/pow_tests.cpp
- test/functional/p2p_invalid_messages.py
- src/headerssync.h
- src/test/coinstatsindex_tests.cpp
- src/qt/rpcconsole.cpp
- src/rpc/mining.cpp
- src/qt/clientmodel.cpp
- src/qt/sendcoinsdialog.h
- src/test/evo_deterministicmns_tests.cpp
- src/test/validation_block_tests.cpp
- src/qt/bitcoingui.cpp
- test/functional/p2p_compactblocks.py
- src/validation.h
- src/qt/bitcoin.cpp
- test/functional/p2p_unrequested_blocks.py
- src/util/bitdeque.h
- src/net_processing.cpp
- src/validation.cpp
BACKPORT NOTE: partial due to missing changes in bitcoin-chainstate.cpp

3add234 ui: show header pre-synchronization progress (Pieter Wuille)
738421c Emit NotifyHeaderTip signals for pre-synchronization progress (Pieter Wuille)
376086f Make validation interface capable of signalling header presync (Pieter Wuille)
93eae27 Test large reorgs with headerssync logic (Suhas Daftuar)
3555473 Track headers presync progress and log it (Pieter Wuille)
03712dd Expose HeadersSyncState::m_current_height in getpeerinfo() (Suhas Daftuar)
150a548 Test headers sync using minchainwork threshold (Suhas Daftuar)
0b6aa82 Add unit test for HeadersSyncState (Suhas Daftuar)
83c6a0c Reduce spurious messages during headers sync (Suhas Daftuar)
ed6cddd Require callers of AcceptBlockHeader() to perform anti-dos checks (Suhas Daftuar)
551a8d9 Utilize anti-DoS headers download strategy (Suhas Daftuar)
ed47094 Add functions to construct locators without CChain (Pieter Wuille)
84852bb Add bitdeque, an std::deque<bool> analogue that does bit packing. (Pieter Wuille)
1d4cfa4 Add function to validate difficulty changes (Suhas Daftuar)

Pull request description:

New nodes starting up for the first time lack protection against DoS from low-difficulty headers. While checkpoints serve as our protection against headers that fork from the main chain below the known checkpointed values, this protection only applies to nodes that have been able to download the honest chain to the checkpointed heights.

We can protect all nodes from DoS from low-difficulty headers by adopting a different strategy: before we commit to storing a header in permanent storage, first verify that the header is part of a chain that has sufficiently high work (either `nMinimumChainWork`, or something comparable to our tip). This means that we will download headers from a given peer twice: once to verify the work on the chain, and a second time when permanently storing the headers.

The p2p protocol doesn't provide an easy way for us to ensure that we receive the same headers during the second download of a peer's headers chain. To ensure that a peer doesn't (say) give us the main chain in phase 1 to trick us into permanently storing an alternate, low-work chain in phase 2, we store commitments to the headers during our first download, which we validate in the second download.

Some parameters must be chosen for commitment size/frequency in phase 1, and validation of commitments in phase 2. In this PR, those parameters are chosen to both (a) minimize the per-peer memory usage that an attacker could utilize, and (b) bound the expected amount of permanent memory that an attacker could get us to use to be well below the memory growth that we'd get from the honest chain (where we expect 1 new block header every 10 minutes).

After this PR, we should be able to remove checkpoints from our code, which is a nice philosophical change for us to make as well, as there has been confusion over the years about the role checkpoints play in Bitcoin's consensus algorithm.

Thanks to Pieter Wuille for collaborating on this design.

ACKs for top commit:
Sjors: re-tACK 3add234
mzumsande: re-ACK 3add234
sipa: re-ACK 3add234
glozow: ACK 3add234

Tree-SHA512: e7789d65f62f72141b8899eb4a2fb3d0621278394d2d7adaa004675250118f89a4e4cb42777fe56649d744ec445ad95141e10f6def65f0a58b7b35b2e654a875

Co-authored-by: fanquake <fanquake@gmail.com>
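The two-phase commitment idea in the description can be illustrated with a toy sketch. This is not the real `HeadersSyncState`: the actual code salts the commitment hash per peer, uses a much larger commitment period, and packs the bits into a `bitdeque`; here the "commitment" is simply the low bit of a stand-in header hash:

```cpp
#include <cstdint>
#include <deque>

// Toy two-phase commitment tracker. Phase 1 (PRESYNC) stores one bit per
// PERIOD headers; phase 2 (REDOWNLOAD) must reproduce the same bits or the
// sync is aborted.
constexpr uint64_t PERIOD = 4; // illustrative; the real period is larger

class ToyCommitments {
public:
    // Phase 1: record a commitment bit every PERIOD headers.
    void Record(uint64_t header_hash) {
        if (++m_presync_height % PERIOD == 0) m_bits.push_back(header_hash & 1);
    }
    // Phase 2: check the stored bit at each commitment position; a mismatch
    // means the peer served a different chain the second time around.
    bool Verify(uint64_t header_hash) {
        if (++m_redownload_height % PERIOD != 0) return true;
        if (m_bits.empty() || m_bits.front() != static_cast<bool>(header_hash & 1)) return false;
        m_bits.pop_front();
        return true;
    }
private:
    std::deque<bool> m_bits; // the real code packs these into a bitdeque
    uint64_t m_presync_height{0};
    uint64_t m_redownload_height{0};
};
```

Replaying the same chain passes verification, while swapping a header at a commitment position fails it, which is the property that stops the "honest chain in phase 1, low-work chain in phase 2" switch.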
…tidy fixup

6b24dfe CBlockLocator: performance-move-const-arg Clang tidy fixups (Jon Atack)

Pull request description:

Fix Clang-tidy CI errors on master. See https://cirrus-ci.com/task/4806752200818688?logs=ci#L4696 for an example.

ACKs for top commit:
MarcoFalke: review ACK 6b24dfe
vasild: ACK 6b24dfe

Tree-SHA512: 7a67acf7b42da07b63fbb392236e9a7be8cf35c36e37ca980c4467fe8295c2eda8aef10f41a1e3036cd9ebece47fa957fc3256033f853bd6a97ce2ca42799a0a

Co-authored-by: MacroFake <falke.marco@gmail.com>
94af3e4 Fix typo from PR25717 (Suhas Daftuar)
e5982ec Bypass headers anti-DoS checks for NoBan peers (Suhas Daftuar)
132ed7e Move headerssync logging to BCLog::NET (Suhas Daftuar)

Pull request description:

Remove BCLog::HEADERSSYNC and move all headerssync logging to BCLog::NET. Bypass headers anti-DoS checks for NoBan peers. Also fix a typo that was introduced in PR25717.

ACKs for top commit:
Sjors: tACK 94af3e4
ajtowns: ACK 94af3e4
sipa: ACK 94af3e4
naumenkogs: ACK 94af3e4
w0xlt: ACK bitcoin@94af3e4

Tree-SHA512: 612d594eddace977359bcc8234b2093d273fd50662f4ac70cb90903d28fb831f6e1aecff51a4ef6c0bb0f6fb5d1aa7ff1eb8798fac5ac142783788f3080717dc

Co-authored-by: fanquake <fanquake@gmail.com>
…th_minchainwork.py

88e7807 test: fix non-determinism in p2p_headers_sync_with_minchainwork.py (Suhas Daftuar)

Pull request description:

The test for node3's chaintips (added in PR25960) needs some sort of synchronization in order to be reliable.

ACKs for top commit:
mzumsande: Code Review ACK 88e7807
satsie: ACK 88e7807

Tree-SHA512: 5607c5b1a95d91e7cf81b695eb356b782cbb303bcc7fd9044e1058c0c0625c5f9e5fe4f4dde9d2bffa27a80d83fc060336720f7becbba505ccfb8a04fcc81705

Co-authored-by: fanquake <fanquake@gmail.com>
fa4ba04 fuzz: Remove no-op call to get() (MacroFake)
fa64228 fuzz: Avoid timeout in bitdeque fuzz target (MacroFake)

Pull request description:

I'd guess that any bug should be discoverable within `10` ops. However, `900` seems also better than no limit at all, which causes timeouts such as https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=50892

ACKs for top commit:
sipa: ACK fa4ba04

Tree-SHA512: f6bd25e78d5f04c6f88e9300c2fa3d0993a0911cb0fd1b414077adc0edde1a06ad72af5e2f50f0ab1324f91999ae57d879686c545b2e6c19ae7f637a8804bd48

Co-authored-by: fanquake <fanquake@gmail.com>
…eader

bdcafb9 p2p: ProcessHeadersMessage(): fix received_new_header (Larry Ruane)

Pull request description:

Follow-up to bitcoin#25717. The commit "Utilize anti-DoS headers download strategy" changed how this bool variable is computed, so that its value is now the opposite of what it should be.

Prior to bitcoin#25717:

```cpp
bool received_new_header{WITH_LOCK(::cs_main, return m_chainman.m_blockman.LookupBlockIndex(headers.back().GetHash()) == nullptr)};
```

After bitcoin#25717 (simplified):

```cpp
{
    LOCK(cs_main);
    last_received_header = m_chainman.m_blockman.LookupBlockIndex(headers.back().GetHash());
}
bool received_new_header{last_received_header != nullptr};
```

ACKs for top commit:
dergoegge: ACK bdcafb9
glozow: ACK bdcafb9, I believe this is correct and don't see anything to suggest the switch was intentional.
stickies-v: ACK bdcafb9

Tree-SHA512: 35c12762f1429585a0b1c15053e310e83efb28c3d8cbf4092fad9fe81c893f6d766df1f2b20624882acb9654d0539a0c871f587d7090dc2a198115adf59db3ec

Co-authored-by: glozow <gloriajzhao@gmail.com>
…eturn value correctly when new headers sync is started

7ad15d1 [net processing] Handle IsContinuationOfLowWorkHeadersSync return value correctly when new headers sync is started (dergoegge)

Pull request description:

This PR fixes a bug in the headers sync logic that enables submitting headers to a node's block index that don't lead to a chain that surpasses our DoS limit. The issue is that we ignore the return value on [the first `IsContinuationOfLowWorkHeadersSync` call after a new headers sync is started](https://github.com/bitcoin/bitcoin/blob/fabc0310480b49e159a15d494525c5aa15072cba/src/net_processing.cpp#L2553-L2568), which leads to us passing headers to [`ProcessNewBlockHeaders`](https://github.com/bitcoin/bitcoin/blob/fabc0310480b49e159a15d494525c5aa15072cba/src/net_processing.cpp#L2856) when that initial `IsContinuationOfLowWorkHeadersSync` call returns `false`.

One easy way (maybe the only?) to trigger this is by sending 2000 headers where the last header has a different `nBits` value than the prior headers (which fails the pre-sync logic [here](https://github.com/bitcoin/bitcoin/blob/fabc0310480b49e159a15d494525c5aa15072cba/src/headerssync.cpp#L189)). Those 2000 headers will be passed to `ProcessNewBlockHeaders`.

I haven't included a test here so far because we can't test this without changing the default value for `CRegTestParams::consensus.fPowAllowMinDifficultyBlocks` or doing some more involved refactoring.

ACKs for top commit:
sipa: ACK 7ad15d1
glozow: ACK 7ad15d1

Tree-SHA512: 9aabb8bf3700401e79863d0accda0befd2a83c4d469a53f97d827e51139e2f826aee08cdfbc8866b311b153f61fdac9b7aa515fcfa2a21c5e2812c2bf3c03664

Co-authored-by: glozow <gloriajzhao@gmail.com>
784b023 [net processing] Simplify use of IsContinuationOfLowWorkHeadersSync in TryLowWorkHeaderSync (dergoegge)
e891aab [net processing] Fixup TryLowWorkHeadersSync comment (dergoegge)

Pull request description:

See bitcoin#26355 (comment) and bitcoin#26355 (comment)

ACKs for top commit:
hernanmarino: ACK 784b023
brunoerg: crACK 784b023
mzumsande: ACK 784b023

Tree-SHA512: b47ac0d78a09ca3a1806e38c5d2e2fcf1e5f0668f202450b5079c5cb168e168ac6828c0948d23f3610696375134986d75ef3c6098858173023bcb743aec8004c

Co-authored-by: fanquake <fanquake@gmail.com>
…_minchainwork

fa952fa test: Avoid rpc timeout in p2p_headers_sync_with_minchainwork (MarcoFalke)

Pull request description:

When running a lot of tests in parallel, I get `JSONRPCException: 'generatetoaddress' RPC took longer than 30.000000 seconds.` The general recommendation, if running into timeouts, is to increase the `--timeout-factor`. However, I think that the default timeout values should be suitable to run the tests out of the box on reasonable hardware.

ACKs for top commit:
fanquake: ACK fa952fa

Tree-SHA512: b7eeda54f8db900f077417c0431f659c67e686e2fc078f8c713e37ed75b8bc862814ce20e8400741638e35e224d7284ad16172bf5f82168f803376d0c9ec4524

Co-authored-by: MarcoFalke <*~=`'#}+{/-|&$^_@721217.xyz>
…id proof-of-work disconnects peer

7726712 test: p2p: check that headers message with invalid proof-of-work disconnects peer (Sebastian Falbesoner)

Pull request description:

One of the earliest anti-DoS checks done after receiving and deserializing a `headers` message from a peer is verifying whether the proof-of-work is valid (called in method `PeerManagerImpl::ProcessHeadersMessage`):

https://github.com/bitcoin/bitcoin/blob/f227e153e80c8c50c30d76e1ac638d7206c7ff61/src/net_processing.cpp#L2752-L2762

The called method `PeerManagerImpl::CheckHeadersPoW` calls `Misbehaving` with a score of 100, i.e. leading to an immediate disconnect of the peer:

https://github.com/bitcoin/bitcoin/blob/f227e153e80c8c50c30d76e1ac638d7206c7ff61/src/net_processing.cpp#L2368-L2372

This PR adds a simple test for both the misbehaving log and the resulting disconnect. For creating a block header with invalid proof-of-work, we first create one that is accepted by the node (the difficulty field `nBits` is copied from the genesis block) and based on that the nonce is modified until we have a block header hash prefix that is too high to fulfill even the minimum difficulty.

ACKs for top commit:
Sjors: ACK 7726712
achow101: ACK 7726712
brunoerg: crACK 7726712
furszy: Code review ACK 7726712 with a non-blocking speedup.

Tree-SHA512: 680aa7939158d1dc672b90aa6554ba2b3a92584b6d3bcb0227776035858429feb8bc66eed18b47de0fe56df7d9b3ddaee231aaeaa360136603b9ad4b19e6ac11

Co-authored-by: Andrew Chow <github@achow101.com>
♻️ Duplicate comments (3)
src/test/fuzz/bitdeque.cpp (1)
322-326: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Use const containers in the `operator[] const` check. This block still indexes non-const containers, so it does not validate `operator[] const`.

Suggested patch:

```diff
 [&] { // operator[] const
     if (!cdeq.empty()) {
         size_t pos = provider.ConsumeIntegralInRange<size_t>(0, cdeq.size() - 1);
-        assert(deq[pos] == bitdeq[pos]);
+        assert(cdeq[pos] == cbitdeq[pos]);
     }
 },
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/test/fuzz/bitdeque.cpp` around lines 322 - 326, The test is meant to exercise the const overload of operator[] but currently indexes non-const containers; create const references to the containers before indexing so the const operator[] is invoked. For example, introduce const references (e.g., const auto& cdeq_const = deq; const auto& cbitdeq = bitdeq;) and then use those in the assertion (assert(cdeq_const[pos] == cbitdeq[pos])) so the const operator[] is actually tested.src/headerssync.cpp (2)
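For reference, the reason the fix matters: C++ overload resolution selects `operator[] const` only when the object expression itself is const, so indexing the mutable containers never exercises the const overload. A minimal illustration using a plain `std::deque` (not the project's `bitdeque`):

```cpp
#include <cstddef>
#include <deque>

// Indexing through a const reference is what dispatches to the const
// operator[] overload; indexing the mutable object never exercises it.
bool MatchAtConst(const std::deque<bool>& cdeq, std::size_t pos, bool expected)
{
    return cdeq[pos] == expected; // const overload selected here
}
```

The fuzz-test fix applies the same idea: take const references (or pass through const-qualified parameters) before indexing so the const code path is actually covered.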
43-43: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Clamp the elapsed-time term before assigning to `m_max_commitments`. Line 43 can go negative before assignment; converting that to unsigned can wrap and effectively disable the intended commitment bound.

Proposed fix:

```diff
-    m_max_commitments = 6*(Ticks<std::chrono::seconds>(NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}}) + MAX_FUTURE_BLOCK_TIME) / HEADER_COMMITMENT_PERIOD;
+    const int64_t elapsed = Ticks<std::chrono::seconds>(
+        NodeClock::now() - NodeSeconds{std::chrono::seconds{chain_start->GetMedianTimePast()}});
+    const int64_t bounded_elapsed = std::max<int64_t>(0, elapsed + MAX_FUTURE_BLOCK_TIME);
+    m_max_commitments = (6 * static_cast<uint64_t>(bounded_elapsed)) / HEADER_COMMITMENT_PERIOD;
```
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/headerssync.cpp` at line 43, The calculation assigning m_max_commitments can yield a negative elapsed-time which, when converted to an unsigned via Ticks, will wrap; compute the elapsed seconds between NodeClock::now() and NodeSeconds{chain_start->GetMedianTimePast()} first, clamp that value to be non-negative (e.g. max(elapsed, NodeSeconds{0}) or similar) before converting with Ticks and adding MAX_FUTURE_BLOCK_TIME, then perform the division by HEADER_COMMITMENT_PERIOD and multiply by 6; update the expression around m_max_commitments to use the clamped elapsed-term (references: m_max_commitments, Ticks<...>, NodeClock::now(), NodeSeconds, chain_start->GetMedianTimePast(), MAX_FUTURE_BLOCK_TIME, HEADER_COMMITMENT_PERIOD).
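The wraparound hazard behind this finding is easy to demonstrate in isolation. The helper below is a hypothetical stand-in for the proposed clamp (the names and the factor 6 mirror the review's diff, not the real code):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical clamp mirroring the proposed fix: a negative elapsed time
// (local clock behind the chain-start median-time-past) must yield a zero
// commitment budget instead of wrapping to a huge unsigned value.
uint64_t BoundedCommitments(int64_t elapsed_seconds, int64_t max_future_seconds, uint64_t period)
{
    const int64_t bounded = std::max<int64_t>(0, elapsed_seconds + max_future_seconds);
    return 6 * static_cast<uint64_t>(bounded) / period;
}
```

Without the `std::max` clamp, casting a negative `int64_t` to `uint64_t` wraps to an enormous value, which would silently lift the per-peer commitment bound.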
159-163: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Enforce header-to-header continuity throughout PRESYNC. Line 148 checks only the first header in the batch. From Line 159 onward, later headers can be disjoint and still contribute work/commitments because `ValidateAndProcessSingleHeader()` never checks `hashPrevBlock` linkage.

Proposed fix:

```diff
 bool HeadersSyncState::ValidateAndProcessSingleHeader(const CBlockHeader& current)
 {
     Assume(m_download_state == State::PRESYNC);
     if (m_download_state != State::PRESYNC) return false;
-    int next_height = m_current_height + 1;
+    const int next_height = m_current_height + 1;
+    if (current.hashPrevBlock != m_last_header_received.GetHash()) {
+        LogPrint(BCLog::NET, "Initial headers sync aborted with peer=%d: non-continuous headers at height=%i (presync phase)\n", m_id, next_height);
+        return false;
+    }
     // Verify that the difficulty isn't growing too fast; an adversary with
```

Also applies to: 177-193
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/headerssync.cpp` around lines 159 - 163, The batch header loop fails to enforce header-to-header continuity because ValidateAndProcessSingleHeader(hdr) does not verify hdr.hashPrevBlock against the previous header in the batch; update the loop that iterates over headers (the vector named headers) to track the previous header hash (e.g., prevHash = headers[i-1].GetHash() or maintain a running prevHash initialized to the existing chain tip) and before calling ValidateAndProcessSingleHeader(hdr) assert that hdr.hashPrevBlock == prevHash, returning false on mismatch; apply the same change to the other loop referenced (lines 177-193) so both PRESYNC header-processing paths enforce contiguous hashPrevBlock linkage prior to further validation.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@src/headerssync.cpp`:
- Line 43: The calculation assigning m_max_commitments can yield a negative
elapsed-time which, when converted to an unsigned via Ticks, will wrap; compute
the elapsed seconds between NodeClock::now() and
NodeSeconds{chain_start->GetMedianTimePast()} first, clamp that value to be
non-negative (e.g. max(elapsed, NodeSeconds{0}) or similar) before converting
with Ticks and adding MAX_FUTURE_BLOCK_TIME, then perform the division by
HEADER_COMMITMENT_PERIOD and multiply by 6; update the expression around
m_max_commitments to use the clamped elapsed-term (references:
m_max_commitments, Ticks<...>, NodeClock::now(), NodeSeconds,
chain_start->GetMedianTimePast(), MAX_FUTURE_BLOCK_TIME,
HEADER_COMMITMENT_PERIOD).
- Around line 159-163: The batch header loop fails to enforce header-to-header
continuity because ValidateAndProcessSingleHeader(hdr) does not verify
hdr.hashPrevBlock against the previous header in the batch; update the loop that
iterates over headers (the vector named headers) to track the previous header
hash (e.g., prevHash = headers[i-1].GetHash() or maintain a running prevHash
initialized to the existing chain tip) and before calling
ValidateAndProcessSingleHeader(hdr) assert that hdr.hashPrevBlock == prevHash,
returning false on mismatch; apply the same change to the other loop referenced
(lines 177-193) so both PRESYNC header-processing paths enforce contiguous
hashPrevBlock linkage prior to further validation.
In `@src/test/fuzz/bitdeque.cpp`:
- Around line 322-326: The test is meant to exercise the const overload of
operator[] but currently indexes non-const containers; create const references
to the containers before indexing so the const operator[] is invoked. For
example, introduce const references (e.g., const auto& cdeq_const = deq; const
auto& cbitdeq = bitdeq;) and then use those in the assertion
(assert(cdeq_const[pos] == cbitdeq[pos])) so the const operator[] is actually
tested.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 61c0375f-0fe5-4e69-8215-efbc7c5f42e5
📒 Files selected for processing (59)
src/Makefile.am, src/Makefile.test.include, src/chain.cpp, src/chain.h, src/consensus/validation.h, src/headerssync.cpp, src/headerssync.h, src/index/base.cpp, src/interfaces/node.h, src/net_processing.cpp, src/net_processing.h, src/node/interface_ui.cpp, src/node/interface_ui.h, src/node/interfaces.cpp, src/pow.cpp, src/pow.h, src/primitives/block.h, src/qt/bitcoin.cpp, src/qt/bitcoingui.cpp, src/qt/bitcoingui.h, src/qt/clientmodel.cpp, src/qt/clientmodel.h, src/qt/informationwidget.cpp, src/qt/informationwidget.h, src/qt/modaloverlay.cpp, src/qt/modaloverlay.h, src/qt/rpcconsole.cpp, src/qt/rpcconsole.h, src/qt/sendcoinsdialog.cpp, src/qt/sendcoinsdialog.h, src/rpc/mining.cpp, src/rpc/net.cpp, src/test/blockfilter_index_tests.cpp, src/test/coinstatsindex_tests.cpp, src/test/evo_deterministicmns_tests.cpp, src/test/fuzz/bitdeque.cpp, src/test/fuzz/pow.cpp, src/test/fuzz/utxo_snapshot.cpp, src/test/headers_sync_chainwork_tests.cpp, src/test/miner_tests.cpp, src/test/pow_tests.cpp, src/test/skiplist_tests.cpp, src/test/util/mining.cpp, src/test/util/setup_common.cpp, src/test/util_tests.cpp, src/test/validation_block_tests.cpp, src/test/validation_chainstate_tests.cpp, src/util/bitdeque.h, src/validation.cpp, src/validation.h, test/functional/feature_block.py, test/functional/p2p_compactblocks.py, test/functional/p2p_dos_header_tree.py, test/functional/p2p_headers_sync_with_minchainwork.py, test/functional/p2p_invalid_messages.py, test/functional/p2p_unrequested_blocks.py, test/functional/rpc_blockchain.py, test/functional/rpc_net.py, test/functional/test_runner.py
✅ Files skipped from review due to trivial changes (4)
- src/test/util_tests.cpp
- test/functional/p2p_dos_header_tree.py
- src/qt/rpcconsole.h
- test/functional/rpc_net.py
🚧 Files skipped from review as they are similar to previous changes (50)
- src/index/base.cpp
- src/test/coinstatsindex_tests.cpp
- src/test/util/setup_common.cpp
- src/net_processing.h
- src/qt/sendcoinsdialog.cpp
- src/qt/modaloverlay.h
- test/functional/p2p_unrequested_blocks.py
- src/Makefile.am
- src/test/fuzz/utxo_snapshot.cpp
- src/node/interface_ui.cpp
- src/pow.cpp
- src/node/interface_ui.h
- src/interfaces/node.h
- src/consensus/validation.h
- test/functional/feature_block.py
- src/test/miner_tests.cpp
- src/qt/sendcoinsdialog.h
- src/qt/informationwidget.h
- src/primitives/block.h
- src/test/validation_chainstate_tests.cpp
- src/rpc/net.cpp
- src/test/skiplist_tests.cpp
- src/rpc/mining.cpp
- src/test/evo_deterministicmns_tests.cpp
- src/test/fuzz/pow.cpp
- src/test/validation_block_tests.cpp
- src/Makefile.test.include
- src/node/interfaces.cpp
- src/test/util/mining.cpp
- src/qt/rpcconsole.cpp
- src/test/pow_tests.cpp
- test/functional/test_runner.py
- src/pow.h
- src/qt/bitcoin.cpp
- test/functional/rpc_blockchain.py
- src/qt/informationwidget.cpp
- src/chain.cpp
- src/test/blockfilter_index_tests.cpp
- src/qt/clientmodel.h
- src/qt/modaloverlay.cpp
- src/validation.h
- src/chain.h
- src/qt/clientmodel.cpp
- test/functional/p2p_invalid_messages.py
- src/headerssync.h
- src/qt/bitcoingui.cpp
- src/util/bitdeque.h
- test/functional/p2p_compactblocks.py
- src/validation.cpp
- src/net_processing.cpp
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6db8fdf81d
```cpp
int64_t smallest_timespan = params.nPowTargetTimespan/4;
int64_t largest_timespan = params.nPowTargetTimespan*4;
```
Enforce fixed-interval nBits in pre-KGW transition check
PermittedDifficultyTransition() is used by HeadersSyncState as an anti-DoS prefilter, but this implementation applies the ±4x bound to every pre-KGW height instead of only retarget boundaries. On BTC-style segments (height < nPowKGWHeight), consensus requires nBits to stay unchanged between adjustment intervals, so a peer can send PoW-valid headers with impossible in-interval nBits changes that pass this filter and consume presync/redownload resources until later contextual validation rejects them. Adding an interval check (same-nBits off-boundary) would restore the intended early rejection behavior.
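The interval rule suggested here can be sketched as follows. This is a simplified model, not the Dash/Bitcoin implementation: it works on linear targets rather than compact `nBits`, and the interval and the factor-of-four bound are taken from the review's description:

```cpp
#include <cstdint>

// Simplified pre-KGW transition rule: off a retarget boundary the target
// must be unchanged; on a boundary it may move by at most a factor of four
// either way (matching the smallest_timespan/largest_timespan clamp).
bool PermittedPreKgwTransition(int64_t height, int64_t interval,
                               uint64_t old_target, uint64_t new_target)
{
    if (height % interval != 0) return new_target == old_target; // fixed in-interval
    return new_target <= old_target * 4 && old_target <= new_target * 4;
}
```

With the off-boundary branch present, a PoW-valid header that changes the target mid-interval is rejected early instead of consuming presync/redownload resources.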
thepastaclaw left a comment
Code Review
Backport of Bitcoin's anti-DoS headers sync stack (PR 25717 + followups) is wired through net_processing, validation, compact-block handling, and Dash's HEADERS2/compressed-headers path consistently. Findings are all non-blocking: a pow_tests case that became trivially-true under Dash's KGW/DGW override, a stale docstring on PermittedDifficultyTransition, an int64→int narrowing in the Qt header-tip shim, and a stale comment in HeadersSyncState. One agent finding about MaybeSendSendHeaders was dropped as a false positive (matches upstream).
Reviewed commit: 6db8fdf
🟡 1 suggestion(s) | 💬 4 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/test/pow_tests.cpp`:
- [SUGGESTION] lines 77-79: PermittedDifficultyTransition negative test is trivially true under Dash KGW/DGW override
The comment says 'increasing nbits further would not be a PermittedDifficultyTransition' but the assertion is positive (no `!`). More importantly, `blockIndexLast->nHeight = 123456` is well past mainnet's `nPowKGWHeight = 15200`, and the Dash implementation of `PermittedDifficultyTransition` (src/pow.cpp:262) unconditionally returns true for `height >= nPowKGWHeight`. As a result, every `BOOST_CHECK(PermittedDifficultyTransition(...))` in this test (lines 52, 60, 65, 68, 72, 75, 79) passes regardless of the nBits arguments — the transition bound is never exercised. Either move the test setup to a height below `nPowKGWHeight` so the 4x bound is actually evaluated, or rewrite/remove this case and fix the misleading comment to reflect Dash behavior.
```cpp
// Test that increasing nbits further would not be a PermittedDifficultyTransition.
unsigned int invalid_nbits = expected_nbits+1;
BOOST_CHECK(PermittedDifficultyTransition(chainParams->GetConsensus(), blockIndexLast->nHeight+1, blockIndexLast->nBits, invalid_nbits));
```
🟡 Suggestion: PermittedDifficultyTransition negative test is trivially true under Dash KGW/DGW override
The comment says 'increasing nbits further would not be a PermittedDifficultyTransition' but the assertion is positive (no !). More importantly, blockIndexLast->nHeight = 123456 is well past mainnet's nPowKGWHeight = 15200, and the Dash implementation of PermittedDifficultyTransition (src/pow.cpp:262) unconditionally returns true for height >= nPowKGWHeight. As a result, every BOOST_CHECK(PermittedDifficultyTransition(...)) in this test (lines 52, 60, 65, 68, 72, 75, 79) passes regardless of the nBits arguments — the transition bound is never exercised. Either move the test setup to a height below nPowKGWHeight so the 4x bound is actually evaluated, or rewrite/remove this case and fix the misleading comment to reflect Dash behavior.
source: ['claude']
```cpp
/**
 * Return false if the proof-of-work requirement specified by new_nbits at a
 * given height is not possible, given the proof-of-work on the prior block as
 * specified by old_nbits.
 *
 * This function only checks that the new value is within a factor of 4 of the
 * old value for blocks at the difficulty adjustment interval, and otherwise
 * requires the values to be the same.
 *
 * Always returns true on networks where min difficulty blocks are allowed,
 * such as regtest/testnet.
 */
bool PermittedDifficultyTransition(const Consensus::Params& params, int64_t height, uint32_t old_nbits, uint32_t new_nbits);
```
💬 Nitpick: PermittedDifficultyTransition docstring omits Dash KGW/DGW always-true regime
The header docstring is verbatim from upstream Bitcoin Core and only mentions that the function returns true on networks with min-difficulty blocks (regtest/testnet). The Dash implementation also unconditionally returns true for height >= params.nPowKGWHeight (per-block retargeting under KGW/DGW), with detailed rationale in src/pow.cpp:236-256. Callers that only read the header (e.g. HeadersSyncState in src/headerssync.cpp:189,237) might assume the 4x bound is enforced post-KGW. Mirror a one-line summary of the Dash-specific behavior into the header to avoid future confusion.
source: ['claude']
```cpp
::uiInterface.NotifyHeaderTip_connect([fn](SynchronizationState sync_state, int64_t height, int64_t timestamp, bool presync) {
    fn(sync_state, BlockTip{(int)height, timestamp, uint256{}}, presync);
}));
```
💬 Nitpick: int64_t presync height narrowed to int when forwarded to BlockTip
handleNotifyHeaderTip receives int64_t height from the new presync signalling path and silently narrows it via BlockTip{(int)height, timestamp, uint256{}} (interfaces/node.h:547-551 defines block_height as int). Heights from HeadersSyncState::GetPresyncHeight() are bounded by m_max_commitments * HEADER_COMMITMENT_PERIOD, so overflow is not reachable in any near-term scenario, but the explicit narrowing cast hides the type mismatch and would silently produce garbage if BlockTip is widened or the bound grows. Consider widening BlockTip::block_height to int64_t (as upstream Bitcoin Core does) or at least asserting the value fits.
source: ['claude']
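The "at least assert the value fits" option can be sketched as a checked narrowing helper — a hypothetical function, not part of the codebase — so a future widening of the height bound fails loudly instead of silently truncating:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Returns true iff the 64-bit height is representable as int.
static bool FitsInInt(int64_t height)
{
    return height >= std::numeric_limits<int>::min() &&
           height <= std::numeric_limits<int>::max();
}

// Checked narrowing: aborts (in debug builds) on out-of-range values
// instead of producing implementation-defined garbage via (int)height.
static int NarrowHeightChecked(int64_t height)
{
    assert(FitsInInt(height));
    return static_cast<int>(height);
}
```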
```cpp
/** Hash of last header in m_redownloaded_headers (initialized to
 * m_chain_start). We have to cache it because we don't have hashPrevBlock
 * available in a CompressedHeader.
 */
uint256 m_redownload_buffer_last_hash;

/** The hashPrevBlock entry for the first header in m_redownloaded_headers
 * We need this to reconstruct the full header when it's time for
 * processing.
 */
uint256 m_redownload_buffer_first_prev_hash;
```
💬 Nitpick: Stale comment: m_redownload_buffer_last_hash is not initialized at construction
The comment claims m_redownload_buffer_last_hash is 'initialized to m_chain_start'. The constructor at src/headerssync.cpp:25-34 does not list this member (nor m_redownload_buffer_first_prev_hash) in its init list, so both are default-constructed to the null uint256. m_redownload_buffer_last_hash is later assigned m_chain_start->GetBlockHash() only on transition to REDOWNLOAD (src/headerssync.cpp:169). Either initialize the members in the constructor to match the comment, or update the comment to say 'set to m_chain_start->GetBlockHash() upon transition to REDOWNLOAD'.
source: ['claude']
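The mechanism behind this nitpick — a member omitted from a constructor's init list is default-constructed, not initialized to some other member's value — can be shown with a minimal stand-in (both types here are hypothetical, not the real `uint256` or `HeadersSyncState`):

```cpp
#include <cassert>

// Minimal stand-in for uint256: default construction zero-fills.
struct MiniUint256 {
    unsigned char data[32] = {}; // zero-filled by the default member init
    bool IsNull() const
    {
        for (unsigned char b : data) {
            if (b != 0) return false;
        }
        return true;
    }
};

// The constructor below omits m_last_hash from its init list, so the
// member is default-constructed (null) — it is NOT "initialized to
// m_chain_start" as a stale comment might claim.
struct SyncStateSketch {
    MiniUint256 m_last_hash; // absent from the init list
    int m_height;
    explicit SyncStateSketch(int height) : m_height(height) {}
};
```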
```python
def mocktime_all(self, time):
    for n in self.nodes:
        n.setmocktime(time)
```
💬 Nitpick: PR 30761 content folded into the 26184 merge commit without attribution
The PR title lists bitcoin#30761 (test: Avoid intermittent timeout in p2p_headers_sync_with_minchainwork.py) as a backport target, but there is no separate commit for it. Its changes (the import time, the mocktime_all() helper, and the mocktime_all(int(time.time())) ... mocktime_all(0) wrap around sync_blocks(timeout=300)) are present, but they were folded into the merge commit for bitcoin#26184 (6db8fdf) — upstream 26184 itself touches p2p_invalid_messages.py only. Backport content is functionally complete; this is purely a hygiene/attribution nit (30761's commit message and MarcoFalke authorship are lost).
source: ['claude']
Issue being fixed or feature implemented
Original 25717 description:
What was done?
Backports from Bitcoin Core v24, v25 with anti-DOS header sync
Bitcoin Core's original implementation provides 2 features:

- Anti-DoS difficulty checks (`PermittedDifficultyTransition`). This feature can't be backported directly due to allowed rapid changes in difficulty: difficulty is recalculated every block (not once per 2016 blocks as in Bitcoin), and it can change significantly over just a few blocks due to the DGW (Dark Gravity Wave) algorithm.

The perf optimization from #7272 should be applied to the pre-sync stage too; a new PR with it is coming (WIP, in debug & testing stage).
How Has This Been Tested?
Breaking Changes
N/A
But it changes the behavior of the Dash Core sync process noticeably.
Checklist: