
feat(dash-spv): cover gap-extension blind spot in filter sync via per-range backfill#135

Open
xdustinface wants to merge 16 commits into v0.42-dev from feat/sync-range-backfill

Conversation

@xdustinface
Owner

Summary

Closes #133.

Reframes filter scanning as a coverage problem with two independent watermarks. The existing wallet_synced_height keeps its monotonic forward-edge semantics. The new convergence_height is a strict watermark: the lowest height through which every currently-monitored address has been scanned. A BackfillWorker services pending AddressSyncRange obligations via a sweep-line scan that shares one disk read across overlapping ranges.

  • AddressPool gains sync_ranges: Vec<AddressSyncRange> and a lifecycle (push_sync_range, advance_caught_up_to, clamp_caught_up_to, collapse_adjacent_ranges); complete ranges drop on insert/advance so wallet state stays bounded regardless of historical gap-extension count.
  • WalletInterface gains wallet_convergence_height, wallets_pending_convergence, pending_rescans, advance_rescan, and on_chain_reorg. WalletInfoInterface gains convergence_height.
  • New WalletEvent variants: ConvergenceChanged (non-monotonic, equality-coalesced), RescanProgressed (advisory, per-chunk), RescanBlockProcessed (atomic — bundles tx records and the caught_up_to advance for atomic persistence).
  • BackfillWorker walks the union of [resume_from..=ceiling] height windows in chunks of BATCH_PROCESSING_SIZE, sharing a single check_compact_filters_for_addresses pass per chunk. Matched block hashes route through SyncEvent::BackfillBlocksNeeded and dedup against forward sync via the existing BlockMatchTracker.
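The two-watermark relationship can be sketched as below. The struct shape and the completeness rule (`caught_up_to + 1 >= since_height`, stated later in the commit notes) come from the PR text; the function body is an illustrative assumption, not the real implementation:

```rust
// Hypothetical sketch of the two-watermark model. `synced_height` is the
// monotonic forward edge; each pending range tracks contiguous backfill
// progress from birth upward via `caught_up_to`.
#[derive(Debug, Clone, Copy)]
struct SyncRange {
    caught_up_to: u32, // highest height contiguously scanned so far
    since_height: u32, // height at which the addresses joined the wallet
}

fn convergence_height(synced_height: u32, pending: &[SyncRange]) -> u32 {
    pending
        .iter()
        // a range is complete once caught_up_to + 1 >= since_height
        .filter(|r| r.caught_up_to + 1 < r.since_height)
        .map(|r| r.caught_up_to)
        .min()
        .map_or(synced_height, |worst| worst.min(synced_height))
}
```

With no pending ranges the two watermarks coincide; while a range is pending, convergence stays pinned at the worst range's progress even as the forward edge advances.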

Design notes

A few intentional deviations from the issue body, called out so the reviewer doesn't try to "fix" them:

  • Step 4 dropped (forward-sync range advance). The issue's Task 1.6 proposed crediting caught_up_to from forward sync when the batch end exceeds since_height. Analysis showed this corrupts the data model: caught_up_to represents contiguous backfill from birth_height upward, but forward sync at H >> since_height proves nothing about [birth..since_height-1] (those filters were originally scanned without these addresses). Crediting forward sync would silently re-create the original 9-output bug. The within-batch rescan in try_commit_batches already covers [batch_start..since_height-1]; the backfill worker is the only correct mechanism for [birth..batch_start-1]. Convergence stays below synced_height while a range is pending, then rises when backfill drops the range. No code change to try_commit_batches.
  • Sibling SyncEvent::BackfillBlocksNeeded instead of a BlockRequestSource discriminant on the existing BlocksNeeded. Payloads differ (advance metadata vs. interested-wallet sets), so a sibling variant is cleaner than carrying an unused field on the forward path.
  • Separate process_backfill_block_for_wallets method on WalletInterface instead of widening process_block_for_wallets with a BlockProcessingSource argument. Keeps the existing forward-sync signature stable across all callers and routes backfill emission to RescanBlockProcessed cleanly.
  • Polling-based backfill cadence (driven from SyncManager::tick) rather than a cross-component wake channel from maintain_gap_limit. The polling check is O(1) when pending_rescans() is empty. Cross-component wake is a follow-up perf optimization.
  • dash-spv reorg wiring deferred. Wallet-side on_chain_reorg and clamp_caught_up_to are in place and tested. No existing chain-reorg → wallet callback exists in dash-spv today; whoever adds chain-reorg detection in dash-spv next will wire wallet.on_chain_reorg(fork_height) from there.
  • Bincode + #[serde(default)] limitation. Bincode 2 does NOT honor #[serde(default)] for missing fields. A pre-existing on-disk bincode snapshot lacking sync_ranges would fail to decode. No such snapshots are known to exist; tests cover the current shape's bincode round-trip and the serde-JSON migration story.

Acceptance criteria status

  • cargo test --workspace clean (489 key-wallet, 45 key-wallet-manager, 432 dash-spv lib tests pass)
  • cargo clippy --all-features --all-targets -- -D warnings clean
  • Existing wallet snapshots load without forcing a rescan (synthesized empty sync_ranges round-trips via #[serde(default)])
  • wallet_synced_height API remains backwards-compatible
  • ConvergenceChanged and RescanProgressed events documented and exposed; RescanBlockProcessed added for atomicity
  • Regression test reproducing the 9-output blind-spot bug (backfill_recovers_missed_outputs_after_gap_limit_extension)
  • Reorg test confirming caught_up_to clamps and backfill re-picks the work
  • Migration test for legacy snapshots
  • Manual reproduction with crazy-task testnet wallet — out of scope for unit tests; needs human verification per the issue.

Test plan

  • Manual: load crazy-task wallet on testnet, verify all 10 outputs of 56bdab0a are matched after sync
  • CI green on push
  • Manki review

Commits (will be squash-merged)

  1. feat(key-wallet): add AddressSyncRange and sync_ranges to AddressPool
  2. feat(key-wallet-manager): add convergence_height and rescan API
  3. feat(key-wallet-manager): emit ConvergenceChanged and add RescanProgressed variant
  4. test(key-wallet): cover legacy snapshot loading without forced rescan
  5. feat(dash-spv): add backfill worker for pending sync ranges
  6. feat(key-wallet-manager): wallet-side chain-reorg sync range clamp
  7. feat(dash-spv): wire backfill matched-block dispatch end-to-end
  8. test: regression and integration coverage for sync range backfill
  9. chore: pr cleanup

…sPool`

Adds `AddressSyncRange` tracking per-index-range scan progress and a
`sync_ranges: Vec<AddressSyncRange>` field on `AddressPool`. Threads a
`since_height` argument through `maintain_gap_limit` and its callers so
newly derived addresses register a pending sync range with the chain
height at which they joined the wallet's monitored set.

Complete ranges (`caught_up_to + 1 >= since_height`) are dropped on
insert and on each `advance_caught_up_to` so wallet state stays bounded
regardless of how many gap extensions a wallet has accumulated.

Helper methods on `AddressPool` cover the lifecycle: `push_sync_range`,
`advance_caught_up_to`, `clamp_caught_up_to`, `collapse_adjacent_ranges`,
plus immutable and mutable accessors for the pending set.
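The lifecycle described above can be sketched as follows. Method names mirror the commit message, but the bodies are assumptions about behavior, not the real `key-wallet` code:

```rust
// Hypothetical sketch of the AddressSyncRange lifecycle on AddressPool.
#[derive(Debug, Clone, Copy)]
struct AddressSyncRange {
    since_height: u32, // height at which the addresses joined the monitored set
    caught_up_to: u32, // contiguous backfill progress from birth upward
}

impl AddressSyncRange {
    fn is_complete(&self) -> bool {
        self.caught_up_to + 1 >= self.since_height
    }
}

#[derive(Default)]
struct AddressPool {
    sync_ranges: Vec<AddressSyncRange>,
}

impl AddressPool {
    fn push_sync_range(&mut self, range: AddressSyncRange) {
        if !range.is_complete() {
            self.sync_ranges.push(range); // complete ranges are dropped on insert
        }
    }

    fn advance_caught_up_to(&mut self, idx: usize, new_height: u32) {
        let r = &mut self.sync_ranges[idx];
        r.caught_up_to = r.caught_up_to.max(new_height);
        self.sync_ranges.retain(|r| !r.is_complete()); // drop on advance
    }

    fn clamp_caught_up_to(&mut self, fork_height: u32) {
        for r in &mut self.sync_ranges {
            r.caught_up_to = r.caught_up_to.min(fork_height);
        }
    }
}
```

Because complete ranges drop on insert and on every advance, the pending set only ever holds live obligations, which is what keeps wallet state bounded.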

`wallet_checker.rs` derives `since_height` from the `TransactionContext`
(block height for `InBlock` / `InChainLockedBlock`, `last_processed_height + 1`
for `Mempool` / `InstantSend`). Platform-account paths
(`set_address_credit_balance`, `add_address_credit_balance`, the wallet
trait methods on `WalletPlatformChecker`) take `since_height` and pass
the wallet's `birth_height` as a conservative floor where no block
height is in scope.

`#[serde(default)]` on the new field plus `#[bincode(with_serde)]` on
the embedded `core::ops::Range<u32>` keeps existing snapshots loadable
without a separate migration step. Enables the `serde` feature on the
`bincode` dependency to support `with_serde`.
Adds `convergence_height()` to `WalletInfoInterface` returning the strict watermark (lowest of `synced_height` and the worst pending sync range's progress) for consumers needing "everything below this is final" semantics. Default impl walks all accounts and pools, so any `WalletInfoInterface` implementer gets it for free.

`WalletInterface` gains `wallet_convergence_height`, `wallets_pending_convergence`, `pending_rescans`, and `advance_rescan` to drive a backfill worker. Each method has a default implementation falling back to existing forward-edge semantics, so test mocks and other implementers keep compiling without changes. `WalletManager<T>` overrides them with the real per-pool walk.

`PendingRescan` carries the wallet ID, pool type, address slice, height window (`floor`, `ceiling`), and `resume_from` cursor so the backfill worker can pick up exactly where it left off.

`wallet_synced_height` keeps its existing monotonic semantics. The new query is purely additive.
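A minimal sketch of the `PendingRescan` work-item shape, assuming the fields named above; the `remaining` helper is an invented illustration of how the `resume_from` cursor bounds the outstanding work, not part of the PR's API:

```rust
// Hypothetical shape of the backfill work item. The real type carries the
// wallet ID, pool type, and address slice as well; those are elided here.
#[allow(dead_code)]
#[derive(Debug, Clone)]
struct PendingRescan {
    floor: u32,       // lowest height the range must cover
    ceiling: u32,     // highest height the range must cover
    resume_from: u32, // next unscanned height; the worker picks up exactly here
}

impl PendingRescan {
    /// Blocks still to scan; zero once the cursor passes the ceiling.
    fn remaining(&self) -> u32 {
        if self.resume_from > self.ceiling {
            0
        } else {
            self.ceiling - self.resume_from + 1
        }
    }
}
```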
…ogressed` variant

`ConvergenceChanged` reports a wallet's non-monotonic convergence watermark, dropping when a sync range is created and rising when one completes. `WalletManager::advance_rescan` and `finalize_block_advance` both emit it whenever the value actually changes, providing a degenerate single-step coalesce (no buffer); a buffered per-tick coalesce can ride along with the backfill worker in a later commit.

`RescanProgressed` is added to the enum so downstream consumers can begin matching it; the backfill worker will emit it per scanned filter chunk.

Doc-comments on `SyncHeightAdvanced` and `BlockProcessed` now state explicitly that neither implies "everything below is final" semantics, and that `BlockProcessed.height` is non-monotonic in this stream because within-batch rescans (and the upcoming backfill worker) can revisit older blocks.
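The single-step equality coalesce described above amounts to "emit only when the value actually changes". A sketch, with the event modeled as a bare `u32` and `last_emitted` standing in for per-wallet state (both assumptions):

```rust
// Hypothetical sketch of the equality-coalesced, non-monotonic emission:
// the watermark may rise or fall, but equal consecutive values are dropped.
fn coalesce_emit(last_emitted: &mut Option<u32>, new_value: u32, events: &mut Vec<u32>) {
    if *last_emitted != Some(new_value) {
        events.push(new_value); // would be WalletEvent::ConvergenceChanged
        *last_emitted = Some(new_value);
    }
}
```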
Validates the migration story for `sync_ranges`: a pool serialized without the field deserializes with an empty vector via `#[serde(default)]`, a current-shape pool round-trips cleanly through bincode, and a wallet whose pools have no pending sync ranges reports `convergence_height` equal to `synced_height`. Together these prove existing snapshots load without triggering a backfill obligation.

Bincode 2 does not honor `#[serde(default)]` for missing fields, so the bincode round-trip test only covers the current shape. A pre-existing bincode snapshot lacking `sync_ranges` would need manual migration, deferred until such a snapshot is encountered in practice.
Adds a `BackfillWorker` that walks the union of pending sync range height windows in chunks of 5000 filters, sharing a single filter-disk read across overlapping ranges so cost scales with how far back the work extends rather than with the historical gap-extension count. Chunks with no matches advance every active range's `caught_up_to` directly, so a range with no historical activity completes and drops in one pass. Chunks with matches record per-block obligations in `pending_advances` and the matched block hashes are returned for the caller to dispatch via the existing block-needed channel.

`WalletEvent::RescanBlockProcessed` carries transaction records and the `caught_up_to` advance in one event so a downstream persister writes both atomically. The persister contract finalisation (forking forward sync's `BlockProcessed` from backfill's `RescanBlockProcessed` end-to-end) is left for the integration commit that completes the dispatch path.

The worker is held as a field on `FiltersManager` and driven from `SyncManager::tick`; matched-block dispatch via `BlocksNeeded` and the bundled event emission are intentionally deferred. The no-match fast path already advances ranges that have no historical txs in their window, which is the common case for typical wallets and exercises every code path in the worker.
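The chunked sweep over the union window can be sketched as below. `CHUNK` stands in for the real 5000-filter batch size, and the `checked_add` guard on the cross-chunk step is an illustrative hardening (the review later flags the unguarded `end + 1`); everything else is assumed structure, not the actual worker:

```rust
// Hypothetical sketch of the sweep-line chunk walk. Each yielded chunk
// corresponds to one shared filter read that all active ranges consume.
const CHUNK: u32 = 5; // stands in for the 5000-filter chunk size in the PR

fn sweep(global_min: u32, global_max: u32) -> Vec<(u32, u32)> {
    let mut chunks = Vec::new();
    let mut chunk_start = global_min;
    while chunk_start <= global_max {
        let chunk_end = chunk_start.saturating_add(CHUNK - 1).min(global_max);
        chunks.push((chunk_start, chunk_end)); // one disk read per chunk
        // guard the cross-chunk step instead of a bare `chunk_end + 1`
        match chunk_end.checked_add(1) {
            Some(next) => chunk_start = next,
            None => break,
        }
    }
    chunks
}
```

Cost scales with `global_max - global_min`, i.e. how far back the work extends, rather than with the number of overlapping ranges.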
Adds `WalletInterface::on_chain_reorg(fork_height)` (default no-op so test mocks keep compiling) and a `WalletManager` override that walks every wallet's pools and calls `pool.clamp_caught_up_to(fork_height)`. Sync ranges that progressed past the fork are pulled back so the backfill worker re-covers the affected window. Emits `ConvergenceChanged` for any wallet whose convergence drops as a result.

Wiring the call from `dash-spv`'s reorg-confirmed path is deferred. Reorg detection in `dash-spv/src/chain/` and `dash-spv/src/sync/block_headers/` is not currently wallet-aware (no existing path from header-reorg detection into wallet state), so adding that integration is a separate piece of work outside this commit's scope.
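How the clamp re-exposes work can be sketched in isolation, assuming the worker derives its cursor as `caught_up_to + 1` (an assumption consistent with the tests described later):

```rust
// Hypothetical sketch: clamping progress at the fork height makes the next
// pending_rescans() report resume_from = fork_height + 1, so the backfill
// worker re-covers the reorged window with no extra wiring.
#[derive(Debug)]
struct RangeProgress {
    caught_up_to: u32,
}

impl RangeProgress {
    fn clamp_caught_up_to(&mut self, fork_height: u32) {
        self.caught_up_to = self.caught_up_to.min(fork_height);
    }

    fn resume_from(&self) -> u32 {
        self.caught_up_to + 1
    }
}
```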
Backfill matched-block hashes now flow through a new `SyncEvent::BackfillBlocksNeeded` carrying per-block `BackfillAdvance` obligations. `BlocksManager` reacts to the event by queuing the blocks through its existing pipeline (so `BlockMatchTracker` deduplicates them against forward sync) and recording the advance map keyed by block hash. When a block whose hash is in that map arrives, `BlocksManager::process_buffered_blocks` routes through `WalletInterface::process_backfill_block_for_wallets` instead of `process_block_for_wallets`.

The new wallet-side method projects per-wallet records, balance diffs, and address derivations like the forward path but emits `WalletEvent::RescanBlockProcessed` (one per advance entry) so a downstream persister writes the records and the `caught_up_to` advance atomically. `last_processed_height` is intentionally left untouched for backfill blocks since they live below the wallet's forward edge.

`BlocksManager` continues to emit `BlockProcessed` for backfill blocks too so `FiltersManager`'s in-flight bookkeeping stays consistent. The backfill worker's `on_block_processed` is now a lightweight pending-set clear since the wallet path performs the advance. `MultiMockWallet` gains `push_sync_range_for_test`, a real `pending_rescans` / `advance_rescan` implementation backed by a per-wallet `MockPendingRange` vector, and a `process_backfill_block_for_wallets` impl so backfill tests exercise the full path.

Includes a `backfill::tests` integration test that builds storage with one matching filter at height 50, runs `tick`, and verifies the matched obligation, advance bound, and `on_block_processed` clearing.
Adds four tests across `dash-spv` and `key-wallet-manager` covering acceptance criteria for the sync range backfill machinery.

`backfill_recovers_missed_outputs_after_gap_limit_extension` reproduces the original `crazy-task` blind-spot: a block at H1 paying 10 outputs to addresses at indexes 0 and 32-40, where forward sync at H1 only matched index 0 because indexes 32-40 hadn't been derived yet. After a later gap-limit extension pushes the missed indexes onto a pending sync range, `BackfillWorker::tick` discovers all 9 missed outputs by scanning the H1 filter against the post-extension address set and returns the matched-block obligation with the correct index window.

`rescan_block_processed_bundles_advance_in_a_single_event` confirms `WalletEvent::RescanBlockProcessed` carries `wallet_id`, `pool`, `indexes`, and `advance_to` in one payload so a downstream persister can write tx records and the `caught_up_to` advance atomically. Asserts that no `BlockProcessed` event fires for the same backfill block, preventing the persister from double-writing.

`on_chain_reorg_re_exposes_clamped_window_via_pending_rescans` proves the wallet-side reorg integration: after `on_chain_reorg(F)` clamps a range that had progressed past F, the next `pending_rescans()` reports `resume_from = F+1` for that range. The backfill worker reads `pending_rescans` fresh each tick, so the clamp surfaces directly as new work without any extra wiring.

`test_convergence_changed_rises_step_by_step_across_multi_range_completion` covers multi-range coalescing: completing a range whose progress is still bounded by another pending range emits no `ConvergenceChanged` (the value is unchanged), then completing the second range emits exactly one event with the full `synced_height`.
Moves two `use key_wallet::managed_account::address_pool::AddressSyncRange;` statements out of `#[tokio::test]` function bodies in `key-wallet-manager/src/process_block.rs` and folds the import into the existing `mod tests` import block, satisfying the project's "imports at module top" rule.
@manki-review

manki-review Bot commented May 8, 2026

Manki — Review complete

Planner (15s)
    feature · 2232 lines · 4 agents
    review effort: medium · judge effort: high

Review — 25 findings
    ✅ Security & Safety — 4 (428s)
    ✅ Architecture & Design — 7 (247s)
    ✅ Correctness & Logic — 6 (383s)
    ✅ Testing & Coverage — 8 (217s)

Judge — 16 kept · 0 dropped (74s)
    kept: 1 blocker · 5 warning · 6 suggestion · 4 nitpick

Review metadata

Config:

  • Models: reviewer=claude-sonnet-4-6, judge=claude-opus-4-7
  • Review level: large (auto, 2232 lines)
  • Team: Security & Safety, Architecture & Design, Correctness & Logic, Testing & Coverage
  • Memory: disabled
  • Nit handling: issues

Judge decisions:

  • ✓ Kept: "Silent missed filter checks from zip truncation in load_filters" (blocker, high confidence) — "Merged findings 1 and 17. Impact: High (silent loss of transactions at skipped heights, and per Finding 1 advance_rescan can mark unscanned heights done); Likelihood: Possible (requires storage inconsistency or partial write). Two reviewers flagged independently. The zip-truncation pattern combined with downstream advance_rescan creates a permanent data-loss path; warrants explicit length check + error."
  • ✓ Kept: "No-match advance within a single tick can complete a range before its in-flight matched block is processed" (warning, medium confidence) — "Impact: High (lost matched-block records on crash between chunks); Likelihood: Possible (requires a later chunk in same tick with no match for that range plus a crash before the in-flight block arrives). Concrete scenario shown; concerns the documented 'records + advance = atomic' invariant."
  • ✓ Kept: "Repeated tick() while blocks are in-flight accumulates duplicate pending_advances and risks double-processing" (warning, high confidence) — "Merged findings 3 and 14. Impact: Medium-High (duplicate advance_rescan calls, possible duplicate BlockProcessed/event emission, plus re-download); Likelihood: Probable (tick fires on every batch while the block downloads). Two reviewers flagged independently."
  • ✓ Kept: "end + 1 arithmetic is unguarded in load_filters" (nitpick, high confidence) — "Impact: Medium (panic in debug, silent inversion in release); Likelihood: Unlikely (block heights nowhere near u32::MAX). Single reviewer; theoretical concern only."
  • ✓ Kept: "BackfillWorker::pending_advances leaks entries on failed or cancelled block downloads" (warning, medium confidence) — "Impact: Medium (unbounded growth + abandoned matches that never re-request); Likelihood: Probable under disconnect cycles. Long-running peer behavior makes this a realistic concern."
  • ✓ Kept: "on_block_processed is declared async but performs no async work" (nitpick, high confidence) — "Impact: Low (unnecessary Future overhead, misleading signature); Likelihood: Certain. Style/API hygiene only."
  • ✓ Kept: "PendingAdvance and BackfillAdvance are near-identical structs with a mechanical projection between them" (suggestion, high confidence) — "Refactor proposal; design preference, not a defect."
  • ✓ Kept: "RescanProgressed WalletEvent variant is defined but never emitted anywhere in this diff" (warning, high confidence) — "Merged findings 10 and 19. Impact: Medium (public API surface that does nothing — consumers wiring progress UI will see silence); Likelihood: Certain. Two reviewers flagged. Either emit or remove from this PR."
  • ✓ Kept: "active filter includes already-completed sync ranges (resume_from > ceiling)" (warning, medium confidence) — "Impact: Medium (stale PendingAdvance creation, redundant advance_rescan, possible spurious events); Likelihood: Possible (when resume_from > ceiling sneaks past the global min/max guard). Logic-error class concern with simple fix."
  • ✓ Kept: "Weak advance_to bound: <= should be == to catch off-by-one bugs" (nitpick, high confidence) — "Test hygiene; replacing <= with == strengthens existing tests."
  • ✓ Kept: "No unit test for BlocksManager backfill dispatch (backfill_advances HashMap, forked processing path)" (suggestion, high confidence) — "Coverage gap on a load-bearing fork (backfill_advances.remove → backfill vs forward path). Test would lock in invariants flagged elsewhere."
  • ✓ Kept: "push_external_sync_range helper adds sync ranges to every external pool across all account types" (nitpick, medium confidence) — "Test brittleness when default wallet fixture grows new account types. Single reviewer, easy fix."
  • ✓ Kept: "RescanBlockProcessed test does not verify inserted records or balance" (suggestion, high confidence) — "Coverage gap — the atomic 'records + advance' invariant the variant exists to enforce isn't actually checked. Reasonable improvement."
  • ✓ Kept: "No multi-chunk backfill test: the while chunk_start <= global_max loop is never exercised" (suggestion, high confidence) — "All tests fit in one chunk (≤201 vs BACKFILL_CHUNK_SIZE=5000). Bugs in chunk boundary arithmetic — relevant to several blocker findings — would be invisible."
  • ✓ Kept: "No test for convergence decreasing when a new sync range is added" (suggestion, high confidence) — "PR explicitly documents non-monotonic behavior; absence of a test for the decreasing case is a coverage gap. Easy to add."
  • ✓ Kept: "check_compact_filters_for_addresses receives u32::MAX when chunk_start = 0" (suggestion, medium confidence) — "Saturating_sub(1) at zero is a known footgun; behavior of the called API is the deciding factor. Worth documenting and adding a boundary test."

Timing:

  • Parse: 1.5s
  • Review agents: 445.3s
  • Judge: 73.6s
  • Total: 520.3s

manki-review Bot previously requested changes May 8, 2026

@manki-review manki-review Bot left a comment


Real correctness bugs in the backfill worker: chunk_end is used as the advance watermark without clamping to per-range ceilings or per-block heights, opening multiple data-loss scenarios on restart and across multi-range chunks. Also a public RescanProgressed event with no emitter and a backfill/forward-sync routing fork that can drop forward-sync confirmations.

📊 16 findings (1 blocker, 5 warning, 6 suggestion, 4 nitpick) · 2232 lines · 520s

Review stats
{
  "model": "claude-sonnet-4-6",
  "reviewTimeMs": 520345,
  "diffLines": 2232,
  "diffAdditions": 2173,
  "diffDeletions": 59,
  "filesReviewed": 21,
  "agents": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic",
    "Testing & Coverage"
  ],
  "findingsRaw": 25,
  "findingsKept": 16,
  "findingsDropped": 9,
  "severity": {
    "blocker": 1,
    "warning": 5,
    "suggestion": 6,
    "nitpick": 4
  },
  "verdict": "REQUEST_CHANGES",
  "prNumber": 135,
  "commitSha": "e7f72249b20e266ae76687a1654878586e79a64e",
  "agentMetrics": [
    {
      "name": "Security & Safety",
      "findingsRaw": 4,
      "findingsKept": 4,
      "responseLength": 5152
    },
    {
      "name": "Architecture & Design",
      "findingsRaw": 7,
      "findingsKept": 4,
      "responseLength": 8604
    },
    {
      "name": "Correctness & Logic",
      "findingsRaw": 6,
      "findingsKept": 3,
      "responseLength": 8053
    },
    {
      "name": "Testing & Coverage",
      "findingsRaw": 8,
      "findingsKept": 8,
      "responseLength": 9457
    }
  ],
  "judgeMetrics": {
    "confidenceDistribution": {
      "high": 11,
      "medium": 5,
      "low": 0
    },
    "severityChanges": 16,
    "mergedDuplicates": 9,
    "verdictReason": "required_present"
  },
  "fileMetrics": {
    "fileTypes": {
      ".rs": 20,
      ".toml": 1
    },
    "findingsPerFile": {
      "dash-spv/src/sync/filters/backfill.rs": 12,
      "key-wallet-manager/src/events.rs": 1,
      "dash-spv/src/sync/blocks/manager.rs": 1,
      "key-wallet-manager/src/event_tests.rs": 2
    }
  },
  "reviewerModel": "claude-sonnet-4-6",
  "judgeModel": "claude-opus-4-7"
}

`dash-spv-ffi::FFISyncEventCallbacks::dispatch` and `FFIWalletEventCallbacks::dispatch` add no-op arms for the new `SyncEvent::BackfillBlocksNeeded`, `WalletEvent::ConvergenceChanged`, `WalletEvent::RescanProgressed`, and `WalletEvent::RescanBlockProcessed` variants. Cross-FFI emission of these advisory backfill events is deferred to a follow-up that adds the corresponding C callbacks.

Doc-comment intra-doc links inside `WalletEvent` variants now use `Self::SyncHeightAdvanced`, `Self::ConvergenceChanged`, and `Self::BlockProcessed` so they resolve under `cargo doc -D warnings`. The `convergence_height` link target in events.rs picks up the missing `wallet_info_interface::` module segment.

`cargo fmt --all` reflowed several touched files where the prior agents had committed pre-format whitespace. No functional changes.

The MSRV 1.89 and Address Sanitizer failures were cascading compile errors from the non-exhaustive FFI dispatch matches; both go green with the no-op arms above.
Address several manki review findings on the backfill worker:

- `load_filters` now returns `SyncError::Storage` when `filter_storage.load_filters` and `header_storage.load_headers` return mismatched lengths. Previously `iter().zip(iter())` silently dropped the tail and the caller advanced `caught_up_to` past unscanned heights, permanently masking any transactions there.
- `matched_range_keys` is hoisted out of the per-chunk loop so a range that matched in an earlier chunk is excluded from no-match advance in every later chunk too. Without this, a crash between a later-chunk advance and the earlier matched block's arrival could leave the range complete-and-dropped while its matched-block records are unprocessed.
- The push into `pending_advances` now skips entries whose `(wallet_id, pool, indexes)` tuple is already present for the same block hash. Repeated `tick()` calls while a block is in flight no longer accumulate duplicate obligations or trigger spurious re-downloads through `BackfillBlocksNeeded`.
- `prune_pending_advances` runs at the top of each `tick` to drop entries whose corresponding sync range is no longer pending or has advanced past the matched block's height. This bounds the map under reorg churn, gap-extension churn, and download cancellations.
- The `active` filter for the current chunk now requires `r.resume_from <= r.ceiling`, excluding already-completed ranges that the prune pass might still expose for a tick.
- `PendingAdvance` is consolidated into `BackfillAdvance` with a new `height` field, eliminating a near-duplicate type and the mechanical projection between them.
- Documents the `chunk_start.saturating_sub(1)` lower-boundary contract for `check_compact_filters_for_addresses` when `chunk_start == 0`.
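The length guard in the first bullet can be sketched as below; the function name and `String` error are stand-ins (the real code returns `SyncError::Storage`), but the zip-truncation hazard it closes is exactly as described:

```rust
// Hypothetical sketch of the mismatch guard: zipping unequal-length vectors
// silently drops the tail, so lengths are checked before pairing filters
// with their headers.
fn pair_filters_with_headers<F, H>(
    filters: Vec<F>,
    headers: Vec<H>,
) -> Result<Vec<(F, H)>, String> {
    if filters.len() != headers.len() {
        // real code returns SyncError::Storage here
        return Err(format!(
            "storage mismatch: {} filters vs {} headers",
            filters.len(),
            headers.len()
        ));
    }
    Ok(filters.into_iter().zip(headers).collect())
}
```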
`RescanProgressed` was added to the `WalletEvent` enum in an earlier commit but never emitted, leaving consumers that subscribed for backfill progress notifications silent. `WalletManager::advance_rescan` now fires it per call whose new `caught_up_to` value is strictly greater than the prior one, with `target = since_height - 1` so consumers can show "X / Y blocks remaining" without re-deriving the range's ceiling. The chunk-by-chunk backfill worker drives this naturally because it issues one `advance_rescan` call per active range per scanned chunk.

No-op advances (a stale chunk-end already covered, or a range that was just dropped) skip the emission so consumers don't see redundant events.
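The emission rule above reduces to "fire only on a strict advance, with a fixed target". A sketch, modeling the event as a `(progress, target)` tuple (an assumption; the real payload is a `WalletEvent` variant):

```rust
// Hypothetical sketch of the RescanProgressed emission condition:
// emit only when caught_up_to strictly advances, with target set to
// since_height - 1 so consumers can compute blocks remaining.
fn maybe_emit_progress(
    prior_caught_up_to: u32,
    new_caught_up_to: u32,
    since_height: u32,
) -> Option<(u32, u32)> {
    if new_caught_up_to > prior_caught_up_to {
        Some((new_caught_up_to, since_height - 1)) // (progress, target)
    } else {
        None // no-op advance: skip the redundant event
    }
}
```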
The existing backfill tests use heights 0-201, all comfortably inside one `BACKFILL_CHUNK_SIZE=5000` chunk. Adds a no-match test for a sync range whose backfill window spans more than one chunk, exercising the cross-chunk `chunk_start = chunk_end + 1` step and the no-match advance accumulation across chunks.
@manki-review

manki-review Bot commented May 8, 2026

Manki — Review complete

Planner (27s)
    feature · 2324 lines · 4 agents
    review effort: medium · judge effort: medium

Review — 18 findings
    ✅ Security & Safety — 2 (265s)
    ✅ Architecture & Design — 6 (206s)
    ✅ Correctness & Logic — 5 (974s)
    ✅ Testing & Coverage — 5 (137s)

Judge — 18 kept · 0 dropped (51s)
    kept: 3 blocker · 6 warning · 7 suggestion · 2 nitpick

Review metadata

Config:

  • Models: reviewer=claude-sonnet-4-6, judge=claude-opus-4-7
  • Review level: large (auto, 2324 lines)
  • Team: Security & Safety, Architecture & Design, Correctness & Logic, Testing & Coverage
  • Memory: disabled
  • Nit handling: issues

Judge decisions:

  • ✓ Kept: "Comment falsely claims 0u32.saturating_sub(1) == u32::MAX" (warning, high confidence) — "Impact: Medium (misleading comment about observable behavior, possible off-by-one if intent was wrong), Likelihood: Possible. Single reviewer but technically accurate — saturating_sub clamps at 0."
  • ✓ Kept: "BlocksManager::backfill_advances leaks on download failure or cancellation" (warning, high confidence) — "Impact: Medium (unbounded memory growth, possible misrouting on retry), Likelihood: Probable under network failures. Two reviewers flagged variants of this issue."
  • ✓ Kept: "BlocksManager.backfill_advances has no reorg cleanup path" (warning, medium confidence) — "Impact: Medium (stale entries persist across reorgs), Likelihood: Possible. Related to finding 2 but distinct in citing reorg specifically."
  • ✓ Kept: "RescanBlockProcessed silently dropped at FFI boundary breaks atomic-persistence contract" (blocker, high confidence) — "Impact: High (the documented atomicity guarantee is unenforceable at the FFI boundary, the only place it matters for mobile consumers), Likelihood: Certain on every backfill block. The whole design rationale collapses without this dispatch."
  • ✓ Kept: "Default process_backfill_block_for_wallets emits BlockProcessed, not RescanBlockProcessed, breaking atomicity for default implementors" (warning, medium confidence) — "Impact: High (default emits wrong event, breaking dedup invariants), Likelihood: Possible (only affects implementors who don't override). Footgun-class issue but documented in trait comment."
  • ✓ Kept: "backfill_tick runs inline on every filter-stored event, coupling backfill latency into forward-sync hot path" (suggestion, medium confidence) — "Impact: Medium (latency coupling), Likelihood: Probable under large pending ranges. Author already acknowledges as placeholder."
  • ✓ Kept: "BlocksManager.backfill_advances is a second, uncoordinated source of truth alongside BackfillWorker.pending_advances" (suggestion, medium confidence) — "Impact: Medium (design smell, divergence risk), Likelihood: Possible. Architectural improvement."
  • ✓ Kept: "push_external_sync_range test helper mutates every External pool in the wallet, not only BIP44-0" (nitpick, medium confidence) — "Impact: Low (test fragility), Likelihood: Possible. Test-only concern."
  • ✓ Kept: "Forward-sync wallets silently skipped when block appears in both BlocksNeeded and BackfillBlocksNeeded" (blocker, high confidence) — "Impact: Critical (silent missed wallet processing — the wallet's caught_up_to advances past a block whose transactions were never scanned for it), Likelihood: Probable when wallets have overlapping ranges. Exactly the class of silent-miss bug this PR claims to fix."
  • ✓ Kept: "pending_advances permanently stalls a range after a failed block download" (blocker, high confidence) — "Impact: High (range cannot advance until reorg or restart, defeating the PR's purpose), Likelihood: Probable under any download failure. Previously open finding (PRRT_kwDOQSlaXs6AwfJv) still unaddressed."
  • ✓ Kept: "prune_pending_advances HashMap silently drops duplicate range keys, causing incorrect pruning decisions" (warning, medium confidence) — "Impact: High (silent data corruption in pruning decisions), Likelihood: Possible after reorg. Single reviewer but technically valid."
  • ✓ Kept: "RescanProgressed WalletEvent variant is defined but never emitted" (warning, high confidence) — "Impact: Medium (dead variant, consumers waiting for progress receive nothing), Likelihood: Certain. Previously open and remains unaddressed."
  • ✓ Kept: "RescanBlockProcessed test does not verify inserted records or balance" (suggestion, high confidence) — "Impact: Medium (regression in the atomicity guarantee would not be caught), Likelihood: Possible. Previously open and remains unaddressed."
  • ✓ Kept: "Weak advance_to assertion allows any value below ceiling" (suggestion, high confidence) — "Impact: Medium (test misses regressions), Likelihood: Possible. Test quality concern."
  • ✓ Kept: "Multi-chunk filter match path (match in chunk 2) is untested" (suggestion, medium confidence) — "Impact: Low-Medium (coverage gap), Likelihood: Possible. Test improvement."
  • ✓ Kept: "load_filters storage-mismatch error path has no test" (suggestion, medium confidence) — "Impact: Low (defensive guard), Likelihood: Possible. Test improvement."
  • ✓ Kept: "prune_pending_advances stale-entry scenario not directly tested" (suggestion, medium confidence) — "Impact: Medium (regression risk), Likelihood: Possible. Test improvement."
  • ✓ Kept: "Idempotency test exercises dropped-range no-op rather than true repeated-advance" (nitpick, high confidence) — "Impact: Low (test-name vs. behavior mismatch), Likelihood: Certain. Test quality."

Timing:

  • Parse: 3.6s
  • Review agents: 1004.2s
  • Judge: 50.6s
  • Total: 1058.4s

@manki-review manki-review Bot dismissed their stale review May 8, 2026 23:20

Superseded by new review

@codecov-commenter

codecov-commenter commented May 8, 2026

Codecov Report

❌ Patch coverage is 82.98755% with 205 lines in your changes missing coverage. Please review.
✅ Project coverage is 71.52%. Comparing base (05b14a8) to head (dfc89db).

Files with missing lines Patch % Lines
key-wallet-manager/src/process_block.rs 67.83% 110 Missing ⚠️
key-wallet-manager/src/events.rs 0.00% 28 Missing ⚠️
key-wallet-manager/src/wallet_interface.rs 0.00% 20 Missing ⚠️
dash-spv/src/sync/filters/backfill.rs 96.76% 12 Missing ⚠️
...et/src/managed_account/managed_platform_account.rs 58.62% 12 Missing ⚠️
...allet/src/transaction_checking/platform_checker.rs 80.55% 7 Missing ⚠️
dash-spv/src/sync/events.rs 0.00% 6 Missing ⚠️
dash-spv-ffi/src/callbacks.rs 91.66% 5 Missing ⚠️
key-wallet/src/managed_account/address_pool.rs 98.71% 3 Missing ⚠️
dash-spv/src/sync/blocks/manager.rs 94.11% 2 Missing ⚠️
Additional details and impacted files
@@              Coverage Diff              @@
##           v0.42-dev     #135      +/-   ##
=============================================
- Coverage      71.64%   71.52%   -0.12%     
=============================================
  Files            321      322       +1     
  Lines          70484    71652    +1168     
=============================================
+ Hits           50501    51252     +751     
- Misses         19983    20400     +417     
Flag Coverage Δ
core 76.68% <ø> (ø)
ffi 45.19% <91.66%> (-2.21%) ⬇️
rpc 20.00% <ø> (ø)
spv 87.66% <95.34%> (+0.20%) ⬆️
wallet 70.05% <74.82%> (+0.20%) ⬆️
Files with missing lines Coverage Δ
dash-spv/src/sync/blocks/sync_manager.rs 83.33% <100.00%> (+0.47%) ⬆️
dash-spv/src/sync/filters/manager.rs 97.31% <100.00%> (+0.03%) ⬆️
dash-spv/src/sync/filters/sync_manager.rs 100.00% <ø> (ø)
key-wallet-manager/src/lib.rs 75.25% <ø> (ø)
...-wallet/src/transaction_checking/wallet_checker.rs 99.21% <ø> (ø)
...allet/managed_wallet_info/wallet_info_interface.rs 85.63% <100.00%> (+2.40%) ⬆️
dash-spv/src/sync/blocks/manager.rs 96.11% <94.11%> (-0.48%) ⬇️
key-wallet/src/managed_account/address_pool.rs 74.53% <98.71%> (+7.59%) ⬆️
dash-spv-ffi/src/callbacks.rs 81.55% <91.66%> (+1.41%) ⬆️
dash-spv/src/sync/events.rs 53.06% <0.00%> (-7.41%) ⬇️
... and 6 more

... and 17 files with indirect coverage changes


manki-review[bot]
manki-review Bot previously requested changes May 8, 2026

@manki-review manki-review Bot left a comment


Previous load_filters truncation and multi-chunk gaps are addressed, but pending_advances still strands a range on download failure and the FFI dispatch silently drops RescanBlockProcessed — breaking the atomic-persistence contract for the only consumers it was designed for. New blocker: when a block hash appears in both forward-sync and backfill, the forward-sync wallet set is silently skipped.

📊 18 findings (3 blocker, 6 warning, 7 suggestion, 2 nitpick) · 2324 lines · 1058s

Review stats
{
  "model": "claude-sonnet-4-6",
  "reviewTimeMs": 1058440,
  "diffLines": 2324,
  "diffAdditions": 2265,
  "diffDeletions": 59,
  "filesReviewed": 22,
  "agents": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic",
    "Testing & Coverage"
  ],
  "findingsRaw": 18,
  "findingsKept": 18,
  "findingsDropped": 0,
  "severity": {
    "blocker": 3,
    "warning": 6,
    "suggestion": 7,
    "nitpick": 2
  },
  "verdict": "REQUEST_CHANGES",
  "prNumber": 135,
  "commitSha": "4df0888318cd58dd07648c295ad81a39260abbf5",
  "agentMetrics": [
    {
      "name": "Security & Safety",
      "findingsRaw": 2,
      "findingsKept": 2,
      "responseLength": 3105
    },
    {
      "name": "Architecture & Design",
      "findingsRaw": 6,
      "findingsKept": 6,
      "responseLength": 8840
    },
    {
      "name": "Correctness & Logic",
      "findingsRaw": 5,
      "findingsKept": 5,
      "responseLength": 6701
    },
    {
      "name": "Testing & Coverage",
      "findingsRaw": 5,
      "findingsKept": 5,
      "responseLength": 4387
    }
  ],
  "judgeMetrics": {
    "confidenceDistribution": {
      "high": 9,
      "medium": 9,
      "low": 0
    },
    "severityChanges": 18,
    "mergedDuplicates": 0,
    "verdictReason": "required_present"
  },
  "fileMetrics": {
    "fileTypes": {
      ".rs": 21,
      ".toml": 1
    },
    "findingsPerFile": {
      "dash-spv/src/sync/filters/backfill.rs": 8,
      "dash-spv/src/sync/blocks/manager.rs": 4,
      "dash-spv-ffi/src/callbacks.rs": 1,
      "key-wallet-manager/src/wallet_interface.rs": 1,
      "dash-spv/src/sync/filters/sync_manager.rs": 1,
      "key-wallet-manager/src/event_tests.rs": 2,
      "key-wallet-manager/src/events.rs": 1
    }
  },
  "reviewerModel": "claude-sonnet-4-6",
  "judgeModel": "claude-opus-4-7"
}

Comment thread dash-spv/src/sync/filters/backfill.rs
Comment thread dash-spv/src/sync/blocks/manager.rs
Comment thread dash-spv/src/sync/blocks/manager.rs
Comment thread dash-spv-ffi/src/callbacks.rs
Comment thread key-wallet-manager/src/wallet_interface.rs Outdated
.filter(|e| matches!(e, key_wallet_manager::WalletEvent::RescanBlockProcessed { .. }))
.collect();
assert_eq!(
rescan_events.len(),

🔴 ✨ Suggestion: RescanBlockProcessed test does not verify inserted records or balance

The rescan_block_processed_bundles_advance_in_a_single_event test pattern-matches on RescanBlockProcessed using .. to skip the inserted and balance fields. The PR's stated atomicity guarantee is that records and the caught_up_to advance land in a single event so a persister can write them together; leaving inserted and balance unchecked means a regression where the block is processed but records are silently dropped would pass the test. This was previously open and remains unaddressed.

Suggested fix
Suggested change
rescan_events.len(),
Assert the expected `inserted` record count and balance inside the match:
```rust
WalletEvent::RescanBlockProcessed { inserted, advance_to, .. } => {
assert_eq!(inserted.len(), 1, "expected one inserted record");
assert_eq!(inserted[0].txid, expected_txid);
assert_eq!(*advance_to, 49);
}
```
AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 525,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Correctness & Logic"
  ],
  "title": "RescanBlockProcessed test does not verify inserted records or balance",
  "fix": "Assert the expected `inserted` record count and balance inside the match:\n```rust\nWalletEvent::RescanBlockProcessed { inserted, advance_to, .. } => {\n    assert_eq!(inserted.len(), 1, \"expected one in",
  "reachability": "reachable"
}

MockWalletState {
addresses: vec![addr_zero.clone()],
synced_height: 200,
last_processed_height: 200,

🔴 ✨ Suggestion: Weak advance_to assertion allows any value below ceiling

In backfill_worker_returns_matched_block_obligations, the assertion assert!(adv.advance_to <= 79, ...) accepts any value from 0 to 79, including degenerate values like 0 or 50. With since = 80 (so ceiling = 79) and the full range covered by one chunk (chunk_end = min(BACKFILL_CHUNK_SIZE-1, 79) = 79), the only correct value is exactly 79. A regression where advance_to is computed as, say, chunk_start (0) would pass this assertion while causing caught_up_to to be advanced to a far-too-early height, permanently leaving most of the range un-scanned. Use assert_eq!(adv.advance_to, 79) instead.

Suggested change
last_processed_height: 200,
assert_eq!(adv.advance_to, 79, "advance_to must equal chunk_end (79) for a single-chunk window");
AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 407,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "Weak `advance_to` assertion allows any value below ceiling",
  "fix": "assert_eq!(adv.advance_to, 79, \"advance_to must equal chunk_end (79) for a single-chunk window\");",
  "reachability": "reachable"
}

// confirming records-and-advance are tied to the same operation.
let pending = multi.read().await.pending_rescans();
assert!(
pending.is_empty(),

🟠 ✨ Suggestion: Multi-chunk filter match path (match in chunk 2) is untested

The new test backfill_worker_walks_multiple_chunks_for_a_long_range addresses the previous finding by exercising the while chunk_start <= global_max loop, but only with dummy (no-match) filters across both chunks. The match-in-second-chunk path — where chunk_start has already advanced past BACKFILL_CHUNK_SIZE-1 and a filter hit occurs there — is still unexercised. This path is where the matched_range_keys set interaction with no-match advances in earlier chunks matters most: a range that had no match in chunk 1 (and was advanced) must still be correctly included in the obligation for a match in chunk 2. Add a test with since = BACKFILL_CHUNK_SIZE + 2 where the matching block's height is within [BACKFILL_CHUNK_SIZE..ceiling].

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 561,
  "severity": "suggestion",
  "confidence": "medium",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "Multi-chunk filter match path (match in chunk 2) is untested",
  "reachability": "reachable"
}

for (idx, (data, header)) in filter_data.iter().zip(headers.iter()).enumerate() {
let height = start + idx as u32;
let key = FilterMatchKey::new(height, header.block_hash());
out.insert(key, BlockFilter::new(data));

🟠 ✨ Suggestion: load_filters storage-mismatch error path has no test

The new length-mismatch guard returns a SyncError::Storage when the filter and header vecs have different lengths, correctly preventing silent tail-truncation. However, no test injects a mismatched storage response to verify the error is returned rather than panicked on, and to confirm the error message includes the height range. A unit test should use a mock storage that returns N filters and N-1 headers for a given range, assert tick() returns Err(...), and check the error string contains the range bounds.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 289,
  "severity": "suggestion",
  "confidence": "medium",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "`load_filters` storage-mismatch error path has no test",
  "reachability": "reachable"
}

/// [`WalletInterface::process_backfill_block_for_wallets`]: key_wallet_manager::WalletInterface::process_backfill_block_for_wallets
pub(crate) async fn on_block_processed(&mut self, hash: &BlockHash) -> bool {
self.pending_advances.remove(hash).is_some()
}

🟠 ✨ Suggestion: prune_pending_advances stale-entry scenario not directly tested

The prune_pending_advances function is the primary mechanism bounding pending_advances under download failures, reorgs, and range churn, but it is only exercised implicitly via tick. There is no test that: (a) calls tick once to populate pending_advances, (b) advances caught_up_to past the matched block's height or removes the range entirely (simulating a reorg clamp or out-of-band completion), and (c) calls tick again to verify the stale entry is removed. Without this, a future change to prune_pending_advances that silently regresses the retention logic would not be caught.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 261,
  "severity": "suggestion",
  "confidence": "medium",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "`prune_pending_advances` stale-entry scenario not directly tested",
  "reachability": "reachable"
}

`AddressPool::maintain_gap_limit` now takes the wallet's `synced_height` and emits born-complete sync ranges when `since_height <= synced_height`, so a snapshot replay's gap-limit derivations for already-scanned heights are dropped on insert instead of demanding backfill from `birth_height`. This eliminates a multi-minute startup regression on active wallets where every replayed gap extension manufactured a pending range.

`BlocksManager::process_buffered_blocks` no longer silently skips forward-sync wallets when a block hash appears in both `BlocksNeeded` and `BackfillBlocksNeeded`. Forward-sync wallets minus the backfill set go through `process_block_for_wallets`; backfill wallets go through `process_backfill_block_for_wallets`. `last_processed_height` only advances on the forward path so backfill-only blocks don't bump the watermark backwards.
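The routing described above amounts to a plain set partition. A minimal sketch (the `WalletId` alias and `partition_wallets` helper are illustrative stand-ins, not the crate's real API):

```rust
use std::collections::HashSet;

// Hypothetical wallet identifier; the real code uses the crate's own ID type.
type WalletId = u32;

/// Split the wallets interested in a block into a forward-sync set and a
/// backfill set, so neither path silently skips the other's wallets.
fn partition_wallets(
    forward: &HashSet<WalletId>,
    backfill: &HashSet<WalletId>,
) -> (Vec<WalletId>, Vec<WalletId>) {
    // Forward path: forward-sync wallets minus those routed through backfill.
    let mut forward_only: Vec<WalletId> = forward.difference(backfill).copied().collect();
    forward_only.sort_unstable();
    // Backfill path: every wallet with a pending range obligation for this block.
    let mut backfill_set: Vec<WalletId> = backfill.iter().copied().collect();
    backfill_set.sort_unstable();
    (forward_only, backfill_set)
}

fn main() {
    let forward: HashSet<WalletId> = [1, 2, 3].into_iter().collect();
    let backfill: HashSet<WalletId> = [2, 4].into_iter().collect();
    let (f, b) = partition_wallets(&forward, &backfill);
    assert_eq!(f, vec![1, 3]); // wallet 2 goes through the backfill path only
    assert_eq!(b, vec![2, 4]);
    println!("forward-only: {f:?}, backfill: {b:?}");
}
```

Only the forward-only set would then feed the `last_processed_height` advance, matching the watermark rule above.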

`BlocksManager::backfill_advances` is now pruned via `prune_stale_backfill_advances`, called from the `BackfillBlocksNeeded` handler with a fresh snapshot of the worker's live block hashes. Cancelled downloads, completed ranges, and reorg-clamped ranges no longer leak entries.

`BackfillWorker::tick` always emits the matched-block request even when the advance is already pending, so a previously-failed download is retried on the next tick instead of stalling the range forever. The dedup check on `pending_advances` is now `if !already_pending { push }` rather than `continue`.
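A minimal sketch of that retry-friendly dedup, assuming a simplified `pending_advances` map and request queue (the real worker's types and method names differ):

```rust
use std::collections::HashMap;

type BlockHash = [u8; 32];

struct PendingAdvance {
    advance_to: u32,
}

/// Emit the download request on every tick so a previously failed download
/// is retried, but insert the pending-advance entry only once.
fn request_block(
    pending: &mut HashMap<BlockHash, PendingAdvance>,
    hash: BlockHash,
    advance: PendingAdvance,
    requests: &mut Vec<BlockHash>,
) {
    // Always re-emit the request; a stalled range recovers on the next tick.
    requests.push(hash);
    // But never overwrite an advance that is already pending for this hash.
    pending.entry(hash).or_insert(advance);
}

fn main() {
    let mut pending = HashMap::new();
    let mut requests = Vec::new();
    let h = [0u8; 32];
    request_block(&mut pending, h, PendingAdvance { advance_to: 79 }, &mut requests);
    request_block(&mut pending, h, PendingAdvance { advance_to: 99 }, &mut requests);
    assert_eq!(requests.len(), 2);          // request re-emitted on the second tick
    assert_eq!(pending[&h].advance_to, 79); // first pending advance is kept
}
```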

`prune_pending_advances` builds its live-key index by folding instead of `collect`, taking the lowest `resume_from` when two `PendingRescan` entries share the same key. Future regressions that violate the unique-key invariant won't silently drop entries.
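The fold-based index build can be sketched like this (the `RangeKey` and `PendingRescan` shapes are assumptions for illustration; the point is that a duplicate key keeps the lowest `resume_from` instead of whichever entry `collect()` happened to see last):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the worker's range key and pending-rescan entry.
type RangeKey = (u32, u32); // (since_height, caught_up_to)

struct PendingRescan {
    key: RangeKey,
    resume_from: u32,
}

/// Build the live-key index by folding, taking the LOWEST `resume_from`
/// when two entries share the same key.
fn live_index(entries: &[PendingRescan]) -> HashMap<RangeKey, u32> {
    entries.iter().fold(HashMap::new(), |mut idx, e| {
        idx.entry(e.key)
            .and_modify(|r| *r = (*r).min(e.resume_from))
            .or_insert(e.resume_from);
        idx
    })
}

fn main() {
    let entries = [
        PendingRescan { key: (0, 79), resume_from: 50 },
        PendingRescan { key: (0, 79), resume_from: 10 }, // duplicate key, lower resume
        PendingRescan { key: (100, 199), resume_from: 120 },
    ];
    let idx = live_index(&entries);
    assert_eq!(idx[&(0, 79)], 10); // lowest resume_from wins, nothing dropped
    assert_eq!(idx[&(100, 199)], 120);
}
```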

`WalletEvent::RescanBlockProcessed` now forwards to the existing `on_block_processed` FFI callback so persisters write backfill records and `caught_up_to` advances atomically. The pool/indexes/advance_to fields are dropped at the FFI boundary because the existing callback shape can't carry them; the records and balance still arrive.

The default `WalletInterface::process_backfill_block_for_wallets` impl no longer falls back to `process_block_for_wallets` (which would emit `BlockProcessed` and break the atomicity contract). It only calls `advance_rescan` per advance entry and returns an empty `BlockProcessingResult`. Implementations that need to scan transactions during backfill MUST override.

Doc-comment correction: `0u32.saturating_sub(1)` saturates at `0`, not `u32::MAX`. The matcher's `min_height` semantics (skip `key.height() <= min_height`) mean that height 0 is skipped at the lower boundary; skipping genesis is acceptable in practice.
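The corrected claim is plain std behavior, easy to verify without any project code:

```rust
fn main() {
    // Saturating arithmetic clamps at the type's bound rather than wrapping:
    assert_eq!(0u32.saturating_sub(1), 0);
    // Wrapping subtraction is the operation that would yield u32::MAX:
    assert_eq!(0u32.wrapping_sub(1), u32::MAX);
    println!("saturating_sub clamps at 0; wrapping_sub underflows to u32::MAX");
}
```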
@manki-review

manki-review Bot commented May 9, 2026

Manki — Review complete

Planner (21s)
    feature · 2627 lines · 4 agents
    review effort: medium · judge effort: high

Review — 18 findings
    ✅ Security & Safety — 6 (443s)
    ✅ Architecture & Design — 5 (239s)
    ✗ Correctness & Logic — failed after 2 attempts (714s)
    ✅ Testing & Coverage — 7 (240s)

Judge — 12 kept · 1 dropped (201s)
    kept: 1 blocker · 3 warning · 6 suggestion · 2 nitpick
    dropped: 1 warning

Review metadata

Config:

  • Models: reviewer=claude-sonnet-4-6, judge=claude-opus-4-7
  • Review level: large (auto, 2627 lines)
  • Team: Security & Safety, Architecture & Design, Correctness & Logic, Testing & Coverage
  • Memory: disabled
  • Nit handling: issues

Judge decisions:

  • ✓ Kept: "RescanBlockProcessed routes to on_block_processed dropping advance_to/pool/indexes — atomic-persistence contract broken at FFI boundary" (blocker, high confidence) — "Two reviewers independently identified this. Forwarding to on_block_processed surfaces records and balance but discards advance_to/pool/indexes, so a C/Swift persister cannot atomically write 'caught_up_to' alongside the records — exactly the contract the rest of the PR enforces in Rust. Merged findings 1 and 6 (same issue)."
  • ✓ Kept: "ConvergenceChanged silently dropped at FFI boundary — C/Swift consumers cannot observe convergence" (warning, high confidence) — "Single reviewer. New event variant exists but no FFI dispatch is wired; consumers that gate features on convergence have no signal. Impact: Medium, Likelihood: Probable (every convergence change). Keep warning."
  • ✓ Kept: "advance_to set to chunk_end (global max) can exceed an individual range's ceiling in multi-range scenarios" (nitpick, high confidence) — "The finding itself acknowledges correctness depends on internal advance_rescan clamping, which the production impl (process_block.rs:368-374) does via 'scanned_through.min(cap)'. So today there's no observable bug, only a defensive-coding concern at the construction site. Downgrade from warning to suggestion."
  • ✓ Kept: "BackfillWorker::pending_advances stalls when block download fails and no subsequent tick() fires" (warning, medium confidence) — "Single reviewer. The 'always re-emit' code path mitigates failed downloads only when tick re-runs; if filter activity stops (wallet at tip), the wake channel is silent and convergence stalls indefinitely. Real liveness gap."
  • ✓ Kept: "prune_stale_backfill_advances only fires on BackfillBlocksNeeded — stale entries accumulate if backfill stops emitting" (suggestion, medium confidence) — "Single reviewer. Bounded leak in practice but consistent with the dual-map design's fragility. Suggestion-level cleanup."
  • ✓ Kept: "Double wallet write-lock acquisition for mixed forward+backfill block creates consistency gap" (warning, medium confidence) — "Single reviewer. process_buffered_blocks now legitimately takes the lock twice for mixed blocks; mempool/IS-lock processors can mutate state between acquisitions, producing inconsistent BlockProcessingResult merges. Concrete and reachable."
  • ✓ Kept: "RescanBlockProcessed FFI arm duplicates ~70 lines of BlockProcessed dispatch" (suggestion, high confidence) — "Single reviewer. Pure code-duplication concern; if the FFI eventually grows a dedicated rescan callback (per finding 1) the duplication evaporates anyway."
  • ✓ Kept: "TOCTOU in BackfillWorker::tick between pending_rescans snapshot and advance_rescan write" (nitpick, medium confidence) — "Single reviewer. Worst case is a delayed-by-one-tick advance or a no-op advance against a removed range — recoverable on next tick. Real but low-impact race; downgraded from warning."
  • ✓ Kept: "live_hashes in BackfillBlocksNeeded leaks dual-map implementation detail into the event protocol" (suggestion, medium confidence) — "Single reviewer. Architectural critique — pruning protocol leaks into event payload. Cleanup, not correctness."
  • ✗ Dropped: "RescanProgressed WalletEvent variant is defined but never emitted anywhere in this diff" (ignore, high confidence) — "Out of date with current code: WalletEvent::RescanProgressed is now emitted from process_block.rs:381 inside advance_rescan. False positive in this round."
  • ✓ Kept: "Weak advance_to assertion allows any value below ceiling" (suggestion, high confidence) — "Single reviewer. assert!(adv.advance_to <= 79) admits 0; tightening to assert_eq!(adv.advance_to, 79) would catch a regression that silently fails to advance the range. Test quality only."
  • ✓ Kept: "load_filters storage-mismatch error path has no test" (suggestion, high confidence) — "Single reviewer. Newly-added defensive Err path is untested; cheap to cover with a mock that returns mismatched lengths."
  • ✓ Kept: "prune_pending_advances stale-entry scenario not directly tested" (suggestion, high confidence) — "Single reviewer. Prune logic is exercised only indirectly through tick(); a direct test guarding the fold-based duplicate-key handling would prevent regressions."

Timing:

  • Parse: 2.1s
  • Review agents: 1938.6s
  • Judge: 201.1s
  • Total: 2141.7s

@manki-review manki-review Bot dismissed their stale review May 9, 2026 00:32

Superseded by new review

…hunk boundary

Closely spaced gap extensions on active wallets used to create separate `AddressSyncRange` entries because each `maintain_gap_limit` call ran at a slightly different chain height and `collapse_adjacent_ranges` only merges ranges with identical `(since_height, caught_up_to)`. A wallet with ~90 gap extensions across its history accumulated ~325 distinct sync ranges, dominating bookkeeping cost per backfill tick and producing thousands of `RescanProgressed` events per second on startup.

Rounding `since_height` UP to a 5000-block bucket (matching `BACKFILL_CHUNK_SIZE`) lets extensions inside the same window share a `since_height` and merge via the existing collapse path. The rounded ceiling means backfill scans at most one extra chunk per range, but the per-tick cost drops by an order of magnitude on active wallets.

The `caught_up_to` born-complete check uses the rounded `bucket_since_height`, so the snapshot-replay perf-fix from `dfc89db5` still kicks in correctly: a range whose rounded boundary falls at or below the wallet's `synced_height` is born complete and dropped on insert.
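The bucketing above can be sketched as round-up integer division. Only `BACKFILL_CHUNK_SIZE = 5000` comes from the commit message; the helper name and the exact rounding rule are assumptions for illustration:

```rust
const BACKFILL_CHUNK_SIZE: u32 = 5_000;

/// Round `since_height` UP to the next multiple of the chunk size, so gap
/// extensions landing in the same 5000-block window share a `since_height`
/// and merge via the existing collapse path.
fn bucket_since_height(since_height: u32) -> u32 {
    since_height.div_ceil(BACKFILL_CHUNK_SIZE) * BACKFILL_CHUNK_SIZE
}

fn main() {
    assert_eq!(bucket_since_height(4_999), 5_000);
    assert_eq!(bucket_since_height(5_000), 5_000); // already on a boundary
    assert_eq!(bucket_since_height(5_001), 10_000);
    // Two extensions a few hundred blocks apart share a bucket and collapse
    // into a single range:
    assert_eq!(bucket_since_height(812_340), bucket_since_height(814_900));
}
```

Rounding up rather than down is what bounds the cost to at most one extra chunk of filter scanning per range, as noted above.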
manki-review[bot]
manki-review Bot previously requested changes May 9, 2026

@manki-review manki-review Bot left a comment


RescanProgressed is now emitted and the BlocksManager backfill leak / forward-skip / duplicate-key bugs are gone, but the FFI still drops advance_to/pool/indexes from RescanBlockProcessed — atomic persistence remains broken at the C boundary. New concerns: pending_advances stall on failed downloads relies entirely on tick liveness, and the test added for RescanBlockProcessed never asserts on the actual records or chunk_end value.

Note: 3 of 4 agents completed (Correctness & Logic failed after 2 attempts)

📊 12 findings (1 blocker, 3 warning, 6 suggestion, 2 nitpick) · 2627 lines · 2142s

Review stats
{
  "model": "claude-sonnet-4-6",
  "reviewTimeMs": 2141710,
  "diffLines": 2627,
  "diffAdditions": 2567,
  "diffDeletions": 60,
  "filesReviewed": 22,
  "agents": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic",
    "Testing & Coverage"
  ],
  "findingsRaw": 18,
  "findingsKept": 12,
  "findingsDropped": 6,
  "severity": {
    "blocker": 1,
    "warning": 3,
    "suggestion": 6,
    "nitpick": 2
  },
  "verdict": "REQUEST_CHANGES",
  "prNumber": 135,
  "commitSha": "dfc89db5596ba2887960f61847f6c7d3a976768f",
  "agentMetrics": [
    {
      "name": "Security & Safety",
      "findingsRaw": 6,
      "findingsKept": 5,
      "responseLength": 8032
    },
    {
      "name": "Architecture & Design",
      "findingsRaw": 5,
      "findingsKept": 5,
      "responseLength": 8001
    },
    {
      "name": "Correctness & Logic",
      "findingsRaw": 0,
      "findingsKept": 0
    },
    {
      "name": "Testing & Coverage",
      "findingsRaw": 7,
      "findingsKept": 4,
      "responseLength": 7671
    }
  ],
  "judgeMetrics": {
    "confidenceDistribution": {
      "high": 8,
      "medium": 5,
      "low": 0
    },
    "severityChanges": 13,
    "mergedDuplicates": 2,
    "defensiveHardeningCount": 2,
    "verdictReason": "required_present"
  },
  "fileMetrics": {
    "fileTypes": {
      ".rs": 21,
      ".toml": 1
    },
    "findingsPerFile": {
      "dash-spv-ffi/src/callbacks.rs": 3,
      "dash-spv/src/sync/filters/backfill.rs": 6,
      "dash-spv/src/sync/blocks/manager.rs": 2,
      "dash-spv/src/sync/events.rs": 1
    }
  },
  "reviewerModel": "claude-sonnet-4-6",
  "judgeModel": "claude-opus-4-7"
}

Comment thread dash-spv-ffi/src/callbacks.rs
Comment thread dash-spv-ffi/src/callbacks.rs
Comment thread dash-spv/src/sync/filters/backfill.rs
Comment thread dash-spv/src/sync/blocks/manager.rs
Comment thread dash-spv/src/sync/blocks/manager.rs
Comment thread dash-spv-ffi/src/callbacks.rs
Comment thread dash-spv/src/sync/events.rs
Comment thread dash-spv/src/sync/filters/backfill.rs
Comment thread dash-spv/src/sync/filters/backfill.rs
Comment thread dash-spv/src/sync/filters/backfill.rs
@manki-review

manki-review Bot commented May 9, 2026

Manki — Review complete

Planner (29s)
    feature · 2712 lines · 4 agents
    review effort: medium · judge effort: high

Review — 27 findings
    ✅ Security & Safety — 3 (412s)
    ✅ Architecture & Design — 8 (207s)
    ✅ Correctness & Logic — 6 (1080s)
    ✅ Testing & Coverage — 10 (259s)

Judge — 19 kept · 0 dropped (146s)
    kept: 1 blocker · 9 warning · 9 suggestion

Review metadata

Config:

  • Models: reviewer=claude-sonnet-4-6, judge=claude-opus-4-7
  • Review level: large (auto, 2712 lines)
  • Team: Security & Safety, Architecture & Design, Correctness & Logic, Testing & Coverage
  • Memory: disabled
  • Nit handling: issues

Judge decisions:

  • ✓ Kept: "advance_to set to chunk_end may exceed a range's individual ceiling in multi-range scenarios" (warning, medium confidence) — "Impact: High (silent over-advance of caught_up_to past a range's verified ceiling), Likelihood: Possible (requires two active rescans with different ceilings in the same chunk). The fix is a one-line clamp; whether wallet-side advance_rescan clamps internally is not something the worker should rely on."
  • ✓ Kept: "ConvergenceChanged silently dropped at FFI boundary — C/Swift consumers cannot observe convergence" (warning, high confidence) — "Merged findings 2, 5, 13 — same issue, three reviewers. Impact: High (consumers cannot persist convergence watermark or distinguish 'fully backfilled' from 'never ran'), Likelihood: Certain on the FFI code path. Strong consensus → keep at warning, but borderline blocker since it removes the only signal of backfill completion across the FFI boundary."
  • ✓ Kept: "RescanBlockProcessed drops advance_to/pool/indexes at FFI boundary — atomic-persistence contract broken" (blocker, high confidence) — "Merged findings 3, 4, 12 — three independent reviewers. Impact: Critical (records persisted without paired caught_up_to advance → duplicate processing on every restart after crash), Likelihood: Certain on every backfill block. Code comment explicitly acknowledges the drop. The whole point of the PR is atomic-per-block backfill persistence and the FFI consumers cannot honor that contract."
  • ✓ Kept: "TOCTOU between read and write wallet lock in BackfillWorker::tick()" (warning, medium confidence) — "Impact: Medium-High (stale-view advance_rescan call could mark range past completed/removed state), Likelihood: Possible (requires concurrent forward path completing or removing a range between locks). Not addressed by current pruning, since prune runs against the stale snapshot."
  • ✓ Kept: "BackfillWorker embedded in FiltersManager conflates forward-sync and backfill concerns (SRP violation)" (suggestion, high confidence) — "Impact: Low (architectural smell, no behavioral bug), Likelihood: Certain at the design layer → suggestion. Single reviewer; comment in code already calls out the seam as temporary."
  • ✓ Kept: "live_hashes in BackfillBlocksNeeded leaks BackfillWorker internal state into the event protocol" (suggestion, medium confidence) — "Impact: Low (API cleanliness, derivable from blocks map), Likelihood: Certain by definition. Tied to the dual-sources-of-truth design; legitimate but low-priority cleanup."
  • ✓ Kept: "RescanBlockProcessed FFI arm duplicates ~70 lines of BlockProcessed dispatch logic" (suggestion, high confidence) — "Impact: Low (maintenance burden — drift if BlockProcessed shape grows), Likelihood: Certain that any future change will need to be made twice. Standard refactor advice."
  • ✓ Kept: "BlocksManager::backfill_advances is a second uncoordinated source of truth alongside BackfillWorker::pending_advances" (suggestion, medium confidence) — "Impact: Medium (two prune mechanisms can drift), Likelihood: Possible. Real architectural concern but the pruning hooks make actual leaks recoverable; not blocking."
  • ✓ Kept: "load_filters acquires filter and header storage read locks sequentially, creating a consistency window during reorgs" (warning, medium confidence) — "Impact: High (filter/header mismatch could attribute matches to wrong block hash), Likelihood: Possible (requires reorg to land between two awaits). Length check does not catch semantic mismatch."
  • ✓ Kept: "Premature ConvergenceChanged when union-sweep attributes multiple ranges to one filter match" (warning, medium confidence) — "Impact: High (consumer using ConvergenceChanged as atomicity boundary persists state before all matched blocks processed → late RescanBlockProcessed delivers records past the checkpoint), Likelihood: Possible (requires multi-range overlap with different match heights). Subtle but well-described."
  • ✓ Kept: "Double wallet write-lock for mixed forward+backfill block creates consistency gap" (warning, medium confidence) — "Impact: High (forward and backfill halves see different wallet state for the same block, merged result diverges), Likelihood: Possible (requires concurrent mempool tx between the two locks for an overlapping wallet). Single reviewer but concrete reachable scenario."
  • ✓ Kept: "add_from_storage with backfill-only interested set may clobber forward-sync wallets in pipeline" (warning, medium confidence) — "Impact: High (forward-sync wallets silently dropped from a block's processing → stalled wallet_synced_height), Likelihood: depends on pipeline semantics — flagged as conditional ('if pipeline represents interested as last-written'). Real concern worth verifying."
  • ✓ Kept: "Block storage read-lock held across entire async for-loop in BackfillBlocksNeeded handler" (suggestion, medium confidence) — "Impact: Medium (writer starvation under large blocks map), Likelihood: Possible. Performance-leaning concern, not a correctness bug."
  • ✓ Kept: "No unit test for BackfillBlocksNeeded handling in BlocksManager on_event" (warning, high confidence) — "Impact: High (entire backfill-block lifecycle uncovered → regression silently skips backfill processing), Likelihood: Certain — the test does not exist. Test gap on the central new control path."
  • ✓ Kept: "has_forward guard on update_last_processed is untested" (suggestion, high confidence) — "Impact: High if regressed (watermark retreat), Likelihood: Unlikely as a regression unless the guard is removed. Worth a one-line assertion test but lower priority than the merge-path coverage."
  • ✓ Kept: "RescanBlockProcessed routes to on_block_processed, dropping advance_to/pool/indexes — atomic-persistence contract broken at FFI boundary" (warning, high confidence) — "Impact: High (entire FFI atomic-persistence dispatch unverified — height/wallet_id passing, callback firing), Likelihood: Certain coverage gap. Pairs with the blocker on the same arm."
  • ✓ Kept: "Weak advance_to assertions in backfill match tests" (suggestion, high confidence) — "Impact: Medium (assert!(<= ceiling) passes even at 0, hiding the very regression the PR fixes), Likelihood: Certain test weakness. Tighten to assert_eq! since single-chunk coverage is guaranteed at current BACKFILL_CHUNK_SIZE."
  • ✓ Kept: "load_filters storage-mismatch error path has no test" (suggestion, medium confidence) — "Impact: Medium (storage-layer regression silently advances caught_up_to past unscanned heights), Likelihood: Possible. Easy to add a stub test; covers a guard that is the only protection against partial filter/header reads."
  • ✓ Kept: "prune_pending_advances stale-entry removal not directly tested" (suggestion, medium confidence) — "Impact: Medium (leak path if retain predicates inverted), Likelihood: Unlikely without deliberate change. Indirect coverage via tick exists; standard suggestion to add direct unit test."

Timing:

  • Parse: 6.9s
  • Review agents: 1113.5s
  • Judge: 145.7s
  • Total: 1266.1s

@manki-review manki-review Bot dismissed their stale review May 9, 2026 01:08

Superseded by new review

manki-review[bot]
manki-review Bot previously requested changes May 9, 2026

@manki-review manki-review Bot left a comment


FFI atomic-persistence gap is the headline issue and three of four reviewers flagged it: RescanBlockProcessed still routes to on_block_processed dropping advance_to/pool/indexes, and ConvergenceChanged is an explicit no-op — there is no path for C/Swift consumers to checkpoint backfill progress. Several reachable correctness concerns (chunk_end overshoot, double wallet-lock gap, mixed-block routing) and broad test-coverage holes around the new backfill machinery remain.

📊 19 findings (1 blocker, 9 warning, 9 suggestion) · 2712 lines · 1266s

Review stats
{
  "model": "claude-sonnet-4-6",
  "reviewTimeMs": 1266142,
  "diffLines": 2712,
  "diffAdditions": 2651,
  "diffDeletions": 61,
  "filesReviewed": 22,
  "agents": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic",
    "Testing & Coverage"
  ],
  "findingsRaw": 27,
  "findingsKept": 19,
  "findingsDropped": 8,
  "severity": {
    "blocker": 1,
    "warning": 9,
    "suggestion": 9,
    "nitpick": 0
  },
  "verdict": "REQUEST_CHANGES",
  "prNumber": 135,
  "commitSha": "f0b7873f099717cbcc0a16cd78fabaa09d2c27b4",
  "agentMetrics": [
    {
      "name": "Security & Safety",
      "findingsRaw": 3,
      "findingsKept": 3,
      "responseLength": 5258
    },
    {
      "name": "Architecture & Design",
      "findingsRaw": 8,
      "findingsKept": 9,
      "responseLength": 9064
    },
    {
      "name": "Correctness & Logic",
      "findingsRaw": 6,
      "findingsKept": 6,
      "responseLength": 8423
    },
    {
      "name": "Testing & Coverage",
      "findingsRaw": 10,
      "findingsKept": 8,
      "responseLength": 9587
    }
  ],
  "judgeMetrics": {
    "confidenceDistribution": {
      "high": 8,
      "medium": 11,
      "low": 0
    },
    "severityChanges": 19,
    "mergedDuplicates": 5,
    "verdictReason": "required_present"
  },
  "fileMetrics": {
    "fileTypes": {
      ".rs": 21,
      ".toml": 1
    },
    "findingsPerFile": {
      "dash-spv/src/sync/filters/backfill.rs": 7,
      "dash-spv-ffi/src/callbacks.rs": 4,
      "dash-spv/src/sync/filters/manager.rs": 1,
      "dash-spv/src/sync/events.rs": 1,
      "dash-spv/src/sync/blocks/manager.rs": 3,
      "dash-spv/src/sync/blocks/sync_manager.rs": 3
    }
  },
  "reviewerModel": "claude-sonnet-4-6",
  "judgeModel": "claude-opus-4-7"
}

.iter()
.filter(|r| {
!matched_range_keys.contains(&(
r.wallet_id,

🟠 ⚠️ Warning: advance_to set to chunk_end may exceed a range's individual ceiling in multi-range scenarios

When two or more PendingRescan entries with different ceilings are active in the same chunk, global_max equals the highest ceiling and chunk_end = min(chunk_start + BACKFILL_CHUNK_SIZE - 1, global_max). For a range whose own ceiling is below chunk_end, the BackfillAdvance.advance_to = chunk_end will exceed that range's boundary. If process_backfill_block_for_wallets / advance_rescan does not clamp to min(advance_to, ceiling) internally, caught_up_to is advanced past the range's ceiling — potentially marking the range complete without having verified all blocks up to its ceiling under the correct address set, silently corrupting convergence tracking. The same overshoot applies to the no-match advance_rescan(…, chunk_end) call; whether that path clamps is a wallet-implementation assumption the backfill worker should not rely on. Fix: use chunk_end.min(r.ceiling) when constructing BackfillAdvance.advance_to.

Suggested fix
Suggested change
r.wallet_id,
let advance = BackfillAdvance {
    wallet_id: r.wallet_id,
    pool: r.pool,
    indexes: r.indexes.clone(),
    height,
    advance_to: chunk_end.min(r.ceiling), // clamp to this range's own boundary
};
AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 210,
  "severity": "warning",
  "confidence": "medium",
  "flaggedBy": [
    "Security & Safety"
  ],
  "title": "`advance_to` set to `chunk_end` may exceed a range's individual ceiling in multi-range scenarios",
  "fix": "let advance = BackfillAdvance {\n    wallet_id: r.wallet_id,\n    pool: r.pool,\n    indexes: r.indexes.clone(),\n    height,\n    advance_to: chunk_end.min(r.ceiling), // clamp to this range's own boundar",
  "reachability": "reachable"
}

@@ -1058,6 +1061,93 @@ impl FFIWalletEventCallbacks {
cb(c_wallet_id.as_ptr(), *height, self.user_data);

🔴 ⚠️ Warning: ConvergenceChanged silently dropped at FFI boundary — C/Swift consumers cannot observe convergence

The newly added WalletEvent::ConvergenceChanged arm in FFIWalletEventCallbacks::dispatch is an empty no-op. C/Swift persisters have no signal that the convergence watermark advanced, so they cannot record it across restarts; on restart the client cannot distinguish 'fully backfilled through block N' from 'never ran backfill', forcing unnecessary full rescans of already-covered ranges. Because RescanBlockProcessed is also merged into BlockProcessed without advance_to (see next finding), there is no other path through which convergence progress crosses the FFI boundary. A minimal fix is a new on_convergence_changed callback on FFIWalletEventCallbacks taking wallet_id and height: u32; if that is deferred the limitation must be documented on the struct so FFI consumers are not surprised.

Suggested fix
Suggested change
cb(c_wallet_id.as_ptr(), *height, self.user_data);
// Add to FFIWalletEventCallbacks:
pub on_convergence_changed: Option<extern "C" fn(
    wallet_id: *const c_char,
    height: u32,
    user_data: *mut c_void,
)>,

// In dispatch:
WalletEvent::ConvergenceChanged { wallet_id, fully_converged_through } => {
    if let (Some(cb), Some(h)) = (self.on_convergence_changed, *fully_converged_through) {
        let id_hex = hex::encode(wallet_id);
        let c_id = CString::new(id_hex).unwrap_or_default();
        cb(c_id.as_ptr(), h, self.user_data);
    }
}
AI context
{
  "file": "dash-spv-ffi/src/callbacks.rs",
  "line": 1061,
  "severity": "warning",
  "confidence": "high",
  "flaggedBy": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic"
  ],
  "title": "`ConvergenceChanged` silently dropped at FFI boundary — C/Swift consumers cannot observe convergence",
  "fix": "// Add to FFIWalletEventCallbacks:\npub on_convergence_changed: Option<extern \"C\" fn(\n    wallet_id: *const c_char,\n    height: u32,\n    user_data: *mut c_void,\n)>,\n\n// In dispatch:\nWalletEvent::Conver",
  "reachability": "reachable"
}

WalletEvent::ConvergenceChanged {
..
} => {}
WalletEvent::RescanProgressed {

🔴 🚫 Blocker: RescanBlockProcessed drops advance_to/pool/indexes at FFI boundary — atomic-persistence contract broken

The new RescanBlockProcessed arm routes to on_block_processed, forwarding records and balance but deliberately discarding advance_to, pool, and indexes. A C/Swift persister receiving this as an ordinary BlockProcessed cannot distinguish a backfill record write from forward sync, cannot atomically write the convergence-height advance alongside the records, and receives no signal when a backfill range completes. On a crash between the record write and a convergence write the persister will re-scan the same block on every restart. The code comment acknowledges the drop and frames it as acceptable because the existing callback shape cannot carry the fields — but that justification cuts the other way: if the existing shape cannot carry the fields, that is precisely the case for adding a dedicated callback or an on_rescan_block_processed entry point. The combined effect of this finding and the ConvergenceChanged no-op is that the FFI layer provides zero visibility into backfill progress or completion, making correct persistence impossible without modifying the ABI. Caveats on the suggested fix: adding a new callback is an ABI change and requires updating C/Swift consumers.

Suggested fix
Suggested change
WalletEvent::RescanProgressed {
// Option A: new dedicated callback on FFIWalletEventCallbacks
pub on_rescan_block_processed: Option<
    extern "C" fn(
        wallet_id: *const c_char,
        height: u32,
        advance_to: u32, // the caught_up_to cursor value to persist
        inserted: *const FFITransactionRecord, inserted_count: u32,
        updated: *const FFITransactionRecord, updated_count: u32,
        matured: *const FFITransactionRecord, matured_count: u32,
        balance: *const FFIBalance,
        account_balances: *const FFIAccountBalance,
        account_balances_count: u32,
        user_data: *mut c_void,
    ),
>,
// In the RescanBlockProcessed arm: prefer on_rescan_block_processed
// when non-null; fall back to on_block_processed only when null so
// consumers that do not need atomic convergence still get records.
AI context
{
  "file": "dash-spv-ffi/src/callbacks.rs",
  "line": 1067,
  "severity": "blocker",
  "confidence": "high",
  "flaggedBy": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic"
  ],
  "title": "`RescanBlockProcessed` drops `advance_to`/`pool`/`indexes` at FFI boundary — atomic-persistence contract broken",
  "fix": "// Option A: new dedicated callback on FFIWalletEventCallbacks\npub on_rescan_block_processed: Option<\n    extern \"C\" fn(\n        wallet_id: *const c_char,\n        height: u32,\n        advance_to: u32,",
  "reachability": "reachable"
}

&mut self,
) -> SyncResult<BTreeMap<FilterMatchKey, Vec<BackfillAdvance>>> {
let rescans = self.wallet.read().await.pending_rescans();
self.prune_pending_advances(&rescans);

🟠 ⚠️ Warning: TOCTOU between read and write wallet lock in BackfillWorker::tick()

tick() reads pending_rescans() under a short-lived read lock, releases it, performs substantial computation (filter loading, match detection), then re-acquires a write lock to call advance_rescan on the no-match ranges. Between the two acquisitions a concurrent task (e.g., the forward-sync block path or another tick call) can complete or remove a range from the wallet. The no-match advance_rescan call would then operate on a stale view: advancing a range that was already completed causes a silent redundant write, and if the implementation removes the range on completion, advancing a missing range could silently succeed and corrupt the next convergence_height computation. The fix is to re-read pending_rescans() inside the write lock before calling advance_rescan, or to restructure tick to hold the write lock across the scan-and-advance decision for ranges with no matches.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 100,
  "severity": "warning",
  "confidence": "medium",
  "flaggedBy": [
    "Architecture & Design"
  ],
  "title": "TOCTOU between read and write wallet lock in `BackfillWorker::tick()`",
  "reachability": "reachable"
}
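The re-read-under-write-lock discipline this finding asks for can be sketched without the real async WalletInterface. Below, `Wallet` and its `pending` set are simplified stand-ins (the actual pending-rescan state is keyed by wallet/pool/indexes, not a bare `u32`); the point is only that the no-match advance re-validates candidates against the current set while the write lock is held:

```rust
use std::collections::HashSet;
use std::sync::RwLock;

// Hypothetical stand-in for the wallet's pending-rescan set; the real
// WalletInterface is async, but the locking discipline is the same.
struct Wallet {
    pending: RwLock<HashSet<u32>>,
}

impl Wallet {
    // No-match advance: re-check each candidate against the *current*
    // pending set while holding the write lock, so a range completed by a
    // concurrent task between the earlier read and this call is skipped
    // instead of being advanced on a stale view.
    fn advance_no_match(&self, candidates: &[u32], to: u32) -> Vec<(u32, u32)> {
        let guard = self.pending.write().unwrap();
        candidates
            .iter()
            .filter(|id| guard.contains(id))
            .map(|&id| (id, to)) // advance_rescan(id, to) would happen here
            .collect()
    }
}
```

A candidate list computed under the earlier read lock may contain entries (here, `3`) that no longer exist by the time the write lock is taken; the filter drops them rather than writing through a stale view.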

@@ -71,6 +72,11 @@ pub struct FiltersManager<
/// `BlockProcessed` and the per-wallet record of which wallets already

🔴 ✨ Suggestion: BackfillWorker embedded in FiltersManager conflates forward-sync and backfill concerns (SRP violation)

FiltersManager now holds a BackfillWorker as a struct field and exposes backfill_tick / backfill_block_processed wrapper methods. This gives the manager three distinct responsibilities: (1) forward batch filter downloads, (2) block-match tracking, and (3) backfill sweep orchestration. The PR comment acknowledges this is a 'for now' seam, but the coupling is already load-bearing: the sync manager wires the backfill tick inline on filter-stored events, meaning backfill latency is now part of the forward-sync hot path. A cleaner boundary would have BackfillWorker driven by a separate task or at least kept behind a standalone coordinator type, leaving FiltersManager responsible only for forward sync. As-is, adding a wake channel (the stated follow-up) will require refactoring the field out again anyway.

AI context
{
  "file": "dash-spv/src/sync/filters/manager.rs",
  "line": 72,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Architecture & Design"
  ],
  "title": "`BackfillWorker` embedded in `FiltersManager` conflates forward-sync and backfill concerns (SRP violation)",
  "reachability": "reachable"
}

// Only count new transactions to avoid double-counting during rescans
self.progress.add_transactions(result.new_txids.len() as u32);
self.progress.update_last_processed(height);
// Backfill-only blocks live below the forward edge and must not

🔴 ✨ Suggestion: has_forward guard on update_last_processed is untested

The code gates self.progress.update_last_processed(height) on has_forward to prevent backfill-only blocks from regressing the forward-sync progress watermark. No test verifies that a pure backfill block (all interested wallets are in backfill_advances) leaves last_processed_height unchanged. If this guard is accidentally dropped or the condition inverted, the watermark retreats to historical heights, confusing the forward pipeline's state tracking and potentially causing duplicate block requests or skipped ranges. Add a test that processes a backfill-only block and asserts the progress last_processed_height did not change.

AI context
{
  "file": "dash-spv/src/sync/blocks/manager.rs",
  "line": 193,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "has_forward guard on update_last_processed is untested",
  "reachability": "reachable"
}
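The invariant the missing test should pin down fits in a few lines. `Progress` here is a simplified stand-in for the real sync-progress tracker, not the crate's type:

```rust
// Tiny model of the has_forward guard: a backfill-only block
// (has_forward == false) must leave the forward watermark unchanged.
struct Progress {
    last_processed: u32,
}

fn on_block_processed(progress: &mut Progress, height: u32, has_forward: bool) {
    if has_forward {
        progress.last_processed = height;
    }
    // else: backfill-only block below the forward edge; watermark untouched
}
```

The suggested test would process a backfill-only block at a historical height and assert the watermark did not retreat, then process a forward block and assert it advanced.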

let c_wallet_id = CString::new(wallet_id_hex).unwrap_or_default();
let ffi_inserted: Vec<FFITransactionRecord> =
inserted.iter().map(FFITransactionRecord::from).collect();
let ffi_updated: Vec<FFITransactionRecord> =

🔴 ⚠️ Warning: RescanBlockProcessed routes to on_block_processed, dropping advance_to/pool/indexes — atomic-persistence contract broken at FFI boundary

The new RescanBlockProcessed arm in FFIWalletEventCallbacks::dispatch forwards to on_block_processed, silently dropping advance_to, pool, and indexes at the FFI boundary. No test verifies the callback fires at all, that height and wallet_id are passed correctly, or that the dropped fields do not corrupt the argument layout. The atomic-persistence contract for backfill blocks depends entirely on this arm; a regression such as the callback being silently skipped or height being zero would break every C/Swift persister without any Rust-side test failure. Add a test analogous to test_block_processed_dispatch_sums_per_wallet_addresses that sends a RescanBlockProcessed event and asserts on_block_processed fires with the expected height and wallet_id.

Suggested fix
Suggested change
let ffi_updated: Vec<FFITransactionRecord> =
Add a distinct `OnRescanBlockProcessedCallback` type to `FFIWalletEventCallbacks`:
```c
typedef void (*OnRescanBlockProcessedCallback)(
    const char *wallet_id, uint32_t height,
    const FFITransactionRecord *inserted, uint32_t inserted_count,
    const FFITransactionRecord *updated, uint32_t updated_count,
    const FFITransactionRecord *matured, uint32_t matured_count,
    FFIAddressPoolType pool,
    uint32_t indexes_start, uint32_t indexes_end,
    uint32_t advance_to,
    const FFIBalance *balance,
    const FFIAccountBalance *account_balances, uint32_t account_balances_count,
    void *user_data
);
```
When `on_rescan_block_processed` is null, fall back to the current `on_block_processed` dispatch (without `advance_to`) and document the degraded atomicity guarantee explicitly.
AI context
{
  "file": "dash-spv-ffi/src/callbacks.rs",
  "line": 1093,
  "severity": "warning",
  "confidence": "high",
  "flaggedBy": [
    "Architecture & Design",
    "Testing & Coverage"
  ],
  "title": "`RescanBlockProcessed` routes to `on_block_processed`, dropping `advance_to`/`pool`/`indexes` — atomic-persistence contract broken at FFI boundary",
  "fix": "Add a distinct `OnRescanBlockProcessedCallback` type to `FFIWalletEventCallbacks`:\n```c\ntypedef void (*OnRescanBlockProcessedCallback)(\n    const char *wallet_id, uint32_t height,\n    const FFITransac",
  "reachability": "reachable"
}

let bytes = if h == h1 {
filter_h1.content.clone()
} else {
dummy_filter.content.clone()

🔴 ✨ Suggestion: Weak advance_to assertions in backfill match tests

Both backfill_worker_returns_matched_block_obligations (assert!(adv.advance_to <= 79)) and backfill_recovers_missed_outputs_after_gap_limit_extension (assert!(adv.advance_to <= 199)) use inequality rather than equality. Since BACKFILL_CHUNK_SIZE (5000) far exceeds both test ceilings (79 and 199), the entire range fits in one chunk and chunk_end is clamped to ceiling, so advance_to should equal ceiling exactly. The inequality assertions pass even if advance_to is 0, hiding a regression where the worker credits less progress than it should — the scenario the PR was specifically designed to fix. Note that tightening to assert_eq! assumes single-chunk coverage; if BACKFILL_CHUNK_SIZE is later reduced below the test ceilings, the tests would need separate multi-chunk handling.

Suggested fix
Suggested change
dummy_filter.content.clone()
// In backfill_worker_returns_matched_block_obligations:
assert_eq!(adv.advance_to, 79, "advance_to must equal ceiling (since-1=79)");
// In backfill_recovers_missed_outputs_after_gap_limit_extension:
assert_eq!(adv.advance_to, 199, "advance_to must equal ceiling (since_height-1=199)");
AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 461,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "Weak advance_to assertions in backfill match tests",
  "fix": "// In backfill_worker_returns_matched_block_obligations:\nassert_eq!(adv.advance_to, 79, \"advance_to must equal ceiling (since-1=79)\");\n\n// In backfill_recovers_missed_outputs_after_gap_limit_extension",
  "reachability": "reachable"
}

}
Ok(out)
}
}

🟠 ✨ Suggestion: load_filters storage-mismatch error path has no test

BackfillWorker::load_filters returns SyncError::Storage when filter_data.len() != headers.len(). No test injects a mismatched pair — for example a storage stub that returns N filters but N-1 headers — to verify the guard fires and tick propagates the error rather than silently advancing caught_up_to past heights that were never actually scanned. A storage layer regression returning a partial view would be completely invisible without this coverage.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 315,
  "severity": "suggestion",
  "confidence": "medium",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "load_filters storage-mismatch error path has no test",
  "reachability": "reachable"
}
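The guard under discussion reduces to a length check before zipping filters with headers. A minimal stand-in (the real code returns `SyncError::Storage`; a `String` error substitutes here, and the filter/header element types are simplified):

```rust
// Simplified stand-in for the load_filters pairing guard: refuse to
// attribute filter content to heights when storage returns a partial view.
fn pair_filters(filters: &[Vec<u8>], headers: &[u32]) -> Result<Vec<(u32, Vec<u8>)>, String> {
    if filters.len() != headers.len() {
        return Err(format!(
            "storage mismatch: {} filters vs {} headers",
            filters.len(),
            headers.len()
        ));
    }
    Ok(headers.iter().copied().zip(filters.iter().cloned()).collect())
}
```

The suggested stub test would make a storage double return N filters but N-1 headers and assert the error propagates out of `tick` instead of `caught_up_to` advancing.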

/// pending set.
///
/// Returns `true` when the hash was in the worker's pending set — i.e.
/// the block was backfill's, not forward sync's.

🟠 ✨ Suggestion: prune_pending_advances stale-entry removal not directly tested

prune_pending_advances removes advances whose range key is absent from rescans or whose height falls below resume_from. No test directly validates these two removal conditions with crafted pending_advances entries. The function is exercised indirectly via tick, but the indirect path only exercises the 'range still present and height still live' case; the 'range completed/absent from rescans' and 'height below resume_from' cases go unverified, leaving a leak path open if the retain predicates are accidentally inverted.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 270,
  "severity": "suggestion",
  "confidence": "medium",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "prune_pending_advances stale-entry removal not directly tested",
  "reachability": "reachable"
}
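The two retain predicates a direct unit test should cover can be modeled with simplified keys. Here `(wallet_id, height)` pairs stand in for the real advance keys (which also carry pool/index information), and `rescans` maps wallet id to `resume_from` for ranges still pending:

```rust
use std::collections::HashMap;

// Sketch of prune_pending_advances' two removal conditions:
// 1. the range is absent from rescans (completed) -> drop the advance;
// 2. the advance's height is below resume_from (already credited) -> drop it.
fn prune_pending_advances(pending: &mut HashMap<(u32, u32), ()>, rescans: &HashMap<u32, u32>) {
    pending.retain(|&(wallet, height), _| match rescans.get(&wallet) {
        None => false,
        Some(&resume_from) => height >= resume_from,
    });
}
```

A direct test crafts one entry per removal condition plus one survivor, so an accidentally inverted predicate fails immediately instead of leaking.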

Backfill `tick` previously assumed the full filter range was always loadable. During initial sync (or any time forward filter sync is still downloading), `load_filters([0..chunk_end])` could request offsets past the storage's `last_valid_offset` and panic in `segments::get_items`. Clamp `global_max` to the storage's currently-available filter tip so the worker no-ops on chunks not yet downloaded; the next `tick` re-evaluates as forward sync makes progress.

Fixes the `Trying to access invalid offset` panic that blocked Ubuntu/Ubuntu-ARM/Windows/macOS FFI and SPV integration tests on PR #135.
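The clamp described in this commit can be sketched as a pure function. `stored_filter_tip` is a hypothetical accessor name for the storage's currently-available filter tip, with `None` meaning no filters have been downloaded yet:

```rust
// Never build a backfill chunk past the filter storage's current tip;
// a None tip means the worker no-ops this tick and re-evaluates on the
// next tick as forward sync makes progress.
fn effective_ceiling(global_max: u32, stored_filter_tip: Option<u32>) -> Option<u32> {
    stored_filter_tip.map(|tip| global_max.min(tip))
}
```

With this clamp in place, a chunk request can never address an offset past the storage's last valid offset, which is the panic path the commit fixes.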
@manki-review

manki-review Bot commented May 9, 2026

Manki — Review complete

Planner (21s)
    feature · 2782 lines · 4 agents
    review effort: medium · judge effort: high

Review — 35 findings
    ✅ Security & Safety — 6 (549s)
    ✅ Architecture & Design — 3 (230s)
    ✅ Correctness & Logic — 12 (1150s)
    ✅ Testing & Coverage — 14 (259s)

Judge — 9 kept · 3 dropped (186s)
    kept: 1 blocker · 4 warning · 2 suggestion · 2 nitpick
    dropped: 2 warning · 1 suggestion

Review metadata

Config:

  • Models: reviewer=claude-sonnet-4-6, judge=claude-opus-4-7
  • Review level: large (auto, 2782 lines)
  • Team: Security & Safety, Architecture & Design, Correctness & Logic, Testing & Coverage
  • Memory: disabled
  • Nit handling: issues

Judge decisions:

  • ✓ Kept: "push_external_sync_range test helper injects sync range into all account types' external pools" (warning, high confidence) — "Impact: Medium (test correctness — convergence assertions silently depend on account topology rather than the logic under test), Likelihood: Probable (every test using this helper). Real test-fidelity bug worth flagging at warning."
  • ✓ Kept: "BackfillWorker::on_block_processed declared async but performs no async work" (nitpick, high confidence) — "Impact: Low (cosmetic; tiny needless future allocation), Likelihood: Certain. Only one reviewer flagged; purely a style/cleanup downgrade from suggestion to nitpick."
  • ✓ Kept: "backfill_advances.insert discards the height dimension from FilterMatchKey" (nitpick, high confidence) — "Impact: Low (no current call site cross-checks key height; height is preserved in BackfillAdvance.height), Likelihood: Unlikely. Reviewer themselves noted information is not lost; this is a defensive design preference."
  • ✓ Kept: "advance_to: chunk_end may exceed individual range ceiling in multi-range scenarios" (blocker, high confidence) — "Impact: High (range marked completed past its actual ceiling, leaving heights between true ceiling and advance_to permanently unscanned — silent data miss), Likelihood: Probable in multi-range/multi-wallet production setups. Two reviewers flagged the same issue independently; matches open thread Axhr0. Calibration: real silent-loss bug → blocker. Merged with finding 10."
  • ✓ Kept: "TOCTOU between pending_rescans() read lock and advance_rescan() write lock in tick()" (warning, medium confidence) — "Impact: Medium-High (stale advance_rescan write or skipped newly-added range), Likelihood: Possible under concurrent block confirmation. Two reviewers flagged independently; matches open thread Axhr-. Merged with finding 9."
  • ✓ Kept: "load_filters acquires filter and header storage read locks sequentially, creating a consistency window during reorgs" (warning, medium confidence) — "Impact: High (wrong-hash FilterMatchKey paired with filter content from a different chain branch), Likelihood: Possible during reorg windows. Reorg detection downstream provides partial mitigation but the backfill advance to chunk_end could still be recorded against misread filter content."
  • ✓ Kept: "Block storage read-lock held across entire async for-loop in BackfillBlocksNeeded handler" (suggestion, high confidence) — "Impact: Medium (latency coupling between backfill cache-check and block writes), Likelihood: Possible under load. Performance concern, not correctness."
  • ✓ Kept: "Missing FFI test for RescanBlockProcessed routing to on_block_processed callback" (warning, high confidence) — "Impact: High (silent FFI dispatch failure would discard backfill records with no signal to caller), Likelihood: Possible if dispatch is regressed. The new RescanBlockProcessed arm at callbacks.rs:1070-1150 has zero coverage despite being the persister-correctness-critical path."
  • ✗ Dropped: "TOCTOU between read and write wallet lock in BackfillWorker::tick()" (ignore, high confidence) — "Duplicate of finding 5 — same TOCTOU issue between pending_rescans() read lock and advance_rescan() write lock."
  • ✗ Dropped: "advance_to set to chunk_end may exceed an individual range's ceiling in multi-range scenarios" (ignore, high confidence) — "Duplicate of finding 4 — same clamping bug."
  • ✗ Dropped: "backfill_tick_skips_chunks_past_available_filter_tip test doesn't verify partial scan occurred" (ignore, high confidence) — "The test at backfill.rs:717-719 already asserts pending[0].resume_from == stored_through + 1 (i.e. == 100), which is exactly the verification the finding asks for. False positive."
  • ✓ Kept: "has_forward guard on update_last_processed is untested" (suggestion, high confidence) — "Impact: Medium (regression in progress-counter invariant could go unnoticed), Likelihood: Possible. Legitimate test-coverage gap; matches open thread AxhsT."

Timing:

  • Parse: 2.0s
  • Review agents: 1175.0s
  • Judge: 185.8s
  • Total: 1362.8s

@manki-review manki-review Bot dismissed their stale review May 9, 2026 01:29

Superseded by new review


@manki-review manki-review Bot left a comment


Most of last round's open findings remain unfixed in the current source — advance_to is still set unclamped to chunk_end, the FFI ConvergenceChanged arm is still a {} no-op, and RescanBlockProcessed still drops advance_to/pool/indexes at the FFI boundary even though it now forwards the records. New round adds a real multi-range correctness bug (the same advance_to clamp issue restated) plus several legitimate test-coverage gaps.

📊 9 findings (1 blocker, 4 warning, 2 suggestion, 2 nitpick) · 2782 lines · 1363s

Review stats
{
  "model": "claude-sonnet-4-6",
  "reviewTimeMs": 1362794,
  "diffLines": 2782,
  "diffAdditions": 2721,
  "diffDeletions": 61,
  "filesReviewed": 22,
  "agents": [
    "Security & Safety",
    "Architecture & Design",
    "Correctness & Logic",
    "Testing & Coverage"
  ],
  "findingsRaw": 35,
  "findingsKept": 9,
  "findingsDropped": 26,
  "severity": {
    "blocker": 1,
    "warning": 4,
    "suggestion": 2,
    "nitpick": 2
  },
  "verdict": "REQUEST_CHANGES",
  "prNumber": 135,
  "commitSha": "57b2f0408135c67c564f0e67c3b023c384b1fd9b",
  "agentMetrics": [
    {
      "name": "Security & Safety",
      "findingsRaw": 6,
      "findingsKept": 0,
      "responseLength": 8221
    },
    {
      "name": "Architecture & Design",
      "findingsRaw": 3,
      "findingsKept": 3,
      "responseLength": 4529
    },
    {
      "name": "Correctness & Logic",
      "findingsRaw": 12,
      "findingsKept": 4,
      "responseLength": 12527
    },
    {
      "name": "Testing & Coverage",
      "findingsRaw": 14,
      "findingsKept": 2,
      "responseLength": 15066
    }
  ],
  "judgeMetrics": {
    "confidenceDistribution": {
      "high": 10,
      "medium": 2,
      "low": 0
    },
    "severityChanges": 12,
    "mergedDuplicates": 0,
    "verdictReason": "required_present"
  },
  "fileMetrics": {
    "fileTypes": {
      ".rs": 21,
      ".toml": 1
    },
    "findingsPerFile": {
      "key-wallet-manager/src/event_tests.rs": 1,
      "dash-spv/src/sync/filters/backfill.rs": 4,
      "dash-spv/src/sync/blocks/sync_manager.rs": 2,
      "dash-spv-ffi/src/callbacks.rs": 1,
      "dash-spv/src/sync/blocks/manager.rs": 1
    }
  },
  "reviewerModel": "claude-sonnet-4-6",
  "judgeModel": "claude-opus-4-7"
}

@@ -1042,3 +1042,125 @@ async fn test_instant_send_lock_event_does_not_carry_addresses_derived_field() {
_ => unreachable!(),

🔴 ⚠️ Warning: push_external_sync_range test helper injects sync range into all account types' external pools

The doc comment says 'Push a pending sync range into the wallet's BIP44 External pool' but the implementation iterates info.accounts_mut().all_accounts_mut() and pushes the range into every External pool of every account type — including IdentityRegistration, masternode, and other accounts that the same test file confirms exist on the default wallet. Tests like test_advance_rescan_emits_convergence_changed_and_is_idempotent then call advance_rescan for only one of those pools, leaving N-1 spurious pending ranges in place. The convergence_height calculation is bounded by the worst-progressed range across ALL pools, so the post-advance convergence assertion (Some(200)) will silently succeed or fail depending on how many accounts have External pools — not on the backfill logic under test. The helper should target the BIP44 Standard account 0 specifically rather than the union of all accounts.

Suggested fix:

    fn push_external_sync_range(
        manager: &mut WalletManager<ManagedWalletInfo>,
        wallet_id: &WalletId,
        indexes: core::ops::Range<u32>,
        since_height: CoreBlockHeight,
    ) {
        let info = manager.get_wallet_info_mut(wallet_id).expect("wallet exists");
        let acct = info
            .accounts
            .standard_bip44_accounts
            .get_mut(&0)
            .expect("BIP44 account 0 must exist on default test wallet");
        for pool in acct.managed_account_type_mut().address_pools_mut() {
            if pool.pool_type == AddressPoolType::External {
                pool.push_sync_range(AddressSyncRange {
                    indexes: indexes.clone(),
                    since_height,
                    caught_up_to: None,
                });
            }
        }
    }
AI context
{
  "file": "key-wallet-manager/src/event_tests.rs",
  "line": 537,
  "severity": "warning",
  "confidence": "high",
  "flaggedBy": [
    "Architecture & Design"
  ],
  "title": "`push_external_sync_range` test helper injects sync range into all account types' external pools",
  "fix": "fn push_external_sync_range(\n    manager: &mut WalletManager<ManagedWalletInfo>,\n    wallet_id: &WalletId,\n    indexes: core::ops::Range<u32>,\n    since_height: CoreBlockHeight,\n) {\n    let info = man",
  "reachability": "reachable"
}

// genesis block has no spendable wallet outputs.
let matches = check_compact_filters_for_addresses(
&filters,
address_vec,

🔴 🚫 Blocker: advance_to: chunk_end may exceed individual range ceiling in multi-range scenarios

chunk_end is clamped to global_max — the maximum ceiling across ALL active ranges. When two ranges have different ceilings (e.g., 79 and 199), a block matched for the lower-ceiling range (ceiling=79) gets advance_to = chunk_end which may be 199. If advance_rescan stores the raw value, caught_up_to for the 79-ceiling range would be set to 199, meaning the range would appear completed up to a height far past its actual backfill window. The same issue affects the no-match advance path on the write-lock section. Fix: clamp per-range — advance_to: chunk_end.min(r.ceiling) — before constructing BackfillAdvance and before calling wallet.advance_rescan.

Suggested fix:

    let advance = BackfillAdvance {
        wallet_id: r.wallet_id,
        pool: r.pool,
        indexes: r.indexes.clone(),
        height,
        advance_to: chunk_end.min(r.ceiling),
    };
    // And in the no-match section:
    wallet.advance_rescan(&wallet_id, pool, indexes, chunk_end.min(r.ceiling));
AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 175,
  "severity": "blocker",
  "confidence": "high",
  "flaggedBy": [
    "Correctness & Logic"
  ],
  "title": "`advance_to: chunk_end` may exceed individual range ceiling in multi-range scenarios",
  "fix": "let advance = BackfillAdvance {\n    wallet_id: r.wallet_id,\n    pool: r.pool,\n    indexes: r.indexes.clone(),\n    height,\n    advance_to: chunk_end.min(r.ceiling),\n};\n// And in the no-match section:\nw",
  "reachability": "reachable"
}
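The failure mode reduces to a few lines. A minimal sketch, assuming a simplified `Range` with only a `ceiling` field (the real `AddressSyncRange` carries more state):

```rust
// Hypothetical, simplified shape of a pending backfill range.
struct Range {
    ceiling: u32,
}

fn advance_for(chunk_end: u32, r: &Range) -> u32 {
    // Clamp so a chunk_end derived from global_max never credits a
    // lower-ceiling range past its actual backfill window.
    chunk_end.min(r.ceiling)
}

fn main() {
    let low = Range { ceiling: 79 };
    let high = Range { ceiling: 199 };
    let chunk_end = 199; // global_max across both active ranges
    assert_eq!(advance_for(chunk_end, &low), 79); // not 199
    assert_eq!(advance_for(chunk_end, &high), 199);
    println!("ok");
}
```

Without the clamp, the low range would record `caught_up_to = 199` and appear complete for heights it never scanned.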

}
}

/// Run one sweep over the union of pending range height windows.

🟠 ⚠️ Warning: TOCTOU between pending_rescans() read lock and advance_rescan() write lock in tick()

tick() reads pending_rescans() under a short-lived read lock, releases it, builds the scan plan, and then acquires a separate write lock to call advance_rescan(). Between these two lock acquisitions a concurrent task (e.g., block confirmation via process_block_for_wallets) may add or remove sync ranges. A newly-added range would be absent from the current sweep's address union, silently missing any filter matches for its addresses in this tick. A removed range would result in advance_rescan being called on a non-existent range — which appears to be safe per test evidence, but wastes the write-lock round trip. The correct fix depends on whether single-writer or reader-writer concurrency is required; at minimum document the intentional gap so reviewers do not silently rely on safety that has not been verified.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 86,
  "severity": "warning",
  "confidence": "medium",
  "flaggedBy": [
    "Correctness & Logic"
  ],
  "title": "TOCTOU between `pending_rescans()` read lock and `advance_rescan()` write lock in `tick()`",
  "reachability": "reachable"
}
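If single-writer concurrency is not acceptable, the removed-range half of the window can be closed by re-validating under the write lock. A sketch under simplified, hypothetical types (range identity reduced to an id, progress to a single counter):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

type RangeId = u32;

// The plan is built under a short read lock, the scan runs lock-free, and
// this write-lock section only advances ranges that still exist, so a range
// dropped between plan and advance is ignored rather than resurrected.
fn advance_if_still_pending(
    pending: &RwLock<HashMap<RangeId, u32>>, // range id -> caught_up_to
    planned: RangeId,
    chunk_end: u32,
) -> bool {
    let mut map = pending.write().unwrap();
    match map.get_mut(&planned) {
        Some(caught_up_to) => {
            *caught_up_to = (*caught_up_to).max(chunk_end);
            true
        }
        // Range was removed between the two lock acquisitions: no-op.
        None => false,
    }
}

fn main() {
    let pending = RwLock::new(HashMap::from([(1, 10)]));
    assert!(advance_if_still_pending(&pending, 1, 40)); // still pending
    assert!(!advance_if_still_pending(&pending, 2, 40)); // dropped: skipped
    assert_eq!(*pending.read().unwrap().get(&1).unwrap(), 40);
    println!("ok");
}
```

The added-range half of the window (a new range missing from the current sweep) is inherently deferred to the next tick under this pattern; that deferral is the part worth documenting.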


/// Drop pending advances whose corresponding sync range is no longer
/// pending or has already advanced past the matched block's height.
/// Keeps `pending_advances` bounded across reorgs, gap-extension churn,

🟠 ⚠️ Warning: load_filters acquires filter and header storage read locks sequentially, creating a consistency window during reorgs

The filter storage read lock is acquired, an await completes, and the lock is dropped before the header storage read lock is taken. During a reorg occurring between the two reads, the filter at height H could correspond to block A while the header returns block B's hash, producing a FilterMatchKey that pairs the wrong hash with the filter. The subsequent block download request would be for the wrong hash; reorg-detection in the block pipeline should catch the mismatch, but a backfill advance to chunk_end could still be recorded against a height whose filter content was misread. Acquiring both locks atomically (e.g., via a composite storage query) or gating the tick on a stable chain height would close this window.

AI context
{
  "file": "dash-spv/src/sync/filters/backfill.rs",
  "line": 245,
  "severity": "warning",
  "confidence": "medium",
  "flaggedBy": [
    "Correctness & Logic"
  ],
  "title": "`load_filters` acquires filter and header storage read locks sequentially, creating a consistency window during reorgs",
  "reachability": "reachable"
}
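The composite-query variant can be sketched with a hypothetical storage shape that holds both maps behind one lock, so the filter and the header hash for a height are read under the same guard:

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

// Hypothetical composite store; the real code keeps filter and header
// storage behind separate locks with an await point between them.
struct ChainStore {
    filters: BTreeMap<u32, Vec<u8>>,
    header_hashes: BTreeMap<u32, [u8; 32]>,
}

fn filter_with_hash(store: &RwLock<ChainStore>, height: u32) -> Option<([u8; 32], Vec<u8>)> {
    // One guard covers both lookups: no gap in which a reorg can swap the
    // header out from under the filter that was just read.
    let s = store.read().unwrap();
    let filter = s.filters.get(&height)?.clone();
    let hash = *s.header_hashes.get(&height)?;
    Some((hash, filter))
}

fn main() {
    let store = RwLock::new(ChainStore {
        filters: BTreeMap::from([(7, vec![0xAB])]),
        header_hashes: BTreeMap::from([(7, [0x11; 32])]),
    });
    let (hash, filter) = filter_with_hash(&store, 7).unwrap();
    assert_eq!(hash, [0x11; 32]);
    assert_eq!(filter, vec![0xAB]);
    assert!(filter_with_hash(&store, 8).is_none());
    println!("ok");
}
```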


let mut to_queue: Vec<(FilterMatchKey, BTreeSet<WalletId>)> =
Vec::with_capacity(blocks.len());
let block_storage = self.block_storage.read().await;

🔴 ✨ Suggestion: Block storage read-lock held across entire async for-loop in BackfillBlocksNeeded handler

The block_storage.read().await lock is held for the duration of the for (key, advances) in blocks loop including every block_storage.load_block(key.height()).await call. For large blocks maps this prevents any write-lock acquirer (e.g., a block-receipt handler persisting a new block) from proceeding until the entire backfill cache-check loop finishes, coupling backfill I/O latency into the block write path. Acquire and drop the read lock per iteration, or collect the heights to check, release the lock, then query each in turn.

AI context
{
  "file": "dash-spv/src/sync/blocks/sync_manager.rs",
  "line": 132,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Correctness & Logic"
  ],
  "title": "Block storage read-lock held across entire async for-loop in `BackfillBlocksNeeded` handler",
  "reachability": "reachable"
}
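The per-iteration pattern is simple to sketch. Assuming a hypothetical block cache behind an `RwLock`, each check takes its own short read lock so a writer persisting a new block can interleave between iterations instead of waiting out the whole loop:

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

// Hypothetical cache: height -> serialized block.
fn cached_heights(storage: &RwLock<BTreeMap<u32, Vec<u8>>>, wanted: &[u32]) -> Vec<u32> {
    let mut hits = Vec::new();
    for &height in wanted {
        // Guard scope is one iteration; it drops before the next check,
        // giving pending writers a chance to acquire the lock.
        let guard = storage.read().unwrap();
        if guard.contains_key(&height) {
            hits.push(height);
        }
    }
    hits
}

fn main() {
    let storage = RwLock::new(BTreeMap::from([(5, vec![1]), (7, vec![2])]));
    assert_eq!(cached_heights(&storage, &[5, 6, 7]), vec![5, 7]);
    println!("ok");
}
```

The collect-then-release variant (snapshot the wanted heights under one short lock, drop it, then query each) trades a small staleness window for even less contention.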

}
}
}
}

🔴 ⚠️ Warning: Missing FFI test for RescanBlockProcessed routing to on_block_processed callback

The two existing tests in callbacks.rs cover BlocksNeeded and BlockProcessed dispatch, but no test exercises the new RescanBlockProcessed arm. If the routing is broken (wrong event matched, callback not invoked, wrong wallet_id/height encoding) backfill records are silently discarded at the FFI boundary with no indication to the caller. This is the only new dispatch path that materially affects persister correctness and has zero test coverage.

Suggested fix: Add a test analogous to `test_block_processed_dispatch_sums_per_wallet_addresses`: construct a `WalletEvent::RescanBlockProcessed` with non-empty `inserted` and a known `wallet_id`/`height`, verify `on_block_processed` fires exactly once, and verify `wallet_id_hex`, `height`, and `inserted_count` are correctly forwarded. Also assert that `on_sync_height_advanced` does NOT fire for the same event.
AI context
{
  "file": "dash-spv-ffi/src/callbacks.rs",
  "line": 1156,
  "severity": "warning",
  "confidence": "high",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "Missing FFI test for RescanBlockProcessed routing to on_block_processed callback",
  "fix": "Add a test analogous to `test_block_processed_dispatch_sums_per_wallet_addresses`: construct a `WalletEvent::RescanBlockProcessed` with non-empty `inserted` and a known `wallet_id`/`height`, verify `o",
  "reachability": "reachable"
}
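The routing property the missing test should pin down can be sketched with hypothetical event and callback names (the real FFI enum and callback signatures differ):

```rust
// Hypothetical, simplified event shape for illustration only.
enum WalletEvent {
    BlockProcessed { height: u32 },
    RescanBlockProcessed { height: u32, inserted: usize },
    SyncHeightAdvanced { height: u32 },
}

fn dispatch(ev: &WalletEvent) -> &'static str {
    match ev {
        // Backfill records route to the same persister callback as forward
        // blocks; they must never look like a forward-edge advance.
        WalletEvent::BlockProcessed { .. } | WalletEvent::RescanBlockProcessed { .. } => {
            "on_block_processed"
        }
        WalletEvent::SyncHeightAdvanced { .. } => "on_sync_height_advanced",
    }
}

fn main() {
    let ev = WalletEvent::RescanBlockProcessed { height: 42, inserted: 3 };
    assert_eq!(dispatch(&ev), "on_block_processed");
    assert_eq!(
        dispatch(&WalletEvent::SyncHeightAdvanced { height: 42 }),
        "on_sync_height_advanced"
    );
    println!("ok");
}
```

A test asserting both arms of this property is what would catch a silently broken dispatch path.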

// Only count new transactions to avoid double-counting during rescans
self.progress.add_transactions(result.new_txids.len() as u32);
self.progress.update_last_processed(height);
// Backfill-only blocks live below the forward edge and must not

🔴 ✨ Suggestion: has_forward guard on update_last_processed is untested

The new if has_forward { self.progress.update_last_processed(height); } guard prevents backfill-only blocks from bumping the forward-edge progress counter backwards. There is no test that verifies a backfill-only block (where has_forward is false) does NOT advance last_processed_height, nor that a mixed block (both forward and backfill wallets) DOES advance it. A regression that drops the guard silently breaks the progress reporting invariant.

Suggested fix: In the `BlocksManager` unit tests, process a block attributed only to backfill advances (empty `interested`, non-empty `backfill_advances`) and assert `progress.last_processed_height()` is unchanged. Process a second block with both forward and backfill wallets and assert it does advance.
AI context
{
  "file": "dash-spv/src/sync/blocks/manager.rs",
  "line": 193,
  "severity": "suggestion",
  "confidence": "high",
  "flaggedBy": [
    "Testing & Coverage"
  ],
  "title": "has_forward guard on update_last_processed is untested",
  "fix": "In the `BlocksManager` unit tests, process a block attributed only to backfill advances (empty `interested`, non-empty `backfill_advances`) and assert `progress.last_processed_height()` is unchanged. ",
  "reachability": "reachable"
}
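The invariant the missing tests should lock in is small. A sketch under hypothetical names (the real `Progress` type and method names differ):

```rust
// Hypothetical progress counter for illustration.
struct Progress {
    last_processed: u32,
}

fn process_block(progress: &mut Progress, height: u32, has_forward: bool) {
    if has_forward {
        // Only blocks attributed to forward sync advance the counter;
        // backfill-only blocks sit below the forward edge.
        progress.last_processed = height;
    }
}

fn main() {
    let mut p = Progress { last_processed: 1000 };
    process_block(&mut p, 400, false); // backfill-only, far below the edge
    assert_eq!(p.last_processed, 1000); // unchanged
    process_block(&mut p, 1001, true); // forward (or mixed) block
    assert_eq!(p.last_processed, 1001); // advanced
    println!("ok");
}
```

Dropping the `has_forward` guard would make the first call rewind `last_processed` from 1000 to 400, which is exactly the regression a unit test should catch.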

Successfully merging this pull request may close these issues: feat(dash-spv): cover gap-extension blind spot in filter sync via per-range backfill