Lelantus strip#1805
Note
Reviews paused: it looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough
This PR removes the Lelantus protocol and associated HD-mint infrastructure and tests, strips liblelantus cryptographic/prover/verifier code, updates consensus/validation, RPC and UI surfaces to Spark-only behavior, and simplifies Spark proof verification to a synchronous flow.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/qt/locale/bitcoin_zh_CN.ts (1)
874-880: ⚠️ Potential issue | 🟡 Minor — Keep this translation aligned with the source string.
Line 879 removes the `Lelantus` reference only in `zh_CN`, but the `<source>` text in this message still explicitly mentions it. That leaves this locale semantically different from the English fallback and the rest of the translations. If the UI copy is meant to be protocol-neutral now, please update the source string and regenerate the locale files instead of patching a single translation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/qt/locale/bitcoin_zh_CN.ts` around lines 874 - 880, The zh_CN translation drops the "Lelantus" reference but the English <source> string still mentions it, causing a semantic mismatch; instead of editing this single translation, update the canonical <source> string (the message starting "Sending funds to a RAP address requires a notification transaction...") to the protocol-neutral wording (remove or generalize the "Lelantus" mention) and then regenerate the locale files so all translations (including src/qt/locale/bitcoin_zh_CN.ts) are in sync; ensure the updated source text is used by the i18n extraction/regeneration tooling so translators can update their locales.
src/chain.h (1)
17-25: ⚠️ Potential issue | 🟡 Minor — Duplicate include of `libspark/coin.h`.
The header `libspark/coin.h` is included twice: once at line 17 and again at line 25. This appears to be a leftover from replacing `liblelantus/coin.h` with `libspark/coin.h`. Remove one of the duplicate includes.
🔧 Proposed fix
```diff
 #include <secp256k1/include/GroupElement.h>
-#include "libspark/coin.h"
 #include "evo/spork.h"
 #include "firo_params.h"
 #include "util.h"
```
Or alternatively, remove line 25 if line 17 is the intended location.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/chain.h` around lines 17 - 25, Remove the duplicate include of "libspark/coin.h" in the header list: keep a single `#include` "libspark/coin.h" and delete the second occurrence so the file only includes that header once (reference: the duplicated "libspark/coin.h" include in the top-of-file includes block).
src/evo/spork.cpp (1)
54-68: ⚠️ Potential issue | 🔴 Critical — This removes historical Lelantus spork enforcement.
`ConnectBlock()` still sends blocks through `sporkManager->IsTransactionAllowed()` / `IsBlockAllowed()` inside the evo-spork window (src/validation.cpp:2875-2890), and validation.cpp:2700-2745 still routes Lelantus mints/join-splits through that path. Dropping the Lelantus branches here means old `featureLelantus` / transparent-limit sporks stop participating in revalidation, which is consensus-visible for historical blocks. Please keep the legacy checks until the chain no longer depends on those spork states.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/evo/spork.cpp` around lines 54 - 68, The change removed legacy Lelantus spork checks from IsTransactionAllowed(const CTransaction &tx, const ActiveSporkMap &sporkMap, CValidationState &state); restore the historical branches so Lelantus mints/join-splits are still validated against featureLelantus and any transparent-limit sporks during evo-spork window. Specifically, in IsTransactionAllowed keep the original conditional branches that check tx.IsLelantusMint()/tx.IsLelantusJoinSplit() (or equivalent methods used previously) against sporkMap entries for CSporkAction::featureLelantus and its transparent limit spork, mirroring how Spark handling is done (use sporkMap.find(...) and compare amounts against limitSpork->second.second) so ConnectBlock()/sporkManager->IsTransactionAllowed() continues to enforce legacy Lelantus sporks for historical blocks.
src/spark/state.cpp (1)
689-815: ⚠️ Potential issue | 🟠 Major — Expensive Spark proof verification runs synchronously on the mempool validation thread for recent transactions.
The variable `scheduledAsync` is declared on line 772 but never assigned a true value anywhere in the codebase. This means non-batched spends always execute `spark::SpendTransaction::verify()` synchronously on line 813. Batching only activates for blocks older than 1 day (`fCollectProofs` set in validation.cpp lines 2670, 3781), which occurs during sync/reindex scenarios. For normal mempool validation of recent transactions, `useBatching` is false, forcing expensive proof verification inline on the validation thread. This weakens DoS resistance by allowing malicious transactions to monopolize CPU during proof verification.
The code comment on line 812 ("defense-in-depth") confirms this is a fallback after async dispatch was removed.
src/wallet/rpcwallet.cpp (1)
1668-1683: ⚠️ Potential issue | 🔴 Critical — Don't drop transparent funds from `gettotalbalance`.
This now returns only the Spark available balance, so transparent UTXOs disappear from the RPC result, and wallets without a Spark wallet will error instead of reporting their total funds.
Suggested fix
```diff
 UniValue gettotalbalance(const JSONRPCRequest& request)
 {
     CWallet * const pwallet = GetWalletForJSONRPCRequest(request);
     if (!EnsureWalletIsAvailable(pwallet, request.fHelp)) {
         return NullUniValue;
@@
-    EnsureSparkWalletIsAvailable();
     LOCK2(cs_main, pwallet->cs_wallet);
-    return ValueFromAmount(pwallet->sparkWallet->getAvailableBalance());
+    CAmount sparkBalance = 0;
+    if (pwallet->sparkWallet) {
+        sparkBalance = pwallet->sparkWallet->getAvailableBalance();
+    }
+
+    return ValueFromAmount(pwallet->GetBalance() + sparkBalance);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/rpcwallet.cpp` around lines 1668 - 1683, The current implementation always calls EnsureSparkWalletIsAvailable() and returns only pwallet->sparkWallet->getAvailableBalance(), which drops transparent UTXOs and errors when no Spark wallet exists; change the logic to compute a total = transparent_balance + spark_balance where transparent_balance is obtained from the wallet (use pwallet->GetBalance() or the wallet's appropriate method to read confirmed transparent UTXOs) and spark_balance is 0 if pwallet->sparkWallet is null otherwise pwallet->sparkWallet->getAvailableBalance(); remove or skip the EnsureSparkWalletIsAvailable() call so absence of a Spark wallet doesn't error, sum both amounts and return ValueFromAmount(total).
src/validation.cpp (1)
2337-2344: ⚠️ Potential issue | 🟠 Major — Historical Lelantus fees still need to reach `nFees` for supply accounting.
Using the coinbase value as the reward cap fixes the overpayment check, but it does not fix the later `AddTotalSupply(block.vtx[0]->GetValueOut() - nFees)` / disconnect mirror. With Lelantus join-split fees no longer added to `nFees`, a reindex or reorg on an `-addressindex` node will overstate supply by the sum of those historical fees.
Also applies to: 2801-2809
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/validation.cpp` around lines 2337 - 2344, The code currently skips adding fees for Spark/Lelantus spends when spark::ParseSparkSpend(tx) throws, which causes supply accounting mismatches (AddTotalSupply(block.vtx[0]->GetValueOut() - nFees)). Fix by adding a fallback that still increments nFees for historical Lelantus join-split spends when tx.IsSparkSpend() but ParseSparkSpend throws: inside the catch block compute the historical fee (e.g. via a dedicated helper such as spark::GetHistoricalLelantusFee(tx) or by deriving fee from the transaction’s inputs/outputs in the legacy format) and add that value to nFees so AddTotalSupply sees the same total fees; ensure the fallback helper is referenced/implemented and used wherever spark::ParseSparkSpend(tx).getFee() is used.
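The fallback described in the prompt can be sketched as follows. `ParseSparkSpendFee` and `GetHistoricalLelantusFee` are hypothetical stand-ins (the real parser is `spark::ParseSparkSpend(tx).getFee()`, and the fallback helper does not exist yet), and `CAmount` is aliased locally so the sketch is self-contained.

```cpp
#include <cstdint>
#include <stdexcept>

using CAmount = std::int64_t;

// Hypothetical stand-in: throws for legacy Lelantus join-splits that the
// Spark parser no longer understands.
static CAmount ParseSparkSpendFee(bool legacyTx) {
    if (legacyTx) throw std::runtime_error("not a Spark spend");
    return 1000;
}

// Hypothetical helper deriving the fee from the legacy serialization.
static CAmount GetHistoricalLelantusFee(bool /*legacyTx*/) {
    return 5000;
}

// Sketch of the suggested fix: on parse failure, fall back to the historical
// fee so nFees (and the later AddTotalSupply accounting) stays consistent.
static CAmount AccumulateFee(CAmount nFees, bool legacyTx) {
    try {
        nFees += ParseSparkSpendFee(legacyTx);
    } catch (const std::exception&) {
        nFees += GetHistoricalLelantusFee(legacyTx);
    }
    return nFees;
}
```

The key property is that both the connect and disconnect paths call the same accumulator, so the supply total they mirror stays symmetric.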
🧹 Nitpick comments (5)
qa/rpc-tests/llmq-is-spark.py (1)
43-43: Optional: Remove unnecessary semicolon.
Python doesn't require semicolons after statements. While valid, removing it would be more idiomatic.
♻️ Proposed fix
```diff
-        break;
+        break
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@qa/rpc-tests/llmq-is-spark.py` at line 43, Remove the unnecessary semicolon after the break statement in qa/rpc-tests/llmq-is-spark.py (the line containing "break;"); simply change the "break;" to "break" to follow Python idioms and eliminate the redundant character within the loop/block where break is used.
src/llmq/quorums_instantsend.cpp (1)
1598-1604: Inconsistent call sites still reference `IsLelantusJoinSplit()`.
With Lelantus support removed from `AdaptPrivateTx`, the unchanged call sites at lines 1016 and 1525 now have dead code:
```cpp
// Lines 1016 and 1525:
CTransaction const & tx{(tx_.IsLelantusJoinSplit() || tx_.IsSparkSpend()) ? isutils::AdaptPrivateTx(tx_) : tx_};
```
The `IsLelantusJoinSplit()` check is now meaningless since `AdaptPrivateTx` returns the transaction unchanged for non-Spark spends. These should be simplified to only check `IsSparkSpend()`.
♻️ Suggested simplification for lines 1016 and 1525
```diff
-CTransaction const & tx{(tx_.IsLelantusJoinSplit() || tx_.IsSparkSpend()) ? isutils::AdaptPrivateTx(tx_) : tx_};
+CTransaction const & tx{tx_.IsSparkSpend() ? isutils::AdaptPrivateTx(tx_) : tx_};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llmq/quorums_instantsend.cpp` around lines 1598 - 1604, Call sites still test tx_.IsLelantusJoinSplit() even though AdaptPrivateTx only handles Spark spends; update both usages that currently do (tx_.IsLelantusJoinSplit() || tx_.IsSparkSpend()) ? isutils::AdaptPrivateTx(tx_) : tx_ to only check tx_.IsSparkSpend(), i.e. replace the whole boolean condition with tx_.IsSparkSpend(), so AdaptPrivateTx is only invoked for Spark spends and the redundant Lelantus check is removed (references: AdaptPrivateTx, isutils::AdaptPrivateTx, IsLelantusJoinSplit, IsSparkSpend).
src/wallet/txbuilder.cpp (1)
105-106: Comment text is inaccurate for the actual initialization.
The code initializes `fee` with `payTxFee.GetFeePerK()`, so it does not start with "no fee." Please adjust the comment wording.
Verify each finding against the current code and only fix it if needed. In `@src/wallet/txbuilder.cpp` around lines 105 - 106, The comment above the fee loop is misleading: it says "Start with no fee" but the loop actually initializes fee with payTxFee.GetFeePerK(); update the comment to accurately reflect that fee starts at payTxFee.GetFeePerK() (e.g., "Start with base fee from payTxFee.GetFeePerK() and loop until there is enough fee") so readers can match the comment to the initialization in the for(...) that assigns fee.
src/libspark/coin.h (1)
237-240: Prefer qualified names over `using namespace` in this header.
This is a public header, so adding another namespace-wide import makes symbol collisions harder to reason about for every includer. Qualifying `secp_primitives::GroupElement` directly is safer here. As per coding guidelines: "Avoid `using namespace` declarations."
Verify each finding against the current code and only fix it if needed. In `@src/libspark/coin.h` around lines 237 - 240, Remove the "using namespace secp_primitives;" declaration from the header and replace usages of unqualified secp_primitives symbols with fully qualified names (e.g., change GroupElement to secp_primitives::GroupElement and similarly qualify any other types/functions from secp_primitives referenced in this file, including inside namespace lelantus and in class/struct/function declarations like any constructors, methods, or typedefs that currently rely on the using directive).
src/qt/overviewpage.cpp (1)
201-253: Prefer layout policies over per-resize fixed geometry here.
This handler hard-codes child widths/heights from the top-level window size and clamps text labels to 20px tall. That is brittle for translations and HiDPI font metrics, and it will fight Qt's layout engine on every resize. Prefer stretch factors and size policies in the `.ui` layout instead. Please verify this page with long translations and 125%-200% scaling before merging.
Verify each finding against the current code and only fix it if needed. In `@src/qt/overviewpage.cpp` around lines 201 - 253, The resize handler (OverviewPage::resizeEvent) is forcing fixed widths/heights on widgets (ui->labelBalance, ui->labelUnconfirmed, ui->anonymizeButton, etc.) which fights Qt layouts and breaks translations/HiDPI; remove per-resize geometry adjustments and instead configure the layout in the .ui: set appropriate QHBoxLayout/QVBoxLayout stretch factors, use QSizePolicy::Preferred/Minimum/Expanding for labels and buttons, and avoid setFixedWidth/setFixedHeight calls (replace with setMinimumWidth only where truly needed); keep or adapt adjustTextSize only if it computes font scaling correctly for HiDPI and translations, and after making layout changes verify the page with long translations and 125%–200% scaling to confirm no clipping.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 11: Replace the unhyphenated compound adjective "privacy focused" with
the hyphenated form "privacy-focused" in the README sentence containing the Firo
description (the phrase "privacy focused cryptocurrency" should be updated to
"privacy-focused cryptocurrency"); update the one occurrence of that phrase to
maintain correct grammar and readability.
In `@src/bip47/account.cpp`:
- Around line 251-256: The change unconditionally returns false from
CAccountReceiver::acceptMaskedPayload, disabling BIP47 notifications; restore
behavior by either re-routing callers in src/wallet/wallet.cpp to the surviving
overload acceptMaskedPayload(maskedPayload, outpoint, outpointPubkey) or
implement a Spark-era body for CAccountReceiver::acceptMaskedPayload that parses
the maskedPayload and invokes acceptPcode() (and flips the same success/path the
original did) so the wallet persists channels and derives addresses; update the
wallet.cpp callsite(s) to use the outpoint/outpointPubkey overload if Lelantus
parsing is truly removed, or reintroduce equivalent parsing and success handling
in CAccountReceiver::acceptMaskedPayload so acceptPcode() runs.
In `@src/libspark/coin.h`:
- Around line 176-193: The Serialize/Unserialize in Coin (functions Serialize
and template Unserialize) currently reserve sizeof(int32_t) but only copy the
single-byte CoinDenomination (std::uint8_t) into that space, leaking
uninitialized padding; fix by writing/reading an explicit int32_t value for
denomination (e.g. cast/assign denomination into a local int32_t and memcpy that
into buffer before s.write, and read into an int32_t then assign back to
denomination in Unserialize) so the serialized layout is deterministic and
matches the legacy 4-byte layout referenced (keep GroupElement::memoryRequired
and value.serialize/value.deserialize usage unchanged).
- Around line 172-197: The sigma namespace code references GroupElement and
Scalar but those names are out of scope inside namespace sigma; qualify them as
secp_primitives::GroupElement and secp_primitives::Scalar (or add using
secp_primitives::GroupElement; using secp_primitives::Scalar; at the top of
namespace sigma) for PublicCoin, CScalarHash, and spend_info_container; also fix
PublicCoin::Serialize() and Unserialize() so the denomination size matches what
is written/read (either allocate size + sizeof(CoinDenomination) and write
exactly size + sizeof(CoinDenomination), or explicitly zero/pack a 4-byte
int32_t for denomination consistently in both Serialize and Unserialize) so no
uninitialized bytes are written/read.
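The deterministic 4-byte layout the prompt asks for can be sketched in isolation. The buffer and helper names below are illustrative, not the actual `Coin::Serialize`/`Unserialize` members; the point is widening the one-byte denomination to an explicit `int32_t` before copying, so every reserved byte is defined.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Widen the one-byte denomination to an explicit int32_t before writing,
// so all four reserved bytes are zero-initialized and no uninitialized
// padding leaks into the serialized stream.
static std::vector<unsigned char> SerializeDenomination(std::uint8_t denomination) {
    std::int32_t wide = static_cast<std::int32_t>(denomination);
    std::vector<unsigned char> buffer(sizeof(std::int32_t), 0);
    std::memcpy(buffer.data(), &wide, sizeof(wide));
    return buffer;
}

// Read the int32_t back and narrow it to the stored denomination.
static std::uint8_t DeserializeDenomination(const std::vector<unsigned char>& buffer) {
    std::int32_t wide = 0;
    std::memcpy(&wide, buffer.data(), sizeof(wide));
    return static_cast<std::uint8_t>(wide);
}
```

This keeps the legacy 4-byte on-disk layout while making the written bytes deterministic; for a cross-platform consensus format the real code should also pin the byte order rather than rely on host endianness as this sketch does.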
In `@src/llmq/quorums_instantsend.cpp`:
- Around line 493-502: The loop in the IsSparkSpend check repeatedly acquires
LOCK(cs_main) for every CTxIn; hoist the LOCK(cs_main) so it is taken once
before iterating tx.vin to avoid repeated lock acquisition. Specifically, around
the block using lTag.deserialize(&in.scriptSig.front()) and
spark::CSparkState::GetState()->IsUsedLTag(lTag), take LOCK(cs_main) once
(outside the for loop), then iterate through CTxIn entries and call IsUsedLTag,
returning false if any tag is used and true afterwards.
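The restructuring described above can be sketched with a plain mutex and set standing in for `cs_main` and the Spark state's used-tag set (the real types are Firo's lock and `spark::CSparkState`; the names here are illustrative).

```cpp
#include <mutex>
#include <set>
#include <vector>

// Illustrative stand-ins for cs_main and CSparkState's used linking tags.
static std::mutex cs_main_like;
static std::set<int> usedLTags{7};

// Sketch of the suggested restructuring: acquire the lock once, then scan
// every input's linking tag, instead of re-locking inside the loop per CTxIn.
static bool AllTagsUnused(const std::vector<int>& txInputTags) {
    std::lock_guard<std::mutex> lock(cs_main_like); // taken once, not per input
    for (int tag : txInputTags) {
        if (usedLTags.count(tag)) return false; // tag already spent
    }
    return true;
}
```

Holding the lock across the whole scan also makes the check atomic with respect to concurrent state updates, which the per-input locking version did not guarantee.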
In `@src/rpc/misc.cpp`:
- Around line 1075-1098: The RPC handlers getanonymityset, getmintmetadata,
getusedcoinserials, and getlatestcoinid currently remain listed in the public
commands[] table but always throw, so they should not be advertised; update the
commands[] registration to either remove these entries (if fully removed) or
mark them as hidden/deprecated (using the existing RPC registration
flags/mechanism your project uses) so they no longer appear in capability
discovery/help, and ensure the implementations (getanonymityset,
getmintmetadata, getusedcoinserials, getlatestcoinid) are either unregistered or
updated to use the deprecation path rather than being publicly advertised as
supported.
In `@src/spark/sparkwallet.cpp`:
- Line 85: The Spark payload parsing and wallet recovery are running
synchronously on the validation path (mempool admission / block connection) and
performing CWalletDB I/O there; change these code paths to offload heavy work to
the background thread pool instead of blocking validation. Specifically, ensure
threadPool is initialized (make threadPool non-null) and replace direct calls
that parse Spark payloads, identify/recover coins, or perform synchronous
CWalletDB writes (the methods invoked from src/validation.cpp) with tasks posted
to threadPool; move wallet DB writes and full-view-key recovery logic into those
background tasks and return quickly from the validation-call sites so validation
does not perform wallet I/O or long-running recovery synchronously.
In `@src/spark/state.cpp`:
- Around line 473-488: The block total uses size_t which can overflow on 32-bit
builds; change the accumulator blockSpendsValue in CheckSparkBlock to use
CAmount (int64_t) to match GetSpendTransparentAmount() and
consensus.GetMaxValueSparkSpendPerBlock()/GetMaxValueSparkSpendPerTransaction(),
and keep all comparisons using cmp::greater or equivalent overloads that accept
CAmount to avoid architecture-dependent overflow; update any related variable
names/usages in CheckSparkBlock accordingly.
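The accumulator change can be sketched as follows; `ExceedsBlockLimit` is a hypothetical helper, not the real `CheckSparkBlock`, but it shows why the type matters: `CAmount` (`int64_t`) is 64 bits on every platform, while `size_t` is 32 bits on 32-bit builds and could wrap when per-transaction amounts are summed over a block.

```cpp
#include <cstdint>
#include <vector>

using CAmount = std::int64_t;

// Sum per-transaction transparent spend amounts for a block using a 64-bit
// accumulator, checking the cap as we go so the running total never needs
// to exceed maxPerBlock by more than one transaction's amount.
static bool ExceedsBlockLimit(const std::vector<CAmount>& spendAmounts, CAmount maxPerBlock) {
    CAmount blockSpendsValue = 0;
    for (CAmount v : spendAmounts) {
        blockSpendsValue += v;
        if (blockSpendsValue > maxPerBlock) return true; // reject early
    }
    return false;
}
```

With a `size_t` accumulator on a 32-bit build, two spends of about 3 billion satoshis each would wrap past 2^32 and slip under the cap; the `CAmount` version rejects them.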
In `@src/validation.cpp`:
- Around line 696-702: Remove the consensus behavior depending on local
sync/param gates: in the Sigma check (the if using
::Params().GetConsensus().nLelantusStartBlock) drop the "nLelantusStartBlock >
1" condition so the rejection only depends on !isVerifyDB and the height
comparison; in the Lelantus checks for
tx.IsLelantusJoinSplit()/tx.IsLelantusMint() remove the
"!IsInitialBlockDownload()" clause so the state.DoS(...) decision only depends
on !isVerifyDB and the nHeight >= (::Params().GetConsensus().nSparkStartBlock +
5) comparison. Ensure you edit the if conditions around state.DoS(...) and leave
isVerifyDB as the historical/reindex escape hatch.
- Around line 4197-4200: The code currently sets block.fChecked when fCheckPOW
&& fCheckMerkleRoot before calling spark::CheckSparkBlock, which causes
Spark-specific failures to be cached; change the logic so that block.fChecked is
only set after spark::CheckSparkBlock(state, block, nHeight) returns
successfully (i.e., move the block.fChecked = true assignment to after the
spark::CheckSparkBlock call and only set it if that call succeeds and no error
state is returned).
In `@src/wallet/rpcdump.cpp`:
- Around line 578-580: The call to sparkWallet->ListSparkMints() inside the
fMintUpdate branch is a no-op because CSparkWallet::ListSparkMints() is a const
accessor that returns a vector which is ignored; either remove this call or use
its return value (e.g., auto mints = sparkWallet->ListSparkMints()) if you
intend to inspect mints, and do not perform a second full rescan via
ScanForWalletTransactions(chainActive.Genesis()) unless necessary; if
post-import Spark bookkeeping (rebuild/sync/refresh) is required, replace the
ListSparkMints() call with the appropriate Spark API method on sparkWallet that
performs that bookkeeping (rather than the read-only accessor) while keeping the
ScanForWalletTransactions usage limited to the intended rescan flow.
In `@src/wallet/rpcwallet.cpp`:
- Around line 65-70: EnsureSparkWalletIsAvailable currently throws
JSONRPCError(RPC_WALLET_ERROR, "spark mint is not allowed for legacy wallet")
which is misleading for non-mint RPCs; update the thrown error message in
EnsureSparkWalletIsAvailable (the check using pwalletMain and
pwalletMain->sparkWallet) to a generic Spark-unavailable message (e.g. "spark
wallet is not available for this wallet") while keeping the same JSONRPCError
type and RPC_WALLET_ERROR code so all callers (balance/signing/address queries
and mint) receive a correct, generic error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: aaa1e246-c6c5-47c8-86e4-fee54f93119a
⛔ Files ignored due to path filters (1)
contrib/flathub/org.firo.firo-qt.metainfo.xml is excluded by `!**/*.xml`
📒 Files selected for processing (144)
README.md qa/pull-tester/rpc-tests.py qa/rpc-tests/bip47-sendreceive.py qa/rpc-tests/lelantus_mint.py
qa/rpc-tests/lelantus_mintspend.py qa/rpc-tests/lelantus_setmintstatus_validation.py qa/rpc-tests/lelantus_spend_gettransaction.py
qa/rpc-tests/llmq-is-spark.py qa/rpc-tests/wallet-dump.py src/CMakeLists.txt src/batchproof_container.cpp src/batchproof_container.h
src/bip47/account.cpp src/bip47/secretpoint.cpp src/bloom.cpp src/chain.h src/chainparams.cpp src/coin_containers.cpp src/coin_containers.h
src/evo/specialtx.cpp src/evo/spork.cpp src/evo/spork.h src/hdmint/hdmint.cpp src/hdmint/hdmint.h src/hdmint/mintpool.cpp src/hdmint/mintpool.h
src/hdmint/test/lelantus_tests.cpp src/hdmint/tracker.cpp src/hdmint/tracker.h src/hdmint/wallet.cpp src/hdmint/wallet.h src/init.cpp src/init.h
src/lelantus.cpp src/lelantus.h src/liblelantus/challenge_generator.h src/liblelantus/challenge_generator_impl.h src/liblelantus/coin.cpp
src/liblelantus/coin.h src/liblelantus/innerproduct_proof.h src/liblelantus/innerproduct_proof_generator.cpp
src/liblelantus/innerproduct_proof_generator.h src/liblelantus/innerproduct_proof_verifier.cpp src/liblelantus/innerproduct_proof_verifier.h
src/liblelantus/joinsplit.cpp src/liblelantus/joinsplit.h src/liblelantus/lelantus_primitives.cpp src/liblelantus/lelantus_primitives.h
src/liblelantus/lelantus_proof.h src/liblelantus/lelantus_prover.cpp src/liblelantus/lelantus_prover.h src/liblelantus/lelantus_verifier.cpp
src/liblelantus/lelantus_verifier.h src/liblelantus/openssl_context.h src/liblelantus/params.cpp src/liblelantus/params.h
src/liblelantus/range_proof.h src/liblelantus/range_prover.cpp src/liblelantus/range_prover.h src/liblelantus/range_verifier.cpp
src/liblelantus/range_verifier.h src/liblelantus/schnorr_proof.h src/liblelantus/schnorr_prover.cpp src/liblelantus/schnorr_prover.h
src/liblelantus/schnorr_verifier.cpp src/liblelantus/schnorr_verifier.h src/liblelantus/sigmaextended_proof.h
src/liblelantus/sigmaextended_prover.cpp src/liblelantus/sigmaextended_prover.h src/liblelantus/sigmaextended_verifier.cpp
src/liblelantus/sigmaextended_verifier.h src/liblelantus/spend_metadata.cpp src/liblelantus/spend_metadata.h
src/liblelantus/test/challenge_generator_tests.cpp src/liblelantus/test/coin_tests.cpp src/liblelantus/test/inner_product_test.cpp
src/liblelantus/test/joinsplit_tests.cpp src/liblelantus/test/lelantus_primitives_tests.cpp src/liblelantus/test/lelantus_test.cpp
src/liblelantus/test/lelantus_test_fixture.cpp src/liblelantus/test/lelantus_test_fixture.h src/liblelantus/test/range_proof_test.cpp
src/liblelantus/test/schnorr_test.cpp src/liblelantus/test/serialize_test.cpp src/liblelantus/test/sigma_extended_test.cpp
src/liblelantus/threadpool.h src/libspark/coin.h src/libspark/grootle.h src/libspark/spend_transaction.h src/llmq/quorums_instantsend.cpp
src/miner.cpp src/primitives/mint_spend.cpp src/primitives/mint_spend.h src/qt/bitcoin.qrc src/qt/bitcoingui.cpp src/qt/bitcoingui.h
src/qt/coincontroldialog.cpp src/qt/forms/optionsdialog.ui src/qt/locale/bitcoin_zh_CN.ts src/qt/optionsdialog.cpp src/qt/optionsmodel.cpp
src/qt/optionsmodel.h src/qt/overviewpage.cpp src/qt/overviewpage.h src/qt/res/css/firo.css src/qt/sendcoinsdialog.cpp src/qt/sparkmodel.cpp
src/qt/transactiondesc.cpp src/qt/transactionrecord.cpp src/qt/walletmodel.cpp src/qt/walletmodel.h src/qt/walletmodeltransaction.cpp
src/qt/walletmodeltransaction.h src/rpc/client.cpp src/rpc/misc.cpp src/rpc/rawtransaction.cpp src/rpc/rpcevo.cpp src/spark/sparkwallet.cpp
src/spark/state.cpp src/spark/state.h src/test/CMakeLists.txt src/test/fixtures.cpp src/test/fixtures.h src/test/lelantus_mintspend_test.cpp
src/test/lelantus_state_tests.cpp src/test/lelantus_tests.cpp src/test/test_bitcoin.cpp src/txmempool.cpp src/txmempool.h src/validation.cpp
src/validation.h src/wallet/CMakeLists.txt src/wallet/crypter.cpp src/wallet/lelantusjoinsplitbuilder.cpp src/wallet/lelantusjoinsplitbuilder.h
src/wallet/rpcdump.cpp src/wallet/rpcwallet.cpp src/wallet/test/CMakeLists.txt src/wallet/test/lelantus_tests.cpp src/wallet/txbuilder.cpp
src/wallet/wallet.cpp src/wallet/wallet.h src/wallet/walletdb.cpp src/wallet/walletdb.h
💤 Files with no reviewable changes (104)
- src/bip47/secretpoint.cpp
- src/init.h
- src/rpc/rpcevo.cpp
- src/CMakeLists.txt
- src/liblelantus/params.cpp
- src/wallet/test/CMakeLists.txt
- src/hdmint/hdmint.cpp
- src/test/test_bitcoin.cpp
- src/liblelantus/spend_metadata.cpp
- src/liblelantus/joinsplit.h
- src/qt/sparkmodel.cpp
- qa/rpc-tests/lelantus_mintspend.py
- src/bloom.cpp
- qa/rpc-tests/lelantus_spend_gettransaction.py
- src/liblelantus/lelantus_verifier.cpp
- src/liblelantus/lelantus_primitives.cpp
- qa/pull-tester/rpc-tests.py
- src/evo/spork.h
- src/liblelantus/test/challenge_generator_tests.cpp
- src/qt/walletmodeltransaction.cpp
- src/wallet/crypter.cpp
- src/qt/bitcoingui.cpp
- src/rpc/rawtransaction.cpp
- src/qt/bitcoin.qrc
- src/liblelantus/test/lelantus_primitives_tests.cpp
- src/liblelantus/openssl_context.h
- src/liblelantus/test/lelantus_test.cpp
- src/liblelantus/coin.cpp
- src/qt/res/css/firo.css
- src/primitives/mint_spend.cpp
- src/txmempool.cpp
- src/test/lelantus_tests.cpp
- src/qt/walletmodel.h
- src/liblelantus/test/schnorr_test.cpp
- qa/rpc-tests/lelantus_setmintstatus_validation.py
- src/liblelantus/sigmaextended_prover.cpp
- src/hdmint/mintpool.cpp
- src/test/CMakeLists.txt
- src/liblelantus/innerproduct_proof.h
- src/liblelantus/test/lelantus_test_fixture.cpp
- src/liblelantus/test/serialize_test.cpp
- src/qt/walletmodel.cpp
- src/liblelantus/test/sigma_extended_test.cpp
- src/hdmint/hdmint.h
- src/qt/forms/optionsdialog.ui
- src/liblelantus/test/range_proof_test.cpp
- src/liblelantus/test/inner_product_test.cpp
- src/wallet/CMakeLists.txt
- src/qt/bitcoingui.h
- src/batchproof_container.cpp
- qa/rpc-tests/lelantus_mint.py
- src/liblelantus/schnorr_verifier.h
- src/qt/optionsmodel.h
- src/lelantus.cpp
- src/qt/optionsmodel.cpp
- src/liblelantus/schnorr_prover.cpp
- src/liblelantus/lelantus_prover.cpp
- src/liblelantus/lelantus_proof.h
- src/liblelantus/challenge_generator.h
- src/liblelantus/schnorr_prover.h
- src/liblelantus/schnorr_verifier.cpp
- qa/rpc-tests/bip47-sendreceive.py
- src/miner.cpp
- src/liblelantus/innerproduct_proof_verifier.h
- src/hdmint/test/lelantus_tests.cpp
- src/liblelantus/spend_metadata.h
- src/hdmint/tracker.h
- src/wallet/test/lelantus_tests.cpp
- src/liblelantus/sigmaextended_proof.h
- src/liblelantus/params.h
- src/liblelantus/range_proof.h
- src/test/lelantus_mintspend_test.cpp
- src/liblelantus/sigmaextended_verifier.cpp
- src/liblelantus/test/coin_tests.cpp
- src/liblelantus/schnorr_proof.h
- src/liblelantus/challenge_generator_impl.h
- src/liblelantus/innerproduct_proof_generator.h
- src/liblelantus/lelantus_prover.h
- src/wallet/lelantusjoinsplitbuilder.h
- src/liblelantus/range_prover.h
- src/liblelantus/range_verifier.cpp
- src/liblelantus/innerproduct_proof_verifier.cpp
- src/liblelantus/range_verifier.h
- src/liblelantus/test/joinsplit_tests.cpp
- src/liblelantus/test/lelantus_test_fixture.h
- src/liblelantus/joinsplit.cpp
- src/liblelantus/range_prover.cpp
- src/test/fixtures.h
- src/liblelantus/sigmaextended_prover.h
- src/liblelantus/threadpool.h
- src/wallet/lelantusjoinsplitbuilder.cpp
- src/hdmint/wallet.cpp
- src/batchproof_container.h
- src/liblelantus/sigmaextended_verifier.h
- src/hdmint/mintpool.h
- src/liblelantus/coin.h
- src/test/fixtures.cpp
- src/test/lelantus_state_tests.cpp
- src/liblelantus/lelantus_verifier.h
- src/liblelantus/lelantus_primitives.h
- src/hdmint/wallet.h
- src/liblelantus/innerproduct_proof_generator.cpp
- src/lelantus.h
- src/hdmint/tracker.cpp
PR #1805 Review: Lelantus Strip

145 files changed, +439 / -18,081 — Full removal of Lelantus protocol code, replacing with Spark equivalents.

Static Analysis Findings

Clean — cppcheck and flawfinder found no issues in the changed files (most changes are deletions).

CRITICAL Issues

1. ConnectBlock fee accounting gap
2. Incomplete Lelantus state cleanup
3. Missing historical verification path
4. RPC
5. Wallet migration for Lelantus balances

HIGH Issues

6. Lelantus tests removed but no Spark equivalents added
7.
8. Masternode/LLMQ interaction
9. GUI wallet Lelantus references

MEDIUM Issues

10. Build system cleanup
11. Documentation references
12. Protocol version bump
13. Log message cleanup

Summary

This is a massive, high-risk removal. The deletion itself is straightforward, but the transition safety (historical validation, wallet migration, fee accounting continuity) needs careful verification before merge. Recommend addressing Critical items 1, 3, and 5 before merge.

Review assisted by AI + static analysis (cppcheck, flawfinder, clang-tidy). Posted by @reubenyap.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/llmq/quorums_instantsend.cpp (1)
1598-1604: ⚠️ Potential issue | 🟡 Minor

Lelantus removal from AdaptPrivateTx is consistent with grace period but lacks safeguard documentation.

The `AdaptPrivateTx` function now only handles Spark spends. The Lelantus adaptation (via `AdaptJsplitTx`) was removed entirely from the codebase, which aligns with the grace period ending January 30, 2026.

Since we're now in March 2026 (past the grace period), Lelantus transactions are actively rejected from the mempool (validation.cpp:840-842 checks `chainActive.Height() >= nLelantusGracefulPeriod`). However, `SyncTransaction` is called for both current blocks and reorg'd blocks (validation.cpp:3779). A deep reorg to pre-grace-period blocks could theoretically resurrect Lelantus transactions, which would then pass through `AdaptPrivateTx` unchanged instead of being adapted.

While this is an extremely low-probability edge case given the grace period has already passed, consider adding a comment explaining the intentional removal of Lelantus handling to prevent future confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llmq/quorums_instantsend.cpp` around lines 1598 - 1604, AdaptPrivateTx now only handles Spark spends via AdaptSparkTx and intentionally omits the former Lelantus handling (AdaptJsplitTx); add a concise comment above AdaptPrivateTx that documents this intentional removal, noting the grace period end (nLelantusGracefulPeriod) and the mempool rejection check (chainActive.Height() >= nLelantusGracefulPeriod in validation logic), and mention the low-probability reorg edge case where SyncTransaction could encounter pre-grace-period blocks—clarify that Lelantus is deliberately not adapted here to avoid confusion for future maintainers.
src/wallet/rpcwallet.cpp (2)
2856-2863: ⚠️ Potential issue | 🟠 Major

Scope Spark fee override to outgoing transactions only.

At Line 2856, fee parsing runs for any Spark spend tx. Because Line 2865 uses `nNet - nFee`, this can distort reported `amount` for non-sender wallet views. Guard this with `wtx.IsFromMe(filter)`.

Suggested guard

```diff
-            if (wtx.tx->IsSparkSpend()) {
+            if (wtx.IsFromMe(filter) && wtx.tx->IsSparkSpend()) {
                 try {
                     nFee = (0 - spark::ParseSparkSpend(*wtx.tx).getFee());
                 } catch (const std::exception &) {
                     // do nothing
                 }
             }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/rpcwallet.cpp` around lines 2856 - 2863, The Spark fee override is currently applied for any Spark spend transaction which can misreport amounts for non-sender views; update the block that checks wtx.tx->IsSparkSpend() to only parse and override nFee when the transaction is outgoing by adding a guard using wtx.IsFromMe(filter). In other words, inside the function handling wallet transactions (the code using variables wtx, nFee, nNet and calling spark::ParseSparkSpend), wrap the ParseSparkSpend try/catch so it runs only if wtx.IsFromMe(filter) returns true, leaving all other logic unchanged.
1643-1655: ⚠️ Potential issue | 🟡 Minor

`getprivatebalance` help text overstates coin origin.

Line 1643 says "created by the wallet", but Line 1655 returns `getAvailableBalance()`, which includes all confirmed tracked Spark coins without origin filtering. This can mislead operators about what is counted.

Suggested wording fix

```diff
-                "Private balance is the sum of all confirmed spark mints which are created by the wallet.\n"
+                "Private balance is the sum of confirmed Spark funds tracked by this wallet.\n"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/rpcwallet.cpp` around lines 1643 - 1655, The getprivatebalance help text incorrectly states that private coins are "created by the wallet" while the RPC actually returns ValueFromAmount(pwallet->sparkWallet->getAvailableBalance()), which includes all confirmed tracked Spark coins regardless of origin; update the help string in the getprivatebalance block (near EnsureSparkWalletIsAvailable() / LOCK2 usage) to accurately describe the result as the sum of confirmed tracked/available Spark mints known to the wallet (or similar wording like "confirmed tracked Spark coins known to the wallet"), keeping the rest of the function (EnsureSparkWalletIsAvailable, LOCK2, and the return of getAvailableBalance()) unchanged.
🧹 Nitpick comments (4)
src/spark/state.cpp (1)
771-815: Dead code from async removal can be cleaned up.

After removing the async thread pool infrastructure, several constructs are now vestigial:

- `scheduledAsync` (line 772) is always `false`, making lines 808-810 unreachable dead code
- `fRecheckNeeded` (line 775) is always `false`, so the `do-while` loop executes exactly once (it's effectively just a block scope)

Consider simplifying to remove these artifacts:

Suggested cleanup

```diff
-    bool fChecked = false;
-    bool scheduledAsync = false;
+    bool fChecked = false;
     try {
-        bool fRecheckNeeded;
-        do {
-            fRecheckNeeded = false;
-
-            LOCK(cs_checkedSparkSpendTransactions);
-            if (gCheckedSparkSpendTransactions.count(hashTx)) {
-                auto& checkState = gCheckedSparkSpendTransactions[hashTx];
-                if (checkState.fChecked) {
-                    if (!checkState.fResult)
-                        return state.DoS(100, false, REJECT_INVALID, "CheckSparkSpendTransaction: previously checked and failed");
-                    else {
-                        LogPrintf("CheckSparkSpendTransaction: already checked tx %s\n", hashTx.ToString());
-                        fChecked = true;
-                    }
+        {
+            LOCK(cs_checkedSparkSpendTransactions);
+            if (gCheckedSparkSpendTransactions.count(hashTx)) {
+                auto& checkState = gCheckedSparkSpendTransactions[hashTx];
+                if (checkState.fChecked) {
+                    if (!checkState.fResult)
+                        return state.DoS(100, false, REJECT_INVALID, "CheckSparkSpendTransaction: previously checked and failed");
+                    LogPrintf("CheckSparkSpendTransaction: already checked tx %s\n", hashTx.ToString());
+                    fChecked = true;
                 }
             }
         }
-        while (fRecheckNeeded);

         if (fChecked) {
             passVerify = true;
         } else {
             if (fStatefulSigmaCheck) {
                 passVerify = spark::SpendTransaction::verify(*spend, cover_sets);
                 {
                     LOCK(cs_checkedSparkSpendTransactions);
                     gCheckedSparkSpendTransactions[hashTx] = {true, passVerify};
                 }
             } else {
-                if (scheduledAsync) {
-                    passVerify = true;
-                } else {
-                    passVerify = spark::SpendTransaction::verify(*spend, cover_sets);
-                }
+                passVerify = spark::SpendTransaction::verify(*spend, cover_sets);
             }
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spark/state.cpp` around lines 771 - 815, The code still contains vestigial async-related variables and a redundant do-while loop; remove the unused bool scheduledAsync and the fRecheckNeeded/do-while construct in CheckSparkSpendTransaction, replacing the do-while with a single scoped check using LOCK(cs_checkedSparkSpendTransactions) to look up gCheckedSparkSpendTransactions[hashTx] and set fChecked appropriately, and then simplify the subsequent branch: drop the scheduledAsync branch (lines that assume async scheduling) so the non-stateful path simply falls back to synchronous verify via spark::SpendTransaction::verify and ensure you still write back gCheckedSparkSpendTransactions[hashTx] = {true, passVerify} inside the LOCK where appropriate.
src/libspark/coin.h (3)
207-219: Prefer `static_cast` over C-style casts.

The serialization uses C-style casts which are less explicit than `static_cast`. While functionally correct, this is inconsistent with the more explicit casting style used in `PublicCoin::Serialize` above.

♻️ Suggested improvement

```diff
 template<typename Stream>
 void Serialize(Stream& s) const {
-    int64_t tmp = uint8_t(denomination);
+    int64_t tmp = static_cast<int64_t>(static_cast<std::uint8_t>(denomination));
     s << tmp;
     tmp = coinGroupId;
     s << tmp;
 }

 template<typename Stream>
 void Unserialize(Stream& s) {
     int64_t tmp;
-    s >> tmp; denomination = CoinDenomination(tmp);
-    s >> tmp; coinGroupId = int(tmp);
+    s >> tmp; denomination = static_cast<CoinDenomination>(static_cast<std::uint8_t>(tmp));
+    s >> tmp; coinGroupId = static_cast<int>(tmp);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libspark/coin.h` around lines 207 - 219, Replace the C-style casts in Coin::Serialize and Coin::Unserialize with explicit C++ casts: when assigning to tmp use static_cast<int64_t>(denomination) in Serialize, and in Unserialize cast tmp to CoinDenomination via denomination = static_cast<CoinDenomination>(tmp) and cast coinGroupId using coinGroupId = static_cast<int>(tmp); update both Serialize(Stream& s) and Unserialize(Stream& s) to use these static_casts for clarity and consistency with PublicCoin::Serialize.
223-234: Use `.data()` instead of `&bnData[0]` for vector access.

The coding guidelines specify using `.data()` instead of `&v[0]` to avoid potential out-of-bounds access issues and for clarity.

♻️ Suggested fix

```diff
 struct CScalarHash {
     std::size_t operator ()(const secp_primitives::Scalar& bn) const noexcept {
         std::vector<unsigned char> bnData(bn.memoryRequired());
-        bn.serialize(&bnData[0]);
+        bn.serialize(bnData.data());
         unsigned char hash[CSHA256::OUTPUT_SIZE];
-        CSHA256().Write(&bnData[0], bnData.size()).Finalize(hash);
+        CSHA256().Write(bnData.data(), bnData.size()).Finalize(hash);
         // take the first bytes of "hash".
         std::size_t result;
         std::memcpy(&result, hash, sizeof(std::size_t));
         return result;
     }
 };
```

As per coding guidelines: "Watch for out-of-bounds vector access; use `.data()` instead of `&v[0]`".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libspark/coin.h` around lines 223 - 234, In CScalarHash::operator() replace the vector element-pointer uses with vector::data(): when serializing and when hashing, use bnData.data() instead of &bnData[0] to follow guidelines and avoid potential out-of-bounds access; update the two call sites inside operator() (the bn.serialize(...) call and the CSHA256().Write(...) call) to pass bnData.data() and keep the rest of the logic (CSHA256 hash, memcpy into result) unchanged.
238: Missing namespace closing comment.

For consistency with other namespaces in this file (lines 143 and 275), add a closing comment.
♻️ Suggested fix
```diff
-}
+} // namespace sigma
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libspark/coin.h` at line 238, Add a trailing comment after the final closing brace to indicate which namespace is being closed (match the exact identifier used at the opening of that namespace in this file), e.g., change the lone "}" to "}" followed by "// namespace <NAME>" (use the same namespace name used earlier in this file so it is consistent with the other closing comments near lines 143 and 275).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/chainparams.cpp`:
- Around line 1022-1023: The mempool and block acceptance boundaries diverge
because consensus.nLelantusGracefulPeriod is 0 while block logic falls back to
consensus.nSparkStartBlock; fix by making the two behaviors consistent: either
set consensus.nLelantusGracefulPeriod to the same non‑zero fallback used for
blocks (e.g., the devnet/regtest value currently in nSparkStartBlock) in
chainparams (so the parameter itself is correct), or change the mempool check in
AcceptToMemoryPool (the Lelantus/mempool validation path) to apply the same
fallback logic (if consensus.nLelantusGracefulPeriod == 0 then use
consensus.nSparkStartBlock) before rejecting transactions. Ensure you update the
regtest comment if you change the boundary value.
- Around line 804-805: Remove the dead Lelantus config field and its
assignments: delete the nLelantusV3PayloadStartBlock member from the consensus
params struct (the consensus params declaration) and remove every assignment to
nLelantusV3PayloadStartBlock in chainparams.cpp (the Lelantus strip assignment
and the later assignment around lines ~1320). Ensure no other code references
nLelantusV3PayloadStartBlock after removal; if any compiler errors point to
residual uses, remove or replace those usages with the active nSparkStartBlock
logic.
In `@src/llmq/quorums_instantsend.cpp`:
- Around line 493-502: Add a defensive early rejection for Lelantus JoinSplit
transactions: inside the CheckCanLock / equivalent function (the block handling
tx.IsSparkSpend() and later UTXO checks), add a check for
tx.IsLelantusJoinSplit() that immediately returns false so Lelantus transactions
are explicitly rejected at this layer; keep the existing LOCK(cs_main) placement
and ensure the IsLelantusJoinSplit() check occurs before any UTXO or
synthetic-input processing to improve readability and clarity.
In `@src/wallet/rpcdump.cpp`:
- Around line 577-580: The call to pwallet->ScanForWalletTransactions(pindex)
omits the fUpdate flag causing inconsistent behavior; update the call in the
else branch to pass fUpdate=true (i.e.,
pwallet->ScanForWalletTransactions(pindex, true)) to match other usages and
ensure AddToWalletIfInvolvingMe fully updates wallet transaction state for
discovered transactions; locate the call that uses pindex and
LogPrintf("Rescanning last %i blocks...") and add the missing boolean argument.
In `@src/wallet/rpcwallet.cpp`:
- Around line 1680-1685: EnsureSparkWalletIsAvailable() currently causes
gettotalbalance to hard-fail when Spark is missing; change the logic to compute
the transparent balance unconditionally (use pwallet->GetBalance()), then
attempt to add the Spark balance only if pwallet->sparkWallet is non-null and/or
EnsureSparkWalletIsAvailable() does not throw. Specifically, remove the
unconditional EnsureSparkWalletIsAvailable() call in the gettotalbalance path,
or wrap it in a try/catch, check that pwallet->sparkWallet is valid before
calling pwallet->sparkWallet->getAvailableBalance(), and if Spark is unavailable
return ValueFromAmount(transparent) instead of failing; otherwise return
ValueFromAmount(transparent + spark).
---
Outside diff comments:
In `@src/llmq/quorums_instantsend.cpp`:
- Around line 1598-1604: AdaptPrivateTx now only handles Spark spends via
AdaptSparkTx and intentionally omits the former Lelantus handling
(AdaptJsplitTx); add a concise comment above AdaptPrivateTx that documents this
intentional removal, noting the grace period end (nLelantusGracefulPeriod) and
the mempool rejection check (chainActive.Height() >= nLelantusGracefulPeriod in
validation logic), and mention the low-probability reorg edge case where
SyncTransaction could encounter pre-grace-period blocks—clarify that Lelantus is
deliberately not adapted here to avoid confusion for future maintainers.
In `@src/wallet/rpcwallet.cpp`:
- Around line 2856-2863: The Spark fee override is currently applied for any
Spark spend transaction which can misreport amounts for non-sender views; update
the block that checks wtx.tx->IsSparkSpend() to only parse and override nFee
when the transaction is outgoing by adding a guard using wtx.IsFromMe(filter).
In other words, inside the function handling wallet transactions (the code using
variables wtx, nFee, nNet and calling spark::ParseSparkSpend), wrap the
ParseSparkSpend try/catch so it runs only if wtx.IsFromMe(filter) returns true,
leaving all other logic unchanged.
- Around line 1643-1655: The getprivatebalance help text incorrectly states that
private coins are "created by the wallet" while the RPC actually returns
ValueFromAmount(pwallet->sparkWallet->getAvailableBalance()), which includes all
confirmed tracked Spark coins regardless of origin; update the help string in
the getprivatebalance block (near EnsureSparkWalletIsAvailable() / LOCK2 usage)
to accurately describe the result as the sum of confirmed tracked/available
Spark mints known to the wallet (or similar wording like "confirmed tracked
Spark coins known to the wallet"), keeping the rest of the function
(EnsureSparkWalletIsAvailable, LOCK2, and the return of getAvailableBalance())
unchanged.
---
Nitpick comments:
In `@src/libspark/coin.h`:
- Around line 207-219: Replace the C-style casts in Coin::Serialize and
Coin::Unserialize with explicit C++ casts: when assigning to tmp use
static_cast<int64_t>(denomination) in Serialize, and in Unserialize cast tmp to
CoinDenomination via denomination = static_cast<CoinDenomination>(tmp) and cast
coinGroupId using coinGroupId = static_cast<int>(tmp); update both
Serialize(Stream& s) and Unserialize(Stream& s) to use these static_casts for
clarity and consistency with PublicCoin::Serialize.
- Around line 223-234: In CScalarHash::operator() replace the vector
element-pointer uses with vector::data(): when serializing and when hashing, use
bnData.data() instead of &bnData[0] to follow guidelines and avoid potential
out-of-bounds access; update the two call sites inside operator() (the
bn.serialize(...) call and the CSHA256().Write(...) call) to pass bnData.data()
and keep the rest of the logic (CSHA256 hash, memcpy into result) unchanged.
- Line 238: Add a trailing comment after the final closing brace to indicate
which namespace is being closed (match the exact identifier used at the opening
of that namespace in this file), e.g., change the lone "}" to "}" followed by
"// namespace <NAME>" (use the same namespace name used earlier in this file so
it is consistent with the other closing comments near lines 143 and 275).
In `@src/spark/state.cpp`:
- Around line 771-815: The code still contains vestigial async-related variables
and a redundant do-while loop; remove the unused bool scheduledAsync and the
fRecheckNeeded/do-while construct in CheckSparkSpendTransaction, replacing the
do-while with a single scoped check using LOCK(cs_checkedSparkSpendTransactions)
to look up gCheckedSparkSpendTransactions[hashTx] and set fChecked
appropriately, and then simplify the subsequent branch: drop the scheduledAsync
branch (lines that assume async scheduling) so the non-stateful path simply
falls back to synchronous verify via spark::SpendTransaction::verify and ensure
you still write back gCheckedSparkSpendTransactions[hashTx] = {true, passVerify}
inside the LOCK where appropriate.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 42c044e3-f560-47ae-b3bf-a379e80e6acc
📒 Files selected for processing (9)
- README.md
- src/chain.h
- src/chainparams.cpp
- src/libspark/coin.h
- src/llmq/quorums_instantsend.cpp
- src/spark/state.cpp
- src/validation.cpp
- src/wallet/rpcdump.cpp
- src/wallet/rpcwallet.cpp
💤 Files with no reviewable changes (1)
- src/chain.h
🚧 Files skipped from review as they are similar to previous changes (1)
- README.md
👮 Files not reviewed due to content moderation or server errors (1)
- src/validation.cpp
Actionable comments posted: 2
🧹 Nitpick comments (2)
src/spark/threadpool.h (1)
5-6: Header guard doesn't follow project convention.

The header guard `FIRO_SPARK_THREADPOOL_H` doesn't follow the project's documented pattern `BITCOIN_<PATH>_<FILE>_H`. Consider using `BITCOIN_SPARK_THREADPOOL_H` or `FIRO_SPARK_THREADPOOL_H` depending on whether the project has standardized on FIRO prefix for new files.

As per coding guidelines: "Header guards should follow format `#ifndef BITCOIN_<PATH>_<FILE>_H`"
Verify each finding against the current code and only fix it if needed. In `@src/spark/threadpool.h` around lines 5 - 6, Change the header guard macro to follow the project's convention: replace the existing FIRO_SPARK_THREADPOOL_H guard with the standardized BITCOIN_SPARK_THREADPOOL_H (update both the `#ifndef/`#define and the corresponding `#endif` comment if present) so the header guard in threadpool.h uses BITCOIN_SPARK_THREADPOOL_H consistently.src/spark/sparkwallet.cpp (1)
95-105: Defensive null checks are good; consider typed pointer.

The null check in `FinishTasks()` is a sensible defensive measure. However, the repeated `(ParallelOpThreadPool<void>*)threadPool` casts throughout the file suggest `threadPool` is stored as `void*`. Consider declaring it as `ParallelOpThreadPool<void>*` or `std::unique_ptr<ParallelOpThreadPool<void>>` to eliminate unsafe casts and improve type safety.
Verify each finding against the current code and only fix it if needed. In `@src/spark/sparkwallet.cpp` around lines 95 - 105, The member threadPool is currently stored as a void* causing repeated unsafe casts; change its declaration in CSparkWallet to a typed pointer (ParallelOpThreadPool<void>* or better std::unique_ptr<ParallelOpThreadPool<void>>) and update CSparkWallet::~CSparkWallet and CSparkWallet::FinishTasks to use the typed pointer (call shutdown on the typed instance and delete or reset the unique_ptr) so you can remove all (ParallelOpThreadPool<void>*) casts and improve type safety.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/llmq/quorums_instantsend.cpp`:
- Around line 493-506: The loop inside the tx.IsSparkSpend() branch accesses
in.scriptSig.front() without ensuring the scriptSig buffer is non-empty, which
can cause undefined behavior; update the loop in the IsSparkSpend handling
(where you iterate CTxIn in tx.vin and call GroupElement::deserialize on
&in.scriptSig.front()) to first check that in.scriptSig is non-empty (e.g., if
(in.scriptSig.empty()) return false or continue based on intended validation),
only call deserialize when non-empty, and preserve the existing IsUsedLTag check
against spark::CSparkState::GetState()->IsUsedLTag(lTag).
In `@src/wallet/rpcdump.cpp`:
- Around line 574-576: The rescan-only path for legacy mint seed keys (detected
via nChange.first == 2 and guarded by fMintUpdate) currently only calls
pwallet->ScanForWalletTransactions which rebuilds tx history but does not
rebuild the mint pool/state those keys rely on; update the fMintUpdate branch so
that when a legacy mint seed key is detected you either invoke the existing
legacy-mint migration routine (e.g., the wallet's mint pool/state rebuild
function—refer to functions named like RebuildMintPool/RestoreLegacyMints or a
migration helper) to fully reconstruct mint state, or if no migration exists,
reject the import (return an error from importwallet) before persisting keys;
ensure this change is applied in the block that currently contains
pwallet->ScanForWalletTransactions and references fMintUpdate and nChange.first
so legacy mints are not silently left unrecoverable.
---
Nitpick comments:
In `@src/spark/sparkwallet.cpp`:
- Around line 95-105: The member threadPool is currently stored as a void*
causing repeated unsafe casts; change its declaration in CSparkWallet to a typed
pointer (ParallelOpThreadPool<void>* or better
std::unique_ptr<ParallelOpThreadPool<void>>) and update
CSparkWallet::~CSparkWallet and CSparkWallet::FinishTasks to use the typed
pointer (call shutdown on the typed instance and delete or reset the unique_ptr)
so you can remove all (ParallelOpThreadPool<void>*) casts and improve type
safety.
In `@src/spark/threadpool.h`:
- Around line 5-6: Change the header guard macro to follow the project's
convention: replace the existing FIRO_SPARK_THREADPOOL_H guard with the
standardized BITCOIN_SPARK_THREADPOOL_H (update both the `#ifndef`/`#define` and the
corresponding `#endif` comment if present) so the header guard in threadpool.h
uses BITCOIN_SPARK_THREADPOOL_H consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: e20ce7b4-1ab8-4d5a-ac60-6daa909517c6
📒 Files selected for processing (7)
- src/chainparams.cpp
- src/consensus/params.h
- src/llmq/quorums_instantsend.cpp
- src/spark/sparkwallet.cpp
- src/spark/threadpool.h
- src/wallet/rpcdump.cpp
- src/wallet/rpcwallet.cpp
💤 Files with no reviewable changes (1)
- src/consensus/params.h
🚧 Files skipped from review as they are similar to previous changes (1)
- src/wallet/rpcwallet.cpp
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/llmq/quorums_instantsend.cpp`:
- Around line 497-507: CheckCanLock currently only checks tx.IsSparkSpend() so
adapted Spark transactions rewritten to INSTANTSEND_ADAPTED_TX slip past the
Spark-specific logic; update this check to include the adapted-transaction flag
(INSTANTSEND_ADAPTED_TX) as an alternate branch and treat adapted transactions
identically to Spark spends (iterate vin, ensure scriptSig/non-empty,
deserialize GroupElement and consult
spark::CSparkState::GetState()->IsUsedLTag(lTag) before returning true), or
alternatively avoid adapting in AdaptSparkTx()/SyncTransaction() before
ProcessTx()/retry queue; adjust the conditional that now reads tx.IsSparkSpend()
to also detect INSTANTSEND_ADAPTED_TX and preserve the existing Spark-path
behavior.
In `@src/wallet/rpcdump.cpp`:
- Around line 548-553: The loop that adds keys (calls like AddKeyPubKey()) may
persist supported keys and then abort immediately when encountering a legacy
mint-seed (pwallet->mapKeyMetadata[keyid].nChange.first == 2), leaving the
wallet partially imported; to fix, perform a pre-scan of the parsed dump for any
unsupported legacy mint-seed entries (check pwallet->mapKeyMetadata for
nChange.first == 2) and abort before calling AddKeyPubKey(), OR accumulate a
flag/error when such entries are found and defer throwing until after
post-import cleanup steps (UpdateTimeFirstKey(), the final rescan, and
ShowProgress("", 100)) so that previously added keys and rescans complete before
failing; ensure the change is applied in the import routine in rpcdump.cpp (the
import loop/ImportWallet logic) and preserve existing persistence and cleanup on
early-detection paths.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 283131c3-9bcb-404d-b0e4-15ce532be103
📒 Files selected for processing (3)
- qa/pull-tester/rpc-tests.py
- src/llmq/quorums_instantsend.cpp
- src/wallet/rpcdump.cpp
💤 Files with no reviewable changes (1)
- qa/pull-tester/rpc-tests.py
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/llmq/quorums_instantsend.cpp (1)
497-509: ⚠️ Potential issue | 🔴 Critical

Critical: `tx.IsSparkSpend()` will always be false inside the `INSTANTSEND_ADAPTED_TX` branch, bypassing LTag validation.

The check at line 498 can never pass because:

- `AdaptSparkTx()` sets `nType = INSTANTSEND_ADAPTED_TX` (line 1600)
- `IsSparkSpend()` requires `nType == TRANSACTION_SPARK` (per relevant snippet from transaction.cpp:218-223)

When an adapted Spark spend reaches `CheckCanLock`, it enters the `INSTANTSEND_ADAPTED_TX` branch, but the nested `IsSparkSpend()` check fails, causing the LTag validation to be completely skipped. The function returns `true` at line 508 without verifying that the linking tags haven't been used, effectively disabling double-spend protection for Spark InstantSend.

🐛 Proposed fix: Remove the redundant nested check

```diff
     if (tx.nType == isutils::INSTANTSEND_ADAPTED_TX) {
-        if (tx.IsSparkSpend()) {
-            for (CTxIn const & in : tx.vin) {
-                if (in.scriptSig.empty())
-                    return false;
-                GroupElement lTag;
-                lTag.deserialize(&in.scriptSig.front());
-                if (spark::CSparkState::GetState()->IsUsedLTag(lTag))
-                    return false;
-            }
+        for (CTxIn const & in : tx.vin) {
+            if (in.scriptSig.empty())
+                return false;
+            GroupElement lTag;
+            lTag.deserialize(&in.scriptSig.front());
+            if (spark::CSparkState::GetState()->IsUsedLTag(lTag))
+                return false;
         }
         return true;
     }
```

🤖 Prompt for AI Agents
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 4136dc0a-295f-43ac-a4e5-685ffe581c83
📒 Files selected for processing (2)
- src/llmq/quorums_instantsend.cpp
- src/wallet/rpcdump.cpp
💤 Files with no reviewable changes (1)
- src/wallet/rpcdump.cpp
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Reviewed by Cursor Bugbot for commit 9f0b130.
Note
High Risk
High-impact removal of protocol and wallet code paths (Lelantus/HD mint tracking) plus consensus/init/spork touchpoints; risk is primarily from unintended behavior changes around transaction validation, wallet migration assumptions, and chain/index compatibility.
Overview
Removes the legacy Lelantus implementation and its wallet/HD mint tracking from the codebase, leaving Spark as the only privacy pool logic wired into batching, spork gating, and startup/init flows.
This drops Lelantus build targets and large swaths of core functionality (e.g., `src/lelantus.cpp`, `src/hdmint/*`), removes Lelantus-specific consensus fields (e.g., `nLelantusV3PayloadStartBlock`), and simplifies transaction handling so `TRANSACTION_LELANTUS` is effectively treated as a no-op special-tx type.

QA and docs are aligned with the removal: Lelantus RPC tests (and the `bip47-sendreceive` test that depended on Lelantus JoinSplits) are deleted from the test runner, Spark tests are adjusted (clean-chain setup, class rename), and wallet dump expectations are updated to reflect no generated sigma/hdmint keys.