
fix(rewards): claimRewardsAll isolates griefing tokens via try/catch#136

Merged
drewstone merged 1 commit into main from fix/claim-rewards-all-grief-isolation
May 15, 2026

Conversation

@tangletools
Contributor

Summary

A single security-hardening fix: claimRewardsAll should not be lockable by a griefing ERC20 in the pending-rewards set.

The bug

claimRewardsAll iterates every token in _pendingRewardTokens[operator] and inlines _claimRewardsToken for each. A token whose transfer reverts (paused token, gas-bomb hook, custom revert, broken upgrade) used to abort the entire sweep, locking the operator out of every other claim — including normal, well-behaved tokens. The pre-existing fix path was "remove the griefing token off-chain", which is not actionable: the protocol has no admin route to evict a token from an operator's pending set.

The fix

Each token is now claimed in an isolated external self-call wrapped in try/catch (`try this._claimRewardsTokenSafe(account, token) { ... } catch { ... }` — Solidity's try statement requires an external function call, so the helper is invoked through `this` rather than a low-level call):

  • Normal tokens: same behavior as before — pending zeroed, transfer fires, event emitted, token removed from the set.
  • Griefing tokens: the self-call reverts in isolation and the outer try/catch absorbs it; the rest of the sweep continues, and a RewardsClaimSkipped(account, token) event is emitted so indexers and the operator can identify the offending token and pursue an off-chain mitigation (e.g., a one-off claim of the pending balance to a different recipient once the token is unpaused).

The self-call helper is gated by msg.sender == address(this) so external callers cannot reach it directly. The outer nonReentrant guard on claimRewardsAll still holds for the whole sweep — the inner self-call helper is intentionally not nonReentrant so the diamond can route it back into the same facet via delegatecall.
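
The loop described above can be sketched as follows. This is an illustration of the isolation pattern, not the PR's actual code: the `_pendingRewardTokens` EnumerableSet storage, the snapshot into a memory array, and the modifier placement are assumptions.

```solidity
// Sketch only — storage layout and internals are assumed, not taken from the PR.
event RewardsClaimSkipped(address indexed account, address indexed token);

function claimRewardsAll() external nonReentrant {
    address account = msg.sender;
    // Snapshot the set: successful claims remove tokens from it mid-sweep.
    address[] memory tokens = _pendingRewardTokens[account].values();
    for (uint256 i = 0; i < tokens.length; i++) {
        // External self-call: a revert inside one token's claim is caught
        // here instead of unwinding the entire sweep.
        try this._claimRewardsTokenSafe(account, tokens[i]) {
            // Claimed: pending zeroed and token removed inside the call.
        } catch {
            // Griefing token: pending balance stays intact; signal indexers.
            emit RewardsClaimSkipped(account, tokens[i]);
        }
    }
}
```

Snapshotting into a memory array before the loop matters because the successful path mutates the set while iterating.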

Single-token claims (claimRewards(), claimRewards(token), claimRewardsBatch) are unchanged. A user explicitly asking to claim one specific token SHOULD see the underlying revert — that is the diagnostic signal. Only the sweep path needs griefing isolation.

Test plan

  • test_ClaimRewardsAll_SkipsGriefingTokenAndCompletesRest (PaymentEdgeCases.t.sol) — full end-to-end: operator with two pending tokens, one of which reverts on transfer. After claimRewardsAll:
    • Normal token's balance lands at the operator.
    • Normal token cleared from pending-rewards set.
    • Griefing token's pending balance preserved (operator can retry later).
    • Griefing token remains in rewardTokens(operator).
    • Single-token claimRewards(griefingToken) still reverts (preserves single-claim semantics).
    • RewardsClaimSkipped event emitted for the griefing token.
  • Full regression: 1,455 / 1,455 tests pass.
  • CI green

Storage / ABI

  • No storage layout changes.
  • Same claimRewardsAll selector, same return type (void). Behavior strictly more permissive than before — never blocks legitimate claims.
  • One new selector _claimRewardsTokenSafe(address,address) registered on TanglePaymentsRewardsFacet. Gated by msg.sender == address(this) so external callers see Errors.Unauthorized().
  • One new event: RewardsClaimSkipped(address indexed account, address indexed token) — indexer-friendly signal that the sweep skipped a token.
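
The gated helper from the bullets above can be sketched like this (a sketch only — `Errors.Unauthorized()` is named in the PR, but the internal claim body is an assumption):

```solidity
// Sketch of the gated self-call helper. Only the diamond itself — via the
// external self-call from claimRewardsAll — may invoke it.
function _claimRewardsTokenSafe(address account, address token) external {
    if (msg.sender != address(this)) revert Errors.Unauthorized();
    // Assumed internal claim: zero the pending balance, transfer the
    // rewards, emit the claim event, and remove the token from the set.
    _claimRewardsToken(account, token);
}
```

Keeping the helper external (so try/catch can target it) but self-gated preserves the outer nonReentrant guarantee without exposing a new public entry point.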

A single ERC20 in an operator's pending-rewards set whose `transfer` reverts
(paused token, gas bomb, custom revert hook, broken upgrade) used to revert
the entire `claimRewardsAll` sweep and lock the operator out of every other
claim. With one self-call per token wrapped in try/catch, the griefing token
is skipped (and an explicit `RewardsClaimSkipped` event emitted for indexers
and the affected operator), the remaining tokens are claimed, and the
griefing token stays in the pending set for a future retry once the issue is
resolved off-chain.

The self-call helper `_claimRewardsTokenSafe(account, token)` is gated by
`msg.sender == address(this)` so external callers cannot reach it directly.
It is registered on `TanglePaymentsRewardsFacet` so the diamond routes the
self-call back into the same facet via delegatecall.

Single-token claims (`claimRewards()`, `claimRewards(token)`,
`claimRewardsBatch`) are unchanged — they still revert on failure so the
caller explicitly choosing a token sees the underlying error.

Test: `test_ClaimRewardsAll_SkipsGriefingTokenAndCompletesRest` in
`PaymentEdgeCases.t.sol` covers the scenario end-to-end with a new
`OutboundRevertToken` mock (transferFrom works, transfer reverts).
Full regression: 1,455 / 1,455 tests pass.
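
The mock's shape is implied by the description above (transferFrom works, transfer reverts); a minimal sketch assuming an OpenZeppelin-style ERC20 base:

```solidity
// Sketch of a griefing-token mock: inbound transferFrom succeeds so rewards
// can accrue into the protocol, but outbound transfer always reverts so the
// claim path fails.
contract OutboundRevertToken is ERC20 {
    constructor() ERC20("OutboundRevert", "ORT") {}

    function transfer(address, uint256) public pure override returns (bool) {
        revert("OutboundRevertToken: transfer disabled");
    }
}
```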
@drewstone drewstone merged commit f64a4e4 into main May 15, 2026
1 of 2 checks passed
drewstone added a commit that referenced this pull request May 16, 2026
Regenerates tnt-core-bindings + fixtures against current main (f64a4e4) and
bumps to v0.17.0. Covers PRs #133, #134, #136 since the v0.16.0 release pin.

Binding-surface delta is small: ITanglePaymentsInternal gains
distributeBillWithKeeper(BillDistribution) selector 0x68cdf660 — the diamond
self-call used by billSubscription to hand per-operator weights computed
during accrual directly to the distribution facet. Every other facet selector
still routes to the Tangle diamond unchanged.

Most of the behavioral changes (RFQ requester binding + freshness +
cumulative-TTL caps, multi-asset bill weighting, EIP-170 facet split, O(1)
operator stake aggregate, VPM share-pool slashing, claimRewardsAll griefing
isolation) are observable to indexers via events emitted from the facet
contracts. Those event ABIs (OperatorPoolSlashed, RewardsClaimSkipped,
KeeperRebateAccrued, TntPaymentDiscountApplied, StakerShareRefundedToEscrow,
SubscriptionBaseline*, SubscriptionBill*) are not surfaced on the ITangle /
ITangleFull interfaces and therefore are not present in the generated
bindings; the changelog enumerates them so indexer consumers can decode by
topic hash against the facet source ABIs.
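
For indexer consumers, topic0 of each event is the keccak-256 of its canonical signature (names and `indexed` markers stripped); for the event introduced by #136:

```solidity
// topic0 used to filter RewardsClaimSkipped logs. The canonical signature
// omits parameter names and the `indexed` keyword.
bytes32 constant REWARDS_CLAIM_SKIPPED_TOPIC =
    keccak256("RewardsClaimSkipped(address,address)");
```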

Cargo.lock is updated to 0.17.0 — the lockfile on main was still pinning
0.15.0 (the v0.16.0 release commit did not refresh it).

DO NOT publish to crates.io until: dApp sync verified green, indexer schema
PR merged, at least one downstream blueprint compiles green.

Co-authored-by: Drew Stone <drewstone329@gmail.com>
