
Key Concepts Explained

Last Updated: 2025-11-15

⚠️ IMPORTANT: The RBF Bot (src/main-rbf.ts) is now the PRIMARY/PRODUCTION bot. The Original Bot (src/index.ts) is LEGACY/DEPRECATED.

This document explains the core concepts and strategies used in the bot.

RBF Bot Concepts (PRODUCTION)

1. Shotgun RBF Strategy

What It Is: 20 RBF pulses per 2-second block, all using the same nonce with escalating gas.

Why It Works:

  • Maximum coverage: 80 broadcast attempts per nonce (20 pulses × 4 RPCs)
  • Escalating gas: Starts at 200x and compounds to ~505x by pulse 20
  • Compound increases: 5% per pulse ensures continuous escalation

How It Works:

  • Pulse 1: 200x (5,000 Gwei) → sent to 4 RPCs
  • Pulse 2: 210x (5,250 Gwei) → sent to 4 RPCs
  • Pulse 3: 220.5x (5,512.5 Gwei) → sent to 4 RPCs
  • ... continues to Pulse 20: ~505.4x (~12,635 Gwei)

Success Rate: 50-60% in extreme competition (vs <5% for Original bot)

2. Block-Event Driven Detection

What It Is: Reacts to on-chain events instead of predicting unlock time.

Why It Works:

  • No time prediction needed: Eliminates timing errors
  • 0-50ms detection latency: Parallel WebSocket subscriptions
  • Multiple RPC redundancy: Automatic fallback on RPC failure

How It Works:

  • Multiple RPCs subscribe to new blocks via WebSocket
  • On each block, checks availableMintAmount() via RPC pool
  • Window opens when availableMintAmount() > 0
  • No EXPECTED_UNLOCK_TIME configuration needed
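The detection logic above reduces to a simple per-block check. A minimal sketch (names like `isWindowOpen` and `onNewBlock` are illustrative, not the bot's real API):

```typescript
// The window is open exactly when availableMintAmount() > 0.
function isWindowOpen(availableMintAmount: bigint): boolean {
  return availableMintAmount > 0n;
}

// Each WebSocket block subscription calls this on a new block;
// `readAvailable` stands in for an availableMintAmount() view call
// routed through the RPC pool, and `fire` starts the shotgun RBF pulses.
async function onNewBlock(
  readAvailable: () => Promise<bigint>,
  fire: () => void
): Promise<boolean> {
  const open = isWindowOpen(await readAvailable());
  if (open) fire();
  return open;
}
```

Because the decision is driven entirely by contract state, no unlock-time prediction or configuration is involved.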

3. Receipt-Based Nonce Discipline

What It Is: Waits for transaction receipt before incrementing nonce.

Why It Works:

  • Prevents nonce conflicts: No stale transactions
  • More reliable: Chain state confirmed before proceeding
  • Prevents nonce exhaustion: Controlled nonce usage

How It Works:

  • Sends transaction with nonce N
  • Waits for receipt confirmation
  • Only increments to nonce N+1 after receipt received
  • Prevents "nonce too low" errors
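The discipline above can be sketched as a small state machine (a hypothetical `NonceTracker`, not the bot's real implementation):

```typescript
// Nonce only advances when a receipt confirms it was consumed on-chain.
class NonceTracker {
  constructor(private nonce: number) {}

  // All in-flight RBF pulses reuse this value (same nonce, higher gas).
  current(): number {
    return this.nonce;
  }

  // Called when a transaction receipt arrives; advances only if the
  // mined nonce is the one we were tracking.
  onReceipt(minedNonce: number): number {
    if (minedNonce === this.nonce) this.nonce += 1;
    return this.nonce;
  }
}
```

Stale or out-of-order receipts leave the tracked nonce untouched, which is what prevents "nonce too low" cascades.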

4. Compound Gas Escalation

What It Is: Each pulse increases gas by 5% compound (200x → 210x → 220.5x → ...).

Why It Works:

  • Continuous escalation: Always increasing gas pressure
  • Pre-calculated: All gas levels computed at startup
  • Predictable: Formula: gasLevel = base × (compoundBump ^ (pulseNumber - 1))

How It Works:

  • Base: 200x (5,000 Gwei)
  • Compound bump: 1.05 (5% increase)
  • Pulse N: 200 × (1.05 ^ (N - 1))
  • Maximum: ~505.4x (~12,635 Gwei) at pulse 20
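The schedule can be pre-computed at startup. A minimal sketch, assuming 25 Gwei per 1x of multiplier (so 200x = 5,000 Gwei) and an exponent of pulse - 1 to match the 200 → 210 → 220.5 progression (which tops out near 505x at pulse 20):

```typescript
const BASE_MULTIPLIER = 200;
const COMPOUND_BUMP = 1.05;
const GWEI_PER_X = 25; // assumption: 200x = 5,000 Gwei priority fee

// Pulse 1 fires at the base level, so the exponent is (pulse - 1).
function pulseMultiplier(pulse: number): number {
  return BASE_MULTIPLIER * COMPOUND_BUMP ** (pulse - 1);
}

// Pre-calculate all 20 levels once, as the text describes.
const schedule = Array.from({ length: 20 }, (_, i) => ({
  pulse: i + 1,
  multiplier: pulseMultiplier(i + 1),
  priorityFeeGwei: pulseMultiplier(i + 1) * GWEI_PER_X,
}));
```

Pre-computing removes all floating-point work from the hot path; pulses just index into `schedule`.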

Legacy Original Bot Concepts (DEPRECATED)

⚠️ WARNING: Original bot failed in all recent production runs. These concepts are for historical reference only.


1. Battering Ram Mode [LEGACY]

What It Is: Maximum aggression from the very first transaction - no gradual ramp-up.

How It Works [LEGACY]:

  • Initialization: spamGasLevel = 50.0 (not 1.0) [LEGACY]
  • RBF Loop: Starts at 50.0x (not 1.0x) [LEGACY]
  • Shadow Branch: Starts at 75.0x (not 1.5x) [LEGACY]
  • Pre-warming: Escalates to 50.0x (not 3.0x) [LEGACY]

Gas Levels [LEGACY]:

  • 50.0x multiplier = 12,500 Gwei priority fee (250 Gwei base × 50.0) [LEGACY]
  • 75.0x multiplier = 18,750 Gwei priority fee (shadow branch) [LEGACY]

Status: ❌ DEPRECATED - Original bot failed in all recent production runs. Use RBF bot instead.


2. Window Detection

The Problem

Need to detect the exact moment the mint window opens.

Two Methods

Method 1: View-Based Detection (Recommended)

  • Polls contract view functions
  • Checks remainingUnlockTime()
  • Window open when: unlockTime === 0n && !paused

Advantages:

  • Accurate (reads from contract)
  • Efficient (less gas waste)
  • Resilient (works even if timing is off)

Disadvantages:

  • Detection latency (50-100ms for view call)
  • Requires RPC availability
  • Known Issue: Window can open/close between view checks (see docs/audits/POST_MORTEM_2025-11-13_1111_WINDOW.md)

Method 2: Time-Based Spam [LEGACY - DISABLED]

  • Spams transactions based on expected time [LEGACY]
  • Doesn't check view functions during spam [LEGACY]
  • Starts 25s before expected, ends 60s after [LEGACY]

Status: ❌ DISABLED - Not used in RBF bot, disabled in Original bot config

Why Disabled:

  • Wastes gas on reverts (before window opens)
  • Requires accurate expected time (error-prone)
  • RBF bot uses block-event detection instead (no time prediction needed)

3. Nonce Management (Critical Fix 2025-11-14)

The Problem

Nonce must match the network's expected value exactly. If nonce is stale (too low), all transactions are rejected with "nonce too low" errors.

Why It Fails

  1. Nonce fetched at time T
  2. Gas calculation takes 200-500ms
  3. Signing takes 50-100ms
  4. Broadcast takes 50-200ms
  5. By the time RPC receives transaction, network nonce has advanced
  6. Transaction rejected with "nonce too low"

The Solution: Last-Second Fetch

Fetch nonce immediately before signing (not earlier):

  • Early fetch (line 2474-2481): For planning and pre-signing
  • Last-second fetch (line 2566-2575): Right before signing transaction
  • Recovery (line 2629-2639): Immediate refresh on "nonce too low" errors

Implementation

// Last-second fetch (critical fix)
if (inSpamWindow || Math.abs(secondsUntilUnlock) <= 10) {
  const lastSecondNonce = await fetchNonce('pending', 'pre-sign fresh nonce')
  if (lastSecondNonce !== currentNonce) {
    currentNonce = lastSecondNonce
    await saveState({ nonce: currentNonce })
  }
}
// Sign transaction immediately with fresh nonce

Impact

  • Before fix: 100% broadcast failure from stale nonces
  • After fix: Nonce always fresh, transactions accepted

4. Pre-Warming

What It Is

Sending transactions to the mempool before the window opens, with escalating gas.

Why It Works

When the window opens, transactions already in the mempool execute immediately. No need to wait for:

  • Transaction signing
  • RPC broadcast
  • Mempool propagation

Timeline

T-30s: Send at 30.0x gas → Transaction in mempool
T-15s: Replace at 40.0x gas → Higher priority
T-5s:  Replace at 45.0x gas → Even higher priority
T-2s:  Replace at 48.0x gas → Near maximum
T-1s:  Replace at 50.0x gas → Maximum
T-0s:  Window opens → Transaction executes immediately ✅

Strategy

  • Replace-same-nonce: Uses same nonce, replaces with higher gas
  • Continuous: Replaces every 1 second until window opens
  • Multi-RPC: Broadcasts to all RPCs in parallel

Result

Transaction executes the instant window opens (0ms delay from window open to execution).


5. Dual-Branch Strategy

What It Is

Running two parallel transaction attempts:

  • Primary: RBF with same nonce (50.0x gas)
  • Shadow: New nonce with higher gas (75.0x gas)

Why It Works

  • Redundancy: If primary fails, shadow succeeds
  • Speed: Both run in parallel, first success wins
  • Coverage: Handles edge cases (nonce conflicts, RPC issues)

Timeline

T+0ms:   Primary branch starts (RBF, nonce N, 50.0x gas)
T+700ms: Shadow branch starts (new nonce, nonce N+1, 75.0x gas)
         ↓
         Race: First to succeed wins

When Shadow Launches

  • After 700ms delay (DUAL_BRANCH_AFTER_MS)
  • Only if primary hasn't succeeded yet
  • Checks primary status before launching

Result

Higher success rate (redundancy) + faster execution (parallel).
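The race described above can be sketched with plain promises (a simplified model, not the bot's code; a fuller version would prefer the first *success* and cancel the loser):

```typescript
const DUAL_BRANCH_AFTER_MS = 700;

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function dualBranch(
  primary: () => Promise<string>,
  shadow: () => Promise<string>,
  delayMs: number = DUAL_BRANCH_AFTER_MS
): Promise<string> {
  let primarySettled = false;
  const primaryRun = primary().finally(() => {
    primarySettled = true;
  });
  const shadowRun = (async () => {
    await sleep(delayMs);
    // Only launch the shadow branch if primary hasn't settled yet.
    if (primarySettled) return primaryRun;
    return shadow();
  })();
  // First branch to settle wins.
  return Promise.race([primaryRun, shadowRun]);
}
```

If the primary lands within the delay window, the shadow never broadcasts and no extra nonce is burned.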


6. Replace-by-Fee (RBF)

What It Is

Replacing a pending transaction with a new one using the same nonce but higher gas.

Why It's Used

  • Pre-warming: Replace low-gas TX with higher-gas TX
  • Retry: Replace failed TX with same-gas TX (already at max)
  • Cleaner: Avoids nonce exhaustion

How It Works

// First transaction (nonce 42, 50.0x gas)
sendTransaction({ nonce: 42, maxPriorityFeePerGas: 12500 Gwei })

// Replace transaction (same nonce, same gas - already at max)
sendTransaction({ nonce: 42, maxPriorityFeePerGas: 12500 Gwei })

Note: In Battering Ram Mode, we're already at max gas, so replacements don't increase gas (they just retry).

When It's Used

  • Pre-warming: Escalating gas (30.0x → 50.0x)
  • RBF loop: Retrying with same gas (50.0x → 50.0x)
  • Spam mode: Continuous replacement (50.0x → 50.0x)

7. Multi-RPC Parallel Broadcast

What It Is

Sending the same transaction to multiple RPC providers simultaneously.

Why It Works

  • Redundancy: If one RPC fails, others succeed
  • Speed: First RPC to accept wins
  • Reliability: Handles RPC outages automatically

Process

const promises = publicClients.map(client => 
  client.sendRawTransaction({ serializedTransaction: rawTx })
)
const results = await Promise.allSettled(promises)
// First success wins

RPC Selection

  • Circuit Breaker: Excludes blacklisted RPCs
  • RPC Roster: Selects top N healthy RPCs
  • Health Monitoring: Tracks latency, errors, timeouts

Result

Higher success rate (redundancy) + faster execution (parallel).


8. Circuit Breaker

What It Is

Automatic RPC health monitoring and blacklisting.

Why It's Needed

  • Some RPCs are slow or unreliable
  • Sending to bad RPCs wastes time
  • Need to automatically exclude failing RPCs

How It Works

  1. Track Metrics:

    • Latency (p50, p95, p99)
    • Error rate (%)
    • Timeout streak (consecutive)
    • Malformed responses
  2. Blacklist Conditions:

    • p95 latency > 400ms
    • Error rate > 10%
    • Consecutive timeouts > 2
    • Malformed responses > 3 in 30s
  3. Auto-Recovery:

    • Blacklist expires after 60s
    • RPC retested on next health check
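A simplified sketch covering two of the conditions above (timeout streak and blacklist expiry); the clock is injectable for testing, and thresholds mirror the text:

```typescript
class RpcCircuitBreaker {
  private timeoutStreak = 0;
  private blacklistedUntil = 0;

  constructor(
    private now: () => number = Date.now,
    private maxTimeoutStreak = 2,   // "consecutive timeouts > 2"
    private blacklistMs = 60_000    // "blacklist expires after 60s"
  ) {}

  recordSuccess(): void {
    this.timeoutStreak = 0;
  }

  recordTimeout(): void {
    this.timeoutStreak += 1;
    if (this.timeoutStreak > this.maxTimeoutStreak) {
      this.blacklistedUntil = this.now() + this.blacklistMs; // auto-expires
      this.timeoutStreak = 0;
    }
  }

  isHealthy(): boolean {
    return this.now() >= this.blacklistedUntil;
  }
}
```

The real breaker also tracks latency percentiles and error rates; this sketch shows only the blacklist/recovery mechanism.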

Result

Automatic failover to healthy RPCs, improved reliability.


9. Nonce Tracking and Validation

What It Is

Each pre-signed transaction stores its nonce. Before using a pre-signed transaction, the bot validates that the nonce matches the current on-chain nonce.

Why It Exists

Pre-signed transactions can become stale if the nonce advances (e.g., another transaction is mined). Using a stale transaction causes "nonce too low" errors.

How It Works

  1. Pre-signing: Transaction signed with current nonce, stored with nonce
  2. Validation: Before use, check if stored nonce matches current nonce
  3. Rebuild: If nonce changed, automatically rebuild all pre-signed transactions
  4. Fallback: If validation fails, sign transaction on-the-fly

Data Structure

interface PreSignedTxEntry {
  nonce: number
  rawTx: Hex
}

Automatic Rebuild Triggers

  • Initial spam nonce sync
  • Nonce mined before spam broadcast
  • Pre-warm scheduler nonce refresh
  • Pre-warm timer nonce refresh
  • Fire path nonce initialization

Result

Prevents "nonce too low" errors by ensuring all pre-signed transactions use valid nonces.
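The validation step can be sketched over the data structure above (the two-argument lookup here is illustrative; the real helper may differ):

```typescript
interface PreSignedTxEntry {
  nonce: number;
  rawTx: string; // hex-encoded signed transaction
}

const preSignedGasTxs = new Map<number, PreSignedTxEntry>();

// A pre-signed entry is only usable while its stored nonce still
// matches the chain's current nonce; otherwise the caller falls back
// to signing on-the-fly (or rebuilds all entries).
function getValidPreSignedGasTx(
  gasKey: number,
  currentNonce: number
): PreSignedTxEntry | null {
  const entry = preSignedGasTxs.get(gasKey);
  if (!entry || entry.nonce !== currentNonce) return null;
  return entry;
}
```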


10. Pre-Signed Transactions

What It Is

Signing transactions at startup with all gas levels, storing raw signed transactions.

Why It Works

  • Zero Signing Delay: Transaction ready instantly
  • Hot Path Optimization: No calculation during critical window
  • Multiple Levels: Pre-signed at all bump levels

Process

// At startup:
const preSignNonce = await getTransactionCountWithFallback(...) // Always fresh
for (const multiplier of [50.0, 8.0]) { // Battering Ram: nuclear + fee-capped
  const gasParams = await getGasParams(multiplier)
  const rawTx = await walletClient.signTransaction({
    nonce: preSignNonce,
    gas: cachedGasLimit,
    ...gasParams
  })
  preSignedGasTxs.set(Math.floor(multiplier * 100), { nonce: preSignNonce, rawTx })
}

// During fire (with validation):
const nuclearEntry = getValidPreSignedGasTx(5000) // 50.0x = 5000
if (nuclearEntry) {
  await sendRawToAll(nuclearEntry.rawTx) // Instant broadcast (nonce validated)
} else {
  // Nonce changed, will sign on-the-fly
}

Result

~100-200ms faster execution (no signing delay).


11. Hot Path Optimization

What It Is

Optimizing the critical execution path (T-3s to T+2s) for maximum speed.

Optimizations

  • Pre-cached Values: Calldata, gas limit, gas params
  • Pre-signed TXs: Ready instantly
  • Suppressed Logging: Reduce I/O during critical window
  • Minimal Branching: Simple control flow

Hot Path Detection

function isHotPath(): boolean {
  const now = Date.now()
  if (expectedUnlockTimestamp === null) return false
  const msUntilUnlock = expectedUnlockTimestamp - now
  return msUntilUnlock <= 3000 && msUntilUnlock >= -2000
}

Result

Faster execution during critical window.


12. State Persistence

What It Is

Saving bot state to disk for crash recovery.

State File: .miteddy-state.json

{
  "nonce": 42,
  "lastTxHash": "0x...",
  "windowOpen": false,
  "lastError": null,
  "gasLimitHint": "208000"
}

Note: gasLimitHint is stored as a string (not a number) to support arbitrary precision. The bot automatically converts old numeric values to strings on load for backward compatibility.
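The backward-compatible load step can be sketched as a normalizer (the state shape is taken from the example above; defaulting choices are assumptions):

```typescript
interface BotState {
  nonce: number;
  lastTxHash: string | null;
  windowOpen: boolean;
  lastError: string | null;
  gasLimitHint: string | null;
}

function normalizeState(raw: Record<string, unknown>): BotState {
  const hint = raw.gasLimitHint;
  return {
    nonce: typeof raw.nonce === "number" ? raw.nonce : 0,
    lastTxHash: typeof raw.lastTxHash === "string" ? raw.lastTxHash : null,
    windowOpen: raw.windowOpen === true,
    lastError: typeof raw.lastError === "string" ? raw.lastError : null,
    // Old numeric gasLimitHint values become strings so later bigint
    // parsing stays lossless.
    gasLimitHint:
      typeof hint === "number" ? String(hint)
      : typeof hint === "string" ? hint
      : null,
  };
}
```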

When It's Used

  • Crash Recovery: Resume after bot crash
  • Nonce Tracking: Persistent nonce across restarts
  • Error Logging: Save last error for debugging
  • Gas Limit Persistence: Store actual gasUsed from successful mint (Codex refactor 2025-11-13)
    • After first successful mint, stores gasUsed + 20% buffer
    • Future runs reuse this instead of default 500k
    • Reduces nuclear send costs from ~5 BERA to ~3.3 BERA

Recovery Process

  1. Load state on startup
  2. Check if pending transaction exists
  3. Wait for inclusion or replace
  4. Resume operation

Result

Resilience to crashes, nonce tracking across restarts.


13. Time-Based Spam [LEGACY - DISABLED]

Status: ❌ DEPRECATED - Not used in RBF bot, disabled in Original bot config

What It Was [LEGACY]

Continuously sending transactions based on expected unlock time, not view function checks.

Timing [LEGACY]

  • Start: 25s before expected unlock [LEGACY]
  • End: 60s after expected unlock [LEGACY]
  • Rate: 40 tx/sec (every 25ms) [LEGACY]
  • Gas: 50.0x on all transactions [LEGACY]

Why It's Disabled

  • Gas Waste: Many transactions revert (before window opens)
  • Timing Errors: Requires accurate expected time (error-prone)
  • RBF Bot Alternative: Block-event detection is more reliable (no time prediction needed)

RBF Bot Alternative

  • Block-Event Detection: Reacts to on-chain events (no time prediction)
  • Shotgun RBF: 20 pulses per block (more efficient than continuous spam)
  • Better Success Rate: 50-60% vs <5% for Original bot

14. Block Time Prediction

What It Is

Predicting which block the unlock will occur in using linear regression on sampled data.

How It Works

  1. Sample remainingUnlockTime() across multiple blocks (minimum 3, optimal 8)
  2. Use linear regression to fit: remaining = slope * blockNumber + intercept
  3. Predict unlock block: targetBlock = -intercept / slope
  4. Calculate confidence score (R-squared) from regression fit
  5. Estimate unlock timestamp from block prediction and average block time
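The regression step can be sketched with ordinary least squares (a simplified model of what `BlockTimeSynthesizer` does, omitting the zero-remaining filter and timestamp estimation):

```typescript
interface Sample {
  block: number;
  remainingSec: number;
}

// Fit remaining = slope * blockNumber + intercept, then solve for the
// block where remaining hits zero. r2 is the confidence score.
function predictUnlockBlock(
  samples: Sample[]
): { targetBlock: number; r2: number } | null {
  if (samples.length < 3) return null; // minimum 3 samples, as above
  const n = samples.length;
  const mx = samples.reduce((s, p) => s + p.block, 0) / n;
  const my = samples.reduce((s, p) => s + p.remainingSec, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (const p of samples) {
    sxy += (p.block - mx) * (p.remainingSec - my);
    sxx += (p.block - mx) ** 2;
    syy += (p.remainingSec - my) ** 2;
  }
  const slope = sxy / sxx;
  const intercept = my - slope * mx;
  const r2 = syy === 0 ? 1 : (sxy * sxy) / (sxx * syy);
  return { targetBlock: -intercept / slope, r2 };
}
```

For perfectly linear decay (e.g. 2s per block) the fit is exact and r2 is 1; noisy or non-linear decay drives r2 below the 0.7 confidence threshold.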

Implementation (src/timing/synth.ts)

  • BlockTimeSynthesizer class manages samples and predictions
  • Requires minimum 3 samples for prediction
  • Confidence threshold: 0.7 (default)
  • Filters out samples where remainingUnlockTime === 0 (already unlocked)

Usage

  • Aligns polling with block cadence
  • Logs predicted unlock block when < 60s to unlock
  • Helps with timing optimization
  • Provides confidence scores for prediction quality

Limitations

  • Requires multiple samples (minimum 3)
  • Assumes linear decay rate (may not hold if unlock mechanism changes)
  • May be inaccurate if decay rate changes mid-sampling
  • Confidence scores help identify unreliable predictions

15. Gas Calculation

Formula

// Base values
const basePriorityFee = PRIORITY_FEE_GWEI // 250 Gwei
const baseMultiplier = BASE_MULTIPLIER // 50.0

// For multiplier M:
priorityFee = basePriorityFee × M // 250 × 50.0 = 12,500 Gwei
maxFee = (baseFee × baseMultiplier × M) + priorityFee

// Apply caps:
priorityFee = min(priorityFee, MAX_PRIORITY_FEE_GWEI) // 12,500
maxFee = min(maxFee, MAX_FEE_PER_GAS_GWEI) // 100,000
both = min(both, FEE_MAX_GWEI_CAP) // 50,000
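The formula and caps above translate directly into code. A sketch using the constants from the comments (all values in Gwei; treat this as illustrative, not the bot's exact function):

```typescript
const PRIORITY_FEE_GWEI = 250;
const MAX_PRIORITY_FEE_GWEI = 12_500;
const MAX_FEE_PER_GAS_GWEI = 100_000;
const FEE_MAX_GWEI_CAP = 50_000;
const BASE_MULTIPLIER = 50.0;

function gasParams(baseFeeGwei: number, m: number) {
  let priorityFee = PRIORITY_FEE_GWEI * m;
  let maxFee = baseFeeGwei * BASE_MULTIPLIER * m + priorityFee;
  // Per-field caps, then the global FEE_MAX_GWEI_CAP on both fields.
  priorityFee = Math.min(priorityFee, MAX_PRIORITY_FEE_GWEI, FEE_MAX_GWEI_CAP);
  maxFee = Math.min(maxFee, MAX_FEE_PER_GAS_GWEI, FEE_MAX_GWEI_CAP);
  return { priorityFee, maxFee };
}
```

Since the result depends only on the base fee and multiplier, it can be cached by multiplier key as described below.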

Battering Ram Mode

  • Multiplier: 50.0x
  • Priority Fee: 12,500 Gwei
  • Max Fee: Capped at 50,000 Gwei (FEE_MAX_GWEI_CAP)

Caching

  • Calculated once per multiplier
  • Cached permanently (by multiplier key)
  • Instant lookup during fire

16. Nonce Management (Updated 2025-11-14)

Note: See section 3 above for the critical fix (last-second fetch). This section covers the broader nonce management strategy.

Two Strategies

Replace Strategy (Default)

  • Uses same nonce for all transactions
  • Replaces pending TX with higher gas
  • Cleaner, avoids nonce exhaustion

Usage:

  • Pre-warming: Same nonce, escalating gas
  • RBF loop: Same nonce, same gas (retry)
  • Spam mode: Same nonce, same gas (continuous)

Increment Strategy

  • Uses new nonce for each transaction
  • Exhausts nonces quickly
  • Used only for shadow branch

Usage:

  • Shadow branch: nonce+1 (new nonce)

Nonce Tracking

  • Cached: currentNonce (for replace strategy)
  • Persistent: Saved to .miteddy-state.json
  • Sync: Updated if behind chain

17. Error Recovery

RPC Failures

  • Fallback: Try next RPC
  • Circuit Breaker: Blacklist failing RPCs
  • Continue: Operation continues with remaining RPCs

Transaction Failures

  • RBF: Replace with same gas (already at max)
  • Shadow: New nonce if primary fails
  • Retry: Up to MAX_BUMPS attempts

Critical Errors

  • Log: Save to detailed log
  • State: Save error to state file
  • Cleanup: Release resources
  • Exit: Graceful exit with error code

18. Timing Buffer

What It Is

Adjusting expected unlock time to account for early window opening.

Configuration

TIME_OFFSET_MS=-8000  # Start 8 seconds earlier

Why It's Needed

  • Windows sometimes open early (8s observed)
  • Need buffer to account for timing variations
  • Ensures spam starts before actual opening

Calculation

expectedUnlockTimestamp = contractUnlockTimestamp + TIME_OFFSET_MS
// Negative offset = start earlier

Result

Better coverage for timing variations.


19. Time Units System

The Problem

The contract returns remainingUnlockTime() in SECONDS, not milliseconds. Accidentally treating seconds as milliseconds causes catastrophic timing failures (e.g., thinking 300 seconds = 300ms).

The Solution

Type-safe helper functions in src/time/units.ts prevent unit mixing bugs.

Key Functions

Conversion Functions

  • sec(n) - Identity function (makes seconds explicit)
  • ms(n) - Convert seconds to milliseconds (bigint)
  • msNum(n) - Convert seconds to milliseconds (number)
  • secFromMs(n) - Convert milliseconds to seconds (bigint)
  • secFromMsNum(n) - Convert milliseconds to seconds (number)

Comparison Functions

  • isLessThanSeconds(unlockTime, seconds) - Compare unlock time to seconds
  • isLessThanMs(unlockTime, ms) - Compare unlock time to milliseconds

Validation Functions

  • assertSeconds(unlockTime, context) - Validate unlock time is in seconds
    • Throws error if value is suspiciously large (> 1 year in seconds)
    • Helps catch milliseconds/seconds bugs early
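Plausible implementations of these helpers (the real ones live in src/time/units.ts; treat this as a sketch of their contracts):

```typescript
const sec = (n: bigint): bigint => n;            // identity: makes seconds explicit
const ms = (n: bigint): bigint => n * 1000n;     // seconds -> milliseconds (bigint)
const msNum = (n: bigint): number => Number(n) * 1000;
const secFromMs = (n: bigint): bigint => n / 1000n;

const isLessThanSeconds = (unlockTime: bigint, seconds: number): boolean =>
  unlockTime < BigInt(seconds);

// Throws if a value claimed to be seconds looks like milliseconds.
const ONE_YEAR_SECONDS = 31_536_000n;
function assertSeconds(unlockTime: bigint, context: string): void {
  if (unlockTime > ONE_YEAR_SECONDS) {
    throw new Error(
      `${context}: ${unlockTime} is suspiciously large for a seconds value`
    );
  }
}
```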

Usage Example

// ❌ WRONG: Treating seconds as milliseconds
if (unlockTime < 5000) { // This is 5000 seconds, not 5 seconds!
  // ...
}

// ✅ CORRECT: Using helper functions
if (isLessThanSeconds(unlockTime, 5)) { // 5 seconds
  // ...
}

// ✅ CORRECT: Converting to milliseconds when needed
const unlockTimestamp = Date.now() + Number(ms(unlockTime))

Why It Matters

  • Timing Bugs: Mixing units causes bot to fire at wrong time
  • Window Misses: Bot may think window is far away when it's actually close
  • Gas Waste: Incorrect timing leads to premature or late transactions

Best Practices

  1. Always use isLessThanSeconds() for comparisons
  2. Use ms() when converting to JavaScript timestamps
  3. Use assertSeconds() in debug builds to catch bugs
  4. Never compare unlock time directly to numeric literals

20. Single Points of Failure and Resilience (Critical Lesson 2025-11-14)

The 23:11 Catastrophic Failure

On 2025-11-14 at 23:11:11 PST, the bot completely failed to detect the window opening due to a single RPC endpoint hitting rate limits. See docs/audits/POST_MORTEM_2025-11-14_2311_COMPLETE_FAILURE.md for full details.

What happened:

23:10:49 PST (T-11s): PublicNode RPC hits 429 rate limit (600req/60s exceeded)
23:10:49-23:11:14: ALL view function calls fail
23:11:00-23:11:11: Window opens (WebSocket receiving blocks, but bot blind)
23:11:14: PublicNode recovers, bot sees NEXT window (10787s away)
Result: Zero transaction attempts, window completely missed

The Architectural Flaw

Single client for all operations:

// WRONG (pre-fix):
const publicClient = createPublicClient({ transport: http(rpcsHttp[0]) });
const windowDetector = new WindowDetector(publicClient, {...});
// ONE RPC for ALL view functions = single point of failure

Impact:

  • Window detection uses 3 view functions: remainingUnlockTime(), availableMintAmount(), paused()
  • ALL calls routed through ONE RPC endpoint (PublicNode)
  • When PublicNode failed → entire window detection failed
  • WebSocket was working (blocks arriving), but view functions couldn't execute

The Fix: Multi-RPC Fallback Pool

New architecture:

// RIGHT (post-fix):
const rpcPool = new RPCPool({ rpcs: rpcsHttp, circuitBreaker });
const windowDetector = new WindowDetector(publicClient, rpcPool, {...});

// View functions use automatic fallback:
const unlockTime = await rpcPool.callWithFallback(
  (client) => client.readContract({...remainingUnlockTime}),
  'remainingUnlockTime()'
);
// Tries: PublicNode → QuikNode → Official → DRPC → ... (all 9 RPCs)

Features:

  1. Automatic fallback: Tries all healthy RPCs in order
  2. 429 detection: Rate limit errors → temporary blacklist (60s)
  3. Circuit breaker integration: Long-term health tracking
  4. Zero latency fallback: All clients pre-created
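The fallback loop reduces to a few lines. A simplified sketch where each "client" is an async thunk (the real RPCPool also integrates the circuit breaker and 429 blacklisting):

```typescript
type Call<T> = () => Promise<T>;

async function callWithFallback<T>(
  clients: Call<T>[],
  label: string
): Promise<T> {
  let lastError: unknown;
  for (const call of clients) {
    try {
      return await call(); // first healthy RPC to answer wins
    } catch (err) {
      lastError = err; // e.g. a 429 here would trigger a 60s blacklist
    }
  }
  // Only when every RPC fails does the caller see an error, which is
  // the point where the bot must escalate loudly (see below).
  throw new Error(`${label}: all RPCs failed: ${String(lastError)}`);
}
```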

HTTP Polling Waste

The problem:

  • WebSocket: 30 blocks/min × 3 view calls/block = 90 calls/min
  • HTTP polling (200ms): 5 checks/sec × 3 calls/check = 900 calls/min
  • Total: 990 calls/min → exceeds PublicNode's 600req/60s limit

The fix:

// WebSocket delivers first block:
this.wsHealthy = true;
clearInterval(this.pollingInterval); // Disable HTTP polling
log.info('WebSocket healthy, HTTP polling disabled (reduces RPC load 50%)');

// If WebSocket fails:
this.wsHealthy = false;
this.pollingInterval = setInterval(...); // Re-enable HTTP polling
log.warn('WebSocket failed, re-enabling HTTP polling backup');

Impact:

  • Reduces RPC calls from 990/min → 90/min (an ~11× reduction)
  • HTTP polling only runs when WebSocket is down

Redundancy vs. Single Point of Failure

What we thought:

  • ✅ 4 WebSocket RPCs = redundancy
  • ✅ 9 HTTP RPCs = redundancy

Reality:

  • ✅ WebSocket delivers BLOCKS (redundancy works)
  • ❌ View functions use HTTP client #0 ONLY (no redundancy)
  • Missing link: Block delivery ≠ contract state checking

Lesson: Redundancy must cover EVERY critical operation, not just some of them.

Error Handlers Must Escalate, Not Hide

Old error handler (WRONG):

catch (error) {
  log.err('[WINDOW] View function error:', error);
  return { unlockTime: 999999n, paused: true }; // "Helpful" fallback
}

Problem: Returns safe fallback → hides that bot is blind → window opens undetected

Better error handler:

catch (error) {
  // Try other RPCs first (RPCPool handles this)
  // Only return fallback if ALL RPCs fail
  if (allRPCsFailed) {
    log.err('🚨 CRITICAL: ALL RPCs failed, BOT IS BLIND!');
    return { unlockTime: 999999n, paused: true };
  }
}

Lesson: Error handlers should ESCALATE critical problems, not quietly paper over them.

Rate Limits Are a Design Constraint, Not a Bug

We treated 429 errors as temporary glitches.

Reality:

  • PublicNode free tier: 600 requests / 60 seconds
  • Bot's normal operation: 990 requests / 60 seconds
  • This is NOT abuse - this is architectural mismatch

Solutions:

  1. Reduce request frequency (disable redundant polling)
  2. Distribute load across multiple RPCs (fallback pool)
  3. Detect and respect 429 errors (temporary blacklist)
  4. Cache view function results (2s TTL would reduce 90 calls/min → 18 calls/min)
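Solution 4 can be sketched as a small TTL cache with an injectable clock (a synchronous fetch is used here for brevity; a real version would cache a promise):

```typescript
class TtlCache<T> {
  private value: T | undefined;
  private fetchedAt = -Infinity;

  constructor(
    private fetchFn: () => T,          // stands in for a view-function call
    private ttlMs = 2000,              // 2s TTL, as proposed above
    private now: () => number = Date.now
  ) {}

  get(): T {
    if (this.now() - this.fetchedAt >= this.ttlMs) {
      this.value = this.fetchFn();     // stale: hit the RPC again
      this.fetchedAt = this.now();
    }
    return this.value as T;            // fresh: no RPC call at all
  }
}
```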

Lesson: Design for rate limits from the start. Don't fight them with aggressive retry logic.

Integration Points Are Critical

The fatal line of code:

const windowDetector = new WindowDetector(publicClient, {...});
//                                         ^^^^^^^^^^^^
//                                         THIS killed us

Every integration point needs:

  • ✅ Fallback mechanism (if primary fails, try secondary)
  • ✅ Health checking (detect failures before they break system)
  • ✅ Circuit breaker (automatic isolation of failures)
  • ✅ Redundancy (multiple paths to same outcome)

Lesson: The integration layer is where single points of failure hide. Review every constructor, every client creation, every singleton carefully.


Summary

The bot uses multiple strategies in combination:

  • Battering Ram Mode: Maximum gas from start
  • Pre-warming: Transactions in mempool early
  • Dual-branch: Redundancy + speed (RETIRED - conflicts with RBF)
  • Multi-RPC: Parallel broadcast
  • Time-based spam: Maximum coverage (DISABLED in RBF bot)
  • Circuit breaker: Automatic failover
  • Time Units: Type-safe conversions prevent timing bugs
  • RPC Pool Fallback: Eliminates single RPC failure = total blindness (NEW 2025-11-14)
  • Nonce Cascade Protection: Limits nonce burn to 5 maximum (NEW 2025-11-14)

Result: Maximum chance of winning competitive mints with resilience against infrastructure failures.


Version: 1.2