Last Updated: 2025-11-15
The RBF Bot (src/main-rbf.ts) is now the PRIMARY/PRODUCTION bot. The Original Bot (src/index.ts) is LEGACY/DEPRECATED.
This document explains the core concepts and strategies used in the bot.
What It Is: 20 RBF pulses per 2-second block, all using the same nonce with escalating gas.
Why It Works:
- Maximum coverage: 80 broadcast attempts per nonce (20 pulses × 4 RPCs)
- Escalating gas: starts at 200x, reaches ≈505x by pulse 20
- Compound increases: 5% per pulse ensures continuous escalation
How It Works:
- Pulse 1: 200x (5,000 Gwei) → sent to 4 RPCs
- Pulse 2: 210x (5,250 Gwei) → sent to 4 RPCs
- Pulse 3: 220.5x (5,512.5 Gwei) → sent to 4 RPCs
- ... continues to Pulse 20: ≈505x (≈12,635 Gwei)
Success Rate: 50-60% in extreme competition (vs <5% for Original bot)
What It Is: Reacts to on-chain events instead of predicting unlock time.
Why It Works:
- No time prediction needed: Eliminates timing errors
- 0-50ms detection latency: Parallel WebSocket subscriptions
- Multiple RPC redundancy: Automatic fallback on RPC failure
How It Works:
- Multiple RPCs subscribe to new blocks via WebSocket
- On each block, checks `availableMintAmount()` via the RPC pool
- Window opens when `availableMintAmount() > 0`
- No `EXPECTED_UNLOCK_TIME` configuration needed
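The block-event check reduces to a small predicate over freshly fetched contract state. A minimal sketch; the `WindowState` shape and `isWindowOpen` name are illustrative, not from `src/main-rbf.ts`:

```typescript
// Illustrative predicate for the block-event window check.
// The actual bot reads availableMintAmount() through the RPC pool on
// every new block; this shows only the decision rule.
interface WindowState {
  availableMintAmount: bigint
  paused: boolean
}

function isWindowOpen(state: WindowState): boolean {
  // Window is open once availableMintAmount() > 0 and the contract is live
  return state.availableMintAmount > 0n && !state.paused
}
```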
What It Is: Waits for transaction receipt before incrementing nonce.
Why It Works:
- Prevents nonce conflicts: No stale transactions
- More reliable: Chain state confirmed before proceeding
- Prevents nonce exhaustion: Controlled nonce usage
How It Works:
- Sends transaction with nonce N
- Waits for receipt confirmation
- Only increments to nonce N+1 after receipt received
- Prevents "nonce too low" errors
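The receipt-gated flow can be sketched as a tiny state machine. The class below is illustrative (not the bot's real API): a new nonce is only handed out after the previous transaction's receipt has been observed.

```typescript
// Illustrative receipt-gated nonce manager: nonce N+1 is unavailable
// until the transaction using nonce N is confirmed.
class ReceiptGatedNonce {
  private nonce: number
  private awaitingReceipt = false

  constructor(startNonce: number) {
    this.nonce = startNonce
  }

  // Nonce to sign with; refuses a new nonce while a tx is unconfirmed.
  next(): number {
    if (this.awaitingReceipt) throw new Error('previous tx not yet confirmed')
    this.awaitingReceipt = true
    return this.nonce
  }

  // Called when the transaction receipt arrives.
  onReceipt(): void {
    this.awaitingReceipt = false
    this.nonce += 1
  }
}
```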
What It Is: Each pulse increases gas by 5% compound (200x → 210x → 220.5x → ...).
Why It Works:
- Continuous escalation: Always increasing gas pressure
- Pre-calculated: All gas levels computed at startup
- Predictable: Formula: `gasLevel = base × compoundBump^(pulseNumber − 1)`
How It Works:
- Base: 200x (5,000 Gwei)
- Compound bump: 1.05 (5% increase)
- Pulse N: `200 × 1.05^(N − 1)`
- Maximum: ≈505x (≈12,635 Gwei) at pulse 20
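The schedule can be reproduced in a few lines. The 25 Gwei base priority fee is inferred from 200x = 5,000 Gwei; treat the constants as assumptions:

```typescript
// Pre-calculated compound gas schedule (assumed constants:
// 200x base multiplier, 1.05 compound bump, 25 Gwei base priority fee).
const BASE_MULTIPLIER = 200
const COMPOUND_BUMP = 1.05
const BASE_PRIORITY_FEE_GWEI = 25 // 200x => 5,000 Gwei

function gasMultiplier(pulse: number): number {
  // Pulse 1 fires at the base multiplier; each later pulse compounds 5%
  return BASE_MULTIPLIER * Math.pow(COMPOUND_BUMP, pulse - 1)
}

function priorityFeeGwei(pulse: number): number {
  return BASE_PRIORITY_FEE_GWEI * gasMultiplier(pulse)
}

// All 20 levels are computed once at startup:
const schedule = Array.from({ length: 20 }, (_, i) => gasMultiplier(i + 1))
```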
What It Is: Maximum aggression from the very first transaction - no gradual ramp-up.
How It Works [LEGACY]:
- Initialization: `spamGasLevel = 50.0` (not 1.0) [LEGACY]
- RBF Loop: starts at 50.0x (not 1.0x) [LEGACY]
- Shadow Branch: Starts at 75.0x (not 1.5x) [LEGACY]
- Pre-warming: Escalates to 50.0x (not 3.0x) [LEGACY]
Gas Levels [LEGACY]:
- 50.0x multiplier = 12,500 Gwei priority fee (250 Gwei base × 50.0) [LEGACY]
- 75.0x multiplier = 18,750 Gwei priority fee (shadow branch) [LEGACY]
Status: ❌ DEPRECATED - Original bot failed in all recent production runs. Use RBF bot instead.
Need to detect the exact moment the mint window opens.
- Polls contract view functions
- Checks `remainingUnlockTime()`
- Window open when: `unlockTime === 0n && !paused`
Advantages:
- Accurate (reads from contract)
- Efficient (less gas waste)
- Resilient (works even if timing is off)
Disadvantages:
- Detection latency (50-100ms for view call)
- Requires RPC availability
- Known Issue: Window can open/close between view checks (see docs/audits/POST_MORTEM_2025-11-13_1111_WINDOW.md)
- Spams transactions based on expected time [LEGACY]
- Doesn't check view functions during spam [LEGACY]
- Starts 25s before expected, ends 60s after [LEGACY]
Status: ❌ DISABLED - Not used in RBF bot, disabled in Original bot config
Why Disabled:
- Wastes gas on reverts (before window opens)
- Requires accurate expected time (error-prone)
- RBF bot uses block-event detection instead (no time prediction needed)
Nonce must match the network's expected value exactly. If nonce is stale (too low), all transactions are rejected with "nonce too low" errors.
- Nonce fetched at time T
- Gas calculation takes 200-500ms
- Signing takes 50-100ms
- Broadcast takes 50-200ms
- By the time RPC receives transaction, network nonce has advanced
- Transaction rejected with "nonce too low"
Fetch nonce immediately before signing (not earlier):
- Early fetch (line 2474-2481): For planning and pre-signing
- Last-second fetch (line 2566-2575): Right before signing transaction
- Recovery (line 2629-2639): Immediate refresh on "nonce too low" errors
// Last-second fetch (critical fix)
if (inSpamWindow || Math.abs(secondsUntilUnlock) <= 10) {
const lastSecondNonce = await fetchNonce('pending', 'pre-sign fresh nonce')
if (lastSecondNonce !== currentNonce) {
currentNonce = lastSecondNonce
await saveState({ nonce: currentNonce })
}
}
// Sign transaction immediately with fresh nonce

- Before fix: 100% broadcast failure from stale nonces
- After fix: Nonce always fresh, transactions accepted
Sending transactions to the mempool before the window opens, with escalating gas.
When the window opens, transactions already in the mempool execute immediately. No need to wait for:
- Transaction signing
- RPC broadcast
- Mempool propagation
T-30s: Send at 30.0x gas → Transaction in mempool
T-15s: Replace at 40.0x gas → Higher priority
T-5s: Replace at 45.0x gas → Even higher priority
T-2s: Replace at 48.0x gas → Near maximum
T-1s: Replace at 50.0x gas → Maximum
T-0s: Window opens → Transaction executes immediately ✅
- Replace-same-nonce: Uses same nonce, replaces with higher gas
- Continuous: Replaces every 1 second until window opens
- Multi-RPC: Broadcasts to all RPCs in parallel
Transaction executes the instant window opens (0ms delay from window open to execution).
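The escalation timeline can be expressed as a mapping from seconds-until-unlock to a gas multiplier. The thresholds mirror the timeline above but are assumptions, not the bot's exact configuration:

```typescript
// Illustrative pre-warm gas ladder (thresholds assumed from the timeline).
function preWarmMultiplier(secondsUntilUnlock: number): number {
  if (secondsUntilUnlock <= 1) return 50.0  // T-1s: maximum
  if (secondsUntilUnlock <= 2) return 48.0  // T-2s: near maximum
  if (secondsUntilUnlock <= 5) return 45.0
  if (secondsUntilUnlock <= 15) return 40.0
  return 30.0                               // T-30s: initial mempool entry
}
```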
Running two parallel transaction attempts:
- Primary: RBF with same nonce (50.0x gas)
- Shadow: New nonce with higher gas (75.0x gas)
- Redundancy: If primary fails, shadow succeeds
- Speed: Both run in parallel, first success wins
- Coverage: Handles edge cases (nonce conflicts, RPC issues)
T+0ms: Primary branch starts (RBF, nonce N, 50.0x gas)
T+700ms: Shadow branch starts (new nonce, nonce N+1, 75.0x gas)
↓
Race: First to succeed wins
- After 700ms delay (DUAL_BRANCH_AFTER_MS)
- Only if primary hasn't succeeded yet
- Checks primary status before launching
Higher success rate (redundancy) + faster execution (parallel).
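The launch condition for the shadow branch is simple to state explicitly. `DUAL_BRANCH_AFTER_MS` comes from the text above; the function name is illustrative:

```typescript
// Shadow branch launch check: start only after the configured delay,
// and only if the primary branch hasn't already succeeded.
const DUAL_BRANCH_AFTER_MS = 700

function shouldLaunchShadow(elapsedMs: number, primarySucceeded: boolean): boolean {
  return elapsedMs >= DUAL_BRANCH_AFTER_MS && !primarySucceeded
}
```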
Replacing a pending transaction with a new one using the same nonce but higher gas.
- Pre-warming: Replace low-gas TX with higher-gas TX
- Retry: Replace failed TX with same-gas TX (already at max)
- Cleaner: Avoids nonce exhaustion
// First transaction (nonce 42, 50.0x gas)
sendTransaction({ nonce: 42, maxPriorityFeePerGas: 12500 Gwei })
// Replace transaction (same nonce, same gas - already at max)
sendTransaction({ nonce: 42, maxPriorityFeePerGas: 12500 Gwei })

Note: In Battering Ram Mode, we're already at max gas, so replacements don't increase gas (they just retry).
- Pre-warming: Escalating gas (30.0x → 50.0x)
- RBF loop: Retrying with same gas (50.0x → 50.0x)
- Spam mode: Continuous replacement (50.0x → 50.0x)
Sending the same transaction to multiple RPC providers simultaneously.
- Redundancy: If one RPC fails, others succeed
- Speed: First RPC to accept wins
- Reliability: Handles RPC outages automatically
const promises = publicClients.map(client =>
client.sendRawTransaction({ serializedTransaction: rawTx })
)
const results = await Promise.allSettled(promises)
// First success wins- Circuit Breaker: Excludes blacklisted RPCs
- RPC Roster: Selects top N healthy RPCs
- Health Monitoring: Tracks latency, errors, timeouts
Higher success rate (redundancy) + faster execution (parallel).
Automatic RPC health monitoring and blacklisting.
- Some RPCs are slow or unreliable
- Sending to bad RPCs wastes time
- Need to automatically exclude failing RPCs
- Track Metrics:
  - Latency (p50, p95, p99)
  - Error rate (%)
  - Timeout streak (consecutive)
  - Malformed responses
- Blacklist Conditions:
  - p95 latency > 400ms
  - Error rate > 10%
  - Consecutive timeouts > 2
  - Malformed responses > 3 in 30s
- Auto-Recovery:
  - Blacklist expires after 60s
  - RPC retested on next health check
Automatic failover to healthy RPCs, improved reliability.
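The blacklist conditions above translate directly into a decision function. The `RpcHealth` shape is assumed for illustration:

```typescript
// Illustrative blacklist decision using the documented thresholds.
interface RpcHealth {
  p95LatencyMs: number
  errorRate: number        // 0..1 (0.10 = 10%)
  timeoutStreak: number    // consecutive timeouts
  malformedLast30s: number
}

function shouldBlacklist(h: RpcHealth): boolean {
  return (
    h.p95LatencyMs > 400 ||
    h.errorRate > 0.10 ||
    h.timeoutStreak > 2 ||
    h.malformedLast30s > 3
  )
}
```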
Each pre-signed transaction stores its nonce. Before using a pre-signed transaction, the bot validates that the nonce matches the current on-chain nonce.
Pre-signed transactions can become stale if the nonce advances (e.g., another transaction is mined). Using a stale transaction causes "nonce too low" errors.
- Pre-signing: Transaction signed with current nonce, stored with nonce
- Validation: Before use, check if stored nonce matches current nonce
- Rebuild: If nonce changed, automatically rebuild all pre-signed transactions
- Fallback: If validation fails, sign transaction on-the-fly
interface PreSignedTxEntry {
nonce: number
rawTx: Hex
}

- Initial spam nonce sync
- Nonce mined before spam broadcast
- Pre-warm scheduler nonce refresh
- Pre-warm timer nonce refresh
- Fire path nonce initialization
Prevents "nonce too low" errors by ensuring all pre-signed transactions use valid nonces.
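The validation step can be sketched against the `PreSignedTxEntry` shape above. The extra `currentNonce` parameter here is for illustration; the bot's real `getValidPreSignedGasTx` may track the nonce internally:

```typescript
// Illustrative nonce-validated lookup for pre-signed transactions.
// Key is multiplier × 100 (e.g. 50.0x => 5000), as in the pre-signing code.
type Hex = `0x${string}`

interface PreSignedTxEntry {
  nonce: number
  rawTx: Hex
}

const preSignedGasTxs = new Map<number, PreSignedTxEntry>()

function getValidPreSignedGasTx(key: number, currentNonce: number): PreSignedTxEntry | null {
  const entry = preSignedGasTxs.get(key)
  // A mined tx advances the chain nonce and makes every pre-signed tx
  // carrying the old nonce unusable ("nonce too low"), so reject stale entries.
  if (!entry || entry.nonce !== currentNonce) return null
  return entry
}
```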
Signing transactions at startup with all gas levels, storing raw signed transactions.
- Zero Signing Delay: Transaction ready instantly
- Hot Path Optimization: No calculation during critical window
- Multiple Levels: Pre-signed at all bump levels
// At startup:
const preSignNonce = await getTransactionCountWithFallback(...) // Always fresh
for (const multiplier of [50.0, 8.0]) { // Battering Ram: nuclear + fee-capped
const gasParams = await getGasParams(multiplier)
const rawTx = await walletClient.signTransaction({
nonce: preSignNonce,
gas: cachedGasLimit,
...gasParams
})
preSignedGasTxs.set(Math.floor(multiplier * 100), { nonce: preSignNonce, rawTx })
}
// During fire (with validation):
const nuclearEntry = getValidPreSignedGasTx(5000) // 50.0x = 5000
if (nuclearEntry) {
await sendRawToAll(nuclearEntry.rawTx) // Instant broadcast (nonce validated)
} else {
// Nonce changed, will sign on-the-fly
}

~100-200ms faster execution (no signing delay).
Optimizing the critical execution path (T-3s to T+2s) for maximum speed.
- Pre-cached Values: Calldata, gas limit, gas params
- Pre-signed TXs: Ready instantly
- Suppressed Logging: Reduce I/O during critical window
- Minimal Branching: Simple control flow
function isHotPath(): boolean {
const now = Date.now()
if (expectedUnlockTimestamp === null) return false
const msUntilUnlock = expectedUnlockTimestamp - now
return msUntilUnlock >= -3000 && msUntilUnlock <= 2000
}

Faster execution during critical window.
Saving bot state to disk for crash recovery.
{
"nonce": 42,
"lastTxHash": "0x...",
"windowOpen": false,
"lastError": null,
"gasLimitHint": "208000"
}

Note: `gasLimitHint` is stored as a string (not a number) to support arbitrary precision. The bot automatically converts old numeric values to strings on load for backward compatibility.
- Crash Recovery: Resume after bot crash
- Nonce Tracking: Persistent nonce across restarts
- Error Logging: Save last error for debugging
- Gas Limit Persistence: Store actual `gasUsed` from successful mint (Codex refactor 2025-11-13)
  - After first successful mint, stores `gasUsed` + 20% buffer
  - Future runs reuse this instead of the default 500k
  - Reduces nuclear send costs from ~5 BERA to ~3.3 BERA
- Load state on startup
- Check if pending transaction exists
- Wait for inclusion or replace
- Resume operation
Resilience to crashes, nonce tracking across restarts.
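The gas-limit-hint update described above is essentially a one-liner; a sketch, with the `nextGasLimitHint` name assumed:

```typescript
// After a successful mint, persist gasUsed + 20% as a string so future
// runs can reuse it instead of the default 500k gas limit.
function nextGasLimitHint(gasUsed: bigint): string {
  const buffered = (gasUsed * 120n) / 100n // +20% buffer
  return buffered.toString()               // state file stores strings
}
```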
Status: ❌ DEPRECATED - Not used in RBF bot, disabled in Original bot config
Continuously sending transactions based on expected unlock time, not view function checks.
- Start: 25s before expected unlock [LEGACY]
- End: 60s after expected unlock [LEGACY]
- Rate: 40 tx/sec (every 25ms) [LEGACY]
- Gas: 50.0x on all transactions [LEGACY]
- Gas Waste: Many transactions revert (before window opens)
- Timing Errors: Requires accurate expected time (error-prone)
- RBF Bot Alternative: Block-event detection is more reliable (no time prediction needed)
- Block-Event Detection: Reacts to on-chain events (no time prediction)
- Shotgun RBF: 20 pulses per block (more efficient than continuous spam)
- Better Success Rate: 50-60% vs <5% for Original bot
Predicting which block the unlock will occur in using linear regression on sampled data.
- Sample `remainingUnlockTime()` across multiple blocks (minimum 3, optimal 8)
- Use linear regression to fit: `remaining = slope * blockNumber + intercept`
- Predict unlock block: `targetBlock = -intercept / slope`
- Calculate confidence score (R-squared) from regression fit
- Estimate unlock timestamp from block prediction and average block time
- BlockTimeSynthesizer class manages samples and predictions
- Requires minimum 3 samples for prediction
- Confidence threshold: 0.7 (default)
- Filters out samples where `remainingUnlockTime === 0` (already unlocked)
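The regression itself is ordinary least squares over (blockNumber, remainingSeconds) pairs. A self-contained sketch; the sample shape and function name are assumptions, not the actual BlockTimeSynthesizer API:

```typescript
// Least-squares fit: remaining = slope * blockNumber + intercept.
// The unlock block is where the fitted line crosses zero; r2 is the
// confidence score compared against the 0.7 threshold.
interface Sample {
  blockNumber: number
  remainingSeconds: number
}

function predictUnlockBlock(samples: Sample[]): { targetBlock: number; r2: number } | null {
  if (samples.length < 3) return null // minimum 3 samples
  const n = samples.length
  const xMean = samples.reduce((a, s) => a + s.blockNumber, 0) / n
  const yMean = samples.reduce((a, s) => a + s.remainingSeconds, 0) / n
  let sxy = 0, sxx = 0, syy = 0
  for (const s of samples) {
    sxy += (s.blockNumber - xMean) * (s.remainingSeconds - yMean)
    sxx += (s.blockNumber - xMean) ** 2
    syy += (s.remainingSeconds - yMean) ** 2
  }
  const slope = sxy / sxx
  const intercept = yMean - slope * xMean
  const r2 = syy === 0 ? 1 : (sxy * sxy) / (sxx * syy)
  return { targetBlock: -intercept / slope, r2 }
}
```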
- Aligns polling with block cadence
- Logs predicted unlock block when < 60s to unlock
- Helps with timing optimization
- Provides confidence scores for prediction quality
- Requires multiple samples (minimum 3)
- Assumes linear decay rate (may not hold if unlock mechanism changes)
- May be inaccurate if decay rate changes mid-sampling
- Confidence scores help identify unreliable predictions
// Base values
const basePriorityFee = PRIORITY_FEE_GWEI // 250 Gwei
const baseMultiplier = BASE_MULTIPLIER // 50.0
// For multiplier M:
priorityFee = basePriorityFee × M // 250 × 50.0 = 12,500 Gwei
maxFee = (baseFee × baseMultiplier × M) + priorityFee
// Apply caps:
priorityFee = min(priorityFee, MAX_PRIORITY_FEE_GWEI) // 12,500
maxFee = min(maxFee, MAX_FEE_PER_GAS_GWEI) // 100,000
both = min(both, FEE_MAX_GWEI_CAP) // 50,000

- Multiplier: 50.0x
- Priority Fee: 12,500 Gwei
- Max Fee: Capped at 50,000 Gwei (FEE_MAX_GWEI_CAP)
- Calculated once per multiplier
- Cached permanently (by multiplier key)
- Instant lookup during fire
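The pseudocode above can be made runnable; the constants are taken from the example values in this section and should be treated as assumptions:

```typescript
// Runnable version of the fee calculation (example constants assumed).
const BASE_PRIORITY_FEE_GWEI = 250   // PRIORITY_FEE_GWEI
const BASE_MULTIPLIER = 50.0
const MAX_PRIORITY_FEE_GWEI = 12_500
const MAX_FEE_PER_GAS_GWEI = 100_000
const FEE_MAX_GWEI_CAP = 50_000

function computeGas(multiplier: number, baseFeeGwei: number) {
  let priorityFee = BASE_PRIORITY_FEE_GWEI * multiplier
  let maxFee = baseFeeGwei * BASE_MULTIPLIER * multiplier + priorityFee
  // Apply caps (FEE_MAX_GWEI_CAP bounds both fields)
  priorityFee = Math.min(priorityFee, MAX_PRIORITY_FEE_GWEI, FEE_MAX_GWEI_CAP)
  maxFee = Math.min(maxFee, MAX_FEE_PER_GAS_GWEI, FEE_MAX_GWEI_CAP)
  return { priorityFeeGwei: priorityFee, maxFeeGwei: maxFee }
}
```

Caching the result per multiplier key, as the bot does, is what gives instant lookup during fire.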
Note: See section 3 above for the critical fix (last-second fetch). This section covers the broader nonce management strategy.
- Uses same nonce for all transactions
- Replaces pending TX with higher gas
- Cleaner, avoids nonce exhaustion
Usage:
- Pre-warming: Same nonce, escalating gas
- RBF loop: Same nonce, same gas (retry)
- Spam mode: Same nonce, same gas (continuous)
- Uses new nonce for each transaction
- Exhausts nonces quickly
- Used only for shadow branch
Usage:
- Shadow branch: nonce+1 (new nonce)
- Cached: `currentNonce` (for replace strategy)
- Persistent: saved to `.miteddy-state.json`
- Sync: updated if behind chain
- Fallback: Try next RPC
- Circuit Breaker: Blacklist failing RPCs
- Continue: Operation continues with remaining RPCs
- RBF: Replace with same gas (already at max)
- Shadow: New nonce if primary fails
- Retry: Up to MAX_BUMPS attempts
- Log: Save to detailed log
- State: Save error to state file
- Cleanup: Release resources
- Exit: Graceful exit with error code
Adjusting expected unlock time to account for early window opening.
TIME_OFFSET_MS=-8000 # Start 8 seconds earlier

- Windows sometimes open early (8s observed)
- Need buffer to account for timing variations
- Ensures spam starts before actual opening
expectedUnlockTimestamp = contractUnlockTimestamp + TIME_OFFSET_MS
// Negative offset = start earlier

Better coverage for timing variations.
The contract returns remainingUnlockTime() in SECONDS, not milliseconds. Accidentally treating seconds as milliseconds causes catastrophic timing failures (e.g., thinking 300 seconds = 300ms).
Type-safe helper functions in src/time/units.ts prevent unit mixing bugs.
- `sec(n)` - Identity function (makes seconds explicit)
- `ms(n)` - Convert seconds to milliseconds (bigint)
- `msNum(n)` - Convert seconds to milliseconds (number)
- `secFromMs(n)` - Convert milliseconds to seconds (bigint)
- `secFromMsNum(n)` - Convert milliseconds to seconds (number)
- `isLessThanSeconds(unlockTime, seconds)` - Compare unlock time to seconds
- `isLessThanMs(unlockTime, ms)` - Compare unlock time to milliseconds
- `assertSeconds(unlockTime, context)` - Validate unlock time is in seconds
  - Throws error if value is suspiciously large (> 1 year in seconds)
  - Helps catch milliseconds/seconds bugs early
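Plausible implementations of a few of these helpers, following the stated convention that the contract reports seconds (the real `src/time/units.ts` may differ in detail):

```typescript
// Sketch of the unit helpers; seconds are the canonical contract unit.
const ONE_YEAR_SECONDS = 31_536_000n

const sec = (n: bigint): bigint => n               // identity, makes units explicit
const ms = (n: bigint): bigint => n * 1000n        // seconds -> milliseconds (bigint)
const secFromMs = (n: bigint): bigint => n / 1000n // milliseconds -> seconds (bigint)

function isLessThanSeconds(unlockTime: bigint, seconds: number): boolean {
  return unlockTime < BigInt(seconds)
}

function assertSeconds(unlockTime: bigint, context: string): void {
  // A value larger than a year in seconds is almost certainly milliseconds
  if (unlockTime > ONE_YEAR_SECONDS) {
    throw new Error(`${context}: ${unlockTime} looks like milliseconds, expected seconds`)
  }
}
```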
// ❌ WRONG: Treating seconds as milliseconds
if (unlockTime < 5000) { // This is 5000 seconds, not 5 seconds!
// ...
}
// ✅ CORRECT: Using helper functions
if (isLessThanSeconds(unlockTime, 5)) { // 5 seconds
// ...
}
// ✅ CORRECT: Converting to milliseconds when needed
const unlockTimestamp = Date.now() + Number(ms(unlockTime))

- Timing Bugs: Mixing units causes bot to fire at wrong time
- Window Misses: Bot may think window is far away when it's actually close
- Gas Waste: Incorrect timing leads to premature or late transactions
- Always use `isLessThanSeconds()` for comparisons
- Use `ms()` when converting to JavaScript timestamps
- Use `assertSeconds()` in debug builds to catch bugs
- Never compare unlock time directly to numeric literals
On 2025-11-14 at 23:11:11 PST, the bot completely failed to detect the window opening due to a single RPC endpoint hitting rate limits. See docs/audits/POST_MORTEM_2025-11-14_2311_COMPLETE_FAILURE.md for full details.
What happened:
23:10:49 PST (T-11s): PublicNode RPC hits 429 rate limit (600req/60s exceeded)
23:10:49-23:11:14: ALL view function calls fail
23:11:00-23:11:11: Window opens (WebSocket receiving blocks, but bot blind)
23:11:14: PublicNode recovers, bot sees NEXT window (10787s away)
Result: Zero transaction attempts, window completely missed
Single client for all operations:
// WRONG (pre-fix):
const publicClient = createPublicClient({ transport: http(rpcsHttp[0]) });
const windowDetector = new WindowDetector(publicClient, {...});
// ONE RPC for ALL view functions = single point of failure

Impact:
- Window detection uses 3 view functions: `remainingUnlockTime()`, `availableMintAmount()`, `paused()`
- ALL calls routed through ONE RPC endpoint (PublicNode)
- When PublicNode failed → entire window detection failed
- WebSocket was working (blocks arriving), but view functions couldn't execute
New architecture:
// RIGHT (post-fix):
const rpcPool = new RPCPool({ rpcs: rpcsHttp, circuitBreaker });
const windowDetector = new WindowDetector(publicClient, rpcPool, {...});
// View functions use automatic fallback:
const unlockTime = await rpcPool.callWithFallback(
(client) => client.readContract({...remainingUnlockTime}),
'remainingUnlockTime()'
);
// Tries: PublicNode → QuikNode → Official → DRPC → ... (all 9 RPCs)

Features:
- Automatic fallback: Tries all healthy RPCs in order
- 429 detection: Rate limit errors → temporary blacklist (60s)
- Circuit breaker integration: Long-term health tracking
- Zero latency fallback: All clients pre-created
The problem:
- WebSocket: 30 blocks/min × 3 view calls/block = 90 calls/min
- HTTP polling (200ms): 5 checks/sec × 3 calls/check = 900 calls/min
- Total: 990 calls/min → exceeds PublicNode's 600req/60s limit
The fix:
// WebSocket delivers first block:
this.wsHealthy = true;
clearInterval(this.pollingInterval); // Disable HTTP polling
log.info('WebSocket healthy, HTTP polling disabled (reduces RPC load 50%)');
// If WebSocket fails:
this.wsHealthy = false;
this.pollingInterval = setInterval(...); // Re-enable HTTP polling
log.warn('WebSocket failed, re-enabling HTTP polling backup');

Impact:
- Reduces RPC calls from 990/min → 90/min (10× reduction)
- HTTP polling only runs when WebSocket is down
What we thought:
- ✅ 4 WebSocket RPCs = redundancy
- ✅ 9 HTTP RPCs = redundancy
Reality:
- ✅ WebSocket delivers BLOCKS (redundancy works)
- ❌ View functions use HTTP client #0 ONLY (no redundancy)
- Missing link: Block delivery ≠ contract state checking
Lesson: Redundancy must cover EVERY critical operation, not just some of them.
Old error handler (WRONG):
catch (error) {
log.err('[WINDOW] View function error:', error);
return { unlockTime: 999999n, paused: true }; // "Helpful" fallback
}

Problem: Returns safe fallback → hides that bot is blind → window opens undetected
Better error handler:
catch (error) {
// Try other RPCs first (RPCPool handles this)
// Only return fallback if ALL RPCs fail
if (allRPCsFailed) {
log.err('🚨 CRITICAL: ALL RPCs failed, BOT IS BLIND!');
return { unlockTime: 999999n, paused: true };
}
}

Lesson: Error handlers should ESCALATE critical problems, not quietly paper over them.
We treated 429 errors as temporary glitches.
Reality:
- PublicNode free tier: 600 requests / 60 seconds
- Bot's normal operation: 990 requests / 60 seconds
- This is NOT abuse - this is an architectural mismatch
Solutions:
- Reduce request frequency (disable redundant polling)
- Distribute load across multiple RPCs (fallback pool)
- Detect and respect 429 errors (temporary blacklist)
- Cache view function results (2s TTL would reduce 90 calls/min → 18 calls/min)
Lesson: Design for rate limits from the start. Don't fight them with aggressive retry logic.
The fatal line of code:
const windowDetector = new WindowDetector(publicClient, {...});
// ^^^^^^^^^^^^
// THIS killed us

Every integration point needs:
- ✅ Fallback mechanism (if primary fails, try secondary)
- ✅ Health checking (detect failures before they break system)
- ✅ Circuit breaker (automatic isolation of failures)
- ✅ Redundancy (multiple paths to same outcome)
Lesson: The integration layer is where single points of failure hide. Review every constructor, every client creation, every singleton carefully.
The bot uses multiple strategies in combination:
- Battering Ram Mode: Maximum gas from start
- Pre-warming: Transactions in mempool early
- Dual-branch: Redundancy + speed (RETIRED - conflicts with RBF)
- Multi-RPC: Parallel broadcast
- Time-based spam: Maximum coverage (DISABLED in RBF bot)
- Circuit breaker: Automatic failover
- Time Units: Type-safe conversions prevent timing bugs
- RPC Pool Fallback: Eliminates single RPC failure = total blindness (NEW 2025-11-14)
- Nonce Cascade Protection: Limits nonce burn to 5 maximum (NEW 2025-11-14)
Result: Maximum chance of winning competitive mints with resilience against infrastructure failures.
Version: 1.2