diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000..8af972c --- /dev/null +++ b/.gitattributes @@ -0,0 +1,3 @@ +gradlew text eol=lf +*.bat text eol=crlf +*.jar binary diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..d08bb18 --- /dev/null +++ b/.gitignore @@ -0,0 +1,91 @@ +######################################## +# OS / Editor +######################################## +.DS_Store +Thumbs.db + +######################################## +# Environment / Secrets +######################################## +.env +.env.* +*.env + +######################################## +# Logs +######################################## +*.log +hs_err_pid* +replay_pid* + +######################################## +# Node / JS +######################################## +node_modules/ +package-lock.json + +######################################## +# Java / Gradle +######################################## +*.class +*.jar +*.war +*.ear + +.gradle/ +build/ +**/build/ + +# Keep Gradle wrapper +!**/gradle/wrapper/gradle-wrapper.jar + +######################################## +# Repo-local build artifacts +######################################## +bin/ + +######################################## +# IDEs +######################################## +# IntelliJ IDEA +.idea/ +*.iml +*.ipr +*.iws +out/ + +# Eclipse / STS +.project +.classpath +.settings/ +.springBeans +.sts4-cache +.apt_generated +.factorypath + +# NetBeans +/nbproject/private/ +/nbbuild/ +/dist/ +/nbdist/ +/.nb-gradle/ + +# VS Code +.vscode/ + +######################################## +# Solidity / Foundry +######################################## +# Compiler outputs +cache/ +out/ + +# Foundry broadcast logs +!/broadcast +/broadcast/*/31337/ +/broadcast/**/dry-run/ + +######################################## +# Docs / local notes +######################################## +HELP.md diff --git a/.gitmodules b/.gitmodules new file mode 100644 index 0000000..e80ffd8 --- /dev/null +++
b/.gitmodules @@ -0,0 +1,6 @@ +[submodule "lib/forge-std"] + path = lib/forge-std + url = https://github.com/foundry-rs/forge-std +[submodule "lib/openzeppelin-contracts"] + path = lib/openzeppelin-contracts + url = https://github.com/openzeppelin/openzeppelin-contracts diff --git a/README.md b/README.md index 56c7bb7..dd0286c 100644 --- a/README.md +++ b/README.md @@ -1 +1,307 @@ -# tron-settlement-batching-layer +# TRON Settlement Batching Layer (TSBL) + +This repository is a **monorepo** implementing the **TRON Settlement Batching Layer (TSBL)** — a hybrid off-chain/on-chain system for collecting transfer intents, batching them into Merkle trees, and executing transfers on TRON using Merkle proofs with optional whitelist-based batching. + +The system consists of two main parts: + +* **Backend (`/backend`)** — off-chain intent collection, batching, Merkle tree construction, and on-chain orchestration. +* **Smart contracts (`/contracts`)** — on-chain settlement, fee calculation, whitelist verification, and secure execution. + +--- + +## Repository structure + +```text +tron-settlement-batching-layer/ +│ +├── backend/ # Spring Boot backend (intent intake, batching, Merkle, execution) +├── contracts/ # Solidity smart contracts (Foundry-based) +├── docs/ # (optional) architecture & protocol docs +└── README.md # this file +``` + +--- + +## High-level flow + +``` +User / App + ↓ +Backend API (submit intent) + ↓ +Intent batching (off-chain) + ↓ +Merkle tree construction + ↓ +submitBatch(root) ───────────▶ Settlement.sol + │ + │ (time lock) + ▼ +executeTransfer(proof, data) ─▶ Merkle verification + Fee calculation + Token transfer +``` + +--- + +# Backend (`/backend`) + +Spring Boot backend responsible for **intent submission**, **batching**, **Merkle tree construction**, and **interaction with on-chain contracts**. 
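
As an illustration of the Merkle step, the pairwise folding used by OpenZeppelin-style trees (the contracts vendor `openzeppelin-contracts` as a submodule) can be sketched as below. This is a sketch only: the leaf encoding shown is hypothetical, and SHA-256 stands in for the keccak256 hashing the real `MerkleTreeService` presumably uses, so the roots it produces will not match on-chain values.

```java
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HexFormat;
import java.util.List;

/**
 * Sketch of off-chain Merkle root construction with sorted-pair hashing
 * (OpenZeppelin MerkleProof convention). SHA-256 stands in for keccak256;
 * the leaf field layout here is illustrative, not the contract's encoding.
 */
public final class MerkleSketch {

    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    /** Hypothetical leaf encoding — the real layout is defined by Settlement.sol. */
    static byte[] leaf(String from, String to, String amount, long nonce) {
        return sha256((from + "|" + to + "|" + amount + "|" + nonce).getBytes());
    }

    /** Hash a pair with the smaller node first, so proofs need no left/right flags. */
    static byte[] hashPair(byte[] a, byte[] b) {
        byte[] lo = Arrays.compareUnsigned(a, b) <= 0 ? a : b;
        byte[] hi = (lo == a) ? b : a;
        byte[] joined = Arrays.copyOf(lo, lo.length + hi.length);
        System.arraycopy(hi, 0, joined, lo.length, hi.length);
        return sha256(joined);
    }

    /** Fold leaves level by level; an odd node is carried up unchanged. */
    static byte[] root(List<byte[]> leaves) {
        if (leaves.size() < 2) throw new IllegalArgumentException("minimum batch size is 2");
        List<byte[]> level = new ArrayList<>(leaves);
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                next.add(i + 1 < level.size()
                        ? hashPair(level.get(i), level.get(i + 1))
                        : level.get(i));
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) {
        byte[] a = leaf("TKWv...", "TFZM...", "1000000", 123);
        byte[] b = leaf("TFZM...", "TKWv...", "2000000", 124);
        System.out.println("root = 0x" + HexFormat.of().formatHex(root(List.of(a, b))));
    }
}
```

For `txType = 2`, the same construction would apply to the whitelist tree, with member addresses as leaves.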
+ +### What the backend does + +* **Accepts transfer intents** + + * REST API for submitting `(from, to, amount, nonce, txType, …)` +* **Batching** + + * Periodic scheduler groups pending intents (size- or time-based) +* **Merkle** + + * Builds Merkle trees, computes root and per-transfer proofs +* **Settlement submission** + + * Submits batch metadata to `Settlement.sol` +* **Execution** + + * Executes individual transfers on-chain using Merkle proofs +* **Whitelist support** + + * For `txType = 2 (BATCHED)`: + + * Generates whitelist Merkle proofs + * Syncs whitelist root on startup +* **Monitoring APIs** + + * Script-friendly endpoints for debugging and automation + +--- + +## Backend tech stack + +* **Java**: JDK **25** (via Gradle toolchain) +* **Framework**: Spring Boot 4 (WebMVC, Validation) +* **TRON client**: Trident (`io.github.tronprotocol:trident`) +* **Crypto utilities**: web3j (ECDSA, ABI decoding) +* **Build**: Gradle + +--- + +## Backend requirements + +* JDK **25** +* `bash`, `curl`, `jq` (used by test scripts) + +--- + +## Backend quick start + +```bash +cd backend +./gradlew bootRun +``` + +* Default port: `8080` + +### Run tests + +```bash +./gradlew test +``` + +### Build runnable JAR + +```bash +./gradlew bootJar +java -jar build/libs/tsol-backend-0.0.1-SNAPSHOT.jar +``` + +--- + +## Backend configuration + +Configuration is resolved in the following order: + +1. **Environment variables** +2. **`.env` file** (via `spring-dotenv`) +3. 
**Defaults in `application.yaml`** + +Example `.env` (do **not** commit): + +```bash +# Server +PORT=8080 + +# TRON network +NODE_ENDPOINT=grpc.nile.trongrid.io:50051 +CHAIN_ID=3448148188 + +# Settlement +SETTLEMENT_ADDRESS=YOUR_SETTLEMENT_CONTRACT_BASE58 +UPDATER_PRIVATE_KEY=YOUR_64_CHAR_HEX_PRIVATE_KEY_NO_0x +UPDATER_ADDRESS=YOUR_AGGREGATOR_BASE58 + +# Whitelist (for txType=2) +WHITELIST_REGISTRY_ADDRESS=YOUR_WHITELIST_REGISTRY_BASE58 +WL_NEW_ROOT=0xYOUR_WHITELIST_ROOT_HEX +WL_NONCE=0 +WHITELIST_ADDRESSES=BASE58_ADDR_1,BASE58_ADDR_2 + +# Fee module +FEE_MODULE_ADDRESS=YOUR_FEE_MODULE_BASE58 +TOKEN_ADDRESS=TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf +``` + +--- + +## Backend API + +### Submit transfer intent + +**POST** `/api/intents` → `202 Accepted` + +```bash +curl -X POST "http://localhost:8080/api/intents" \ + -H "Content-Type: application/json" \ + -d '{ + "from": "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "to": "TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU", + "amount": "1000000", + "nonce": 123, + "timestamp": 1735000000, + "recipientCount": 1, + "txType": 0 + }' +``` + +**txType mapping:** + +* `0` — DELAYED +* `1` — INSTANT +* `2` — BATCHED (requires whitelist) + +--- + +### Monitoring endpoints + +* `GET /api/monitor/stats` +* `GET /api/monitor/batches` +* `GET /api/monitor/batch/{batchId}` +* `GET /api/monitor/merkle-root/{rootHash}` +* `GET /api/monitor/transfers` +* `POST /api/monitor/create-batch-now` + +--- + +## Backend test scripts + +Located in `/backend`: + +* `test-two-intents-full-flow.sh` +* `test-two-intents-batched-flow.sh` +* `test-20-intents.sh` +* `test-10-intents-batched.sh` + +Run example: + +```bash +./test-two-intents-full-flow.sh +``` + +--- + +## Backend notes + +* **No private key → no on-chain ops** +* **Minimum batch size = 2** +* **txType=2 requires whitelist** +* **Persistence is in-memory** (restart clears state) + +--- + +# Smart Contracts (`/contracts`) + +Solidity contracts implementing **on-chain settlement, fee logic, and whitelist 
verification**. + +Built and tested using **Foundry**. + +--- + +## Core contracts + +### WhitelistRegistry.sol + +* Stores whitelist Merkle root +* Verifies whitelist proofs +* Allows controlled root updates + +**Key functions** + +* `verifyWhitelist(user, proof)` +* `updateMerkleRoot(newRoot, sig)` +* `requestWhitelist(proof)` + +--- + +### FeeModule.sol + +Responsible for **fee calculation and accounting**. + +**Features** + +* Free tier limits +* Fee logic based on `TxType` +* Batch-level and per-user fee tracking +* Whitelist-aware batching discounts + +--- + +### Settlement.sol + +Core on-chain settlement logic. + +**Responsibilities** + +* Accept batched Merkle roots +* Enforce time lock (delayed finality) +* Verify Merkle proofs +* Apply fees +* Execute token transfers +* Prevent double execution + +--- + +## On-chain execution flow + +``` +submitBatch(merkleRoot, txCount) + ↓ + time lock + ↓ +executeTransfer(proof, data) + ↓ +Merkle verification +Fee calculation +Token transfer +``` + +--- + +## Development (contracts) + +```bash +cd contracts +forge build +forge test +``` + +--- + +## Summary + +This monorepo cleanly separates: + +* **Protocol logic (on-chain)** — deterministic, auditable, minimal +* **Operational logic (off-chain)** — batching, scheduling, orchestration + +Together they form a **scalable, auditable, and gas-efficient settlement layer** for TRON. 
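
To make the `executeTransfer(proof, data)` step in the execution flow concrete, the check can be mirrored off-chain. The sketch below assumes OpenZeppelin-style sorted-pair hashing, with SHA-256 standing in for the keccak256 the contracts presumably use — it is illustrative, not a drop-in verifier for this deployment.

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

/**
 * Sketch of Merkle proof verification in the style of OpenZeppelin's
 * MerkleProof.verify: fold the leaf with each sibling (smaller hash first)
 * and compare the result to the committed root. SHA-256 stands in for
 * keccak256, so outputs will not match on-chain roots.
 */
public final class ProofSketch {

    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    /** Order-insensitive pair hash: smaller operand first, then hash the concatenation. */
    static byte[] hashPair(byte[] a, byte[] b) {
        byte[] lo = Arrays.compareUnsigned(a, b) <= 0 ? a : b;
        byte[] hi = (lo == a) ? b : a;
        byte[] joined = Arrays.copyOf(lo, lo.length + hi.length);
        System.arraycopy(hi, 0, joined, lo.length, hi.length);
        return sha256(joined);
    }

    /** True iff folding the leaf with each proof sibling reproduces the root. */
    static boolean verify(List<byte[]> proof, byte[] root, byte[] leaf) {
        byte[] computed = leaf;
        for (byte[] sibling : proof) {
            computed = hashPair(computed, sibling);
        }
        return Arrays.equals(computed, root);
    }

    public static void main(String[] args) {
        byte[] leafA = sha256("intent-1".getBytes());
        byte[] leafB = sha256("intent-2".getBytes());
        byte[] root = hashPair(leafA, leafB);
        // For a two-leaf tree, the proof for leafA is just its sibling leafB.
        System.out.println("valid = " + verify(List.of(leafB), root, leafA)); // prints "valid = true"
    }
}
```

This also shows why the backend enforces a minimum batch size of two: a single-leaf "tree" is its own root with an empty proof, which is not a useful inclusion proof in this scheme.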
diff --git a/backend/FUNCTIONALITY_TABLE.md b/backend/FUNCTIONALITY_TABLE.md new file mode 100644 index 0000000..5645233 --- /dev/null +++ b/backend/FUNCTIONALITY_TABLE.md @@ -0,0 +1,19 @@ +### Functionality table (based on `description.text`) — Done vs Should be added + +| Area | Functionality | Status | Where it exists now | What should be added (gap) | +|---|---|---|---|---| +| **Intents** | Accept transfer intents via API | **Done** | `POST /api/intents` (`TransferIntentController`), `TransferIntentService` | Signed intents (canonical format + signature verification), nonce/replay protection, rate limits | +| **Batching** | Create batches from pending intents (timer/quantity) | **Done (basic)** | `BatchingScheduler`, `BatchService` | More flexible batching policy (priority flags, max-wait per intent), single-intent support if required | +| **Merkle** | Compute tx leaf hash + Merkle root + proofs | **Done** | `MerkleTreeService` + scripts in `sc/script/merkle/` | Formal test vectors + cross-language verifier library | +| **On-chain Settlement** | Submit batch (root+count) | **Done** | `Settlement.sol` + `SettlementContractClientTrident.submitBatchWithTxId` | Production reconciliation (detect stuck submits, backfill event scanning) | +| **Timelock / Deferred** | Unlock time gating before execution | **Done** | `Settlement.sol` unlockTime, `ExecutionScheduler` | Operational controls (pause/resume execution), SLA monitoring | +| **Execution** | Execute transfer with proofs | **Done** | `Settlement.sol.executeTransfer`, `SettlementContractClientTrident.executeTransfer` | **Idempotency check** using `isExecutedTransfer(bytes32)` before sending; better error decoding; retry strategy | +| **Whitelist** | Whitelist Merkle root registry | **Done** | `WhitelistRegistry.sol` | Automated whitelist scoring + scheduled root updates (analytics node) | +| **Whitelist enforcement** | Require whitelist proof only for batched txType | **Done** | `Settlement.sol._validateBatched`, 
Java `BatchService` proof selection | Tools/SDK to generate proofs for external clients | +| **Fee module** | Fee calculation based on txType + free tier quota | **Done (analytics-only)** | `FeeModule.sol` | If “real fees” are required: actual fee collection/transfer + accounting + reporting | +| **Monitoring** | API to view batches/transfers/state | **Done** | `BatchMonitoringController` | Prometheus metrics, dashboards, alerts, audit logs | +| **Persistence** | Store batches/transfers reliably | **Not done** | Current: `InMemoryBatchRepository` | Postgres (or other DB), migrations, restart recovery, indexing strategy | +| **Security model (full vision)** | Rollup/channels, fraud proofs or ZK proofs | **Not done** | — | Off-chain ledger, state root commitments, challenge window (optimistic) or ZK proof pipeline | +| **Router / Custody (full vision)** | Contract accepts deposits, buffers, routes, withdrawals | **Not done** | — | Router contract + event ingestion + exit/withdraw flows | +| **Governance (full vision)** | DAO changes parameters (timings, batch rules, free tier) | **Partial** | Owner/admin controls in contracts | DAO/multisig integration + timelocked parameter changes | +| **Verifier library (full vision)** | OSS verifier for signatures, Merkle proofs, nonce rules | **Not done** | — | Publish libs (JS + backend language) + test vectors + CI | \ No newline at end of file diff --git a/backend/README.md b/backend/README.md new file mode 100644 index 0000000..1a3917c --- /dev/null +++ b/backend/README.md @@ -0,0 +1,155 @@ +### TSBL-backend + +Spring Boot backend for submitting **transfer intents**, batching them into a **Merkle tree**, submitting the batch to an on-chain **Settlement** contract on TRON, and executing transfers using Merkle proofs (optionally with whitelist proofs for batched tx types). + +### What this service does + +- **Accept intents**: REST endpoint to submit transfer intents (from/to/amount/nonce/etc.). 
+- **Batching**: scheduler groups pending intents into batches (size/time based). +- **Merkle**: builds leaf hashes + Merkle root + per-transfer proofs. +- **Settlement submission**: submits the batch (root + tx count) to the Settlement contract. +- **Execution**: after timelock/unlock, executes each transfer on-chain using proofs. +- **Whitelist support**: for `txType=2` (BATCHED) the backend generates a whitelist proof and also syncs whitelist root on startup. +- **Monitoring APIs**: endpoints under `/api/monitor/*` for scripts and debugging. + +### Tech stack + +- **Java**: JDK **25** (Gradle toolchain is set to 25) +- **Framework**: Spring Boot 4 (WebMVC + Validation) +- **TRON client**: Trident (`io.github.tronprotocol:trident`) +- **Crypto utilities**: web3j (ECDSA/ABI decoding) + +### Requirements + +- **JDK 25** installed (or a Gradle toolchain configured on your machine to provision it) +- **bash + curl + jq** (the repo’s `test-*.sh` scripts use `jq`) + +### Quick start + +- **Run locally** (default port `8080`): + +```bash +./gradlew bootRun +``` + +- **Run tests**: + +```bash +./gradlew test +``` + +- **Build a runnable jar**: + +```bash +./gradlew bootJar +java -jar build/libs/tsol-backend-0.0.1-SNAPSHOT.jar +``` + +### Configuration + +Runtime config lives in `src/main/resources/application.yaml` and is driven by: + +- **(1) Environment variables** +- **(2) `.env` file** (supported via `spring-dotenv`) +- **(3) Defaults in `application.yaml`** + +Create a `.env` in the repo root (do **not** commit it): + +```bash +# Server +PORT=8080 + +# TRON gRPC endpoint (Nile default) +NODE_ENDPOINT=grpc.nile.trongrid.io:50051 +CHAIN_ID=3448148188 + +# Settlement contract + aggregator key +SETTLEMENT_ADDRESS=YOUR_SETTLEMENT_CONTRACT_BASE58 +UPDATER_PRIVATE_KEY=YOUR_64_CHAR_HEX_PRIVATE_KEY_NO_0x + +# Optional +UPDATER_ADDRESS=YOUR_AGGREGATOR_BASE58 + +# Whitelist (required for txType=2 / BATCHED) +WHITELIST_REGISTRY_ADDRESS=YOUR_WHITELIST_REGISTRY_BASE58 
+WL_NEW_ROOT=0xYOUR_WHITELIST_ROOT_HEX +WL_NONCE=0 +WHITELIST_ADDRESSES=BASE58_ADDR_1,BASE58_ADDR_2 + +# Fee module + token +FEE_MODULE_ADDRESS=YOUR_FEE_MODULE_BASE58 +TOKEN_ADDRESS=TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf +``` + +### API + +#### Submit a transfer intent + +- **Endpoint**: `POST /api/intents` +- **Response**: `202 Accepted` + +Example: + +```bash +curl -X POST "http://localhost:8080/api/intents" \ + -H "Content-Type: application/json" \ + -d '{ + "from": "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "to": "TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU", + "amount": "1000000", + "nonce": 123, + "timestamp": 1735000000, + "recipientCount": 1, + "txType": 0 + }' +``` + +Request fields: + +- **from**: TRON address (base58) +- **to**: TRON address (base58) +- **amount**: string decimal (commonly token smallest-unit as a string) +- **nonce**: integer +- **timestamp**: unix seconds +- **recipientCount**: used by fee logic (scripts use `1` for txType `0/1`, and `>1` for txType `2`) +- **txType**: integer mapped to Solidity `uint8` (scripts commonly use `0`=DELAYED, `1`=INSTANT, `2`=BATCHED) + +#### Monitoring endpoints (script-friendly) + +- **GET** `/api/monitor/stats`: scheduler status + summary counts +- **GET** `/api/monitor/batches`: all batches with transfers and stats +- **GET** `/api/monitor/batch/{batchId}`: one batch by on-chain batchId +- **GET** `/api/monitor/merkle-root/{rootHash}`: find batch by Merkle root +- **GET** `/api/monitor/transfers`: all transfers across all batches +- **POST** `/api/monitor/create-batch-now`: manual batching trigger (requires at least 2 pending intents) + +### Repo test scripts + +These scripts assume the backend is running on `http://localhost:8080` and your `.env` config is set for Nile. 
+ +- `test-two-intents-full-flow.sh`: submits 2 intents, forces batching, monitors execution +- `test-two-intents-batched-flow.sh`: same as above but uses `txType=2` and validates whitelist proof generation +- `test-20-intents.sh`: submits 20 intents (alternating txType 0/1) and waits for batching+execution +- `test-10-intents-batched.sh`: submits 10 intents with `txType=2`, forces batches, waits for completion + +Run example: + +```bash +./test-two-intents-full-flow.sh +``` + +### Important notes / troubleshooting + +- **No private key = no on-chain ops**: if `UPDATER_PRIVATE_KEY` is missing/invalid, the app will start but blockchain operations (submit/execute/event reads) will be disabled. +- **Minimum batch size is 2**: the scheduler and `/create-batch-now` require at least 2 pending intents (single-tx batches don’t produce valid Merkle proofs in this implementation). +- **txType=2 requires whitelist**: + - `WHITELIST_ADDRESSES` must include the `from` address + - `WHITELIST_REGISTRY_ADDRESS` and `WL_NEW_ROOT` must be correct + - Restart the backend after changing whitelist config (root sync runs on startup) +- **Persistence**: current repository is **in-memory** (`InMemoryBatchRepository`) — restarting the service clears batch state. 
+ +### Docs + +- `FUNCTIONALITY_TABLE.md`: high-level “done vs missing” feature tracking + + diff --git a/backend/build.gradle b/backend/build.gradle new file mode 100644 index 0000000..8190afb --- /dev/null +++ b/backend/build.gradle @@ -0,0 +1,49 @@ +plugins { + id 'java' + id 'org.springframework.boot' version '4.0.0' + id 'io.spring.dependency-management' version '1.1.7' +} + +group = 'dao.tron' +version = '0.0.1-SNAPSHOT' +description = 'Demo project for Spring Boot' + +java { + toolchain { + languageVersion = JavaLanguageVersion.of(25) + } +} + +configurations { + compileOnly { + extendsFrom annotationProcessor + } +} + +repositories { + mavenCentral() +} + +dependencies { + implementation 'org.springframework.boot:spring-boot-starter-validation' + implementation 'org.springframework.boot:spring-boot-starter-webmvc' + implementation('io.github.tronprotocol:trident:0.10.0') + + // Environment variables from .env file support + implementation 'me.paulschwarz:spring-dotenv:4.0.0' + + // Web3j for cryptographic operations (ECDSA signing) + implementation 'org.web3j:crypto:4.9.8' + // Web3j core (ABI decode utilities used for event log decoding) + implementation 'org.web3j:core:4.9.8' + + compileOnly 'org.projectlombok:lombok' + annotationProcessor 'org.projectlombok:lombok' + testImplementation 'org.springframework.boot:spring-boot-starter-validation-test' + testImplementation 'org.springframework.boot:spring-boot-starter-webmvc-test' + testRuntimeOnly 'org.junit.platform:junit-platform-launcher' +} + +tasks.named('test') { + useJUnitPlatform() +} diff --git a/backend/gradle/wrapper/gradle-wrapper.properties b/backend/gradle/wrapper/gradle-wrapper.properties new file mode 100644 index 0000000..23449a2 --- /dev/null +++ b/backend/gradle/wrapper/gradle-wrapper.properties @@ -0,0 +1,7 @@ +distributionBase=GRADLE_USER_HOME +distributionPath=wrapper/dists +distributionUrl=https\://services.gradle.org/distributions/gradle-9.2.1-bin.zip +networkTimeout=10000 
+validateDistributionUrl=true +zipStoreBase=GRADLE_USER_HOME +zipStorePath=wrapper/dists diff --git a/backend/gradlew b/backend/gradlew new file mode 100755 index 0000000..adff685 --- /dev/null +++ b/backend/gradlew @@ -0,0 +1,248 @@ +#!/bin/sh + +# +# Copyright © 2015 the original authors. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# https://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# SPDX-License-Identifier: Apache-2.0 +# + +############################################################################## +# +# Gradle start up script for POSIX generated by Gradle. +# +# Important for running: +# +# (1) You need a POSIX-compliant shell to run this script. If your /bin/sh is +# noncompliant, but you have some other compliant shell such as ksh or +# bash, then to run this script, type that shell name before the whole +# command line, like: +# +# ksh Gradle +# +# Busybox and similar reduced shells will NOT work, because this script +# requires all of these POSIX shell features: +# * functions; +# * expansions «$var», «${var}», «${var:-default}», «${var+SET}», +# «${var#prefix}», «${var%suffix}», and «$( cmd )»; +# * compound commands having a testable exit status, especially «case»; +# * various built-in commands including «command», «set», and «ulimit». +# +# Important for patching: +# +# (2) This script targets any POSIX shell, so it avoids extensions provided +# by Bash, Ksh, etc; in particular arrays are avoided. 
+# +# The "traditional" practice of packing multiple parameters into a +# space-separated string is a well documented source of bugs and security +# problems, so this is (mostly) avoided, by progressively accumulating +# options in "$@", and eventually passing that to Java. +# +# Where the inherited environment variables (DEFAULT_JVM_OPTS, JAVA_OPTS, +# and GRADLE_OPTS) rely on word-splitting, this is performed explicitly; +# see the in-line comments for details. +# +# There are tweaks for specific operating systems such as AIX, CygWin, +# Darwin, MinGW, and NonStop. +# +# (3) This script is generated from the Groovy template +# https://github.com/gradle/gradle/blob/HEAD/platforms/jvm/plugins-application/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt +# within the Gradle project. +# +# You can find Gradle at https://github.com/gradle/gradle/. +# +############################################################################## + +# Attempt to set APP_HOME + +# Resolve links: $0 may be a link +app_path=$0 + +# Need this for daisy-chained symlinks. +while + APP_HOME=${app_path%"${app_path##*/}"} # leaves a trailing /; empty if no leading path + [ -h "$app_path" ] +do + ls=$( ls -ld "$app_path" ) + link=${ls#*' -> '} + case $link in #( + /*) app_path=$link ;; #( + *) app_path=$APP_HOME$link ;; + esac +done + +# This is normally unused +# shellcheck disable=SC2034 +APP_BASE_NAME=${0##*/} +# Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036) +APP_HOME=$( cd -P "${APP_HOME:-./}" > /dev/null && printf '%s\n' "$PWD" ) || exit + +# Use the maximum available, or set MAX_FD != -1 to use that value. +MAX_FD=maximum + +warn () { + echo "$*" +} >&2 + +die () { + echo + echo "$*" + echo + exit 1 +} >&2 + +# OS specific support (must be 'true' or 'false'). 
+cygwin=false +msys=false +darwin=false +nonstop=false +case "$( uname )" in #( + CYGWIN* ) cygwin=true ;; #( + Darwin* ) darwin=true ;; #( + MSYS* | MINGW* ) msys=true ;; #( + NONSTOP* ) nonstop=true ;; +esac + + + +# Determine the Java command to use to start the JVM. +if [ -n "$JAVA_HOME" ] ; then + if [ -x "$JAVA_HOME/jre/sh/java" ] ; then + # IBM's JDK on AIX uses strange locations for the executables + JAVACMD=$JAVA_HOME/jre/sh/java + else + JAVACMD=$JAVA_HOME/bin/java + fi + if [ ! -x "$JAVACMD" ] ; then + die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME + +Please set the JAVA_HOME variable in your environment to match the +location of your Java installation." + fi +else + JAVACMD=java + if ! command -v java >/dev/null 2>&1 + then + die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. + +Please set the JAVA_HOME variable in your environment to match the +location of your Java installation." + fi +fi + +# Increase the maximum file descriptors if we can. +if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then + case $MAX_FD in #( + max*) + # In POSIX sh, ulimit -H is undefined. That's why the result is checked to see if it worked. + # shellcheck disable=SC2039,SC3045 + MAX_FD=$( ulimit -H -n ) || + warn "Could not query maximum file descriptor limit" + esac + case $MAX_FD in #( + '' | soft) :;; #( + *) + # In POSIX sh, ulimit -n is undefined. That's why the result is checked to see if it worked. + # shellcheck disable=SC2039,SC3045 + ulimit -n "$MAX_FD" || + warn "Could not set maximum file descriptor limit to $MAX_FD" + esac +fi + +# Collect all arguments for the java command, stacking in reverse order: +# * args from the command line +# * the main class name +# * -classpath +# * -D...appname settings +# * --module-path (only if needed) +# * DEFAULT_JVM_OPTS, JAVA_OPTS, and GRADLE_OPTS environment variables. 
+ +# For Cygwin or MSYS, switch paths to Windows format before running java +if "$cygwin" || "$msys" ; then + APP_HOME=$( cygpath --path --mixed "$APP_HOME" ) + + JAVACMD=$( cygpath --unix "$JAVACMD" ) + + # Now convert the arguments - kludge to limit ourselves to /bin/sh + for arg do + if + case $arg in #( + -*) false ;; # don't mess with options #( + /?*) t=${arg#/} t=/${t%%/*} # looks like a POSIX filepath + [ -e "$t" ] ;; #( + *) false ;; + esac + then + arg=$( cygpath --path --ignore --mixed "$arg" ) + fi + # Roll the args list around exactly as many times as the number of + # args, so each arg winds up back in the position where it started, but + # possibly modified. + # + # NB: a `for` loop captures its iteration list before it begins, so + # changing the positional parameters here affects neither the number of + # iterations, nor the values presented in `arg`. + shift # remove old arg + set -- "$@" "$arg" # push replacement arg + done +fi + + +# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. +DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"' + +# Collect all arguments for the java command: +# * DEFAULT_JVM_OPTS, JAVA_OPTS, and optsEnvironmentVar are not allowed to contain shell fragments, +# and any embedded shellness will be escaped. +# * For example: A user cannot expect ${Hostname} to be expanded, as it is an environment variable and will be +# treated as '${Hostname}' itself on the command line. + +set -- \ + "-Dorg.gradle.appname=$APP_BASE_NAME" \ + -jar "$APP_HOME/gradle/wrapper/gradle-wrapper.jar" \ + "$@" + +# Stop when "xargs" is not available. +if ! command -v xargs >/dev/null 2>&1 +then + die "xargs is not available" +fi + +# Use "xargs" to parse quoted args. +# +# With -n1 it outputs one arg per line, with the quotes and backslashes removed. 
+# +# In Bash we could simply go: +# +# readarray ARGS < <( xargs -n1 <<<"$var" ) && +# set -- "${ARGS[@]}" "$@" +# +# but POSIX shell has neither arrays nor command substitution, so instead we +# post-process each arg (as a line of input to sed) to backslash-escape any +# character that might be a shell metacharacter, then use eval to reverse +# that process (while maintaining the separation between arguments), and wrap +# the whole thing up as a single "set" statement. +# +# This will of course break if any of these variables contains a newline or +# an unmatched quote. +# + +eval "set -- $( + printf '%s\n' "$DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS" | + xargs -n1 | + sed ' s~[^-[:alnum:]+,./:=@_]~\\&~g; ' | + tr '\n' ' ' + )" '"$@"' + +exec "$JAVACMD" "$@" diff --git a/backend/gradlew.bat b/backend/gradlew.bat new file mode 100644 index 0000000..c4bdd3a --- /dev/null +++ b/backend/gradlew.bat @@ -0,0 +1,93 @@ +@rem +@rem Copyright 2015 the original author or authors. +@rem +@rem Licensed under the Apache License, Version 2.0 (the "License"); +@rem you may not use this file except in compliance with the License. +@rem You may obtain a copy of the License at +@rem +@rem https://www.apache.org/licenses/LICENSE-2.0 +@rem +@rem Unless required by applicable law or agreed to in writing, software +@rem distributed under the License is distributed on an "AS IS" BASIS, +@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +@rem See the License for the specific language governing permissions and +@rem limitations under the License. 
+@rem +@rem SPDX-License-Identifier: Apache-2.0 +@rem + +@if "%DEBUG%"=="" @echo off +@rem ########################################################################## +@rem +@rem Gradle startup script for Windows +@rem +@rem ########################################################################## + +@rem Set local scope for the variables with windows NT shell +if "%OS%"=="Windows_NT" setlocal + +set DIRNAME=%~dp0 +if "%DIRNAME%"=="" set DIRNAME=. +@rem This is normally unused +set APP_BASE_NAME=%~n0 +set APP_HOME=%DIRNAME% + +@rem Resolve any "." and ".." in APP_HOME to make it shorter. +for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi + +@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. +set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m" + +@rem Find java.exe +if defined JAVA_HOME goto findJavaFromJavaHome + +set JAVA_EXE=java.exe +%JAVA_EXE% -version >NUL 2>&1 +if %ERRORLEVEL% equ 0 goto execute + +echo. 1>&2 +echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. 1>&2 +echo. 1>&2 +echo Please set the JAVA_HOME variable in your environment to match the 1>&2 +echo location of your Java installation. 1>&2 + +goto fail + +:findJavaFromJavaHome +set JAVA_HOME=%JAVA_HOME:"=% +set JAVA_EXE=%JAVA_HOME%/bin/java.exe + +if exist "%JAVA_EXE%" goto execute + +echo. 1>&2 +echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% 1>&2 +echo. 1>&2 +echo Please set the JAVA_HOME variable in your environment to match the 1>&2 +echo location of your Java installation. 
1>&2 + +goto fail + +:execute +@rem Setup the command line + + + +@rem Execute Gradle +"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -jar "%APP_HOME%\gradle\wrapper\gradle-wrapper.jar" %* + +:end +@rem End local scope for the variables with windows NT shell +if %ERRORLEVEL% equ 0 goto mainEnd + +:fail +rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of +rem the _cmd.exe /c_ return code! +set EXIT_CODE=%ERRORLEVEL% +if %EXIT_CODE% equ 0 set EXIT_CODE=1 +if not ""=="%GRADLE_EXIT_CONSOLE%" exit %EXIT_CODE% +exit /b %EXIT_CODE% + +:mainEnd +if "%OS%"=="Windows_NT" endlocal + +:omega diff --git a/backend/settings.gradle b/backend/settings.gradle new file mode 100644 index 0000000..01d9ebc --- /dev/null +++ b/backend/settings.gradle @@ -0,0 +1 @@ +rootProject.name = 'tsol-backend' diff --git a/backend/src/main/java/dao/tron/tsol/TsolBackendApplication.java b/backend/src/main/java/dao/tron/tsol/TsolBackendApplication.java new file mode 100644 index 0000000..e69b2e5 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/TsolBackendApplication.java @@ -0,0 +1,15 @@ +package dao.tron.tsol; + +import org.springframework.boot.SpringApplication; +import org.springframework.boot.autoconfigure.SpringBootApplication; +import org.springframework.scheduling.annotation.EnableScheduling; + +@SpringBootApplication +@EnableScheduling +public class TsolBackendApplication { + + public static void main(String[] args) { + SpringApplication.run(TsolBackendApplication.class, args); + } + +} diff --git a/backend/src/main/java/dao/tron/tsol/config/BatchProperties.java b/backend/src/main/java/dao/tron/tsol/config/BatchProperties.java new file mode 100644 index 0000000..0e049da --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/BatchProperties.java @@ -0,0 +1,39 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; 
+import org.springframework.context.annotation.Configuration; + +@Configuration +@ConfigurationProperties(prefix = "batch") +@Data +public class BatchProperties { + + /** + * Maximum number of transactions per batch + */ + private Integer maxTxPerBatch; + + /** + * Timelock duration in seconds before batch can be executed + */ + private Long timelockDuration; + + /** + * Current batch Merkle root (hex format with 0x prefix) + * Example: 0x82067662081cf3c1061cae00166d580285a337264c1eb3c91673579a814d32ea + */ + private String merkleRoot; +} + + + + + + + + + + + + diff --git a/backend/src/main/java/dao/tron/tsol/config/ChainProperties.java b/backend/src/main/java/dao/tron/tsol/config/ChainProperties.java new file mode 100644 index 0000000..8a27961 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/ChainProperties.java @@ -0,0 +1,30 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.context.annotation.Configuration; + +@Configuration +@ConfigurationProperties(prefix = "chain") +@Data +public class ChainProperties { + + /** + * TRON chain ID + * Nile testnet: 3448148188 + * Mainnet: 728126428 + */ + private Long id; +} + + + + + + + + + + + + diff --git a/backend/src/main/java/dao/tron/tsol/config/FeeProperties.java b/backend/src/main/java/dao/tron/tsol/config/FeeProperties.java new file mode 100644 index 0000000..f205b8d --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/FeeProperties.java @@ -0,0 +1,29 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.context.annotation.Configuration; + +@Configuration +@ConfigurationProperties(prefix = "fee") +@Data +public class FeeProperties { + + /** + * Fee module contract address (base58 format) + * Example: TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc + */ + private String moduleAddress; +} + + 
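The configuration classes above bind Spring properties by prefix (`batch`, `chain`, `fee`). A minimal sketch of the matching `application.yml` keys follows; the relaxed kebab-case key names and the numeric placeholder values are assumptions for illustration (only the chain id, the sample Merkle root, and the sample address come from the javadoc above), not a shipped config:

```yaml
batch:
  max-tx-per-batch: 50       # BatchProperties.maxTxPerBatch (placeholder)
  timelock-duration: 300     # BatchProperties.timelockDuration, seconds (placeholder)
  merkle-root: "0x82067662081cf3c1061cae00166d580285a337264c1eb3c91673579a814d32ea"
chain:
  id: 3448148188             # ChainProperties.id — Nile testnet (from the javadoc)
fee:
  module-address: "TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc"  # FeeProperties.moduleAddress (javadoc example)
```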
+ + + + + + + + + + diff --git a/backend/src/main/java/dao/tron/tsol/config/SchedulerProperties.java b/backend/src/main/java/dao/tron/tsol/config/SchedulerProperties.java new file mode 100644 index 0000000..3d1d097 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/SchedulerProperties.java @@ -0,0 +1,72 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.stereotype.Component; + +@Data +@Component +@ConfigurationProperties(prefix = "scheduler") +public class SchedulerProperties { + + private BatchingConfig batching = new BatchingConfig(); + private ExecutionConfig execution = new ExecutionConfig(); + + @Data + public static class BatchingConfig { + /** + * Enable/disable automatic batching + * Default: true + */ + private boolean enabled = true; + + /** + * Maximum number of intents before triggering batch creation + * Default: 5 intents + */ + private int maxIntents = 5; + + /** + * Maximum delay in seconds before triggering batch creation + * Default: 30 seconds + */ + private long maxDelaySeconds = 30; + + /** + * How often to check for batching conditions (in milliseconds) + * Default: 3000ms (3 seconds) + */ + private long checkIntervalMs = 3000; + } + + @Data + public static class ExecutionConfig { + /** + * How often to check for unlocked batches to execute (in milliseconds) + * Default: 5000ms (5 seconds) + */ + private long checkIntervalMs = 5000; + + /** + * Enable/disable automatic execution + * Default: true + */ + private boolean enabled = true; + + /** + * Max number of transfers to execute concurrently per batch. + * Default: 3 (bounded parallelism; improves throughput while staying gentle on public nodes). 
+ */ + private int maxParallel = 3; + } +} + + + + + + + + + + diff --git a/backend/src/main/java/dao/tron/tsol/config/SettlementProperties.java b/backend/src/main/java/dao/tron/tsol/config/SettlementProperties.java new file mode 100644 index 0000000..68e185b --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/SettlementProperties.java @@ -0,0 +1,68 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.context.annotation.Configuration; + +@Configuration +@ConfigurationProperties(prefix = "settlement") +@Data +public class SettlementProperties { + + /** + * gRPC or HTTP endpoint for TRON node + * Example: grpc.nile.trongrid.io:50051 + */ + private String nodeEndpoint; + + /** + * Settlement contract address (base58 format) + * Example: TAhZaywaWM1zAQPADJA39FyoQk8cokRLCd + */ + private String contractAddress; + + /** + * Aggregator private key (hex format, 64 characters) + */ + private String privateKey; + + /** + * Aggregator address (base58 format, derived from private key) + * Example: TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M + */ + private String aggregatorAddress; + + /** + * Transaction polling settings (to reduce RPC load). + */ + private Polling polling = new Polling(); + + @Data + public static class Polling { + /** + * Timeout for getting TransactionInfo after broadcasting a tx. + */ + private long txInfoTimeoutSeconds = 60; + /** + * Initial poll interval for TransactionInfo. + */ + private long txInfoPollInitialMs = 250; + /** + * Maximum poll interval for TransactionInfo (backoff cap). + */ + private long txInfoPollMaxMs = 2000; + + /** + * Timeout for reading BatchSubmitted event. + */ + private long batchSubmittedTimeoutSeconds = 60; + /** + * Initial poll interval for BatchSubmitted event. + */ + private long batchSubmittedPollInitialMs = 500; + /** + * Maximum poll interval for BatchSubmitted event (backoff cap). 
+ */ + private long batchSubmittedPollMaxMs = 3000; + } +} diff --git a/backend/src/main/java/dao/tron/tsol/config/TokenProperties.java b/backend/src/main/java/dao/tron/tsol/config/TokenProperties.java new file mode 100644 index 0000000..3e634b2 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/TokenProperties.java @@ -0,0 +1,29 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.context.annotation.Configuration; + +@Configuration +@ConfigurationProperties(prefix = "token") +@Data +public class TokenProperties { + + /** + * ERC20 token contract address (base58 format) + * Example: TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf + */ + private String address; +} + + + + + + + + + + + + diff --git a/backend/src/main/java/dao/tron/tsol/config/WhitelistProperties.java b/backend/src/main/java/dao/tron/tsol/config/WhitelistProperties.java new file mode 100644 index 0000000..ab931f4 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/config/WhitelistProperties.java @@ -0,0 +1,38 @@ +package dao.tron.tsol.config; + +import lombok.Data; +import org.springframework.boot.context.properties.ConfigurationProperties; +import org.springframework.context.annotation.Configuration; + +@Configuration +@ConfigurationProperties(prefix = "whitelist") +@Data +public class WhitelistProperties { + + /** + * Whitelist registry contract address (base58 format) + * Example: TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn + */ + private String registryAddress; + + /** + * Current Merkle root for whitelist (hex format with 0x prefix) + * Example: 0x02012517de2680f90c5eb1b6c64e04e21424609e331954b45e202ace05e2938b + */ + private String merkleRoot; + + /** + * Nonce for whitelist updates + */ + private Long nonce; + + /** + * List of whitelisted addresses (base58 format) + * Example: ["TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", "TVKAAcqpQxz3J4waayePr8dQjSQ2XHkdbF"] + */ + private java.util.List 
<String> addresses; +} + + + + diff --git a/backend/src/main/java/dao/tron/tsol/controller/BatchMonitoringController.java b/backend/src/main/java/dao/tron/tsol/controller/BatchMonitoringController.java new file mode 100644 index 0000000..c40af5b --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/controller/BatchMonitoringController.java @@ -0,0 +1,415 @@ +package dao.tron.tsol.controller; + +import dao.tron.tsol.model.LocalBatch; +import dao.tron.tsol.model.StoredTransfer; +import dao.tron.tsol.model.TransferData; +import dao.tron.tsol.service.TransferIntentService; +import dao.tron.tsol.service.BatchService; +import dao.tron.tsol.service.MerkleTreeService; +import dao.tron.tsol.config.SchedulerProperties; +import lombok.extern.slf4j.Slf4j; +import org.springframework.http.ResponseEntity; +import org.springframework.web.bind.annotation.*; + +import java.util.*; +import java.util.stream.Stream; + +/** + * Comprehensive monitoring endpoints for batches, transfers, and Merkle trees. + * Exposes all information held in the local batch repository. + */ +@Slf4j +@RestController +@RequestMapping("/api/monitor") +public class BatchMonitoringController { + + private final BatchService batchService; + private final MerkleTreeService merkleTreeService; + private final TransferIntentService intentService; + private final SchedulerProperties schedulerProps; + + public BatchMonitoringController(BatchService batchService, + MerkleTreeService merkleTreeService, + TransferIntentService intentService, + SchedulerProperties schedulerProps) { + this.batchService = batchService; + this.merkleTreeService = merkleTreeService; + this.intentService = intentService; + this.schedulerProps = schedulerProps; + } + + /** + * GET /api/monitor/batches + * Get all batches with complete information + */ + @GetMapping("/batches") + public ResponseEntity<Map<String, Object>> getAllBatches() { + Map<String, Object> response = new LinkedHashMap<>(); + + try { + List<LocalBatch> batches = 
batchService.getBatches(); + + List<Map<String, Object>> batchInfo = new ArrayList<>(); + + for (LocalBatch batch : batches) { + Map<String, Object> info = buildBatchInfo(batch); + batchInfo.add(info); + } + + response.put("status", "SUCCESS"); + response.put("totalBatches", batches.size()); + response.put("batches", batchInfo); + + // Summary statistics + int totalTransfers = batches.stream() + .mapToInt(b -> b.getTransfers() != null ? b.getTransfers().size() : 0) + .sum(); + + int executedTransfers = batches.stream() + .flatMap(b -> b.getTransfers() != null ? b.getTransfers().stream() : Stream.empty()) + .mapToInt(t -> t.isExecuted() ? 1 : 0) + .sum(); + + response.put("statistics", Map.of( + "totalBatches", batches.size(), + "totalTransfers", totalTransfers, + "executedTransfers", executedTransfers, + "pendingTransfers", totalTransfers - executedTransfers + )); + + } catch (Exception e) { + log.error("Error getting all batches", e); + response.put("status", "ERROR"); + response.put("error", e.getMessage()); + return ResponseEntity.status(500).body(response); + } + + return ResponseEntity.ok(response); + } + + /** + * GET /api/monitor/stats + * + * Script-friendly endpoint used by test scripts in repo root. + */ + @GetMapping("/stats") + public ResponseEntity<Map<String, Object>> getStats() { + Map<String, Object> response = new LinkedHashMap<>(); + + List<LocalBatch> batches = batchService.getBatches(); + int pendingIntents = intentService.getPendingCount(); + + int totalBatches = batches.size(); + long completedBatches = batches.stream().filter(b -> b.getStatus() != null && b.getStatus().name().equals("COMPLETED")).count(); + + int totalTransfersInBatches = batches.stream() + .mapToInt(b -> b.getTransfers() != null ? b.getTransfers().size() : 0) + .sum(); + + int executedTransfers = batches.stream() + .flatMap(b -> b.getTransfers() != null ? b.getTransfers().stream() : Stream.empty()) + .mapToInt(t -> t.isExecuted() ? 
1 : 0) + .sum(); + + response.put("status", "SUCCESS"); + response.put("schedulers", Map.of( + "batching", Map.of( + "enabled", schedulerProps.getBatching().isEnabled(), + "maxIntents", schedulerProps.getBatching().getMaxIntents(), + "maxDelaySeconds", schedulerProps.getBatching().getMaxDelaySeconds() + ), + "execution", Map.of( + "enabled", schedulerProps.getExecution().isEnabled() + ) + )); + + response.put("statistics", Map.of( + "totalTransfers", totalTransfersInBatches + pendingIntents, + "pendingTransfers", pendingIntents, + "executedTransfers", executedTransfers, + "totalBatches", totalBatches, + "completedBatches", completedBatches + )); + + return ResponseEntity.ok(response); + } + + /** + * POST /api/monitor/create-batch-now + * + * Script-friendly manual trigger for batching. + */ + @PostMapping("/create-batch-now") + public ResponseEntity<Map<String, Object>> createBatchNow() { + Map<String, Object> response = new LinkedHashMap<>(); + + try { + int pending = intentService.getPendingCount(); + if (pending < 2) { + response.put("success", false); + response.put("error", "Need at least 2 pending intents to create a valid batch (current=" + pending + ")"); + return ResponseEntity.badRequest().body(response); + } + + int maxIntents = schedulerProps.getBatching().getMaxIntents(); + int before = batchService.getBatches().size(); + + batchService.createAndSubmitBatch(maxIntents); + + List<LocalBatch> afterBatches = batchService.getBatches(); + if (afterBatches.size() <= before) { + response.put("success", false); + response.put("error", "Batch was not created (no new LocalBatch stored)"); + return ResponseEntity.status(500).body(response); + } + + LocalBatch newest = afterBatches.stream() + .max(Comparator.comparingLong(LocalBatch::getLocalId)) + .orElseThrow(); + + response.put("success", true); + response.put("batchId", newest.getOnChainBatchId()); + response.put("merkleRoot", newest.getMerkleRootHex()); + response.put("txCount", newest.getTxCount()); + return ResponseEntity.ok(response); + } catch 
(Exception e) { + response.put("success", false); + response.put("error", e.getMessage()); + return ResponseEntity.status(500).body(response); + } + } + + /** + * GET /api/monitor/batch/{batchId} + * Get complete information about a specific batch + */ + @GetMapping("/batch/{batchId}") + public ResponseEntity<Map<String, Object>> getBatchDetails(@PathVariable Long batchId) { + Map<String, Object> response = new LinkedHashMap<>(); + + try { + LocalBatch batch = batchService.getByOnChainBatchId(batchId); + + Map<String, Object> info = buildBatchInfo(batch); + + response.put("status", "SUCCESS"); + // Script compatibility: scripts expect a top-level "batch" object + response.put("batch", info); + // Also keep legacy flat keys for humans/debugging + response.putAll(info); + + } catch (IllegalArgumentException e) { + response.put("status", "NOT_FOUND"); + response.put("error", e.getMessage()); + return ResponseEntity.status(404).body(response); + } catch (Exception e) { + log.error("Error getting batch details", e); + response.put("status", "ERROR"); + response.put("error", e.getMessage()); + return ResponseEntity.status(500).body(response); + } + + return ResponseEntity.ok(response); + } + + /** + * GET /api/monitor/merkle-root/{rootHash} + * Get batch by Merkle root hash + */ + @GetMapping("/merkle-root/{rootHash}") + public ResponseEntity<Map<String, Object>> getBatchByMerkleRoot(@PathVariable String rootHash) { + Map<String, Object> response = new LinkedHashMap<>(); + + try { + // Ensure root has 0x prefix + if (!rootHash.startsWith("0x")) { + rootHash = "0x" + rootHash; + } + + LocalBatch batch = batchService.getByMerkleRoot(rootHash); + + Map<String, Object> info = buildBatchInfo(batch); + + response.put("status", "SUCCESS"); + response.putAll(info); + + } catch (IllegalArgumentException e) { + response.put("status", "NOT_FOUND"); + response.put("error", e.getMessage()); + return ResponseEntity.status(404).body(response); + } catch (Exception e) { + log.error("Error getting batch by merkle root", e); + response.put("status", "ERROR"); + response.put("error", 
e.getMessage()); + return ResponseEntity.status(500).body(response); + } + + return ResponseEntity.ok(response); + } + + /** + * GET /api/monitor/transfers + * Get all transfers across all batches + */ + @GetMapping("/transfers") + public ResponseEntity<Map<String, Object>> getAllTransfers() { + Map<String, Object> response = new LinkedHashMap<>(); + + try { + List<LocalBatch> batches = batchService.getBatches(); + List<Map<String, Object>> allTransfers = new ArrayList<>(); + + for (LocalBatch batch : batches) { + if (batch.getTransfers() == null) continue; + + for (int i = 0; i < batch.getTransfers().size(); i++) { + StoredTransfer st = batch.getTransfers().get(i); + Map<String, Object> transferInfo = buildTransferInfo(st, i, batch); + allTransfers.add(transferInfo); + } + } + + response.put("status", "SUCCESS"); + response.put("totalTransfers", allTransfers.size()); + response.put("transfers", allTransfers); + + // Group by status + long executed = allTransfers.stream().filter(t -> Boolean.TRUE.equals(t.get("executed"))).count(); + long pending = allTransfers.size() - executed; + + response.put("summary", Map.of( + "total", allTransfers.size(), + "executed", executed, + "pending", pending + )); + + } catch (Exception e) { + log.error("Error getting all transfers", e); + response.put("status", "ERROR"); + response.put("error", e.getMessage()); + return ResponseEntity.status(500).body(response); + } + + return ResponseEntity.ok(response); + } + + private Map<String, Object> buildBatchInfo(LocalBatch batch) { + Map<String, Object> info = new LinkedHashMap<>(); + + info.put("batchId", batch.getOnChainBatchId()); + info.put("submitTxId", batch.getSubmitTxId()); + info.put("merkleRoot", batch.getMerkleRootHex()); + info.put("txCount", batch.getTxCount()); + info.put("submittedAt", batch.getSubmittedAt()); + info.put("submittedAtReadable", batch.getSubmittedAt() > 0 ? + new Date(batch.getSubmittedAt() * 1000).toString() : "N/A"); + info.put("status", batch.getStatus() != null ? 
batch.getStatus().toString() : "UNKNOWN"); + info.put("unlockTime", batch.getUnlockTime()); + info.put("unlockTimeReadable", batch.getUnlockTime() > 0 ? + new Date(batch.getUnlockTime() * 1000).toString() : "N/A"); + + if (batch.getTransfers() != null) { + info.put("transferCount", batch.getTransfers().size()); + + List<Map<String, Object>> transfers = new ArrayList<>(); + for (int i = 0; i < batch.getTransfers().size(); i++) { + StoredTransfer st = batch.getTransfers().get(i); + transfers.add(buildTransferInfo(st, i, batch)); + } + info.put("transfers", transfers); + + // Transfer execution summary + long executed = batch.getTransfers().stream().filter(StoredTransfer::isExecuted).count(); + info.put("executionSummary", Map.of( + "total", batch.getTransfers().size(), + "executed", executed, + "pending", batch.getTransfers().size() - executed + )); + } else { + info.put("transferCount", 0); + info.put("transfers", Collections.emptyList()); + } + + return info; + } + + private Map<String, Object> buildTransferInfo(StoredTransfer st, int index, LocalBatch batch) { + Map<String, Object> info = new LinkedHashMap<>(); + + TransferData td = st.getTxData(); + + info.put("index", index); + info.put("batchId", batch.getOnChainBatchId()); + // Script compatibility: scripts expect transfer.txData.{from,to,amount,...} + Map<String, Object> txData = new LinkedHashMap<>(); + txData.put("from", td.getFrom()); + txData.put("to", td.getTo()); + txData.put("amount", td.getAmount()); + txData.put("nonce", td.getNonce()); + txData.put("timestamp", td.getTimestamp()); + txData.put("recipientCount", td.getRecipientCount()); + txData.put("txType", td.getTxType()); + txData.put("batchId", td.getBatchId()); + info.put("txData", txData); + + // Keep legacy flattened fields for existing users + info.put("from", td.getFrom()); + info.put("to", td.getTo()); + info.put("amount", td.getAmount()); + info.put("nonce", td.getNonce()); + info.put("timestamp", td.getTimestamp()); + info.put("timestampReadable", new Date(td.getTimestamp() * 1000).toString()); + 
info.put("recipientCount", td.getRecipientCount()); + info.put("txType", td.getTxType()); + info.put("executed", st.isExecuted()); + info.put("executionTxId", st.getExecutionTxId()); + info.put("proofSize", st.getTxProof() != null ? st.getTxProof().size() : 0); + // Helpful for txType=2 monitoring (BATCHED requires a whitelist proof). + // Keep only the size (do not expose full proof array in monitoring response). + int wlSize = st.getWhitelistProof() != null ? st.getWhitelistProof().size() : 0; + info.put("whitelistProofSize", wlSize); + + // Calculate tx hash for reference + byte[] txHash = merkleTreeService.leafHash(td, batch.getBatchSalt()); + info.put("txHash", "0x" + bytesToHex(txHash)); + + return info; + } + + @SuppressWarnings("unused") + private Map<String, Object> buildDetailedTransferInfo(StoredTransfer st, int index, LocalBatch batch) { + Map<String, Object> info = buildTransferInfo(st, index, batch); + + if (st.getTxProof() != null && !st.getTxProof().isEmpty()) { + info.put("merkleProof", st.getTxProof()); + } else { + info.put("merkleProof", Collections.emptyList()); + } + + if (st.getWhitelistProof() != null && !st.getWhitelistProof().isEmpty()) { + info.put("whitelistProof", st.getWhitelistProof()); + } else { + info.put("whitelistProof", Collections.emptyList()); + } + + info.put("batch", Map.of( + "batchId", batch.getOnChainBatchId(), + "merkleRoot", batch.getMerkleRootHex(), + "status", batch.getStatus() != null ? 
batch.getStatus().toString() : "UNKNOWN" + )); + + return info; + } + + private String bytesToHex(byte[] bytes) { + StringBuilder sb = new StringBuilder(bytes.length * 2); + for (byte b : bytes) { + sb.append(String.format("%02x", b & 0xff)); + } + return sb.toString(); + } +} + diff --git a/backend/src/main/java/dao/tron/tsol/controller/TransferIntentController.java b/backend/src/main/java/dao/tron/tsol/controller/TransferIntentController.java new file mode 100644 index 0000000..78dc67e --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/controller/TransferIntentController.java @@ -0,0 +1,24 @@ +package dao.tron.tsol.controller; + +import dao.tron.tsol.model.TransferIntentRequest; +import dao.tron.tsol.service.TransferIntentService; +import jakarta.validation.Valid; +import org.springframework.http.ResponseEntity; +import org.springframework.web.bind.annotation.*; + +@RestController +@RequestMapping("/api/intents") +public class TransferIntentController { + + private final TransferIntentService intentService; + + public TransferIntentController(TransferIntentService intentService) { + this.intentService = intentService; + } + + @PostMapping + public ResponseEntity<Void> submitIntent(@Valid @RequestBody TransferIntentRequest req) { + intentService.addIntent(req); + return ResponseEntity.accepted().build(); + } +} diff --git a/backend/src/main/java/dao/tron/tsol/event/BatchSubmittedEvent.java b/backend/src/main/java/dao/tron/tsol/event/BatchSubmittedEvent.java new file mode 100644 index 0000000..715ffdd --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/event/BatchSubmittedEvent.java @@ -0,0 +1,22 @@ +package dao.tron.tsol.event; + +/** + * DTO representing the Settlement BatchSubmitted event. 
+ * + * Solidity: + * event BatchSubmitted(uint64 batchId, bytes32 merkleRoot, uint32 txCount, uint48 timestamp); + * + * NOTE: + * - Older contracts emitted all params as non-indexed (all values in log.data) + * - Newer contracts index batchId + merkleRoot (so those are in topics, while txCount+timestamp are in log.data) + * + * This record represents the fully decoded values independent of how they were indexed. + */ +public record BatchSubmittedEvent( + long batchId, + String merkleRootHex, + int txCount, + long timestamp +) {} + + diff --git a/backend/src/main/java/dao/tron/tsol/event/BatchSubmittedEventReader.java b/backend/src/main/java/dao/tron/tsol/event/BatchSubmittedEventReader.java new file mode 100644 index 0000000..e11adaa --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/event/BatchSubmittedEventReader.java @@ -0,0 +1,233 @@ +package dao.tron.tsol.event; + +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Service; +import org.tron.trident.core.ApiWrapper; +import org.tron.trident.proto.Response; +import org.tron.trident.utils.Numeric; +import org.web3j.abi.FunctionReturnDecoder; +import org.web3j.abi.TypeReference; +import org.web3j.abi.datatypes.Type; +import org.web3j.abi.datatypes.generated.Bytes32; +import org.web3j.abi.datatypes.generated.Uint256; +import org.web3j.abi.datatypes.generated.Uint32; +import org.web3j.abi.datatypes.generated.Uint64; +import org.web3j.crypto.Hash; + +import java.time.Duration; +import java.util.concurrent.ThreadLocalRandom; +import java.util.Arrays; +import java.util.List; +import java.util.Locale; +import java.util.Optional; + +/** + * Reads Settlement BatchSubmitted event from TRON tx receipt logs using Trident. + *

+ * Requirements: + * - poll ApiWrapper.getTransactionInfoById(txid) + * - scan TransactionInfo.log[] topics for topic0 == keccak256("BatchSubmitted(uint64,bytes32,uint32,uint48)") + * - decode indexed params from topics when present (new contracts index batchId and merkleRoot) + * - decode log.data for non-indexed params (txCount, timestamp) + * - fallback handled elsewhere + */ +@Slf4j +@Service +public class BatchSubmittedEventReader { + + public static final String EVENT_SIGNATURE = "BatchSubmitted(uint64,bytes32,uint32,uint48)"; + + // topic0 = keccak256(eventSignature) + private static final String TOPIC0_HEX = Hash.sha3String(EVENT_SIGNATURE); // 0x... + private static final String TOPIC0_NORM32 = normalizeHexN(TOPIC0_HEX).toLowerCase(Locale.ROOT); + + private final ApiWrapper wrapper; + + public BatchSubmittedEventReader(dao.tron.tsol.config.SettlementProperties settlementProps) { + String privateKey = settlementProps.getPrivateKey(); + if (privateKey == null || privateKey.isBlank() || privateKey.equals("YOUR_PRIVATE_KEY_HERE")) { + this.wrapper = null; + log.warn("BatchSubmittedEventReader: missing UPDATER_PRIVATE_KEY, event reading disabled."); + } else { + this.wrapper = ApiWrapper.ofNile(privateKey); + } + } + + public Optional<BatchSubmittedEvent> readWithTimeout(String txId, Duration timeout, Duration pollInterval) { + if (wrapper == null) return Optional.empty(); + long deadline = System.currentTimeMillis() + timeout.toMillis(); + long sleepMs = Math.max(200, pollInterval.toMillis()); + // Cap backoff at ~5x the initial interval (or 3s minimum cap). 
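+ // Illustrative (assumed) schedule: with the default BatchSubmitted poll interval of 500ms, + // successive sleeps grow by x1.5 — roughly 500, 750, 1125, 1688, 2532ms — then clamp at the + // cap computed below, with 0-150ms of random jitter added to each sleep.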
+ long maxSleepMs = Math.max(sleepMs * 5, 3000L); + + while (System.currentTimeMillis() < deadline) { + Response.TransactionInfo info; + try { + info = wrapper.getTransactionInfoById(txId); + } catch (Exception e) { + log.debug("txInfo not available yet for {}: {}", txId, e.getMessage()); + info = null; + } + + if (info != null) { + Optional<BatchSubmittedEvent> ev = findEventInTxInfo(info); + if (ev.isPresent()) return ev; + } + + // Backoff + jitter to reduce load on public nodes (especially when many txs are in-flight). + long jitter = ThreadLocalRandom.current().nextLong(0, 150); + if (!sleepQuietly(sleepMs + jitter)) return Optional.empty(); + sleepMs = Math.min(maxSleepMs, (long) Math.ceil(sleepMs * 1.5)); + } + + return Optional.empty(); + } + + public Optional<BatchSubmittedEvent> findEventInTxInfo(Response.TransactionInfo info) { + if (info == null) return Optional.empty(); + + // TRON: receipt/logs may exist even if reverted; caller should validate receipt separately if desired. + int logCount = info.getLogCount(); + if (logCount == 0) return Optional.empty(); + + for (int i = 0; i < logCount; i++) { + Response.TransactionInfo.Log l = info.getLog(i); + if (l.getTopicsCount() == 0) continue; + + String topic0 = Numeric.toHexString(l.getTopics(0).toByteArray()); + if (!normalizeHexN(topic0).equalsIgnoreCase(TOPIC0_NORM32)) { + continue; + } + + try { + // New Settlement contract (per sc/src/interfaces/ISettlement.sol): + // event BatchSubmitted(uint64 indexed batchId, bytes32 indexed merkleRoot, uint32 txCount, uint48 timestamp); + if (l.getTopicsCount() >= 3) { + String topicBatchId = Numeric.toHexString(l.getTopics(1).toByteArray()); + String topicMerkleRoot = Numeric.toHexString(l.getTopics(2).toByteArray()); + String dataHex = Numeric.toHexString(l.getData().toByteArray()); + return Optional.of(decodeIndexedLog(topicBatchId, topicMerkleRoot, dataHex)); + } + + // Backward-compatibility: if contracts ever change to non-indexed params. 
+ String dataHex = Numeric.toHexString(l.getData().toByteArray()); + return Optional.of(decodeLogData(dataHex)); + } catch (Exception e) { + log.warn("Failed to decode BatchSubmitted log data for tx {}: {}", info.getId(), e.getMessage()); + } + } + + return Optional.empty(); + } + + /** + * Decode ABI log.data for BatchSubmitted(uint64,bytes32,uint32,uint48). + * + * Data layout (4 x 32-byte slots): + * 0: uint64 batchId + * 1: bytes32 merkleRoot + * 2: uint32 txCount + * 3: uint48 timestamp (encoded as uint256 slot) + */ + public static BatchSubmittedEvent decodeLogData(String dataHex) { + List<Type<?>> decoded = decodeWeb3Abi( + dataHex, + new TypeReference<Uint64>() {}, + new TypeReference<Bytes32>() {}, + new TypeReference<Uint32>() {}, + new TypeReference<Uint256>() {} + ); + requireDecodedSize(decoded, 4); + + Uint64 batchId = (Uint64) decoded.get(0); + Bytes32 root = (Bytes32) decoded.get(1); + Uint32 txCount = (Uint32) decoded.get(2); + Uint256 ts = (Uint256) decoded.get(3); + + return new BatchSubmittedEvent( + batchId.getValue().longValue(), + "0x" + Numeric.toHexStringNoPrefix(root.getValue()), + txCount.getValue().intValue(), + ts.getValue().longValue() + ); + } + + /** + * Decode indexed BatchSubmitted log: + * topics[1] = uint64 batchId (left padded to 32 bytes) + * topics[2] = bytes32 merkleRoot + * data = abi.encode(uint32 txCount, uint48 timestamp) => 2x 32-byte slots + */ + public static BatchSubmittedEvent decodeIndexedLog(String topicBatchIdHex, String topicMerkleRootHex, String dataHex) { + java.math.BigInteger batchId = Numeric.toBigInt(topicBatchIdHex); + String merkleRootHex = normalizeHexN(topicMerkleRootHex); + + List<Type<?>> decoded = decodeWeb3Abi( + dataHex, + new TypeReference<Uint32>() {}, + new TypeReference<Uint256>() {} // timestamp stored in 32-byte slot + ); + requireDecodedSize(decoded, 2); + + Uint32 txCount = (Uint32) decoded.get(0); + Uint256 ts = (Uint256) decoded.get(1); + + return new BatchSubmittedEvent( + batchId.longValue(), + merkleRootHex, + txCount.getValue().intValue(), 
ts.getValue().longValue() + ); + } + + private static String strip0x(String v) { + if (v == null) return ""; + return v.startsWith("0x") || v.startsWith("0X") ? v.substring(2) : v; + } + + private static String ensure0x(String hex) { + String h = hex == null ? "" : hex; + return (h.startsWith("0x") || h.startsWith("0X")) ? h : ("0x" + h); + } + + private static String normalizeHexN(String hex) { + String c = strip0x(hex); + int n = 32 * 2; + // Ensure fixed width: take least-significant bytes, left-pad with 0s + if (c.length() < n) { + c = "0".repeat(n - c.length()) + c; + } else if (c.length() > n) { + c = c.substring(c.length() - n); + } + return "0x" + c; + } + + private static void requireDecodedSize(List<?> decoded, int expected) { + if (decoded.size() != expected) { + throw new IllegalStateException("Unexpected decoded outputs=" + decoded.size() + ", expected=" + expected); + } + } + + private static boolean sleepQuietly(long ms) { + try { + Thread.sleep(ms); + return true; + } catch (InterruptedException ie) { + Thread.currentThread().interrupt(); + return false; + } + } + + private static List<Type<?>> decodeWeb3Abi(String dataHex, TypeReference<?>... 
outputs) { + String hex = ensure0x(dataHex); + + @SuppressWarnings({"rawtypes", "unchecked"}) + List<TypeReference<Type>> typed = (List) Arrays.asList(outputs); + + @SuppressWarnings({"rawtypes", "unchecked"}) + List<Type<?>> decoded = (List) FunctionReturnDecoder.decode(hex, typed); + return decoded; + } +} + + diff --git a/backend/src/main/java/dao/tron/tsol/model/BatchStatus.java b/backend/src/main/java/dao/tron/tsol/model/BatchStatus.java new file mode 100644 index 0000000..955920b --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/model/BatchStatus.java @@ -0,0 +1,11 @@ +package dao.tron.tsol.model; + +public enum BatchStatus { + CREATED, + SUBMITTED_ONCHAIN, + UNLOCKED, + EXECUTING, + COMPLETED, + FAILED +} + diff --git a/backend/src/main/java/dao/tron/tsol/model/LocalBatch.java b/backend/src/main/java/dao/tron/tsol/model/LocalBatch.java new file mode 100644 index 0000000..2ceffd3 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/model/LocalBatch.java @@ -0,0 +1,26 @@ +package dao.tron.tsol.model; + +import lombok.Data; + +import java.util.List; + +@Data +public class LocalBatch { + + private long localId; + private long onChainBatchId; + private String submitTxId; + private String merkleRootHex; + private int txCount; + private long submittedAt; // unix seconds (from BatchSubmitted event) + private long unlockTime; // unix seconds + /** + * Salt used when computing txHash/leaf hashes for this batch. + * MUST match the value passed to on-chain submitBatch(..., batchSalt). + * + * Note: batchId is NOT part of txHash anymore; only batchSalt is. 
+ */ + private long batchSalt; + private BatchStatus status; + private List<StoredTransfer> transfers; +} diff --git a/backend/src/main/java/dao/tron/tsol/model/StoredTransfer.java b/backend/src/main/java/dao/tron/tsol/model/StoredTransfer.java new file mode 100644 index 0000000..52c72a9 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/model/StoredTransfer.java @@ -0,0 +1,18 @@ +package dao.tron.tsol.model; + + +import lombok.Data; + +import java.util.List; + +@Data +public class StoredTransfer { + + private TransferData txData; + private List<String> txProof; // hex-encoded bytes32[] + private List<String> whitelistProof; // hex-encoded bytes32[] + private boolean executed; + /** TRON transaction id of the successful on-chain executeTransfer (if executed). */ + private String executionTxId; +} + diff --git a/backend/src/main/java/dao/tron/tsol/model/TransferData.java b/backend/src/main/java/dao/tron/tsol/model/TransferData.java new file mode 100644 index 0000000..24163c8 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/model/TransferData.java @@ -0,0 +1,17 @@ +package dao.tron.tsol.model; + +import lombok.Data; + +@Data +public class TransferData { + + private String from; + private String to; + private String amount; + private long nonce; + private long timestamp; + private int recipientCount; + private long batchId; // filled after submitBatch + private int txType; +} + diff --git a/backend/src/main/java/dao/tron/tsol/model/TransferIntentRequest.java b/backend/src/main/java/dao/tron/tsol/model/TransferIntentRequest.java new file mode 100644 index 0000000..b6c085d --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/model/TransferIntentRequest.java @@ -0,0 +1,30 @@ +package dao.tron.tsol.model; + +import jakarta.validation.constraints.NotBlank; +import jakarta.validation.constraints.NotNull; +import lombok.Data; + +@Data +public class TransferIntentRequest { + + @NotBlank + private String from; + + @NotBlank + private String to; + + @NotBlank + private String amount; // string 
decimal + + @NotNull + private Long nonce; + + @NotNull + private Long timestamp; // unix seconds + + @NotNull + private Integer recipientCount; // for fee calc + + @NotNull + private Integer txType; // map to Solidity uint8 +} diff --git a/backend/src/main/java/dao/tron/tsol/repository/BatchRepository.java b/backend/src/main/java/dao/tron/tsol/repository/BatchRepository.java new file mode 100644 index 0000000..fd3599e --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/repository/BatchRepository.java @@ -0,0 +1,20 @@ +package dao.tron.tsol.repository; + + +import dao.tron.tsol.model.LocalBatch; + +import java.util.List; +import java.util.Optional; + +public interface BatchRepository { + + void save(LocalBatch batch); + + List<LocalBatch> findAll(); + + Optional<LocalBatch> findByLocalId(long localId); + + Optional<LocalBatch> findByOnChainBatchId(long onChainBatchId); + + Optional<LocalBatch> findByMerkleRoot(String merkleRootHex); +} diff --git a/backend/src/main/java/dao/tron/tsol/repository/InMemoryBatchRepository.java b/backend/src/main/java/dao/tron/tsol/repository/InMemoryBatchRepository.java new file mode 100644 index 0000000..df1d3b2 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/repository/InMemoryBatchRepository.java @@ -0,0 +1,85 @@ +package dao.tron.tsol.repository; + +import dao.tron.tsol.model.LocalBatch; +import org.springframework.stereotype.Repository; + +import java.util.*; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.atomic.AtomicLong; + +@Repository +public class InMemoryBatchRepository implements BatchRepository { + + // key: localId + private final Map<Long, LocalBatch> batchesByLocalId = new ConcurrentHashMap<>(); + + // key: on-chain batchId + private final Map<Long, Long> localIdByOnChainId = new ConcurrentHashMap<>(); + + // key: merkleRootHex + private final Map<String, Long> localIdByMerkleRoot = new ConcurrentHashMap<>(); + + // key: submitTxId + private final Map<String, Long> localIdBySubmitTxId = new ConcurrentHashMap<>(); + + // key: merkleRootHex -> batchId (requested mapping) + private 
final Map<String, Long> batchIdByMerkleRoot = new ConcurrentHashMap<>(); + + // key: submitTxId -> batchId (requested mapping) + private final Map<String, Long> batchIdBySubmitTxId = new ConcurrentHashMap<>(); + + private final AtomicLong localIdSeq = new AtomicLong(1); + + @Override + public synchronized void save(LocalBatch batch) { + // assign localId if new + if (batch.getLocalId() == 0L) { + batch.setLocalId(localIdSeq.getAndIncrement()); + } + + batchesByLocalId.put(batch.getLocalId(), batch); + + if (batch.getOnChainBatchId() != 0L) { + localIdByOnChainId.put(batch.getOnChainBatchId(), batch.getLocalId()); + } + + if (batch.getMerkleRootHex() != null) { + localIdByMerkleRoot.put(batch.getMerkleRootHex(), batch.getLocalId()); + if (batch.getOnChainBatchId() != 0L) { + batchIdByMerkleRoot.put(batch.getMerkleRootHex(), batch.getOnChainBatchId()); + } + } + + if (batch.getSubmitTxId() != null && !batch.getSubmitTxId().isBlank()) { + localIdBySubmitTxId.put(batch.getSubmitTxId(), batch.getLocalId()); + if (batch.getOnChainBatchId() != 0L) { + batchIdBySubmitTxId.put(batch.getSubmitTxId(), batch.getOnChainBatchId()); + } + } + + } + + @Override + public List<LocalBatch> findAll() { + return new ArrayList<>(batchesByLocalId.values()); + } + + @Override + public Optional<LocalBatch> findByLocalId(long localId) { + return Optional.ofNullable(batchesByLocalId.get(localId)); + } + + @Override + public Optional<LocalBatch> findByOnChainBatchId(long onChainBatchId) { + Long localId = localIdByOnChainId.get(onChainBatchId); + if (localId == null) return Optional.empty(); + return Optional.ofNullable(batchesByLocalId.get(localId)); + } + + @Override + public Optional<LocalBatch> findByMerkleRoot(String merkleRootHex) { + Long localId = localIdByMerkleRoot.get(merkleRootHex); + if (localId == null) return Optional.empty(); + return Optional.ofNullable(batchesByLocalId.get(localId)); + } +} diff --git a/backend/src/main/java/dao/tron/tsol/scheduler/BatchingScheduler.java b/backend/src/main/java/dao/tron/tsol/scheduler/BatchingScheduler.java new file 
mode 100644 index 0000000..5271456 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/scheduler/BatchingScheduler.java @@ -0,0 +1,51 @@ +package dao.tron.tsol.scheduler; + +import dao.tron.tsol.config.SchedulerProperties; +import dao.tron.tsol.service.BatchService; +import dao.tron.tsol.service.TransferIntentService; +import lombok.extern.slf4j.Slf4j; +import org.springframework.scheduling.annotation.Scheduled; +import org.springframework.stereotype.Component; + +@Slf4j +@Component +public class BatchingScheduler { + + private final TransferIntentService intentService; + private final BatchService batchService; + private final SchedulerProperties schedulerProps; + + public BatchingScheduler(TransferIntentService intentService, + BatchService batchService, + SchedulerProperties schedulerProps) { + this.intentService = intentService; + this.batchService = batchService; + this.schedulerProps = schedulerProps; + } + + @Scheduled(fixedDelayString = "${scheduler.batching.check-interval-ms:3000}") + public void maybeCreateBatch() { + if (!schedulerProps.getBatching().isEnabled()) { + return; + } + if (intentService.isEmpty()) return; + + int count = intentService.getPendingCount(); + long oldestAge = intentService.getOldestAgeSeconds(); + + int maxIntents = schedulerProps.getBatching().getMaxIntents(); + long maxDelaySeconds = schedulerProps.getBatching().getMaxDelaySeconds(); + + // IMPORTANT: Require minimum 2 transactions for valid Merkle proofs + // Single-transaction batches have empty/invalid proofs that fail verification + if (count < 2) { + log.debug("Waiting for at least 2 transactions (current: {})", count); + return; + } + + if (count >= maxIntents || oldestAge >= maxDelaySeconds) { + log.info("Creating batch: pendingCount={}, oldestAge={}", count, oldestAge); + batchService.createAndSubmitBatch(maxIntents); + } + } +} diff --git a/backend/src/main/java/dao/tron/tsol/scheduler/ExecutionScheduler.java 
b/backend/src/main/java/dao/tron/tsol/scheduler/ExecutionScheduler.java new file mode 100644 index 0000000..8f7406c --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/scheduler/ExecutionScheduler.java @@ -0,0 +1,93 @@ +package dao.tron.tsol.scheduler; + +import dao.tron.tsol.config.SchedulerProperties; +import dao.tron.tsol.model.BatchStatus; +import dao.tron.tsol.model.LocalBatch; +import dao.tron.tsol.service.BatchService; +import dao.tron.tsol.service.ExecutionService; +import dao.tron.tsol.service.SettlementContractClient; +import dao.tron.tsol.service.WhitelistService; +import lombok.extern.slf4j.Slf4j; +import org.springframework.boot.context.event.ApplicationReadyEvent; +import org.springframework.context.event.EventListener; +import org.springframework.scheduling.annotation.Scheduled; +import org.springframework.stereotype.Component; + +import java.util.List; + +@Slf4j +@Component +public class ExecutionScheduler { + + private final BatchService batchService; + private final SettlementContractClient settlementClient; + private final ExecutionService executionService; + private final SchedulerProperties schedulerProps; + private final WhitelistService whitelistService; + + public ExecutionScheduler(BatchService batchService, + SettlementContractClient settlementClient, + ExecutionService executionService, + SchedulerProperties schedulerProps, + WhitelistService whitelistService) { + this.batchService = batchService; + this.settlementClient = settlementClient; + this.executionService = executionService; + this.schedulerProps = schedulerProps; + this.whitelistService = whitelistService; + } + + @EventListener(ApplicationReadyEvent.class) + public void syncWhitelistRootOnStartup() { + // Script-equivalent: ensure whitelist root is correct before any BATCHED txType=2 execution. 
+ try { + if (!whitelistService.ensureWhitelistRootMatchesConfig()) { + log.warn("Whitelist root sync did not succeed (txType=2 may revert as NotWhitelisted)."); + } + } catch (Exception e) { + log.warn("Whitelist root sync failed (txType=2 may revert as NotWhitelisted): {}", e.getMessage()); + } + } + + @Scheduled(fixedDelayString = "${scheduler.execution.check-interval-ms:5000}") + public void executeUnlockedBatches() { + if (!schedulerProps.getExecution().isEnabled()) { + return; + } + + long now = System.currentTimeMillis() / 1000L; + List<LocalBatch> batches = batchService.getBatches(); + + if (batches.isEmpty()) { + return; + } + + for (LocalBatch batch : batches) { + if (batch.getStatus() != BatchStatus.SUBMITTED_ONCHAIN && + batch.getStatus() != BatchStatus.UNLOCKED) { + continue; + } + + if (batch.getUnlockTime() == 0L) { + try { + long unlockTime = settlementClient.getUnlockTime(batch.getOnChainBatchId()); + batch.setUnlockTime(unlockTime); + log.info("Batch {} unlock time: {} (now: {})", batch.getOnChainBatchId(), unlockTime, now); + } catch (Exception e) { + log.warn("Batch {} not found on-chain. 
Marking as failed.", batch.getOnChainBatchId()); + batch.setStatus(BatchStatus.FAILED); + continue; + } + } + + if (now < batch.getUnlockTime()) { + continue; + } + + log.info("Executing batch {} (onChainId={})", batch.getLocalId(), batch.getOnChainBatchId()); + + executionService.executeAll(batch); + log.info("Batch {} execution complete", batch.getOnChainBatchId()); + } + } +} diff --git a/backend/src/main/java/dao/tron/tsol/service/BatchService.java b/backend/src/main/java/dao/tron/tsol/service/BatchService.java new file mode 100644 index 0000000..e0a1590 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/BatchService.java @@ -0,0 +1,120 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.model.*; +import dao.tron.tsol.repository.BatchRepository; +import dao.tron.tsol.util.CryptoUtil; +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.List; + +@Slf4j +@Service +public class BatchService { + + private final TransferIntentService intentService; + private final MerkleTreeService merkleTreeService; + private final SettlementContractClient settlementClient; + private final BatchRepository batchRepository; + private final WhitelistService whitelistService; + + public BatchService(TransferIntentService intentService, + MerkleTreeService merkleTreeService, + SettlementContractClient settlementClient, + BatchRepository batchRepository, + WhitelistService whitelistService) { + this.intentService = intentService; + this.merkleTreeService = merkleTreeService; + this.settlementClient = settlementClient; + this.batchRepository = batchRepository; + this.whitelistService = whitelistService; + } + + public synchronized void createAndSubmitBatch(int maxTxPerBatch) { + List<TransferIntentRequest> intents = intentService.drainUpTo(maxTxPerBatch); + if (intents.isEmpty()) return; + + // Per-batch salt used for txHash / Merkle leaf hashing (batchId is NOT hashed anymore) + long batchSalt = 
CryptoUtil.randomUint64PositiveNonZero(); + + // map to TransferData + // NOTE: batchId is NOT part of Merkle tree calculation in new Settlement contract + List<TransferData> txs = new ArrayList<>(); + for (TransferIntentRequest req : intents) { + TransferData d = new TransferData(); + d.setFrom(req.getFrom()); + d.setTo(req.getTo()); + d.setAmount(req.getAmount()); + d.setNonce(req.getNonce()); + d.setTimestamp(req.getTimestamp()); + d.setRecipientCount(req.getRecipientCount()); + d.setTxType(req.getTxType()); + txs.add(d); + } + + // leaves - hashed with the per-batch batchSalt (batchId is NOT part of the leaf hash) + List<byte[]> leaves = new ArrayList<>(); + for (TransferData d : txs) { + leaves.add(merkleTreeService.leafHash(d, batchSalt)); + } + + String rootHex = merkleTreeService.computeMerkleRoot(leaves); + + // stored transfers with proofs + List<StoredTransfer> stored = new ArrayList<>(); + for (int i = 0; i < txs.size(); i++) { + TransferData tx = txs.get(i); + StoredTransfer st = new StoredTransfer(); + st.setTxData(tx); + st.setTxProof(merkleTreeService.buildProof(leaves, i)); + + // Generate whitelist proof for BATCHED transactions (txType=2) + if (tx.getTxType() == 2) { // BATCHED + List<String> whitelistProof = whitelistService.generateWhitelistProof(tx.getFrom()); + st.setWhitelistProof(whitelistProof); + } else { + // DELAYED (0), INSTANT (1), FREE_TIER (3) don't need whitelist proof + st.setWhitelistProof(List.of()); + } + + st.setExecuted(false); + stored.add(st); + } + + // Call contract: submitBatch(root, txCount, batchSalt) + BatchSubmission submission = settlementClient.submitBatchWithTxId(rootHex, txs.size(), batchSalt); + long onChainBatchId = submission.batchId(); + + // Set batchId in each TransferData (for storage/tracking purposes only, NOT for hash) + stored.forEach(st -> st.getTxData().setBatchId(onChainBatchId)); + + // Build LocalBatch and save in repository + LocalBatch batch = new LocalBatch(); + batch.setOnChainBatchId(onChainBatchId); + batch.setSubmitTxId(submission.submitTxId()); + 
batch.setMerkleRootHex(rootHex); + batch.setTxCount(submission.txCount()); + batch.setStatus(BatchStatus.SUBMITTED_ONCHAIN); + batch.setSubmittedAt(submission.submittedAt()); + batch.setUnlockTime(submission.unlockTime()); + batch.setBatchSalt(batchSalt); + batch.setTransfers(stored); + + batchRepository.save(batch); + } + + public List<LocalBatch> getBatches() { + return batchRepository.findAll(); + } + + public LocalBatch getByOnChainBatchId(long onChainBatchId) { + return batchRepository.findByOnChainBatchId(onChainBatchId) + .orElseThrow(() -> new IllegalArgumentException("Batch not found: " + onChainBatchId)); + } + + public LocalBatch getByMerkleRoot(String merkleRootHex) { + return batchRepository.findByMerkleRoot(merkleRootHex) + .orElseThrow(() -> new IllegalArgumentException("Batch not found for root: " + merkleRootHex)); + } +} diff --git a/backend/src/main/java/dao/tron/tsol/service/BatchSubmission.java b/backend/src/main/java/dao/tron/tsol/service/BatchSubmission.java new file mode 100644 index 0000000..f0600db --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/BatchSubmission.java @@ -0,0 +1,18 @@ +package dao.tron.tsol.service; + +/** + * Result of submitBatch() including txId and resolved batchId. 
+ */ +public record BatchSubmission( + String submitTxId, + long batchId, + String merkleRootHex, + int txCount, + long submittedAt, + long unlockTime +) {} + + + + + diff --git a/backend/src/main/java/dao/tron/tsol/service/ExecutionService.java b/backend/src/main/java/dao/tron/tsol/service/ExecutionService.java new file mode 100644 index 0000000..e511b2d --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/ExecutionService.java @@ -0,0 +1,104 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.config.SchedulerProperties; +import dao.tron.tsol.model.BatchStatus; +import dao.tron.tsol.model.LocalBatch; +import dao.tron.tsol.model.StoredTransfer; +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.*; + +@Slf4j +@Service +public class ExecutionService { + + private final SettlementContractClient settlementClient; + private final SchedulerProperties schedulerProps; + private final ExecutorService executor; + + public ExecutionService(SettlementContractClient settlementClient, SchedulerProperties schedulerProps) { + this.settlementClient = settlementClient; + this.schedulerProps = schedulerProps; + // Upper bound to avoid accidental massive fan-out; can be increased if needed. + int threadSize = Math.max(1, Math.min(8, schedulerProps.getExecution().getMaxParallel())); + this.executor = Executors.newFixedThreadPool(threadSize); + } + + public void executeAll(LocalBatch batch) { + batch.setStatus(BatchStatus.EXECUTING); + + int maxParallel = Math.max(1, schedulerProps.getExecution().getMaxParallel()); + if (maxParallel == 1) { + executeSequential(batch); + return; + } + + // Pre-fix missing batchId once to avoid repeated warnings in parallel tasks. 
+ for (StoredTransfer st : batch.getTransfers()) { + if (st.isExecuted()) continue; + long batchId = st.getTxData().getBatchId(); + if (batchId == 0) { + st.getTxData().setBatchId(batch.getOnChainBatchId()); + } + } + + List<CompletableFuture<Boolean>> futures = new ArrayList<>(); + for (StoredTransfer st : batch.getTransfers()) { + if (st.isExecuted()) continue; + futures.add(CompletableFuture.supplyAsync(() -> executeOne(st), executor)); + } + + boolean allOk = true; + for (CompletableFuture<Boolean> f : futures) { + try { + allOk &= f.get(); + } catch (Exception e) { + allOk = false; + log.error("Transfer execution task failed: {}", e.getMessage()); + } + } + + batch.setStatus(allOk ? BatchStatus.COMPLETED : BatchStatus.FAILED); + log.info("Batch {} execution finished: status={} (maxParallel={})", batch.getOnChainBatchId(), batch.getStatus(), maxParallel); + } + + private void executeSequential(LocalBatch batch) { + boolean allOk = true; + for (StoredTransfer st : batch.getTransfers()) { + if (st.isExecuted()) continue; + + long batchId = st.getTxData().getBatchId(); + if (batchId == 0) { + log.warn("Transfer missing batchId, setting to: {}", batch.getOnChainBatchId()); + st.getTxData().setBatchId(batch.getOnChainBatchId()); + } + + allOk &= executeOne(st); + } + + batch.setStatus(allOk ? 
BatchStatus.COMPLETED : BatchStatus.FAILED); + log.info("Batch {} execution finished: status={} (sequential)", batch.getOnChainBatchId(), batch.getStatus()); + } + + private boolean executeOne(StoredTransfer st) { + try { + settlementClient.executeTransfer(st); + st.setExecuted(true); + log.info("Transfer executed: from={}, to={}, amount={}", + st.getTxData().getFrom(), st.getTxData().getTo(), st.getTxData().getAmount()); + return true; + } catch (Exception e) { + log.error("Transfer execution failed: from={}, to={}, error={}", + st.getTxData().getFrom(), st.getTxData().getTo(), e.getMessage()); + return false; + } + } + + @jakarta.annotation.PreDestroy + public void shutdown() { + executor.shutdown(); + } +} diff --git a/backend/src/main/java/dao/tron/tsol/service/MerkleTreeService.java b/backend/src/main/java/dao/tron/tsol/service/MerkleTreeService.java new file mode 100644 index 0000000..c7aa2bd --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/MerkleTreeService.java @@ -0,0 +1,261 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.model.TransferData; +import org.bouncycastle.jcajce.provider.digest.Keccak; +import org.springframework.stereotype.Service; +import org.tron.trident.core.ApiWrapper; + +import java.math.BigInteger; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +@Service +public class MerkleTreeService { + + /** + * Compute leaf hash using abi.encodePacked for minimal byte representation. + * MUST match Solidity Settlement.sol _calculateTxHash(). + * Note: batchId is NOT included in txHash calculation. + * IMPORTANT: batchSalt IS included (last field). 
+ */ + public byte[] leafHash(TransferData txData, long batchSalt) { + byte[] from = tronAddressToAddressBytes(txData.getFrom()); + byte[] to = tronAddressToAddressBytes(txData.getTo()); + + BigInteger amount = new BigInteger(txData.getAmount()); + + byte[] amountBytes = uint256ToBytes(amount); + byte[] nonceBytes = uint64ToBytes(txData.getNonce()); + byte[] timestampBytes = uint48ToBytes(txData.getTimestamp()); + byte[] recipientCountBytes = uint32ToBytes(txData.getRecipientCount()); + byte[] txTypeBytes = uint8ToBytes((byte) txData.getTxType()); + byte[] batchSaltBytes = uint64ToBytes(batchSalt); + + byte[] packed = concat(from, to); + packed = concat(packed, amountBytes); + packed = concat(packed, nonceBytes); + packed = concat(packed, timestampBytes); + packed = concat(packed, recipientCountBytes); + packed = concat(packed, txTypeBytes); + packed = concat(packed, batchSaltBytes); + + return keccak256(packed); + } + + /** + * Compute Merkle root using sorted-pair hashing (OpenZeppelin MerkleProof style). + * Odd number of nodes on a level -> last one is promoted (carried up unchanged). + * + * IMPORTANT: This must match the scripts in `sc/script/merkle/**` which promote odd nodes. + */ + public String computeMerkleRoot(List<byte[]> leaves) { + if (leaves == null || leaves.isEmpty()) { + throw new IllegalArgumentException("No leaves"); + } + + List<byte[]> level = new ArrayList<>(leaves.size()); + for (byte[] leaf : leaves) { + if (leaf == null || leaf.length != 32) { + throw new IllegalArgumentException("Each leaf must be 32 bytes"); + } + level.add(leaf.clone()); + } + + while (level.size() > 1) { + List<byte[]> next = new ArrayList<>(); + + for (int i = 0; i < level.size(); i += 2) { + byte[] left = level.get(i); + if (i + 1 < level.size()) { + byte[] right = level.get(i + 1); + next.add(hashPair(left, right)); + } else { + next.add(left); + } + } + + level = next; + } + + return "0x" + bytesToHex(level.getFirst()); + } + + /** + * Build Merkle proof for leaf at index. 
+ * Returns list of hex-encoded bytes32 (0x-prefixed) in bottom-up order. + */ + public List<String> buildProof(List<byte[]> leaves, int index) { + if (leaves == null || leaves.isEmpty()) { + throw new IllegalArgumentException("No leaves"); + } + if (index < 0 || index >= leaves.size()) { + throw new IndexOutOfBoundsException("Invalid leaf index: " + index); + } + + List<List<byte[]>> layers = new ArrayList<>(); + List<byte[]> current = new ArrayList<>(leaves.size()); + for (byte[] leaf : leaves) { + if (leaf == null || leaf.length != 32) { + throw new IllegalArgumentException("Each leaf must be 32 bytes"); + } + current.add(leaf.clone()); + } + layers.add(current); + + while (current.size() > 1) { + List<byte[]> next = new ArrayList<>(); + for (int i = 0; i < current.size(); i += 2) { + byte[] left = current.get(i); + if (i + 1 < current.size()) { + byte[] right = current.get(i + 1); + next.add(hashPair(left, right)); + } else { + // Promote odd leaf + next.add(left); + } + } + layers.add(next); + current = next; + } + + List<String> proof = new ArrayList<>(); + int idx = index; + + for (int layerIdx = 0; layerIdx < layers.size() - 1; layerIdx++) { + List<byte[]> layer = layers.get(layerIdx); + int layerSize = layer.size(); + if (layerSize == 1) break; + + int siblingIndex; + if (idx % 2 == 0) { + if (idx + 1 < layerSize) { + siblingIndex = idx + 1; + } else { + // No sibling at this level (odd leaf promoted) + idx = idx / 2; + continue; + } + } else { + siblingIndex = idx - 1; + } + + byte[] sibling = layer.get(siblingIndex); + proof.add("0x" + bytesToHex(sibling)); + + idx = idx / 2; + } + + return proof; + } + + /** + * Hash a pair of 32-byte nodes with sorted-pair keccak. 
+ */ + private byte[] hashPair(byte[] left, byte[] right) { + if (left == null || right == null || left.length != 32 || right.length != 32) { + throw new IllegalArgumentException("hashPair requires two 32-byte inputs"); + } + + if (compareBytes(left, right) <= 0) { + return keccak256(concat(left, right)); + } else { + return keccak256(concat(right, left)); + } + } + + private static byte[] keccak256(byte[] data) { + Keccak.Digest256 digest = new Keccak.Digest256(); + digest.update(data, 0, data.length); + return digest.digest(); + } + + private static byte[] concat(byte[] a, byte[] b) { + byte[] out = new byte[a.length + b.length]; + System.arraycopy(a, 0, out, 0, a.length); + System.arraycopy(b, 0, out, a.length, b.length); + return out; + } + + private static int compareBytes(byte[] a, byte[] b) { + int len = Math.min(a.length, b.length); + for (int i = 0; i < len; i++) { + int ai = a[i] & 0xff; + int bi = b[i] & 0xff; + if (ai != bi) return ai - bi; + } + return a.length - b.length; + } + + /** + * Convert TRON base58 address to 20-byte EVM address. 
+ */ + private static byte[] tronAddressToAddressBytes(String base58) { + byte[] raw = ApiWrapper.parseAddress(base58).toByteArray(); + if (raw.length < 21) { + throw new IllegalArgumentException("Parsed address length < 21 bytes for " + base58); + } + return Arrays.copyOfRange(raw, raw.length - 20, raw.length); + } + + private static byte[] uint256ToBytes(BigInteger value) { + if (value == null) { + throw new IllegalArgumentException("uint256 value is null"); + } + if (value.signum() < 0) { + throw new IllegalArgumentException("uint256 cannot be negative"); + } + byte[] raw = value.toByteArray(); + if (raw.length > 32) { + throw new IllegalArgumentException("uint256 value too large"); + } + byte[] out = new byte[32]; + System.arraycopy(raw, 0, out, 32 - raw.length, raw.length); + return out; + } + + private static byte[] uint8ToBytes(byte v) { + return new byte[]{ v }; + } + + private static byte[] uint32ToBytes(int v) { + return new byte[]{ + (byte)(v >>> 24), + (byte)(v >>> 16), + (byte)(v >>> 8), + (byte)v + }; + } + + private static byte[] uint48ToBytes(long v) { + return new byte[]{ + (byte)(v >>> 40), + (byte)(v >>> 32), + (byte)(v >>> 24), + (byte)(v >>> 16), + (byte)(v >>> 8), + (byte)v + }; + } + + private static byte[] uint64ToBytes(long v) { + return new byte[]{ + (byte)(v >>> 56), + (byte)(v >>> 48), + (byte)(v >>> 40), + (byte)(v >>> 32), + (byte)(v >>> 24), + (byte)(v >>> 16), + (byte)(v >>> 8), + (byte)v + }; + } + + private String bytesToHex(byte[] bytes) { + StringBuilder sb = new StringBuilder(bytes.length * 2); + for (byte b : bytes) { + sb.append(String.format("%02x", b & 0xff)); + } + return sb.toString(); + } +} diff --git a/backend/src/main/java/dao/tron/tsol/service/SettlementContractClient.java b/backend/src/main/java/dao/tron/tsol/service/SettlementContractClient.java new file mode 100644 index 0000000..8f1973f --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/SettlementContractClient.java @@ -0,0 +1,13 @@ +package 
dao.tron.tsol.service; + + +public interface SettlementContractClient { + + long submitBatch(String merkleRootHex, int txCount, long batchSalt); + + BatchSubmission submitBatchWithTxId(String merkleRootHex, int txCount, long batchSalt); + + long getUnlockTime(long batchId); + + void executeTransfer(dao.tron.tsol.model.StoredTransfer transfer); +} diff --git a/backend/src/main/java/dao/tron/tsol/service/SettlementContractClientTrident.java b/backend/src/main/java/dao/tron/tsol/service/SettlementContractClientTrident.java new file mode 100644 index 0000000..5e16e9c --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/SettlementContractClientTrident.java @@ -0,0 +1,438 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.config.SettlementProperties; +import dao.tron.tsol.event.BatchSubmittedEvent; +import dao.tron.tsol.event.BatchSubmittedEventReader; +import dao.tron.tsol.model.StoredTransfer; +import dao.tron.tsol.model.TransferData; +import lombok.Getter; +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Service; +import org.tron.trident.abi.FunctionEncoder; +import org.tron.trident.abi.FunctionReturnDecoder; +import org.tron.trident.abi.TypeReference; +import org.tron.trident.abi.datatypes.*; +import org.tron.trident.abi.datatypes.generated.Bytes32; +import org.tron.trident.abi.datatypes.generated.Uint256; +import org.tron.trident.abi.datatypes.generated.Uint32; +import org.tron.trident.abi.datatypes.generated.Uint48; +import org.tron.trident.abi.datatypes.generated.Uint64; +import org.tron.trident.abi.datatypes.generated.Uint8; +import org.tron.trident.core.ApiWrapper; +import org.tron.trident.core.NodeType; +import org.tron.trident.proto.Chain; +import org.tron.trident.proto.Response; +import org.tron.trident.utils.Numeric; + +import java.math.BigInteger; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.time.Duration; +import 
java.util.concurrent.ThreadLocalRandom; + +@Slf4j +@Service +public class SettlementContractClientTrident implements SettlementContractClient { + + private final ApiWrapper wrapper; + @Getter + private final String aggregatorAddress; + private final String contractAddress; + private final BatchSubmittedEventReader eventReader; + private final SettlementProperties.Polling polling; + /** + * Guard signing/broadcasting so concurrent execution doesn't trip over non-thread-safe internals. + * Receipt polling is intentionally done outside this lock. + */ + private final Object broadcastLock = new Object(); + + private static final long DEFAULT_FEE_LIMIT = 100_000_000L; + + public SettlementContractClientTrident(SettlementProperties props, BatchSubmittedEventReader eventReader) { + this.contractAddress = props.getContractAddress(); + this.eventReader = eventReader; + this.polling = props.getPolling(); + + String privateKey = props.getPrivateKey(); + if (privateKey == null || privateKey.isEmpty() || privateKey.equals("YOUR_PRIVATE_KEY_HERE")) { + log.warn("No valid private key configured. 
Set UPDATER_PRIVATE_KEY to enable blockchain operations."); + this.wrapper = null; + this.aggregatorAddress = "NOT_CONFIGURED"; + return; + } + + if (privateKey.length() % 2 != 0) { + log.error("Invalid private key format: odd-length hex string"); + this.wrapper = null; + this.aggregatorAddress = "INVALID_KEY_FORMAT"; + return; + } + + ApiWrapper tempWrapper; + String tempAggregatorAddress; + + try { + tempWrapper = ApiWrapper.ofNile(privateKey); + tempAggregatorAddress = tempWrapper.keyPair.toBase58CheckAddress(); + log.info("SettlementContractClientTrident initialized: aggregator={}, contract={}", + tempAggregatorAddress, contractAddress); + } catch (Exception e) { + log.error("Failed to initialize: {}", e.getMessage()); + tempWrapper = null; + tempAggregatorAddress = "INIT_FAILED"; + } + + this.wrapper = tempWrapper; + this.aggregatorAddress = tempAggregatorAddress; + } + + @Override + public long submitBatch(String merkleRootHex, int txCount, long batchSalt) { + return submitBatchWithTxId(merkleRootHex, txCount, batchSalt).batchId(); + } + + @Override + public BatchSubmission submitBatchWithTxId(String merkleRootHex, int txCount, long batchSalt) { + try { + String cleanRoot = cleanHex(merkleRootHex); + byte[] rootBytes = Numeric.hexStringToByteArray(cleanRoot); + if (rootBytes.length != 32) { + throw new IllegalArgumentException("Merkle root must be 32 bytes, got " + rootBytes.length); + } + + Function submitBatchFn = new Function( + "submitBatch", + Arrays.asList( + new Bytes32(rootBytes), + new Uint32(BigInteger.valueOf(txCount)), + new Uint64(BigInteger.valueOf(batchSalt)) + ), + Arrays.asList( + new TypeReference() {}, + new TypeReference() {} + ) + ); + + String encodedHex = FunctionEncoder.encode(submitBatchFn); + + Response.TransactionExtention txnExt = wrapper.triggerContract( + aggregatorAddress, + contractAddress, + encodedHex, + 0L, + 0L, + null, + DEFAULT_FEE_LIMIT + ); + + if (!txnExt.getResult().getResult()) { + String msg = 
txnExt.getResult().getMessage().toStringUtf8(); + throw new RuntimeException("submitBatch trigger failed: " + msg); + } + + String txId; + synchronized (broadcastLock) { + Chain.Transaction signed = wrapper.signTransaction(txnExt); + txId = wrapper.broadcastTransaction(signed); + } + + // Make failures explicit (revert/OUT_OF_ENERGY/etc.) rather than timing out on event polling. + Response.TransactionInfo txInfo = waitForTxInfo( + txId, + Duration.ofSeconds(polling.getTxInfoTimeoutSeconds()), + Duration.ofMillis(polling.getTxInfoPollInitialMs()), + Duration.ofMillis(polling.getTxInfoPollMaxMs()) + ); + if (txInfo == null) { + throw new RuntimeException("submitBatch failed: no TransactionInfo after timeout. txId=" + txId); + } + if (txInfo.getResult() != Response.TransactionInfo.code.SUCESS) { + String errorMsg = txInfo.getResMessage() != null ? txInfo.getResMessage().toStringUtf8() : "Unknown error"; + throw new RuntimeException("submitBatch failed on-chain: " + errorMsg + ". txId=" + txId); + } + + // Prefer event parsing (source of truth for batchId) + var evOpt = eventReader.readWithTimeout( + txId, + Duration.ofSeconds(polling.getBatchSubmittedTimeoutSeconds()), + Duration.ofMillis(polling.getBatchSubmittedPollInitialMs()) + ); + if (evOpt.isPresent()) { + BatchSubmittedEvent ev = evOpt.get(); + if (!cleanHex(ev.merkleRootHex()).equalsIgnoreCase(cleanHex(merkleRootHex))) { + log.warn("BatchSubmitted merkleRoot mismatch: expected={}, got={}", merkleRootHex, ev.merkleRootHex()); + } + if (ev.txCount() != txCount) { + log.warn("BatchSubmitted txCount mismatch: expected={}, got={}", txCount, ev.txCount()); + } + long unlockTime = getUnlockTime(ev.batchId()); + return new BatchSubmission(txId, ev.batchId(), merkleRootHex, txCount, ev.timestamp(), unlockTime); + } + + // Fallback: poll getBatchIdByRoot(root) until non-zero + long batchId = pollBatchIdByRoot(merkleRootHex, Duration.ofSeconds(polling.getBatchSubmittedTimeoutSeconds())); + if (batchId == 0L) { + throw 
new RuntimeException("submitBatch failed: could not resolve batchId from event or getBatchIdByRoot within timeout. txId=" + txId); + } + OnChainBatch b = getBatchById(batchId); + return new BatchSubmission(txId, batchId, merkleRootHex, txCount, b.timestamp(), b.unlockTime()); + } catch (Exception e) { + log.error("submitBatch failed", e); + throw new RuntimeException("submitBatch failed: " + e.getMessage(), e); + } + } + + private Response.TransactionInfo waitForTxInfo(String txId, Duration timeout, Duration pollInitial, Duration pollMax) { + long deadline = System.currentTimeMillis() + timeout.toMillis(); + long sleepMs = Math.max(100, pollInitial.toMillis()); + long maxSleepMs = Math.max(sleepMs, pollMax.toMillis()); + while (System.currentTimeMillis() < deadline) { + try { + Response.TransactionInfo info = wrapper.getTransactionInfoById(txId); + if (info != null) return info; + } catch (Exception ignored) {} + try { + long jitter = ThreadLocalRandom.current().nextLong(0, 150); + Thread.sleep(sleepMs + jitter); + } catch (InterruptedException ie) { + Thread.currentThread().interrupt(); + return null; + } + sleepMs = Math.min(maxSleepMs, (long) Math.ceil(sleepMs * 1.5)); + } + return null; + } + + @Override + public long getUnlockTime(long batchId) { + try { + Function getBatchFn = new Function( + "getBatchById", + Collections.singletonList(new Uint64(batchId)), + Arrays.asList( + new TypeReference<Bytes32>() {}, + new TypeReference<Uint256>() {}, + new TypeReference<Uint256>() {}, + new TypeReference<Uint256>() {}, + new TypeReference<Uint64>() {} + ) + ); + + String encodedHex = FunctionEncoder.encode(getBatchFn); + + Response.TransactionExtention txn = wrapper.triggerConstantContract( + aggregatorAddress, + contractAddress, + encodedHex, + NodeType.SOLIDITY_NODE + ); + + if (!txn.getResult().getResult()) { + throw new RuntimeException("getBatchById failed: " + txn.getResult().getMessage().toStringUtf8()); + } + + if (txn.getConstantResultCount() == 0) { + throw new IllegalStateException("No
constantResult for getBatchById"); + } + + String resultHex = Numeric.toHexString(txn.getConstantResult(0).toByteArray()); + @SuppressWarnings("rawtypes") + List decoded = + FunctionReturnDecoder.decode(resultHex, getBatchFn.getOutputParameters()); + + if (decoded.size() != 5) { + throw new IllegalStateException("Unexpected getBatchById outputs=" + decoded.size()); + } + + Uint256 unlockTime = (Uint256) decoded.get(3); + return unlockTime.getValue().longValue(); + } catch (Exception e) { + log.error("getUnlockTime failed", e); + throw new RuntimeException("getUnlockTime failed: " + e.getMessage(), e); + } + } + + @Override + public void executeTransfer(StoredTransfer transfer) { + try { + TransferData d = transfer.getTxData(); + + List<Bytes32> txProofElems = new ArrayList<>(); + for (String hex : transfer.getTxProof()) { + byte[] b = Numeric.hexStringToByteArray(cleanHex(hex)); + if (b.length != 32) { + throw new IllegalArgumentException("txProof element not 32 bytes: " + hex); + } + txProofElems.add(new Bytes32(b)); + } + DynamicArray<Bytes32> txProofArray = new DynamicArray<>(Bytes32.class, txProofElems); + + List<Bytes32> wlProofElems = new ArrayList<>(); + for (String hex : transfer.getWhitelistProof()) { + byte[] b = Numeric.hexStringToByteArray(cleanHex(hex)); + if (b.length != 32) { + throw new IllegalArgumentException("whitelistProof element not 32 bytes: " + hex); + } + wlProofElems.add(new Bytes32(b)); + } + DynamicArray<Bytes32> wlProofArray = new DynamicArray<>(Bytes32.class, wlProofElems); + + StaticStruct txDataTuple = new StaticStruct( + new Address(d.getFrom()), + new Address(d.getTo()), + new Uint256(new BigInteger(d.getAmount())), + new Uint64(BigInteger.valueOf(d.getNonce())), + new Uint48(BigInteger.valueOf(d.getTimestamp())), + new Uint32(BigInteger.valueOf(d.getRecipientCount())), + new Uint64(BigInteger.valueOf(d.getBatchId())), + new Uint8(d.getTxType()) + ); + + Function execFn = new Function( + "executeTransfer", + Arrays.asList(txProofArray, wlProofArray, txDataTuple), +
Collections.singletonList(new TypeReference() {}) + ); + + String encodedHex = FunctionEncoder.encode(execFn); + + Response.TransactionExtention txnExt = wrapper.triggerContract( + aggregatorAddress, + contractAddress, + encodedHex, + 0L, + 0L, + null, + DEFAULT_FEE_LIMIT + ); + + if (!txnExt.getResult().getResult()) { + throw new RuntimeException("executeTransfer trigger failed: " + txnExt.getResult().getMessage().toStringUtf8()); + } + + String txId; + synchronized (broadcastLock) { + Chain.Transaction signed = wrapper.signTransaction(txnExt); + txId = wrapper.broadcastTransaction(signed); + } + transfer.setExecutionTxId(txId); + // Don't hard-sleep: poll receipt until available (faster on good days, clearer failure on reverts). + Response.TransactionInfo txInfo = waitForTxInfo( + txId, + Duration.ofSeconds(polling.getTxInfoTimeoutSeconds()), + Duration.ofMillis(polling.getTxInfoPollInitialMs()), + Duration.ofMillis(polling.getTxInfoPollMaxMs()) + ); + if (txInfo == null) { + throw new RuntimeException("Transaction failed: no TransactionInfo after timeout. txId=" + txId); + } + if (txInfo.getResult() != Response.TransactionInfo.code.SUCESS) { + String errorMsg = txInfo.getResMessage() != null ? txInfo.getResMessage().toStringUtf8() : "Unknown error"; + throw new RuntimeException("Transaction failed: " + errorMsg + ". txId=" + txId); + } + + log.info("executeTransfer SUCCESS: txId={}", txId); + + } catch (Exception e) { + log.error("executeTransfer failed", e); + throw new RuntimeException("executeTransfer failed: " + e.getMessage(), e); + } + } + + private String cleanHex(String value) { + if (value == null) return ""; + return (value.startsWith("0x") || value.startsWith("0X")) + ? 
value.substring(2) + : value; + } + + private long pollBatchIdByRoot(String merkleRootHex, Duration timeout) { + long deadline = System.currentTimeMillis() + timeout.toMillis(); + while (System.currentTimeMillis() < deadline) { + try { + long id = getBatchIdByRoot(merkleRootHex); + if (id != 0L) return id; + } catch (Exception ignored) {} + try { Thread.sleep(1000); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); return 0L; } + } + return 0L; + } + + private record OnChainBatch(String merkleRootHex, long timestamp, int txCount, long unlockTime, long batchSalt) {} + + private OnChainBatch getBatchById(long batchId) { + Function getBatchFn = new Function( + "getBatchById", + Collections.singletonList(new Uint64(batchId)), + Arrays.asList( + new TypeReference<Bytes32>() {}, + new TypeReference<Uint256>() {}, + new TypeReference<Uint256>() {}, + new TypeReference<Uint256>() {}, + new TypeReference<Uint64>() {} + ) + ); + + String encodedHex = FunctionEncoder.encode(getBatchFn); + Response.TransactionExtention txn = wrapper.triggerConstantContract( + aggregatorAddress, + contractAddress, + encodedHex, + NodeType.SOLIDITY_NODE + ); + if (!txn.getResult().getResult() || txn.getConstantResultCount() == 0) { + throw new RuntimeException("getBatchById query failed"); + } + String resultHex = Numeric.toHexString(txn.getConstantResult(0).toByteArray()); + @SuppressWarnings("rawtypes") + List decoded = + FunctionReturnDecoder.decode(resultHex, getBatchFn.getOutputParameters()); + if (decoded.size() != 5) { + throw new IllegalStateException("Unexpected getBatchById outputs=" + decoded.size()); + } + Bytes32 root = (Bytes32) decoded.get(0); + Uint256 timestamp = (Uint256) decoded.get(1); + Uint256 txCount = (Uint256) decoded.get(2); + Uint256 unlock = (Uint256) decoded.get(3); + Uint64 batchSalt = (Uint64) decoded.get(4); + return new OnChainBatch( + "0x" + Numeric.toHexStringNoPrefix(root.getValue()), + timestamp.getValue().longValue(), + txCount.getValue().intValue(), +
unlock.getValue().longValue(), + batchSalt.getValue().longValue() + ); + } + + private long getBatchIdByRoot(String merkleRootHex) { + String cleanRoot = cleanHex(merkleRootHex); + byte[] rootBytes = Numeric.hexStringToByteArray(cleanRoot); + if (rootBytes.length != 32) throw new IllegalArgumentException("Merkle root must be 32 bytes"); + + Function fn = new Function( + "getBatchIdByRoot", + Collections.singletonList(new Bytes32(rootBytes)), + Collections.singletonList(new TypeReference<Uint64>() {}) + ); + + String encodedHex = FunctionEncoder.encode(fn); + Response.TransactionExtention txn = wrapper.triggerConstantContract( + aggregatorAddress, + contractAddress, + encodedHex, + NodeType.SOLIDITY_NODE + ); + if (!txn.getResult().getResult() || txn.getConstantResultCount() == 0) { + return 0L; + } + String resultHex = Numeric.toHexString(txn.getConstantResult(0).toByteArray()); + @SuppressWarnings("rawtypes") + List decoded = + FunctionReturnDecoder.decode(resultHex, fn.getOutputParameters()); + if (decoded.isEmpty()) return 0L; + Uint64 v = (Uint64) decoded.getFirst(); + return v.getValue().longValue(); + } +} diff --git a/backend/src/main/java/dao/tron/tsol/service/TransferIntentService.java b/backend/src/main/java/dao/tron/tsol/service/TransferIntentService.java new file mode 100644 index 0000000..9528e30 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/TransferIntentService.java @@ -0,0 +1,40 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.model.TransferIntentRequest; +import org.springframework.stereotype.Service; + +import java.util.ArrayList; +import java.util.List; + +@Service +public class TransferIntentService { + + private final List<TransferIntentRequest> pending = new ArrayList<>(); + + public synchronized void addIntent(TransferIntentRequest req) { + pending.add(req); + } + + public synchronized boolean isEmpty() { + return pending.isEmpty(); + } + + public synchronized int getPendingCount() { + return pending.size(); + } + + public synchronized long
getOldestAgeSeconds() { + if (pending.isEmpty()) return 0L; + long oldestTs = pending.getFirst().getTimestamp(); + long now = System.currentTimeMillis() / 1000L; + return now - oldestTs; + } + + public synchronized List<TransferIntentRequest> drainUpTo(int max) { + if (pending.isEmpty()) return List.of(); + int n = Math.min(max, pending.size()); + List<TransferIntentRequest> res = new ArrayList<>(pending.subList(0, n)); + pending.subList(0, n).clear(); + return res; + } +} diff --git a/backend/src/main/java/dao/tron/tsol/service/WhitelistService.java b/backend/src/main/java/dao/tron/tsol/service/WhitelistService.java new file mode 100644 index 0000000..2984f71 --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/service/WhitelistService.java @@ -0,0 +1,409 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.config.ChainProperties; +import dao.tron.tsol.config.SettlementProperties; +import dao.tron.tsol.config.WhitelistProperties; +import lombok.extern.slf4j.Slf4j; +import org.springframework.stereotype.Service; +import org.tron.trident.abi.FunctionEncoder; +import org.tron.trident.abi.FunctionReturnDecoder; +import org.tron.trident.abi.TypeReference; +import org.tron.trident.abi.datatypes.DynamicBytes; +import org.tron.trident.abi.datatypes.Function; +import org.tron.trident.abi.datatypes.generated.Bytes32; +import org.tron.trident.abi.datatypes.generated.Uint64; +import org.tron.trident.core.ApiWrapper; +import org.tron.trident.proto.Chain; +import org.tron.trident.proto.Response; +import org.tron.trident.utils.Numeric; +import org.web3j.crypto.ECKeyPair; +import org.web3j.crypto.Hash; +import org.web3j.crypto.Sign; + +import java.math.BigInteger; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.List; + +@Slf4j +@Service +public class WhitelistService { + + private final WhitelistProperties whitelistProps; + private final ApiWrapper wrapper; + private final String updaterBase58; + private final ECKeyPair keyPair; + private final long chainId; + private final String
registryBase58; + + private static final long DEFAULT_FEE_LIMIT = 50_000_000L; + + public WhitelistService(WhitelistProperties whitelistProps, + SettlementProperties settlementProps, + ChainProperties chainProps) { + this.whitelistProps = whitelistProps; + this.registryBase58 = whitelistProps.getRegistryAddress(); + this.chainId = chainProps.getId() != null ? chainProps.getId() : 3448148188L; + + String privateKey = settlementProps.getPrivateKey(); + if (privateKey == null || privateKey.isBlank()) { + log.warn("WhitelistService: No valid private key configured. Whitelist sync/signing disabled."); + this.wrapper = null; + this.updaterBase58 = "NOT_CONFIGURED"; + this.keyPair = null; + } else { + // NOTE: this project targets Nile; if you need mainnet, plumb node selection like SettlementContractClientTrident. + this.wrapper = ApiWrapper.ofNile(privateKey); + this.updaterBase58 = this.wrapper.keyPair.toBase58CheckAddress(); + String pkHex = privateKey.startsWith("0x") ? privateKey : "0x" + privateKey; + this.keyPair = ECKeyPair.create(new BigInteger(pkHex.substring(2), 16)); + } + + log.info("WhitelistService initialized: updater={}, registry={}, chainId={}, addresses={}", + updaterBase58, + registryBase58, + chainId, + whitelistProps.getAddresses() != null ? whitelistProps.getAddresses().size() : 0); + } + + public List<String> generateWhitelistProof(String addressBase58) { + try { + String target = addressBase58 == null ? "" : addressBase58.trim(); + List<String> configured = whitelistProps.getAddresses(); + if (configured == null || configured.isEmpty()) { + return List.of(); + } + + // Spring may bind `whitelist.addresses` as: + // - a real list, OR + // - a single comma-separated string (e.g. from env), sometimes with spaces/CRLF. + // Normalize by flattening + trimming + dropping empties.
+ List<String> whitelistAddresses = new ArrayList<>(); + for (String entry : configured) { + if (entry == null) continue; + String e = entry.trim(); + if (e.isEmpty()) continue; + if (e.contains(",")) { + for (String part : e.split(",")) { + String p = part.trim(); + if (!p.isEmpty()) whitelistAddresses.add(p); + } + } else { + whitelistAddresses.add(e); + } + } + if (whitelistAddresses.isEmpty()) { + return List.of(); + } + + List<byte[]> leaves = new ArrayList<>(); + int targetIndex = -1; + + for (int i = 0; i < whitelistAddresses.size(); i++) { + String addr = whitelistAddresses.get(i); + byte[] leaf = whitelistAddressToLeaf(addr); + leaves.add(leaf); + + if (addr.equalsIgnoreCase(target)) { + targetIndex = i; + } + } + + if (targetIndex == -1) { + log.debug("Whitelist proof requested for non-whitelisted address: {} (configuredCount={})", target, whitelistAddresses.size()); + return List.of(); + } + + // Whitelist scripts build a standard OZ-sorted-pair tree with "duplicate last" behavior. + List<byte[]> proof = buildProofDuplicateOddSortedPairs(leaves, targetIndex); + List<String> out = new ArrayList<>(proof.size()); + for (byte[] p : proof) { + out.add("0x" + bytesToHex(p)); + } + return out; + + } catch (Exception e) { + log.error("Failed to generate whitelist proof", e); + return List.of(); + } + } + + /** + * Convert Tron address to whitelist leaf hash: keccak256(bytes32(address)) + */ + private byte[] whitelistAddressToLeaf(String addressBase58) { + // Reuse trident parsing for base58 -> 21 bytes (0x41 + 20 bytes); keep the last 20 bytes.
+ byte[] raw = org.tron.trident.core.ApiWrapper.parseAddress(addressBase58).toByteArray(); + if (raw.length < 21) { + throw new IllegalArgumentException("Parsed address length < 21 bytes for " + addressBase58); + } + byte[] addr20 = java.util.Arrays.copyOfRange(raw, raw.length - 20, raw.length); + + byte[] bytes32 = new byte[32]; + System.arraycopy(addr20, 0, bytes32, 12, 20); + + return Hash.sha3(bytes32); + } + + /** + * Ensure the on-chain whitelist root matches the addresses configured in `whitelist.addresses`. + * This is the Java equivalent of the scripts `2_signRoot.js` + `3_updateRoot.js`. + */ + public boolean ensureWhitelistRootMatchesConfig() { + if (wrapper == null || keyPair == null) { + log.warn("WhitelistService not configured for on-chain sync (missing private key)."); + return false; + } + try { + String desiredRoot = computeWhitelistRootFromConfig(); + String currentRoot = getCurrentMerkleRoot(); + + if (normalizeHex32(desiredRoot).equalsIgnoreCase(normalizeHex32(currentRoot))) { + log.info("Whitelist root already matches config: {}", desiredRoot); + return true; + } + + long nonce = getCurrentNonce(); + byte[] sig = signWhitelistUpdate(desiredRoot, nonce); + String txId = updateMerkleRoot(desiredRoot, nonce, sig); + + Thread.sleep(5000); + String after = getCurrentMerkleRoot(); + boolean ok = normalizeHex32(desiredRoot).equalsIgnoreCase(normalizeHex32(after)); + log.info("Whitelist root sync result: ok={}, txId={}, old={}, new={}, desired={}", + ok, txId, currentRoot, after, desiredRoot); + return ok; + } catch (Exception e) { + log.error("ensureWhitelistRootMatchesConfig failed", e); + return false; + } + } + + public long getCurrentNonce() { + Function fn = new Function( + "getCurrentNonce", + List.of(), + List.of(new TypeReference<Uint64>() {}) + ); + Response.TransactionExtention txn = wrapper.triggerConstantContract( + updaterBase58, + registryBase58, + FunctionEncoder.encode(fn), + org.tron.trident.core.NodeType.SOLIDITY_NODE + ); + if
(!txn.getResult().getResult() || txn.getConstantResultCount() == 0) { + throw new RuntimeException("getCurrentNonce failed: " + txn.getResult().getMessage().toStringUtf8()); + } + String resultHex = Numeric.toHexString(txn.getConstantResult(0).toByteArray()); + @SuppressWarnings("rawtypes") + List decoded = FunctionReturnDecoder.decode(resultHex, fn.getOutputParameters()); + Uint64 v = (Uint64) decoded.get(0); + return v.getValue().longValue(); + } + + public String getCurrentMerkleRoot() { + Function fn = new Function( + "getCurrentMerkleRoot", + List.of(), + List.of(new TypeReference<Bytes32>() {}) + ); + Response.TransactionExtention txn = wrapper.triggerConstantContract( + updaterBase58, + registryBase58, + FunctionEncoder.encode(fn), + org.tron.trident.core.NodeType.SOLIDITY_NODE + ); + if (!txn.getResult().getResult() || txn.getConstantResultCount() == 0) { + throw new RuntimeException("getCurrentMerkleRoot failed: " + txn.getResult().getMessage().toStringUtf8()); + } + String resultHex = Numeric.toHexString(txn.getConstantResult(0).toByteArray()); + @SuppressWarnings("rawtypes") + List decoded = FunctionReturnDecoder.decode(resultHex, fn.getOutputParameters()); + Bytes32 root = (Bytes32) decoded.get(0); + return "0x" + bytesToHex(root.getValue()); + } + + public String updateMerkleRoot(String newRootHex, long nonce, byte[] signature) { + try { + String cleanRoot = cleanHex(newRootHex); + byte[] rootBytes = Numeric.hexStringToByteArray(cleanRoot); + if (rootBytes.length != 32) throw new IllegalArgumentException("Root must be 32 bytes"); + + Function fn = new Function( + "updateMerkleRoot", + Arrays.asList( + new Bytes32(rootBytes), + new Uint64(BigInteger.valueOf(nonce)), + new DynamicBytes(signature) + ), + List.of() + ); + + Response.TransactionExtention txnExt = wrapper.triggerContract( + updaterBase58, + registryBase58, + FunctionEncoder.encode(fn), + 0L, + 0L, + null, + DEFAULT_FEE_LIMIT + ); + if (!txnExt.getResult().getResult()) { + throw new
RuntimeException("updateMerkleRoot trigger failed: " + txnExt.getResult().getMessage().toStringUtf8()); + } + Chain.Transaction signed = wrapper.signTransaction(txnExt); + return wrapper.broadcastTransaction(signed); + } catch (Exception e) { + throw new RuntimeException("updateMerkleRoot failed: " + e.getMessage(), e); + } + } + + /** + * Signature compatible with Solidity: + * hash = keccak256(abi.encodePacked(newRoot, nonce, chainid, address(registry))); + * signedHash = toEthSignedMessageHash(hash); + * signature = sign(signedHash) + */ + private byte[] signWhitelistUpdate(String newRootHex, long nonce) { + if (keyPair == null) throw new IllegalStateException("No keyPair configured"); + String cleanRoot = cleanHex(newRootHex); + byte[] rootBytes = Numeric.hexStringToByteArray(cleanRoot); + if (rootBytes.length != 32) throw new IllegalArgumentException("Root must be 32 bytes"); + + // registry as 20-byte EVM address (strip 0x41 prefix from TRON address bytes) + byte[] regRaw = ApiWrapper.parseAddress(registryBase58).toByteArray(); + if (regRaw.length < 21) throw new IllegalArgumentException("Registry address parse failed"); + byte[] reg20 = Arrays.copyOfRange(regRaw, regRaw.length - 20, regRaw.length); + + byte[] nonce8 = new byte[8]; + long n = nonce; + for (int i = 7; i >= 0; i--) { + nonce8[i] = (byte) (n & 0xFF); + n >>= 8; + } + + byte[] chainId32 = new byte[32]; + byte[] cid = BigInteger.valueOf(chainId).toByteArray(); + System.arraycopy(cid, 0, chainId32, 32 - cid.length, cid.length); + + byte[] packed = new byte[32 + 8 + 32 + 20]; + System.arraycopy(rootBytes, 0, packed, 0, 32); + System.arraycopy(nonce8, 0, packed, 32, 8); + System.arraycopy(chainId32, 0, packed, 40, 32); + System.arraycopy(reg20, 0, packed, 72, 20); + + byte[] digest = Hash.sha3(packed); + Sign.SignatureData sig = Sign.signPrefixedMessage(digest, keyPair); + + byte[] out = new byte[65]; + System.arraycopy(sig.getR(), 0, out, 0, 32); + System.arraycopy(sig.getS(), 0, out, 32, 32); + 
out[64] = sig.getV()[0]; + return out; + } + + /** + * Compute whitelist merkle root from the configured base58 addresses. + * Leaf = keccak256(bytes32(address)) (left padded 12 bytes). + * Internal nodes: keccak256(min(a,b) || max(a,b)) (sorted pair). + * Odd nodes: duplicate the last element (script behavior for whitelist trees). + */ + private String computeWhitelistRootFromConfig() { + List<String> addrs = whitelistProps.getAddresses(); + if (addrs == null || addrs.isEmpty()) { + throw new IllegalStateException("whitelist.addresses is empty"); + } + List<byte[]> leaves = new ArrayList<>(addrs.size()); + for (String a : addrs) leaves.add(whitelistAddressToLeaf(a)); + byte[] root = computeRootDuplicateOddSortedPairs(leaves); + return "0x" + bytesToHex(root); + } + + private static byte[] computeRootDuplicateOddSortedPairs(List<byte[]> leaves) { + List<byte[]> level = new ArrayList<>(leaves.size()); + for (byte[] l : leaves) level.add(l.clone()); + while (level.size() > 1) { + List<byte[]> next = new ArrayList<>(); + for (int i = 0; i < level.size(); i += 2) { + byte[] left = level.get(i); + byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left; // duplicate odd + next.add(hashPairSorted(left, right)); + } + level = next; + } + return level.get(0); + } + + private static List<byte[]> buildProofDuplicateOddSortedPairs(List<byte[]> leaves, int index) { + List<List<byte[]>> layers = new ArrayList<>(); + List<byte[]> cur = new ArrayList<>(leaves.size()); + for (byte[] l : leaves) cur.add(l.clone()); + layers.add(cur); + + while (cur.size() > 1) { + List<byte[]> next = new ArrayList<>(); + for (int i = 0; i < cur.size(); i += 2) { + byte[] left = cur.get(i); + byte[] right = (i + 1 < cur.size()) ?
cur.get(i + 1) : left; // duplicate odd + next.add(hashPairSorted(left, right)); + } + layers.add(next); + cur = next; + } + + List<byte[]> proof = new ArrayList<>(); + int idx = index; + for (int layerIdx = 0; layerIdx < layers.size() - 1; layerIdx++) { + List<byte[]> layer = layers.get(layerIdx); + int sib = idx ^ 1; + if (sib < layer.size()) { + proof.add(layer.get(sib)); + } + idx /= 2; + } + return proof; + } + + private static byte[] hashPairSorted(byte[] a, byte[] b) { + if (compareBytes(a, b) <= 0) { + return Hash.sha3(concat(a, b)); + } + return Hash.sha3(concat(b, a)); + } + + private static int compareBytes(byte[] a, byte[] b) { + int len = Math.min(a.length, b.length); + for (int i = 0; i < len; i++) { + int ai = a[i] & 0xff; + int bi = b[i] & 0xff; + if (ai != bi) return ai - bi; + } + return a.length - b.length; + } + + private static byte[] concat(byte[] a, byte[] b) { + byte[] out = new byte[a.length + b.length]; + System.arraycopy(a, 0, out, 0, a.length); + System.arraycopy(b, 0, out, a.length, b.length); + return out; + } + + private static String bytesToHex(byte[] bytes) { + StringBuilder sb = new StringBuilder(bytes.length * 2); + for (byte b : bytes) sb.append(String.format("%02x", b & 0xff)); + return sb.toString(); + } + + private static String normalizeHex32(String h) { + if (h == null) return null; + String s = h.toLowerCase(); + return s.startsWith("0x") ? s : "0x" + s; + } + + private static String cleanHex(String value) { + if (value == null) return ""; + return (value.startsWith("0x") || value.startsWith("0X")) + ? value.substring(2) + : value; + } +} diff --git a/backend/src/main/java/dao/tron/tsol/util/CryptoUtil.java b/backend/src/main/java/dao/tron/tsol/util/CryptoUtil.java new file mode 100644 index 0000000..8507fce --- /dev/null +++ b/backend/src/main/java/dao/tron/tsol/util/CryptoUtil.java @@ -0,0 +1,45 @@ +package dao.tron.tsol.util; + +import java.security.SecureRandom; + +/** + * Cryptographic utilities.
+ * + * IMPORTANT: + * - Use SecureRandom for salts/nonces intended to be unpredictable. + * - For Settlement batchSalt, the on-chain contract (sc/) currently expects uint64; we therefore expose a uint64-safe + * generator that returns a non-zero positive long (1..Long.MAX_VALUE). + */ +public final class CryptoUtil { + private CryptoUtil() {} + + private static final SecureRandom RNG = new SecureRandom(); + + public static byte[] randomBytes32() { + byte[] salt = new byte[32]; + RNG.nextBytes(salt); + return salt; + } + + public static String toHex0x(byte[] bytes) { + StringBuilder sb = new StringBuilder("0x"); + for (byte b : bytes) sb.append(String.format("%02x", b)); + return sb.toString(); + } + + /** + * Generate a non-zero uint64 value that is safe to round-trip in Java as a signed long + * and safe to encode using Uint64(BigInteger.valueOf(...)). + */ + public static long randomUint64PositiveNonZero() { + long v; + do { + v = RNG.nextLong() & Long.MAX_VALUE; + } while (v == 0L); + return v; + } +} + + + + diff --git a/backend/src/main/resources/application.yaml b/backend/src/main/resources/application.yaml new file mode 100644 index 0000000..d7d2d75 --- /dev/null +++ b/backend/src/main/resources/application.yaml @@ -0,0 +1,69 @@ +spring: + application: + name: tsol-backend + +# ----------------------------------------------------------------------------- +# Configuration priority (highest to lowest): +# 1) System environment variables +# 2) .env file (loaded by spring-dotenv) +# 3) Defaults below +# ----------------------------------------------------------------------------- + +server: + port: ${PORT:8080} + +settlement: + # TRON node gRPC endpoint + node-endpoint: ${NODE_ENDPOINT:grpc.nile.trongrid.io:50051} + # Settlement contract address (base58) + contract-address: ${SETTLEMENT_ADDRESS} + # Aggregator private key (64 hex chars, no 0x) - MUST be set in .env or environment + private-key: ${UPDATER_PRIVATE_KEY} + # Optional: aggregator base58 address (if 
you want to pin it; otherwise derived from key where applicable) + aggregator-address: ${UPDATER_ADDRESS} + polling: + tx-info-timeout-seconds: ${SETTLEMENT_TX_INFO_TIMEOUT_SECONDS:60} + tx-info-poll-initial-ms: ${SETTLEMENT_TX_INFO_POLL_INITIAL_MS:250} + tx-info-poll-max-ms: ${SETTLEMENT_TX_INFO_POLL_MAX_MS:2000} + batch-submitted-timeout-seconds: ${SETTLEMENT_BATCH_SUBMITTED_TIMEOUT_SECONDS:60} + batch-submitted-poll-initial-ms: ${SETTLEMENT_BATCH_SUBMITTED_POLL_INITIAL_MS:500} + batch-submitted-poll-max-ms: ${SETTLEMENT_BATCH_SUBMITTED_POLL_MAX_MS:3000} + +whitelist: + # Whitelist registry contract address (base58) + registry-address: ${WHITELIST_REGISTRY_ADDRESS} + # Current whitelist merkle root (0x-prefixed hex) + merkle-root: ${WL_NEW_ROOT} + # Whitelist update nonce + nonce: ${WL_NONCE:0} + # List of whitelisted addresses (comma-separated) + addresses: ${WHITELIST_ADDRESSES} + +fee: + # FeeModule contract address (base58) + module-address: ${FEE_MODULE_ADDRESS} + +token: + # ERC20/TRC20 token contract address (base58) + address: ${TOKEN_ADDRESS:TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf} + +batch: + # Batch processing defaults + max-tx-per-batch: ${MAX_TX_PER_BATCH:5} + timelock-duration: ${TIMELOCK_DURATION:0} + merkle-root: ${BATCH_MERKLE_ROOT:} + +chain: + # Nile=3448148188, Mainnet=728126428 + id: ${CHAIN_ID:3448148188} + +scheduler: + batching: + enabled: ${SCHEDULER_BATCHING_ENABLED:true} + check-interval-ms: ${SCHEDULER_BATCHING_CHECK_INTERVAL_MS:3000} + max-intents: ${SCHEDULER_BATCHING_MAX_INTENTS:5} + max-delay-seconds: ${SCHEDULER_BATCHING_MAX_DELAY_SECONDS:30} + execution: + enabled: ${SCHEDULER_EXECUTION_ENABLED:true} + check-interval-ms: ${SCHEDULER_EXECUTION_CHECK_INTERVAL_MS:5000} + max-parallel: ${SCHEDULER_EXECUTION_MAX_PARALLEL:3} \ No newline at end of file diff --git a/backend/src/test/java/dao/tron/tsol/TsolBackendApplicationTests.java b/backend/src/test/java/dao/tron/tsol/TsolBackendApplicationTests.java new file mode 100644 index 
0000000..292deee --- /dev/null +++ b/backend/src/test/java/dao/tron/tsol/TsolBackendApplicationTests.java @@ -0,0 +1,13 @@ +package dao.tron.tsol; + +import org.junit.jupiter.api.Test; +import org.springframework.boot.test.context.SpringBootTest; + +@SpringBootTest +class TsolBackendApplicationTests { + + @Test + void contextLoads() { + } + +} diff --git a/backend/src/test/java/dao/tron/tsol/service/BatchSubmittedEventReaderTest.java b/backend/src/test/java/dao/tron/tsol/service/BatchSubmittedEventReaderTest.java new file mode 100644 index 0000000..06b8950 --- /dev/null +++ b/backend/src/test/java/dao/tron/tsol/service/BatchSubmittedEventReaderTest.java @@ -0,0 +1,65 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.event.BatchSubmittedEvent; +import dao.tron.tsol.event.BatchSubmittedEventReader; +import org.junit.jupiter.api.Test; + +import java.math.BigInteger; + +import static org.junit.jupiter.api.Assertions.assertEquals; + +class BatchSubmittedEventReaderTest { + + @Test + void decodeLogData_decodesAllFields() { + long batchId = 7L; + String merkleRoot = "0x" + "11".repeat(32); + int txCount = 2; + long timestamp = 123456L; + + String dataHex = + "0x" + + pad32(BigInteger.valueOf(batchId)) + + strip0x(merkleRoot) + + pad32(BigInteger.valueOf(txCount)) + + pad32(BigInteger.valueOf(timestamp)); + + BatchSubmittedEvent ev = BatchSubmittedEventReader.decodeLogData(dataHex); + assertEquals(batchId, ev.batchId()); + assertEquals(merkleRoot.toLowerCase(), ev.merkleRootHex().toLowerCase()); + assertEquals(txCount, ev.txCount()); + assertEquals(timestamp, ev.timestamp()); + } + + @Test + void decodeIndexedLog_decodesIndexedBatchIdAndRoot_plusDataFields() { + long batchId = 9L; + String merkleRoot = "0x" + "aa".repeat(32); + int txCount = 5; + long timestamp = 999L; + + String topicBatchId = "0x" + pad32(BigInteger.valueOf(batchId)); + String topicMerkleRoot = merkleRoot; + String dataHex = + "0x" + + pad32(BigInteger.valueOf(txCount)) + + 
pad32(BigInteger.valueOf(timestamp)); + + BatchSubmittedEvent ev = BatchSubmittedEventReader.decodeIndexedLog(topicBatchId, topicMerkleRoot, dataHex); + assertEquals(batchId, ev.batchId()); + assertEquals(merkleRoot.toLowerCase(), ev.merkleRootHex().toLowerCase()); + assertEquals(txCount, ev.txCount()); + assertEquals(timestamp, ev.timestamp()); + } + + private static String pad32(BigInteger v) { + String h = v.toString(16); + return "0".repeat(64 - h.length()) + h; + } + + private static String strip0x(String h) { + return h.startsWith("0x") ? h.substring(2) : h; + } +} + + diff --git a/backend/src/test/java/dao/tron/tsol/service/MerkleRootDebugTest.java b/backend/src/test/java/dao/tron/tsol/service/MerkleRootDebugTest.java new file mode 100644 index 0000000..a8294fe --- /dev/null +++ b/backend/src/test/java/dao/tron/tsol/service/MerkleRootDebugTest.java @@ -0,0 +1,88 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.model.TransferData; +import org.junit.jupiter.api.Test; +import org.tron.trident.utils.Numeric; + +import java.util.Arrays; +import java.util.List; + +public class MerkleRootDebugTest { + + @Test + public void testBatch20MerkleRoot() { + MerkleTreeService merkleService = new MerkleTreeService(); + long batchSalt = 1L; // TODO: set to the on-chain batchSalt for Batch #20 when known + + // Transfer 1 - EXACT values from Batch #20 + TransferData tx1 = new TransferData(); + tx1.setFrom("TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M"); + tx1.setTo("TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn"); + tx1.setAmount("2000"); + tx1.setNonce(1765547218L); + tx1.setTimestamp(1765547218L); + tx1.setRecipientCount(1); + tx1.setBatchId(20); + tx1.setTxType(1); // DELAYED + + // Transfer 2 - EXACT values from Batch #20 + TransferData tx2 = new TransferData(); + tx2.setFrom("TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M"); + tx2.setTo("TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn"); + tx2.setAmount("2100"); + tx2.setNonce(1765560218L); + tx2.setTimestamp(1765547218L); + tx2.setRecipientCount(1); + 
tx2.setBatchId(20); + tx2.setTxType(1); // DELAYED + + // Calculate leaf hashes + byte[] leaf1 = merkleService.leafHash(tx1, batchSalt); + byte[] leaf2 = merkleService.leafHash(tx2, batchSalt); + + System.out.println("========================================"); + System.out.println("JAVA MERKLE CALCULATION"); + System.out.println("========================================"); + System.out.println("\nTransfer 1:"); + System.out.println(" amount: " + tx1.getAmount()); + System.out.println(" nonce: " + tx1.getNonce() + " (uint64)"); + System.out.println(" timestamp: " + tx1.getTimestamp() + " (uint48)"); + System.out.println(" recipientCount: " + tx1.getRecipientCount() + " (uint32)"); + System.out.println(" batchId: " + tx1.getBatchId() + " (uint64)"); + System.out.println(" txType: " + tx1.getTxType() + " (uint8)"); + System.out.println(" TxHash: 0x" + Numeric.toHexStringNoPrefix(leaf1)); + + System.out.println("\nTransfer 2:"); + System.out.println(" amount: " + tx2.getAmount()); + System.out.println(" nonce: " + tx2.getNonce() + " (uint64)"); + System.out.println(" timestamp: " + tx2.getTimestamp() + " (uint48)"); + System.out.println(" recipientCount: " + tx2.getRecipientCount() + " (uint32)"); + System.out.println(" batchId: " + tx2.getBatchId() + " (uint64)"); + System.out.println(" txType: " + tx2.getTxType() + " (uint8)"); + System.out.println(" TxHash: 0x" + Numeric.toHexStringNoPrefix(leaf2)); + + // Calculate Merkle root + List<byte[]> leaves = Arrays.asList(leaf1, leaf2); + String merkleRoot = merkleService.computeMerkleRoot(leaves); + + System.out.println("\nMerkle Root: " + merkleRoot); + + System.out.println("\n========================================"); + System.out.println("COMPARISON WITH PYTHON:"); + System.out.println("========================================"); + System.out.println("Expected leaf1: 0x256dc8edc121a6dcabc8caa43ca6b51c402d6a829521e41dabbeeb1595aade23"); + System.out.println("Expected leaf2: 
0xb782790b082dfa67521f90ebaf61602895a93bc0e47b5295916bf249618e0d9e"); + System.out.println("Expected root: 0xe12481e04ad5ad0d8891587442f69e975981a33865472c77557cb7226227ec2c"); + System.out.println("\nOn-chain root: 0xa842d9c15cf0db21ca4349de1c9f27333fa4eba1cb29c600ee7603ab539276ca"); + System.out.println("========================================"); + } +} + + + + + + + + + diff --git a/backend/src/test/java/dao/tron/tsol/service/MerkleTreeServiceTest.java b/backend/src/test/java/dao/tron/tsol/service/MerkleTreeServiceTest.java new file mode 100644 index 0000000..a636e99 --- /dev/null +++ b/backend/src/test/java/dao/tron/tsol/service/MerkleTreeServiceTest.java @@ -0,0 +1,454 @@ +package dao.tron.tsol.service; + +import dao.tron.tsol.model.TransferData; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.DisplayName; + +import java.util.ArrayList; +import java.util.List; + +import static org.junit.jupiter.api.Assertions.*; + +/** + * Unit tests for MerkleTreeService + * Validates that Merkle tree generation matches Solidity contract expectations + */ +class MerkleTreeServiceTest { + + private MerkleTreeService merkleTreeService; + private static final long BATCH_SALT = 1L; // source of truth: sc/script/merkle/** uses uint64 batch_salt + + @BeforeEach + void setUp() { + merkleTreeService = new MerkleTreeService(); + } + + @Test + @DisplayName("Test leaf hash generation with sample transfer data") + void testLeafHashGeneration() { + // Arrange: Create sample transfer data + TransferData transfer = createSampleTransfer( + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", // from + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", // to + "1000000", // amount (1 USDT with 6 decimals) + 1L, // nonce + 1702332000L, // timestamp + 1, // recipientCount + 1L, // batchId + 0 // txType (DELAYED) + ); + + // Act: Generate leaf hash + byte[] leafHash = merkleTreeService.leafHash(transfer, BATCH_SALT); + + // Assert: Verify hash is 32 bytes + 
assertNotNull(leafHash, "Leaf hash should not be null"); + assertEquals(32, leafHash.length, "Leaf hash should be 32 bytes"); + + // Log the hash for manual verification + System.out.println("Leaf hash (hex): 0x" + bytesToHex(leafHash)); + } + + @Test + @DisplayName("Test Merkle root generation with single transfer") + void testMerkleRootWithSingleTransfer() { + // Arrange: Create one transfer + TransferData transfer = createSampleTransfer( + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", + "1000000", + 1L, + 1702332000L, + 1, + 1L, + 0 + ); + + List<byte[]> leaves = new ArrayList<>(); + leaves.add(merkleTreeService.leafHash(transfer, BATCH_SALT)); + + // Act: Compute Merkle root + String merkleRoot = merkleTreeService.computeMerkleRoot(leaves); + + // Assert + assertNotNull(merkleRoot, "Merkle root should not be null"); + assertTrue(merkleRoot.startsWith("0x"), "Merkle root should start with 0x"); + assertEquals(66, merkleRoot.length(), "Merkle root should be 66 chars (0x + 64 hex chars)"); + + System.out.println("Single transfer Merkle root: " + merkleRoot); + } + + @Test + @DisplayName("Test Merkle root generation with multiple transfers") + void testMerkleRootWithMultipleTransfers() { + // Arrange: Create 5 transfers (typical batch) + List<TransferData> transfers = new ArrayList<>(); + + transfers.add(createSampleTransfer( + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", + "1000000", + 1L, + 1702332000L, + 1, + 1L, + 0 + )); + + transfers.add(createSampleTransfer( + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc", + "2000000", + 2L, + 1702332001L, + 1, + 1L, + 1 + )); + + transfers.add(createSampleTransfer( + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", + "TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf", + "500000", + 3L, + 1702332002L, + 1, + 1L, + 2 + )); + + transfers.add(createSampleTransfer( + "TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc", + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "3000000", + 4L, + 1702332003L, +
2, + 1L, + 0 + )); + + transfers.add(createSampleTransfer( + "TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf", + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", + "1500000", + 5L, + 1702332004L, + 1, + 1L, + 1 + )); + + // Generate leaves + List<byte[]> leaves = new ArrayList<>(); + for (TransferData transfer : transfers) { + leaves.add(merkleTreeService.leafHash(transfer, BATCH_SALT)); + } + + // Act: Compute Merkle root + String merkleRoot = merkleTreeService.computeMerkleRoot(leaves); + + // Assert + assertNotNull(merkleRoot, "Merkle root should not be null"); + assertTrue(merkleRoot.startsWith("0x"), "Merkle root should start with 0x"); + assertEquals(66, merkleRoot.length(), "Merkle root should be 66 chars"); + + System.out.println("\n=== BATCH OF 5 TRANSFERS ==="); + System.out.println("Merkle root: " + merkleRoot); + System.out.println("Number of leaves: " + leaves.size()); + + // Print each leaf for debugging + for (int i = 0; i < leaves.size(); i++) { + System.out.println("Leaf " + i + ": 0x" + bytesToHex(leaves.get(i))); + } + } + + @Test + @DisplayName("Test Merkle proof generation and structure") + void testMerkleProofGeneration() { + // Arrange: Create 5 transfers + List<TransferData> transfers = createBatchOfTransfers(5, 1L); + + List<byte[]> leaves = new ArrayList<>(); + for (TransferData transfer : transfers) { + leaves.add(merkleTreeService.leafHash(transfer, BATCH_SALT)); + } + + String merkleRoot = merkleTreeService.computeMerkleRoot(leaves); + + // Act: Generate proofs for each transfer + System.out.println("\n=== MERKLE PROOF GENERATION ==="); + System.out.println("Merkle root: " + merkleRoot); + System.out.println(); + + for (int i = 0; i < leaves.size(); i++) { + List<String> proof = merkleTreeService.buildProof(leaves, i); + + // Assert: Proof should not be empty for multiple leaves + assertNotNull(proof, "Proof should not be null for index " + i); + + // For 5 leaves, we need ceil(log2(5)) = 3 levels, so proof size should be 3 + assertTrue(proof.size() > 0, "Proof should have at least one sibling"); 
+ assertTrue(proof.size() <= 3, "Proof should not have more than 3 siblings for 5 leaves"); + + // All proof elements should be 0x-prefixed 32-byte hashes + for (String proofElement : proof) { + assertTrue(proofElement.startsWith("0x"), "Proof element should start with 0x"); + assertEquals(66, proofElement.length(), "Proof element should be 66 chars"); + } + + System.out.println("Transfer #" + i + " proof (" + proof.size() + " siblings):"); + for (int j = 0; j < proof.size(); j++) { + System.out.println(" [" + j + "] " + proof.get(j)); + } + System.out.println(); + } + } + + @Test + @DisplayName("Test that same data produces same Merkle root (deterministic)") + void testDeterministicMerkleRoot() { + // Arrange: Create same batch twice + List<TransferData> transfers1 = createBatchOfTransfers(3, 1L); + List<TransferData> transfers2 = createBatchOfTransfers(3, 1L); + + List<byte[]> leaves1 = new ArrayList<>(); + List<byte[]> leaves2 = new ArrayList<>(); + + for (int i = 0; i < transfers1.size(); i++) { + leaves1.add(merkleTreeService.leafHash(transfers1.get(i), BATCH_SALT)); + leaves2.add(merkleTreeService.leafHash(transfers2.get(i), BATCH_SALT)); + } + + // Act: Compute roots + String root1 = merkleTreeService.computeMerkleRoot(leaves1); + String root2 = merkleTreeService.computeMerkleRoot(leaves2); + + // Assert: Should be identical + assertEquals(root1, root2, "Same data should produce same Merkle root"); + + System.out.println("Deterministic test - both roots: " + root1); + } + + @Test + @DisplayName("Test that different data produces different Merkle root") + void testDifferentDataProducesDifferentRoot() { + // Arrange: Create two different batches. + // IMPORTANT: batchId is NOT included in txHash (leaf hash), so changing only batchId must NOT change root. + // To validate "different data => different root", we change a hashed field (nonce). 
+ List<TransferData> transfers1 = createBatchOfTransfers(3, 1L); + List<TransferData> transfers2 = createBatchOfTransfers(3, 1L); + transfers2.get(0).setNonce(transfers2.get(0).getNonce() + 1); // change hashed field + + List<byte[]> leaves1 = new ArrayList<>(); + List<byte[]> leaves2 = new ArrayList<>(); + + for (int i = 0; i < transfers1.size(); i++) { + leaves1.add(merkleTreeService.leafHash(transfers1.get(i), BATCH_SALT)); + leaves2.add(merkleTreeService.leafHash(transfers2.get(i), BATCH_SALT)); + } + + // Act: Compute roots + String root1 = merkleTreeService.computeMerkleRoot(leaves1); + String root2 = merkleTreeService.computeMerkleRoot(leaves2); + + // Assert: Should be different + assertNotEquals(root1, root2, "Different data should produce different Merkle roots"); + + System.out.println("Different data test:"); + System.out.println(" Batch 1 root: " + root1); + System.out.println(" Batch 2 root: " + root2); + } + + @Test + @DisplayName("Test Merkle proof for specific transaction in batch") + void testSpecificTransactionProof() { + // Arrange: Create a realistic batch + List<TransferData> transfers = new ArrayList<>(); + + // Transfer 0: Alice sends 10 USDT to Bob + transfers.add(createSampleTransfer( + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", // Alice + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", // Bob + "10000000", // 10 USDT + 1L, + 1702332000L, + 1, + 1L, + 0 + )); + + // Transfer 1: Bob sends 5 USDT to Charlie + transfers.add(createSampleTransfer( + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", // Bob + "TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc", // Charlie + "5000000", // 5 USDT + 2L, + 1702332001L, + 1, + 1L, + 0 + )); + + // Transfer 2: Charlie sends 2 USDT to Alice + transfers.add(createSampleTransfer( + "TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc", // Charlie + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", // Alice + "2000000", // 2 USDT + 3L, + 1702332002L, + 1, + 1L, + 0 + )); + + List<byte[]> leaves = new ArrayList<>(); + for (TransferData transfer : transfers) { + leaves.add(merkleTreeService.leafHash(transfer, BATCH_SALT)); + } + + String 
merkleRoot = merkleTreeService.computeMerkleRoot(leaves); + + // Act: Get proof for Transfer 1 (Bob -> Charlie) + List<String> proof = merkleTreeService.buildProof(leaves, 1); + + // Assert + assertNotNull(proof, "Proof should exist"); + assertFalse(proof.isEmpty(), "Proof should not be empty"); + + System.out.println("\n=== SPECIFIC TRANSACTION PROOF ==="); + System.out.println("Transaction: Bob sends 5 USDT to Charlie (index 1)"); + System.out.println("Merkle Root: " + merkleRoot); + System.out.println("Proof elements: " + proof.size()); + System.out.println("Proof:"); + for (int i = 0; i < proof.size(); i++) { + System.out.println(" [" + i + "] " + proof.get(i)); + } + + // This proof can be used to call executeTransfer on-chain + System.out.println("\nThis proof can be submitted to the Settlement contract!"); + } + + @Test + @DisplayName("Test edge case: Odd number of leaves") + void testOddNumberOfLeaves() { + // Arrange: Create 3 transfers (odd number) + List<TransferData> transfers = createBatchOfTransfers(3, 1L); + + List<byte[]> leaves = new ArrayList<>(); + for (TransferData transfer : transfers) { + leaves.add(merkleTreeService.leafHash(transfer, BATCH_SALT)); + } + + // Act & Assert: Should not throw exception + assertDoesNotThrow(() -> { + String merkleRoot = merkleTreeService.computeMerkleRoot(leaves); + assertNotNull(merkleRoot, "Should handle odd number of leaves"); + + // Should be able to generate proofs for all + for (int i = 0; i < leaves.size(); i++) { + List<String> proof = merkleTreeService.buildProof(leaves, i); + assertNotNull(proof, "Should generate proof for index " + i); + } + }); + } + + @Test + @DisplayName("Test edge case: Single leaf") + void testSingleLeaf() { + // Arrange + TransferData transfer = createSampleTransfer( + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", + "1000000", + 1L, + 1702332000L, + 1, + 1L, + 0 + ); + + List<byte[]> leaves = new ArrayList<>(); + leaves.add(merkleTreeService.leafHash(transfer, BATCH_SALT)); + + // Act: Compute 
root + String merkleRoot = merkleTreeService.computeMerkleRoot(leaves); + + // Assert: Root should equal the leaf hash itself + String expectedRoot = "0x" + bytesToHex(leaves.get(0)); + assertEquals(expectedRoot.toLowerCase(), merkleRoot.toLowerCase(), + "Single leaf root should equal leaf hash"); + + // Proof should be empty for single leaf + List<String> proof = merkleTreeService.buildProof(leaves, 0); + assertTrue(proof.isEmpty(), "Proof for single leaf should be empty"); + } + + // ========================================================================= + // HELPER METHODS + // ========================================================================= + + private TransferData createSampleTransfer( + String from, + String to, + String amount, + long nonce, + long timestamp, + int recipientCount, + long batchId, + int txType + ) { + TransferData transfer = new TransferData(); + transfer.setFrom(from); + transfer.setTo(to); + transfer.setAmount(amount); + transfer.setNonce(nonce); + transfer.setTimestamp(timestamp); + transfer.setRecipientCount(recipientCount); + transfer.setBatchId(batchId); + transfer.setTxType(txType); + return transfer; + } + + private List<TransferData> createBatchOfTransfers(int count, long batchId) { + List<TransferData> transfers = new ArrayList<>(); + String[] addresses = { + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn", + "TUqVYQLKtNvLCjHw6uGPLw4Qmw7vXEavnc", + "TXYZopYRdj2D9XRtbG411XZZ3kM5VkAeBf", + "TAhZaywaWM1zAQPADJA39FyoQk8cokRLCd" + }; + + for (int i = 0; i < count; i++) { + transfers.add(createSampleTransfer( + addresses[i % addresses.length], + addresses[(i + 1) % addresses.length], + String.valueOf((i + 1) * 1000000), // 1, 2, 3... 
USDT + i + 1L, + 1702332000L + i, + 1, + batchId, + i % 3 // Rotate through txTypes 0, 1, 2 + )); + } + + return transfers; + } + + private String bytesToHex(byte[] bytes) { + StringBuilder sb = new StringBuilder(bytes.length * 2); + for (byte b : bytes) { + sb.append(String.format("%02x", b & 0xff)); + } + return sb.toString(); + } +} + + + + + + + diff --git a/backend/src/test/java/dao/tron/tsol/service/ScriptMerkleParityTest.java b/backend/src/test/java/dao/tron/tsol/service/ScriptMerkleParityTest.java new file mode 100644 index 0000000..251686a --- /dev/null +++ b/backend/src/test/java/dao/tron/tsol/service/ScriptMerkleParityTest.java @@ -0,0 +1,109 @@ +package dao.tron.tsol.service; + +import com.fasterxml.jackson.databind.JsonNode; +import com.fasterxml.jackson.databind.ObjectMapper; +import dao.tron.tsol.model.TransferData; +import org.junit.jupiter.api.Test; +import org.tron.trident.utils.Numeric; + +import java.io.File; +import java.util.ArrayList; +import java.util.List; + +import static org.junit.jupiter.api.Assertions.assertEquals; + +/** + * Parity test: Java Merkle logic must match the scripts (source of truth). + * + * This test loads sc/script/merkle/batch/merkle_data_deploy.json (generated by the Python script) + * and verifies: + * - each txHash (leaf) matches Java leafHash() + * - the computed Merkle root matches + * - per-index Merkle proofs match exactly + * + * NOTE: Script JSON contains both EVM (0x...) and TRON base58 addresses. We use TRON base58 + * for Java TransferData because the Java backend operates on base58 inputs, but the underlying + * 20-byte address is identical, so the resulting hashes must match. 
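 * Expected shape of merkle_data_deploy.json (a sketch inferred only from the fields this test reads; the real file may carry additional keys, and the placeholder values below follow the Batch #20 figures used in MerkleRootDebugTest):
```json
{
  "merkleRoot": "0x…",
  "batchSalt": 1,
  "transactions": [
    {
      "txHash": "0x…",
      "proof": ["0x…", "0x…"],
      "tronAddresses": { "from": "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", "to": "TToEDBXQkGuYGsnyJASTM5JZweb7Rvrnfn" },
      "txDataStruct": ["0x…", "0x…", "2000", 1765547218, 1765547218, 1, 20, 1]
    }
  ]
}
```
 * Indexes 2..7 of txDataStruct map to amount, nonce, timestamp, recipientCount, batchId, txType, as read in the loop below; indexes 0..1 are assumed to be the EVM (0x) address pair mentioned above.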
+ */ +public class ScriptMerkleParityTest { + + private static final String SCRIPT_JSON_PATH = "sc/script/merkle/batch/merkle_data_deploy.json"; + + @Test + void merkleMatchesScriptJson() throws Exception { + ObjectMapper om = new ObjectMapper(); + JsonNode rootNode = om.readTree(new File(SCRIPT_JSON_PATH)); + + String expectedRoot = rootNode.get("merkleRoot").asText(); + long batchSalt = rootNode.hasNonNull("batchSalt") ? rootNode.get("batchSalt").asLong() : 0L; + JsonNode txs = rootNode.get("transactions"); + + MerkleTreeService merkle = new MerkleTreeService(); + + List<byte[]> leaves = new ArrayList<>(); + List<String> expectedLeafHex = new ArrayList<>(); + List<List<String>> expectedProofs = new ArrayList<>(); + List<TransferData> transfers = new ArrayList<>(); + + for (JsonNode tx : txs) { + // addresses: use TRON base58 version from JSON to match Java inputs + String fromBase58 = tx.get("tronAddresses").get("from").asText(); + String toBase58 = tx.get("tronAddresses").get("to").asText(); + + JsonNode txDataStruct = tx.get("txDataStruct"); + String amount = txDataStruct.get(2).asText(); // string in JSON + long nonce = txDataStruct.get(3).asLong(); + long timestamp = txDataStruct.get(4).asLong(); + int recipientCount = txDataStruct.get(5).asInt(); + long batchId = txDataStruct.get(6).asLong(); // NOT hashed, but carried in TransferData + int txType = txDataStruct.get(7).asInt(); + + TransferData td = new TransferData(); + td.setFrom(fromBase58); + td.setTo(toBase58); + td.setAmount(amount); + td.setNonce(nonce); + td.setTimestamp(timestamp); + td.setRecipientCount(recipientCount); + td.setBatchId(batchId); + td.setTxType(txType); + + transfers.add(td); + + expectedLeafHex.add(tx.get("txHash").asText().toLowerCase()); + + List<String> proof = new ArrayList<>(); + for (JsonNode p : tx.get("proof")) { + proof.add(p.asText().toLowerCase()); + } + expectedProofs.add(proof); + } + + // 1) Leaf hashes must match script txHash + for (int i = 0; i < transfers.size(); i++) { + byte[] leaf = 
merkle.leafHash(transfers.get(i), batchSalt); + leaves.add(leaf); + String javaLeafHex = "0x" + Numeric.toHexStringNoPrefix(leaf); + assertEquals(expectedLeafHex.get(i), javaLeafHex.toLowerCase(), "leaf/txHash mismatch at index " + i); + } + + // 2) Root must match script merkleRoot + String javaRoot = merkle.computeMerkleRoot(leaves); + assertEquals(expectedRoot.toLowerCase(), javaRoot.toLowerCase(), "merkleRoot mismatch"); + + // 3) Proofs must match exactly per index + for (int i = 0; i < leaves.size(); i++) { + List<String> javaProof = merkle.buildProof(leaves, i); + List<String> expectedProof = expectedProofs.get(i); + assertEquals(expectedProof, toLower(javaProof), "proof mismatch at index " + i); + } + } + + private static List<String> toLower(List<String> in) { + List<String> out = new ArrayList<>(in.size()); + for (String s : in) out.add(s.toLowerCase()); + return out; + } +} + + diff --git a/backend/test-10-intents-batched.sh b/backend/test-10-intents-batched.sh new file mode 100755 index 0000000..5071989 --- /dev/null +++ b/backend/test-10-intents-batched.sh @@ -0,0 +1,252 @@ +#!/bin/bash +set -euo pipefail + +# ═══════════════════════════════════════════════════════════════════════════ +# 🚀 BATCHED FLOW TEST - 10 INTENTS (txType=2) ON NILE +# ═══════════════════════════════════════════════════════════════════════════ +# +# What this script does: +# 1) Checks backend + schedulers are running +# 2) Submits 10 intents with txType=2 (BATCHED) +# 3) Forces creation of batches immediately (expected: 2 batches with default maxIntents=5) +# 4) Monitors until all created batches are COMPLETED +# +# Important for txType=2: +# - Sender (from) must be in `whitelist.addresses` +# - On startup, backend tries to sync WhitelistRegistry root to match config +# If you changed whitelist config, restart backend before running this script. 
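For reference, a single intent submission can be sketched in isolation. This is a hedged, self-contained example: the `/api/intents` endpoint, the JSON field names, and the HTTP 202 response are assumed from the `submit_intent` helper in the sibling script below; the addresses and amount are illustrative defaults from these tests.

```shell
#!/bin/bash
# Sketch: build the JSON body for one txType=2 (BATCHED) intent.
# Field names mirror the sibling scripts' submit_intent helper (assumption).
FROM="TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M"
TO="TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU"
NONCE=$(date +%s)
BODY=$(printf '{"from":"%s","to":"%s","amount":"1000000","nonce":%s,"timestamp":%s,"recipientCount":2,"txType":2}' \
  "$FROM" "$TO" "$NONCE" "$NONCE")
echo "$BODY"
# To actually submit (backend replies HTTP 202 on accepted intents):
#   curl -s -X POST "${BASE_URL:-http://localhost:8080}/api/intents" \
#     -H "Content-Type: application/json" -d "$BODY"
```

Keeping body construction separate from the `curl` call makes the payload easy to inspect before the network request fires.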
+ +BASE_URL="${BASE_URL:-http://localhost:8080}" + +# Sender/recipient used previously in successful Nile tests +FROM="${FROM_ADDRESS:-TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M}" +TO="${TO_ADDRESS:-TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU}" + +COUNT="${COUNT:-10}" +SLEEP_BETWEEN="${SLEEP_BETWEEN:-0.1}" +TXTYPE="${TXTYPE:-2}" # 2 = BATCHED (requires whitelist proof) +RECIPIENT_COUNT="${RECIPIENT_COUNT:-$COUNT}" + +# FeeModule requires recipientCount > 1 for txType=2 (BATCHED). +if [ "${TXTYPE}" = "2" ] && [ "${RECIPIENT_COUNT}" -le 1 ]; then + echo "❌ Invalid RECIPIENT_COUNT=${RECIPIENT_COUNT} for txType=2 (BATCHED). Must be > 1." + echo "Tip: set RECIPIENT_COUNT=$COUNT (default) or at least 2." + exit 1 +fi + +echo "BATCHED flow: submitting ${COUNT} intents to ${BASE_URL}" +echo "FROM=${FROM}" +echo "TO=${TO}" +echo "txType=${TXTYPE} (BATCHED)" +echo + +# ──────────────────────────────────────────────────────────────────────────── +# Step 0: backend/scheduler sanity checks +# ──────────────────────────────────────────────────────────────────────────── +if ! curl -s -f "${BASE_URL}/api/monitor/stats" > /dev/null 2>&1; then + echo "❌ Backend is not running at ${BASE_URL}" + echo "Start it first: ./gradlew bootRun" + exit 1 +fi + +BASELINE_BATCH_IDS=() +if curl -s -f "${BASE_URL}/api/monitor/batches" > /dev/null 2>&1; then + # Track existing batch IDs so we can detect *all* new batches created (including ones created by scheduler). + # NOTE: macOS ships Bash 3.2 by default (no `mapfile`), so we avoid it. + BASELINE_BATCH_IDS=($(curl -s "${BASE_URL}/api/monitor/batches" | jq -r '.batches[].batchId')) +fi + +STATS=$(curl -s "${BASE_URL}/api/monitor/stats") +BATCHING_ENABLED=$(echo "$STATS" | jq -r '.schedulers.batching.enabled') +EXECUTION_ENABLED=$(echo "$STATS" | jq -r '.schedulers.execution.enabled') +MAX_INTENTS=$(echo "$STATS" | jq -r '.schedulers.batching.maxIntents') + +echo "Backend OK. 
Schedulers:" +echo " batching.enabled=${BATCHING_ENABLED}" +echo " execution.enabled=${EXECUTION_ENABLED}" +echo " batching.maxIntents=${MAX_INTENTS}" +echo + +if [ "${BATCHING_ENABLED}" != "true" ] || [ "${EXECUTION_ENABLED}" != "true" ]; then + echo "❌ Schedulers must be enabled for this test." + echo "Set env vars and restart backend:" + echo " SCHEDULER_BATCHING_ENABLED=true" + echo " SCHEDULER_EXECUTION_ENABLED=true" + exit 1 +fi + +# Capture baseline counts +BASELINE_BATCHES=$(echo "$STATS" | jq -r '.statistics.totalBatches') +BASELINE_TRANSFERS=$(echo "$STATS" | jq -r '.statistics.totalTransfers') +echo "Baseline: totalBatches=${BASELINE_BATCHES}, totalTransfers=${BASELINE_TRANSFERS}" +echo + +refresh_new_batch_ids() { + # Recompute NEW_BATCH_IDS by diffing current /batches against BASELINE_BATCH_IDS. + # Also keeps any already-known batch IDs (order: as discovered). + local current_ids new_ids id old seen + current_ids=($(curl -s "${BASE_URL}/api/monitor/batches" | jq -r '.batches[].batchId')) + new_ids=() + for id in "${current_ids[@]}"; do + seen=false + for old in "${BASELINE_BATCH_IDS[@]}"; do + if [ "$id" = "$old" ]; then + seen=true + break + fi + done + if [ "$seen" = "false" ]; then + # ensure uniqueness in new_ids + local already=false + local x + for x in "${new_ids[@]}"; do + if [ "$x" = "$id" ]; then + already=true + break + fi + done + if [ "$already" = "false" ]; then + new_ids+=("$id") + fi + fi + done + NEW_BATCH_IDS=("${new_ids[@]}") +} + +# ──────────────────────────────────────────────────────────────────────────── +# Step 1: submit intents (txType=2) +# ──────────────────────────────────────────────────────────────────────────── +TS=$(date +%s) + +for ((i=0; i=2)." + echo " Submit one more intent or wait for manual handling." 
+ fi + break + fi + + RESP=$(curl -s -X POST "${BASE_URL}/api/monitor/create-batch-now") + OK=$(echo "$RESP" | jq -r '.success // false') + if [ "$OK" != "true" ]; then + echo "❌ create-batch-now failed:" + echo "$RESP" | jq '.' + exit 1 + fi + + BID=$(echo "$RESP" | jq -r '.batchId') + CREATED_BATCH_IDS+=("$BID") + echo "✅ Created batchId=${BID} (pending was ${PENDING})" + + # Small pause so monitor endpoints reflect newly stored batch + sleep 1 +done + +# Resolve final list of newly created batch IDs (covers scheduler-created batches too). +NEW_BATCH_IDS=() +refresh_new_batch_ids + +if [ "${#NEW_BATCH_IDS[@]}" -eq 0 ]; then + echo "❌ No NEW batches detected after submitting intents." + echo "Check backend logs and /api/monitor/batches." + exit 1 +fi + +echo +echo "New batches detected: ${NEW_BATCH_IDS[*]}" +echo + +# ──────────────────────────────────────────────────────────────────────────── +# Step 3: monitor execution until all created batches complete +# ──────────────────────────────────────────────────────────────────────────── +echo "Monitoring execution (up to 180s)..." +DEADLINE=$(( $(date +%s) + 180 )) + +while [ $(date +%s) -lt $DEADLINE ]; do + # Pick up any additional batches that might have been created asynchronously by the scheduler. + refresh_new_batch_ids + if [ "${#NEW_BATCH_IDS[@]}" -eq 0 ]; then + echo "⚠️ No new batches visible yet; waiting..." 
+ sleep 2 + continue + fi + + ALL_DONE=true + echo "---- $(date) ----" + for BID in "${NEW_BATCH_IDS[@]}"; do + DETAILS=$(curl -s "${BASE_URL}/api/monitor/batch/${BID}") + STATUS=$(echo "$DETAILS" | jq -r '.batch.status // .status // "UNKNOWN"') + EXECUTED=$(echo "$DETAILS" | jq -r '.batch.transfers | map(select(.executed == true)) | length') + TOTAL=$(echo "$DETAILS" | jq -r '.batch.transfers | length') + echo "batchId=${BID} status=${STATUS} executed=${EXECUTED}/${TOTAL}" + + if [ "$STATUS" = "FAILED" ]; then + # Helpful hint for txType=2 issues: if whitelistProof is empty, contract will revert NotWhitelisted. + NEEDS_WL=$(echo "$DETAILS" | jq -r '[.batch.transfers[].txData.txType] | any(. == 2)') + if [ "$NEEDS_WL" = "true" ]; then + WL_EMPTY_CNT=$(echo "$DETAILS" | jq -r '[.batch.transfers[] | select(.txData.txType == 2) | (.whitelistProofSize // 0)] | map(select(. == 0)) | length') + if [ "$WL_EMPTY_CNT" != "0" ]; then + echo " ↳ Detected txType=2 with EMPTY whitelistProof (count=${WL_EMPTY_CNT})." + echo " ↳ Fix: ensure the sender is in whitelist config and restart backend so it can sync the on-chain whitelist root." + echo " - Check application.yaml: whitelist.addresses" + echo " - If you use .env/env vars, make sure WHITELIST_ADDRESSES is NOT empty and includes FROM=${FROM}" + fi + fi + echo "❌ Batch ${BID} FAILED. Check backend logs (common causes: whitelist root mismatch, not whitelisted, insufficient balance/allowance)." + exit 1 + fi + if [ "$STATUS" != "COMPLETED" ]; then + ALL_DONE=false + fi + done + + if [ "$ALL_DONE" = "true" ]; then + echo "✅ All created batches COMPLETED." + exit 0 + fi + + sleep 5 +done + +echo "⚠️ Timed out waiting for completion. 
Check:" +echo " - ${BASE_URL}/api/monitor/batches" +echo " - backend logs" +exit 1 + + diff --git a/backend/test-20-intents.sh b/backend/test-20-intents.sh new file mode 100755 index 0000000..c61cd53 --- /dev/null +++ b/backend/test-20-intents.sh @@ -0,0 +1,79 @@ +#!/bin/bash +set -euo pipefail + +BASE_URL="${BASE_URL:-http://localhost:8080}" + +# Sender/recipient used previously in successful Nile tests +FROM="${FROM_ADDRESS:-TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M}" +TO="${TO_ADDRESS:-TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU}" + +COUNT="${COUNT:-20}" +SLEEP_BETWEEN="${SLEEP_BETWEEN:-0.1}" + +echo "Submitting ${COUNT} intents to ${BASE_URL}" +echo "FROM=${FROM}" +echo "TO=${TO}" + +# Capture baseline +BASELINE=$(curl -s "${BASE_URL}/api/monitor/batches") +BASELINE_BATCHES=$(echo "$BASELINE" | jq -r '.totalBatches') +BASELINE_TRANSFERS=$(echo "$BASELINE" | jq -r '.statistics.totalTransfers') +echo "Baseline: totalBatches=${BASELINE_BATCHES}, totalTransfers=${BASELINE_TRANSFERS}" + +TS=$(date +%s) + +for ((i=0; i/dev/null 2>&1; then + echo "❌ Backend is not running at ${BASE_URL}" + exit 1 +fi + +STATS=$(curl -s "${BASE_URL}/api/monitor/stats") +BATCHING_ENABLED=$(echo "$STATS" | jq -r '.schedulers.batching.enabled') +EXECUTION_ENABLED=$(echo "$STATS" | jq -r '.schedulers.execution.enabled') +if [ "$BATCHING_ENABLED" != "true" ] || [ "$EXECUTION_ENABLED" != "true" ]; then + echo "❌ Schedulers must be enabled." + echo " batching.enabled=$BATCHING_ENABLED" + echo " execution.enabled=$EXECUTION_ENABLED" + exit 1 +fi + +NONCE1=$TIMESTAMP +NONCE2=$((TIMESTAMP + 1)) + +AMOUNT1=1000000 +AMOUNT2=2000000 +# For txType=2 (BATCHED) the FeeModule requires recipientCount > 1. +# For this 2-intents batch, recipientCount=2 is the natural value. 
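Because this script pins `RECIPIENT_COUNT` to 2, the FeeModule constraint is satisfied by construction. If the value is ever parameterized, a guard like the one in `test-10-intents-batched.sh` applies; sketched standalone:

```shell
#!/bin/bash
# Guard mirroring test-10-intents-batched.sh:
# txType=2 (BATCHED) requires recipientCount > 1 in the FeeModule.
TXTYPE="${TXTYPE:-2}"
RECIPIENT_COUNT="${RECIPIENT_COUNT:-2}"
if [ "$TXTYPE" = "2" ] && [ "$RECIPIENT_COUNT" -le 1 ]; then
  echo "recipientCount must be > 1 for txType=2 (BATCHED)" >&2
  exit 1
fi
echo "recipientCount OK: $RECIPIENT_COUNT"
```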
+RECIPIENT_COUNT=2 + +submit_intent() { + local nonce="$1" + local amount="$2" + local body + body=$(jq -nc \ + --arg from "$FROM_ADDRESS" \ + --arg to "$TO_ADDRESS" \ + --arg amount "$amount" \ + --argjson nonce "$nonce" \ + --argjson timestamp "$TIMESTAMP" \ + --argjson recipientCount "$RECIPIENT_COUNT" \ + --argjson txType "$TXTYPE" \ + '{from:$from,to:$to,amount:$amount,nonce:$nonce,timestamp:$timestamp,recipientCount:$recipientCount,txType:$txType}') + + local http + http=$(curl -s -o /dev/null -w "%{http_code}" -X POST "${BASE_URL}/api/intents" \ + -H "Content-Type: application/json" -d "$body") + if [ "$http" != "202" ]; then + echo "❌ Intent submit failed (HTTP=$http): $body" + exit 1 + fi +} + +echo "Submitting intent 1..." +submit_intent "$NONCE1" "$AMOUNT1" +echo "✅ Intent 1 submitted" + +echo "Submitting intent 2..." +submit_intent "$NONCE2" "$AMOUNT2" +echo "✅ Intent 2 submitted" +echo + +echo "Triggering batch creation..." +RESP=$(curl -s -X POST "${BASE_URL}/api/monitor/create-batch-now") +OK=$(echo "$RESP" | jq -r '.success // false') +if [ "$OK" != "true" ]; then + echo "❌ create-batch-now failed:" + echo "$RESP" | jq '.' + exit 1 +fi + +BATCH_ID=$(echo "$RESP" | jq -r '.batchId') +echo "✅ Batch created: batchId=${BATCH_ID}" +echo + +echo "Checking whitelistProof..." +DETAILS=$(curl -s "${BASE_URL}/api/monitor/batch/${BATCH_ID}") +WL_SIZES=$(echo "$DETAILS" | jq -r '.batch.transfers | map(.whitelistProofSize)') +echo "whitelistProof sizes: ${WL_SIZES}" + +EMPTY_CNT=$(echo "$DETAILS" | jq -r '[.batch.transfers[] | .whitelistProofSize] | map(select(. == 0)) | length') +if [ "$EMPTY_CNT" != "0" ]; then + echo "❌ whitelistProof is empty for ${EMPTY_CNT} transfers." + echo "This means the backend did NOT generate whitelist proofs (txType=2 will revert NotWhitelisted)." + echo "Fix:" + echo " - Ensure FROM is in whitelist.addresses (TRON base58)." + echo " - Restart backend (whitelist root sync runs on startup)." 
+ exit 1 +fi +echo "✅ whitelistProof generated for all transfers" +echo + +echo "Monitoring execution (up to 60s)..." +DEADLINE=$(( $(date +%s) + 60 )) +while [ $(date +%s) -lt $DEADLINE ]; do + DETAILS=$(curl -s "${BASE_URL}/api/monitor/batch/${BATCH_ID}") + STATUS=$(echo "$DETAILS" | jq -r '.batch.status') + EXECUTED=$(echo "$DETAILS" | jq -r '.batch.transfers | map(select(.executed == true)) | length') + TOTAL=$(echo "$DETAILS" | jq -r '.batch.transfers | length') + echo "status=${STATUS} executed=${EXECUTED}/${TOTAL}" + + if [ "$STATUS" = "COMPLETED" ]; then + echo "✅ COMPLETED" + exit 0 + fi + if [ "$STATUS" = "FAILED" ]; then + echo "❌ FAILED (check backend logs for revert reason)" + exit 1 + fi + sleep 5 +done + +echo "⚠️ Timed out waiting for completion. Check:" +echo " ${BASE_URL}/api/monitor/batch/${BATCH_ID}" +exit 1 + + diff --git a/backend/test-two-intents-full-flow.sh b/backend/test-two-intents-full-flow.sh new file mode 100755 index 0000000..da1b7f4 --- /dev/null +++ b/backend/test-two-intents-full-flow.sh @@ -0,0 +1,462 @@ +#!/bin/bash + +# ═══════════════════════════════════════════════════════════════════════════ +# 🚀 FULL FLOW TEST - 2 INTENTS ON NILE +# ═══════════════════════════════════════════════════════════════════════════ +# +# This script tests the COMPLETE Java backend flow with 2 intents: +# 1. Submit 2 transfer intents via REST API +# 2. Monitor batch creation +# 3. Wait for batch unlock time +# 4. Monitor execution +# 5. 
Verify success on Nile blockchain +# +# ═══════════════════════════════════════════════════════════════════════════ + +set -e + +BASE_URL="http://localhost:8080" +TIMESTAMP=$(date +%s) + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +CYAN='\033[0;36m' +MAGENTA='\033[0;35m' +NC='\033[0m' # No Color + +echo "" +echo "╔══════════════════════════════════════════════════════════════════════════╗" +echo "║ 🚀 FULL FLOW TEST - 2 INTENTS ON NILE 🚀 ║" +echo "╚══════════════════════════════════════════════════════════════════════════╝" +echo "" + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 0: Check backend is running +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 0: Checking Backend Status ═══${NC}" +echo "" + +if ! curl -s -f $BASE_URL/api/monitor/stats > /dev/null 2>&1; then + echo -e "${RED}❌ Backend is not running!${NC}" + echo "" + echo "Please start the backend first:" + echo " ./gradlew bootRun" + echo "" + exit 1 +fi + +echo -e "${GREEN}✅ Backend is running${NC}" + +# Check configuration +CONFIG=$(curl -s $BASE_URL/api/monitor/stats) +BATCHING_ENABLED=$(echo $CONFIG | jq -r '.schedulers.batching.enabled') +EXECUTION_ENABLED=$(echo $CONFIG | jq -r '.schedulers.execution.enabled') +MAX_INTENTS=$(echo $CONFIG | jq -r '.schedulers.batching.maxIntents') +MAX_DELAY=$(echo $CONFIG | jq -r '.schedulers.batching.maxDelaySeconds') + +echo "" +echo "Configuration:" +echo " • Batching enabled: $BATCHING_ENABLED" +echo " • Execution enabled: $EXECUTION_ENABLED" +echo " • Max intents: $MAX_INTENTS" +echo " • Max delay: ${MAX_DELAY}s" +echo "" + +if [ "$BATCHING_ENABLED" != "true" ] || [ "$EXECUTION_ENABLED" != "true" ]; then + echo -e "${RED}❌ Schedulers are not enabled!${NC}" + echo "" + echo "Please enable schedulers in application.yaml or set environment variables:" + echo " SCHEDULER_BATCHING_ENABLED=true" + echo " 
SCHEDULER_EXECUTION_ENABLED=true" + echo "" + exit 1 +fi + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 1: Get initial statistics +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 1: Initial Statistics ═══${NC}" +echo "" + +INITIAL_STATS=$(curl -s $BASE_URL/api/monitor/stats) +INITIAL_TRANSFERS=$(echo $INITIAL_STATS | jq -r '.statistics.totalTransfers') +INITIAL_BATCHES=$(echo $INITIAL_STATS | jq -r '.statistics.totalBatches') +INITIAL_COMPLETED=$(echo $INITIAL_STATS | jq -r '.statistics.completedBatches') + +echo "Current state:" +echo " • Total transfers: $INITIAL_TRANSFERS" +echo " • Total batches: $INITIAL_BATCHES" +echo " • Completed batches: $INITIAL_COMPLETED" +echo "" + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 2: Submit 2 transfer intents +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 2: Submit 2 Transfer Intents ═══${NC}" +echo "" + +FROM_ADDRESS="TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M" +TO_ADDRESS="TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU" + +# Intent 1 - DELAYED (txType 0) +NONCE1=$TIMESTAMP +AMOUNT1=5000000 # 5 USDT + +echo -e "${BLUE}Intent 1 (DELAYED):${NC}" +echo " • From: $FROM_ADDRESS" +echo " • To: $TO_ADDRESS" +echo " • Amount: 5.0 USDT" +echo " • Nonce: $NONCE1" +echo " • Type: 0 (DELAYED)" +echo "" + +HTTP_CODE1=$(curl -s -w "%{http_code}" -o /dev/null -X POST $BASE_URL/api/intents \ + -H "Content-Type: application/json" \ + -d "{ + \"from\": \"$FROM_ADDRESS\", + \"to\": \"$TO_ADDRESS\", + \"amount\": \"$AMOUNT1\", + \"nonce\": $NONCE1, + \"timestamp\": $TIMESTAMP, + \"recipientCount\": 1, + \"txType\": 0 + }") + +if [ "$HTTP_CODE1" != "202" ]; then + echo -e "${RED}❌ Failed to submit intent 1! 
HTTP Code: $HTTP_CODE1${NC}" + exit 1 +fi + +echo -e "${GREEN}✅ Intent 1 submitted${NC}" +sleep 0.5 + +# Intent 2 - INSTANT (txType 1) +NONCE2=$((TIMESTAMP + 1)) +AMOUNT2=10000000 # 10 USDT + +echo "" +echo -e "${BLUE}Intent 2 (INSTANT):${NC}" +echo " • From: $FROM_ADDRESS" +echo " • To: $TO_ADDRESS" +echo " • Amount: 10.0 USDT" +echo " • Nonce: $NONCE2" +echo " • Type: 1 (INSTANT)" +echo "" + +HTTP_CODE2=$(curl -s -w "%{http_code}" -o /dev/null -X POST $BASE_URL/api/intents \ + -H "Content-Type: application/json" \ + -d "{ + \"from\": \"$FROM_ADDRESS\", + \"to\": \"$TO_ADDRESS\", + \"amount\": \"$AMOUNT2\", + \"nonce\": $NONCE2, + \"timestamp\": $TIMESTAMP, + \"recipientCount\": 1, + \"txType\": 1 + }") + +if [ "$HTTP_CODE2" != "202" ]; then + echo -e "${RED}❌ Failed to submit intent 2! HTTP Code: $HTTP_CODE2${NC}" + exit 1 +fi + +echo -e "${GREEN}✅ Intent 2 submitted${NC}" +echo "" + +sleep 1 + +# Verify pending count +STATS=$(curl -s $BASE_URL/api/monitor/stats) +PENDING=$(echo $STATS | jq -r '.statistics.pendingTransfers') + +echo "Pending transfers: $PENDING" +echo "" + +if [ "$PENDING" != "2" ]; then + echo -e "${YELLOW}⚠️ Expected 2 pending transfers, got $PENDING${NC}" +fi + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 3: Trigger batch creation immediately +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 3: Trigger Batch Creation ═══${NC}" +echo "" + +echo "Note: We have 2 intents and max is $MAX_INTENTS" +echo "We can either:" +echo " a) Wait ${MAX_DELAY}s for auto-batching" +echo " b) Submit $((MAX_INTENTS - 2)) more intents" +echo " c) Manually trigger batching now" +echo "" + +echo -e "${BLUE}Manually triggering batch creation...${NC}" +echo "" + +BATCH_RESPONSE=$(curl -s -X POST $BASE_URL/api/monitor/create-batch-now) +echo "$BATCH_RESPONSE" | jq '.' 
+echo "" + +BATCH_CREATED=$(echo $BATCH_RESPONSE | jq -r '.success') + +if [ "$BATCH_CREATED" = "true" ]; then + BATCH_ID=$(echo $BATCH_RESPONSE | jq -r '.batchId') + MERKLE_ROOT=$(echo $BATCH_RESPONSE | jq -r '.merkleRoot') + TX_COUNT=$(echo $BATCH_RESPONSE | jq -r '.txCount') + + echo -e "${GREEN}✅ Batch created successfully!${NC}" + echo "" + echo "Batch details:" + echo " • Batch ID: $BATCH_ID" + echo " • Merkle Root: ${MERKLE_ROOT:0:20}..." + echo " • TX Count: $TX_COUNT" + echo "" +else + echo -e "${RED}❌ Batch creation failed!${NC}" + echo "" + ERROR=$(echo $BATCH_RESPONSE | jq -r '.error // "Unknown error"') + echo "Error: $ERROR" + echo "" + exit 1 +fi + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 4: Check batch status and unlock time +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 4: Check Batch Status ═══${NC}" +echo "" + +sleep 2 # Wait for batch submission to finalize + +BATCH_DETAILS=$(curl -s "$BASE_URL/api/monitor/batch/$BATCH_ID") +STATUS=$(echo $BATCH_DETAILS | jq -r '.batch.status') +UNLOCK_TIME=$(echo $BATCH_DETAILS | jq -r '.batch.unlockTime') +CURRENT_TIME=$(date +%s) + +echo "Batch status:" +echo " • Status: $STATUS" +echo " • Unlock time: $UNLOCK_TIME" +echo " • Current time: $CURRENT_TIME" +echo "" + +if [ "$UNLOCK_TIME" != "null" ] && [ "$UNLOCK_TIME" != "0" ]; then + WAIT_TIME=$((UNLOCK_TIME - CURRENT_TIME)) + if [ "$WAIT_TIME" -gt 0 ]; then + echo -e "${YELLOW}⚠️ Batch is locked for ${WAIT_TIME} more seconds${NC}" + echo "" + echo -e "${BLUE}Waiting for unlock time...${NC}" + echo "" + + # Wait with countdown + for ((i=$WAIT_TIME; i>0; i--)); do + if [ $((i % 5)) -eq 0 ] || [ $i -le 5 ]; then + echo -e " ⏳ ${i}s remaining..." 
+ fi + sleep 1 + done + + echo "" + echo -e "${GREEN}✅ Batch is now unlocked!${NC}" + echo "" + else + echo -e "${GREEN}✅ Batch is already unlocked!${NC}" + echo "" + fi +else + echo -e "${GREEN}✅ No timelock (unlock time is 0)${NC}" + echo "" +fi + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 5: Monitor execution +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 5: Monitor Execution ═══${NC}" +echo "" + +echo "Execution scheduler checks every 5 seconds" +echo "Monitoring for up to 60 seconds..." +echo "" + +EXECUTION_SUCCESS=false + +for i in {1..12}; do + sleep 5 + + BATCH_DETAILS=$(curl -s "$BASE_URL/api/monitor/batch/$BATCH_ID") + STATUS=$(echo $BATCH_DETAILS | jq -r '.batch.status') + EXECUTED_COUNT=$(echo $BATCH_DETAILS | jq -r '.batch.transfers | map(select(.executed == true)) | length') + TOTAL_COUNT=$(echo $BATCH_DETAILS | jq -r '.batch.transfers | length') + + echo -e " [${i}/12] Status: ${STATUS}, Executed: ${EXECUTED_COUNT}/${TOTAL_COUNT}" + + if [ "$STATUS" = "COMPLETED" ]; then + echo "" + echo -e "${GREEN}✅ Batch execution completed!${NC}" + EXECUTION_SUCCESS=true + break + elif [ "$STATUS" = "FAILED" ]; then + echo "" + echo -e "${RED}❌ Batch execution failed!${NC}" + break + fi +done + +echo "" + +if [ "$EXECUTION_SUCCESS" != "true" ]; then + if [ "$STATUS" = "FAILED" ]; then + echo -e "${RED}Execution failed. 
Check logs for details.${NC}" + else + echo -e "${YELLOW}Execution still in progress or not started yet.${NC}" + echo "Current status: $STATUS" + fi + echo "" +fi + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 6: View detailed batch information +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 6: Batch Details ═══${NC}" +echo "" + +BATCH_DETAILS=$(curl -s "$BASE_URL/api/monitor/batch/$BATCH_ID") +echo "$BATCH_DETAILS" | jq '{ + batchId: .batch.batchId, + status: .batch.status, + merkleRoot: .batch.merkleRoot, + unlockTime: .batch.unlockTime, + transfers: .batch.transfers | map({ + from: .txData.from, + to: .txData.to, + amount: .txData.amount, + txType: .txData.txType, + executed: .executed + }) +}' +echo "" + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 7: Verify on Nile blockchain (if execution succeeded) +# ═══════════════════════════════════════════════════════════════════════════ + +if [ "$EXECUTION_SUCCESS" = "true" ]; then + echo -e "${CYAN}═══ STEP 7: Verify on Nile Blockchain ═══${NC}" + echo "" + + # Extract transaction details + TRANSFER_1_FROM=$(echo $BATCH_DETAILS | jq -r '.batch.transfers[0].txData.from') + TRANSFER_1_TO=$(echo $BATCH_DETAILS | jq -r '.batch.transfers[0].txData.to') + TRANSFER_1_AMOUNT=$(echo $BATCH_DETAILS | jq -r '.batch.transfers[0].txData.amount') + + TRANSFER_2_FROM=$(echo $BATCH_DETAILS | jq -r '.batch.transfers[1].txData.from') + TRANSFER_2_TO=$(echo $BATCH_DETAILS | jq -r '.batch.transfers[1].txData.to') + TRANSFER_2_AMOUNT=$(echo $BATCH_DETAILS | jq -r '.batch.transfers[1].txData.amount') + + echo "Transfer 1:" + echo " • From: $TRANSFER_1_FROM" + echo " • To: $TRANSFER_1_TO" + echo " • Amount: $TRANSFER_1_AMOUNT (5.0 USDT)" + echo " • Type: DELAYED" + echo "" + + echo "Transfer 2:" + echo " • From: $TRANSFER_2_FROM" + echo " • To: $TRANSFER_2_TO" + echo " • Amount: $TRANSFER_2_AMOUNT 
(10.0 USDT)" + echo " • Type: INSTANT" + echo "" + + echo -e "${BLUE}To verify on Nile blockchain:${NC}" + echo "" + echo "1. Check Settlement contract events:" + echo " https://nile.tronscan.org/#/contract/TDum6BeRGA5hruf1Z2FRfavEZTn5DfWqAJ/events" + echo "" + echo "2. Look for 'TransferExecuted' events for batch ID: $BATCH_ID" + echo "" + echo "3. Check recipient balance:" + echo " https://nile.tronscan.org/#/address/$TO_ADDRESS" + echo "" +fi + +# ═══════════════════════════════════════════════════════════════════════════ +# STEP 8: Final statistics +# ═══════════════════════════════════════════════════════════════════════════ + +echo -e "${CYAN}═══ STEP 8: Final Statistics ═══${NC}" +echo "" + +FINAL_STATS=$(curl -s $BASE_URL/api/monitor/stats) +FINAL_TRANSFERS=$(echo $FINAL_STATS | jq -r '.statistics.totalTransfers') +FINAL_BATCHES=$(echo $FINAL_STATS | jq -r '.statistics.totalBatches') +FINAL_COMPLETED=$(echo $FINAL_STATS | jq -r '.statistics.completedBatches') +FINAL_PENDING=$(echo $FINAL_STATS | jq -r '.statistics.pendingTransfers') + +echo "Statistics:" +echo " • Total transfers: $FINAL_TRANSFERS (was $INITIAL_TRANSFERS, +$((FINAL_TRANSFERS - INITIAL_TRANSFERS)))" +echo " • Total batches: $FINAL_BATCHES (was $INITIAL_BATCHES, +$((FINAL_BATCHES - INITIAL_BATCHES)))" +echo " • Completed batches: $FINAL_COMPLETED (was $INITIAL_COMPLETED, +$((FINAL_COMPLETED - INITIAL_COMPLETED)))" +echo " • Pending transfers: $FINAL_PENDING" +echo "" + +# ═══════════════════════════════════════════════════════════════════════════ +# SUMMARY +# ═══════════════════════════════════════════════════════════════════════════ + +echo "" +echo "╔══════════════════════════════════════════════════════════════════════════╗" +echo "║ 🎯 TEST SUMMARY 🎯 ║" +echo "╚══════════════════════════════════════════════════════════════════════════╝" +echo "" + +echo -e "${GREEN}✓${NC} Backend running and configured" +echo -e "${GREEN}✓${NC} 2 transfer intents submitted" +echo -e "${GREEN}✓${NC} Batch 
created (ID: $BATCH_ID)" + +if [ "$EXECUTION_SUCCESS" = "true" ]; then + echo -e "${GREEN}✓${NC} Batch execution completed successfully" + echo "" + echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}" + echo -e "${MAGENTA} 🎉 SUCCESS! ALL TESTS PASSED 🎉 ${NC}" + echo -e "${MAGENTA}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}" + echo "" + echo "Your Java backend successfully:" + echo " 1. ✓ Received 2 transfer intents" + echo " 2. ✓ Created a merkle tree batch" + echo " 3. ✓ Submitted batch to Nile blockchain" + echo " 4. ✓ Waited for unlock time" + echo " 5. ✓ Executed both transfers on-chain" + echo "" + echo "Verify on Nile blockchain:" + echo " 🔗 https://nile.tronscan.org/#/contract/TDum6BeRGA5hruf1Z2FRfavEZTn5DfWqAJ/events" + echo "" +else + echo -e "${YELLOW}⚠${NC} Batch execution status: $STATUS" + echo "" + echo "Possible reasons:" + echo " • Still processing (check logs)" + echo " • Insufficient balance or allowance" + echo " • Contract configuration issue" + echo "" + echo "Check backend logs for details:" + echo " ./gradlew bootRun" +fi + +echo "" +echo "View all batches:" +echo " curl $BASE_URL/api/monitor/batches | jq" +echo "" +echo "View this batch:" +echo " curl $BASE_URL/api/monitor/batch/$BATCH_ID | jq" +echo "" +echo "═══════════════════════════════════════════════════════════════════════════" +echo "" + diff --git a/contracts/README.md b/contracts/README.md new file mode 100644 index 0000000..c50b2b4 --- /dev/null +++ b/contracts/README.md @@ -0,0 +1,155 @@ +## TSBL-contracts + +This folder contains the **on-chain core of the TRON Settlement Batching Layer (TSBL)** — a set of smart contracts that implement **batch-based token transfer execution** using Merkle trees, delayed finality (time-lock), and modular fee logic. +The contracts are designed so that **all critical validation happens on-chain**. +The backend acts only as an **aggregator/operator**, not as a trusted execution component. 
+ +### **WhitelistRegistry.sol** + + ──────── STATE VARIABLES ──────── + ├── bytes32 merkleRoot + ├── uint256 lastUpdate + ──────── FUNCTIONS ──────── + ├── function verifyWhitelist(user, proof) + ├── function updateMerkleRoot(newRoot, sig) + ├── function requestWhitelist(proof) + ──────── EVENTS ──────── + ├── emits WhitelistUpdated, WhitelistRequested + +### Purpose + +`WhitelistRegistry` manages **permissioned access for batched transactions** (`TxType.BATCHED`) using **Merkle tree–based whitelists**. + +Whitelist verification is **only required for batched transactions**. +Non-batched transaction types do not depend on this contract. + +### Core idea + +- The whitelist is represented by a **single Merkle root stored on-chain** +- Users prove inclusion via a **Merkle proof**, without storing addresses on-chain +- Updates to the whitelist are **authorized via ECDSA signatures** and protected by a nonce +- Users may submit whitelist requests by paying a small fee (anti-spam + signaling) + +### **FeeModule.sol** + + ──────── TYPES ──────── + ├── enum TxType + ├── mapping(address => uint256) dailyTxCount + ├── mapping(address => uint256) lastResetTimestamp + ├── mapping(bytes32 transferHash => FeeRecord) feeRecords + ├── mapping(address => bytes32[]) userFeeHistory + ├── mapping(bytes32 batchId => uint256) batchTotalFees + ──────── STATE VARIABLES ──────── + ├── ITSOLWhitelistRegistry public whitelistRegistry + ├── uint256 public totalFeesCollected + ├── uint256 FREE_TIER_LIMIT = 10 tx/day + ├── uint256 INSTANT_FEE = 0.2 TRX + ├── uint256 BATCH_FEE_PER_RECIPIENT = 0.05 TRX/rcpt + ├── uint256 BASE_FEE = 0.1 TRX + ──────── FUNCTIONS ──────── + ├── function calculateFee(sender, TxType, volume, recipientCount) + ├── 1. Check whitelist status (for batch processing) + ├── 2. Check large volume → ENERGY-FREE + ├── 3. Check daily free tier (for small users) + ├── 4. Calculate fee based on TxType + └── 5. 
Return fee & whitelist status + ├── function applyFee(sender, fee, TxType, transferHash, batchId) + ──────── EVENTS ──────── + ├── emits FeeCalculated, FeeApplied, FreeTierUsed + +### Purpose + +`FeeModule` is responsible for **on-chain fee calculation and accounting**, but **does not collect or transfer real funds**. + +> ⚠️ **Important** +> This module is **purely logical and statistical**: +> - It calculates *what the fee should be* +> - It records fee usage for analytics and UX +> - It does **not** deduct TRX or tokens + +### Core idea + +- Fee calculation depends on: + - transaction type (`TxType`) + - recipient count (`recipientCount`) + - transfer volume + - user free-tier quota +- **Backend never calculates fees** — it only calls `calculateFee` +- Fee logic is deterministic and fully verifiable on-chain + +### **Settlement.sol** + + ──────── TYPES ──────── + ├── struct Batch + ├── struct TransferData + ├── mapping(bytes32 batchId => Batch) batches + ├── mapping (uint256 => bool) executedTransfers + ├── mapping(address => bool) approvedAggregators + ──────── STATE VARIABLES ──────── + ├── ITSOLFeeModule public feeModule + ├── ITSOLWhitelistRegistry public whitelistRegistry + ├── uint256 maxTxPerBatch + ├── uint256 public timeLockDuration + ──────── MODIFIERS ──────── + ├── modifier onlyOwner() + ├── modifier onlyApprovedAggregator() + ──────── FUNCTIONS ──────── + ├── function submitBatch(rootHash, txCount, batchMetadata) onlyApprovedAggregator + ├── 1. Validate + ├── 2. Store batch + ├── 3. Time lock (delayed finality) + └── 4. Emit BatchSubmitted event + ├── function executeTransfer(proof, transactionData) + ├── 1. Get batch merkleRoot from metadata + ├── 2. Validate batch exists and time lock passed + ├── 3. Generate transfer hash + ├── 4. Check not executed + ├── 5. Verify Merkle proof + ├── 6. Calculate fee (with whitelist check) + ├── 7. Apply fee + ├── 8. Execute token transfer + ├── 9. Mark as executed + └── 10. 
Emit TransferExecuted event + ├── function _verifyMerkleProof(root, leaf, proof) + ├── function setFeeModule(_feeModule) onlyOwner + ├── function setWhitelistRegistry(_registry) onlyOwner + ├── function approveAggregator(aggregator, approved) onlyOwner + ├── function setMaxTxPerBatch(_max) onlyOwner + ├── function setTimeLockDuration(_duration) onlyOwner + ──────── EVENTS ──────── + ├── emits BatchSubmitted, TransferExecuted + +### Purpose + +`Settlement` is the **execution layer of the protocol**. + +It is responsible for: +- accepting batches (Merkle roots) +- enforcing delayed finality (time-lock) +- executing **exactly one transfer per Merkle leaf** + +### Core idea + +> **Batch ≠ multi-send transaction** + +A batch is a **commitment (Merkle root)** to many transfers. +Each transfer is executed **individually**, using its own Merkle proof. + +This design preserves: +- replay protection +- deterministic execution +- partial batch execution safety + +## On-chain Execution Flow + +```text +Aggregator / Backend + | + | submitBatch(merkleRoot, txCount) + v +Settlement + | (time-lock delay) + | + | executeTransfer(proof, data) + v +Merkle verification → fee calculation → token transfer diff --git a/contracts/foundry.lock b/contracts/foundry.lock new file mode 100644 index 0000000..dc035b1 --- /dev/null +++ b/contracts/foundry.lock @@ -0,0 +1,14 @@ +{ + "lib/forge-std": { + "tag": { + "name": "v1.11.0", + "rev": "8e40513d678f392f398620b3ef2b418648b33e89" + } + }, + "lib/openzeppelin-contracts": { + "tag": { + "name": "v5.5.0", + "rev": "fcbae5394ae8ad52d8e580a3477db99814b9d565" + } + } +} \ No newline at end of file diff --git a/contracts/foundry.toml b/contracts/foundry.toml new file mode 100644 index 0000000..2166e98 --- /dev/null +++ b/contracts/foundry.toml @@ -0,0 +1,23 @@ +[profile.default] +src = "src" +out = "out" +libs = ["lib"] +via_ir = true +optimizer = true +optimizer_runs = 200 + +remappings = 
["@openzeppelin/contracts/=lib/openzeppelin-contracts/contracts/"] + +# Formatter settings +line_length = 120 +tab_width = 4 +bracket_spacing = false +int_types = "long" +multiline_func_header = "attributes_first" +quote_style = "double" +number_underscore = "thousands" +override_spacing = true +wrap_comments = true +ignore = [] + +# See more config options https://github.com/foundry-rs/foundry/blob/master/crates/config/README.md#all-options diff --git a/contracts/package.json b/contracts/package.json new file mode 100644 index 0000000..f6f5808 --- /dev/null +++ b/contracts/package.json @@ -0,0 +1,35 @@ +{ + "name": "tron-sc", + "version": "1.0.0", + "description": "──────── STATE VARIABLES ──────── ├── bytes32 merkleRoot ├── uint256 lastUpdate ──────── FUNCTIONS ──────── ├── function verifyWhitelist(user, proof) ├── function updateMerkleRoot(newRoot, sig) ├── function requestWhitelist(proof) ──────── EVENTS ──────── ├── emits WhitelistUpdated, WhitelistRequested", + "main": "index.js", + "directories": { + "lib": "lib", + "test": "test" + }, + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1", + "lint": "solhint 'src/**/*.sol'", + "lint:fix": "solhint 'src/**/*.sol' --fix" + }, + "repository": { + "type": "git", + "url": "git+https://github.com/BoostyLabs/tron-sc.git" + }, + "keywords": [], + "author": "", + "license": "ISC", + "type": "commonjs", + "bugs": { + "url": "https://github.com/BoostyLabs/tron-sc/issues" + }, + "homepage": "https://github.com/BoostyLabs/tron-sc#readme", + "dependencies": { + "dotenv": "^17.2.3", + "ethereumjs-util": "^7.1.5", + "tronweb": "^6.1.0" + }, + "devDependencies": { + "solhint": "^6.0.1" + } +} diff --git a/contracts/script/for-tests/DeployFeeModule.s.sol b/contracts/script/for-tests/DeployFeeModule.s.sol new file mode 100644 index 0000000..96fc38f --- /dev/null +++ b/contracts/script/for-tests/DeployFeeModule.s.sol @@ -0,0 +1,14 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Script}
from "forge-std/Script.sol"; +import {FeeModule} from "../../src/FeeModule.sol"; + +contract DeployFeeModule is Script { + function run() public returns (FeeModule) { + vm.startBroadcast(); + FeeModule feeModule = new FeeModule(); + vm.stopBroadcast(); + return feeModule; + } +} diff --git a/contracts/script/for-tests/DeployRegistry.s.sol b/contracts/script/for-tests/DeployRegistry.s.sol new file mode 100644 index 0000000..1d422b1 --- /dev/null +++ b/contracts/script/for-tests/DeployRegistry.s.sol @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Script} from "forge-std/Script.sol"; +import {WhitelistRegistry} from "../../src/WhitelistRegistry.sol"; +import {HelperConfig} from "./HelperConfig.s.sol"; + +contract DeployRegistry is Script { + function run() public returns (WhitelistRegistry) { + HelperConfig helperConfig = new HelperConfig(); + + vm.startBroadcast(); + address updater = helperConfig.getActiveNetworkConfig(); + WhitelistRegistry registry = new WhitelistRegistry(updater); + vm.stopBroadcast(); + + return registry; + } +} diff --git a/contracts/script/for-tests/DeploySettlement.s.sol b/contracts/script/for-tests/DeploySettlement.s.sol new file mode 100644 index 0000000..1d6c4cc --- /dev/null +++ b/contracts/script/for-tests/DeploySettlement.s.sol @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Script} from "forge-std/Script.sol"; +import {Settlement} from "../../src/Settlement.sol"; + +contract DeploySettlement is Script { + function run() public returns (Settlement) { + vm.startBroadcast(); + Settlement settlement = new Settlement(); + vm.stopBroadcast(); + + return settlement; + } +} diff --git a/contracts/script/for-tests/HelperConfig.s.sol b/contracts/script/for-tests/HelperConfig.s.sol new file mode 100644 index 0000000..fe5a56d --- /dev/null +++ b/contracts/script/for-tests/HelperConfig.s.sol @@ -0,0 +1,44 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + 
+import {Script} from "forge-std/Script.sol"; + +contract HelperConfig is Script { + struct NetworkConfig { + address updater; + } + + NetworkConfig public activeNetworkConfig; + + constructor() { + if (block.chainid == 2494104990) { + activeNetworkConfig = getShastaTestnetConfig(); + } else if (block.chainid == 728126428) { + activeNetworkConfig = getTronMainnetConfig(); + } else { + activeNetworkConfig = getOrCreateAnvilEthConfig(); + } + } + + function getShastaTestnetConfig() public pure returns (NetworkConfig memory) { + // change address + address updater = address(1); + return NetworkConfig({updater: updater}); + } + + function getTronMainnetConfig() public pure returns (NetworkConfig memory) { + // change address + address updater = address(1); + return NetworkConfig({updater: updater}); + } + + function getOrCreateAnvilEthConfig() public pure returns (NetworkConfig memory) { + // default anvil address + address updater = 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266; + return NetworkConfig({updater: updater}); + } + + function getActiveNetworkConfig() public view returns (address) { + return (activeNetworkConfig.updater); + } +} diff --git a/contracts/script/interactions/1_set.js b/contracts/script/interactions/1_set.js new file mode 100644 index 0000000..e6d3a73 --- /dev/null +++ b/contracts/script/interactions/1_set.js @@ -0,0 +1,59 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +const FEE_LIMIT = 500_000_000; + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi, bytecode: j.bytecode?.object || j.bytecode }; +} + +async function main() { + try { + const network = process.argv[2] || 'nile'; + const pk = 
process.env.UPDATER_PRIVATE_KEY; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + const { abi: feeAbi } = loadArtifact('FeeModule'); + const { abi: settlementAbi } = loadArtifact('Settlement'); + const { abi: whitelistAbi } = loadArtifact('WhitelistRegistry'); + + const feeModule = await tronWeb.contract(feeAbi, process.env.FEE_MODULE_ADDRESS); + const settlement = await tronWeb.contract(settlementAbi, process.env.SETTLEMENT_ADDRESS); + + let tx = await feeModule.setSettlement(process.env.SETTLEMENT_ADDRESS).send({ feeLimit: FEE_LIMIT }); + console.log('FeeModule setSettlement txID:', tx); + + tx = await settlement.setFeeModule(process.env.FEE_MODULE_ADDRESS).send({ feeLimit: FEE_LIMIT }); + console.log('Settlement setFeeModule txID:', tx); + + tx = await settlement.setWhitelistRegistry(process.env.WHITELIST_REGISTRY_ADDRESS).send({ feeLimit: FEE_LIMIT }); + console.log('Settlement setWhitelistRegistry txID:', tx); + + tx = await settlement.setMaxTxPerBatch(process.env.MAX_TX_PER_BATCH).send({ feeLimit: FEE_LIMIT }); + console.log('Settlement setMaxTxPerBatch txID:', tx); + + tx = await settlement.setTimelockDuration(process.env.TIMELOCK_DURATION).send({ feeLimit: FEE_LIMIT }); + console.log('Settlement setTimelockDuration txID:', tx); + + tx = await settlement.setToken(process.env.TOKEN_ADDRESS).send({ feeLimit: FEE_LIMIT }); + console.log('Settlement setToken txID:', tx); + + } catch (error) { + console.error(error); + process.exit(1); + } +} + +main(); \ No newline at end of file diff --git a/contracts/script/interactions/2_signRoot.js b/contracts/script/interactions/2_signRoot.js new file mode 100644 index 0000000..3d0e3d7 --- /dev/null +++ b/contracts/script/interactions/2_signRoot.js @@ -0,0 +1,100 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb 
} = require('tronweb'); +const { ethers } = require('ethers'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +function requireEnv(name) { + const v = process.env[name]; + if (!v) throw new Error(`Missing env: ${name}`); + return v.trim(); +} + +function tronBase58ToEvm0x(base58) { + const hex = TronWeb.address.toHex(base58); + const hexNo0x = hex.startsWith('0x') ? hex.slice(2) : hex; + if (!hexNo0x.toLowerCase().startsWith('41')) throw new Error(`Unexpected TRON hex: ${hex}`); + const evmHexNo0x = hexNo0x.slice(2); + if (evmHexNo0x.length !== 40) throw new Error(`Invalid EVM address length`); + return ethers.getAddress('0x' + evmHexNo0x); +} + +function ensureBytes32(hex) { + const h = ethers.hexlify(hex); + const b = ethers.getBytes(h); + if (b.length !== 32) throw new Error('Expected bytes32'); + return h; +} + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi }; +} + +async function main() { + const network = process.argv[2] || 'nile'; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + + const UPDATER_PK = requireEnv('UPDATER_PRIVATE_KEY'); + const REGISTRY_BASE58 = requireEnv('WHITELIST_REGISTRY_ADDRESS'); + const WL_NEW_ROOT = requireEnv('WL_NEW_ROOT'); + const CHAIN_ID = requireEnv('CHAIN_ID'); + + const wallet = new ethers.Wallet(UPDATER_PK.startsWith('0x') ? 
UPDATER_PK : `0x${UPDATER_PK}`); + + const tronWeb = new TronWeb({ + fullHost: NETWORKS[network].fullHost, + privateKey: UPDATER_PK + }); + + const { abi: registryAbi } = loadArtifact('WhitelistRegistry'); + const registry = await tronWeb.contract(registryAbi, REGISTRY_BASE58); + + const preNonceEnv = requireEnv('WL_NONCE'); + const preNonce = BigInt(preNonceEnv); + + const updaterBase58 = TronWeb.address.fromPrivateKey(UPDATER_PK); + const isAuth = await registry.isAuthorizedUpdater(updaterBase58).call(); + if (!isAuth) throw new Error('Updater is NOT authorized. Call addAuthorizedUpdater(updaterBase58) from an admin.'); + + const root32 = ensureBytes32(WL_NEW_ROOT); + const chainIdBig = ethers.toBigInt(CHAIN_ID); + const registry0x = tronBase58ToEvm0x(REGISTRY_BASE58); + + const packed = ethers.solidityPacked( + ['bytes32', 'uint64', 'uint256', 'address'], + [root32, preNonce, chainIdBig, registry0x] + ); + const digest = ethers.keccak256(packed); + const signature = await wallet.signMessage(ethers.getBytes(digest)); + + const recovered = ethers.verifyMessage(ethers.getBytes(digest), signature); + if (ethers.getAddress(recovered) !== ethers.getAddress(wallet.address)) { + throw new Error(`Signature mismatch: recovered ${recovered} != wallet ${wallet.address}`); + } + + // Record the target network so 3_updateRoot.js can detect a mismatch. + const out = { + network, + root: root32, + nonce: preNonce.toString(), + chainId: chainIdBig.toString(), + registry0x, + signature + }; + fs.writeFileSync(path.join(__dirname, 'signature.json'), JSON.stringify(out, null, 2)); + + console.log('On-chain nonce:', preNonce.toString()); + console.log('Updater authorized:', isAuth); + console.log('CHAIN_ID:', CHAIN_ID); + console.log('Signature generated and saved to signature.json'); +} + +main().catch((err) => { + console.error(err); + process.exit(1); +}); \ No newline at end of file diff --git a/contracts/script/interactions/3_updateRoot.js b/contracts/script/interactions/3_updateRoot.js new file mode 100644 index 0000000..292a694 --- /dev/null +++ 
b/contracts/script/interactions/3_updateRoot.js @@ -0,0 +1,100 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; +const FEE_LIMIT = 500_000_000; + +function requireEnv(name) { + const v = process.env[name]; + if (!v) throw new Error(`Missing env: ${name}`); + return v.trim(); +} + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi }; +} + +async function waitReceipt(tronWeb, txId, tries = 10, delayMs = 1500) { + for (let i = 0; i < tries; i++) { + const r = await tronWeb.trx.getTransactionInfo(txId); + const status = r?.receipt?.result || r?.result; + if (status) return { status, receipt: r }; + await new Promise(res => setTimeout(res, delayMs)); + } + return { status: 'UNKNOWN', receipt: {} }; +} + +function normalizeHex32(h) { + if (!h) return h; + const s = h.toLowerCase(); + return s.startsWith('0x') ? s : `0x${s}`; +} + +async function main() { + const network = process.argv[2] || 'nile'; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + + const TX_PK = requireEnv('UPDATER_PRIVATE_KEY'); // same signer + const REGISTRY_BASE58 = requireEnv('WHITELIST_REGISTRY_ADDRESS'); + + const sigPath = path.join(__dirname, 'signature.json'); + if (!fs.existsSync(sigPath)) throw new Error('Missing signature.json'); + const sig = JSON.parse(fs.readFileSync(sigPath, 'utf8')); + const { root, nonce, signature, network: signedNetwork } = sig; + + if (signedNetwork && signedNetwork !== network) { + throw new Error(`Network mismatch: signature.json was created for ${signedNetwork}, you are submitting to ${network}. 
Re-sign or switch network.`); + } + + const expectedRoot = normalizeHex32(root); + + const tronWeb = new TronWeb({ + fullHost: NETWORKS[network].fullHost, + privateKey: TX_PK + }); + + const { abi: registryAbi } = loadArtifact('WhitelistRegistry'); + const registry = await tronWeb.contract(registryAbi, REGISTRY_BASE58); + + const preRoot = normalizeHex32(await registry.getCurrentMerkleRoot().call()); + const preNonceBN = await registry.getCurrentNonce().call(); + const preNonce = BigInt(preNonceBN.toString()); + + console.log('Submitting updateMerkleRoot...'); + const txId = await registry.updateMerkleRoot(root, nonce, signature).send({ feeLimit: FEE_LIMIT }); + console.log('updateMerkleRoot txID:', txId); + + const { status, receipt } = await waitReceipt(tronWeb, txId); + console.log('Receipt status:', status); + if (status !== 'SUCCESS') console.log('Receipt (full):', receipt); + + const postRoot = normalizeHex32(await registry.getCurrentMerkleRoot().call()); + const postNonceBN = await registry.getCurrentNonce().call(); + const postNonce = BigInt(postNonceBN.toString()); + + console.log('current root:', postRoot); + console.log('current nonce:', postNonce.toString()); + + const nonceOk = postNonce === preNonce + 1n; + const rootOk = postRoot === expectedRoot; + + if (nonceOk && rootOk) { + console.log('Success: merkle root updated and nonce incremented.'); + } else if (nonceOk && !rootOk) { + console.log('Partial success: nonce incremented, but root != expected. Check that WL_NEW_ROOT matches what was signed.'); + } else { + console.log('Update did not apply. 
Check nonce (must sign with current s_nonce), chainId, authorization, duplicate root, and pause state.'); + } +} + +main().catch((err) => { + console.error(err); + process.exit(1); +}); diff --git a/contracts/script/interactions/4_submitBatch.js b/contracts/script/interactions/4_submitBatch.js new file mode 100644 index 0000000..681196d --- /dev/null +++ b/contracts/script/interactions/4_submitBatch.js @@ -0,0 +1,72 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' }, +}; + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi }; +} + +function loadMerkleJson(filename = 'merkle_data_deploy.json') { + const p = path.join(__dirname, '../merkle/batch', filename); + return JSON.parse(fs.readFileSync(p, 'utf8')); +} + +async function main() { + try { + const network = process.argv[2] || 'nile'; + const pk = process.env.UPDATER_PRIVATE_KEY; + const settlementAddr = process.env.SETTLEMENT_ADDRESS; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + if (!settlementAddr) throw new Error('Set SETTLEMENT_ADDRESS in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + const { abi } = loadArtifact('Settlement'); + const settlement = await tronWeb.contract(abi, settlementAddr); + + const merkle = loadMerkleJson('merkle_data_deploy.json'); + let root = merkle.merkleRoot; + if (!root.startsWith('0x')) { + root = '0x' + root; + } + const txCount = merkle.txCount; + const batchSalt = merkle.batchSalt || 1; + + console.log('Submitting batch:'); + console.log(' merkleRoot:', root); + console.log(' txCount:', 
txCount); + console.log(' batchSalt:', batchSalt); + + const res = await settlement.submitBatch(root, txCount, batchSalt).send({ + feeLimit: 100_000_000, + shouldPollResponse: false, + callValue: 0, + }); + + console.log('Submitted. TX:', res); + + const batchId = await settlement.getBatchIdByRoot(root).call(); + console.log('Assigned batchId:', batchId.toString()); + + const batch = await settlement.getBatchById(batchId).call(); + console.log('UnlockTime:', batch.unlockTime.toString()); + + console.log('Wait until unlockTime, then execute transfers.'); + } catch (e) { + console.error('Submit failed:', e.message); + if (e.output && e.output.contractResult) { + console.error('Contract error:', e.output.contractResult); + } + process.exit(1); + } +} + +main(); \ No newline at end of file diff --git a/contracts/script/interactions/5_approveToken.js b/contracts/script/interactions/5_approveToken.js new file mode 100644 index 0000000..a4a8623 --- /dev/null +++ b/contracts/script/interactions/5_approveToken.js @@ -0,0 +1,64 @@ +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' }, +}; + +const FEE_LIMIT = 500_000_000; +const MAX_UINT256 = '0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff'; + +const TRC20_ABI = [ + { type: 'function', name: 'approve', inputs: [{ type: 'address', name: 'spender' }, { type: 'uint256', name: 'amount' }], outputs: [{ type: 'bool', name: '' }], stateMutability: 'nonpayable' }, + { type: 'function', name: 'allowance', inputs: [{ type: 'address', name: 'owner' }, { type: 'address', name: 'spender' }], outputs: [{ type: 'uint256', name: '' }], stateMutability: 'view' }, + { type: 'function', name: 'balanceOf', inputs: [{ type: 'address', name: 'owner' }], outputs: [{ type: 'uint256', name: '' }], stateMutability: 'view' }, +]; + +function parseArgs() { + const network = 
process.argv[2] || 'nile'; + const amount = process.argv[3] || MAX_UINT256; + return { network, amount }; +} + +async function main() { + try { + const { network, amount } = parseArgs(); + + const pk = process.env.UPDATER_PRIVATE_KEY; + const tokenAddr = process.env.TOKEN_ADDRESS; + const settlementAddr = process.env.SETTLEMENT_ADDRESS; + + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env (must be the sender account that owns tokens)'); + if (!tokenAddr) throw new Error('Set TOKEN_ADDRESS in .env (the token configured in Settlement)'); + if (!settlementAddr) throw new Error('Set SETTLEMENT_ADDRESS in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + const token = await tronWeb.contract(TRC20_ABI, tokenAddr); + + const ownerBase58 = tronWeb.address.fromPrivateKey(pk); // sender address (T-addr) + + console.log('Approving token allowance...'); + console.log(`Network: ${network}`); + console.log(`Token: ${tokenAddr}`); + console.log(`Owner: ${ownerBase58}`); + console.log(`Spender (Settlement): ${settlementAddr}`); + console.log(`Amount: ${amount}`); + + const tx = await token.approve(settlementAddr, String(amount)).send({ feeLimit: FEE_LIMIT }); + console.log('approve txID:', tx); + + const allowance = await token.allowance(ownerBase58, settlementAddr).call(); + const balance = await token.balanceOf(ownerBase58).call(); + + console.log('allowance(owner, Settlement):', allowance.toString()); + console.log('balanceOf(owner):', balance.toString()); + console.log('Done.'); + } catch (error) { + console.error(error); + process.exit(1); + } +} + +main(); diff --git a/contracts/script/interactions/6_executeTransfer.js b/contracts/script/interactions/6_executeTransfer.js new file mode 100644 index 0000000..dea82bb --- /dev/null +++ b/contracts/script/interactions/6_executeTransfer.js @@ -0,0 +1,177 @@ +const fs = require('fs'); +const path 
= require('path'); +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' }, +}; + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi }; +} + +function loadMerkleJson(filename = 'merkle_data_deploy.json') { + const p = path.join(__dirname, '../merkle/batch', filename); + return JSON.parse(fs.readFileSync(p, 'utf8')); +} + +function formatAmount(amount, decimals = 6) { + return (BigInt(amount) / BigInt(10 ** decimals)).toString(); +} + +async function main() { + try { + const network = process.argv[2] || 'nile'; + const txIndex = parseInt(process.argv[3]) || 0; + const pk = process.env.EXECUTOR_PRIVATE_KEY || process.env.UPDATER_PRIVATE_KEY; + const settlementAddr = process.env.SETTLEMENT_ADDRESS; + + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set EXECUTOR_PRIVATE_KEY or UPDATER_PRIVATE_KEY in .env'); + if (!settlementAddr) throw new Error('Set SETTLEMENT_ADDRESS in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + // Load Settlement contract + const settlementAbi = loadArtifact('Settlement').abi; + const settlement = tronWeb.contract(settlementAbi, settlementAddr); + + // Load Merkle data + const merkle = loadMerkleJson('merkle_data_deploy.json'); + + if (txIndex >= merkle.transactions.length) { + throw new Error(`Transaction index ${txIndex} out of range (max: ${merkle.transactions.length - 1})`); + } + + const tx = merkle.transactions[txIndex]; + + console.log(`\n Executing Transfer #${txIndex} (${tx.type})`); + console.log(` From: ${tx.tronAddresses.from}`); + console.log(` To: ${tx.tronAddresses.to}`); + console.log(` Amount: ${formatAmount(tx.txDataStruct[2])} tokens`); 
+ + // Look up the REAL batch ID from the contract using the Merkle root + const realBatchIdRaw = await settlement.getBatchIdByRoot(merkle.merkleRoot).call(); + const realBatchId = BigInt(realBatchIdRaw).toString(); + const batchId = realBatchId; + + // Prepare transaction data struct as ARRAY (not object) + // Order: [from, to, amount, nonce, timestamp, recipientCount, batchId, txType] + const txData = [ + tx.txDataStruct[0], // from + tx.txDataStruct[1], // to + tx.txDataStruct[2], // amount + tx.txDataStruct[3], // nonce + tx.txDataStruct[4], // timestamp + tx.txDataStruct[5], // recipientCount + batchId, // batchId (from contract) + tx.txDataStruct[7] // txType + ]; + + // Validate contract state + const tokenAddr = await settlement.getToken().call(); + const feeModuleAddr = await settlement.getFeeModule().call(); + const isPaused = await settlement.paused().call(); + + if (isPaused) { + throw new Error('Settlement contract is PAUSED!'); + } + + // Validate batch status + const batch = await settlement.getBatchById(batchId).call(); + const unlockTime = batch.unlockTime.toString(); + const currentTime = Math.floor(Date.now() / 1000); + + if (currentTime < parseInt(unlockTime)) { + const waitTime = parseInt(unlockTime) - currentTime; + throw new Error(`Batch is still LOCKED! 
Wait ${waitTime} seconds`); + } + + // Check if already executed + const isExecuted = await settlement.isExecutedTransfer(tx.txHash).call(); + if (isExecuted) { + throw new Error('Transfer has already been EXECUTED!'); + } + + // Validate token balances and allowances + const tokenAbi = [ + { "constant": true, "inputs": [], "name": "symbol", "outputs": [{ "name": "", "type": "string" }], "type": "function" }, + { "constant": true, "inputs": [], "name": "decimals", "outputs": [{ "name": "", "type": "uint8" }], "type": "function" }, + { "constant": true, "inputs": [{ "name": "who", "type": "address" }], "name": "balanceOf", "outputs": [{ "name": "", "type": "uint256" }], "type": "function" }, + { "constant": true, "inputs": [{ "name": "owner", "type": "address" }, { "name": "spender", "type": "address" }], "name": "allowance", "outputs": [{ "name": "", "type": "uint256" }], "type": "function" } + ]; + const token = tronWeb.contract(tokenAbi, tronWeb.address.fromHex(tokenAddr)); + + const tokenSymbol = await token.symbol().call(); + const tokenDecimals = await token.decimals().call(); + const senderBalance = await token.balanceOf(tx.evmAddresses.from).call(); + const allowance = await token.allowance(tx.evmAddresses.from, settlementAddr).call(); + + if (BigInt(senderBalance.toString()) < BigInt(tx.txDataStruct[2])) { + throw new Error(`Insufficient balance! Has ${senderBalance.toString()}, needs ${tx.txDataStruct[2]}`); + } + + if (BigInt(allowance.toString()) < BigInt(tx.txDataStruct[2])) { + throw new Error(`Insufficient allowance! 
Has ${allowance.toString()}, needs ${tx.txDataStruct[2]}`); + } + + // Calculate fees + + let feeModuleAbi; + try { + feeModuleAbi = loadArtifact('FeeModule').abi; + } catch (e) { + feeModuleAbi = [ + { "inputs": [{ "name": "sender", "type": "address" }, { "name": "txType", "type": "uint8" }, { "name": "volume", "type": "uint256" }, { "name": "recipientCount", "type": "uint256" }], "name": "calculateFee", "outputs": [{ "components": [{ "name": "fee", "type": "uint256" }, { "name": "txType", "type": "uint8" }], "name": "info", "type": "tuple" }], "stateMutability": "view", "type": "function" } + ]; + } + const feeModule = tronWeb.contract(feeModuleAbi, tronWeb.address.fromHex(feeModuleAddr)); + + let feeAmount = '0'; + try { + const feeInfo = await feeModule.calculateFee( + tx.evmAddresses.from, + txData[7], // txType + txData[2], // amount + txData[5] // recipientCount + ).call(); + + const feeAmountRaw = feeInfo.fee || feeInfo[0] || '0'; + feeAmount = BigInt(feeAmountRaw).toString(); + + const totalRequired = BigInt(tx.txDataStruct[2]) + BigInt(feeAmount); + if (BigInt(senderBalance.toString()) < totalRequired) { + throw new Error(`Insufficient balance for amount + fee!`); + } + } catch (e) { + // Continue without fee validation + } + + // Execute the transfer + const txProof = tx.proof; + const whitelistProof = ['0x2c27f532fe88e4b25c84c1d9e51fb97002414c2ed55927eeb815cfa1733c688e']; + + const res = await settlement.executeTransfer(txProof, whitelistProof, txData).send({ + feeLimit: 150_000_000, + shouldPollResponse: false, + callValue: 0, + }); + + // Success output + console.log(` ✓ Transfer executed successfully`); + console.log(` TX Hash: ${res}`); + if (feeAmount && feeAmount !== '0') { + console.log(` Fee: ${formatAmount(feeAmount, parseInt(tokenDecimals))} ${tokenSymbol}`); + } + + } catch (e) { + console.error(` ✗ Execution failed: ${e.message}`); + process.exit(1); + } +} + +main(); diff --git a/contracts/script/interactions/addUpdater.js 
b/contracts/script/interactions/addUpdater.js new file mode 100644 index 0000000..d4c7aee --- /dev/null +++ b/contracts/script/interactions/addUpdater.js @@ -0,0 +1,47 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +const FEE_LIMIT = 500_000_000; + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi, bytecode: j.bytecode?.object || j.bytecode }; +} + +async function main() { + try { + const network = process.argv[2] || 'nile'; + const pk = process.env.UPDATER_PRIVATE_KEY; + const registryAddress = process.env.WHITELIST_REGISTRY_ADDRESS; + const updater = process.argv[3] || process.env.WL_UPDATER_ADDRESS; + + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + if (!registryAddress) throw new Error('Set WHITELIST_REGISTRY_ADDRESS in .env'); + if (!updater) throw new Error('Provide updater address as arg3 or set WL_UPDATER_ADDRESS in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + const { abi: wlAbi } = loadArtifact('WhitelistRegistry'); + const wl = await tronWeb.contract(wlAbi, registryAddress); + + const zeroAddr = 'T9yD14Nj9j7xAB4dbGeiX9h8unkKHxuWwb'; + if (updater === zeroAddr) throw new Error('Updater cannot be zero address'); + + const tx = await wl.addAuthorizedUpdater(updater).send({ feeLimit: FEE_LIMIT }); + console.log('Authorized updater added:', tx); + } catch (error) { + console.error(error); + process.exit(1); + } +} + +main(); diff --git a/contracts/script/interactions/approveAggregator.js b/contracts/script/interactions/approveAggregator.js new file mode 100644
index 0000000..feccaaa --- /dev/null +++ b/contracts/script/interactions/approveAggregator.js @@ -0,0 +1,41 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require('tronweb'); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +const FEE_LIMIT = 500_000_000; + +function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi, bytecode: j.bytecode?.object || j.bytecode }; +} + +async function main() { + try { + const network = process.argv[2] || 'nile'; + const pk = process.env.UPDATER_PRIVATE_KEY; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + const { abi: settlementAbi } = loadArtifact('Settlement'); + + const settlement = await tronWeb.contract(settlementAbi, process.env.SETTLEMENT_ADDRESS); + + let tx = await settlement.approveAggregator(process.env.AGGREGATOR_ADDRESS).send({ feeLimit: FEE_LIMIT }); + console.log('Aggregator approved:', tx); + + } catch (error) { + console.error(error); + process.exit(1); + } +} + +main(); \ No newline at end of file diff --git a/contracts/script/interactions/fullSuccessScenario.js b/contracts/script/interactions/fullSuccessScenario.js new file mode 100644 index 0000000..bc5a146 --- /dev/null +++ b/contracts/script/interactions/fullSuccessScenario.js @@ -0,0 +1,165 @@ +const { execSync } = require('child_process'); +const path = require('path'); +const fs = require('fs'); + +const SCRIPTS_DIR = path.join(__dirname); +const PROJECT_ROOT = path.join(__dirname, '..', '..'); +const COLOR_GREEN = '\x1b[32m'; +const COLOR_BLUE = '\x1b[34m'; +const COLOR_YELLOW = '\x1b[33m'; +const 
COLOR_RED = '\x1b[31m'; +const COLOR_RESET = '\x1b[0m'; + +function log(message, color = COLOR_RESET) { + console.log(`${color}${message}${COLOR_RESET}`); +} + +function runScript(scriptName, args = []) { + const scriptPath = path.join(SCRIPTS_DIR, scriptName); + const command = `node "${scriptPath}" ${args.join(' ')}`; + + log(`\n${'='.repeat(80)}`, COLOR_BLUE); + log(`Running: ${scriptName} ${args.join(' ')}`, COLOR_BLUE); + log('='.repeat(80), COLOR_BLUE); + + try { + execSync(command, { stdio: 'inherit', cwd: PROJECT_ROOT }); + log(`✅ ${scriptName} completed successfully\n`, COLOR_GREEN); + return true; + } catch (error) { + log(`❌ ${scriptName} failed!`, COLOR_RED); + throw error; + } +} + +async function waitForUnlockTime(network) { + const { TronWeb } = require('tronweb'); + require('dotenv').config({ quiet: true }); + + const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } + }; + + const pk = process.env.UPDATER_PRIVATE_KEY; + const settlementAddress = process.env.SETTLEMENT_ADDRESS; + + if (!pk || !settlementAddress) { + throw new Error('Missing UPDATER_PRIVATE_KEY or SETTLEMENT_ADDRESS in .env'); + } + + const tronWeb = new TronWeb({ + fullHost: NETWORKS[network].fullHost, + privateKey: pk + }); + + function loadArtifact(name) { + const p = path.join(__dirname, '../../out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi }; + } + + const { abi } = loadArtifact('Settlement'); + const settlement = await tronWeb.contract(abi, settlementAddress); + + // Load batch data to get batchId + const batchFilePath = path.join(__dirname, '../merkle/batch/merkle_data_deploy.json'); + const batchData = JSON.parse(fs.readFileSync(batchFilePath, 'utf8')); + const batchId = batchData.batchId || 1; + + const batch = await settlement.getBatchById(batchId).call(); + const unlockTime = parseInt(batch.unlockTime.toString()); + const currentTime = 
Math.floor(Date.now() / 1000); + const waitSeconds = unlockTime - currentTime; + + if (waitSeconds > 0) { + log(`\n⏳ Batch is locked. Waiting ${waitSeconds} seconds for unlock time...`, COLOR_YELLOW); + log(` Current time: ${currentTime}`, COLOR_YELLOW); + log(` Unlock time: ${unlockTime}`, COLOR_YELLOW); + + // Wait with countdown + for (let i = waitSeconds; i > 0; i--) { + if (i % 10 === 0 || i <= 5) { + process.stdout.write(`\r ⏳ ${i} seconds remaining...`); + } + await new Promise(resolve => setTimeout(resolve, 1000)); + } + process.stdout.write('\r'); + log('\n✅ Batch is now unlocked!', COLOR_GREEN); + } else { + log('\n✅ Batch is already unlocked!', COLOR_GREEN); + } +} + +async function main() { + const network = process.argv[2] || 'nile'; + + if (!['nile', 'mainnet'].includes(network)) { + throw new Error('Network must be nile or mainnet'); + } + + log('\n' + '='.repeat(80), COLOR_BLUE); + log('🚀 FULL SUCCESS SCENARIO - 3 TRANSFERS', COLOR_BLUE); + log('='.repeat(80), COLOR_BLUE); + log(`Network: ${network}\n`, COLOR_BLUE); + + try { + // Step 1: Sign the whitelist root + log('📝 STEP 1/6: Sign Whitelist Root', COLOR_YELLOW); + runScript('2_signRoot.js', [network]); + + // Step 2: Update the whitelist root on-chain + log('📤 STEP 2/6: Update Whitelist Root On-Chain', COLOR_YELLOW); + runScript('3_updateRoot.js', [network]); + + // Step 3: Submit the batch + log('📦 STEP 3/6: Submit Batch', COLOR_YELLOW); + runScript('4_submitBatch.js', [network]); + + // Step 4: Wait for unlock time + log('⏰ STEP 4/6: Wait for Batch Unlock Time', COLOR_YELLOW); + await waitForUnlockTime(network); + + // Step 5: Approve tokens + log('✅ STEP 5/6: Approve Tokens for Settlement Contract', COLOR_YELLOW); + runScript('5_approveToken.js', [network]); + + // Step 6: Execute all 3 transfers + log('💸 STEP 6/6: Execute All 3 Transfers', COLOR_YELLOW); + + for (let i = 0; i < 3; i++) { + log(`\n Transfer ${i + 1}/3:`, COLOR_YELLOW); + runScript('6_executeTransfer.js', [network,
i.toString()]); + } + + // Wait for transactions to be processed on-chain + log('\n⏳ Waiting for transactions to be processed...', COLOR_YELLOW); + await new Promise(resolve => setTimeout(resolve, 2000)); + + // Success summary + log('\n' + '='.repeat(80), COLOR_GREEN); + log('🎉 SUCCESS! All steps completed successfully!', COLOR_GREEN); + log('='.repeat(80), COLOR_GREEN); + log('\nSummary:', COLOR_GREEN); + log(' ✅ Whitelist root signed and updated', COLOR_GREEN); + log(' ✅ Batch submitted and unlocked', COLOR_GREEN); + log(' ✅ Tokens approved', COLOR_GREEN); + log(' ✅ Transfer #0 executed (DELAYED)', COLOR_GREEN); + log(' ✅ Transfer #1 executed (INSTANT)', COLOR_GREEN); + log(' ✅ Transfer #2 executed (BATCHED)', COLOR_GREEN); + log('\n' + '='.repeat(80), COLOR_GREEN); + + } catch (error) { + log('\n' + '='.repeat(80), COLOR_RED); + log('❌ SCENARIO FAILED!', COLOR_RED); + log('='.repeat(80), COLOR_RED); + log(`\nError: ${error.message}`, COLOR_RED); + process.exit(1); + } +} + +main().catch((error) => { + console.error(error); + process.exit(1); +}); diff --git a/contracts/script/interactions/signature.json b/contracts/script/interactions/signature.json new file mode 100644 index 0000000..d25237c --- /dev/null +++ b/contracts/script/interactions/signature.json @@ -0,0 +1,7 @@ +{ + "root": "0x02012517de2680f90c5eb1b6c64e04e21424609e331954b45e202ace05e2938b", + "nonce": "0", + "chainId": "3448148188", + "registry0x": "0xA07Bae6d66eff93594e7540F27065d82CCBB1944", + "signature": "0x7018478d3192f1560c15ae4897adbc3f89d036aed1e0ce6fec48dd55667a0f49010e5897c74c708f1cf7e3316db3fbc0399853a6910d40b775fd459f38ca7add1b" +} \ No newline at end of file diff --git a/contracts/script/merkle/batch/generateBatchRoot.py b/contracts/script/merkle/batch/generateBatchRoot.py new file mode 100644 index 0000000..cac5cef --- /dev/null +++ b/contracts/script/merkle/batch/generateBatchRoot.py @@ -0,0 +1,224 @@ +from web3 import Web3 +from enum
import IntEnum +from dataclasses import dataclass +from typing import List, Dict, Any +import json +import time + +class TxType(IntEnum): + DELAYED = 0 + INSTANT = 1 + BATCHED = 2 + FREE_TIER = 3 + +@dataclass +class TransferData: + from_address: str + to_address: str + amount: int + nonce: int + timestamp: int + recipient_count: int + batch_id: int # keep for off-chain grouping and on-chain params, but not hashed + tx_type: int + batch_salt: int # uint64 salt used by backend to build merkle root + +def calculate_tx_hash(tx: TransferData) -> bytes: + """ + Match Settlement._calculateTxHash: + keccak256(abi.encodePacked( + from, to, amount, nonce(uint64), timestamp(uint48), recipientCount(uint32), txType(uint8), batchSalt(uint64) + )) + IMPORTANT: batchId is NOT included. + """ + return Web3.solidity_keccak( + ['address', 'address', 'uint256', 'uint64', 'uint48', 'uint32', 'uint8', 'uint64'], + [ + Web3.to_checksum_address(tx.from_address), + Web3.to_checksum_address(tx.to_address), + tx.amount, + tx.nonce, # uint64 + tx.timestamp, # uint48 + tx.recipient_count, # uint32 + tx.tx_type, # uint8 + tx.batch_salt # uint64 + ] + ) + +class MerkleTree: + def __init__(self, leaves: List[bytes]): + self.leaves = leaves + self.tree = self._build_tree(leaves) + + def _build_tree(self, leaves: List[bytes]) -> List[List[bytes]]: + if not leaves: + return [[]] + tree = [leaves] + current = leaves + while len(current) > 1: + nxt = [] + for i in range(0, len(current), 2): + if i + 1 < len(current): + left = current[i] + right = current[i + 1] + # sorted pair hashing to match OpenZeppelin's MerkleProof standard pattern + if left > right: + left, right = right, left + combined = Web3.solidity_keccak(['bytes32', 'bytes32'], [left, right]) + else: + combined = current[i] + nxt.append(combined) + tree.append(nxt) + current = nxt + return tree + + def get_root(self) -> bytes: + return self.tree[-1][0] if self.tree and self.tree[-1] else b'\x00' * 32 + + def get_proof(self, index: int) -> 
List[bytes]: + proof = [] + idx = index + for level in range(len(self.tree) - 1): + curr = self.tree[level] + if idx % 2 == 0: + if idx + 1 < len(curr): + proof.append(curr[idx + 1]) + else: + proof.append(curr[idx - 1]) + idx //= 2 + return proof + + def verify_proof(self, leaf: bytes, proof: List[bytes], root: bytes) -> bool: + computed = leaf + for p in proof: + if computed <= p: + computed = Web3.solidity_keccak(['bytes32', 'bytes32'], [computed, p]) + else: + computed = Web3.solidity_keccak(['bytes32', 'bytes32'], [p, computed]) + return computed == root + +def generate_test_transactions() -> List[TransferData]: + SENDER = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266" + RECIPIENT_1 = "0x70997970C51812dc3A010C7d01b50e0d17dc79C8" + RECIPIENT_2 = "0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC" + RECIPIENT_3 = "0x90F79bf6EB2c4f870365E785982E1f101E93b906" + RANDOM_ADDR = "0x1234567890123456789012345678901234567890" + ZERO_ADDR = "0x0000000000000000000000000000000000000000" + + BATCH_ID = 1 + BATCH_SALT = 1 # Salt used by backend to build merkle root + base_timestamp = int(time.time()) + ONE_TRX = 1_000_000 + + txs: List[TransferData] = [] + + for i in range(10): + txs.append(TransferData( + from_address=SENDER, + to_address=RECIPIENT_1, + amount=100 * ONE_TRX, + nonce=i + 1, + timestamp=base_timestamp + i, + recipient_count=1, + batch_id=BATCH_ID, + tx_type=TxType.DELAYED, + batch_salt=BATCH_SALT + )) + + txs.append(TransferData(SENDER, RECIPIENT_1, 50 * ONE_TRX, 11, base_timestamp + 10, 1, BATCH_ID, TxType.FREE_TIER, BATCH_SALT)) + txs.append(TransferData(SENDER, RECIPIENT_1, 200 * ONE_TRX, 12, base_timestamp + 11, 1, BATCH_ID, TxType.INSTANT, BATCH_SALT)) + txs.append(TransferData(SENDER, RECIPIENT_1, 300 * ONE_TRX, 13, base_timestamp + 12, 3, BATCH_ID, TxType.INSTANT, BATCH_SALT)) + txs.append(TransferData(SENDER, RECIPIENT_2, 500 * ONE_TRX, 14, base_timestamp + 13, 5, BATCH_ID, TxType.BATCHED, BATCH_SALT)) + txs.append(TransferData(RANDOM_ADDR, RECIPIENT_3, 150 * 
ONE_TRX, 15, base_timestamp + 14, 3, BATCH_ID, TxType.BATCHED, BATCH_SALT)) + txs.append(TransferData(SENDER, RECIPIENT_1, 100 * ONE_TRX, 16, base_timestamp + 15, 1, BATCH_ID, TxType.BATCHED, BATCH_SALT)) + txs.append(TransferData(SENDER, RECIPIENT_1, 1_000_000_000 * ONE_TRX, 17, base_timestamp + 16, 1, BATCH_ID, TxType.INSTANT, BATCH_SALT)) + txs.append(TransferData(ZERO_ADDR, RECIPIENT_1, 100 * ONE_TRX, 18, base_timestamp + 17, 1, BATCH_ID, TxType.DELAYED, BATCH_SALT)) + txs.append(TransferData(SENDER, ZERO_ADDR, 100 * ONE_TRX, 19, base_timestamp + 18, 1, BATCH_ID, TxType.DELAYED, BATCH_SALT)) + + return txs + +def generate_merkle_data(transactions: List[TransferData]) -> Dict[str, Any]: + leaves = [calculate_tx_hash(tx) for tx in transactions] + tree = MerkleTree(leaves) + root = tree.get_root() + + proofs_data = [] + for i, tx in enumerate(transactions): + leaf = leaves[i] + proof = tree.get_proof(i) + is_valid = tree.verify_proof(leaf, proof, root) + proofs_data.append({ + 'index': i, + 'transaction': tx, + 'tx_hash': '0x' + leaf.hex(), + 'proof': ['0x' + p.hex() for p in proof], + 'valid': is_valid + }) + + return { + 'merkle_root': '0x' + root.hex(), + 'tx_count': len(transactions), + 'transactions': transactions, + 'leaves': ['0x' + l.hex() for l in leaves], + 'proofs_data': proofs_data + } + +def print_results(data: Dict[str, Any]): + print("=" * 80) + print("MERKLE ROOT FOR BATCH") + print("=" * 80) + print(f"Merkle Root: {data['merkle_root']}") + print(f"Transaction Count: {data['tx_count']}") + print() + print("=" * 80) + print("TRANSACTIONS & PROOFS") + print("=" * 80) + + for item in data['proofs_data']: + tx = item['transaction'] + print(f"\n--- Transaction {item['index']} ---") + print(f"Type: {TxType(tx.tx_type).name}") + print(f"From: {tx.from_address}") + print(f"To: {tx.to_address}") + print(f"Amount: {tx.amount}") + print(f"Nonce: {tx.nonce}") + print(f"Timestamp: {tx.timestamp}") + print(f"Recipient Count: {tx.recipient_count}") + 
print(f"Batch ID (not hashed): {tx.batch_id}") + print(f"Batch Salt: {tx.batch_salt}") + print(f"TX Hash: {item['tx_hash']}") + print(f"Proof Valid: {item['valid']}") + print(f"Proof: [{', '.join(item['proof'])}]") + +def save_json(data: Dict[str, Any], filename: str = 'merkle_data.json'): + json_data = { + 'merkleRoot': data['merkle_root'], + 'txCount': data['tx_count'], + 'transactions': [] + } + for item in data['proofs_data']: + tx = item['transaction'] + json_data['transactions'].append({ + 'index': item['index'], + 'type': TxType(tx.tx_type).name, + 'from': tx.from_address, + 'to': tx.to_address, + 'amount': str(tx.amount), + 'nonce': tx.nonce, + 'timestamp': tx.timestamp, + 'recipientCount': tx.recipient_count, + 'batchId': tx.batch_id, # present for contract calls, not part of tx hash + 'batchSalt': tx.batch_salt, # used in tx hash for merkle root generation + 'txHash': item['tx_hash'], + 'proof': item['proof'], + 'valid': item['valid'] + }) + with open(filename, 'w') as f: + json.dump(json_data, f, indent=2) + print(f"\nData saved to: {filename}") + +if __name__ == "__main__": + transactions = generate_test_transactions() + merkle_data = generate_merkle_data(transactions) + print_results(merkle_data) + save_json(merkle_data) \ No newline at end of file diff --git a/contracts/script/merkle/batch/generateBatchRootDeploy.py b/contracts/script/merkle/batch/generateBatchRootDeploy.py new file mode 100644 index 0000000..ec5e2a6 --- /dev/null +++ b/contracts/script/merkle/batch/generateBatchRootDeploy.py @@ -0,0 +1,256 @@ +import time +import json +import os +import base58 +from web3 import Web3 +from typing import List, Dict, Any +from dataclasses import dataclass +from enum import IntEnum + +# --- CONFIGURATION --- + +TRON_SENDER = "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M" +TRON_RECIPIENT = "TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU" + +BATCH_ID = 40 +BATCH_SALT = 1 # Salt used by backend to build merkle root +TOKEN_DECIMALS = 6 # e.g.
TRC20 USDT has 6 decimals + +# --- TYPES --- + +class TxType(IntEnum): + DELAYED = 0 + INSTANT = 1 + BATCHED = 2 + FREE_TIER = 3 + +@dataclass +class TransferData: + # EVM 0x addresses (Tron Base58 converted by stripping 0x41) + from_address: str + to_address: str + + # Original Tron Base58 for logs/UI only + original_tron_from: str + original_tron_to: str + + amount: int # uint256 + nonce: int # uint64 + timestamp: int # uint48 + recipient_count: int # uint32 + batch_id: int # NOT hashed + tx_type: int # uint8 + batch_salt: int # uint64 salt used by backend to build merkle root + +# --- HELPERS --- + +def tron_to_evm_address(tron_addr: str) -> str: + """ + Convert Tron Base58Check addr (T...) to EVM 0x address by stripping leading 0x41. + Returns checksummed 0x address. + """ + decoded = base58.b58decode_check(tron_addr) + if len(decoded) < 21 or decoded[0] != 0x41: + raise ValueError(f"Invalid Tron address: {tron_addr}") + evm_hex = decoded[1:].hex() + return Web3.to_checksum_address("0x" + evm_hex) + +def ensure_uint_bounds(nonce: int, timestamp: int, recipient_count: int) -> None: + if not (0 <= nonce <= (2**64 - 1)): + raise ValueError("nonce exceeds uint64") + if not (0 <= timestamp <= (2**48 - 1)): + raise ValueError("timestamp exceeds uint48") + if not (0 <= recipient_count <= (2**32 - 1)): + raise ValueError("recipient_count exceeds uint32") + +# --- MERKLE LOGIC --- + +class MerkleTree: + def __init__(self, leaves: List[bytes]): + self.leaves = leaves + self.tree = self._build_tree(leaves) + + def _build_tree(self, leaves: List[bytes]) -> List[List[bytes]]: + if not leaves: + return [[]] + tree = [leaves] + current_level = leaves + while len(current_level) > 1: + next_level: List[bytes] = [] + for i in range(0, len(current_level), 2): + if i + 1 < len(current_level): + left = current_level[i] + right = current_level[i + 1] + # Sorted-pair hashing for OZ-compatible proofs + if left > right: + left, right = right, left + combined = 
Web3.solidity_keccak(['bytes32', 'bytes32'], [left, right]) + else: + # Promote odd leaf + combined = current_level[i] + next_level.append(combined) + tree.append(next_level) + current_level = next_level + return tree + + def get_root(self) -> bytes: + return self.tree[-1][0] if self.tree and self.tree[-1] else b'\x00' * 32 + + def get_proof(self, index: int) -> List[bytes]: + proof: List[bytes] = [] + idx = index + for level in range(len(self.tree) - 1): + curr = self.tree[level] + if idx % 2 == 0: + if idx + 1 < len(curr): + proof.append(curr[idx + 1]) + else: + proof.append(curr[idx - 1]) + idx //= 2 + return proof + +# --- HASHING (matches Settlement._calculateTxHash) --- + +def calculate_tx_hash(tx: TransferData) -> bytes: + """ + keccak256(abi.encodePacked( + from(address), to(address), + amount(uint256), nonce(uint64), timestamp(uint48), recipientCount(uint32), + txType(uint8), batchSalt(uint64) + )) + batchId is EXCLUDED from the hash. + """ + ensure_uint_bounds(tx.nonce, tx.timestamp, tx.recipient_count) + return Web3.solidity_keccak( + ['address', 'address', 'uint256', 'uint64', 'uint48', 'uint32', 'uint8', 'uint64'], + [ + Web3.to_checksum_address(tx.from_address), + Web3.to_checksum_address(tx.to_address), + tx.amount, + tx.nonce, + tx.timestamp, + tx.recipient_count, + tx.tx_type, + tx.batch_salt + ] + ) + +# --- GENERATION --- + +def generate_batch() -> Dict[str, Any]: + sender_evm = tron_to_evm_address(TRON_SENDER) + recipient_evm = tron_to_evm_address(TRON_RECIPIENT) + + base_ts = int(time.time()) + one_token = 10 ** TOKEN_DECIMALS + + txs: List[TransferData] = [] + + # Two transfers to ensure non-empty proofs + txs.append(TransferData( + from_address=sender_evm, + to_address=recipient_evm, + original_tron_from=TRON_SENDER, + original_tron_to=TRON_RECIPIENT, + amount=10 * one_token, + nonce=1, + timestamp=base_ts, + recipient_count=1, + batch_id=BATCH_ID, + tx_type=TxType.DELAYED, + batch_salt=BATCH_SALT + )) + + txs.append(TransferData( + 
from_address=sender_evm, + to_address=recipient_evm, + original_tron_from=TRON_SENDER, + original_tron_to=TRON_RECIPIENT, + amount=20 * one_token, + nonce=2, + timestamp=base_ts + 1, + recipient_count=1, + batch_id=BATCH_ID, + tx_type=TxType.INSTANT, + batch_salt=BATCH_SALT + )) + + txs.append(TransferData( + from_address=sender_evm, + to_address=recipient_evm, + original_tron_from=TRON_SENDER, + original_tron_to=TRON_RECIPIENT, + amount=30 * one_token, + nonce=3, + timestamp=base_ts + 2, + recipient_count=3, + batch_id=BATCH_ID, + tx_type=TxType.BATCHED, + batch_salt=BATCH_SALT + )) + + leaves = [calculate_tx_hash(tx) for tx in txs] + tree = MerkleTree(leaves) + root = tree.get_root() + + output: Dict[str, Any] = { + "merkleRoot": "0x" + root.hex(), + "txCount": len(txs), + "batchId": BATCH_ID, + "batchSalt": BATCH_SALT, + "transactions": [] + } + + print("\n" + "=" * 60) + print(f"MERKLE ROOT: 0x{root.hex()}") + print("=" * 60) + + for i, tx in enumerate(txs): + leaf = leaves[i] + proof = tree.get_proof(i) + proof_hex = ["0x" + p.hex() for p in proof] + + tx_data_struct = [ + tx.from_address, # address (EVM 0x) + tx.to_address, # address (EVM 0x) + str(tx.amount), # uint256 (string for JSON safety) + tx.nonce, # uint64 + tx.timestamp, # uint48 + tx.recipient_count, # uint32 + tx.batch_id, # uint64 (NOT hashed) + tx.tx_type, # uint8 + tx.batch_salt # uint64 (used in hash) + ] + + entry = { + "index": i, + "type": TxType(tx.tx_type).name, + "txHash": "0x" + leaf.hex(), + "txDataStruct": tx_data_struct, + "evmAddresses": {"from": tx.from_address, "to": tx.to_address}, + "tronAddresses": {"from": tx.original_tron_from, "to": tx.original_tron_to}, + "proof": proof_hex + } + output["transactions"].append(entry) + + print(f"TX {i} ({TxType(tx.tx_type).name})") + print(f" From (Tron/EVM): {tx.original_tron_from} / {tx.from_address}") + print(f" To (Tron/EVM): {tx.original_tron_to} / {tx.to_address}") + print(f" Amount: {tx.amount}") + print(f" Nonce: {tx.nonce} 
Timestamp: {tx.timestamp} RecipientCount: {tx.recipient_count}") + print(f" BatchSalt: {tx.batch_salt}") + print(f" Hash: 0x{leaf.hex()}") + print(f" Proof: {proof_hex}") + + return output + +def save_json(output: Dict[str, Any], filename: str = "merkle_data_deploy.json") -> None: + script_dir = os.path.dirname(os.path.abspath(__file__)) + path = os.path.join(script_dir, filename) + with open(path, "w") as f: + json.dump(output, f, indent=2) + print(f"\nSaved: {path}") + +if __name__ == "__main__": + data = generate_batch() + save_json(data) diff --git a/contracts/script/merkle/batch/merkle_data.json b/contracts/script/merkle/batch/merkle_data.json new file mode 100644 index 0000000..1eaf19d --- /dev/null +++ b/contracts/script/merkle/batch/merkle_data.json @@ -0,0 +1,398 @@ +{ + "merkleRoot": "0x3a0c41421185f03cda4c7149849489222399b27838a28ef7931c459b142b0877", + "txCount": 19, + "transactions": [ + { + "index": 0, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 1, + "timestamp": 1766392469, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0xa230e58e7054695cc88f543500b68c91e2bf2460ea4f50ac925640251a0e9c45", + "proof": [ + "0xaa343f658768354a32adde8928537360916413cc47445617177a82024c422cf7", + "0xd5ec552c4f23fb2dbf907bf031e9e0d2e7c4a81c756071675b4a0625ad37f04c", + "0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 1, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 2, + "timestamp": 1766392470, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": 
"0xaa343f658768354a32adde8928537360916413cc47445617177a82024c422cf7", + "proof": [ + "0xa230e58e7054695cc88f543500b68c91e2bf2460ea4f50ac925640251a0e9c45", + "0xd5ec552c4f23fb2dbf907bf031e9e0d2e7c4a81c756071675b4a0625ad37f04c", + "0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 2, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 3, + "timestamp": 1766392471, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x520268ed8b0dc9d3da345cf990135351322b673e52f2f58656bce527e24ecb4f", + "proof": [ + "0x8d2fc6d794ed4c205b3516a6c58f6f87eac5b6324dc4e6f88d46ca0cd622e523", + "0x33fe1a228e60193eb5abdba6048af955b49d849dc59ab5766873907ad10ad7f6", + "0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 3, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 4, + "timestamp": 1766392472, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x8d2fc6d794ed4c205b3516a6c58f6f87eac5b6324dc4e6f88d46ca0cd622e523", + "proof": [ + "0x520268ed8b0dc9d3da345cf990135351322b673e52f2f58656bce527e24ecb4f", + "0x33fe1a228e60193eb5abdba6048af955b49d849dc59ab5766873907ad10ad7f6", + "0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 4, + "type": 
"DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 5, + "timestamp": 1766392473, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0xe19c6d48248dbca0b142cf85dd63eee8608c91acfefebbdc15b0efd3708d53ff", + "proof": [ + "0x31d36b489cdd2b5dbeb0e095f19a3db47bf729f1dc646977c3347a530dfd638f", + "0xee6a9ff5e1399fa946da5482a6f5d659903e7309c8747f7d22f554d54fc097e2", + "0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 5, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 6, + "timestamp": 1766392474, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x31d36b489cdd2b5dbeb0e095f19a3db47bf729f1dc646977c3347a530dfd638f", + "proof": [ + "0xe19c6d48248dbca0b142cf85dd63eee8608c91acfefebbdc15b0efd3708d53ff", + "0xee6a9ff5e1399fa946da5482a6f5d659903e7309c8747f7d22f554d54fc097e2", + "0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 6, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 7, + "timestamp": 1766392475, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x3d5be0058769e8f9e9d472a4f48190b04ccfa6e7aa2fa412653ed22b7649c75c", + "proof": [ + "0x9f8a7e4c0b8338ac03dd39fb2d6fa8082cf1d32badb2fa29fa959f626793e191", + "0xb3b6b7cdecd8ccdf8ba346985f530fbc878292d55f0ac9cda8689b4744570c7f", + 
"0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 7, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 8, + "timestamp": 1766392476, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x9f8a7e4c0b8338ac03dd39fb2d6fa8082cf1d32badb2fa29fa959f626793e191", + "proof": [ + "0x3d5be0058769e8f9e9d472a4f48190b04ccfa6e7aa2fa412653ed22b7649c75c", + "0xb3b6b7cdecd8ccdf8ba346985f530fbc878292d55f0ac9cda8689b4744570c7f", + "0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398", + "0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 8, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 9, + "timestamp": 1766392477, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0xc8570da104d31b70fa27d5a6db45ef393d9150795802006294ee05aaaf99ff4c", + "proof": [ + "0x5215e0e0db139e499cdf1c49c9eb5b1ec1de1eb4e527b8a5c9566128f5c20f05", + "0x4c263c99b9e277450d7f2ef634c883775456dd4dffb96f46ae22fffa0ba532b1", + "0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 9, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 10, + "timestamp": 1766392478, + "recipientCount": 1, + "batchId": 1, + 
"batchSalt": 1, + "txHash": "0x5215e0e0db139e499cdf1c49c9eb5b1ec1de1eb4e527b8a5c9566128f5c20f05", + "proof": [ + "0xc8570da104d31b70fa27d5a6db45ef393d9150795802006294ee05aaaf99ff4c", + "0x4c263c99b9e277450d7f2ef634c883775456dd4dffb96f46ae22fffa0ba532b1", + "0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 10, + "type": "FREE_TIER", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "50000000", + "nonce": 11, + "timestamp": 1766392479, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x462abd36dedb15579a43289e45666716fa82e276865699156f10f01fce09bea4", + "proof": [ + "0x24f873b47ae80c05f69983aace4819e4a005ab024673cc39313788f7a17d305b", + "0x591ff5b157a63b6c10acb4331c80fe4c013f946c177848663e17c995d065ab6c", + "0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 11, + "type": "INSTANT", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "200000000", + "nonce": 12, + "timestamp": 1766392480, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x24f873b47ae80c05f69983aace4819e4a005ab024673cc39313788f7a17d305b", + "proof": [ + "0x462abd36dedb15579a43289e45666716fa82e276865699156f10f01fce09bea4", + "0x591ff5b157a63b6c10acb4331c80fe4c013f946c177848663e17c995d065ab6c", + "0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true 
+ }, + { + "index": 12, + "type": "INSTANT", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "300000000", + "nonce": 13, + "timestamp": 1766392481, + "recipientCount": 3, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x7401529a3f64f3280807f8c1c25476cb42f45a8d53eb4a8ce0489ca439530b43", + "proof": [ + "0x560083d61bbad176074a8ff65fa7a2b20cbc6080b8176f55d47bdd63503e7b49", + "0x30265017c12a0d5f16aa7c34dea1bf8fc8554bbdc356287671971c4a4da6b460", + "0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 13, + "type": "BATCHED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC", + "amount": "500000000", + "nonce": 14, + "timestamp": 1766392482, + "recipientCount": 5, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x560083d61bbad176074a8ff65fa7a2b20cbc6080b8176f55d47bdd63503e7b49", + "proof": [ + "0x7401529a3f64f3280807f8c1c25476cb42f45a8d53eb4a8ce0489ca439530b43", + "0x30265017c12a0d5f16aa7c34dea1bf8fc8554bbdc356287671971c4a4da6b460", + "0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 14, + "type": "BATCHED", + "from": "0x1234567890123456789012345678901234567890", + "to": "0x90F79bf6EB2c4f870365E785982E1f101E93b906", + "amount": "150000000", + "nonce": 15, + "timestamp": 1766392483, + "recipientCount": 3, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x864e732fa92158038c741b630577f00889c249ae8441ddaff05c2e185153afc6", + "proof": [ + "0xac933fbe94da3ad81a1b622b7e9e83dce453dc6588b908ec4e39edc0afb91dc6", + 
"0x287d3ddd35e0c598f46919e410628799adaf9b4d25fcd921e17b0077e12bde90", + "0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 15, + "type": "BATCHED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 16, + "timestamp": 1766392484, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0xac933fbe94da3ad81a1b622b7e9e83dce453dc6588b908ec4e39edc0afb91dc6", + "proof": [ + "0x864e732fa92158038c741b630577f00889c249ae8441ddaff05c2e185153afc6", + "0x287d3ddd35e0c598f46919e410628799adaf9b4d25fcd921e17b0077e12bde90", + "0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8", + "0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975", + "0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc" + ], + "valid": true + }, + { + "index": 16, + "type": "INSTANT", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "1000000000000000", + "nonce": 17, + "timestamp": 1766392485, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0xa477e0f7047a672583f1234bbd1485e57bda886eeea3ea1c2f76e242083c7d85", + "proof": [ + "0xed3ccf36419833c97271315356d4320595eee8cc5119ee118bcdf1e2775bc52f", + "0x587b51946820bae3febac952ae82d0a10a2c1991bc887caf0c9831de2adb24bb", + "0xb6576b13bfbf97dae871a1b20d938a53329e10800ce093213abb811e236b7f6c" + ], + "valid": true + }, + { + "index": 17, + "type": "DELAYED", + "from": "0x0000000000000000000000000000000000000000", + "to": "0x70997970C51812dc3A010C7d01b50e0d17dc79C8", + "amount": "100000000", + "nonce": 18, + "timestamp": 1766392486, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": 
"0xed3ccf36419833c97271315356d4320595eee8cc5119ee118bcdf1e2775bc52f", + "proof": [ + "0xa477e0f7047a672583f1234bbd1485e57bda886eeea3ea1c2f76e242083c7d85", + "0x587b51946820bae3febac952ae82d0a10a2c1991bc887caf0c9831de2adb24bb", + "0xb6576b13bfbf97dae871a1b20d938a53329e10800ce093213abb811e236b7f6c" + ], + "valid": true + }, + { + "index": 18, + "type": "DELAYED", + "from": "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266", + "to": "0x0000000000000000000000000000000000000000", + "amount": "100000000", + "nonce": 19, + "timestamp": 1766392487, + "recipientCount": 1, + "batchId": 1, + "batchSalt": 1, + "txHash": "0x587b51946820bae3febac952ae82d0a10a2c1991bc887caf0c9831de2adb24bb", + "proof": [ + "0xc86dbe3ece3fe96d98622181ce07212d536e3217636b501a87e8a0e98b001a84", + "0xb6576b13bfbf97dae871a1b20d938a53329e10800ce093213abb811e236b7f6c" + ], + "valid": true + } + ] +} \ No newline at end of file diff --git a/contracts/script/merkle/batch/merkle_data_deploy.json b/contracts/script/merkle/batch/merkle_data_deploy.json new file mode 100644 index 0000000..dcd526d --- /dev/null +++ b/contracts/script/merkle/batch/merkle_data_deploy.json @@ -0,0 +1,91 @@ +{ + "merkleRoot": "0x25daa48bca5790b366cbf2fc525223961e425bc236b4d1372797a25f811f6d93", + "txCount": 3, + "batchId": 40, + "batchSalt": 1, + "transactions": [ + { + "index": 0, + "type": "DELAYED", + "txHash": "0x88e80272276e3500f4154a9dfc1763981c2ae6be570747a65bc00ba6ef6b37cc", + "txDataStruct": [ + "0x68b86Ce0e9E72367e20a0e144bECE5e2Bb61f403", + "0x3D4e40ecD81BADC2aEE1e62E694a6c969F29586e", + "10000000", + 1, + 1766393245, + 1, + 40, + 0, + 1 + ], + "evmAddresses": { + "from": "0x68b86Ce0e9E72367e20a0e144bECE5e2Bb61f403", + "to": "0x3D4e40ecD81BADC2aEE1e62E694a6c969F29586e" + }, + "tronAddresses": { + "from": "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "to": "TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU" + }, + "proof": [ + "0x7399cc043427bd28ce666fe8890e8757c5aec90ec1da28e5a4ad389f3dbc1cd0", + 
"0x61c3acfb278fef59a1435801cec08385d51a38be09b91872e89084aa55f22f59" + ] + }, + { + "index": 1, + "type": "INSTANT", + "txHash": "0x7399cc043427bd28ce666fe8890e8757c5aec90ec1da28e5a4ad389f3dbc1cd0", + "txDataStruct": [ + "0x68b86Ce0e9E72367e20a0e144bECE5e2Bb61f403", + "0x3D4e40ecD81BADC2aEE1e62E694a6c969F29586e", + "20000000", + 2, + 1766393246, + 1, + 40, + 1, + 1 + ], + "evmAddresses": { + "from": "0x68b86Ce0e9E72367e20a0e144bECE5e2Bb61f403", + "to": "0x3D4e40ecD81BADC2aEE1e62E694a6c969F29586e" + }, + "tronAddresses": { + "from": "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "to": "TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU" + }, + "proof": [ + "0x88e80272276e3500f4154a9dfc1763981c2ae6be570747a65bc00ba6ef6b37cc", + "0x61c3acfb278fef59a1435801cec08385d51a38be09b91872e89084aa55f22f59" + ] + }, + { + "index": 2, + "type": "BATCHED", + "txHash": "0x61c3acfb278fef59a1435801cec08385d51a38be09b91872e89084aa55f22f59", + "txDataStruct": [ + "0x68b86Ce0e9E72367e20a0e144bECE5e2Bb61f403", + "0x3D4e40ecD81BADC2aEE1e62E694a6c969F29586e", + "30000000", + 3, + 1766393247, + 3, + 40, + 2, + 1 + ], + "evmAddresses": { + "from": "0x68b86Ce0e9E72367e20a0e144bECE5e2Bb61f403", + "to": "0x3D4e40ecD81BADC2aEE1e62E694a6c969F29586e" + }, + "tronAddresses": { + "from": "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "to": "TFZMxv9HUzvsL3M7obrvikSQkuvJsopgMU" + }, + "proof": [ + "0x8a573f8b2597359ebf470b8f913c96712f4c911d6f50e7e797477facb424a820" + ] + } + ] +} \ No newline at end of file diff --git a/contracts/script/merkle/whitelist/generateRoot.py b/contracts/script/merkle/whitelist/generateRoot.py new file mode 100644 index 0000000..f772f15 --- /dev/null +++ b/contracts/script/merkle/whitelist/generateRoot.py @@ -0,0 +1,85 @@ +# filename: generate_root_evm_sorted.py +from eth_utils import keccak, is_hex_address, to_checksum_address + +def normalize(addr: str) -> str: + addr = addr.strip() + if not addr.startswith("0x"): + addr = "0x" + addr + if not is_hex_address(addr): + raise ValueError(f"Invalid address: 
{addr}") + return addr.lower() + +def leaf_hash(addr: str) -> bytes: + a = normalize(addr) + b20 = bytes.fromhex(a[2:]) + return keccak(b"\x00" * 12 + b20) + +def hash_pair(a: bytes, b: bytes) -> bytes: + if a < b: + return keccak(a + b) + else: + return keccak(b + a) + +def build_leaves(addrs): + addrs_sorted = sorted([normalize(a) for a in addrs]) + leaves = [leaf_hash(a) for a in addrs_sorted] + return leaves, addrs_sorted + +def build_tree(leaves): + layers = [leaves[:]] + while len(layers[-1]) > 1: + cur = layers[-1] + nxt = [] + for i in range(0, len(cur), 2): + l = cur[i] + r = cur[i+1] if i+1 < len(cur) else cur[i] + nxt.append(hash_pair(l, r)) + layers.append(nxt) + return layers + +def proof(layers, idx): + pf = [] + for layer in layers[:-1]: + sib = idx ^ 1 + if sib < len(layer): + pf.append(layer[sib]) + else: + # Odd node was paired with itself in build_tree, so it is its own sibling; + # omitting it here would break verification for the last leaf of odd layers + pf.append(layer[idx]) + idx //= 2 + return pf + +def verify_sorted(leaf, pf, rt): + h = leaf + for p in pf: + if h < p: + h = keccak(h + p) + else: + h = keccak(p + h) + return h == rt + +if __name__ == "__main__": + user1 = "0x70997970C51812dc3A010C7d01b50e0d17dc79C8" + user2 = "0xBD26367c4B23A6D3713A1e1a50B2D67E8748cB98" + user3 = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266" + + whitelist = [user1, user2, user3] + + leaves, addrs_sorted = build_leaves(whitelist) + layers = build_tree(leaves) + rt = layers[-1][0] + print(f"Merkle Root: 0x{rt.hex()}") + + for addr in addrs_sorted: + lf = leaf_hash(addr) + idx = leaves.index(lf) + pf = proof(layers, idx) + + ok = verify_sorted(lf, pf, rt) + print(f"\nAddress {to_checksum_address(addr)} is whitelisted: {ok}") + + pf_hex = ["0x" + x.hex() for x in pf] + print("Proof:", "[" + ", ".join(pf_hex) + "]") + + name = to_checksum_address(addr).replace("0x","").upper() + print("\n// Solidity") + print(f"PROOF_{name} = new bytes32[]({len(pf_hex)});") + for i, p in enumerate(pf_hex): + print(f"PROOF_{name}[{i}] = {p};") \ No newline at end of file diff --git a/contracts/script/merkle/whitelist/generateWhitelistRootDeploy.py
b/contracts/script/merkle/whitelist/generateWhitelistRootDeploy.py new file mode 100644 index 0000000..4e4dbde --- /dev/null +++ b/contracts/script/merkle/whitelist/generateWhitelistRootDeploy.py @@ -0,0 +1,88 @@ +import base58 +from eth_utils import keccak + +def tron_to_evm_bytes32(tron_addr: str) -> bytes: + raw = base58.b58decode_check(tron_addr) + addr20 = raw[1:] # 20 bytes + return b'\x00' * 12 + addr20 # pad left to 32 bytes like Solidity assembly + +def merkle_tree(leaves): + tree = [leaves] + while len(tree[-1]) > 1: + layer = tree[-1] + next_layer = [] + for i in range(0, len(layer), 2): + left = layer[i] + right = layer[i+1] if i+1 < len(layer) else layer[i] + # Sort the pair to ensure consistency with Solidity assembly logic + if left > right: + left, right = right, left + combined = keccak(left + right) + next_layer.append(combined) + tree.append(next_layer) + return tree + +def merkle_root(tree): + return tree[-1][0] if tree else None + +def merkle_proof(tree, index): + proof = [] + for layer in tree[:-1]: + pair_index = index ^ 1 + if pair_index < len(layer): + # The proof always includes the sibling hash + proof.append(layer[pair_index]) + else: + # Odd node was duplicated in merkle_tree, so it is its own sibling + proof.append(layer[index]) + index //= 2 + return proof + +def verify_proof(leaf, proof, root): + computed_hash = leaf + for sibling in proof: + if computed_hash > sibling: + computed_hash = keccak(sibling + computed_hash) + else: + computed_hash = keccak(computed_hash + sibling) + return computed_hash == root + +# List of whitelisted TRON addresses +whitelist = [ + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", + "TVKAAcqpQxz3J4waayePr8dQjSQ2XHkdbF", +] + +# Compute leaves as keccak256 of 32 bytes (12 zeros + 20-byte address) +leaves = [keccak(tron_to_evm_bytes32(addr)) for addr in whitelist] + +# Build Merkle tree +tree = merkle_tree(leaves) + +# Get Merkle root +root = merkle_root(tree) + +def get_proof_for_address(tron_addr): + leaf =
keccak(tron_to_evm_bytes32(tron_addr)) + try: + index = leaves.index(leaf) + except ValueError: + return None, None + proof = merkle_proof(tree, index) + return leaf, proof + +if __name__ == "__main__": + test_addresses = [ + "TKWvD71EMFTpFVGZyqqX9fC6MQgcR9H76M", # whitelisted + "TVKAAcqpQxz3J4waayePr8dQjSQ2XHkdbF", # whitelisted + ] + + print("Merkle Root:", root.hex()) + for addr in test_addresses: + leaf, proof = get_proof_for_address(addr) + if leaf is None: + print(f"Address {addr} is NOT in the whitelist.") + continue + is_valid = verify_proof(leaf, proof, root) + print(f"Address {addr} is whitelisted: {is_valid}") + print("Proof:", "[" + ", ".join(f"0x{p.hex()}" for p in proof) + "]") \ No newline at end of file diff --git a/contracts/script/tron-deploy/deployFeeModule.js b/contracts/script/tron-deploy/deployFeeModule.js new file mode 100644 index 0000000..4d98bd5 --- /dev/null +++ b/contracts/script/tron-deploy/deployFeeModule.js @@ -0,0 +1,42 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require("tronweb"); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +const FEE_LIMIT = 500_000_000; + +function loadArtifact(name) { + const p = path.join('out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi, bytecode: j.bytecode?.object || j.bytecode }; +} + +async function main() { + const network = process.argv[2] || 'nile'; + const CONTRACT_NAME = 'FeeModule'; + const pk = process.env.UPDATER_PRIVATE_KEY; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + const { abi, bytecode } = loadArtifact(CONTRACT_NAME); + const deployed = await tronWeb.contract().new({ + abi, + 
bytecode, + feeLimit: FEE_LIMIT, + callValue: 0 + }); + + console.log(`${CONTRACT_NAME} deployed: ${tronWeb.address.fromHex(deployed.address)}`); +} + +main().catch((e) => { + console.error(e); + process.exit(1); +}); \ No newline at end of file diff --git a/contracts/script/tron-deploy/deploySettlement.js b/contracts/script/tron-deploy/deploySettlement.js new file mode 100644 index 0000000..49cdb31 --- /dev/null +++ b/contracts/script/tron-deploy/deploySettlement.js @@ -0,0 +1,42 @@ +const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require("tronweb"); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +const FEE_LIMIT = 500_000_000; + +function loadArtifact(name) { + const p = path.join('out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi, bytecode: j.bytecode?.object || j.bytecode }; +} + +async function main() { + const network = process.argv[2] || 'nile'; + const CONTRACT_NAME = 'Settlement'; + const pk = process.env.UPDATER_PRIVATE_KEY; + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + const { abi, bytecode } = loadArtifact(CONTRACT_NAME); + const deployed = await tronWeb.contract().new({ + abi, + bytecode, + feeLimit: FEE_LIMIT, + callValue: 0 + }); + + console.log(`${CONTRACT_NAME} deployed: ${tronWeb.address.fromHex(deployed.address)}`); +} + +main().catch((e) => { + console.error(e); + process.exit(1); +}); diff --git a/contracts/script/tron-deploy/deployWhitelistRegistry.js b/contracts/script/tron-deploy/deployWhitelistRegistry.js new file mode 100644 index 0000000..db731e2 --- /dev/null +++ b/contracts/script/tron-deploy/deployWhitelistRegistry.js @@ -0,0 +1,46 @@ 
+const fs = require('fs'); +const path = require('path'); +const { TronWeb } = require("tronweb"); +require('dotenv').config({ quiet: true }); + +const NETWORKS = { + nile: { fullHost: 'https://nile.trongrid.io' }, + mainnet: { fullHost: 'https://api.trongrid.io' } +}; + +const FEE_LIMIT = 500_000_000; + +function loadArtifact(name) { + const p = path.join('out', `${name}.sol`, `${name}.json`); + const j = JSON.parse(fs.readFileSync(p, 'utf8')); + return { abi: j.abi, bytecode: j.bytecode?.object || j.bytecode }; +} + +async function main() { + const network = process.argv[2] || 'nile'; + const CONTRACT_NAME = 'WhitelistRegistry'; + const pk = process.env.UPDATER_PRIVATE_KEY; + const updater = process.env.UPDATER_ADDRESS; + + if (!NETWORKS[network]) throw new Error('Network must be nile or mainnet'); + if (!pk) throw new Error('Set UPDATER_PRIVATE_KEY in .env'); + if (!updater) throw new Error('Set UPDATER_ADDRESS in .env'); + + const tronWeb = new TronWeb({ fullHost: NETWORKS[network].fullHost, privateKey: pk }); + + const { abi, bytecode } = loadArtifact(CONTRACT_NAME); + const deployed = await tronWeb.contract().new({ + abi, + bytecode, + feeLimit: FEE_LIMIT, + callValue: 0, + parameters: [updater] + }); + + console.log(`${CONTRACT_NAME} deployed: ${tronWeb.address.fromHex(deployed.address)}`); +} + +main().catch((e) => { + console.error(e); + process.exit(1); +}); \ No newline at end of file diff --git a/contracts/src/FeeModule.sol b/contracts/src/FeeModule.sol new file mode 100644 index 0000000..0a41663 --- /dev/null +++ b/contracts/src/FeeModule.sol @@ -0,0 +1,379 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol"; + +import {IFeeModule} from "./interfaces/IFeeModule.sol"; +import {Types} from "./libraries/Types.sol"; +import {Errors} from "./libraries/Errors.sol"; + +/** + * @title FeeModule + * @notice Statistical fee calculation module + * @dev ⚠️ IMPORTANT: This module does 
NOT collect actual fees + * All fee calculations are for UI/analytics display purposes only + * No TRX/tokens are transferred or deducted during fee application + */ +contract FeeModule is IFeeModule, Ownable { + /// @dev Mapping of transfer hash to the fee amount recorded for that transfer + mapping(bytes32 transferHash => uint256 fee) private s_transferFees; + + /// @dev Mapping of batch ID to total fees recorded for that batch + mapping(uint64 batchId => uint256 fee) private s_batchTotalFees; + + /// @dev Mapping of user address to their free transaction usage information + mapping(address => Types.FreeTxInfo) private s_freeTxUsage; + + /* -------------------------------------------------------------------------- */ + /* STATE VARIABLES */ + /* -------------------------------------------------------------------------- */ + + /// @dev Total fees recorded across all transactions (statistical only, never collected) + uint256 private s_totalFees; + + /// @dev Address of the Settlement contract authorized to apply fees + address private s_settlement; + + /// @dev Base fee for delayed transactions (0.1 TRX) + uint256 private constant BASE_FEE = 100_000; + + /// @dev Fee per recipient for batched transactions (0.05 TRX) + uint256 private constant BATCH_FEE = 50_000; + + /// @dev Fee for instant transactions (0.2 TRX) + uint256 private constant INSTANT_FEE = 200_000; + + /// @dev Number of free transactions per day for eligible users + uint256 private constant FREE_TX_AMOUNT = 10; + + /// @dev Volume threshold for large transactions (no fee applied) + uint256 private constant LARGE_VOLUME = 1_000_000_000; + + /* -------------------------------------------------------------------------- */ + /* CONSTRUCTOR */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Contract constructor + * @dev Sets the deployer as the owner + */ + constructor() Ownable(msg.sender) {} + + /* -------------------------------------------------------------------------- */ + /* FUNCTIONS */ + 
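The constants above define the statistical fee schedule in sun (1 TRX = 1,000,000 sun). The branching that `calculateFee` performs over these constants can be mirrored off-chain for display purposes; the sketch below is an illustrative simplification (the free-tier quota is passed in as a plain number rather than derived from per-day storage, and the on-chain calculation remains authoritative):

```javascript
// Off-chain mirror of the FeeModule schedule (amounts in sun; 1 TRX = 1e6 sun).
// Illustrative only -- the authoritative calculation lives on-chain.
const BASE_FEE = 100_000;           // 0.1 TRX, delayed
const BATCH_FEE = 50_000;           // 0.05 TRX per recipient, batched
const INSTANT_FEE = 200_000;        // 0.2 TRX, instant
const LARGE_VOLUME = 1_000_000_000; // volume at/above this waives the fee

function quoteFee(txType, volume, recipientCount, freeQuotaLeft) {
  if (volume >= LARGE_VOLUME) return 0;        // large transfers are free
  switch (txType) {
    case 'INSTANT': return INSTANT_FEE;
    case 'BATCHED': return BATCH_FEE * recipientCount;
    case 'DELAYED':
    case 'FREE_TIER':
      return freeQuotaLeft > 0 ? 0 : BASE_FEE; // quota covers delayed txs
    default: throw new Error('unknown tx type');
  }
}

console.log(quoteFee('BATCHED', 5_000_000, 4, 0));     // 200000 (0.2 TRX)
console.log(quoteFee('DELAYED', 5_000_000, 1, 3));     // 0 (free quota)
console.log(quoteFee('INSTANT', 2_000_000_000, 1, 0)); // 0 (large volume)
```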
/* -------------------------------------------------------------------------- */ + + /** + * @notice Calculates the fee for a transaction based on type and parameters + * @dev Resets quota daily based on block.timestamp + * Users near day boundaries may access up to 20 transactions within a short window + * This is an accepted edge case to avoid complex timestamp tracking + * @param sender Address initiating the transaction + * @param txType Type of transaction (FREE_TIER, DELAYED, INSTANT, or BATCHED) + * @param volume Transaction volume/amount + * @param recipientCount Number of recipients (must be >1 for BATCHED, =1 for others) + * @return info FeeInfo struct containing fee amount, transaction type, and remaining free quota + */ + function calculateFee(address sender, Types.TxType txType, uint256 volume, uint256 recipientCount) + external + view + returns (Types.FeeInfo memory info) + { + _validateCalculateFeeInput(sender, volume, recipientCount); + _validateTxType(txType); + _validateRecipientCount(txType, recipientCount); + + if (volume >= LARGE_VOLUME) { + info.fee = 0; + info.txType = txType; + info.freeQuota = _getRemainingFreeTxQuota(sender); + return info; + } + + if (txType == Types.TxType.DELAYED || txType == Types.TxType.FREE_TIER) { + return _calculateDelayedOrFreeFee(sender, txType); + } else if (txType == Types.TxType.INSTANT) { + return _calculateInstantFee(sender); + } else if (txType == Types.TxType.BATCHED) { + return _calculateBatchedFee(sender, recipientCount); + } + + return info; + } + + /** + * @notice Applies the calculated fee to a transaction + * @dev Can only be called by the authorized Settlement contract + * @param sender Address initiating the transaction + * @param fee Fee amount to apply + * @param transferHash Unique hash of the transfer + * @param batchId ID of the batch containing this transaction + * @param txType Type of transaction being processed + */ + function applyFee(address sender, uint256 fee, bytes32 transferHash, 
uint64 batchId, Types.TxType txType) external { + if (sender == address(0) || transferHash == bytes32(0) || batchId == 0) { + revert Errors.FeeModule__InvalidInput(); + } + + if (msg.sender != s_settlement) { + revert Errors.FeeModule__NotAuthorized(); + } + + if (txType == Types.TxType.FREE_TIER) { + _consumeFreeTxQuota(sender); + } + + s_transferFees[transferHash] = fee; + unchecked { + s_batchTotalFees[batchId] += fee; + s_totalFees += fee; + } + + emit FeeApplied(sender, fee, transferHash, batchId); + } + + /* SETTERS */ + + /** + * @notice Sets the Settlement contract address + * @dev Can only be called by owner. Only this address can call applyFee + * @param settlement Address of the Settlement contract + */ + function setSettlement(address settlement) external onlyOwner { + if (settlement == address(0)) { + revert Errors.FeeModule__InvalidInput(); + } + + if (s_settlement == settlement) { + revert Errors.FeeModule__AlreadySettlement(); + } + + s_settlement = settlement; + + emit SettlementUpdated(settlement); + } + + /* INTERNAL */ + + /** + * @notice Validates basic input parameters for fee calculation + * @param sender Address initiating the transaction + * @param volume Transaction volume/amount + * @param recipientCount Number of recipients + */ + function _validateCalculateFeeInput(address sender, uint256 volume, uint256 recipientCount) internal pure { + if (sender == address(0) || volume == 0 || recipientCount == 0) { + revert Errors.FeeModule__InvalidInput(); + } + } + + /** + * @notice Validates transaction type is one of the allowed types + * @param txType Type of transaction to validate + */ + function _validateTxType(Types.TxType txType) internal pure { + if ( + txType != Types.TxType.FREE_TIER && txType != Types.TxType.DELAYED && txType != Types.TxType.INSTANT + && txType != Types.TxType.BATCHED + ) { + revert Errors.FeeModule__InvalidTxType(); + } + } + + /** + * @notice Validates recipient count matches transaction type requirements + * @param 
txType Type of transaction + * @param recipientCount Number of recipients + */ + function _validateRecipientCount(Types.TxType txType, uint256 recipientCount) internal pure { + if (txType == Types.TxType.BATCHED && recipientCount <= 1) { + revert Errors.FeeModule__InvalidRecipientCount(); + } + + if (txType != Types.TxType.BATCHED && recipientCount > 1) { + revert Errors.FeeModule__InvalidRecipientCount(); + } + } + + /** + * @notice Calculates fee for delayed or free tier transactions + * @param sender Address initiating the transaction + * @param txType Type of transaction (FREE_TIER or DELAYED) + * @return info FeeInfo struct with fee details + */ + function _calculateDelayedOrFreeFee(address sender, Types.TxType txType) + internal + view + returns (Types.FeeInfo memory info) + { + uint256 remainingQuota = _getRemainingFreeTxQuota(sender); + + if (remainingQuota != 0) { + info.fee = 0; + info.txType = Types.TxType.FREE_TIER; + info.freeQuota = remainingQuota; + } else { + if (txType == Types.TxType.FREE_TIER) { + revert Errors.FeeModule__FreeTierLimitExceeded(); + } + info.fee = BASE_FEE; + info.txType = Types.TxType.DELAYED; + info.freeQuota = 0; + } + } + + /** + * @notice Calculates fee for instant transactions + * @param sender Address initiating the transaction + * @return info FeeInfo struct with fee details + */ + function _calculateInstantFee(address sender) internal view returns (Types.FeeInfo memory info) { + info.fee = INSTANT_FEE; + info.txType = Types.TxType.INSTANT; + info.freeQuota = _getRemainingFreeTxQuota(sender); + } + + /** + * @notice Calculates fee for batched transactions + * @param sender Address initiating the transaction + * @param recipientCount Number of recipients in the batch + * @return info FeeInfo struct with fee details + */ + function _calculateBatchedFee(address sender, uint256 recipientCount) + internal + view + returns (Types.FeeInfo memory info) + { + info.fee = BATCH_FEE * recipientCount; + info.txType = 
Types.TxType.BATCHED; + info.freeQuota = _getRemainingFreeTxQuota(sender); + } + + /** + * @notice Internal function to get remaining free transaction quota for a user + * @dev Resets quota daily based on block.timestamp + * Users near day boundaries may access up to 20 transactions within a short window + * This is an accepted edge case to avoid complex timestamp tracking + * @param sender Address to check quota for + * @return Remaining number of free transactions available + */ + function _getRemainingFreeTxQuota(address sender) internal view returns (uint256) { + Types.FreeTxInfo storage usage = s_freeTxUsage[sender]; + uint256 currentDay; + unchecked { + currentDay = block.timestamp / 1 days; + } + + if (usage.day < currentDay) { + return FREE_TX_AMOUNT; + } + + if (usage.count < FREE_TX_AMOUNT) { + unchecked { + return FREE_TX_AMOUNT - usage.count; + } + } + + return 0; + } + + /** + * @notice Internal function to consume one free transaction from user's quota + * @dev Updates daily counter and emits event. 
Reverts if quota exceeded + * @param sender Address consuming the free transaction + */ + function _consumeFreeTxQuota(address sender) internal { + Types.FreeTxInfo storage usage = s_freeTxUsage[sender]; + uint256 currentDay; + unchecked { + currentDay = block.timestamp / 1 days; + } + + if (usage.day < currentDay) { + // Safe: days since epoch fits in uint128 for trillions of years + if (currentDay > type(uint128).max) { + revert Errors.FeeModule__InvalidInput(); + } + usage.day = uint128(currentDay); + usage.count = 0; + } + + if (usage.count < FREE_TX_AMOUNT) { + unchecked { + usage.count += 1; + uint256 remaining = FREE_TX_AMOUNT - usage.count; + emit FreeTierUsed(sender, remaining); + } + } else { + revert Errors.FeeModule__FreeTierLimitExceeded(); + } + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Returns the Settlement contract address + * @return Address of the Settlement contract + */ + function getSettlement() external view returns (address) { + return s_settlement; + } + + /** + * @notice Returns the contract owner + * @return Address of the owner + */ + function getOwner() external view returns (address) { + return owner(); + } + + /** + * @notice Returns the free transaction usage info for a user + * @param user Address to query + * @return FreeTxInfo struct containing day and count + */ + function getFreeTxUsage(address user) external view returns (Types.FreeTxInfo memory) { + return s_freeTxUsage[user]; + } + + /** + * @notice Returns the fee paid for a specific transaction + * @param transferHash Hash of the transfer + * @return fee Fee amount in wei + */ + function getFeeOfTransaction(bytes32 transferHash) external view returns (uint256 fee) { + return s_transferFees[transferHash]; + } + + /** + * @notice Returns CALCULATED fees for statistical purposes only + * @dev WARNING: Fees are NOT 
actually collected or transferred + */ + function getTotalFeesCollected() external view returns (uint256 total) { + return s_totalFees; + } + + /** + * @notice Returns total fees collected for a specific batch + * @param batchId ID of the batch + * @return total Total fees for the batch in wei + */ + function getBatchTotalFees(uint64 batchId) external view returns (uint256 total) { + return s_batchTotalFees[batchId]; + } + + /** + * @notice Returns remaining free tier transactions for a user today + * @param sender Address to query + * @return remaining Number of free transactions remaining + */ + function getRemainingFreeTierTransactions(address sender) external view returns (uint256 remaining) { + Types.FreeTxInfo storage usage = s_freeTxUsage[sender]; + uint256 currentDay = block.timestamp / 1 days; + + uint256 count = usage.day < currentDay ? 0 : usage.count; + return FREE_TX_AMOUNT > count ? FREE_TX_AMOUNT - count : 0; + } +} diff --git a/contracts/src/Settlement.sol b/contracts/src/Settlement.sol new file mode 100644 index 0000000..f312f93 --- /dev/null +++ b/contracts/src/Settlement.sol @@ -0,0 +1,556 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol"; +import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol"; +import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol"; +import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol"; +import {MerkleProof} from "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol"; +import {ReentrancyGuard} from "@openzeppelin/contracts/utils/ReentrancyGuard.sol"; + +import {ISettlement} from "./interfaces/ISettlement.sol"; +import {IFeeModule} from "./interfaces/IFeeModule.sol"; +import {IWhitelistRegistry} from "./interfaces/IWhitelistRegistry.sol"; +import {Types} from "./libraries/Types.sol"; +import {Errors} from "./libraries/Errors.sol"; + +/** + * @title Settlement + * @notice Contract for 
batch processing and execution of token transfers using Merkle Trees + * @dev Uses Merkle proofs for transaction verification, timelock for security, and whitelist for access control + */ +contract Settlement is ISettlement, Ownable, ReentrancyGuard, Pausable { + using SafeERC20 for IERC20; + + /* -------------------------------------------------------------------------- */ + /* TYPES */ + /* -------------------------------------------------------------------------- */ + + /// @dev Mapping of batch ID to batch data + mapping(uint64 batchId => Types.Batch) private s_batches; + + /// @dev Mapping of Merkle root to batch ID for quick lookup + mapping(bytes32 => uint64) private s_batchIdsByRoot; + + /// @dev Mapping of transaction hashes to execution status (prevents replay attacks) + mapping(bytes32 txHash => bool executed) private s_executedTransfers; + + /// @dev Mapping of aggregator addresses to approval status + mapping(address => bool) private s_approvedAggregators; + + /* -------------------------------------------------------------------------- */ + /* STATE VARIABLES */ + /* -------------------------------------------------------------------------- */ + + /// @dev Module for calculating and applying fees + IFeeModule private s_feeModule; + + /// @dev Whitelist registry for user verification + IWhitelistRegistry private s_registry; + + /// @dev ERC20 token used for transfers + IERC20 private s_token; + + /// @dev Batch ID counter + uint64 private s_batchIds; + + /// @dev Maximum number of transactions per batch + uint32 private s_maxTxPerBatch; + + /// @dev Timelock duration (delay before batch can be executed) + uint48 private s_timelockDuration; + + /// @dev Configuration status + bool private s_configured; + + /* -------------------------------------------------------------------------- */ + /* CONSTRUCTOR */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Contract constructor + * @dev Sets the deployer 
as owner and approved aggregator + */ + constructor() Ownable(msg.sender) { + s_approvedAggregators[msg.sender] = true; + } + + /* -------------------------------------------------------------------------- */ + /* FUNCTIONS */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Submits a new batch of transactions + * @dev Can only be called by approved aggregators. Creates a new batch with timelock + * @param merkleRoot The Merkle root of the transaction batch + * @param txCount The number of transactions in the batch + * @param batchSalt Salt used by the backend when building the Merkle root + * @return success Boolean indicating success + * @return batchId The ID of the created batch + */ + function submitBatch(bytes32 merkleRoot, uint32 txCount, uint64 batchSalt) external returns (bool, uint64) { + _requireConfigured(); + _onlyApprovedAggregator(); + + if (merkleRoot == bytes32(0) || txCount == 0 || txCount > s_maxTxPerBatch) { + revert Errors.Settlement__InvalidInput(); + } + + if (s_batchIdsByRoot[merkleRoot] != 0) { + revert Errors.Settlement__BatchAlreadySubmitted(); + } + + if (block.timestamp > type(uint48).max) { + revert Errors.Settlement__InvalidInput(); + } + + uint256 calculatedUnlockTime = block.timestamp + s_timelockDuration; + if (calculatedUnlockTime > type(uint48).max) { + revert Errors.Settlement__InvalidInput(); + } + + ++s_batchIds; + uint64 batchId = s_batchIds; + + s_batches[batchId] = Types.Batch({ + merkleRoot: merkleRoot, + // Safe: block.timestamp fits in uint48 until year ~8.9M AD + timestamp: uint48(block.timestamp), + txCount: txCount, + // Safe: overflow checked above + unlockTime: uint48(calculatedUnlockTime), + batchSalt: batchSalt + }); + + s_batchIdsByRoot[merkleRoot] = batchId; + + emit BatchSubmitted(batchId, merkleRoot, txCount, uint48(block.timestamp)); + return (true, batchId); + } + + /** + * @notice Executes a transfer from a submitted batch + * @dev Verifies Merkle proof, whitelist status, and timelock before executing transfer + *
@param txProof Merkle proof for the transaction + * @param whitelistProof Merkle proof for whitelist verification + * @param txData Transfer data including from, to, amount, and other parameters + * @return success Boolean indicating successful execution + */ + function executeTransfer( + bytes32[] memory txProof, + bytes32[] memory whitelistProof, + Types.TransferData memory txData + ) external nonReentrant whenNotPaused returns (bool) { + IFeeModule feeModule = s_feeModule; + IERC20 token = s_token; + + _validateTransferInput(txData); + _validateBatched(whitelistProof, txData); + + bytes32 txHash = _validateBatchAndProof(txProof, txData); + + Types.FeeInfo memory fee = + feeModule.calculateFee(txData.from, txData.txType, txData.amount, txData.recipientCount); + + if (token.balanceOf(txData.from) < txData.amount) { + revert Errors.Settlement__InsufficientBalance(); + } + + if (token.allowance(txData.from, address(this)) < txData.amount) { + revert Errors.Settlement__InsufficientAllowance(); + } + + feeModule.applyFee(txData.from, fee.fee, txHash, txData.batchId, fee.txType); + + token.safeTransferFrom(txData.from, txData.to, txData.amount); + s_executedTransfers[txHash] = true; + + emit TransferExecuted(txData.from, txData.to, txData.amount, txData.nonce); + return true; + } + + /** + * @notice Approves a new aggregator + * @dev Can only be called by owner + * @param aggregator Address of the aggregator to approve + */ + function approveAggregator(address aggregator) external onlyOwner { + if (aggregator == address(0)) { + revert Errors.Settlement__InvalidInput(); + } + + if (s_approvedAggregators[aggregator]) { + revert Errors.Settlement__AlreadyAggregator(); + } + + s_approvedAggregators[aggregator] = true; + emit AggregatorApproved(aggregator); + } + + /** + * @notice Removes approval from an aggregator + * @dev Can only be called by owner + * @param aggregator Address of the aggregator to disapprove + */ + function disapproveAggregator(address aggregator) 
external onlyOwner { + if (aggregator == address(0)) { + revert Errors.Settlement__InvalidInput(); + } + + if (!s_approvedAggregators[aggregator]) { + revert Errors.Settlement__AggregatorNotApproved(); + } + + s_approvedAggregators[aggregator] = false; + emit AggregatorDisapproved(aggregator); + } + + /** + * @notice Pauses the contract + * @dev Can only be called by owner. Prevents executeTransfer calls + */ + function pause() external onlyOwner { + _pause(); + } + + /** + * @notice Unpauses the contract + * @dev Can only be called by owner + */ + function unpause() external onlyOwner { + _unpause(); + } + + /* SETTERS */ + + /** + * @notice Sets the whitelist registry address + * @dev Can only be called by owner + * @param whitelistRegistry Address of the whitelist registry contract + */ + function setWhitelistRegistry(address whitelistRegistry) external onlyOwner { + if (whitelistRegistry == address(0)) { + revert Errors.Settlement__InvalidInput(); + } + + if (s_registry == IWhitelistRegistry(whitelistRegistry)) { + revert Errors.Settlement__AlreadyRegistry(); + } + + s_registry = IWhitelistRegistry(whitelistRegistry); + _recomputeConfigured(); + emit WhitelistRegistryUpdated(whitelistRegistry); + } + + /** + * @notice Sets the fee module address + * @dev Can only be called by owner + * @param feeModule Address of the fee module contract + */ + function setFeeModule(address feeModule) external onlyOwner { + if (feeModule == address(0)) { + revert Errors.Settlement__InvalidInput(); + } + + if (s_feeModule == IFeeModule(feeModule)) { + revert Errors.Settlement__AlreadyFeeModule(); + } + + s_feeModule = IFeeModule(feeModule); + _recomputeConfigured(); + emit FeeModuleUpdated(feeModule); + } + + /** + * @notice Sets the maximum number of transactions per batch + * @dev Can only be called by owner + * @param maxTx Maximum transaction count + */ + function setMaxTxPerBatch(uint32 maxTx) external onlyOwner { + if (maxTx == 0) { + revert 
Errors.Settlement__InvalidInput(); + } + + if (s_maxTxPerBatch == maxTx) { + revert Errors.Settlement__AlreadySet(); + } + + s_maxTxPerBatch = maxTx; + emit MaxTxPerBatchUpdated(maxTx); + } + + /** + * @notice Sets the timelock duration for batches + * @dev Can only be called by owner. Duration in seconds + * @param duration Timelock duration in seconds + */ + function setTimelockDuration(uint48 duration) external onlyOwner { + if (duration == s_timelockDuration) { + revert Errors.Settlement__AlreadyTimelockDuration(); + } + + s_timelockDuration = uint48(duration); + emit TimelockDurationUpdated(duration); + } + + /** + * @notice Sets the token address for transfers + * @dev Can only be called by owner + * @param tokenAddress Address of the ERC20 token contract + */ + function setToken(address tokenAddress) external onlyOwner { + if (tokenAddress == address(0)) { + revert Errors.Settlement__InvalidInput(); + } + + if (s_token == IERC20(tokenAddress)) { + revert Errors.Settlement__AlreadyToken(); + } + + s_token = IERC20(tokenAddress); + _recomputeConfigured(); + emit TokenUpdated(tokenAddress); + } + + /* INTERNAL */ + + /** + * @notice Internal function to check if caller is an approved aggregator + * @dev Reverts if caller is not approved + */ + function _onlyApprovedAggregator() internal view { + if (!s_approvedAggregators[msg.sender]) { + revert Errors.Settlement__AggregatorNotApproved(); + } + } + + /** + * @notice Validates if the transaction is batched and checks whitelist status + * @param whitelistProof Merkle proof for whitelist verification + * @param txData Transfer data structure + */ + function _validateBatched(bytes32[] memory whitelistProof, Types.TransferData memory txData) internal view { + if (txData.txType == Types.TxType.BATCHED) { + if (whitelistProof.length == 0) { + revert Errors.Settlement__NotWhitelisted(); + } + + if (!s_registry.verifyWhitelist(whitelistProof, txData.from)) { + revert Errors.Settlement__NotWhitelisted(); + } + } + } + + 
/** + * @notice Calculates the hash of a transfer + * @dev Uses keccak256 to hash all transfer parameters including batchSalt + * @param txData Transfer data structure + * @param batchSalt Salt used by backend to build merkle root + * @return txHash The calculated transaction hash + */ + function _calculateTxHash(Types.TransferData memory txData, uint64 batchSalt) + internal + pure + returns (bytes32 txHash) + { + txHash = keccak256( + abi.encodePacked( + txData.from, + txData.to, + txData.amount, + txData.nonce, + txData.timestamp, + txData.recipientCount, + txData.txType, + batchSalt + ) + ); + } + + /** + * @notice Recomputes the configuration status of the contract + * @dev Sets s_configured to true if all required modules are set + */ + function _recomputeConfigured() internal { + s_configured = + (address(s_registry) != address(0) && address(s_feeModule) != address(0) && address(s_token) != address(0)); + } + + /** + * @notice Ensures the contract is fully configured + * @dev Reverts if not configured + */ + function _requireConfigured() internal view { + if (!s_configured) { + revert Errors.Settlement__NotConfigured(); + } + } + + /** + * @notice Validates basic transfer inputs + * @param txData Transfer data structure + */ + function _validateTransferInput(Types.TransferData memory txData) internal view { + _requireConfigured(); + + if (txData.from == address(0) || txData.to == address(0) || txData.batchId == 0) { + revert Errors.Settlement__InvalidInput(); + } + } + + /** + * @notice Validates batch and Merkle proof + * @param txProof Merkle proof for the transaction + * @param txData Transfer data structure + * @return txHash The calculated transaction hash + */ + function _validateBatchAndProof(bytes32[] memory txProof, Types.TransferData memory txData) + internal + view + returns (bytes32 txHash) + { + Types.Batch storage batch = s_batches[txData.batchId]; + bytes32 merkleRoot = batch.merkleRoot; + uint256 unlockTime = batch.unlockTime; + uint64 
batchSalt = batch.batchSalt; + + if (txProof.length == 0 || txData.amount == 0) { + revert Errors.Settlement__InvalidInput(); + } + + if (merkleRoot == bytes32(0)) { + revert Errors.Settlement__InvalidBatch(); + } + + if (block.timestamp < unlockTime) { + revert Errors.Settlement__BatchLocked(); + } + + txHash = _calculateTxHash(txData, batchSalt); + if (!MerkleProof.verify(txProof, merkleRoot, txHash)) { + revert Errors.Settlement__InvalidMerkleProof(); + } + + if (s_executedTransfers[txHash]) { + revert Errors.Settlement__TransferAlreadyExecuted(); + } + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Returns the contract owner + * @return Owner address + */ + function getOwner() external view returns (address) { + return owner(); + } + + /** + * @notice Returns the whitelist registry address + * @return Whitelist registry address + */ + function getWhitelistRegistry() external view returns (address) { + return address(s_registry); + } + + /** + * @notice Returns the fee module address + * @return Fee module address + */ + function getFeeModule() external view returns (address) { + return address(s_feeModule); + } + + /** + * @notice Returns the token address + * @return Token address + */ + function getToken() external view returns (address) { + return address(s_token); + } + + /** + * @notice Returns the current batch ID counter + * @return Current batch ID + */ + function getCurrentBatchId() external view returns (uint64) { + return s_batchIds; + } + + /** + * @notice Returns the maximum transactions per batch + * @return Maximum transaction count per batch + */ + function getMaxTxPerBatch() external view returns (uint32) { + return s_maxTxPerBatch; + } + + /** + * @notice Returns the timelock duration + * @return Timelock duration in seconds + */ + function getTimelockDuration() external view returns 
(uint48) { + return s_timelockDuration; + } + + /** + * @notice Checks if an address is an approved aggregator + * @param aggregator Address to check + * @return True if approved, false otherwise + */ + function isApprovedAggregator(address aggregator) external view returns (bool) { + return s_approvedAggregators[aggregator]; + } + + /** + * @notice Returns batch ID by Merkle root hash + * @param rootHash Merkle root hash + * @return Batch ID + */ + function getBatchIdByRoot(bytes32 rootHash) external view returns (uint64) { + if (rootHash == bytes32(0)) { + revert Errors.Settlement__InvalidInput(); + } + return s_batchIdsByRoot[rootHash]; + } + + /** + * @notice Returns batch data by ID + * @param batchId Batch ID + * @return Batch data structure + */ + function getBatchById(uint64 batchId) external view returns (Types.Batch memory) { + return s_batches[batchId]; + } + + /** + * @notice Checks if a transfer has been executed in a specific batch + * @param transferHash Transfer hash + * @return True if executed, false otherwise + */ + function isExecutedTransfer(bytes32 transferHash) external view returns (bool) { + return s_executedTransfers[transferHash]; + } + + /** + * @notice Returns the Merkle root for a given batch ID + * @param batchId Batch ID + * @return Merkle root of the batch + */ + function getRootByBatchId(uint64 batchId) external view returns (bytes32) { + if (batchId == 0 || batchId > s_batchIds) { + revert Errors.Settlement__InvalidInput(); + } + return s_batches[batchId].merkleRoot; + } + + /** + * @notice Checks if the contract is fully configured + * @return True if configured, false otherwise + */ + function isConfigured() external view returns (bool) { + return s_configured; + } +} diff --git a/contracts/src/WhitelistRegistry.sol b/contracts/src/WhitelistRegistry.sol new file mode 100644 index 0000000..efe6d85 --- /dev/null +++ b/contracts/src/WhitelistRegistry.sol @@ -0,0 +1,349 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + 
+import {ECDSA} from "@openzeppelin/contracts/utils/cryptography/ECDSA.sol"; +import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol"; +import {MerkleProof} from "@openzeppelin/contracts/utils/cryptography/MerkleProof.sol"; +import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol"; +import {MessageHashUtils} from "@openzeppelin/contracts/utils/cryptography/MessageHashUtils.sol"; + +import {IWhitelistRegistry} from "./interfaces/IWhitelistRegistry.sol"; +import {Errors} from "./libraries/Errors.sol"; + +/** + * @title WhitelistRegistry + * @notice Contract for managing whitelisted addresses for batched transfers using Merkle tree verification + * @dev Uses ECDSA signatures for authorized updates, role-based access control, and collects fees for whitelist requests + */ +contract WhitelistRegistry is AccessControl, IWhitelistRegistry, Pausable { + using ECDSA for bytes32; + using MessageHashUtils for bytes32; + using MerkleProof for bytes32[]; + + /* -------------------------------------------------------------------------- */ + /* TYPES */ + /* -------------------------------------------------------------------------- */ + + mapping(address => bool) private s_authorizedUpdaters; + mapping(address => uint48) private s_lastRequestedTime; + + /* -------------------------------------------------------------------------- */ + /* STATE VARIABLES */ + /* -------------------------------------------------------------------------- */ + + bytes32 private s_merkleRoot; + // Packed into single slot (saves 2 storage slots): + uint128 private s_totalCollectedFees; // Max: 3.4×10^32 TRX - safe for trillions of years + uint64 private s_nonce; // Max: 18 quintillion updates - more than sufficient + uint48 private s_lastUpdate; // Timestamp - safe until year ~8.9M AD + uint256 private constant REQUEST_COOLDOWN = 24 hours; + uint256 private constant REQUEST_FEE = 10e6; // 10 TRX + + bytes32 private constant WITHDRAW_ROLE = keccak256("WITHDRAW_ROLE"); + 
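The state block above packs `s_totalCollectedFees` (uint128), `s_nonce` (uint64), and `s_lastUpdate` (uint48) into a single 256-bit slot, and `requestWhitelist` is throttled by a 10 TRX minimum fee and a 24-hour per-address cooldown. A small off-chain mirror of that throttling check (illustrative; the exact revert conditions are enforced on-chain, and `canRequest` is a hypothetical helper name):

```javascript
// Off-chain mirror of WhitelistRegistry request throttling (illustrative).
const REQUEST_COOLDOWN = 24 * 60 * 60; // 24 hours, in seconds
const REQUEST_FEE = 10_000_000;        // 10 TRX, in sun

// Would a whitelist request be accepted, given the attached value and the
// address's last request timestamp? Mirrors the fee floor and cooldown.
function canRequest(valueSun, lastRequestedAt, nowSec) {
  if (valueSun < REQUEST_FEE) return false;                      // fee too low
  if (nowSec - lastRequestedAt < REQUEST_COOLDOWN) return false; // cooling down
  return true;
}

// Packed-slot sanity check: 128 + 64 + 48 bits fit inside one 256-bit slot.
console.log(128 + 64 + 48 <= 256); // true

console.log(canRequest(10_000_000, 0, 100_000));      // true
console.log(canRequest(9_999_999, 0, 100_000));       // false (fee)
console.log(canRequest(10_000_000, 90_000, 100_000)); // false (cooldown)
```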
+ /* -------------------------------------------------------------------------- */ + /* CONSTRUCTOR */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Contract constructor + * @dev Grants admin, withdraw, and updater roles to the specified address + * @param updater Address to grant admin, withdraw, and updater roles + */ + constructor(address updater) { + if (updater == address(0)) { + revert Errors.WhitelistRegistry__InvalidInput(); + } + _setRoleAdmin(WITHDRAW_ROLE, DEFAULT_ADMIN_ROLE); + _grantRole(DEFAULT_ADMIN_ROLE, updater); + _grantRole(WITHDRAW_ROLE, updater); + s_authorizedUpdaters[updater] = true; + } + + /* -------------------------------------------------------------------------- */ + /* FUNCTIONS */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Updates the Merkle root for the whitelist + * @dev Requires valid signature from authorized updater. Increments nonce after successful update + * @param newRoot New Merkle root hash + * @param nonce The expected current nonce value (incremented after a successful update) + * @param signature ECDSA signature from authorized updater + */ + function updateMerkleRoot(bytes32 newRoot, uint64 nonce, bytes calldata signature) external whenNotPaused { + bytes32 oldRoot = s_merkleRoot; + + if (newRoot == s_merkleRoot) { + revert Errors.WhitelistRegistry__DuplicateUpdate(); + } + + _onlyAuthorizedUpdater(newRoot, nonce, signature); + s_merkleRoot = newRoot; + // Safe: block.timestamp fits in uint48 until year ~8.9M AD + s_lastUpdate = uint48(block.timestamp); + + emit WhitelistUpdated(oldRoot, newRoot, nonce); + } + + /** + * @notice Requests whitelist inclusion by paying a fee + * @dev Enforces 24-hour cooldown between requests and minimum fee of 10 TRX + * @return success True if request was successfully recorded + */ + function requestWhitelist() external payable whenNotPaused returns (bool success) { + if (msg.value < 
REQUEST_FEE) { + revert Errors.WhitelistRegistry__InsufficientFee(); + } + + uint256 lastRequest = uint48(s_lastRequestedTime[msg.sender]); + if (lastRequest != 0 && block.timestamp < lastRequest + REQUEST_COOLDOWN) { + revert Errors.WhitelistRegistry__RequestTooFrequent(); + } + + s_lastRequestedTime[msg.sender] = uint48(block.timestamp); + unchecked { + // Safe: REQUEST_FEE is small (10 TRX), total won't exceed uint128 max + s_totalCollectedFees += uint128(msg.value); + } + + emit WhitelistRequested(msg.sender); + + return true; + } + + /** + * @notice Withdraws all collected fees to caller + * @dev Can only be called by addresses with WITHDRAW_ROLE + */ + function withdraw() external { + if (!hasRole(WITHDRAW_ROLE, msg.sender)) { + revert Errors.WhitelistRegistry__NotAuthorized(); + } + + uint256 balance = s_totalCollectedFees; + + if (balance == 0) { + revert Errors.WhitelistRegistry__NothingToWithdraw(); + } + + s_totalCollectedFees = 0; + (bool success,) = msg.sender.call{value: balance}(""); + + if (!success) { + revert Errors.WhitelistRegistry__WithdrawFailed(); + } + + emit WithdrawSuccess(msg.sender, balance); + } + + /** + * @notice Adds a new authorized updater + * @dev Can only be called by DEFAULT_ADMIN_ROLE + * @param updater Address to authorize for Merkle root updates + */ + function addAuthorizedUpdater(address updater) external { + if (!hasRole(DEFAULT_ADMIN_ROLE, msg.sender)) { + revert Errors.WhitelistRegistry__NotAuthorized(); + } + if (updater == address(0)) { + revert Errors.WhitelistRegistry__InvalidInput(); + } + if (s_authorizedUpdaters[updater]) { + revert Errors.WhitelistRegistry__AlreadyAuthorized(); + } + + s_authorizedUpdaters[updater] = true; + emit AuthorizedUpdaterAdded(updater); + } + + /** + * @notice Removes an authorized updater + * @dev Can only be called by DEFAULT_ADMIN_ROLE + * @param updater Address to remove from authorized updaters + */ + function removeAuthorizedUpdater(address updater) external { + if 
(!hasRole(DEFAULT_ADMIN_ROLE, msg.sender)) { + revert Errors.WhitelistRegistry__NotAuthorized(); + } + if (updater == address(0)) { + revert Errors.WhitelistRegistry__InvalidInput(); + } + if (!s_authorizedUpdaters[updater]) { + revert Errors.WhitelistRegistry__NotAuthorized(); + } + + s_authorizedUpdaters[updater] = false; + emit AuthorizedUpdaterRemoved(updater); + } + + /** + * @notice Internal function to verify authorized updater signature + * @dev Validates nonce, signature, and signer authorization. Increments nonce on success + * @param newRoot Proposed new Merkle root + * @param nonce Nonce value from caller + * @param signature ECDSA signature to verify + */ + function _onlyAuthorizedUpdater(bytes32 newRoot, uint64 nonce, bytes calldata signature) internal { + if (newRoot == bytes32(0) || signature.length == 0) { + revert Errors.WhitelistRegistry__InvalidInput(); + } + + if (nonce != s_nonce) { + revert Errors.WhitelistRegistry__InvalidNonce(); + } + + bytes32 hash = keccak256(abi.encodePacked(newRoot, nonce, block.chainid, address(this))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + address signer = ECDSA.recover(signedHash, signature); + + if (signer == address(0)) { + revert Errors.WhitelistRegistry__InvalidInput(); + } + + if (!s_authorizedUpdaters[signer]) { + revert Errors.WhitelistRegistry__NotAuthorized(); + } + + s_nonce++; + } + + /** + * @notice Pauses the contract + * @dev Can only be called by DEFAULT_ADMIN_ROLE. 
Prevents state-changing operations + */ + function pause() external onlyRole(DEFAULT_ADMIN_ROLE) { + _pause(); + } + + /** + * @notice Unpauses the contract + * @dev Can only be called by DEFAULT_ADMIN_ROLE + */ + function unpause() external onlyRole(DEFAULT_ADMIN_ROLE) { + _unpause(); + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + /** + * @notice Returns the current Merkle root + * @return Current Merkle root hash + */ + function getCurrentMerkleRoot() external view returns (bytes32) { + return s_merkleRoot; + } + + /** + * @notice Returns total fees collected from whitelist requests + * @return Total collected fees in wei + */ + function getTotalCollectedFees() external view returns (uint128) { + return s_totalCollectedFees; + } + + /** + * @notice Returns the timestamp of the last Merkle root update + * @return Timestamp of last update + */ + function getLastUpdateTime() external view returns (uint48) { + return s_lastUpdate; + } + + /** + * @notice Returns the current nonce value + * @return Current nonce + */ + function getCurrentNonce() external view returns (uint64) { + return s_nonce; + } + + /** + * @notice Checks if an address is an authorized updater + * @param updater Address to check + * @return True if authorized, false otherwise + */ + function isAuthorizedUpdater(address updater) external view returns (bool) { + return s_authorizedUpdaters[updater]; + } + + /** + * @notice Returns the last time an address requested whitelist inclusion + * @param requester Address to check + * @return Timestamp of last request + */ + function getLastRequestedTime(address requester) external view returns (uint48) { + return uint48(s_lastRequestedTime[requester]); + } + + /** + * @notice Returns the cooldown period between whitelist requests + * @return Cooldown duration in seconds (24 hours) + */ + function 
getRequestCooldown() external pure returns (uint256) { + return REQUEST_COOLDOWN; + } + + /** + * @notice Returns the fee required for whitelist requests + * @return Fee amount in wei (10 TRX) + */ + function getRequestFee() external pure returns (uint256) { + return REQUEST_FEE; + } + + /** + * @notice Verifies if an address is whitelisted using Merkle proof + * @param proof Merkle proof array + * @param user Address to verify + * @return valid True if address is whitelisted, false otherwise + */ + function verifyWhitelist(bytes32[] calldata proof, address user) external view returns (bool valid) { + if (proof.length == 0 || user == address(0)) { + revert Errors.WhitelistRegistry__InvalidInput(); + } + + bytes32 leaf; + assembly { + mstore(0x0, user) + leaf := keccak256(0x0, 0x20) + } + valid = MerkleProof.verify(proof, s_merkleRoot, leaf); + } + + /** + * @notice Returns the WITHDRAW_ROLE identifier + * @return WITHDRAW_ROLE bytes32 identifier + */ + function getWithdrawRole() external pure returns (bytes32) { + return WITHDRAW_ROLE; + } + + /** + * @notice Returns the DEFAULT_ADMIN_ROLE identifier + * @return DEFAULT_ADMIN_ROLE bytes32 identifier + */ + function getDefaultAdminRole() external pure returns (bytes32) { + return DEFAULT_ADMIN_ROLE; + } + + /** + * @notice Checks if an address has DEFAULT_ADMIN_ROLE + * @param account Address to check + * @return True if address is admin, false otherwise + */ + function isAdmin(address account) external view returns (bool) { + return hasRole(DEFAULT_ADMIN_ROLE, account); + } + + /** + * @notice Checks if an address has WITHDRAW_ROLE + * @param account Address to check + * @return True if address can withdraw, false otherwise + */ + function isWithdrawer(address account) external view returns (bool) { + return hasRole(WITHDRAW_ROLE, account); + } +} diff --git a/contracts/src/interfaces/IFeeModule.sol b/contracts/src/interfaces/IFeeModule.sol new file mode 100644 index 0000000..a58abd2 --- /dev/null +++ 
b/contracts/src/interfaces/IFeeModule.sol @@ -0,0 +1,109 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Types} from "../libraries/Types.sol"; + +/** + * @title IFeeModule + * @notice Interface for fee calculation and application used by the settlement layer + * @dev Implementations should track fees per transfer and optionally a free-tier allowance + */ +interface IFeeModule { + /** + * @notice Calculate fee for a transaction and return details in a struct + * @param sender The address initiating the transfer + * @param txType The type of transaction (see Types.TxType) + * @param volume The volume of the transfer (used for volume-based fees) + * @param recipientCount Number of recipients for batched transfers (1 for single transfers) + * @return info A FeeInfo containing the fee and related info + */ + function calculateFee(address sender, Types.TxType txType, uint256 volume, uint256 recipientCount) + external + view + returns (Types.FeeInfo memory info); + + /** + * @notice Apply a previously calculated fee to a transfer + * @param sender The address paying the fee + * @param fee Fee amount to apply (in wei / smallest token unit) + * @param transferHash Unique hash identifying the transfer + * @param batchId Batch identifier + * @param txType The type of transaction (see Types.TxType) + */ + function applyFee(address sender, uint256 fee, bytes32 transferHash, uint64 batchId, Types.TxType txType) external; + + /** + * @notice Set the Settlement contract address + * @param settlement Address of the Settlement contract + */ + function setSettlement(address settlement) external; + + /** + * @notice Get the fee applied to a specific transfer + * @param transferHash Unique hash identifying the transfer + * @return fee The fee previously applied to the given transfer + */ + function getFeeOfTransaction(bytes32 transferHash) external view returns (uint256 fee); + + /** + * @notice Get total fees collected by the module + * @dev Returns 
CALCULATED fees for statistical purposes only + * @dev WARNING: Fees are NOT actually collected or transferred + */ + function getTotalFeesCollected() external view returns (uint256 total); + + /** + * @notice Get remaining number of free-tier transactions for a given sender + * @param sender Address to query + * @return remaining Number of free-tier transactions remaining + */ + function getRemainingFreeTierTransactions(address sender) external view returns (uint256 remaining); + + /** + * @notice Get the Settlement contract address + * @return Address of the Settlement contract + */ + function getSettlement() external view returns (address); + + /** + * @notice Get the contract owner + * @return Address of the owner + */ + function getOwner() external view returns (address); + + /** + * @notice Get the free transaction usage info for a user + * @param user Address to query + * @return FreeTxInfo struct containing day and count + */ + function getFreeTxUsage(address user) external view returns (Types.FreeTxInfo memory); + + /** + * @notice Get total fees collected for a specific batch + * @param batchId ID of the batch + * @return total Total fees for the batch in wei + */ + function getBatchTotalFees(uint64 batchId) external view returns (uint256 total); + + /** + * @notice Emitted when a fee is applied to a transfer + * @param sender The address who paid the fee + * @param fee The fee amount applied + * @param transferHash Transfer identifier for which the fee was applied + * @param batchId Batch identifier if this was part of a batched transfer + */ + event FeeApplied(address indexed sender, uint256 fee, bytes32 transferHash, uint64 batchId); + + /** + * @notice Emitted when a free-tier allowance is consumed + * @param sender The address that used a free-tier transaction + * @param remainingFreeTx Remaining free-tier transactions after use + */ + event FreeTierUsed(address indexed sender, uint256 remainingFreeTx); + + /** + * @notice Emitted when the settlement 
contract address is updated + * @param settlement The new settlement contract address + */ + event SettlementUpdated(address indexed settlement); +} diff --git a/contracts/src/interfaces/ISettlement.sol b/contracts/src/interfaces/ISettlement.sol new file mode 100644 index 0000000..dd1a35b --- /dev/null +++ b/contracts/src/interfaces/ISettlement.sol @@ -0,0 +1,240 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Types} from "../libraries/Types.sol"; + +/** + * @title ISettlement + * @notice Interface for batch submission and verified transfer execution + * @dev Admin setters must be access controlled + */ +interface ISettlement { + /** + * @notice Submit a batch by Merkle root + * @dev Validates limits, stores metadata, and emits BatchSubmitted + * @param merkleRoot Batch root + * @param txCount Transaction count + * @param batchSalt Salt used by backend to build the Merkle root + * @return success True if accepted + * @return batchId Assigned ID + */ + function submitBatch(bytes32 merkleRoot, uint32 txCount, uint64 batchSalt) external returns (bool, uint64); + + /** + * @notice Execute a proven transfer + * @dev Verifies txProof and an optional whitelistProof, applies fees, and emits TransferExecuted + * @param txProof Proof for txData + * @param whitelistProof Proof for sender whitelist + * @param txData Transfer data + * @return success True if executed + */ + function executeTransfer( + bytes32[] calldata txProof, + bytes32[] calldata whitelistProof, + Types.TransferData memory txData + ) external returns (bool); + + /** + * @notice Approve aggregator + * @dev Admin only. Emits AggregatorApproved + * @param aggregator Address to approve + */ + function approveAggregator(address aggregator) external; + + /** + * @notice Disapprove aggregator + * @dev Admin only. Emits AggregatorDisapproved + * @param aggregator Address to disapprove + */ + function disapproveAggregator(address aggregator) external; + + /** + * @notice Pause the contract + * @dev Admin only. 
Prevents executeTransfer calls + */ + function pause() external; + + /** + * @notice Unpause the contract + * @dev Admin only + */ + function unpause() external; + + /** + * @notice Set whitelist registry + * @dev Admin only. Emits WhitelistRegistryUpdated + * @param whitelistRegistry Registry address + */ + function setWhitelistRegistry(address whitelistRegistry) external; + + /** + * @notice Set fee module + * @dev Admin only. Emits FeeModuleUpdated + * @param feeModule Fee module address + */ + function setFeeModule(address feeModule) external; + + /** + * @notice Set max tx per batch + * @dev Admin only. Emits MaxTxPerBatchUpdated + * @param maxTx New limit + */ + function setMaxTxPerBatch(uint32 maxTx) external; + + /** + * @notice Set timelock duration + * @dev Admin only. Emits TimelockDurationUpdated + * @param duration Seconds + */ + function setTimelockDuration(uint48 duration) external; + + /** + * @notice Set token address + * @dev Admin only. Emits TokenUpdated + * @param tokenAddress Token address + */ + function setToken(address tokenAddress) external; + + /** + * @notice Get owner + * @return Address of owner + */ + function getOwner() external view returns (address); + + /** + * @notice Get whitelist registry + * @return Address of registry + */ + function getWhitelistRegistry() external view returns (address); + + /** + * @notice Get fee module + * @return Address of fee module + */ + function getFeeModule() external view returns (address); + + /** + * @notice Get token + * @return Address of token + */ + function getToken() external view returns (address); + + /** + * @notice Get current batch ID + * @return Current batch ID counter + */ + function getCurrentBatchId() external view returns (uint64); + + /** + * @notice Check approved aggregator + * @param aggregator Address to check + * @return True if approved + */ + function isApprovedAggregator(address aggregator) external view returns (bool); + + /** + * @notice Get max tx per batch + * @return 
maxTx Limit + */ + function getMaxTxPerBatch() external view returns (uint32); + + /** + * @notice Get timelock duration + * @return duration Seconds + */ + function getTimelockDuration() external view returns (uint48); + + /** + * @notice Get batch ID by root + * @param rootHash Batch root + * @return batchId ID + */ + function getBatchIdByRoot(bytes32 rootHash) external view returns (uint64); + + /** + * @notice Get batch by ID + * @param batchId Batch ID + * @return batch Stored metadata + */ + function getBatchById(uint64 batchId) external view returns (Types.Batch memory); + + /** + * @notice Check if transfer executed + * @param transferHash Transfer hash + * @return True if executed + */ + function isExecutedTransfer(bytes32 transferHash) external view returns (bool); + + /** + * @notice Get Merkle root by batch ID + * @param batchId Batch ID + * @return Merkle root of the batch + */ + function getRootByBatchId(uint64 batchId) external view returns (bytes32); + + /** + * @notice Check if contract is configured + * @return True if configured + */ + function isConfigured() external view returns (bool); + + /** + * @notice Emitted on batch submission + * @param batchId Assigned ID + * @param merkleRoot Batch root + * @param txCount Count + * @param timestamp Block time + */ + event BatchSubmitted(uint64 indexed batchId, bytes32 indexed merkleRoot, uint32 txCount, uint48 timestamp); + + /** + * @notice Emitted on transfer execution + * @param from Sender + * @param to Recipient + * @param amount Amount + * @param nonce Nonce + */ + event TransferExecuted(address indexed from, address indexed to, uint256 amount, uint64 nonce); + + /** + * @notice Emitted on whitelist registry update + * @param whitelistRegistry New address + */ + event WhitelistRegistryUpdated(address indexed whitelistRegistry); + + /** + * @notice Emitted on fee module update + * @param feeModule New address + */ + event FeeModuleUpdated(address indexed feeModule); + + /** + * @notice Emitted on 
aggregator approval + * @param aggregator Approved address + */ + event AggregatorApproved(address indexed aggregator); + + /** + * @notice Emitted on aggregator disapproval + * @param aggregator Disapproved address + */ + event AggregatorDisapproved(address indexed aggregator); + + /** + * @notice Emitted on max tx per batch update + * @param maxTx New limit + */ + event MaxTxPerBatchUpdated(uint32 indexed maxTx); + + /** + * @notice Emitted on timelock duration update + * @param duration New duration in seconds + */ + event TimelockDurationUpdated(uint48 indexed duration); + + /** + * @notice Emitted on token address update + * @param tokenAddress New token + */ + event TokenUpdated(address indexed tokenAddress); +} diff --git a/contracts/src/interfaces/IWhitelistRegistry.sol b/contracts/src/interfaces/IWhitelistRegistry.sol new file mode 100644 index 0000000..521e4b1 --- /dev/null +++ b/contracts/src/interfaces/IWhitelistRegistry.sol @@ -0,0 +1,175 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +/** + * @title IWhitelistRegistry + * @notice Registry used to manage and verify a Merkle-tree based whitelist + * @dev Implementations should allow updating the Merkle root (with authorization) + * and enable verification/requests against the stored root + */ +interface IWhitelistRegistry { + /** + * @notice Update the Merkle root used for whitelist proofs + * @dev A signature or other authorization proof may be supplied to prove the + * caller is allowed to update the root (implementation-specific) + * @param newRoot New Merkle root to set + * @param nonce Expected current nonce, used for replay protection + * @param signature Authorization signature or proof for the update (implementation-specific) + */ + function updateMerkleRoot(bytes32 newRoot, uint64 nonce, bytes calldata signature) external; + + /** + * @notice Request whitelist access (implementation may emit an event) + * @dev Implementations may require additional off-chain verification or queueing logic + * @return success True if the request was accepted + */ + function 
requestWhitelist() external payable returns (bool success); + + /** + * @notice Withdraw accumulated funds from the contract + * @dev Caller must have appropriate permissions (implementation-specific) + */ + function withdraw() external; + + /** + * @notice Add an authorized updater + * @dev Admin only Emits AuthorizedUpdaterAdded + * @param updater Address to authorize + */ + function addAuthorizedUpdater(address updater) external; + + /** + * @notice Remove an authorized updater + * @dev Admin only Emits AuthorizedUpdaterRemoved + * @param updater Address to remove authorization + */ + function removeAuthorizedUpdater(address updater) external; + + /** + * @notice Pause the contract + * @dev Admin only + */ + function pause() external; + + /** + * @notice Unpause the contract + * @dev Admin only + */ + function unpause() external; + + /** + * @notice Get the currently active Merkle root used for whitelist verification + * @return root Current Merkle root + */ + function getCurrentMerkleRoot() external view returns (bytes32 root); + + /** + * @notice Verify whether a given user is included in the whitelist + * @param proof Merkle proof (array of sibling hashes) proving inclusion + * @param user Address to verify + * @return valid True if the user is included according to the current merkle root + */ + function verifyWhitelist(bytes32[] calldata proof, address user) external view returns (bool valid); + + /** + * @notice Get total collected fees + * @return Total fees collected + */ + function getTotalCollectedFees() external view returns (uint128); + + /** + * @notice Get last update time + * @return Last update timestamp + */ + function getLastUpdateTime() external view returns (uint48); + + /** + * @notice Get current nonce + * @return Current nonce value + */ + function getCurrentNonce() external view returns (uint64); + + /** + * @notice Check if address is authorized updater + * @param updater Address to check + * @return True if authorized + */ + function 
isAuthorizedUpdater(address updater) external view returns (bool); + + /** + * @notice Get last requested time for a requester + * @param requester Address to check + * @return Last request timestamp + */ + function getLastRequestedTime(address requester) external view returns (uint48); + + /** + * @notice Get request cooldown period + * @return Cooldown duration in seconds + */ + function getRequestCooldown() external pure returns (uint256); + + /** + * @notice Get request fee amount + * @return Fee amount required for requests + */ + function getRequestFee() external pure returns (uint256); + + /** + * @notice Get withdraw role identifier + * @return Withdraw role bytes32 identifier + */ + function getWithdrawRole() external pure returns (bytes32); + + /** + * @notice Get default admin role identifier + * @return Default admin role bytes32 identifier + */ + function getDefaultAdminRole() external pure returns (bytes32); + + /** + * @notice Check if account has admin role + * @param account Address to check + * @return True if account is admin + */ + function isAdmin(address account) external view returns (bool); + + /** + * @notice Check if account has withdraw role + * @param account Address to check + * @return True if account can withdraw + */ + function isWithdrawer(address account) external view returns (bool); + + /** + * @notice Emitted when the whitelist Merkle root is updated + * @param oldRoot The previous Merkle root + * @param newRoot The new Merkle root that was set + * @param nonce The nonce used in the update + */ + event WhitelistUpdated(bytes32 oldRoot, bytes32 newRoot, uint64 nonce); + + /** + * @notice Emitted when an address requests to be added to the whitelist + * @param requester Address that requested whitelist access + */ + event WhitelistRequested(address indexed requester); + + /** + * @notice Emitted when funds are withdrawn from the contract + * @param requester Address that initiated the withdrawal + * @param amount Amount of funds 
withdrawn + */ + event WithdrawSuccess(address indexed requester, uint256 amount); + /** + * @notice Emitted when a new authorized updater is added + * @param updater Address of the authorized updater added + */ + event AuthorizedUpdaterAdded(address indexed updater); + + /** + * @notice Emitted when an authorized updater is removed + * @param updater Address of the authorized updater removed + */ + event AuthorizedUpdaterRemoved(address indexed updater); +} diff --git a/contracts/src/libraries/Errors.sol b/contracts/src/libraries/Errors.sol new file mode 100644 index 0000000..7d27e1e --- /dev/null +++ b/contracts/src/libraries/Errors.sol @@ -0,0 +1,45 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +/** + * @title Errors + * @notice Custom error definitions for the protocol + * @dev Using custom errors instead of require strings saves gas + */ +library Errors { + error WhitelistRegistry__NotAuthorized(); + error WhitelistRegistry__InsufficientFee(); + error WhitelistRegistry__RequestTooFrequent(); + error WhitelistRegistry__InvalidNonce(); + error WhitelistRegistry__WithdrawFailed(); + error WhitelistRegistry__InvalidInput(); + error WhitelistRegistry__AlreadyAuthorized(); + error WhitelistRegistry__DuplicateUpdate(); + error WhitelistRegistry__NothingToWithdraw(); + + error FeeModule__InvalidInput(); + error FeeModule__InvalidRecipientCount(); + error FeeModule__FreeTierLimitExceeded(); + error FeeModule__InvalidTxType(); + error FeeModule__NotAuthorized(); + error FeeModule__AlreadySettlement(); + + error Settlement__AggregatorNotApproved(); + error Settlement__InvalidInput(); + error Settlement__AlreadyRegistry(); + error Settlement__BatchLocked(); + error Settlement__BatchAlreadySubmitted(); + error Settlement__InvalidBatch(); + error Settlement__InvalidMerkleProof(); + error Settlement__AlreadyFeeModule(); + error Settlement__AlreadySet(); + error Settlement__AlreadyTimelockDuration(); + error Settlement__AlreadyToken(); + error 
Settlement__TransferAlreadyExecuted(); + error Settlement__NotWhitelisted(); + error Settlement__InsufficientBalance(); + error Settlement__InsufficientAllowance(); + error Settlement__AlreadyAggregator(); + error Settlement__NotConfigured(); +} diff --git a/contracts/src/libraries/Types.sol b/contracts/src/libraries/Types.sol new file mode 100644 index 0000000..344b357 --- /dev/null +++ b/contracts/src/libraries/Types.sol @@ -0,0 +1,68 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +/** + * @title Types + * @notice Core types and data structures used across the protocol + */ +library Types { + /** + * @notice Transaction processing modes that determine fee calculation + * @dev Single-recipient: DELAYED (standard), INSTANT (premium) + * Multi-recipient: BATCHED (per-recipient fee) + * Special: FREE_TIER (limited daily quota) + */ + enum TxType { + DELAYED, // Standard processing with lower fee + INSTANT, // Premium processing with higher fee + BATCHED, // Multi-recipient with per-recipient fee + FREE_TIER // Uses daily free transaction quota + } + + /** + * @notice Fee calculation result + * @param fee Amount in smallest token unit (e.g., wei/sun) + * @param txType Actual transaction type applied + * @param freeQuota Remaining free transactions for the day + */ + struct FeeInfo { + uint256 fee; + TxType txType; + uint256 freeQuota; + } + + /** + * @notice Free-tier transaction usage tracking + * @param count Number of free transactions used today + * @param day Last day (timestamp) when free transactions were used + */ + struct FreeTxInfo { + uint128 count; + uint128 day; + } + + /** + * @notice Batch processing information + */ + struct Batch { + bytes32 merkleRoot; // Root of merkle tree containing transactions + uint48 timestamp; // Batch creation time + uint32 txCount; // Number of transactions in batch + uint48 unlockTime; // Time when batch can be processed + uint64 batchSalt; // Salt used by
backend to build merkle root + } + + /** + * @notice Individual transfer details + */ + struct TransferData { + address from; // Sender address + address to; // Recipient address + uint256 amount; // Transfer amount + uint64 nonce; // Unique identifier per sender + uint48 timestamp; // Transfer initiation time + uint32 recipientCount; // Number of recipients (used for BATCHED fee calc) + uint64 batchId; // Batch identifier + TxType txType; // Processing mode + } +} diff --git a/contracts/test/integration/FeeModuleIntegration.t.sol b/contracts/test/integration/FeeModuleIntegration.t.sol new file mode 100644 index 0000000..af8ca15 --- /dev/null +++ b/contracts/test/integration/FeeModuleIntegration.t.sol @@ -0,0 +1,329 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test} from "forge-std/Test.sol"; +import {IntegrationDeployHelpers} from "../utils/IntegrationDeployHelpers.sol"; +import {TestConstants as TC} from "../utils/TestConstants.sol"; + +import {Types} from "../../src/libraries/Types.sol"; +import {Errors} from "../../src/libraries/Errors.sol"; + +contract FeeModuleIntegrationTest is Test, IntegrationDeployHelpers { + // DEFAULT_SENDER = 0x1804c8AB1F12E6bbf3894d4083f33e07309d1f38; + function setUp() public { + _initUser(); + _initUser2(); + _initFeeModule(); + _initSettlement(); + + vm.prank(DEFAULT_SENDER); + feeModule.setSettlement(address(settlement)); + } + + /* -------------------------------------------------------------------------- */ + /* INITIAL STATE */ + /* -------------------------------------------------------------------------- */ + + function test_Constructor_InitialValues() public view { + assertNotEq(feeModule.getSettlement(), address(0)); + assertEq(feeModule.getSettlement(), address(settlement)); + assertEq(address(feeModule.owner()), DEFAULT_SENDER); + } + + /* -------------------------------------------------------------------------- */ + /* CALCULATIONS */ + /* 
-------------------------------------------------------------------------- */ + + function test_CalculateFee_FreeQuota_NoChanges() public view { + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo.fee, 0); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.FREE_TIER)); + assertEq(feeInfo.freeQuota, TC.FREE_TX_AMOUNT); + + for (uint256 i = 0; i < TC.FREE_TX_AMOUNT; i++) { + feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo.fee, 0); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.FREE_TIER)); + assertEq(feeInfo.freeQuota, TC.FREE_TX_AMOUNT); + } + } + + function test_CalculateFee_Batched_MultipleRecipients() public view { + uint256 recipients = 5; + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.BATCHED, TC.VOLUME, recipients); + assertEq(feeInfo.fee, TC.BATCH_FEE * recipients); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.BATCHED)); + } + + function test_CalculateFee_Instant() public view { + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 1); + assertEq(feeInfo.fee, TC.INSTANT_FEE); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.INSTANT)); + } + + function test_CalculateFee_FreeTier_ResetsNextDay() public { + for (uint256 i = 0; i < TC.FREE_TX_AMOUNT; i++) { + feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + vm.prank(address(settlement)); + feeModule.applyFee(user, 0, keccak256(abi.encodePacked(i)), 1, Types.TxType.FREE_TIER); + } + + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo.fee, TC.BASE_FEE); + + vm.warp(block.timestamp + 1 days); + + feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo.fee, 0); + assertEq(feeInfo.freeQuota, TC.FREE_TX_AMOUNT); + } + + function test_CalculateFee_LargeVolume() public view { + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.LARGE_VOLUME, 1); + assertEq(feeInfo.fee, 0); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.DELAYED)); + assertEq(feeInfo.freeQuota, TC.FREE_TX_AMOUNT); + + feeInfo = feeModule.calculateFee(user, Types.TxType.INSTANT, TC.LARGE_VOLUME, 1); + assertEq(feeInfo.fee, 0); + + feeInfo = feeModule.calculateFee(user, Types.TxType.BATCHED, TC.LARGE_VOLUME, 2); + assertEq(feeInfo.fee, 0); + } + + function test_CalculateFee_Batched_SingleRecipient_Reverts() public { + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + feeModule.calculateFee(user, Types.TxType.BATCHED, TC.VOLUME, 1); + } + + function test_CalculateFee_NonBatched_MultipleRecipients_Reverts() public { + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 5); + + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 3); + + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + feeModule.calculateFee(user, Types.TxType.FREE_TIER, TC.VOLUME, 2); + } + + function test_CalculateFee_Batched_LargeVolume() public view { + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.BATCHED, TC.LARGE_VOLUME, 3); + assertEq(feeInfo.fee, 0); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.BATCHED)); + } + + /*
-------------------------------------------------------------------------- */ + /* applyFee */ + /* -------------------------------------------------------------------------- */ + + function test_ApplyFee_UpdatesTotalFees() public { + bytes32 transferHash = keccak256(abi.encodePacked("transfer1")); + uint64 batchId = 1; + uint256 fee = TC.BASE_FEE; + + vm.prank(address(settlement)); + feeModule.applyFee(user, fee, transferHash, batchId, Types.TxType.DELAYED); + + assertEq(feeModule.getFeeOfTransaction(transferHash), fee); + assertEq(feeModule.getTotalFeesCollected(), fee); + assertEq(feeModule.getBatchTotalFees(batchId), fee); + } + + function test_ApplyFee_Multiple_AccumulatesCorrectly() public { + bytes32 hash1 = keccak256(abi.encodePacked("tx1")); + bytes32 hash2 = keccak256(abi.encodePacked("tx2")); + uint64 batchId = 1; + + vm.startPrank(address(settlement)); + feeModule.applyFee(user, TC.BASE_FEE, hash1, batchId, Types.TxType.INSTANT); + feeModule.applyFee(user, TC.INSTANT_FEE, hash2, batchId, Types.TxType.INSTANT); + vm.stopPrank(); + + assertEq(feeModule.getTotalFeesCollected(), TC.BASE_FEE + TC.INSTANT_FEE); + assertEq(feeModule.getBatchTotalFees(batchId), TC.BASE_FEE + TC.INSTANT_FEE); + } + + /* -------------------------------------------------------------------------- */ + /* FULL FLOW SCENARIOS */ + /* -------------------------------------------------------------------------- */ + + function test_FullFlow_CalculateAndApplyFee() public { + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 1); + assertEq(feeInfo.fee, TC.INSTANT_FEE); + + bytes32 transferHash = keccak256(abi.encodePacked("transfer1")); + uint64 batchId = 1; + + vm.prank(address(settlement)); + feeModule.applyFee(user, feeInfo.fee, transferHash, batchId, Types.TxType.INSTANT); + + assertEq(feeModule.getFeeOfTransaction(transferHash), TC.INSTANT_FEE); + assertEq(feeModule.getTotalFeesCollected(), TC.INSTANT_FEE); + 
assertEq(feeModule.getBatchTotalFees(batchId), TC.INSTANT_FEE); + } + + function test_MultipleUsers_IndependentFreeTier() public { + // user2 is initialised in setUp via _initUser2(); do not shadow it with a local declaration + vm.startPrank(address(settlement)); + + for (uint256 i = 0; i < 5; i++) { + Types.FeeInfo memory feeInfo1 = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo1.fee, 0); + assertEq(uint256(feeInfo1.txType), uint256(Types.TxType.FREE_TIER)); + + bytes32 txHash = keccak256(abi.encodePacked(user, i)); + feeModule.applyFee(user, 0, txHash, 1, Types.TxType.FREE_TIER); + } + + Types.FeeInfo memory feeInfo2 = feeModule.calculateFee(user2, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo2.fee, 0); + assertEq(uint256(feeInfo2.txType), uint256(Types.TxType.FREE_TIER)); + assertEq(feeInfo2.freeQuota, TC.FREE_TX_AMOUNT); + + bytes32 txHash2 = keccak256(abi.encodePacked(user2, uint256(0))); + feeModule.applyFee(user2, 0, txHash2, 2, Types.TxType.FREE_TIER); + + assertEq(feeModule.getRemainingFreeTierTransactions(user), TC.FREE_TX_AMOUNT - 5); + assertEq(feeModule.getRemainingFreeTierTransactions(user2), TC.FREE_TX_AMOUNT - 1); + + vm.stopPrank(); + } + + function test_FreeQuota_AfterApply() public { + Types.FeeInfo memory feeInfo; + + for (uint256 i = 0; i < TC.FREE_TX_AMOUNT; i++) { + feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + vm.prank(address(settlement)); + feeModule.applyFee(user, feeInfo.fee, keccak256(abi.encodePacked(i)), 1, feeInfo.txType); + + assertEq(feeModule.getRemainingFreeTierTransactions(user), TC.FREE_TX_AMOUNT - (i + 1)); + } + + feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo.fee, TC.BASE_FEE); + } + + function test_MultipleBatches_SeparateFeeAccumulation() public { + bytes32 hash1 = keccak256(abi.encodePacked("tx1")); + bytes32 hash2 = keccak256(abi.encodePacked("tx2")); + bytes32 hash3 = keccak256(abi.encodePacked("tx3")); + + vm.startPrank(address(settlement)); + feeModule.applyFee(user, TC.BASE_FEE, hash1, 1, Types.TxType.DELAYED); + feeModule.applyFee(user, TC.INSTANT_FEE, hash2, 1, Types.TxType.INSTANT); + feeModule.applyFee(user, TC.BATCH_FEE * 3, hash3, 2, Types.TxType.BATCHED); + vm.stopPrank(); + + assertEq(feeModule.getBatchTotalFees(1), TC.BASE_FEE + TC.INSTANT_FEE); + assertEq(feeModule.getBatchTotalFees(2), TC.BATCH_FEE * 3); + assertEq(feeModule.getTotalFeesCollected(), TC.BASE_FEE + TC.INSTANT_FEE + TC.BATCH_FEE * 3); + } + + function test_LargeVolumeUser_ThenSmallVolume() public view { + Types.FeeInfo memory feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.LARGE_VOLUME, 1); + assertEq(feeInfo.fee, 0); + assertEq(feeInfo.freeQuota, TC.FREE_TX_AMOUNT); + + feeInfo = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo.fee, 0); + assertEq(uint256(feeInfo.txType), uint256(Types.TxType.FREE_TIER)); + assertEq(feeInfo.freeQuota, TC.FREE_TX_AMOUNT); + } + + function test_FreeTier_AcrossDayBoundary() public { + for (uint256 i = 0; i < 3; i++) { + feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + } + assertEq(feeModule.getRemainingFreeTierTransactions(user), TC.FREE_TX_AMOUNT); + + vm.prank(address(settlement)); + feeModule.applyFee(user, 0, keccak256(abi.encodePacked("tx1")), 1, Types.TxType.FREE_TIER); + assertEq(feeModule.getRemainingFreeTierTransactions(user), TC.FREE_TX_AMOUNT - 1); + + vm.warp(block.timestamp + 12 hours); + assertEq(feeModule.getRemainingFreeTierTransactions(user), TC.FREE_TX_AMOUNT - 1); + + vm.warp(block.timestamp + 13 hours); // total 25 hours + assertEq(feeModule.getRemainingFreeTierTransactions(user), TC.FREE_TX_AMOUNT); + } + + function test_CalculateFee_FreeTierLimitExceeded() public { + vm.startPrank(address(settlement)); + for (uint256 i = 0; i < TC.FREE_TX_AMOUNT; i++) { + Types.FeeInfo memory feeInfo1 = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.VOLUME, 1); + assertEq(feeInfo1.fee, 0); + assertEq(uint256(feeInfo1.txType), uint256(Types.TxType.FREE_TIER)); + +
bytes32 txHash = keccak256(abi.encodePacked(i)); + feeModule.applyFee(user, 0, txHash, 1, Types.TxType.FREE_TIER); + } + + vm.expectRevert(Errors.FeeModule__FreeTierLimitExceeded.selector); + feeModule.calculateFee(user, Types.TxType.FREE_TIER, TC.VOLUME, 1); + vm.stopPrank(); + } + + function test_ApplyFee_FreeTierLimitExceeded() public { + for (uint256 i = 0; i < TC.FREE_TX_AMOUNT; i++) { + vm.prank(address(settlement)); + feeModule.applyFee(user, 0, keccak256(abi.encodePacked(i)), 1, Types.TxType.FREE_TIER); + } + + vm.prank(address(settlement)); + vm.expectRevert(Errors.FeeModule__FreeTierLimitExceeded.selector); + feeModule.applyFee(user, 0, keccak256("overflow"), 1, Types.TxType.FREE_TIER); + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + function test_Getters_All() public { + bytes32 txHash1 = keccak256("tx1"); + bytes32 txHash2 = keccak256("tx2"); + uint64 batchId1 = 1; + uint64 batchId2 = 2; + + vm.startPrank(address(settlement)); + feeModule.applyFee(user, TC.BASE_FEE, txHash1, 1, Types.TxType.DELAYED); + feeModule.applyFee(user, TC.INSTANT_FEE, txHash2, 2, Types.TxType.INSTANT); + vm.stopPrank(); + + feeModule.calculateFee(user2, Types.TxType.DELAYED, TC.VOLUME, 1); + vm.prank(address(settlement)); + feeModule.applyFee(user2, 0, keccak256("freeTx"), 3, Types.TxType.FREE_TIER); + + assertEq(feeModule.getSettlement(), address(settlement)); + assertEq(feeModule.getOwner(), DEFAULT_SENDER); + + Types.FreeTxInfo memory usage = feeModule.getFreeTxUsage(user2); + assertEq(usage.count, 1); + assertEq(usage.day, block.timestamp / 1 days); + + assertEq(feeModule.getFeeOfTransaction(txHash1), TC.BASE_FEE); + assertEq(feeModule.getFeeOfTransaction(txHash2), TC.INSTANT_FEE); + + assertEq(feeModule.getTotalFeesCollected(), TC.BASE_FEE + TC.INSTANT_FEE); + + assertEq(feeModule.getBatchTotalFees(1), TC.BASE_FEE); + 
assertEq(feeModule.getBatchTotalFees(2), TC.INSTANT_FEE); + assertEq(feeModule.getBatchTotalFees(3), 0); + + uint256 remaining = feeModule.getRemainingFreeTierTransactions(user2); + assertEq(remaining, TC.FREE_TX_AMOUNT - 1); + } +} diff --git a/contracts/test/integration/RegistryIntegration.t.sol b/contracts/test/integration/RegistryIntegration.t.sol new file mode 100644 index 0000000..5907239 --- /dev/null +++ b/contracts/test/integration/RegistryIntegration.t.sol @@ -0,0 +1,547 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test} from "forge-std/Test.sol"; +import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol"; +import {MessageHashUtils} from "@openzeppelin/contracts/utils/cryptography/MessageHashUtils.sol"; + +import {IWhitelistRegistry} from "../../src/interfaces/IWhitelistRegistry.sol"; +import {MaliciousReceiver} from "../mocks/MaliciousReceiver.sol"; + +import {IntegrationDeployHelpers} from "../utils/IntegrationDeployHelpers.sol"; +import {TestConstants as TC} from "../utils/TestConstants.sol"; +import {Errors} from "../../src/libraries/Errors.sol"; + +contract RegistryIntegrationTest is Test, IntegrationDeployHelpers { + using MessageHashUtils for bytes32; + + address updater = 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266; + uint256 updaterPrivKey = 0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80; + + address user1 = 0x70997970C51812dc3A010C7d01b50e0d17dc79C8; + + // ------------------ Whitelist Merkle Test Data ------------------ + + bytes32 MERKLE_ROOT = 0x813e418bccb26456db980833fb3f2d171569401dca4ddd31ba78b99f5d99e242; + + bytes32[] private PROOF_USER1; + bytes32[] private PROOF_USER2; + bytes32[] private PROOF_USER3; + + function setUp() public { + _initRegistry(); + _initUser(); + + PROOF_USER1 = new bytes32[](2); + PROOF_USER1[0] = 0x6a65260b54e189b9d496c6e25ab6e91aef04672387dc6e4b559dd6f6335197a6; + PROOF_USER1[1] = 0x4044ec0d82f345979063e37b899875d71b453c276b360523e82b432c04ea3f17; + + 
PROOF_USER2 = new bytes32[](2); + PROOF_USER2[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + PROOF_USER2[1] = 0x4044ec0d82f345979063e37b899875d71b453c276b360523e82b432c04ea3f17; + + PROOF_USER3 = new bytes32[](1); + PROOF_USER3[0] = 0x1a5324f5a19c274c2f9bfcfcdefcefc0ec65fef7db5a54fe78fca8007b4fe93a; + } + + function _sign(bytes32 mesHash, uint256 privKey) internal pure returns (bytes memory signature) { + (uint8 v, bytes32 r, bytes32 s) = vm.sign(privKey, mesHash); + signature = abi.encodePacked(r, s, v); + } + + function _signMerkleRoot(uint256 privKey, uint64 nonce) public view returns (bytes memory signature) { + bytes32 messageHash = keccak256(abi.encodePacked(MERKLE_ROOT, nonce, block.chainid, address(registry))); + bytes32 ethSignedMessageHash = messageHash.toEthSignedMessageHash(); + signature = _sign(ethSignedMessageHash, privKey); + } + + /* -------------------------------------------------------------------------- */ + /* INITIAL STATE */ + /* -------------------------------------------------------------------------- */ + + function test_Constructor_InitialValues() public view { + assertTrue(registry.hasRole(TC.DEFAULT_ADMIN_ROLE, updater)); + assertTrue(registry.hasRole(TC.WITHDRAW_ROLE, updater)); + assertTrue(registry.isAuthorizedUpdater(updater)); + + uint256 fee = registry.getRequestFee(); + assertEq(fee, TC.REQUEST_FEE); + + uint256 cooldown = registry.getRequestCooldown(); + assertEq(cooldown, TC.REQUEST_COOLDOWN); + } + + /* -------------------------------------------------------------------------- */ + /* updateMerkleRoot */ + /* -------------------------------------------------------------------------- */ + + function test_UpdateMerkleRoot() public { + uint64 currentNonce = registry.getCurrentNonce(); + bytes32 oldRoot = registry.getCurrentMerkleRoot(); + + vm.prank(updater); + bytes memory signature = _signMerkleRoot(updaterPrivKey, currentNonce); + + vm.expectEmit(false, false, false, true); + emit 
IWhitelistRegistry.WhitelistUpdated(oldRoot, MERKLE_ROOT, currentNonce); + registry.updateMerkleRoot(MERKLE_ROOT, currentNonce, signature); + + bytes32 currentRoot = registry.getCurrentMerkleRoot(); + uint48 lastUpdate = uint48(registry.getLastUpdateTime()); + assertEq(currentRoot, MERKLE_ROOT); + assertEq(lastUpdate, uint48(block.timestamp)); + } + + function test_UpdateMerkleRoot_DuplicateUpdate() public { + uint64 currentNonce = registry.getCurrentNonce(); + + vm.startPrank(updater); + bytes memory signature = _signMerkleRoot(updaterPrivKey, currentNonce); + + registry.updateMerkleRoot(MERKLE_ROOT, uint64(currentNonce), signature); + + uint64 newNonce = registry.getCurrentNonce(); + bytes memory newSignature = _signMerkleRoot(updaterPrivKey, newNonce); + + vm.expectRevert(Errors.WhitelistRegistry__DuplicateUpdate.selector); + registry.updateMerkleRoot(MERKLE_ROOT, newNonce, newSignature); + vm.stopPrank(); + } + + function test_UpdateMerkleRoot_InvalidNonce() public { + uint256 currentNonce = registry.getCurrentNonce(); + uint64 randomNonce = 2131; + assertNotEq(currentNonce, randomNonce); + + vm.startPrank(updater); + bytes memory signature = _signMerkleRoot(updaterPrivKey, randomNonce); + + vm.expectRevert(Errors.WhitelistRegistry__InvalidNonce.selector); + registry.updateMerkleRoot(MERKLE_ROOT, randomNonce, signature); + vm.stopPrank(); + } + + function test_UpdateMerkleRoot_InvalidUpdater() public { + uint64 currentNonce = registry.getCurrentNonce(); + + bytes memory signature = _signMerkleRoot(userPrivKey, currentNonce); + + vm.prank(user); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.updateMerkleRoot(MERKLE_ROOT, currentNonce, signature); + } + + function test_UpdateMerkleRoot_RightKeyRandomUpdater() public { + uint64 currentNonce = registry.getCurrentNonce(); + + bytes memory signature = _signMerkleRoot(updaterPrivKey, currentNonce); + + vm.prank(user); + registry.updateMerkleRoot(MERKLE_ROOT, currentNonce, signature); + 
} + + // replay-attack protection: an old signature must not be accepted with the current nonce + function test_UpdateMerkleRoot_OldSignatureWithCurrentNonce() public { + // set the first root + uint64 currentNonce = registry.getCurrentNonce(); + bytes memory signature = _signMerkleRoot(updaterPrivKey, currentNonce); + vm.prank(updater); + registry.updateMerkleRoot(MERKLE_ROOT, currentNonce, signature); + bytes32 currentRoot = registry.getCurrentMerkleRoot(); + assertEq(currentRoot, MERKLE_ROOT); + + // set a second root + bytes32 newRoot = keccak256(abi.encodePacked("another merkle root")); + uint64 newNonce = registry.getCurrentNonce(); + bytes32 messageHash = keccak256(abi.encodePacked(newRoot, newNonce, block.chainid, address(registry))); + bytes32 ethSignedMessageHash = messageHash.toEthSignedMessageHash(); + bytes memory signatureTwo = _sign(ethSignedMessageHash, updaterPrivKey); + + vm.prank(updater); + registry.updateMerkleRoot(newRoot, newNonce, signatureTwo); + bytes32 updatedRoot = registry.getCurrentMerkleRoot(); + assertEq(updatedRoot, newRoot); + + // try to reuse the old signature from the first update + uint64 finalNonce = registry.getCurrentNonce(); + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.updateMerkleRoot(MERKLE_ROOT, finalNonce, signature); + assertEq(registry.getCurrentMerkleRoot(), newRoot); + } + + function test_UpdateRoot_InvalidNonce_OldNonce() public { + test_UpdateMerkleRoot(); + + bytes32 newRoot = keccak256(abi.encodePacked("attempted replay root")); + uint64 oldNonce = 0; + + bytes32 hash = keccak256(abi.encodePacked(newRoot, oldNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signature = _sign(signedHash, updaterPrivKey); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__InvalidNonce.selector); + registry.updateMerkleRoot(newRoot, oldNonce, signature); + } + + function test_UpdateRoot_InvalidNonce_FutureNonce() public { + uint64 currentNonce = registry.getCurrentNonce();
+ + bytes32 newRoot = keccak256(abi.encodePacked("future nonce root")); + uint64 futureNonce = currentNonce + 1; + + bytes32 hash = keccak256(abi.encodePacked(newRoot, futureNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signature = _sign(signedHash, updaterPrivKey); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__InvalidNonce.selector); + registry.updateMerkleRoot(newRoot, futureNonce, signature); + } + + function test_UpdateMerkleRoot_WhitelistVerification() public { + uint64 currentNonce = registry.getCurrentNonce(); + bytes memory signature = _signMerkleRoot(updaterPrivKey, currentNonce); + + registry.updateMerkleRoot(MERKLE_ROOT, currentNonce, signature); + + bool isValidBefore = registry.verifyWhitelist(PROOF_USER1, user1); + assertTrue(isValidBefore); + + bytes32 newRoot = 0x5e287fa07343625f048462384a5432c590d780ed2c5f765210ef0e2e3ebddcfe; + uint64 newNonce = registry.getCurrentNonce(); + bytes32 hash = keccak256(abi.encodePacked(newRoot, newNonce, block.chainid, address(registry))); + bytes memory newSig = _sign(hash.toEthSignedMessageHash(), updaterPrivKey); + + registry.updateMerkleRoot(newRoot, newNonce, newSig); + + bool isValidAfter = registry.verifyWhitelist(PROOF_USER1, user1); + assertFalse(isValidAfter); + } + + /* -------------------------------------------------------------------------- */ + /* requestWhitelist */ + /* -------------------------------------------------------------------------- */ + + function test_RequestWhitelist_Lifecycle() public { + vm.deal(user, 100 ether); + + vm.startPrank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + uint256 firstRequestTime = registry.getLastRequestedTime(user); + assertEq(firstRequestTime, block.timestamp); + vm.stopPrank(); + + vm.warp(block.timestamp + TC.REQUEST_COOLDOWN - 1 seconds); + vm.startPrank(user); + vm.expectRevert(Errors.WhitelistRegistry__RequestTooFrequent.selector); + 
registry.requestWhitelist{value: TC.REQUEST_FEE}(); + vm.stopPrank(); + + vm.warp(block.timestamp + 2 seconds); + + vm.startPrank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + uint256 secondRequestTime = registry.getLastRequestedTime(user); + assertEq(secondRequestTime, block.timestamp); + assertTrue(secondRequestTime > firstRequestTime); + vm.stopPrank(); + + assertEq(registry.getTotalCollectedFees(), TC.REQUEST_FEE * 2); + } + + function test_RequestWhitelist_CooldownResetSuccess() public { + uint256 requestFee = registry.getRequestFee(); + + vm.deal(user, requestFee * 2); + + vm.prank(user); + registry.requestWhitelist{value: requestFee}(); + + uint256 cooldown = registry.getRequestCooldown(); + vm.warp(block.timestamp + cooldown + 1); + + uint256 totalCollectedFeesBefore = registry.getTotalCollectedFees(); + + uint256 blockTimestampBefore = block.timestamp; + + vm.prank(user); + registry.requestWhitelist{value: requestFee}(); + + assertEq(registry.getLastRequestedTime(user), blockTimestampBefore); + assertEq(registry.getTotalCollectedFees(), totalCollectedFeesBefore + requestFee); + } + + function test_RequestWhitelist_EconomicFlow() public { + address user2 = makeAddr("user2"); + vm.deal(user, 10 ether); + vm.deal(user2, 10 ether); + + vm.prank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + uint256 overpayment = TC.REQUEST_FEE * 2; + vm.prank(user2); + registry.requestWhitelist{value: overpayment}(); + + uint256 expectedTotal = TC.REQUEST_FEE + overpayment; + assertEq(registry.getTotalCollectedFees(), expectedTotal); + assertEq(address(registry).balance, expectedTotal); + + uint256 adminBalanceBefore = updater.balance; + + vm.prank(updater); + registry.withdraw(); + + assertEq(address(registry).balance, 0); + assertEq(updater.balance, adminBalanceBefore + expectedTotal); + assertEq(registry.getTotalCollectedFees(), 0); + } + + function test_RequestWhitelist_IndependentCooldowns() public { + address user2 = makeAddr("user2"); 
+ vm.deal(user, 10 ether); + vm.deal(user2, 10 ether); + + vm.prank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + vm.warp(block.timestamp + 1); + + vm.prank(user); + vm.expectRevert(Errors.WhitelistRegistry__RequestTooFrequent.selector); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + vm.prank(user2); + bool success = registry.requestWhitelist{value: TC.REQUEST_FEE}(); + assertTrue(success); + + assertEq(registry.getLastRequestedTime(user2), block.timestamp); + assertEq(registry.getLastRequestedTime(user), block.timestamp - 1); + } + + /* -------------------------------------------------------------------------- */ + /* withdraw */ + /* -------------------------------------------------------------------------- */ + + function test_Withdraw_FailsIfRecipientCannotReceive() public { + MaliciousReceiver malReceiver = new MaliciousReceiver(); + + vm.startPrank(updater); + registry.grantRole(TC.WITHDRAW_ROLE, address(malReceiver)); + vm.stopPrank(); + + vm.deal(user, 10 ether); + vm.prank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + vm.prank(address(malReceiver)); + vm.expectRevert(Errors.WhitelistRegistry__WithdrawFailed.selector); + registry.withdraw(); + } + + function test_Withdraw_MultipleCycles() public { + vm.deal(user, 50 ether); + address user2 = makeAddr("user2"); + vm.deal(user2, 50 ether); + + vm.prank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + uint256 balBefore1 = updater.balance; + vm.prank(updater); + registry.withdraw(); + assertEq(updater.balance, balBefore1 + TC.REQUEST_FEE); + assertEq(registry.getTotalCollectedFees(), 0); + + vm.warp(block.timestamp + 25 hours); + vm.prank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + vm.prank(user2); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + assertEq(registry.getTotalCollectedFees(), TC.REQUEST_FEE * 2); + + uint256 balBefore2 = updater.balance; + vm.prank(updater); + registry.withdraw(); + + 
assertEq(updater.balance, balBefore2 + (TC.REQUEST_FEE * 2)); + assertEq(address(registry).balance, 0); + } + + function test_Withdraw_RoleRotation() public { + address newManager = makeAddr("newManager"); + vm.deal(user, 10 ether); + + vm.prank(user); + registry.requestWhitelist{value: TC.REQUEST_FEE}(); + + vm.prank(newManager); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.withdraw(); + + vm.startPrank(updater); + + registry.grantRole(TC.WITHDRAW_ROLE, newManager); + registry.revokeRole(TC.WITHDRAW_ROLE, updater); + + vm.stopPrank(); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.withdraw(); + + uint256 managerBalanceBefore = newManager.balance; + + vm.prank(newManager); + registry.withdraw(); + + assertEq(newManager.balance, managerBalanceBefore + TC.REQUEST_FEE); + assertEq(registry.getTotalCollectedFees(), 0); + + assertTrue(registry.hasRole(TC.WITHDRAW_ROLE, newManager)); + assertFalse(registry.hasRole(TC.WITHDRAW_ROLE, updater)); + } + + /* -------------------------------------------------------------------------- */ + /* authorizedUpdater */ + /* -------------------------------------------------------------------------- */ + + function test_AddAuthorizedUpdater_CanSignUpdates() public { + (address newUpdater, uint256 newUpdaterPrivKey) = makeAddrAndKey("newUpdater"); + + vm.prank(updater); + registry.addAuthorizedUpdater(newUpdater); + + assertTrue(registry.isAuthorizedUpdater(newUpdater)); + + uint64 nonce = registry.getCurrentNonce(); + bytes32 messageHash = keccak256(abi.encodePacked(MERKLE_ROOT, nonce, block.chainid, address(registry))); + bytes memory signature = _sign(messageHash.toEthSignedMessageHash(), newUpdaterPrivKey); + + registry.updateMerkleRoot(MERKLE_ROOT, nonce, signature); + + assertEq(registry.getCurrentMerkleRoot(), MERKLE_ROOT); + } + + function test_AddAuthorizedUpdater_DoesNotGrantAdminOrWithdraw() public { + address newUpdater = 
makeAddr("newUpdater"); + + vm.prank(updater); + registry.addAuthorizedUpdater(newUpdater); + + assertFalse(registry.hasRole(TC.DEFAULT_ADMIN_ROLE, newUpdater)); + assertFalse(registry.hasRole(TC.WITHDRAW_ROLE, newUpdater)); + + vm.deal(address(registry), 1 ether); + + vm.prank(newUpdater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.withdraw(); + + address anotherGuy = makeAddr("anotherGuy"); + vm.prank(newUpdater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.addAuthorizedUpdater(anotherGuy); + } + + function test_AddAuthorizedUpdater_BothKeysWork() public { + (address newUpdater, uint256 newUpdaterPrivKey) = makeAddrAndKey("newUpdater"); + + vm.prank(updater); + registry.addAuthorizedUpdater(newUpdater); + + uint64 nonce1 = registry.getCurrentNonce(); + bytes32 root1 = keccak256(abi.encodePacked("root1")); + + bytes32 messageHash1 = keccak256(abi.encodePacked(root1, nonce1, block.chainid, address(registry))); + bytes memory sig1 = _sign(messageHash1.toEthSignedMessageHash(), updaterPrivKey); + + registry.updateMerkleRoot(root1, nonce1, sig1); + assertEq(registry.getCurrentMerkleRoot(), root1); + + uint64 nonce2 = registry.getCurrentNonce(); + bytes32 root2 = keccak256(abi.encodePacked("root2")); + + bytes32 messageHash2 = keccak256(abi.encodePacked(root2, nonce2, block.chainid, address(registry))); + bytes memory sig2 = _sign(messageHash2.toEthSignedMessageHash(), newUpdaterPrivKey); + + registry.updateMerkleRoot(root2, nonce2, sig2); + assertEq(registry.getCurrentMerkleRoot(), root2); + } + + function test_AuthorizedUpdater_Workflow() public { + assertEq(registry.getCurrentMerkleRoot(), bytes32(0)); + (address newUpdater, uint256 newUpdaterPrivKey) = makeAddrAndKey("newUpdater"); + + vm.prank(updater); + registry.addAuthorizedUpdater(newUpdater); + + uint64 nonce = registry.getCurrentNonce(); + bytes32 messageHash = keccak256(abi.encodePacked(MERKLE_ROOT, nonce, block.chainid, 
address(registry))); + bytes memory signature = _sign(messageHash.toEthSignedMessageHash(), newUpdaterPrivKey); + + vm.prank(newUpdater); + registry.updateMerkleRoot(MERKLE_ROOT, nonce, signature); + assertEq(registry.getCurrentMerkleRoot(), MERKLE_ROOT); + + vm.prank(TC.UPDATER); + registry.removeAuthorizedUpdater(newUpdater); + assertFalse(registry.isAuthorizedUpdater(newUpdater)); + + uint64 newNonce = registry.getCurrentNonce(); + bytes32 newRoot = keccak256(abi.encodePacked("new root")); + bytes32 newMessageHash = keccak256(abi.encodePacked(newRoot, newNonce, block.chainid, address(registry))); + bytes memory newSignature = _sign(newMessageHash.toEthSignedMessageHash(), newUpdaterPrivKey); + + vm.prank(newUpdater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.updateMerkleRoot(newRoot, newNonce, newSignature); + assertNotEq(registry.getCurrentMerkleRoot(), newRoot); + + vm.prank(TC.UPDATER); + registry.addAuthorizedUpdater(newUpdater); + assertTrue(registry.isAuthorizedUpdater(newUpdater)); + + uint64 finalNonce = registry.getCurrentNonce(); + bytes32 finalMessageHash = keccak256(abi.encodePacked(newRoot, finalNonce, block.chainid, address(registry))); + bytes memory finalSignature = _sign(finalMessageHash.toEthSignedMessageHash(), newUpdaterPrivKey); + + vm.prank(newUpdater); + registry.updateMerkleRoot(newRoot, finalNonce, finalSignature); + assertEq(registry.getCurrentMerkleRoot(), newRoot); + } + + /* -------------------------------------------------------------------------- */ + /* pause/unpause */ + /* -------------------------------------------------------------------------- */ + + function test_PauseUnpause_Workflow() public { + uint64 currentNonce = registry.getCurrentNonce(); + uint256 fee = registry.getRequestFee(); + vm.deal(user, 10 ether); + + vm.startPrank(updater); + registry.pause(); + + bytes memory signature = _signMerkleRoot(updaterPrivKey, currentNonce); + vm.expectRevert(Pausable.EnforcedPause.selector); 
+ registry.updateMerkleRoot(MERKLE_ROOT, uint64(currentNonce), signature); + vm.stopPrank(); + + vm.prank(user); + vm.expectRevert(Pausable.EnforcedPause.selector); + registry.requestWhitelist{value: fee}(); + + vm.startPrank(updater); + registry.unpause(); + registry.updateMerkleRoot(MERKLE_ROOT, uint64(currentNonce), signature); + vm.stopPrank(); + + vm.prank(user); + registry.requestWhitelist{value: fee}(); + } +} diff --git a/contracts/test/integration/SettlementIntegration.t.sol b/contracts/test/integration/SettlementIntegration.t.sol new file mode 100644 index 0000000..bacadaa --- /dev/null +++ b/contracts/test/integration/SettlementIntegration.t.sol @@ -0,0 +1,1295 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test, console} from "forge-std/Test.sol"; +import {MessageHashUtils} from "@openzeppelin/contracts/utils/cryptography/MessageHashUtils.sol"; +import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol"; +import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol"; + +import {IntegrationDeployHelpers} from "../utils/IntegrationDeployHelpers.sol"; +import {TestConstants as TC} from "../utils/TestConstants.sol"; + +import {ISettlement} from "../../src/interfaces/ISettlement.sol"; + +import {Types} from "../../src/libraries/Types.sol"; +import {Errors} from "../../src/libraries/Errors.sol"; + +contract SettlementIntegrationTest is Test, IntegrationDeployHelpers { + using MessageHashUtils for bytes32; + + struct ExecuteData { + bytes32[] txProof; + bytes32[] wlProof; + Types.TransferData data; + } + + // Updated Merkle root including batchSalt + bytes32 constant BATCH_MERKLE_ROOT = 0x3a0c41421185f03cda4c7149849489222399b27838a28ef7931c459b142b0877; + bytes32 constant WHITELIST_MERKLE_ROOT = 0x9026d8a85fee65817561c5d02b985f4e34a8f70d19b21f5382e13c646a71176a; + + function setUp() public { + _initUser(); + _initUser2(); + _initFeeModule(); + _initRegistry(); + _initSettlement(); + _initToken(); + + 
vm.startPrank(DEFAULT_SENDER); + feeModule.setSettlement(address(settlement)); + settlement.setWhitelistRegistry(address(registry)); + vm.stopPrank(); + + vm.startPrank(TC.UPDATER); + registry.addAuthorizedUpdater(user); + vm.stopPrank(); + + vm.startPrank(DEFAULT_SENDER); + settlement.setFeeModule(address(feeModule)); + settlement.setMaxTxPerBatch(uint32(TC.MAX_TX_PER_BATCH)); + settlement.setTimelockDuration(uint48(TC.TIMELOCK_DURATION)); + settlement.setToken(address(mockToken)); + settlement.approveAggregator(user2); + _updateMerkleRoot(); + vm.stopPrank(); + } + + /* -------------------------------------------------------------------------- */ + /* INITIAL STATE */ + /* -------------------------------------------------------------------------- */ + + function test_Constructor_InitialValues() public view { + assertEq(address(settlement.getFeeModule()), address(feeModule)); + assertEq(address(settlement.getWhitelistRegistry()), address(registry)); + assertEq(address(settlement.getToken()), address(mockToken)); + assertEq(settlement.getMaxTxPerBatch(), TC.MAX_TX_PER_BATCH); + assertEq(settlement.getTimelockDuration(), TC.TIMELOCK_DURATION); + + assertTrue(settlement.isApprovedAggregator(user2)); + } + + /* -------------------------------------------------------------------------- */ + /* HELPERS */ + /* -------------------------------------------------------------------------- */ + + function _updateMerkleRoot() public returns (bytes memory signature) { + uint64 currentNonce = registry.getCurrentNonce(); + bytes32 hash = + keccak256(abi.encodePacked(WHITELIST_MERKLE_ROOT, currentNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + (uint8 v, bytes32 r, bytes32 s) = vm.sign(userPrivKey, signedHash); + signature = abi.encodePacked(r, s, v); + + registry.updateMerkleRoot(WHITELIST_MERKLE_ROOT, currentNonce, signature); + } + + function _mintTokensAndApprove(address to, uint256 amount) internal { + mockToken.mint(to, amount); + vm.prank(to); +
mockToken.approve(address(settlement), amount); + } + + function _submitBatch() public { + vm.prank(user2); + settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), 1); + vm.warp(25 hours); + } + + /* -------------------------------------------------------------------------- */ + /* submitBatch */ + /* -------------------------------------------------------------------------- */ + + function test_SubmitBatch_EmitsEvent() public { + vm.prank(user2); + + vm.expectEmit(true, false, false, true); + emit ISettlement.BatchSubmitted(1, BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), uint48(block.timestamp)); + (bool success, uint256 batchId) = settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), 1); + + Types.Batch memory batch = settlement.getBatchById(uint64(batchId)); + assertTrue(success); + assertEq(settlement.getCurrentBatchId(), batchId); + + assertEq(batch.merkleRoot, BATCH_MERKLE_ROOT); + assertEq(batch.timestamp, block.timestamp); + assertEq(batch.txCount, TC.MAX_TX_PER_BATCH); + assertEq(batch.unlockTime, block.timestamp + TC.TIMELOCK_DURATION); + } + + function test_SubmitBatch_InvalidTxCount() public { + vm.startPrank(user2); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(BATCH_MERKLE_ROOT, 0, 1); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH) + 1, 1); + + vm.stopPrank(); + } + + function test_SubmitBatch_AlreadySubmitted() public { + _submitBatch(); + + vm.prank(user2); + vm.expectRevert(Errors.Settlement__BatchAlreadySubmitted.selector); + settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), 1); + } + + function test_SubmitBatch_DynamicConfigChanges() public { + uint32 newMaxTx = 5; + uint48 newTimelock = 2 days; + + vm.startPrank(DEFAULT_SENDER); + settlement.setMaxTxPerBatch(uint32(newMaxTx)); + settlement.setTimelockDuration(newTimelock); + vm.stopPrank(); + + 
vm.startPrank(user2); + uint32 oldLimitTxCount = uint32(TC.MAX_TX_PER_BATCH); + + assertGt(oldLimitTxCount, newMaxTx); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(BATCH_MERKLE_ROOT, oldLimitTxCount, 1); + + (bool success, uint256 batchId) = settlement.submitBatch(BATCH_MERKLE_ROOT, newMaxTx, 1); + assertTrue(success); + + Types.Batch memory batch = settlement.getBatchById(uint64(batchId)); + assertEq(batch.unlockTime, block.timestamp + newTimelock); + vm.stopPrank(); + } + + function test_SubmitBatch_BatchIdIncrement() public { + vm.prank(user2); + (bool success1, uint256 batchId1) = settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), 1); + assertTrue(success1); + + vm.warp(block.timestamp + 1 hours); + + bytes32 newRoot = keccak256(abi.encodePacked("newRoot")); + vm.prank(user2); + (bool success2, uint256 batchId2) = settlement.submitBatch(newRoot, uint32(TC.MAX_TX_PER_BATCH), 1); + assertTrue(success2); + + assertEq(batchId2, batchId1 + 1); + } + + function test_SubmitBatch_StateIntegrity() public { + bytes32 root1 = keccak256(abi.encodePacked("root1")); + bytes32 root2 = keccak256(abi.encodePacked("root2")); + + vm.startPrank(user2); + + (, uint256 batchId1) = settlement.submitBatch(root1, uint32(TC.MAX_TX_PER_BATCH), 1); + + vm.warp(block.timestamp + 1 hours); + (, uint256 batchId2) = settlement.submitBatch(root2, uint32(TC.MAX_TX_PER_BATCH), 1); + + vm.stopPrank(); + + assertEq(batchId2, batchId1 + 1); + assertEq(settlement.getBatchIdByRoot(root1), batchId1); + assertEq(settlement.getBatchIdByRoot(root2), batchId2); + + Types.Batch memory batch1Data = settlement.getBatchById(uint64(batchId1)); + assertEq(batch1Data.merkleRoot, root1); + assert(batch1Data.timestamp < settlement.getBatchById(uint64(batchId2)).timestamp); + } + + function test_SubmitBatch_AggregatorLifecycle() public { + _submitBatch(); + + vm.prank(user2); + 
vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, user2)); + settlement.disapproveAggregator(user2); + + vm.prank(DEFAULT_SENDER); + settlement.disapproveAggregator(user2); + + bytes32 newRoot = keccak256(abi.encodePacked("newRoot")); + vm.prank(user2); + vm.expectRevert(Errors.Settlement__AggregatorNotApproved.selector); + settlement.submitBatch(newRoot, uint32(TC.MAX_TX_PER_BATCH), 1); + + vm.prank(user2); + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, user2)); + settlement.approveAggregator(user2); + + vm.prank(DEFAULT_SENDER); + settlement.approveAggregator(user2); + + vm.prank(user2); + (bool success,) = settlement.submitBatch(newRoot, uint32(TC.MAX_TX_PER_BATCH), 1); + assertTrue(success); + } + + function testFuzz_SubmitBatch_Success(bytes32 merkleRoot, uint256 txCount) public { + txCount = bound(txCount, 1, TC.MAX_TX_PER_BATCH); + + vm.assume(merkleRoot != bytes32(0)); + vm.assume(merkleRoot != BATCH_MERKLE_ROOT); + + vm.prank(user2); + (bool success, uint256 batchId) = settlement.submitBatch(merkleRoot, uint32(txCount), 1); + + assertTrue(success); + + Types.Batch memory batch = settlement.getBatchById(uint64(batchId)); + assertEq(batch.merkleRoot, merkleRoot); + assertEq(batch.txCount, txCount); + assertEq(settlement.getBatchIdByRoot(merkleRoot), batchId); + } + + function testFuzz_SubmitBatch_RevertIfMaxTxExceeded(bytes32 merkleRoot, uint256 txCount) public { + txCount = bound(txCount, TC.MAX_TX_PER_BATCH + 1, type(uint32).max); + + vm.prank(user2); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(merkleRoot, uint32(txCount), 1); + } + + /* -------------------------------------------------------------------------- */ + /* executeTransfer */ + /* -------------------------------------------------------------------------- */ + + function test_ExecuteTransfer_SuccessAndEmits() public { + _submitBatch(); + ExecuteData memory executeData = 
_getExecuteDataForIndex11(); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.expectEmit(true, true, false, true); + emit ISettlement.TransferExecuted( + executeData.data.from, executeData.data.to, executeData.data.amount, executeData.data.nonce + ); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + } + + function test_ExecuteTransfer_RevertIfBeforeUnlock() public { + vm.prank(user2); + settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), 1); + vm.warp(block.timestamp + TC.TIMELOCK_DURATION - 1); + + ExecuteData memory executeData = _getExecuteDataForIndex11(); + + vm.expectRevert(Errors.Settlement__BatchLocked.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + } + + function test_ExecuteTransfer_RevertIfAlreadyExecuted() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex11(); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount * 2); + + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + vm.expectRevert(Errors.Settlement__TransferAlreadyExecuted.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + } + + function test_ExecuteTransfer_RevertIfInsufficientBalance() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex11(); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount - 1); + + vm.expectRevert(Errors.Settlement__InsufficientBalance.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + } + + function test_ExecuteTransfer_RevertIfInvalidTxProof() public { + _submitBatch(); + ExecuteData memory 
executeData = _getExecuteDataForIndex11(); + + executeData.txProof[0] = bytes32(uint256(executeData.txProof[0]) + 1); + + vm.expectRevert(Errors.Settlement__InvalidMerkleProof.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + } + + function test_ExecuteTransfer_RevertIfInvalidBatch() public { + ExecuteData memory executeData = _getExecuteDataForIndex11(); + executeData.data.batchId = 999; + + vm.expectRevert(Errors.Settlement__InvalidBatch.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + } + + function test_ExecuteTransfer_RevertIfBatchIdMismatch() public { + vm.startPrank(user2); + settlement.submitBatch(BATCH_MERKLE_ROOT, uint32(TC.MAX_TX_PER_BATCH), 1); + bytes32 otherRoot = keccak256("other"); + (, uint256 batchB) = settlement.submitBatch(otherRoot, uint32(TC.MAX_TX_PER_BATCH), 1); + vm.stopPrank(); + + vm.warp(25 hours); + ExecuteData memory executeData = _getExecuteDataForIndex11(); + executeData.data.batchId = uint64(batchB); + vm.expectRevert(Errors.Settlement__InvalidMerkleProof.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + } + + function test_ExecuteTransfer_Batched_NoWhitelistProof_Reverts() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex13(); // BATCHED + + // Remove the whitelist proof entirely + bytes32[] memory emptyProof = new bytes32[](0); + executeData.wlProof = emptyProof; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.expectRevert(Errors.Settlement__NotWhitelisted.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + assertEq(mockToken.balanceOf(executeData.data.from), executeData.data.amount); + assertEq(mockToken.balanceOf(executeData.data.to), 0); + } + + function test_ExecuteTransfer_Batched_InvalidWhitelistProof_Reverts() public 
{ + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex13(); // BATCHED + + executeData.wlProof[0] = keccak256(abi.encodePacked("invalid")); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.expectRevert(Errors.Settlement__NotWhitelisted.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + assertEq(mockToken.balanceOf(executeData.data.from), executeData.data.amount); + assertEq(mockToken.balanceOf(executeData.data.to), 0); + } + + function test_ExecuteTransfer_NonBatched_SkipsWhitelistValidation() public { + _submitBatch(); + { + ExecuteData memory instantData = _getExecuteDataForIndex11(); + instantData.data.txType = Types.TxType.INSTANT; + bytes32[] memory emptyProof = new bytes32[](0); + instantData.wlProof = emptyProof; + + uint256 balanceBefore = mockToken.balanceOf(instantData.data.to); + + _mintTokensAndApprove(instantData.data.from, instantData.data.amount); + bool successInstant = settlement.executeTransfer(instantData.txProof, instantData.wlProof, instantData.data); + assertTrue(successInstant); + + uint256 balanceAfter = mockToken.balanceOf(instantData.data.to); + assertEq(balanceAfter, balanceBefore + instantData.data.amount); + } + + { + ExecuteData memory delayedData = _getExecuteDataForIndex0(); + delayedData.data.txType = Types.TxType.DELAYED; + bytes32[] memory emptyProof = new bytes32[](0); + delayedData.wlProof = emptyProof; + + uint256 balanceBefore = mockToken.balanceOf(delayedData.data.to); + + _mintTokensAndApprove(delayedData.data.from, delayedData.data.amount); + bool successDelayed = settlement.executeTransfer(delayedData.txProof, delayedData.wlProof, delayedData.data); + assertTrue(successDelayed); + + uint256 balanceAfter = mockToken.balanceOf(delayedData.data.to); + assertEq(balanceAfter, balanceBefore + delayedData.data.amount); + } + } + + /* 
-------------------------------------------------------------------------- */ + /* Fee Types Tests */ + /* -------------------------------------------------------------------------- */ + + function test_ExecuteTransfer_FreeWithDelayedFee() public { + _submitBatch(); + uint64 batchId = settlement.getCurrentBatchId(); + ExecuteData memory executeData = _getExecuteDataForIndex0(); + executeData.data.txType = Types.TxType.DELAYED; + executeData.data.amount = 100000000; + + uint256 expectedFee = 0; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + + bytes32 txHash = keccak256( + abi.encodePacked( + executeData.data.from, + executeData.data.to, + executeData.data.amount, + executeData.data.nonce, + executeData.data.timestamp, + executeData.data.recipientCount, + executeData.data.txType, + uint64(1) // batchSalt + ) + ); + + assertEq(feeModule.getFeeOfTransaction(txHash), expectedFee); + assertEq(feeModule.getBatchTotalFees(executeData.data.batchId), expectedFee); + } + + function test_ExecuteTransfer_Delayed_InvalidFrom() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex17(); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + } + + function test_ExecuteTransfer_Delayed_InvalidTo() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex18(); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + } + + function 
test_ExecuteTransfer_WithInstantFee() public { + _submitBatch(); + uint64 batchId = settlement.getCurrentBatchId(); + ExecuteData memory executeData = _getExecuteDataForIndex11(); + + executeData.data.txType = Types.TxType.INSTANT; + executeData.data.amount = 200000000; // Less than LARGE_VOLUME + + // INSTANT_FEE = 200_000 + uint256 expectedFee = 200_000; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + + bytes32 txHash = keccak256( + abi.encodePacked( + executeData.data.from, + executeData.data.to, + executeData.data.amount, + executeData.data.nonce, + executeData.data.timestamp, + executeData.data.recipientCount, + executeData.data.txType, + uint64(1) // batchSalt + ) + ); + + assertEq(feeModule.getFeeOfTransaction(txHash), expectedFee); + assertEq(feeModule.getBatchTotalFees(executeData.data.batchId), expectedFee); + } + + function test_ExecuteTransfer_Instant_InvalidRecipientCount() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex12(); + + executeData.data.txType = Types.TxType.INSTANT; + executeData.data.amount = 300000000; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + assertEq(mockToken.balanceOf(executeData.data.from), executeData.data.amount); + assertEq(mockToken.balanceOf(executeData.data.to), 0); + } + + function test_ExecuteTransfer_WithBatchedFee() public { + _submitBatch(); + uint64 batchId = settlement.getCurrentBatchId(); + ExecuteData memory executeData = _getExecuteDataForIndex13(); + + 
uint256 expectedFee = TC.BATCH_FEE * executeData.data.recipientCount; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + + bytes32 txHash = keccak256( + abi.encodePacked( + executeData.data.from, + executeData.data.to, + executeData.data.amount, + executeData.data.nonce, + executeData.data.timestamp, + executeData.data.recipientCount, + executeData.data.txType, + uint64(1) // batchSalt + ) + ); + + assertEq(feeModule.getFeeOfTransaction(txHash), expectedFee); + assertEq(feeModule.getBatchTotalFees(executeData.data.batchId), expectedFee); + assertEq(feeModule.getTotalFeesCollected(), expectedFee); + } + + function test_ExecuteTransfer_Batched_NotWhitelisted() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex14(); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.expectRevert(Errors.Settlement__NotWhitelisted.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + assertEq(mockToken.balanceOf(executeData.data.from), executeData.data.amount); + assertEq(mockToken.balanceOf(executeData.data.to), 0); + } + + function test_ExecuteTransfer_Batched_InvalidRecipientCount() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex15(); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertFalse(success); + assertEq(mockToken.balanceOf(executeData.data.from), executeData.data.amount); + 
assertEq(mockToken.balanceOf(executeData.data.to), 0); + } + + function test_ExecuteTransfer_WithFreeTier() public { + _submitBatch(); + uint64 batchId = settlement.getCurrentBatchId(); + ExecuteData memory executeData = _getExecuteDataForIndex10(); + + executeData.data.txType = Types.TxType.FREE_TIER; + executeData.data.amount = 50000000; + + // First free transaction - no fee + uint256 expectedFee = 0; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + // Check remaining free tier before + uint256 remainingBefore = feeModule.getRemainingFreeTierTransactions(executeData.data.from); + assertEq(remainingBefore, 10); // All 10 available + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + + bytes32 txHash = keccak256( + abi.encodePacked( + executeData.data.from, + executeData.data.to, + executeData.data.amount, + executeData.data.nonce, + executeData.data.timestamp, + executeData.data.recipientCount, + executeData.data.txType, + uint64(1) // batchSalt + ) + ); + + assertEq(feeModule.getFeeOfTransaction(txHash), expectedFee); + + // Check remaining free tier after + uint256 remainingAfter = feeModule.getRemainingFreeTierTransactions(executeData.data.from); + assertEq(remainingAfter, 9); // 1 used + } + + function test_ExecuteTransfer_DelayedFeeWithinFreeTier() public { + _submitBatch(); + uint64 batchId = settlement.getCurrentBatchId(); + ExecuteData memory executeData = _getExecuteDataForIndex0(); + + uint256 expectedFee = 0; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + 
assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + + bytes32 txHash = keccak256( + abi.encodePacked( + executeData.data.from, + executeData.data.to, + executeData.data.amount, + executeData.data.nonce, + executeData.data.timestamp, + executeData.data.recipientCount, + executeData.data.txType, + uint64(1) // batchSalt + ) + ); + + assertEq(feeModule.getFeeOfTransaction(txHash), expectedFee); + uint256 remaining = feeModule.getRemainingFreeTierTransactions(executeData.data.from); + assertEq(remaining, 9); + } + + function test_ExecuteTransfer_LargeVolumeNoFee() public { + _submitBatch(); + uint64 batchId = settlement.getCurrentBatchId(); + ExecuteData memory executeData = _getExecuteDataForIndex16(); + + executeData.data.txType = Types.TxType.INSTANT; + executeData.data.amount = 1000000000000000; + + // No fee for large volumes + uint256 expectedFee = 0; + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + + bytes32 txHash = keccak256( + abi.encodePacked( + executeData.data.from, + executeData.data.to, + executeData.data.amount, + executeData.data.nonce, + executeData.data.timestamp, + executeData.data.recipientCount, + executeData.data.txType, + uint64(1) // batchSalt + ) + ); + + assertEq(feeModule.getFeeOfTransaction(txHash), expectedFee); + assertEq(feeModule.getBatchTotalFees(executeData.data.batchId), expectedFee); + } + + function test_ExecuteTransfer_DelayedTxFlow() public { + _submitBatch(); + ExecuteData memory executeData0 = _getExecuteDataForIndex0(); + + _mintTokensAndApprove(executeData0.data.from, executeData0.data.amount); + + bool success = settlement.executeTransfer(executeData0.txProof, executeData0.wlProof, executeData0.data); + 
assertTrue(success); + + ExecuteData memory executeData1 = _getExecuteDataForIndex1(); + _mintTokensAndApprove(executeData1.data.from, executeData1.data.amount); + + bool success1 = settlement.executeTransfer(executeData1.txProof, executeData1.wlProof, executeData1.data); + assertTrue(success1); + + ExecuteData memory executeData2 = _getExecuteDataForIndex2(); + _mintTokensAndApprove(executeData2.data.from, executeData2.data.amount); + + bool success2 = settlement.executeTransfer(executeData2.txProof, executeData2.wlProof, executeData2.data); + assertTrue(success2); + + // 3 + ExecuteData memory executeData3 = _getExecuteDataForIndex3(); + _mintTokensAndApprove(executeData3.data.from, executeData3.data.amount); + + bool success3 = settlement.executeTransfer(executeData3.txProof, executeData3.wlProof, executeData3.data); + assertTrue(success3); + + // 4 + ExecuteData memory executeData4 = _getExecuteDataForIndex4(); + _mintTokensAndApprove(executeData4.data.from, executeData4.data.amount); + + bool success4 = settlement.executeTransfer(executeData4.txProof, executeData4.wlProof, executeData4.data); + assertTrue(success4); + + // 5 + ExecuteData memory executeData5 = _getExecuteDataForIndex5(); + _mintTokensAndApprove(executeData5.data.from, executeData5.data.amount); + + bool success5 = settlement.executeTransfer(executeData5.txProof, executeData5.wlProof, executeData5.data); + assertTrue(success5); + + // 6 + ExecuteData memory executeData6 = _getExecuteDataForIndex6(); + _mintTokensAndApprove(executeData6.data.from, executeData6.data.amount); + + bool success6 = settlement.executeTransfer(executeData6.txProof, executeData6.wlProof, executeData6.data); + assertTrue(success6); + + // 7 + ExecuteData memory executeData7 = _getExecuteDataForIndex7(); + _mintTokensAndApprove(executeData7.data.from, executeData7.data.amount); + + bool success7 = settlement.executeTransfer(executeData7.txProof, executeData7.wlProof, executeData7.data); + assertTrue(success7); + + // 8 + 
ExecuteData memory executeData8 = _getExecuteDataForIndex8(); + _mintTokensAndApprove(executeData8.data.from, executeData8.data.amount); + + bool success8 = settlement.executeTransfer(executeData8.txProof, executeData8.wlProof, executeData8.data); + assertTrue(success8); + + // 9 + ExecuteData memory executeData9 = _getExecuteDataForIndex9(); + _mintTokensAndApprove(executeData9.data.from, executeData9.data.amount); + + bool success9 = settlement.executeTransfer(executeData9.txProof, executeData9.wlProof, executeData9.data); + assertTrue(success9); + + bytes32 txHash9 = keccak256( + abi.encodePacked( + executeData9.data.from, + executeData9.data.to, + executeData9.data.amount, + executeData9.data.nonce, + executeData9.data.timestamp, + executeData9.data.recipientCount, + executeData9.data.txType, + uint64(1) // batchSalt + ) + ); + assertEq(feeModule.getFeeOfTransaction(txHash9), 0); + + // 10 + ExecuteData memory executeData10 = _getExecuteDataForIndex10(); + _mintTokensAndApprove(executeData10.data.from, executeData10.data.amount); + + vm.expectRevert(Errors.FeeModule__FreeTierLimitExceeded.selector); + bool success10 = settlement.executeTransfer(executeData10.txProof, executeData10.wlProof, executeData10.data); + assertFalse(success10); + } + + function test_ExecuteTransfer_GasComparison() public { + _submitBatch(); + + // === DELAYED (free tier) === + ExecuteData memory delayedData = _getExecuteDataForIndex0(); + _mintTokensAndApprove(delayedData.data.from, delayedData.data.amount); + + uint256 gasStartDelayed = gasleft(); + settlement.executeTransfer(delayedData.txProof, delayedData.wlProof, delayedData.data); + uint256 gasUsedDelayed = gasStartDelayed - gasleft(); + + // === INSTANT === + ExecuteData memory instantData = _getExecuteDataForIndex11(); + _mintTokensAndApprove(instantData.data.from, instantData.data.amount); + + uint256 gasStartInstant = gasleft(); + settlement.executeTransfer(instantData.txProof, instantData.wlProof, instantData.data); + uint256 gasUsedInstant = 
gasStartInstant - gasleft(); + + // === BATCHED === + ExecuteData memory batchedData = _getExecuteDataForIndex13(); + _mintTokensAndApprove(batchedData.data.from, batchedData.data.amount); + + uint256 gasStartBatched = gasleft(); + settlement.executeTransfer(batchedData.txProof, batchedData.wlProof, batchedData.data); + uint256 gasUsedBatched = gasStartBatched - gasleft(); + + console.log("=== GAS COMPARISON ==="); + console.log("DELAYED (free): ", gasUsedDelayed); + console.log("INSTANT: ", gasUsedInstant); + console.log("BATCHED (5 rec): ", gasUsedBatched); + + assertGt(gasUsedDelayed, 0); + assertGt(gasUsedInstant, 0); + assertGt(gasUsedBatched, 0); + } + + /* -------------------------------------------------------------------------- */ + /* pause/unpause */ + /* -------------------------------------------------------------------------- */ + + function test_ExecuteTransfer_PauseUnpause_Workflow() public { + _submitBatch(); + ExecuteData memory executeData = _getExecuteDataForIndex11(); + + _mintTokensAndApprove(executeData.data.from, executeData.data.amount); + + vm.prank(DEFAULT_SENDER); + settlement.pause(); + + vm.expectRevert(Pausable.EnforcedPause.selector); + settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + vm.prank(DEFAULT_SENDER); + settlement.unpause(); + + bool success = settlement.executeTransfer(executeData.txProof, executeData.wlProof, executeData.data); + + assertTrue(success); + assertEq(mockToken.balanceOf(executeData.data.from), 0); + assertEq(mockToken.balanceOf(executeData.data.to), executeData.data.amount); + } + + /* -------------------------------------------------------------------------- */ + /* executeData HELPERS */ + /* -------------------------------------------------------------------------- */ + + function _getExecuteDataForIndex11() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); + txProof[0] = 
0x462abd36dedb15579a43289e45666716fa82e276865699156f10f01fce09bea4; + txProof[1] = 0x591ff5b157a63b6c10acb4331c80fe4c013f946c177848663e17c995d065ab6c; + txProof[2] = 0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 200000000, + nonce: 12, + timestamp: 1766392480, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.INSTANT + }); + } + + function _getExecuteDataForIndex13() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); + txProof[0] = 0x7401529a3f64f3280807f8c1c25476cb42f45a8d53eb4a8ce0489ca439530b43; + txProof[1] = 0x30265017c12a0d5f16aa7c34dea1bf8fc8554bbdc356287671971c4a4da6b460; + txProof[2] = 0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x3C44CdDdB6a900fa2b585dd299e03d12FA4293BC, + amount: 500000000, + nonce: 14, + timestamp: 1766392482, + recipientCount: 5, + batchId: 1, + txType: Types.TxType.BATCHED + }); + } + + function _getExecuteDataForIndex14() internal pure returns (ExecuteData 
memory executeData) { + bytes32[] memory txProof = new bytes32[](5); + txProof[0] = 0xac933fbe94da3ad81a1b622b7e9e83dce453dc6588b908ec4e39edc0afb91dc6; + txProof[1] = 0x287d3ddd35e0c598f46919e410628799adaf9b4d25fcd921e17b0077e12bde90; + txProof[2] = 0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0x1234567890123456789012345678901234567890, + to: 0x90F79bf6EB2c4f870365E785982E1f101E93b906, + amount: 150000000, + nonce: 15, + timestamp: 1766392483, + recipientCount: 3, + batchId: 1, + txType: Types.TxType.BATCHED + }); + } + + function _getExecuteDataForIndex15() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); + txProof[0] = 0x864e732fa92158038c741b630577f00889c249ae8441ddaff05c2e185153afc6; + txProof[1] = 0x287d3ddd35e0c598f46919e410628799adaf9b4d25fcd921e17b0077e12bde90; + txProof[2] = 0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 16, + timestamp: 1766392484, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.BATCHED + }); + 
} + + function _getExecuteDataForIndex12() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); + txProof[0] = 0x560083d61bbad176074a8ff65fa7a2b20cbc6080b8176f55d47bdd63503e7b49; + txProof[1] = 0x30265017c12a0d5f16aa7c34dea1bf8fc8554bbdc356287671971c4a4da6b460; + txProof[2] = 0x210818ded81351b6dc7f13a560472c3185d0bae960d8f7212190674caacdcfa8; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 300000000, + nonce: 13, + timestamp: 1766392481, + recipientCount: 3, + batchId: 1, + txType: Types.TxType.INSTANT + }); + } + + function _getExecuteDataForIndex17() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](3); + txProof[0] = 0xa477e0f7047a672583f1234bbd1485e57bda886eeea3ea1c2f76e242083c7d85; + txProof[1] = 0x587b51946820bae3febac952ae82d0a10a2c1991bc887caf0c9831de2adb24bb; + txProof[2] = 0xb6576b13bfbf97dae871a1b20d938a53329e10800ce093213abb811e236b7f6c; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0x0000000000000000000000000000000000000000, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 18, + timestamp: 1766392486, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex18() internal pure returns (ExecuteData memory 
executeData) { + bytes32[] memory txProof = new bytes32[](2); + txProof[0] = 0xc86dbe3ece3fe96d98622181ce07212d536e3217636b501a87e8a0e98b001a84; + txProof[1] = 0xb6576b13bfbf97dae871a1b20d938a53329e10800ce093213abb811e236b7f6c; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x0000000000000000000000000000000000000000, + amount: 100000000, + nonce: 19, + timestamp: 1766392487, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex10() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); + txProof[0] = 0x24f873b47ae80c05f69983aace4819e4a005ab024673cc39313788f7a17d305b; + txProof[1] = 0x591ff5b157a63b6c10acb4331c80fe4c013f946c177848663e17c995d065ab6c; + txProof[2] = 0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + executeData.txProof = txProof; + + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 50000000, + nonce: 11, + timestamp: 1766392479, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.FREE_TIER + }); + } + + function _getExecuteDataForIndex0() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0xaa343f658768354a32adde8928537360916413cc47445617177a82024c422cf7; + txProof[1] = 
0xd5ec552c4f23fb2dbf907bf031e9e0d2e7c4a81c756071675b4a0625ad37f04c; + txProof[2] = 0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 1, + timestamp: 1766392469, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex1() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0xa230e58e7054695cc88f543500b68c91e2bf2460ea4f50ac925640251a0e9c45; + txProof[1] = 0xd5ec552c4f23fb2dbf907bf031e9e0d2e7c4a81c756071675b4a0625ad37f04c; + txProof[2] = 0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 2, + timestamp: 1766392470, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex2() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 
0x8d2fc6d794ed4c205b3516a6c58f6f87eac5b6324dc4e6f88d46ca0cd622e523; + txProof[1] = 0x33fe1a228e60193eb5abdba6048af955b49d849dc59ab5766873907ad10ad7f6; + txProof[2] = 0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 3, + timestamp: 1766392471, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex3() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0x520268ed8b0dc9d3da345cf990135351322b673e52f2f58656bce527e24ecb4f; + txProof[1] = 0x33fe1a228e60193eb5abdba6048af955b49d849dc59ab5766873907ad10ad7f6; + txProof[2] = 0xea1b9fa23ead5569d3206b76771c4f20e36694822032d195804bebccca4aec51; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 4, + timestamp: 1766392472, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex4() internal pure returns (ExecuteData memory 
executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0x31d36b489cdd2b5dbeb0e095f19a3db47bf729f1dc646977c3347a530dfd638f; + txProof[1] = 0xee6a9ff5e1399fa946da5482a6f5d659903e7309c8747f7d22f554d54fc097e2; + txProof[2] = 0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 5, + timestamp: 1766392473, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex5() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0xe19c6d48248dbca0b142cf85dd63eee8608c91acfefebbdc15b0efd3708d53ff; + txProof[1] = 0xee6a9ff5e1399fa946da5482a6f5d659903e7309c8747f7d22f554d54fc097e2; + txProof[2] = 0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 6, + timestamp: 1766392474, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function 
_getExecuteDataForIndex6() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0x9f8a7e4c0b8338ac03dd39fb2d6fa8082cf1d32badb2fa29fa959f626793e191; + txProof[1] = 0xb3b6b7cdecd8ccdf8ba346985f530fbc878292d55f0ac9cda8689b4744570c7f; + txProof[2] = 0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 7, + timestamp: 1766392475, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex7() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0x3d5be0058769e8f9e9d472a4f48190b04ccfa6e7aa2fa412653ed22b7649c75c; + txProof[1] = 0xb3b6b7cdecd8ccdf8ba346985f530fbc878292d55f0ac9cda8689b4744570c7f; + txProof[2] = 0xcbc1b17bd75d49c23213016ca8a662f2adaee9977ad570f2930a8275b25bb398; + txProof[3] = 0xb57edcb183a69f8f06421d1c7de51760cd1973914376637f50f10c3ed0fafeb2; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 8, + timestamp: 1766392476, + recipientCount: 1, + 
batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex8() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0x5215e0e0db139e499cdf1c49c9eb5b1ec1de1eb4e527b8a5c9566128f5c20f05; + txProof[1] = 0x4c263c99b9e277450d7f2ef634c883775456dd4dffb96f46ae22fffa0ba532b1; + txProof[2] = 0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 100000000, + nonce: 9, + timestamp: 1766392477, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex9() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](5); txProof[0] = 0xc8570da104d31b70fa27d5a6db45ef393d9150795802006294ee05aaaf99ff4c; + txProof[1] = 0x4c263c99b9e277450d7f2ef634c883775456dd4dffb96f46ae22fffa0ba532b1; + txProof[2] = 0x8097f9b2b4adcb7ea03968a76be829838c7e54478a37edf3c2060e1626be7fb9; + txProof[3] = 0x31994ea0b0c611d5c8673e93d7acac8ce3c40c1a7ff9612645345abc39d59975; + txProof[4] = 0xc4b8283ef7ff08ddc4b02fd8a783c2b52430b7a7b359b905c99b2c99248471dc; + + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 
100000000, + nonce: 10, + timestamp: 1766392478, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.DELAYED + }); + } + + function _getExecuteDataForIndex16() internal pure returns (ExecuteData memory executeData) { + bytes32[] memory txProof = new bytes32[](3); txProof[0] = 0xed3ccf36419833c97271315356d4320595eee8cc5119ee118bcdf1e2775bc52f; + txProof[1] = 0x587b51946820bae3febac952ae82d0a10a2c1991bc887caf0c9831de2adb24bb; + txProof[2] = 0xb6576b13bfbf97dae871a1b20d938a53329e10800ce093213abb811e236b7f6c; + executeData.txProof = txProof; + bytes32[] memory wlProof = new bytes32[](1); + wlProof[0] = 0x7ceb58780fb137bb02223b79c88bc6404f736f8bb4d1f0895d9884122804fb73; + executeData.wlProof = wlProof; + executeData.data = Types.TransferData({ + from: 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266, + to: 0x70997970C51812dc3A010C7d01b50e0d17dc79C8, + amount: 1000000000000000, + nonce: 17, + timestamp: 1766392485, + recipientCount: 1, + batchId: 1, + txType: Types.TxType.INSTANT + }); + } +} diff --git a/contracts/test/mocks/MaliciousReceiver.sol b/contracts/test/mocks/MaliciousReceiver.sol new file mode 100644 index 0000000..c2b6acb --- /dev/null +++ b/contracts/test/mocks/MaliciousReceiver.sol @@ -0,0 +1,8 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +contract MaliciousReceiver { + fallback() external payable { + while (true) {} + } +} diff --git a/contracts/test/unit/FeeModuleUnit.t.sol b/contracts/test/unit/FeeModuleUnit.t.sol new file mode 100644 index 0000000..dbebc25 --- /dev/null +++ b/contracts/test/unit/FeeModuleUnit.t.sol @@ -0,0 +1,219 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test} from "forge-std/Test.sol"; +import {TestConstants as TC} from "../utils/TestConstants.sol"; + +import {FeeModule} from "../../src/FeeModule.sol"; +import {IFeeModule} from "../../src/interfaces/IFeeModule.sol"; + +import {Types} from "../../src/libraries/Types.sol"; +import {Errors} from "../../src/libraries/Errors.sol"; + +contract 
FeeModuleUnitTest is Test { + FeeModule feeModule; + address settlementAddr; + address user; + address owner; + + function setUp() public { + owner = makeAddr("owner"); + + vm.prank(owner); + feeModule = new FeeModule(); + settlementAddr = makeAddr("settlement"); + user = makeAddr("user"); + + vm.prank(owner); + feeModule.setSettlement(settlementAddr); + } + + /* -------------------------------------------------------------------------- */ + /* INITIAL STATE */ + /* -------------------------------------------------------------------------- */ + + function test_Constructor_InitialValues() public view { + assertEq(feeModule.owner(), owner); + assertEq(feeModule.getSettlement(), settlementAddr); + } + + /* -------------------------------------------------------------------------- */ + /* calculateFee */ + /* -------------------------------------------------------------------------- */ + + function test_CalculateFee_InvalidInput_Reverts() public { + // zero sender + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.calculateFee(address(0), Types.TxType.INSTANT, TC.VOLUME, 1); + + // zero volume + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.calculateFee(user, Types.TxType.INSTANT, 0, 1); + + // zero recipient count + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 0); + + // more than one recipient for INSTANT + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 3); + + // only one recipient for BATCHED + vm.expectRevert(Errors.FeeModule__InvalidRecipientCount.selector); + feeModule.calculateFee(user, Types.TxType.BATCHED, TC.VOLUME, 1); + + // all inputs invalid + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.calculateFee(address(0), Types.TxType.FREE_TIER, 0, 0); + } + + function test_CalculateFee_InstantFee() public view { + Types.FeeInfo memory info = feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 1); + 
assertEq(info.fee, TC.INSTANT_FEE); + assertEq(uint256(info.txType), uint256(Types.TxType.INSTANT)); + } + + function test_CalculateFee_BatchedFee() public view { + uint256 recipientCount = 3; + Types.FeeInfo memory info = feeModule.calculateFee(user, Types.TxType.BATCHED, TC.VOLUME, recipientCount); + assertEq(info.fee, TC.BATCH_FEE * recipientCount); + assertEq(uint256(info.txType), uint256(Types.TxType.BATCHED)); + } + + function test_CalculateFee_LargeVolumeNoFee() public view { + Types.FeeInfo memory info = feeModule.calculateFee(user, Types.TxType.DELAYED, TC.LARGE_VOLUME, 1); + assertEq(info.fee, 0); + assertEq(uint256(info.txType), uint256(Types.TxType.DELAYED)); + } + + function test_CalculateFee_ReturnsCorrectFee() public view { + Types.FeeInfo memory info = feeModule.calculateFee(user, Types.TxType.INSTANT, TC.VOLUME, 1); + + assertEq(info.fee, TC.INSTANT_FEE); + assertEq(uint256(info.txType), uint256(Types.TxType.INSTANT)); + } + + function test_CalculateFee_FreeTier_NoStateChanges() public view { + Types.FeeInfo memory info = feeModule.calculateFee(user, Types.TxType.FREE_TIER, TC.VOLUME, 1); + + assertEq(info.fee, 0); + assertEq(uint256(info.txType), uint256(Types.TxType.FREE_TIER)); + } + + /* -------------------------------------------------------------------------- */ + /* applyFee */ + /* -------------------------------------------------------------------------- */ + + function test_ApplyFee_InvalidInput() public { + uint256 feeAmount = 100; + // sender + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.applyFee(address(0), feeAmount, keccak256(abi.encodePacked("1")), 1, Types.TxType.DELAYED); + + // transferHash + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.applyFee(user, feeAmount, bytes32(0), 1, Types.TxType.DELAYED); + + // batch id + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + feeModule.applyFee(user, feeAmount, keccak256(abi.encodePacked("1")), 0, Types.TxType.DELAYED); + } + + 
function test_ApplyFee_NotSettlement_Reverts() public { + vm.expectRevert(Errors.FeeModule__NotAuthorized.selector); + feeModule.applyFee(user, 1, keccak256(abi.encodePacked("1")), 1, Types.TxType.DELAYED); + } + + function test_ApplyFee_ZeroFeeSucceeds() public { + uint256 feeAmount = 0; + vm.prank(settlementAddr); + feeModule.applyFee(user, feeAmount, keccak256(abi.encodePacked("1")), 1, Types.TxType.DELAYED); + } + + function test_ApplyFee_Succeeds() public { + uint256 feeAmount = 200_000; + bytes32 transferHash = keccak256(abi.encodePacked("tx1")); + uint64 batchId = 1; + + // apply the fee as the settlement contract and expect the FeeApplied event + vm.prank(settlementAddr); + vm.expectEmit(true, false, false, true); + emit IFeeModule.FeeApplied(user, feeAmount, transferHash, batchId); + feeModule.applyFee(user, feeAmount, transferHash, batchId, Types.TxType.INSTANT); + + // verify totalFees and batchTotalFees updated + uint256 totalFees = feeModule.getTotalFeesCollected(); + assertEq(totalFees, feeAmount); + + uint256 batchFees = feeModule.getBatchTotalFees(batchId); + assertEq(batchFees, feeAmount); + } + + function test_ApplyFee_FreeTier_ConsumesQuota() public { + uint256 feeAmount = 0; + bytes32 transferHash = keccak256(abi.encodePacked("txFreeTier")); + uint64 batchId = 3; + + uint256 initialQuota = feeModule.getRemainingFreeTierTransactions(user); + + vm.prank(settlementAddr); + vm.expectEmit(true, false, false, true); + emit IFeeModule.FreeTierUsed(user, initialQuota - 1); + feeModule.applyFee(user, feeAmount, transferHash, batchId, Types.TxType.FREE_TIER); + + uint256 finalQuota = feeModule.getRemainingFreeTierTransactions(user); + assertEq(finalQuota, initialQuota - 1); + } + + /* -------------------------------------------------------------------------- */ + /* setSettlement */ + /* -------------------------------------------------------------------------- */ + + function test_SetSettlement_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.FeeModule__InvalidInput.selector); + 
feeModule.setSettlement(address(0)); + } + + function test_SetSettlement_AlreadySettlement() public { + vm.prank(owner); + vm.expectRevert(Errors.FeeModule__AlreadySettlement.selector); + feeModule.setSettlement(settlementAddr); + } + + function test_SetSettlement_SetsAndEmits() public { + address newSettlement = makeAddr("newSettlement"); + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit IFeeModule.SettlementUpdated(newSettlement); + feeModule.setSettlement(newSettlement); + + assertEq(feeModule.getSettlement(), newSettlement); + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + function test_Getters() public { + assertEq(feeModule.getSettlement(), settlementAddr); + + assertEq(feeModule.getOwner(), owner); + + assertEq(feeModule.getFreeTxUsage(user).count, 0); + assertEq(feeModule.getFreeTxUsage(user).day, 0); + + assertEq(feeModule.getRemainingFreeTierTransactions(user), 10); + + uint256 feeAmount = 200_000; + bytes32 transferHash = keccak256(abi.encodePacked("tx2")); + uint64 batchId = 2; + vm.prank(settlementAddr); + feeModule.applyFee(user, feeAmount, transferHash, batchId, Types.TxType.INSTANT); + + uint256 fee = feeModule.getFeeOfTransaction(transferHash); + assertEq(fee, feeAmount); + + assertEq(feeModule.getTotalFeesCollected(), feeAmount); + assertEq(feeModule.getBatchTotalFees(batchId), feeAmount); + } +} diff --git a/contracts/test/unit/RegistryUnit.t.sol b/contracts/test/unit/RegistryUnit.t.sol new file mode 100644 index 0000000..c278248 --- /dev/null +++ b/contracts/test/unit/RegistryUnit.t.sol @@ -0,0 +1,486 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test} from "forge-std/Test.sol"; +import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol"; +import {IAccessControl} from "@openzeppelin/contracts/access/IAccessControl.sol"; +import {MessageHashUtils} from 
"@openzeppelin/contracts/utils/cryptography/MessageHashUtils.sol"; + +import {WhitelistRegistry} from "../../src/WhitelistRegistry.sol"; +import {IWhitelistRegistry} from "../../src/interfaces/IWhitelistRegistry.sol"; + +import {MaliciousReceiver} from "../mocks/MaliciousReceiver.sol"; +import {Errors} from "../../src/libraries/Errors.sol"; + +contract WhitelistRegistryUnitTest is Test { + using MessageHashUtils for bytes32; + + WhitelistRegistry registry; + + address updater; + address randomUser; + uint256 updaterPk; + uint256 randomUserPk; + + bytes32 constant DEFAULT_ADMIN_ROLE = 0x00; + + function setUp() public { + (updater, updaterPk) = makeAddrAndKey("updater"); + (randomUser, randomUserPk) = makeAddrAndKey("randomUser"); + vm.deal(randomUser, 10 ether); + + registry = new WhitelistRegistry(updater); + } + + /* -------------------------------------------------------------------------- */ + /* HELPERS */ + /* -------------------------------------------------------------------------- */ + + function _sign(bytes32 hash, uint256 privKey) internal pure returns (bytes memory signature) { + (uint8 v, bytes32 r, bytes32 s) = vm.sign(privKey, hash); + signature = abi.encodePacked(r, s, v); + } + + function _updateRoot() internal { + uint64 currentNonce = registry.getCurrentNonce(); + bytes32 root = keccak256(abi.encodePacked("new merkle root")); + + bytes32 hash = keccak256(abi.encodePacked(root, currentNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory sig = _sign(signedHash, updaterPk); + vm.prank(updater); + registry.updateMerkleRoot(root, currentNonce, sig); + } + + /* -------------------------------------------------------------------------- */ + /* INITIAL STATE */ + /* -------------------------------------------------------------------------- */ + + function test_Constructor_InitialValues() public view { + assertTrue(registry.hasRole(registry.getWithdrawRole(), updater)); + 
assertTrue(registry.hasRole(DEFAULT_ADMIN_ROLE, updater)); + assertTrue(registry.isAuthorizedUpdater(updater)); + assertFalse(registry.isAuthorizedUpdater(randomUser)); + } + + function test_Constructor_InvalidInput() public { + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + new WhitelistRegistry(address(0)); + } + + /* -------------------------------------------------------------------------- */ + /* updateMerkleRoot */ + /* -------------------------------------------------------------------------- */ + + function test_UpdateRoot_InvalidInput() public { + _updateRoot(); + + // invalid root + bytes32 invalidRoot = bytes32(0); + bytes memory signature = _sign("randomInput", updaterPk); + uint256 currentNonce = registry.getCurrentNonce(); + + vm.startPrank(updater); + + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.updateMerkleRoot(invalidRoot, uint64(currentNonce), signature); + + // invalid signature length + bytes32 newRoot = keccak256(abi.encodePacked("new new merkle root")); + bytes memory invalidSignature = hex"1234"; + vm.expectRevert(); + registry.updateMerkleRoot(newRoot, uint64(currentNonce), invalidSignature); + + bytes memory emptySig = ""; + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.updateMerkleRoot(newRoot, uint64(currentNonce), emptySig); + + // zero signature + bytes memory zeroSig = new bytes(65); + vm.expectRevert(); // no selector + registry.updateMerkleRoot(newRoot, uint64(currentNonce), zeroSig); + + // invalid both + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.updateMerkleRoot(invalidRoot, uint64(currentNonce), invalidSignature); + + vm.stopPrank(); + } + + function test_UpdateRoot_NonAuthorizedCaller() public { + bytes32 currentRoot = registry.getCurrentMerkleRoot(); + uint256 currentNonce = registry.getCurrentNonce(); + + vm.startPrank(randomUser); + bytes32 newRoot = keccak256(abi.encodePacked("new merkle root")); + + bytes32 hash = 
keccak256(abi.encodePacked(newRoot, currentNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signature = _sign(signedHash, randomUserPk); + vm.stopPrank(); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.updateMerkleRoot(newRoot, uint64(currentNonce), signature); + + bytes32 rootAfter = registry.getCurrentMerkleRoot(); + assertEq(currentRoot, rootAfter); + } + + // random address with correct signature can update + function test_UpdateRoot_CorrectSignature() public { + bytes32 newRoot = keccak256(abi.encodePacked("new merkle root")); + uint64 currentNonce = registry.getCurrentNonce(); + + bytes32 hash = keccak256(abi.encodePacked(newRoot, currentNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signature = _sign(signedHash, updaterPk); + + assertFalse(registry.isAuthorizedUpdater(randomUser)); + + vm.prank(randomUser); + registry.updateMerkleRoot(newRoot, currentNonce, signature); + + // verify that the root was updated + bytes32 updatedRoot = registry.getCurrentMerkleRoot(); + assertEq(updatedRoot, newRoot); + } + + function test_UpdateRoot_DuplicateUpdate() public { + bytes32 sameRoot = bytes32(0); + uint64 currentNonce = registry.getCurrentNonce(); + + bytes memory signatureOne = _sign("randomInput", updaterPk); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__DuplicateUpdate.selector); + registry.updateMerkleRoot(sameRoot, currentNonce, signatureOne); + + _updateRoot(); + + bytes32 currentRoot = registry.getCurrentMerkleRoot(); + uint64 currentNonceAfter = registry.getCurrentNonce(); + + // try to update with the same root + bytes32 hash = keccak256(abi.encodePacked(currentRoot, currentNonceAfter, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signatureTwo = _sign(signedHash, 
updaterPk); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__DuplicateUpdate.selector); + registry.updateMerkleRoot(currentRoot, currentNonceAfter, signatureTwo); + + bytes32 rootAfter = registry.getCurrentMerkleRoot(); + assertEq(currentRoot, rootAfter); + } + + function test_UpdateRoot_Success() public { + assertTrue(registry.isAuthorizedUpdater(updater)); + bytes32 notUpdatedRoot = registry.getCurrentMerkleRoot(); + uint256 before = registry.getLastUpdateTime(); + uint64 currentNonce = registry.getCurrentNonce(); + bytes32 oldRoot = registry.getCurrentMerkleRoot(); + + vm.expectEmit(false, false, false, true); + emit IWhitelistRegistry.WhitelistUpdated(oldRoot, keccak256(abi.encodePacked("new merkle root")), currentNonce); + _updateRoot(); + + uint256 afterTime = registry.getLastUpdateTime(); + bytes32 updatedRoot = registry.getCurrentMerkleRoot(); + assertNotEq(updatedRoot, notUpdatedRoot); + assertTrue(afterTime != before); + assertEq(afterTime, block.timestamp); + } + + function test_UpdateRoot_SigByNewAuthorizedUpdater_Success() public { + uint64 currentNonce = registry.getCurrentNonce(); + + vm.prank(updater); + registry.addAuthorizedUpdater(randomUser); + + bytes32 newRoot = keccak256(abi.encodePacked("authorized new merkle root")); + + bytes32 hash = keccak256(abi.encodePacked(newRoot, currentNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signature = _sign(signedHash, randomUserPk); + + registry.updateMerkleRoot(newRoot, currentNonce, signature); + + assertEq(registry.getCurrentMerkleRoot(), newRoot); + } + + /* -------------------------------------------------------------------------- */ + /* requestWhitelist */ + /* -------------------------------------------------------------------------- */ + + function test_RequestWhitelist_InsufficientFee() public { + uint256 requestFee = registry.getRequestFee(); + + vm.prank(randomUser); + 
vm.expectRevert(Errors.WhitelistRegistry__InsufficientFee.selector); + registry.requestWhitelist{value: requestFee - 1}(); + } + + function test_RequestWhitelist_RequestTooFrequent() public { + uint256 requestFee = registry.getRequestFee(); + + vm.prank(randomUser); + registry.requestWhitelist{value: requestFee}(); + + vm.prank(randomUser); + vm.expectRevert(Errors.WhitelistRegistry__RequestTooFrequent.selector); + registry.requestWhitelist{value: requestFee}(); + } + + function test_RequestWhitelist_Success() public { + uint256 requestFee = registry.getRequestFee(); + + uint256 lastRequestedTimeBefore = registry.getLastRequestedTime(randomUser); + assertEq(lastRequestedTimeBefore, 0); + uint256 totalCollectedFeesBefore = registry.getTotalCollectedFees(); + + vm.prank(randomUser); + vm.expectEmit(true, false, false, false); + emit IWhitelistRegistry.WhitelistRequested(randomUser); + + uint256 blockTimestampBefore = block.timestamp; + + bool success = registry.requestWhitelist{value: requestFee}(); + assertTrue(success); + + uint256 lastRequestedTimeAfter = registry.getLastRequestedTime(randomUser); + uint256 totalCollectedFeesAfter = registry.getTotalCollectedFees(); + + assertEq(lastRequestedTimeAfter, blockTimestampBefore); + assertEq(totalCollectedFeesAfter, totalCollectedFeesBefore + requestFee); + } + + /* -------------------------------------------------------------------------- */ + /* withdraw */ + /* -------------------------------------------------------------------------- */ + + function test_Withdraw_NotAuthorized() public { + vm.prank(randomUser); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.withdraw(); + } + + function test_Withdraw_TransferFailed() public { + vm.deal(address(registry), 1 ether); + + address maliciousReceiver = address(new MaliciousReceiver()); + + vm.startPrank(updater); + registry.grantRole(registry.getWithdrawRole(), maliciousReceiver); + vm.stopPrank(); + + vm.prank(maliciousReceiver); + 
vm.expectRevert(Errors.WhitelistRegistry__NothingToWithdraw.selector); + registry.withdraw(); + + vm.deal(randomUser, 10 ether); + vm.prank(randomUser); + registry.requestWhitelist{value: registry.getRequestFee()}(); + + vm.prank(maliciousReceiver); + vm.expectRevert(Errors.WhitelistRegistry__WithdrawFailed.selector); + registry.withdraw(); + assertEq(registry.getTotalCollectedFees(), registry.getRequestFee()); + assertEq(maliciousReceiver.balance, 0); + } + + function test_Withdraw_Success() public { + uint256 requestFee = registry.getRequestFee(); + + vm.prank(randomUser); + (bool success) = registry.requestWhitelist{value: requestFee}(); + assertTrue(success); + assertEq(registry.getTotalCollectedFees(), requestFee); + + uint256 updaterBalanceBefore = updater.balance; + + // Expect event + vm.prank(updater); + vm.expectEmit(true, false, false, true); + emit IWhitelistRegistry.WithdrawSuccess(updater, requestFee); + + registry.withdraw(); + + uint256 updaterBalanceAfter = updater.balance; + assertEq(updaterBalanceAfter, updaterBalanceBefore + requestFee); + assertEq(registry.getTotalCollectedFees(), 0); + } + + /* -------------------------------------------------------------------------- */ + /* addAuthorizedUpdater */ + /* -------------------------------------------------------------------------- */ + + function test_AddAuthorizedUpdater_NotAuthorized() public { + vm.prank(randomUser); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.addAuthorizedUpdater(randomUser); + } + + function test_AddAuthorizedUpdater_InvalidInput() public { + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.addAuthorizedUpdater(address(0)); + } + + function test_AddAuthorizedUpdater_AlreadyAuthorized() public { + assertTrue(registry.isAuthorizedUpdater(updater)); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__AlreadyAuthorized.selector); + registry.addAuthorizedUpdater(updater); + } + + 
function test_AddAuthorizedUpdater_Success() public { + assertFalse(registry.isAuthorizedUpdater(randomUser)); + + vm.prank(updater); + vm.expectEmit(true, false, false, false); + emit IWhitelistRegistry.AuthorizedUpdaterAdded(randomUser); + registry.addAuthorizedUpdater(randomUser); + + assertTrue(registry.isAuthorizedUpdater(randomUser)); + } + + /* -------------------------------------------------------------------------- */ + /* removeAuthorizedUpdater */ + /* -------------------------------------------------------------------------- */ + + function test_RemoveAuthorizedUpdater_NotAuthorized() public { + vm.prank(randomUser); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.removeAuthorizedUpdater(updater); + } + + function test_RemoveAuthorizedUpdater_InvalidInput() public { + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.removeAuthorizedUpdater(address(0)); + } + + function test_RemoveAuthorizedUpdater_NotAuthorizedUpdater() public { + assertFalse(registry.isAuthorizedUpdater(randomUser)); + + vm.prank(updater); + vm.expectRevert(Errors.WhitelistRegistry__NotAuthorized.selector); + registry.removeAuthorizedUpdater(randomUser); + } + + function test_RemoveAuthorizedUpdater_Success() public { + assertTrue(registry.isAuthorizedUpdater(updater)); + + vm.prank(updater); + vm.expectEmit(true, false, false, false); + emit IWhitelistRegistry.AuthorizedUpdaterRemoved(updater); + registry.removeAuthorizedUpdater(updater); + + assertFalse(registry.isAuthorizedUpdater(updater)); + } + + /* -------------------------------------------------------------------------- */ + /* pause/unpause */ + /* -------------------------------------------------------------------------- */ + + function test_PauseUnpause_NotAuthorized() public { + vm.startPrank(randomUser); + vm.expectRevert( + abi.encodeWithSelector( + IAccessControl.AccessControlUnauthorizedAccount.selector, randomUser, 
registry.getDefaultAdminRole() + ) + ); + registry.pause(); + + vm.expectRevert( + abi.encodeWithSelector( + IAccessControl.AccessControlUnauthorizedAccount.selector, randomUser, registry.getDefaultAdminRole() + ) + ); + registry.unpause(); + vm.stopPrank(); + } + + function test_PauseUnpause_Success() public { + vm.prank(updater); + registry.pause(); + assertTrue(registry.paused()); + + vm.prank(updater); + registry.unpause(); + assertFalse(registry.paused()); + } + + function test_UpdateMerkleRoot_RevertsWhenPaused() public { + vm.prank(updater); + registry.pause(); + assertTrue(registry.paused()); + + bytes32 newRoot = keccak256(abi.encodePacked("new merkle root")); + uint256 currentNonce = registry.getCurrentNonce(); + + bytes32 hash = keccak256(abi.encodePacked(newRoot, currentNonce, block.chainid, address(registry))); + bytes32 signedHash = hash.toEthSignedMessageHash(); + bytes memory signature = _sign(signedHash, updaterPk); + + vm.prank(randomUser); + vm.expectRevert(Pausable.EnforcedPause.selector); + registry.updateMerkleRoot(newRoot, uint64(currentNonce), signature); + } + + function test_RequestWhitelist_RevertsWhenPaused() public { + vm.prank(updater); + registry.pause(); + uint256 fee = registry.getRequestFee(); + + vm.prank(randomUser); + vm.expectRevert(Pausable.EnforcedPause.selector); + registry.requestWhitelist{value: fee}(); + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + function test_Getters_InitialValues() public view { + // initial merkle root and timestamps + assertEq(registry.getCurrentMerkleRoot(), bytes32(0)); + assertEq(registry.getTotalCollectedFees(), 0); + assertEq(registry.getLastUpdateTime(), 0); + + // authorized updater checks + + assertTrue(registry.isAuthorizedUpdater(updater)); + assertFalse(registry.isAuthorizedUpdater(randomUser)); + + // request-related getters + 
assertEq(registry.getLastRequestedTime(randomUser), 0); + assertEq(registry.getRequestCooldown(), 24 hours); + assertEq(registry.getRequestFee(), 10e6); + + // roles + assertEq(registry.getDefaultAdminRole(), DEFAULT_ADMIN_ROLE); + assertTrue(registry.isAdmin(updater)); + assertFalse(registry.isAdmin(randomUser)); + assertTrue(registry.isWithdrawer(updater)); + assertFalse(registry.isWithdrawer(randomUser)); + + bytes32 expectedWithdrawRole = keccak256(abi.encodePacked("WITHDRAW_ROLE")); + assertEq(registry.getWithdrawRole(), expectedWithdrawRole); + } + + function test_VerifyWhitelist_InvalidInputs() public { + bytes32[] memory emptyProof = new bytes32[](0); + + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.verifyWhitelist(emptyProof, randomUser); + + bytes32[] memory someProof = new bytes32[](1); + someProof[0] = keccak256(abi.encodePacked("some")); + + vm.expectRevert(Errors.WhitelistRegistry__InvalidInput.selector); + registry.verifyWhitelist(someProof, address(0)); + } +} diff --git a/contracts/test/unit/SettlementUnit.sol b/contracts/test/unit/SettlementUnit.sol new file mode 100644 index 0000000..b5e4d23 --- /dev/null +++ b/contracts/test/unit/SettlementUnit.sol @@ -0,0 +1,605 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test} from "forge-std/Test.sol"; +import {Ownable} from "@openzeppelin/contracts/access/Ownable.sol"; +import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol"; + +import {Settlement} from "../../src/Settlement.sol"; +import {ISettlement} from "../../src/interfaces/ISettlement.sol"; + +import {TestConstants as TC} from "../utils/TestConstants.sol"; +import {Types} from "../../src/libraries/Types.sol"; +import {Errors} from "../../src/libraries/Errors.sol"; + +contract SettlementUnitTest is Test { + Settlement settlement; + address feeModule; + address registry; + address recipient; + address token; + address user; + address owner; + + function setUp() public { + owner = 
makeAddr("owner"); + feeModule = makeAddr("feeModule"); + registry = makeAddr("registry"); + recipient = makeAddr("recipient"); + token = makeAddr("token"); + user = makeAddr("user"); + + vm.startPrank(owner); + + settlement = new Settlement(); + settlement.setWhitelistRegistry(registry); + settlement.setFeeModule(feeModule); + settlement.setToken(token); + + settlement.setMaxTxPerBatch(uint32(TC.MAX_TX_PER_BATCH)); + settlement.setTimelockDuration(uint48(TC.TIMELOCK_DURATION)); + vm.stopPrank(); + } + + /* -------------------------------------------------------------------------- */ + /* HELPERS */ + /* -------------------------------------------------------------------------- */ + + function _createBatchData() internal pure returns (bytes32 merkleRoot, uint32 txCount) { + merkleRoot = keccak256(abi.encodePacked("merkle root")); + txCount = uint32(TC.MAX_TX_PER_BATCH); + } + + function _createTransferData() internal view returns (Types.TransferData memory) { + Types.TransferData memory txData = Types.TransferData({ + from: user, + to: recipient, + amount: 1000, + nonce: 1, + timestamp: uint48(block.timestamp), + recipientCount: 1, + batchId: 0, + txType: Types.TxType.DELAYED + }); + return txData; + } + + function _createMerkleProofs() + internal + pure + returns (bytes32[] memory validTxProof, bytes32[] memory validWhitelistProof) + { + validTxProof = new bytes32[](3); + validTxProof[0] = keccak256(abi.encodePacked("tx proof 1")); + validTxProof[1] = keccak256(abi.encodePacked("tx proof 2")); + validTxProof[2] = keccak256(abi.encodePacked("tx proof 3")); + + validWhitelistProof = new bytes32[](3); + validWhitelistProof[0] = keccak256(abi.encodePacked("whitelist proof 1")); + validWhitelistProof[1] = keccak256(abi.encodePacked("whitelist proof 2")); + validWhitelistProof[2] = keccak256(abi.encodePacked("whitelist proof 3")); + } + + /* -------------------------------------------------------------------------- */ + /* INITIAL STATE */ + /* 
-------------------------------------------------------------------------- */ + + function test_Constructor_InitialValues() public view { + assertEq(settlement.getWhitelistRegistry(), address(registry)); + assertEq(settlement.getFeeModule(), address(feeModule)); + assertEq(settlement.getToken(), address(token)); + assertEq(settlement.getMaxTxPerBatch(), TC.MAX_TX_PER_BATCH); + assertEq(settlement.getTimelockDuration(), TC.TIMELOCK_DURATION); + assertTrue(settlement.isConfigured()); + } + + function test_IsConfigured() public { + Settlement unconfiguredSettlement = new Settlement(); + assertFalse(unconfiguredSettlement.isConfigured()); + + unconfiguredSettlement.setWhitelistRegistry(registry); + unconfiguredSettlement.setFeeModule(feeModule); + unconfiguredSettlement.setToken(token); + + assertTrue(unconfiguredSettlement.isConfigured()); + } + + /* -------------------------------------------------------------------------- */ + /* submitBatch */ + /* -------------------------------------------------------------------------- */ + + function test_SubmitBatch_AggregatorNotApproved() public { + (bytes32 merkleRoot, uint32 txCount) = _createBatchData(); + + vm.prank(user); + vm.expectRevert(Errors.Settlement__AggregatorNotApproved.selector); + settlement.submitBatch(merkleRoot, txCount, 1); + } + + function test_SubmitBatch_InvalidInput() public { + (bytes32 merkleRoot, uint32 txCount) = _createBatchData(); + uint32 zeroTxCount = 0; + bytes32 zeroMerkleRoot = bytes32(0); + + vm.startPrank(owner); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(zeroMerkleRoot, txCount, 1); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(merkleRoot, zeroTxCount, 1); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(merkleRoot, txCount + 1, 1); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(zeroMerkleRoot, zeroTxCount, 1); + + 
vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.submitBatch(zeroMerkleRoot, txCount + 1, 1); + + vm.stopPrank(); + } + + function test_SubmitBatch_AlreadySubmitted() public { + (bytes32 merkleRoot, uint32 txCount) = _createBatchData(); + + vm.startPrank(owner); + settlement.submitBatch(merkleRoot, txCount, 1); + + vm.expectRevert(Errors.Settlement__BatchAlreadySubmitted.selector); + settlement.submitBatch(merkleRoot, txCount, 1); + + vm.stopPrank(); + } + + function test_SubmitBatch_SuccessAndEmits() public { + uint256 initialBatchId = settlement.getCurrentBatchId(); + assertEq(initialBatchId, 0); + (bytes32 merkleRoot, uint32 txCount) = _createBatchData(); + + vm.prank(owner); + vm.expectEmit(true, false, false, true); + emit ISettlement.BatchSubmitted(1, merkleRoot, txCount, uint48(block.timestamp)); + + (bool success, uint64 returnedBatchId) = settlement.submitBatch(merkleRoot, txCount, 1); + + assertTrue(success); + assertEq(returnedBatchId, 1); + + uint256 newBatchId = settlement.getCurrentBatchId(); + + Types.Batch memory submittedBatch = settlement.getBatchById(uint64(newBatchId)); + assertEq(newBatchId, initialBatchId + 1); + assertEq(submittedBatch.merkleRoot, merkleRoot); + assertEq(submittedBatch.txCount, txCount); + assertEq(submittedBatch.timestamp, block.timestamp); + assertEq(submittedBatch.unlockTime, block.timestamp + settlement.getTimelockDuration()); + + assertEq(settlement.getBatchIdByRoot(merkleRoot), newBatchId); + } + + /* -------------------------------------------------------------------------- */ + /* executeTransfer */ + /* -------------------------------------------------------------------------- */ + + function test_ExecuteTransfer_NotConfigured() public { + Settlement unconfiguredSettlement = new Settlement(); + + (bytes32[] memory txProof, bytes32[] memory whitelistProof) = _createMerkleProofs(); + Types.TransferData memory txData = _createTransferData(); + + 
vm.expectRevert(Errors.Settlement__NotConfigured.selector); + unconfiguredSettlement.executeTransfer(txProof, whitelistProof, txData); + } + + function test_ExecuteTransfer_InvalidInput() public { + bytes32[] memory invalidTxProof = new bytes32[](0); + bytes32[] memory invalidWhitelistProof = new bytes32[](0); + + (bytes32[] memory validTxProof, bytes32[] memory validWhitelistProof) = _createMerkleProofs(); + Types.TransferData memory txData = _createTransferData(); + + // invalid txProof + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(invalidTxProof, validWhitelistProof, txData); + + // invalid whitelistProof + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(validTxProof, invalidWhitelistProof, txData); + + // invalid both + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(invalidTxProof, invalidWhitelistProof, txData); + + // zero batch + txData.batchId = 0; + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(validTxProof, validWhitelistProof, txData); + + // zero amount + txData.amount = 0; + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(validTxProof, validWhitelistProof, txData); + } + + function test_ExecuteTransfer_InvalidTxData() public { + (bytes32[] memory txProof, bytes32[] memory whitelistProof) = _createMerkleProofs(); + + Types.TransferData memory invalidFromData = _createTransferData(); + invalidFromData.from = address(0); + + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(txProof, whitelistProof, invalidFromData); + + Types.TransferData memory invalidToData = _createTransferData(); + invalidToData.to = address(0); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(txProof, whitelistProof, invalidToData); + + Types.TransferData memory invalidData = _createTransferData(); + 
invalidData.from = address(0); + invalidData.to = address(0); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.executeTransfer(txProof, whitelistProof, invalidData); + } + + // Settlement__InvalidBatch + function test_ExecuteTransfer_InvalidBatch() public { + (bytes32[] memory txProof, bytes32[] memory whitelistProof) = _createMerkleProofs(); + + Types.TransferData memory txData = _createTransferData(); + txData.batchId = 999; + + vm.expectRevert(Errors.Settlement__InvalidBatch.selector); + settlement.executeTransfer(txProof, whitelistProof, txData); + } + + // Settlement__BatchLocked + function test_ExecuteTransfer_BatchLocked() public { + (bytes32 merkleRoot, uint32 txCount) = _createBatchData(); + + vm.prank(owner); + settlement.submitBatch(merkleRoot, txCount, 1); + + (bytes32[] memory txProof, bytes32[] memory whitelistProof) = _createMerkleProofs(); + + Types.TransferData memory txData = _createTransferData(); + txData.batchId = 1; + + vm.expectRevert(Errors.Settlement__BatchLocked.selector); + settlement.executeTransfer(txProof, whitelistProof, txData); + } + + /* -------------------------------------------------------------------------- */ + /* approveAggregator */ + /* -------------------------------------------------------------------------- */ + + function test_ApproveAggregator_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.approveAggregator(address(0)); + } + + function test_ApproveAggregator_AlreadyAggregator() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__AlreadyAggregator.selector); + settlement.approveAggregator(owner); + } + + function test_ApproveAggregator_ApprovesAndEmits() public { + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.AggregatorApproved(user); + settlement.approveAggregator(user); + + bool isApproved = settlement.isApprovedAggregator(user); + assertTrue(isApproved); + } + + /* 
-------------------------------------------------------------------------- */ + /* disapproveAggregator */ + /* -------------------------------------------------------------------------- */ + + function test_DisapproveAggregator_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.disapproveAggregator(address(0)); + } + + function test_DisapproveAggregator_NotAggregator() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__AggregatorNotApproved.selector); + settlement.disapproveAggregator(user); + } + + function test_DisapproveAggregator_DisapprovesAndEmits() public { + vm.prank(owner); + settlement.approveAggregator(user); + + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.AggregatorDisapproved(user); + settlement.disapproveAggregator(user); + + bool isApproved = settlement.isApprovedAggregator(user); + assertFalse(isApproved); + } + + /* onlyOwner */ + + function test_OnlyOwner() public { + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, address(this))); + settlement.approveAggregator(user); + + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, address(this))); + settlement.setWhitelistRegistry(registry); + + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, address(this))); + settlement.setFeeModule(feeModule); + + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, address(this))); + settlement.setMaxTxPerBatch(1); + + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, address(this))); + settlement.setTimelockDuration(1); + + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, address(this))); + settlement.setToken(token); + } + + /* -------------------------------------------------------------------------- */ + /* pause/unpause */ + /* 
-------------------------------------------------------------------------- */ + + function test_PauseUnpause_NotAuthorized() public { + vm.startPrank(user); + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, user)); + settlement.pause(); + + vm.expectRevert(abi.encodeWithSelector(Ownable.OwnableUnauthorizedAccount.selector, user)); + settlement.unpause(); + vm.stopPrank(); + } + + function test_PauseUnpause_Success() public { + vm.prank(owner); + settlement.pause(); + assertTrue(settlement.paused()); + + vm.prank(owner); + settlement.unpause(); + assertFalse(settlement.paused()); + } + + function test_ExecuteTransfer_EnforcedPause() public { + vm.prank(owner); + settlement.pause(); + + (bytes32[] memory txProof, bytes32[] memory whitelistProof) = _createMerkleProofs(); + Types.TransferData memory txData = _createTransferData(); + + vm.expectRevert(Pausable.EnforcedPause.selector); + settlement.executeTransfer(txProof, whitelistProof, txData); + } + + /* -------------------------------------------------------------------------- */ + /* SETTERS */ + /* -------------------------------------------------------------------------- */ + + /* setWhitelistRegistry */ + + function test_SetWhitelistRegistry_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.setWhitelistRegistry(address(0)); + } + + function test_SetWhitelistRegistry_AlreadyRegistry() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__AlreadyRegistry.selector); + settlement.setWhitelistRegistry(registry); + } + + function test_SetWhitelistRegistry_SetsAndEmits() public { + address newRegistry = makeAddr("newRegistry"); + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.WhitelistRegistryUpdated(newRegistry); + settlement.setWhitelistRegistry(newRegistry); + + address actual = settlement.getWhitelistRegistry(); + 
assertEq(actual, newRegistry); + } + + /* setFeeModule */ + + function test_SetFeeModule_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.setFeeModule(address(0)); + } + + function test_SetFeeModule_AlreadyFeeModule() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__AlreadyFeeModule.selector); + settlement.setFeeModule(feeModule); + } + + function test_SetFeeModule_SetsAndEmits() public { + address newFeeModule = makeAddr("newFeeModule"); + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.FeeModuleUpdated(newFeeModule); + settlement.setFeeModule(newFeeModule); + + address actual = settlement.getFeeModule(); + assertEq(actual, newFeeModule); + } + + /* setMaxTxPerBatch */ + + function test_SetMaxTxPerBatch_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.setMaxTxPerBatch(0); + } + + function test_SetMaxTxPerBatch_AlreadySet() public { + vm.prank(owner); + settlement.setMaxTxPerBatch(100); + + vm.prank(owner); + vm.expectRevert(Errors.Settlement__AlreadySet.selector); + settlement.setMaxTxPerBatch(100); + } + + function test_SetMaxTxPerBatch_SetsAndEmits() public { + uint32 maxTx = 150; + + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.MaxTxPerBatchUpdated(maxTx); + settlement.setMaxTxPerBatch(uint32(maxTx)); + + uint32 actual = settlement.getMaxTxPerBatch(); + assertEq(actual, maxTx); + } + + /* setTimelockDuration */ + + function test_SetTimelockDuration_AllowZero() public { + vm.startPrank(owner); + settlement.setTimelockDuration(1); + settlement.setTimelockDuration(0); + vm.stopPrank(); + + uint256 actual = settlement.getTimelockDuration(); + assertEq(actual, 0); + } + + function test_SetTimelockDuration_AlreadyTimelock_Zero() public { + vm.prank(owner); + settlement.setTimelockDuration(0); + + vm.prank(owner); + 
vm.expectRevert(Errors.Settlement__AlreadyTimelockDuration.selector); + settlement.setTimelockDuration(0); + } + + function test_SetTimelockDuration_AlreadyTimelock() public { + vm.startPrank(owner); + + vm.expectRevert(Errors.Settlement__AlreadyTimelockDuration.selector); + settlement.setTimelockDuration(uint32(TC.TIMELOCK_DURATION)); + + settlement.setTimelockDuration(600); + + vm.expectRevert(Errors.Settlement__AlreadyTimelockDuration.selector); + settlement.setTimelockDuration(600); + + vm.stopPrank(); + } + + function test_SetTimelockDuration_SetsAndEmits() public { + uint48 duration = 600; + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.TimelockDurationUpdated(duration); + settlement.setTimelockDuration(duration); + + uint48 actual = settlement.getTimelockDuration(); + assertEq(actual, duration); + } + + /* setToken */ + + function test_SetToken_InvalidInput() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.setToken(address(0)); + } + + function test_SetToken_AlreadyToken() public { + vm.prank(owner); + vm.expectRevert(Errors.Settlement__AlreadyToken.selector); + settlement.setToken(token); + } + + function test_SetToken_SetsAndEmits() public { + address newToken = makeAddr("newToken"); + + vm.prank(owner); + vm.expectEmit(true, false, false, false); + emit ISettlement.TokenUpdated(newToken); + settlement.setToken(newToken); + + address actual = settlement.getToken(); + assertEq(actual, newToken); + } + + /* -------------------------------------------------------------------------- */ + /* GETTERS */ + /* -------------------------------------------------------------------------- */ + + function test_AllGetters() public { + assertEq(settlement.getOwner(), owner); + + // getBatchIdByHash root==0 + vm.expectRevert(Errors.Settlement__InvalidInput.selector); + settlement.getBatchIdByRoot(bytes32(0)); + + vm.prank(owner); + address newAggregator = address(0xBEEF); + 
settlement.approveAggregator(newAggregator); + assertTrue(settlement.isApprovedAggregator(newAggregator)); + + vm.prank(owner); + settlement.disapproveAggregator(newAggregator); + assertFalse(settlement.isApprovedAggregator(newAggregator)); + + vm.prank(owner); + bytes32 root = keccak256("root"); + (bool ok, uint256 batchId) = settlement.submitBatch(root, 3, 1); + assertTrue(ok); + + assertEq(settlement.getCurrentBatchId(), batchId); + assertEq(settlement.getBatchIdByRoot(root), batchId); + assertEq(settlement.getRootByBatchId(uint64(batchId)), root); + + Types.Batch memory batch = settlement.getBatchById(uint64(batchId)); + assertEq(batch.merkleRoot, root); + assertEq(batch.txCount, 3); + assertEq(batch.timestamp, block.timestamp); + assertEq(batch.unlockTime, block.timestamp + settlement.getTimelockDuration()); + + Types.TransferData memory txData = Types.TransferData({ + from: address(this), + to: address(0x1234), + amount: 1, + nonce: 1, + timestamp: uint48(block.timestamp), + recipientCount: 1, + batchId: uint64(batchId), + txType: Types.TxType.DELAYED + }); + + bytes32 txHash = keccak256( + abi.encodePacked( + txData.from, + txData.to, + txData.amount, + txData.nonce, + txData.timestamp, + txData.recipientCount, + txData.txType + ) + ); + + assertFalse(settlement.isExecutedTransfer(txHash)); + } +} diff --git a/contracts/test/utils/IntegrationDeployHelpers.sol b/contracts/test/utils/IntegrationDeployHelpers.sol new file mode 100644 index 0000000..3452123 --- /dev/null +++ b/contracts/test/utils/IntegrationDeployHelpers.sol @@ -0,0 +1,58 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +import {Test} from "forge-std/Test.sol"; +import {DeployFeeModule} from "../../script/for-tests/DeployFeeModule.s.sol"; +import {DeploySettlement} from "../../script/for-tests/DeploySettlement.s.sol"; +import {DeployRegistry} from "../../script/for-tests/DeployRegistry.s.sol"; + +import {ERC20Mock} from "@openzeppelin/contracts/mocks/token/ERC20Mock.sol"; +import 
{FeeModule} from "../../src/FeeModule.sol"; +import {Settlement} from "../../src/Settlement.sol"; +import {WhitelistRegistry} from "../../src/WhitelistRegistry.sol"; + +abstract contract IntegrationDeployHelpers is Test { + DeployRegistry internal _registryDeployer; + WhitelistRegistry internal registry; + + DeployFeeModule internal _feeDeployer; + FeeModule internal feeModule; + + DeploySettlement internal _settlementDeployer; + Settlement internal settlement; + + ERC20Mock mockToken; + + address internal user; + uint256 internal userPrivKey; + + address internal user2; + uint256 internal user2PrivKey; + + function _initUser() internal { + (user, userPrivKey) = makeAddrAndKey("user"); + } + + function _initUser2() internal { + (user2, user2PrivKey) = makeAddrAndKey("user2"); + } + + function _initFeeModule() internal { + _feeDeployer = new DeployFeeModule(); + feeModule = _feeDeployer.run(); + } + + function _initRegistry() internal { + _registryDeployer = new DeployRegistry(); + registry = _registryDeployer.run(); + } + + function _initSettlement() internal { + _settlementDeployer = new DeploySettlement(); + settlement = _settlementDeployer.run(); + } + + function _initToken() internal { + mockToken = new ERC20Mock(); + } +} diff --git a/contracts/test/utils/TestConstants.sol b/contracts/test/utils/TestConstants.sol new file mode 100644 index 0000000..b28ef2e --- /dev/null +++ b/contracts/test/utils/TestConstants.sol @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: MIT +pragma solidity 0.8.25; + +library TestConstants { + uint256 internal constant BASE_FEE = 100_000; // Base fee = 0.1 TRX + uint256 internal constant BATCH_FEE = 50_000; // Batch fee = 0.05 TRX per recipient + uint256 internal constant INSTANT_FEE = 200_000; // Instant fee = 0.2 TRX + uint256 internal constant FREE_TX_AMOUNT = 10; // Free tier = first 10 tx/day for unbatched small users + uint256 internal constant LARGE_VOLUME = 1_000_000_000; + uint256 internal constant VOLUME = 10_000; + + uint256 
internal constant REQUEST_COOLDOWN = 24 hours; + uint256 internal constant REQUEST_FEE = 10e6; // 10 TRX + bytes32 internal constant WITHDRAW_ROLE = keccak256("WITHDRAW_ROLE"); + bytes32 internal constant DEFAULT_ADMIN_ROLE = 0x00; + + uint256 internal constant MAX_TX_PER_BATCH = 22; + uint256 internal constant TIMELOCK_DURATION = 1 days; + + address internal constant UPDATER = 0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266; +} diff --git a/docs/Contract structure overview.png b/docs/Contract structure overview.png new file mode 100644 index 0000000..8be4845 Binary files /dev/null and b/docs/Contract structure overview.png differ diff --git a/docs/On-chain contract dependencies.png b/docs/On-chain contract dependencies.png new file mode 100644 index 0000000..b0f33e0 Binary files /dev/null and b/docs/On-chain contract dependencies.png differ diff --git a/docs/System flow diagram.png b/docs/System flow diagram.png new file mode 100644 index 0000000..3fc2c4f Binary files /dev/null and b/docs/System flow diagram.png differ diff --git a/docs/TIP-batch-MVP-guide.md b/docs/TIP-batch-MVP-guide.md new file mode 100644 index 0000000..4342629 --- /dev/null +++ b/docs/TIP-batch-MVP-guide.md @@ -0,0 +1,613 @@ +# TIP-batch MVP guide + +- [TIP-batch MVP guide](#tip-batch-mvp-guide) + - [General technical guide](#general-technical-guide) + - [Purpose and scope](#purpose-and-scope) + - [Audience](#audience) + - [Primary audience](#primary-audience) + - [Secondary audience](#secondary-audience) + - [MVP scope versus proposal scope](#mvp-scope-versus-proposal-scope) + - [What the protocol does in this MVP](#what-the-protocol-does-in-this-mvp) + - [What the protocol does not do in this MVP](#what-the-protocol-does-not-do-in-this-mvp) + - [Conventions and normative language](#conventions-and-normative-language) + - [System model](#system-model) + - [Component map](#component-map) + - [Roles and responsibilities](#roles-and-responsibilities) + - [Architecture sketch 
alignment](#architecture-sketch-alignment) + - [Architecture diagrams](#architecture-diagrams) + - [On-chain contracts](#on-chain-contracts) + - [Settlement contract](#settlement-contract) + - [Whitelist registry contract](#whitelist-registry-contract) + - [Fee module contract](#fee-module-contract) + - [Off-chain components](#off-chain-components) + - [Batch builder and executor service (Java application)](#batch-builder-and-executor-service-java-application) + - [Merkle tooling scripts (Python and Java)](#merkle-tooling-scripts-python-and-java) + - [Data model](#data-model) + - [Transfer leaf payload](#transfer-leaf-payload) + - [Whitelist leaf payload](#whitelist-leaf-payload) + - [Protocol flows](#protocol-flows) + - [Flow A: whitelist root update](#flow-a-whitelist-root-update) + - [Flow B: batch commit](#flow-b-batch-commit) + - [Flow C: approve and transfer execution](#flow-c-approve-and-transfer-execution) + - [Optional versus required elements](#optional-versus-required-elements) + - [Required for MVP operation](#required-for-mvp-operation) + - [Optional in the MVP design](#optional-in-the-mvp-design) + - [Security and trust model](#security-and-trust-model) + - [Centralization and operator trust](#centralization-and-operator-trust) + - [Allowance-based risk surface](#allowance-based-risk-surface) + - [Unlock time semantics](#unlock-time-semantics) + - [TRON resource considerations](#tron-resource-considerations) + - [Developer-oriented notes](#developer-oriented-notes) + - [Design decisions](#design-decisions) + - [Merkle root as batch commitment](#merkle-root-as-batch-commitment) + - [Per-transfer execution instead of single-call settlement](#per-transfer-execution-instead-of-single-call-settlement) + - [Sponsored execution via transferFrom](#sponsored-execution-via-transferfrom) + - [Whitelist gating via Merkle proof](#whitelist-gating-via-merkle-proof) + - [Unlock time as a review window](#unlock-time-as-a-review-window) + - [Off-chain state as 
in-memory storage](#off-chain-state-as-in-memory-storage) + - [Integration notes](#integration-notes) + - [Integration roles](#integration-roles) + - [Integration sequence for a dApp](#integration-sequence-for-a-dapp) + - [Edge cases](#edge-cases) + - [Allowance and balance changes between commit and execution](#allowance-and-balance-changes-between-commit-and-execution) + - [Nonce collisions and replay](#nonce-collisions-and-replay) + - [Token contract behavior](#token-contract-behavior) + - [Batch composition risks](#batch-composition-risks) + - [Limitations and non-goals](#limitations-and-non-goals) + - [Reference and examples](#reference-and-examples) + - [Terms](#terms) + - [Examples](#examples) + - [Transfer leaf payload](#transfer-leaf-payload) + - [Example whitelist input](#example-whitelist-input) + - [Sequence diagram: whitelist root update](#sequence-diagram-whitelist-root-update) + - [Sequence diagram: batch commit and execution](#sequence-diagram-batch-commit-and-execution) + - [Pseudocode: batch boundary selection](#pseudocode-batch-boundary-selection) + - [Pseudocode: Merkle commitment and per-leaf execution](#pseudocode-merkle-commitment-and-per-leaf-execution) + - [Failure classes](#failure-classes) + +This document describes the general provisions of TIP-batch MVP and consists of the following components: + +1. General technical guide. +2. Developer-oriented notes. +3. Reference and examples. + +## General technical guide {#general-technical-guide} + +### Purpose and scope {#purpose-and-scope} + +This document describes the implemented MVP of TIP-batch (TRON settlement batching layer), based on developer transcripts and a high-level architecture sketch. \ +This document targets protocol-level readers and describes current MVP behavior, not the full proposal design. + +### Audience {#audience} + +#### Primary audience {#primary-audience} + +* TRON core developers and maintainers. +* Protocol and blockchain engineers. 
+* Infrastructure engineers and L1/L2 engineers.
+* Developers who work with TRON TVM, Energy/Bandwidth, and Stake 2.0.
+* TRON community members who author or review TIPs.
+
+#### Secondary audience {#secondary-audience}
+
+* dApp developers who build sponsored execution (MetaFee-like) flows and gasless UX on TRON.
+* Auditors and researchers who analyze protocol designs and MVP implementations.
+
+### MVP scope versus proposal scope {#mvp-scope-versus-proposal-scope}
+
+#### What the protocol does in this MVP {#what-the-protocol-does-in-this-mvp}
+
+* The system commits a Merkle root for a set of TRC-20 transfers to an on-chain settlement contract.
+* The system enforces an unlock time (time lock) before transfer execution.
+* The system executes each transfer on-chain after the caller supplies a Merkle inclusion proof.
+* The system optionally gates a “batch-transfer” type via a whitelist Merkle proof for the sender address (txData.from).
+* The system computes “virtual” fees via a fee module contract, without enforcing real fee collection in the MVP.
+
+#### What the protocol does not do in this MVP {#what-the-protocol-does-not-do-in-this-mvp}
+
+* The system does not execute an entire batch with a single on-chain token transfer call.
+* The system does not implement a full dispute mechanism (fraud-proof and on-chain rollback).
+* The system does not define a complete user-signed intent scheme in the transcripts.
+
+Not specified in the transcripts:
+
+* Existence of user-signed transfer intents in code, including signature verification and authorization rules.
+* Existence of on-chain dispute hooks, batch invalidation, or batch cancellation.
+* Exact mapping between the published proposal terminology and the deployed MVP contracts.
+
+### Conventions and normative language {#conventions-and-normative-language}
+
+* MUST, MUST NOT, SHOULD, SHOULD NOT, MAY indicate normative requirements for the MVP flows described in this guide.
+* “Batch commitment” means the on-chain record that anchors a batch root and related metadata. +* “Transfer leaf” means the off-chain representation that the Merkle tree uses as a leaf payload, and that the settlement contract verifies via an inclusion proof. +* “Sender” means txData.from in the transfer payload passed to Settlement.executeTransfer. +* “Executor” means the account that submits on-chain transactions (submitBatch, executeTransfer) and consumes TRON resources (Energy/Bandwidth). +* “Whitelist gating” means a membership check for the sender (txData.from) under the WhitelistRegistry Merkle root, when the transaction type uses batch-transfer gating. + +### System model {#system-model} + +#### Component map {#component-map} + +On-chain components: + +* Settlement contract. +* Whitelist registry contract. +* Fee module contract. + +Off-chain components: + +* Batch builder and executor service (Java application in the MVP). +* Merkle tooling scripts (Python and Java in the MVP). +* TRON node RPC endpoint (external node, as described in transcripts). + +#### Roles and responsibilities {#roles-and-responsibilities} + +Batch builder (off-chain): + +* Collects transfer requests into a queue. +* Chooses batch boundaries (count threshold or time window). +* Builds a Merkle tree and produces a batch root. +* Produces Merkle proofs for each transfer leaf. +* Submits the batch root to the settlement contract. + +Executor (off-chain): + +* Calls executeTransfer on the settlement contract for each transfer leaf after unlock time. +* Pays TRON resource costs (Energy/Bandwidth) for the on-chain transactions. +* Treats whitelist proofs as proofs for the sender (txData.from), not for the executor address. + +Root signer (off-chain key role): + +* Signs the whitelist Merkle root, which authorizes an on-chain update of the whitelist root. + +Settlement contract (on-chain): + +* Stores batch commitments (root and metadata). +* Enforces unlock time for each committed batch. 
+* Verifies Merkle inclusion proofs for transfer leaves. +* Calls TRC-20 transferFrom to execute token transfers. +* Marks executed transfers to prevent replay. +* Calls the fee module for virtual fee computation. +* Queries the whitelist registry for sender whitelist gating (txData.from) when required by transaction type. + +Whitelist registry contract (on-chain): + +* Stores a whitelist Merkle root for eligible sender addresses (txData.from). +* Verifies authorization for updating the root via a role system and a root signature check. +* Emits an event for “request whitelist” as an off-chain signal. + +Fee module contract (on-chain): + +* Computes virtual fees by transaction type and parameters. +* Restricts fee application calls to the settlement contract. +* Does not enforce real fee collection in the MVP. + +#### Architecture sketch alignment {#architecture-sketch-alignment} + +The architecture sketch shows the following call direction: + +* The node commits a batch to the settlement contract. +* The settlement contract calls the whitelist registry and fee module. +* The settlement contract executes TRC-20 transfers via transferFrom. + +#### Architecture diagrams {#architecture-diagrams} + +System flow diagram: + +![System flow diagram.png](System%20flow%20diagram.png) + +On-chain contract dependencies: + +![On-chain contract dependencies.png](On-chain%20contract%20dependencies.png) + +Contract structure overview: + +![Contract structure overview.png](Contract%20structure%20overview.png) + +### On-chain contracts {#on-chain-contracts} + + +#### Settlement contract {#settlement-contract} + +Responsibilities: + +* Accept a batch commitment (Merkle root and batch metadata). +* Enforce an unlock time (challenge window) before executing transfers for that batch. +* Execute a single transfer per call, based on a verified Merkle proof. +* Prevent replay by marking a transfer as executed. 
+
+Key operations:
+
+* Submit batch: store the batch root and metadata, and set an unlock time.
+* Execute transfer: verify inclusion, verify optional whitelist membership for txData.from, compute fee, execute transferFrom, mark executed.
+
+Batch id semantics: \
+Settlement derives batchId during batch submission and uses batchId as the on-chain key for batches[batchId]. \
+Off-chain systems MAY treat batchId as a logical handle for monitoring, indexing, and API correlation. \
+Transfer leaf hashing does not require batchId when leaf encoding includes a root-binding mechanism (for example, batchSalt in metadata or an equivalent binding), because the Merkle proof already ties the leaf to the committed root.
+
+Per-transfer execution model:
+
+* The settlement contract executes exactly one TRC-20 transfer per executeTransfer call.
+* Recipient count affects fee computation and tree structure metadata, but recipient count does not reduce the number of on-chain token transfer calls.
+
+#### Whitelist registry contract {#whitelist-registry-contract}
+
+Responsibilities:
+
+* Store a whitelist root that represents eligible sender addresses (txData.from).
+* Provide a proof-based membership check for batch-transfer gating of txData.from.
+* Provide administrative control over root updates via roles and signature checks.
+
+Root update model:
+
+* The root signer signs a new whitelist root off-chain.
+* Any account MAY submit the signed root to the whitelist registry contract.
+* The contract verifies the root signature and updates the stored whitelist root.
+* An admin role manages which accounts can manage updater roles, as described in transcripts.
+
+Request whitelist model:
+
+* A user MAY call a requestWhitelist-like function and pay a small fee.
+* The contract emits an event.
+* An off-chain process MUST observe this event and decide whether to include the address in the next root.
+
+Not specified in the transcripts:
+
+* Exact role identifiers and role hierarchy in the whitelist registry.
+* Fee handling for request whitelist, including fee recipient and accounting.
+* Off-chain policy and SLA for processing request whitelist events.
+
+#### Fee module contract {#fee-module-contract}
+
+Responsibilities:
+
+* Compute virtual fees for analytics and future enforcement.
+* Apply fee accounting only when the settlement contract calls the fee module.
+
+Fee types described in transcripts:
+
+* Base fee for a standard transfer type.
+* Batch fee for a batch-transfer type, lower than other types.
+* Instant fee for an “instant” type, without actual prioritization in the MVP.
+* Free tier of 10 transfers per day for each user, gated by a transaction type choice.
+* Volume-based fee adjustments based on fixed constants defined in the contract.
+
+Not specified in the transcripts:
+
+* Exact constants and thresholds, including “volume” definition.
+* Exact rules for “10 free transfers per day”, including day boundary and per-user accounting state.
+* Whether the MVP stores fee counters on-chain or treats fees as off-chain metrics only.
+
+### Off-chain components {#off-chain-components}
+
+#### Batch builder and executor service (Java application) {#batch-builder-and-executor-service-java-application}
+
+Batch boundary rules in the MVP: \
+The MVP uses two batch boundary conditions, and either condition triggers batch creation.
+
+* Count condition: the service creates a batch when the queue reaches a fixed number of transfers (example value: 5 transfers).
+* Time condition: the service creates a batch when a fixed time window elapses (example value: 30 seconds), even if the queue contains fewer transfers.
+
+State storage in the MVP:
+
+* The MVP stores batches and transfers in memory and locally, without a database.
+* The MVP accepts this limitation due to delivery time constraints.
+
+TRON connectivity in the MVP:
+
+* The MVP uses a Java library described in the transcripts as “3Dent” (likely a mishearing of Trident, the TRON Java SDK) to access TRON nodes and interact with smart contracts.
+* The service sends requests to an external TRON node.
+
+Not specified in the transcripts:
+
+* Exact library name, artifact coordinates, and supported features (signing, contract calls, event decoding).
+* Exact RPC node type (full node, solidity node) and network (Nile testnet, Shasta, mainnet).
+
+Operational visibility:
+
+* The service exposes controllers that report batch status and per-batch transaction ids.
+* The service supports opening transaction ids in a TRON block explorer for testnet validation.
+
+#### Merkle tooling scripts (Python and Java) {#merkle-tooling-scripts-python-and-java}
+
+Responsibilities:
+
+* Generate the whitelist Merkle root from an address list.
+* Generate the batch Merkle root from transfer leaf payloads.
+* Generate inclusion proofs for transfer execution.
+* Support deployment and end-to-end demo flows, as described in transcripts.
+
+Not specified in the transcripts:
+
+* Leaf encoding rules, hash function, and concatenation rules.
+* Sorting rules, padding rules, and odd-leaf handling.
+* Cross-language consistency checks between Python and Java implementations.
+
+### Data model {#data-model}
+
+#### Transfer leaf payload {#transfer-leaf-payload}
+
+The transcripts describe the following logical fields for a transfer leaf:
+
+* Sender address (from, equals txData.from).
+* Recipient address (to).
+* Token amount (amount).
+* Timestamp (timestamp).
+* Nonce (nonce).
+* Transaction type (type).
+* Recipient count (recipientCount), used for fee computation, not for on-chain execution fan-out.
+* Batch reference fields (batchId) as off-chain metadata, where Settlement defines batchId at submit time, and leaf hashing can omit batchId when root-binding exists.
+
+#### Whitelist leaf payload {#whitelist-leaf-payload}
+
+* The whitelist leaf represents a sender address membership element, typically an address or an address hash.
+
+Not specified in the transcripts:
+
+* Whether the tree uses raw addresses or hashed addresses as leaves.
+* Whether the contract normalizes addresses before hashing.
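Because leaf encoding, hash function, and odd-leaf handling are open points, cross-language tooling needs one fixed convention. The following Python sketch, in the spirit of the MVP's Merkle tooling scripts, shows one plausible convention: hash each payload, pair siblings in sorted order, and duplicate an odd node. SHA-256 stands in for the keccak256 that on-chain verification would use, and every function name here is illustrative rather than the actual tooling API.

```python
import hashlib


def h(data: bytes) -> bytes:
    # SHA-256 as a stand-in; on-chain verification would use keccak256.
    return hashlib.sha256(data).digest()


def hash_pair(a: bytes, b: bytes) -> bytes:
    # Sorted-pair hashing: sibling order does not matter, which simplifies proofs.
    return h(min(a, b) + max(a, b))


def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
    # Bottom-up tree; an odd node is paired with a copy of itself.
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = list(levels[-1])
        if len(cur) % 2 == 1:
            cur.append(cur[-1])
        levels.append([hash_pair(cur[i], cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels


def merkle_root(leaves: list[bytes]) -> bytes:
    return build_levels(leaves)[-1][0]


def merkle_proof(leaves: list[bytes], index: int) -> list[bytes]:
    # Collect the sibling hash at each level below the root.
    proof = []
    for level in build_levels(leaves)[:-1]:
        nodes = list(level)
        if len(nodes) % 2 == 1:
            nodes.append(nodes[-1])
        proof.append(nodes[index ^ 1])
        index //= 2
    return proof


def verify(leaf: bytes, proof: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = hash_pair(node, sibling)
    return node == root
```

Sorted-pair hashing keeps proofs order-independent, matching the common OpenZeppelin MerkleProof style the contracts could reuse. Whichever convention the tooling actually picks, the Python and Java implementations MUST agree byte-for-byte.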
+ +### Protocol flows {#protocol-flows} + +#### Flow A: whitelist root update {#flow-a-whitelist-root-update} + +1. The off-chain process collects eligible sender addresses (txData.from candidates). +2. The off-chain process builds a whitelist Merkle tree and produces a whitelist root. +3. The root signer signs the whitelist root. +4. A submitter sends the signed root to the whitelist registry contract. +5. The whitelist registry contract verifies the signature and stores the new root. + +#### Flow B: batch commit {#flow-b-batch-commit} + +1. The batch builder collects transfer requests into a queue. +2. The batch builder selects a batch boundary by count or time window. +3. The batch builder builds a batch Merkle tree and produces a batch root. +4. The submitter commits the batch root to the settlement contract. +5. The settlement contract stores the commitment and sets an unlock time. +6. The settlement contract defines batchId as an internal identifier, and off-chain systems treat batchId as a logical handle for monitoring and correlation. + +#### Flow C: approve and transfer execution {#flow-c-approve-and-transfer-execution} + +1. The sender submits a TRC-20 approve transaction that grants allowance to the settlement contract. +2. The executor waits until the unlock time passes. +3. The executor calls executeTransfer for a specific transfer leaf and supplies the leaf payload and Merkle proof. +4. The executor supplies a whitelist proof for the sender address (txData.from) when the transaction type requires whitelist gating. +5. The settlement contract verifies proofs, computes a virtual fee, and calls TRC-20 transferFrom. +6. The settlement contract marks the transfer as executed and rejects repeated execution attempts. + +### Optional versus required elements {#optional-versus-required-elements} + +#### Required for MVP operation {#required-for-mvp-operation} + +* Settlement contract deployment and configuration. 
+* Off-chain batch builder that produces batch roots and proofs.
+* A funded executor account that can pay TRON resources for on-chain calls.
+* TRC-20 approve from each sender address, sized to the intended transfer amounts.
+
+#### Optional in the MVP design {#optional-in-the-mvp-design}
+
+* Whitelist gating for batch-transfer types.
+* Fee module integration beyond analytics and virtual accounting.
+* Request whitelist flow, beyond event emission.
+* Operational controllers and dashboards.
+
+### Security and trust model {#security-and-trust-model}
+
+#### Centralization and operator trust {#centralization-and-operator-trust}
+
+* The MVP assumes a trusted operator group for batch creation and root submission.
+* The MVP uses contract ownership and role-based access control for administrative actions.
+* This trust assumption acts as an explicit MVP trade-off, and Merkle commitments plus per-leaf execution provide a baseline for a later permissionless model with additional constraints.
+
+#### Allowance-based risk surface {#allowance-based-risk-surface}
+
+* The settlement contract uses transferFrom, so sender allowances define the maximum amount the settlement contract can transfer.
+* A sender SHOULD scope allowances to intended amounts to limit exposure.
+
+Not specified in the transcripts:
+
+* Existence of per-transfer user signatures, and how the contract verifies them.
+* Existence of additional constraints that bind leaf “from” to an authenticated actor.
+
+#### Unlock time semantics {#unlock-time-semantics}
+
+* The settlement contract enforces a time lock before execution.
+* The MVP does not implement an on-chain fraud proof or on-chain batch rollback, per the transcripts.
+* Unlock time acts as an operational review window for the operator group or automated checks.
+
+Not specified in the transcripts:
+
+* Whether the system supports batch cancellation before unlock.
+* Whether the system supports marking a batch invalid after commit.
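To make the unlock-time semantics concrete, here is a minimal Python sketch of the gate, assuming a fixed review window and an operator flag that models the open cancellation question above. The 600-second window and all names are invented for the example, not taken from the contracts.

```python
REVIEW_WINDOW_SECONDS = 600  # illustrative; the MVP's window length is not specified


class BatchCommitment:
    def __init__(self, root: bytes, committed_at: int):
        self.root = root
        # Unlock time is fixed at commit: execution is blocked until it passes.
        self.unlock_time = committed_at + REVIEW_WINDOW_SECONDS
        # Hypothetical operator flag; whether on-chain cancellation exists is open.
        self.flagged = False

    def executable(self, now: int) -> bool:
        # A transfer in this batch can execute only after the review window,
        # and only if the operator did not flag the batch during that window.
        return now >= self.unlock_time and not self.flagged
```

The review window is only as useful as the checks run inside it; without a flag-style control point (or on-chain cancellation), the delay postpones execution but cannot stop it.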
+
+#### TRON resource considerations {#tron-resource-considerations}
+
+* The executor account pays Energy/Bandwidth for commit batch and execute transfer calls.
+* Stake 2.0 resource provisioning determines sustainable throughput for the executor.
+
+Not specified in the transcripts:
+
+* Measured Energy/Bandwidth usage per commit batch and per execute transfer.
+* Maximum batch size limits and expected commit cadence.
+
+## Developer-oriented notes {#developer-oriented-notes}
+
+### Design decisions {#design-decisions}
+
+#### Merkle root as batch commitment {#merkle-root-as-batch-commitment}
+
+* The MVP uses a Merkle root to anchor a set of transfers with constant-size on-chain storage per batch.
+* The MVP verifies inclusion per transfer via a Merkle proof and executes transfers individually.
+
+#### Per-transfer execution instead of single-call settlement {#per-transfer-execution-instead-of-single-call-settlement}
+
+* The MVP prioritizes correctness and implementation speed by executing one transfer per executeTransfer call.
+* The MVP does not compress multiple transfers into one on-chain state transition.
+
+#### Sponsored execution via transferFrom {#sponsored-execution-via-transferfrom}
+
+* The MVP uses TRC-20 approve and transferFrom so an executor can sponsor transfer execution.
+* The sender pays for approval, and the executor pays for executeTransfer resource costs.
+
+#### Whitelist gating via Merkle proof {#whitelist-gating-via-merkle-proof}
+
+* The MVP stores whitelist membership as a Merkle root to avoid large on-chain address lists.
+* The MVP requires a whitelist proof only for batch-transfer types, and the whitelist proof targets the sender address (txData.from).
+
+#### Unlock time as a review window {#unlock-time-as-a-review-window}
+
+* The MVP inserts a time delay between commit and execution.
+* The MVP uses unlock time as a control point for operator review, without an on-chain dispute mechanism.
+
+#### Off-chain state as in-memory storage {#off-chain-state-as-in-memory-storage}
+
+* The MVP uses in-memory and local storage for speed of delivery.
+* The MVP accepts restart and durability risks in exchange for a short lead time.
+
+### Integration notes {#integration-notes}
+
+#### Integration roles {#integration-roles}
+
+* The batch builder service acts as an aggregator and as an executor in the MVP.
+* A production system MAY separate aggregator and executor roles for isolation and security.
+
+#### Integration sequence for a dApp {#integration-sequence-for-a-dapp}
+
+* The dApp collects transfer parameters off-chain and forwards them to the batch builder.
+* The dApp prompts the user to submit a TRC-20 approve transaction for the settlement contract.
+* The batch builder commits a batch and executes transfers after unlock time.
+* The dApp reads status via the batch builder controllers and on-chain events.
+
+Not specified in the transcripts:
+
+* Existence of a stable API surface for the batch builder service, including request schemas and authentication.
+* Existence of an intent signature scheme or attestations for transfer authorization.
+
+### Edge cases {#edge-cases}
+
+#### Allowance and balance changes between commit and execution {#allowance-and-balance-changes-between-commit-and-execution}
+
+* A sender MAY reduce allowance after commit, which causes transferFrom to fail.
+* A sender MAY spend tokens after commit, which reduces balance and causes transferFrom to fail.
+* The executor SHOULD handle partial execution failures and report per-transfer status.
+
+#### Nonce collisions and replay {#nonce-collisions-and-replay}
+
+* A transfer leaf SHOULD carry a nonce that prevents replay.
+* The settlement contract MUST reject repeated execution attempts for the same leaf or nonce, depending on implementation.
+
+Not specified in the transcripts:
+
+* Whether the contract tracks nonces per sender or tracks leaf hashes.
+* Whether the contract rejects duplicate leaf payloads across batches.
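A minimal Python sketch of the leaf-hash variant of replay tracking (whether the contract tracks leaf hashes or per-sender nonces is an open point; SHA-256 stands in for the on-chain leaf hash, and all names are illustrative):

```python
import hashlib


class ReplayGuard:
    """Tracks executed transfers by leaf hash; per-sender nonce tracking is
    the alternative the transcripts leave open."""

    def __init__(self) -> None:
        self._executed: set[bytes] = set()

    def mark_executed(self, leaf_payload: bytes) -> None:
        # SHA-256 stands in for the keccak256 leaf hash used on-chain.
        leaf_hash = hashlib.sha256(leaf_payload).digest()
        if leaf_hash in self._executed:
            raise ValueError("transfer already executed (replay attempt)")
        self._executed.add(leaf_hash)
```

Note that pure leaf-hash tracking also rejects a byte-identical payload committed in a later batch, which is one reason a transfer leaf SHOULD carry a nonce.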
+
+#### Token contract behavior {#token-contract-behavior}
+
+* TRC-20 tokens differ in revert patterns and return values.
+* The settlement contract SHOULD handle non-standard TRC-20 behaviors if the design targets multiple tokens.
+
+Not specified in the transcripts:
+
+* Whether the MVP targets USDT/USDC only, or arbitrary TRC-20 tokens.
+* Whether the settlement contract uses safe wrappers for token calls.
+
+#### Batch composition risks {#batch-composition-risks}
+
+* A trusted operator defines batch contents in the MVP.
+* Operator errors in leaf construction cause execution failures or unintended transfers.
+
+### Limitations and non-goals {#limitations-and-non-goals}
+
+* The MVP does not implement a full rollup dispute system.
+* The MVP does not minimize on-chain transfer calls, because each leaf executes separately.
+* The MVP does not define a complete permissionless batch submission model in the transcripts.
+* The MVP does not define a decentralized whitelist update mechanism in the transcripts.
+
+## Reference and examples {#reference-and-examples}
+
+### Terms {#terms}
+
+* Batch: a set of transfer leaves anchored by a Merkle root.
+* Batch commitment: an on-chain record of a batch root and metadata.
+* Transfer leaf: a payload that represents one TRC-20 transfer, hashed into the Merkle tree.
+* Inclusion proof: a Merkle path that proves membership of a leaf under a root.
+* Whitelist root: a Merkle root that represents eligible sender addresses (txData.from) for a gated transfer type.
+* Unlock time: a time lock after commit and before execution.
+* Executor: an account that submits commit and execute transactions and pays TRON resources.
+
+### Examples {#examples}
+
+#### Transfer leaf payload {#transfer-leaf-payload}
+
+The MVP transcripts do not define the exact field ordering, types, and hashing rules. The payload below is therefore illustrative only.
+
+```json
+{
+  "from": "T...sender",
+  "to": "T...recipient",
+  "token": "T...trc20Contract",
+  "amount": "1000000",
+  "nonce": 11,
+  "timestamp": 1730000000,
+  "type": "BATCH",
+  "recipientCount": 3,
+  "batchId": "0x...optional_logical_handle"
+}
+```
+
+#### Example whitelist input {#example-whitelist-input}
+
+```json
+[
+  "T...addr1",
+  "T...addr2",
+  "T...addr3"
+]
+```
+
+#### Sequence diagram: whitelist root update {#sequence-diagram-whitelist-root-update}
+
+Actors: Root signer, Submitter, Whitelist registry contract.
+
+1. Root signer -> Off-chain tooling: Build whitelist Merkle root.
+2. Root signer -> Off-chain tooling: Sign whitelist Merkle root.
+3. Submitter -> Whitelist registry contract: Update root with signed root.
+4. Whitelist registry contract -> On-chain state: Store new whitelist root.
+
+#### Sequence diagram: batch commit and execution {#sequence-diagram-batch-commit-and-execution}
+
+Actors: Batch builder, Settlement contract, Whitelist registry, Fee module, TRC-20 token.
+
+1. Batch builder -> Off-chain state: Collect transfer requests.
+2. Batch builder -> Off-chain tooling: Build batch Merkle root and proofs.
+3. Batch builder -> Settlement contract: Commit batch root and metadata.
+4. Settlement contract -> On-chain state: Store commitment and unlock time.
+5. Sender -> TRC-20 token: Approve settlement contract allowance.
+6. Batch builder -> Settlement contract: Call executeTransfer with leaf + proof (+ whitelist proof when required).
+7. Settlement contract -> Whitelist registry: Verify whitelist membership for the sender address (txData.from) when required.
+8. Settlement contract -> Fee module: Compute virtual fee.
+9. Settlement contract -> TRC-20 token: Call transferFrom(from, to, amount).
+10. Settlement contract -> On-chain state: Mark leaf as executed.
+
+#### Pseudocode: batch boundary selection {#pseudocode-batch-boundary-selection}
+
+```
+state queue := []
+state windowStart := now()
+
+onTransferRequest(req):
+    enqueue(queue, req)
+    if size(queue) >= COUNT_THRESHOLD:
+        createBatchAndCommit(queue)
+        clear(queue)
+        windowStart := now()
+        return
+    if now() - windowStart >= TIME_WINDOW:
+        createBatchAndCommit(queue)
+        clear(queue)
+        windowStart := now()
+        return
+```
+
+#### Pseudocode: Merkle commitment and per-leaf execution {#pseudocode-merkle-commitment-and-per-leaf-execution}
+
+```
+function createBatchAndCommit(queue):
+    leaves := map(queue, encodeLeafPayload)
+    root := merkleRoot(leaves)
+    metadata := buildBatchMetadata(queue)
+    sendTx(settlement.commitBatch, root, metadata)
+    batchId := readEvent(BatchSubmitted).batchId
+
+function executeLeaf(batchId, leafPayload, merkleProof, whitelistProofOpt):
+    assert(now() >= settlement.unlockTime(batchId))
+    assert(settlement.verifyInclusion(leafPayload, merkleProof))
+    if leafPayload.type == "BATCH":
+        assert(whitelistRegistry.verifyWhitelist(leafPayload.from, whitelistProofOpt))
+    fee := feeModule.compute(leafPayload)
+    assert(trc20.allowance(leafPayload.from, settlement.address) >= leafPayload.amount)
+    sendTx(settlement.executeTransfer, leafPayload, merkleProof, whitelistProofOpt)
+```
+
+### Failure classes {#failure-classes}
+
+* Batch not found or not committed.
+* Batch locked due to unlock time.
+* Merkle proof is invalid for the supplied leaf.
+* Whitelist proof missing or invalid for a gated type.
+* Transfer already executed (replay attempt).
+* Nonce invalid or reused, depending on implementation.
+* TRC-20 allowance insufficient or balance insufficient.
+* TRC-20 token call failure due to non-standard behavior.
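For per-transfer status reporting, the executor can split these failure classes into retryable and permanent outcomes: an invalid proof or a replay never recovers, while a locked batch or a missing allowance can clear later. A hypothetical Python sketch; the actual revert-reason identifiers the contracts emit are not specified in the transcripts.

```python
# Hypothetical revert-reason identifiers; the contracts' real error
# strings or selectors are not specified in the transcripts.
PERMANENT = {"invalid_proof", "already_executed", "invalid_nonce", "whitelist_proof_invalid"}
TRANSIENT = {"batch_not_found", "batch_locked", "insufficient_allowance", "insufficient_balance"}


def classify(reason: str) -> str:
    """Map a revert reason to a retry policy for per-transfer status reports."""
    if reason in PERMANENT:
        return "permanent"   # drop the leaf and report a final failure
    if reason in TRANSIENT:
        return "retryable"   # retry after unlock, commit, or user action
    return "unknown"
```

"batch_not_found" sits in the retryable set here because a commit may still be pending; a production executor would also cap retries and surface "unknown" reasons for operator review.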