# Installation

This guide walks you through setting up the Fossil Headers DB indexer on your local machine or server.
## Prerequisites

- **Rust toolchain (1.70 or later)**

  ```bash
  # Install rustup (Rust installer and version manager)
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

  # Verify installation
  rustc --version
  cargo --version
  ```
- **PostgreSQL (16 or later)**

  Option A: Install locally (macOS)

  ```bash
  brew install postgresql@16
  brew services start postgresql@16
  ```

  Option B: Install locally (Linux)

  ```bash
  # Ubuntu/Debian
  sudo apt-get update
  sudo apt-get install postgresql-16 postgresql-contrib
  sudo systemctl start postgresql
  sudo systemctl enable postgresql
  ```

  Option C: Use Docker (recommended for development)

  ```bash
  # Docker will be started automatically by `make dev-up`
  docker --version  # Ensure Docker is installed
  ```
- **Git**

  ```bash
  # macOS
  brew install git

  # Linux (Ubuntu/Debian)
  sudo apt-get install git

  # Verify
  git --version
  ```
- **SQLx CLI (for manual database migrations)**

  ```bash
  cargo install sqlx-cli --version "=0.8.6" --no-default-features --features postgres
  ```

- **Docker Compose (for containerized development)**

  ```bash
  # Usually included with Docker Desktop
  docker compose version
  ```
## Clone the Repository

```bash
git clone https://github.com/OilerNetwork/fossil-headers-db.git
cd fossil-headers-db
```

## Install Dependencies

The project uses the Makefile for easy dependency management:
```bash
make install
```

This command will:
- Fetch all Rust dependencies via Cargo
- Install SQLx CLI for database migrations
- Verify toolchain compatibility
Manual alternative:
```bash
# Fetch Rust dependencies
cargo fetch

# Install SQLx CLI
cargo install sqlx-cli --version "=0.8.6" --no-default-features --features postgres
```

## Set Up the Database

Option A: Automated setup (Docker)

```bash
make dev-up
```

This will:
- Create a Docker network (`fossil-network`)
- Start PostgreSQL 16 in a Docker container
- Wait for the database to be healthy
- Automatically run all database migrations
- Expose PostgreSQL on `localhost:5432`
Option B: Manual setup (Local PostgreSQL)
```bash
# Create database
createdb fossil_headers

# Set DATABASE_URL for migrations
export DATABASE_URL="postgresql://postgres:password@localhost:5432/fossil_headers"

# Run migrations
sqlx migrate run
```

## Configure the Environment

Create a `.env` file in the project root:

```bash
cp .env.example .env
```

Edit `.env` with your configuration:
```bash
# Database connection
DB_CONNECTION_STRING=postgresql://postgres:postgres@localhost:5432/postgres

# Ethereum RPC endpoint (required - get from Alchemy, Infura, or your node)
NODE_CONNECTION_STRING=https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY_HERE

# HTTP server configuration
ROUTER_ENDPOINT=0.0.0.0:3000
RUST_LOG=info

# Indexer settings
INDEX_TRANSACTIONS=false  # Set to true to index transaction data
START_BLOCK_OFFSET=1024   # Start backfilling from latest - 1024 blocks

# Development mode (loads .env file)
IS_DEV=true
```

**Important:** Replace `YOUR_API_KEY_HERE` with your actual Ethereum RPC provider API key.
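Before moving on, it can help to confirm the required keys are actually present in your `.env`. The sketch below is illustrative: it checks an inlined sample (the example values above) rather than your real file, so the loop is deterministic; point `grep` at `.env` in practice.

```bash
# Check that the required keys appear in a .env-style config.
# Sample contents inlined for illustration; use your real .env instead.
ENV_SAMPLE='DB_CONNECTION_STRING=postgresql://postgres:postgres@localhost:5432/postgres
NODE_CONNECTION_STRING=https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY_HERE
ROUTER_ENDPOINT=0.0.0.0:3000'

for key in DB_CONNECTION_STRING NODE_CONNECTION_STRING ROUTER_ENDPOINT; do
  printf '%s\n' "$ENV_SAMPLE" | grep -q "^${key}=" && echo "${key}: ok"
done
```

Against the real file, the equivalent check is `grep -q "^${key}=" .env || echo "missing: $key"` inside the same loop.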
## Build the Project

Build the project to verify everything is set up correctly:

```bash
make build
```

This compiles the project in release mode. If successful, you'll see:

```text
Compiling fossil_headers_db v0.1.0
Finished release [optimized] target(s) in X.XXs
```
## Run the Tests

Verify the installation by running the test suite:

```bash
make test
```

All tests should pass. If you see failures, check that:

- PostgreSQL is running and accessible
- Database migrations have been applied
- The `.env` file is properly configured
## Set Up an Ethereum RPC Endpoint

You'll need an Ethereum RPC endpoint to index block data. Recommended providers:
- **Alchemy** (recommended)
  - Sign up at https://www.alchemy.com/
  - Create an app for "Ethereum Mainnet"
  - Copy the HTTPS endpoint
  - Free tier: 300M compute units/month
- **Infura**
  - Sign up at https://www.infura.io/
  - Create a project
  - Copy the mainnet endpoint
  - Free tier: 100K requests/day
- **QuickNode**
  - Sign up at https://www.quicknode.com/
  - Create an Ethereum mainnet endpoint
  - Free tier available
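Once you have an endpoint, it's worth verifying it responds before starting the indexer. `eth_blockNumber` is a standard JSON-RPC method that returns the latest block number as a hex quantity; the sketch below shows the call (commented out, since it needs your real API key) and how to convert a sample result to decimal. The hex value `0x1312d00` is purely illustrative.

```bash
# Verify the RPC endpoint responds (requires a valid NODE_CONNECTION_STRING):
#   curl -s -X POST "$NODE_CONNECTION_STRING" \
#     -H 'Content-Type: application/json' \
#     -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
#
# A successful response looks like:
#   {"jsonrpc":"2.0","id":1,"result":"0x1312d00"}
#
# Convert the hex result to a decimal block number with shell arithmetic:
hex_result="0x1312d00"   # sample value for illustration
echo $(( hex_result ))   # prints 20000000
```

If the call returns an error object instead of a `result` field, double-check the API key in your endpoint URL.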
### Running Your Own Node

For production deployments, consider running your own Ethereum node:

```bash
# Using Geth
geth --http --http.api eth,net,web3 --http.addr 0.0.0.0 --http.port 8545

# Using Erigon (recommended for disk efficiency)
erigon --http --http.api eth,net,web3 --http.addr 0.0.0.0 --http.port 8545
```

**Note:** Syncing a full Ethereum node requires:
- 2TB+ SSD storage
- 16GB+ RAM
- Several days to weeks for initial sync
## Database Configuration

If using a local PostgreSQL instance (not Docker):
```bash
# Create user and database
sudo -u postgres psql
postgres=# CREATE USER fossil WITH PASSWORD 'your_password';
postgres=# CREATE DATABASE fossil_headers OWNER fossil;
postgres=# GRANT ALL PRIVILEGES ON DATABASE fossil_headers TO fossil;
postgres=# \q

# Update .env with your credentials
DB_CONNECTION_STRING=postgresql://fossil:your_password@localhost:5432/fossil_headers
```

### Managed PostgreSQL

For production, consider a managed PostgreSQL service:
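The connection string is the standard PostgreSQL URL assembled from the credentials created above. A small sketch using this guide's example values, which makes it easy to swap in your own:

```bash
# Assemble the PostgreSQL connection URL from its parts
DB_USER=fossil
DB_PASS=your_password   # example value from this guide
DB_HOST=localhost
DB_PORT=5432
DB_NAME=fossil_headers

DB_CONNECTION_STRING="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DB_CONNECTION_STRING"
# prints postgresql://fossil:your_password@localhost:5432/fossil_headers

# Verify connectivity (requires the server to be running):
#   psql "$DB_CONNECTION_STRING" -c 'SELECT 1;'
```

Note that passwords containing characters such as `@`, `:`, or `/` must be percent-encoded in the URL.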
- **AWS RDS**
  - Create a PostgreSQL 16 instance
  - Enable automatic backups
  - Configure security groups for access
  - Minimum: db.t3.micro (1 vCPU, 1GB RAM)
  - Recommended: db.t3.small or larger
- **Google Cloud SQL**
  - Create a PostgreSQL 16 instance
  - Configure authorized networks
  - Enable automated backups
- **Azure Database for PostgreSQL**
  - Create a Flexible Server with PostgreSQL 16
  - Configure firewall rules
## Troubleshooting

**Problem:** `rustc` command not found after installation

**Solution:**
```bash
# Reload shell configuration
source $HOME/.cargo/env

# Or add to .bashrc / .zshrc
echo 'source $HOME/.cargo/env' >> ~/.bashrc
source ~/.bashrc
```

**Problem:** Cannot connect to PostgreSQL

**Solutions:**
```bash
# Check if PostgreSQL is running
pg_isready -h localhost -p 5432

# For Docker setup
docker ps | grep postgres

# Check PostgreSQL logs
# Docker:
docker compose -f docker/docker-compose.local.yml logs db
# Local (macOS):
tail -f /usr/local/var/log/postgres.log
# Local (Linux):
sudo tail -f /var/log/postgresql/postgresql-16-main.log
```

**Problem:** `sqlx` command not found
**Solution:**

```bash
# Ensure Cargo bin directory is in PATH
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

# Reinstall SQLx CLI
cargo install sqlx-cli --version "=0.8.6" --no-default-features --features postgres --force
```

**Problem:** Build fails with dependency errors
**Solution:**

```bash
# Clean build artifacts
cargo clean

# Update dependencies
cargo update

# Rebuild
cargo build --release
```

**Problem:** OpenSSL linking errors
**Solution:**

```bash
# macOS
brew install openssl
export OPENSSL_DIR=$(brew --prefix openssl)

# Linux (Ubuntu/Debian)
sudo apt-get install libssl-dev pkg-config
```

## Next Steps

Once installation is complete:
- Configure the indexer: See Configuration
- Run the indexer: See Quick Start
- Verify operation: See Testing
- Deploy to production: See Deployment Guide
## System Requirements

### Development

- CPU: 2+ cores
- RAM: 4GB minimum, 8GB recommended
- Storage: 10GB for source code and dependencies
- Database Storage: Starts small, grows ~10GB per million blocks indexed
### Production

- CPU: 2+ vCPU recommended
- RAM: 4GB minimum, 8GB recommended for high-throughput indexing
- Database Storage: plan for growth
  - Headers only: ~0.5KB per block ≈ 0.5GB per million blocks, or ~10GB for the full chain
  - With transactions: ~5KB per block ≈ 5GB per million blocks, or ~100GB for the full chain
  - Ethereum has 20M+ blocks (as of 2024)
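The per-block sizes above can be turned into full-chain totals with quick shell arithmetic. This sketch rounds 0.5KB and 5KB to 500 and 5,000 bytes and assumes a 20M-block chain:

```bash
BLOCKS=20000000    # ~20M blocks as of 2024
HEADER_BYTES=500   # ~0.5KB per header-only block
TX_BYTES=5000      # ~5KB per block with transactions

# Integer division by 10^9 gives an approximate figure in GB
echo "headers only:      $(( BLOCKS * HEADER_BYTES / 1000000000 )) GB"   # 10 GB
echo "with transactions: $(( BLOCKS * TX_BYTES / 1000000000 )) GB"       # 100 GB
```

These are raw-data estimates; actual database size will be somewhat larger once PostgreSQL indexes and per-row overhead are included.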
### Network

- Outbound HTTPS to Ethereum RPC endpoint
- Inbound HTTP on port 3000 (health checks)
- Database connection to PostgreSQL (port 5432 by default)
- Bandwidth: ~1-10 Mbps for RPC calls during backfilling
## Getting Help

If you encounter issues during installation:
- Check the Troubleshooting Guide
- Review the GitHub Issues
- Join the Discussion Forum