Merged
8 changes: 4 additions & 4 deletions .github/ISSUE_TEMPLATE/sub-issue.md
@@ -2,19 +2,19 @@
name: Sub-Issue Template
about: Use this for tracking sub-tasks under major issues
title: "[Sub-Issue] "
labels:
labels: sub-issue
assignees: ''
---

### Sub-Issue: \[Brief Description of the Sub-Task]
### Sub-Issue: [Brief Description of the Sub-Task]

**Related to**: #X

---

### What needs to be done

\[Clearly describe the scope and purpose of this sub-issue. If applicable, mention relevant existing code, its limitations, and what the outcome of this sub-issue should achieve.]
[Clearly describe the scope and purpose of this sub-issue. If applicable, mention relevant existing code, its limitations, and what the outcome of this sub-issue should achieve.]

---

@@ -33,4 +33,4 @@ assignees: ''

### Why this is needed

\[Explain why the change improves maintainability, reusability, or correctness. Focus on long-term benefits like reducing duplication, simplifying onboarding, or decoupling service responsibilities.]
[Explain why the change improves maintainability, reusability, or correctness. Focus on long-term benefits like reducing duplication, simplifying onboarding, or decoupling service responsibilities.]
36 changes: 36 additions & 0 deletions README.md
@@ -199,6 +199,42 @@ Navigate to **http://localhost:80/swagger-ui.html** to explore the API interactively

TracePcap is designed for self-hosted deployment:

### Offline / Air-gapped Deployment

For environments without internet access, use the offline deployment workflow:

**On an internet-connected machine:**

```bash
# Pull all third-party images, build local images, and save everything as .tar files
bash scripts/pull-and-save-images.sh
```

This creates an `images/` directory containing a `.tar` file for every image the stack needs.
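
The tar filenames follow from the save script, which replaces `/` and `:` in each image reference with `_` before appending `.tar`. A quick sketch of the derivation:

```shell
# Filenames are derived from image references via tr '/:' '_'
# (the same substitution pull-and-save-images.sh performs):
echo "minio/minio:RELEASE.2024-11-07T00-52-20Z" | tr '/:' '_'
# → minio_minio_RELEASE.2024-11-07T00-52-20Z   (saved as ….tar)
```

So, for example, `postgres:15-alpine` is saved as `postgres_15-alpine.tar`, while the locally built images use fixed names (`tracepcap-backend.tar`, `tracepcap-nginx.tar`).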

**Transfer to the offline machine:**

```
images/ # all .tar files
docker-compose.offline.yml
scripts/load-images.sh
.env # copy from .env.example and configure
```

**On the offline machine:**

```bash
# Load all images into Docker
bash scripts/load-images.sh

# Start the stack
docker compose -f docker-compose.offline.yml up -d
```
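
Before starting the stack, you can optionally verify that every image referenced by `docker-compose.offline.yml` was loaded. This check is not part of the PR's scripts; it is a sketch using the image names from the compose file:

```shell
# Report whether each required image is present in the local Docker daemon.
for img in \
  "postgres:15-alpine" \
  "minio/minio:RELEASE.2024-11-07T00-52-20Z" \
  "minio/mc:RELEASE.2024-11-21T17-21-54Z" \
  "tracepcap-backend:latest" \
  "tracepcap-nginx:latest"
do
  if docker image inspect "$img" >/dev/null 2>&1; then
    echo "ok       $img"
  else
    echo "MISSING  $img"
  fi
done
```

Any `MISSING` line means the corresponding `.tar` was not transferred or not loaded.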

> **Note**: The offline compose file defaults `LLM_API_BASE_URL` to `http://localhost:1234/v1` (LM Studio). If you want AI features, point this at a locally hosted LLM in `.env` before starting.
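
For reference, a minimal `.env` might look like the following. The variable names are the ones read by `docker-compose.offline.yml`; the values shown are the compose file's defaults and are illustrative only. Note that inside the backend container `localhost` resolves to the container itself, so an LM Studio instance running on the host typically needs the host's IP or `host.docker.internal` instead.

```
# --- LLM (AI features) ---
LLM_API_BASE_URL=http://localhost:1234/v1
LLM_API_KEY=your-api-key-here   # many local servers accept any value
LLM_MODEL=gpt-4                 # use your local model's identifier
LLM_TEMPERATURE=0.7
LLM_MAX_TOKENS=2000
LLM_TIMEOUT=300

# --- Stack ---
NGINX_PORT=80
APP_MEMORY_MB=2048
```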

---

- **Development**: Use built-in configuration with exposed ports
- **Production**:
- Change default MinIO credentials in `docker-compose.yml`
132 changes: 132 additions & 0 deletions docker-compose.offline.yml
@@ -0,0 +1,132 @@
# docker-compose.offline.yml
#
# Offline deployment variant — uses pre-built images loaded via scripts/load-images.sh
# instead of building from source.
#
# Prerequisites:
# 1. On an internet-connected machine: bash scripts/pull-and-save-images.sh
# 2. Transfer images/, this file, and .env to the offline machine.
# 3. On the offline machine: bash scripts/load-images.sh
#
# Start: docker compose -f docker-compose.offline.yml up -d
# Stop: docker compose -f docker-compose.offline.yml down

services:
  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    container_name: tracepcap-postgres
    environment:
      POSTGRES_DB: tracepcap
      POSTGRES_USER: tracepcap_user
      POSTGRES_PASSWORD: tracepcap_pass
      TZ: Asia/Singapore
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U tracepcap_user -d tracepcap"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - tracepcap-network

  # MinIO Object Storage
  minio:
    image: minio/minio:RELEASE.2024-11-07T00-52-20Z
    container_name: tracepcap-minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
      TZ: Asia/Singapore
    ports:
      - "9000:9000"   # API
      - "9001:9001"   # Console
    volumes:
      - minio_data:/data
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - tracepcap-network

  # MinIO Client (mc) - Create bucket on startup
  minio-init:
    image: minio/mc:RELEASE.2024-11-21T17-21-54Z
    container_name: tracepcap-minio-init
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      /usr/bin/mc alias set myminio http://minio:9000 minioadmin minioadmin;
      /usr/bin/mc mb myminio/tracepcap-files --ignore-existing;
      /usr/bin/mc anonymous set public myminio/tracepcap-files;
      exit 0;
      "
    networks:
      - tracepcap-network

  # Spring Boot Backend (pre-built image)
  backend:
    image: tracepcap-backend:latest
    container_name: tracepcap-backend
    environment:
      SPRING_PROFILES_ACTIVE: dev
      DATABASE_URL: jdbc:postgresql://postgres:5432/tracepcap
      DATABASE_USERNAME: tracepcap_user
      DATABASE_PASSWORD: tracepcap_pass
      MINIO_ENDPOINT: http://minio:9000
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
      MINIO_BUCKET: tracepcap-files
      APP_MEMORY_MB: ${APP_MEMORY_MB:-2048}
      LLM_API_BASE_URL: ${LLM_API_BASE_URL:-http://localhost:1234/v1}
      LLM_API_KEY: ${LLM_API_KEY:-your-api-key-here}
      LLM_MODEL: ${LLM_MODEL:-gpt-4}
      LLM_TEMPERATURE: ${LLM_TEMPERATURE:-0.7}
      LLM_MAX_TOKENS: ${LLM_MAX_TOKENS:-2000}
      LLM_TIMEOUT: ${LLM_TIMEOUT:-300}
      TZ: Asia/Singapore
    volumes:
      - config_data:/app/config
    depends_on:
      postgres:
        condition: service_healthy
      minio:
        condition: service_healthy
    networks:
      - tracepcap-network

  # Nginx - Serves frontend and proxies API to backend (pre-built image)
  # Note: frontend build args (VITE_*) are baked into this image at build time.
  # To change them, rebuild the image with pull-and-save-images.sh.
  nginx:
    image: tracepcap-nginx:latest
    container_name: tracepcap-nginx
    environment:
      APP_MEMORY_MB: ${APP_MEMORY_MB:-2048}
      TZ: Asia/Singapore
    ports:
      - "${NGINX_PORT:-80}:80"
    depends_on:
      - backend
    networks:
      - tracepcap-network

volumes:
  postgres_data:
    driver: local
  minio_data:
    driver: local
  config_data:
    driver: local

networks:
  tracepcap-network:
    driver: bridge
49 changes: 49 additions & 0 deletions scripts/load-images.sh
@@ -0,0 +1,49 @@
#!/usr/bin/env bash
# load-images.sh
#
# Run this on the OFFLINE machine after copying the images/ folder here.
#
# What it does:
# Loads every .tar file in ./images/ into the local Docker daemon.
#
# Usage:
# bash scripts/load-images.sh
#
# After loading, start the stack with:
# docker compose -f docker-compose.offline.yml up -d

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
IMAGES_DIR="$ROOT_DIR/images"

if [ ! -d "$IMAGES_DIR" ]; then
  echo "Error: images/ directory not found at $IMAGES_DIR"
  echo "Make sure you copied the images/ folder from the internet-connected machine."
  exit 1
fi

# Collect .tar files
shopt -s nullglob
TAR_FILES=("$IMAGES_DIR"/*.tar)
shopt -u nullglob

if [ ${#TAR_FILES[@]} -eq 0 ]; then
  echo "Error: No .tar files found in $IMAGES_DIR/"
  echo "Run pull-and-save-images.sh on an internet-connected machine first."
  exit 1
fi

echo "=== Loading Docker images from images/ ==="
echo ""
for tarfile in "${TAR_FILES[@]}"; do
  echo "  Loading $(basename "$tarfile")..."
  docker load -i "$tarfile"
done

echo ""
echo "=== All images loaded successfully ==="
echo ""
echo "Start the application with:"
echo " docker compose -f docker-compose.offline.yml up -d"
116 changes: 116 additions & 0 deletions scripts/pull-and-save-images.sh
@@ -0,0 +1,116 @@
#!/usr/bin/env bash
# pull-and-save-images.sh
#
# Run this on an internet-connected machine BEFORE transferring to the offline host.
#
# What it does:
# 1. Pulls all third-party images from Docker Hub
# 2. Builds the backend and nginx images locally
# 3. Saves every image as a .tar file under ./images/
#
# Usage:
# bash scripts/pull-and-save-images.sh
#
# Build args for nginx are read from .env (if present) — copy .env.example first
# if you haven't already configured it.

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ROOT_DIR="$(dirname "$SCRIPT_DIR")"
IMAGES_DIR="$ROOT_DIR/images"

BACKEND_IMAGE="tracepcap-backend:latest"
NGINX_IMAGE="tracepcap-nginx:latest"

# ---------------------------------------------------------------------------
# Helper
# ---------------------------------------------------------------------------
save_image() {
  local image="$1"
  local filename="$2"
  echo "  Saving $image -> images/$filename"
  docker save "$image" -o "$IMAGES_DIR/$filename"
}

# ---------------------------------------------------------------------------
# Load build-arg overrides from .env when available
# ---------------------------------------------------------------------------
if [ -f "$ROOT_DIR/.env" ]; then
  echo "Loading build args from .env"
  set -a
  # shellcheck source=/dev/null
  source "$ROOT_DIR/.env"
  set +a
fi

mkdir -p "$IMAGES_DIR"

# ---------------------------------------------------------------------------
# 1. Pull third-party images
# ---------------------------------------------------------------------------
echo ""
echo "=== [1/3] Pulling third-party images ==="

# --- Docker Hub ---
DOCKERHUB_IMAGES=(
  "postgres:15-alpine"
  "minio/minio:RELEASE.2024-11-07T00-52-20Z"
  "minio/mc:RELEASE.2024-11-21T17-21-54Z"
)

for img in "${DOCKERHUB_IMAGES[@]}"; do
  echo "  Pulling $img (Docker Hub)..."
  docker pull "$img"
done

# ---------------------------------------------------------------------------
# 2. Build local images
# ---------------------------------------------------------------------------
echo ""
echo "=== [2/3] Building local images ==="
cd "$ROOT_DIR"

echo " Building backend..."
docker build \
-t "$BACKEND_IMAGE" \
./backend

echo " Building nginx (frontend)..."
docker build \
--build-arg "VITE_API_BASE_URL=${VITE_API_BASE_URL:-/api}" \
--build-arg "VITE_SUPPORTED_FILE_TYPES=${VITE_SUPPORTED_FILE_TYPES:-.pcap,.pcapng,.cap}" \
--build-arg "VITE_ANALYSIS_OPTIONS=${VITE_ANALYSIS_OPTIONS:-false}" \
--build-arg "VITE_NETWORK_DIAGRAM_CONVERSATION_LIMIT=${VITE_NETWORK_DIAGRAM_CONVERSATION_LIMIT:-false}" \
-t "$NGINX_IMAGE" \
-f ./nginx/Dockerfile \
.

# ---------------------------------------------------------------------------
# 3. Save all images as tars
# ---------------------------------------------------------------------------
echo ""
echo "=== [3/3] Saving images to images/ ==="

for img in "${DOCKERHUB_IMAGES[@]}"; do
  filename="$(echo "$img" | tr '/:' '_').tar"
  save_image "$img" "$filename"
done
save_image "$BACKEND_IMAGE" "tracepcap-backend.tar"
save_image "$NGINX_IMAGE" "tracepcap-nginx.tar"

# ---------------------------------------------------------------------------
# Summary
# ---------------------------------------------------------------------------
echo ""
echo "=== Done ==="
echo ""
echo "Transfer the following to the offline machine:"
echo " images/ (all .tar files)"
echo " docker-compose.offline.yml"
echo " .env (or .env.example — configure before starting)"
echo " scripts/load-images.sh"
echo ""
echo "Then on the offline machine run:"
echo " bash scripts/load-images.sh"
echo " docker compose -f docker-compose.offline.yml up -d"