This guide provides backup-specific verification steps for manual E2E testing. For the complete deployment workflow, see the Manual E2E Testing Guide.
This guide covers:
- Backup configuration in environment files
- Backup template rendering verification
- Backup storage directory structure
- Manual backup execution testing
- Database-specific backup verification (SQLite vs MySQL)
- Backup file inspection and validation
Complete the standard deployment workflow first (see Manual E2E Testing Guide):
- ✅ Environment created
- ✅ Infrastructure provisioned
- ✅ Services configured
- ✅ Software released
- ✅ Services running
Your environment configuration must include backup settings:

```json
{
  "backup": {
    "schedule": "0 3 * * *",
    "retention_days": 7
  }
}
```

Before running the deployment workflow, verify that the backup configuration is present in the environment state:
```bash
# After the 'create environment' command
export ENV_NAME="your-environment-name"
cat data/$ENV_NAME/environment.json | jq '.Created.context.user_inputs.backup'
```

Expected output:
```json
{
  "schedule": "0 3 * * *",
  "retention_days": 7
}
```

If this shows `null`, the backup configuration was not loaded from the config file. Verify that the backup section is correctly formatted in your environment JSON file.
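The jq check above can be wrapped in a small guard so scripted test runs fail fast. A minimal sketch against a stand-in state file (the real file lives at `data/$ENV_NAME/environment.json`; `jq` is assumed to be installed):

```shell
# Stand-in environment state file with the backup section present.
STATE=$(mktemp)
cat > "$STATE" <<'EOF'
{"Created":{"context":{"user_inputs":{"backup":{"schedule":"0 3 * * *","retention_days":7}}}}}
EOF

# jq prints "null" when the key path is absent, so test for exactly that.
BACKUP=$(jq -c '.Created.context.user_inputs.backup' "$STATE")
if [ "$BACKUP" = "null" ]; then
  echo "backup configuration missing"
else
  echo "backup configuration present"
fi
```

Point `STATE` at the real environment file to use this in a test script.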
Current State: The backup service is deployed with automatic scheduled execution via crontab. The initial backup runs during the run command, then additional backups run automatically on the configured schedule.
Test both database drivers to ensure comprehensive coverage:
- SQLite + Backup - Tests file-based database backup
- MySQL + Backup - Tests network database backup with MySQL connection
After running the release command, verify backup configuration files exist on the VM:
```bash
# Set environment name and get the IP from the environment state
export ENV_NAME="your-environment-name"
export INSTANCE_IP=$(cat data/$ENV_NAME/environment.json | jq -r '.Running.context.runtime_outputs.instance_ip')

# Verify backup configuration is in the application state
echo "Checking backup configuration in application state:"
cat data/$ENV_NAME/environment.json | jq '.Running.context.user_inputs.backup'
echo ""

# Check the backup storage directory exists
echo "Checking backup storage directory on VM:"
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "ls -la /opt/torrust/storage/backup/"
```

Expected state check:
```json
{
  "schedule": "0 3 * * *",
  "retention_days": 7
}
```

Expected directory structure:
```text
drwxr-xr-x 3 torrust torrust 4096 <date> .
drwxr-xr-x 6 torrust torrust 4096 <date> ..
drwxr-xr-x 2 torrust torrust 4096 <date> etc
```
```bash
# View backup.conf
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cat /opt/torrust/storage/backup/etc/backup.conf"
```

Expected variables for SQLite:
```text
BACKUP_RETENTION_DAYS=7
BACKUP_PATHS_FILE=/etc/backup/backup-paths.txt
DB_TYPE=sqlite
DB_PATH=/data/storage/tracker/lib/tracker.db
```

Expected variables for MySQL:
```text
BACKUP_RETENTION_DAYS=7
BACKUP_PATHS_FILE=/etc/backup/backup-paths.txt
DB_TYPE=mysql
DB_HOST=mysql
DB_PORT=3306
DB_USER=tracker_user
DB_PASSWORD=<your_password>
DB_NAME=torrust_tracker
```

```bash
# View backup-paths.txt
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cat /opt/torrust/storage/backup/etc/backup-paths.txt"
```

Expected content:
```text
/data/storage/tracker/etc
/data/storage/prometheus/etc
/data/storage/grafana/provisioning
```
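A paths file like this can drive the config archive directly via `tar -T`. The following is a local sketch of that (assumed) mechanism using stand-in directories, not the container's actual script:

```shell
# Build two stand-in config directories.
WORK=$(mktemp -d)
mkdir -p "$WORK/tracker/etc" "$WORK/prometheus/etc"
echo "placeholder" > "$WORK/tracker/etc/tracker.toml"
echo "placeholder" > "$WORK/prometheus/etc/prometheus.yml"

# One backup path per line, as in backup-paths.txt.
printf '%s\n' "$WORK/tracker/etc" "$WORK/prometheus/etc" > "$WORK/paths.txt"

# Archive every listed path, then list the archive members.
tar -czf "$WORK/config_test.tar.gz" -T "$WORK/paths.txt" 2>/dev/null
tar -tzf "$WORK/config_test.tar.gz"
```

Each directory listed in the paths file is archived recursively, which matches the "Files backed up: 3" count seen in the backup logs when three config paths are configured.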
```bash
# Check the backup service definition
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cat /opt/torrust/docker-compose.yml | grep -A 25 'backup:'"
```

Expected for SQLite:
- Service name: `backup`
- Image: `torrust/tracker-backup:latest`
- Restart policy: `"no"` (runs once and exits)
- Volumes: backup storage, tracker storage, prometheus storage, grafana storage
- No networks (SQLite doesn't need the database network)

Expected for MySQL:

- Same as above, plus:
- Networks: `database_network`
- Depends on: `mysql` with a health condition
```bash
# View services status
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cd /opt/torrust && docker compose ps"
```

Expected: the backup service should show `State: Exited (0)`. This is correct behavior (it runs once on startup and exits).
Test running a backup manually:
```bash
# SSH into the VM
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP

# Navigate to the application directory
cd /opt/torrust

# Run the backup manually
docker compose run --rm backup
```

Expected output:
```text
[2026-02-03 19:05:25] Torrust Backup Container starting
[2026-02-03 19:05:25] Loading configuration from: /etc/backup/backup.conf
[2026-02-03 19:05:25] Configuration:
[2026-02-03 19:05:25]   Retention: 7 days
[2026-02-03 19:05:25]   Database: sqlite
[2026-02-03 19:05:25]   Config paths file: /etc/backup/backup-paths.txt
[2026-02-03 19:05:25] ==========================================
[2026-02-03 19:05:25] Starting backup cycle
[2026-02-03 19:05:25] ==========================================
[2026-02-03 19:05:25] Starting SQLite backup: /data/storage/tracker/lib/database/tracker.db
[2026-02-03 19:05:25] SQLite backup completed: /backups/sqlite/sqlite_20260203_190525.db.gz
[2026-02-03 19:05:25]   Size: 4.0K
[2026-02-03 19:05:25] Starting config files backup
[2026-02-03 19:05:25] Config backup completed: /backups/config/config_20260203_190525.tar.gz
[2026-02-03 19:05:25]   Files backed up: 3
[2026-02-03 19:05:25]   Size: 8.0K
[2026-02-03 19:05:25] Cleaning up backups older than 7 days
[2026-02-03 19:05:25] No old backups to delete
[2026-02-03 19:05:25] ==========================================
[2026-02-03 19:05:25] Backup cycle completed successfully
[2026-02-03 19:05:25] ==========================================
```
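When running this step from a script, it helps to assert on that output rather than eyeball it. A minimal sketch that greps a captured log for the success marker (the log content here is a stand-in copied from the expected output above):

```shell
# Stand-in for output captured from 'docker compose run --rm backup'.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2026-02-03 19:05:25] Starting backup cycle
[2026-02-03 19:05:25] SQLite backup completed: /backups/sqlite/sqlite_20260203_190525.db.gz
[2026-02-03 19:05:25] Backup cycle completed successfully
EOF

# Pass only when the success marker is present.
if grep -q "Backup cycle completed successfully" "$LOG"; then
  echo "backup OK"
else
  echo "backup FAILED"
fi
```

In a real run, pipe the `docker compose run --rm backup` output into the log file instead of the heredoc.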
For MySQL deployments, you may see this warning (this is expected and not fatal):
```text
[2026-02-03 19:47:32] Starting MySQL backup: tracker@mysql:3306
mysqldump: Error: 'Access denied; you need (at least one of) the PROCESS privilege(s) for this operation' when trying to dump tablespaces
[2026-02-03 19:47:32] MySQL backup completed: /backups/mysql/mysql_20260203_194732.sql.gz
[2026-02-03 19:47:32]   Size: 4.0K
```
The warning appears because the backup user (tracker_user) has all necessary permissions for table backup, but lacks the PROCESS privilege for tablespace metadata. The backup still completes successfully with all table data intact.
```bash
# Check SQLite database backup files
ls -lh /opt/torrust/storage/backup/sqlite/

# Check config backup files
ls -lh /opt/torrust/storage/backup/config/

# Exit SSH
exit
```

Expected for SQLite:
- Database file: `sqlite_<timestamp>.db.gz` (compressed SQLite database)
- Config archive: `config_<timestamp>.tar.gz`

Expected for MySQL:

- Database dump: `mysql_<timestamp>.sql.gz` (compressed SQL dump)
- Config archive: `config_<timestamp>.tar.gz`
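The timestamps in those names appear to follow a `YYYYMMDD_HHMMSS` pattern; a small sketch that builds and validates a name under that assumption:

```shell
# Build a backup file name using the assumed <type>_YYYYMMDD_HHMMSS pattern.
TS=$(date +%Y%m%d_%H%M%S)
NAME="sqlite_${TS}.db.gz"
echo "$NAME"

# Validate the generated name against the expected pattern.
echo "$NAME" | grep -Eq '^sqlite_[0-9]{8}_[0-9]{6}\.db\.gz$' && echo "name OK"
```

The same pattern check can be pointed at `ls` output from the backup directories to catch misnamed or truncated files.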
For SQLite deployments, verify the database backup was created:
```bash
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cd /opt/torrust/storage/backup/sqlite && \
  gunzip -c sqlite_*.db.gz | file - && \
  cd /opt/torrust/storage/backup/config && \
  tar -tzf config_*.tar.gz | head -10"
```

Expected:

- SQLite backup file: `sqlite_<timestamp>.db.gz` (valid SQLite 3.x database)
- Config archive contains: tracker.toml, prometheus.yml, grafana provisioning files
For MySQL deployments, verify the SQL dump was created with valid content:
```bash
# List MySQL backup files
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "ls -lh /opt/torrust/storage/backup/mysql/ | grep '\.sql\.gz'"

# Verify SQL structure (decompress and inspect the first lines)
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "zcat /opt/torrust/storage/backup/mysql/mysql_*.sql.gz | head -20"
```

Expected output:
The file listing shows `mysql_<timestamp>.sql.gz` with a reasonable size (typically 0.5-2 KB for a test database).

The SQL content preview shows valid MySQL dump headers:

```text
/*M!999999\- enable the sandbox mode */
-- MariaDB dump 10.19-11.8.3-MariaDB, for debian-linux-gnu (x86_64)
--
-- Host: mysql    Database: tracker
-- Server version 8.4.8
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
```
This confirms:
- ✅ SQL dump is valid and compressed
- ✅ Contains MySQL 8.4 database structure
- ✅ Table definitions are included
- ✅ File is restorable with `mysql < backup.sql`
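Both checks (archive integrity and a plausible dump header) can be rehearsed locally against a stand-in dump file, with no MySQL server involved; the header text below is illustrative only:

```shell
TMP=$(mktemp -d)

# Create a stand-in compressed dump.
printf -- '-- MySQL dump (stand-in header)\nCREATE TABLE torrents (id INT);\n' \
  | gzip > "$TMP/mysql_test.sql.gz"

# Check 1: the archive decompresses cleanly.
gzip -t "$TMP/mysql_test.sql.gz" && echo "gzip OK"

# Check 2: the first line looks like a SQL dump header.
zcat "$TMP/mysql_test.sql.gz" | head -1
```

Pointing the same two checks at the real `/opt/torrust/storage/backup/mysql/` files turns this manual step into a scriptable assertion.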
Verify the backup system cron entry was installed during the release command:
```bash
# Check if the system cron entry exists
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cat /etc/cron.d/tracker-backup"
```

Expected output (for schedule `0 3 * * *`):

```text
# Cron expression: min hour day month dow command
# Runs at schedule: 0 3 * * *
0 3 * * * root cd /opt/torrust && /usr/local/bin/maintenance-backup.sh >> /var/log/tracker-backup.log 2>&1
```
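To compare the installed entry against the configured schedule in a script, the first five fields of the cron line can be extracted and compared; a sketch using the expected line above as a literal:

```shell
# The installed line (copied from the expected output) and the configured schedule.
CRON_LINE='0 3 * * * root cd /opt/torrust && /usr/local/bin/maintenance-backup.sh >> /var/log/tracker-backup.log 2>&1'
EXPECTED='0 3 * * *'

# A system cron line is: min hour day month dow user command.
SCHEDULE=$(echo "$CRON_LINE" | awk '{print $1, $2, $3, $4, $5}')

if [ "$SCHEDULE" = "$EXPECTED" ]; then
  echo "schedule matches: $SCHEDULE"
fi
```

In practice, capture `CRON_LINE` from the `cat /etc/cron.d/tracker-backup` output and `EXPECTED` from the environment's `backup.schedule` value.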
The maintenance script:

- Stops the tracker service
- Runs the backup container
- Restarts the tracker service
- Logs all output to `/var/log/tracker-backup.log`
If the cron entry is not found:

- The `release` command did not properly install the cron entry
- Re-run the `release` command
Note: The backup will run automatically at the scheduled time (3 AM UTC in this example). To verify automatic execution, you can:
- Wait for the scheduled time and check logs
- Manually trigger a backup (see Step 6) to verify functionality
- Check backup maintenance logs (see Step 11 below)
To verify automatic backups are running on schedule, monitor the maintenance logs:
```bash
# SSH into the VM
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP

# Watch the backup maintenance log in real time (check after the scheduled time)
tail -f /var/log/tracker-backup.log

# Or check the backup directory for multiple backup files (evidence of automatic execution)
ls -lh /opt/torrust/storage/backup/sqlite/
ls -lh /opt/torrust/storage/backup/mysql/
```

Expected (after multiple scheduled runs):
- Multiple backup files with different timestamps, for example: `sqlite_20260203_030000.db.gz`, `sqlite_20260204_030000.db.gz`, `sqlite_20260205_030000.db.gz`
- Log entries showing successful backup maintenance cycles:

```text
[2026-02-04 16:35:01] INFO: Tracker stopped successfully
[2026-02-04 16:35:01] INFO: Running backup container...
[2026-02-04 16:35:06] INFO: Backup completed successfully
[2026-02-04 16:35:06] INFO: Starting tracker container...
[2026-02-04 16:35:21] INFO: Tracker started successfully
[2026-02-04 16:35:21] Backup maintenance completed (exit code: 0)
```
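Counting distinct backup days in the filenames gives a scripted version of this evidence check; a sketch using stand-in files named after the examples above:

```shell
# Stand-ins for three nightly runs.
DIR=$(mktemp -d)
touch "$DIR/sqlite_20260203_030000.db.gz" \
      "$DIR/sqlite_20260204_030000.db.gz" \
      "$DIR/sqlite_20260205_030000.db.gz"

# Extract the YYYYMMDD field from each name and count the unique days.
DAYS=$(ls "$DIR" | sed -E 's/^sqlite_([0-9]{8})_.*/\1/' | sort -u | wc -l | tr -d ' ')
echo "distinct backup days: $DAYS"
```

Run the same pipeline against `/opt/torrust/storage/backup/sqlite/` on the VM and assert the count is at least 2.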
Check the backup container logs for any errors:
```bash
ssh -i fixtures/testing_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
  torrust@$INSTANCE_IP "cd /opt/torrust && docker compose logs backup | tail -50"
```

Expected: no errors, only successful completion messages.
Use this checklist to track verification progress:
Configuration & Deployment:
- Backup storage directory exists (`/opt/torrust/storage/backup/etc/`)
- `backup.conf` deployed with the correct database type and path
- `backup-paths.txt` deployed with the correct paths
- Docker compose includes the backup service
- Backup service has the correct restart policy (`"no"`)
- Backup service has the correct volumes
- Backup service has the correct networks (none for SQLite, `database_network` for MySQL)
Initial Backup (during run command):
- Initial backup files created after the `run` command
- Database backup file exists in the correct directory (`sqlite/` or `mysql/`)
- Config backup tar.gz created in the `config/` directory
- Files are compressed (`.db.gz` or `.sql.gz`)
Manual Backup Execution:
- Manual backup executes without errors (`docker compose run --rm backup`)
- Database backup file created with a new timestamp
- Config backup created with new timestamp
- Backup container logs show successful completion
Automatic Scheduled Execution (Crontab):
- System cron entry installed at `/etc/cron.d/tracker-backup`
- Cron schedule matches the environment configuration
- Maintenance log file exists (`/var/log/tracker-backup.log`)
- Multiple backup files present (evidence of multiple automated runs)
- Backup files have different timestamps (at least 2-3 backups)
Data Integrity:
- Database backup files contain valid data (checked with file/gunzip/zcat)
- Config backup tar.gz contains expected files
- No errors in backup maintenance logs
Retention Cleanup:
- Retention days parameter is set correctly in backup.conf
- Old backups are cleaned up after retention period
- Cleanup messages appear in backup logs
Symptoms:

```text
ls: cannot access '/opt/torrust/storage/backup/etc/': No such file or directory
```

Cause: The backup section might be missing from the environment configuration.

Solution:

- Check the environment state: `cat data/$ENV_NAME/environment.json | jq '.Running.context.user_inputs.backup'`
- If it is `null`, backup was not configured; recreate the environment with the backup section
- Re-run the release command to deploy the backup configuration
Symptoms:

```text
Error: Failed to connect to MySQL at mysql:3306
```
Cause: MySQL service not healthy or backup service not on database network
Solution:

- Check MySQL is running: `docker compose ps mysql`
- Check the backup service has `database_network`: `docker compose config | grep -A 20 backup:`
- Wait for MySQL to be healthy: `docker compose ps` should show a "healthy" status
Symptoms:

```text
mysqldump: Got error: 2026: "TLS/SSL error: self-signed certificate in certificate chain"
```
Cause: MySQL 8.0+ enforces SSL by default, but the backup container needs to connect without strict SSL verification
Solution: This is handled automatically by the backup container:

- The Docker image includes a MySQL client configuration file at `/etc/mysql/mysql-client.cnf` with the `ssl=FALSE` setting
- The backup script references this config file via `--defaults-file=/etc/mysql/mysql-client.cnf`
- The backup script uses the `MYSQL_PWD` environment variable for secure password handling
Status: ✅ FIXED - Backup container v1.0+ includes proper SSL handling
Symptoms: /opt/torrust/storage/backup/database/ is empty after manual backup
Cause: Backup script encountered an error during execution
Solution:
- Check the backup container logs: `docker compose logs backup`
- Look for error messages in the output
- Verify backup.conf has correct paths and credentials
- For MySQL: verify database credentials match tracker configuration
Status: This is NOT an error - expected behavior
Explanation: The backup service is configured with `restart: "no"`, which means it runs once and exits. This is the correct behavior. The service will only run when:

- `docker compose up` starts all services (backup runs once)
- It is executed manually: `docker compose run --rm backup`
- It is triggered by the crontab schedule
Implemented Features:
- ✅ Initial backup - created automatically during the `run` command (via `docker-compose.yml`)
- ✅ Crontab integration - automatic scheduled backups at the configured schedule
- ✅ Manual execution - can run on demand with `docker compose run --rm backup`
- ✅ Retention cleanup - automatically removes backups older than the retention period
- ✅ Database support - works with both SQLite and MySQL
- ✅ Configuration backup - backs up the tracker config, Prometheus config, and Grafana provisioning
Known Limitations:
- ❌ Recovery from backup - Not yet implemented (requires manual restore process)
- ❌ Backup verification API - No remote endpoint to verify backup status
- ❌ Backup encryption - Backups are compressed but not encrypted
For rapid verification after deployment:
- Run the `provision` command
- Run the `release` command (installs the crontab)
- Run the `run` command (creates the initial backup)
- SSH to the VM and verify the initial backup exists: `ls -lh /opt/torrust/storage/backup/sqlite/`
- Manually run a second backup: `docker compose run --rm backup`
- Verify the second backup was created: `ls -lh /opt/torrust/storage/backup/sqlite/`

Success: two backup files with different timestamps exist.
For comprehensive automated backup testing:
- Deploy with a configured backup schedule (e.g., every hour for testing)
- Wait for the scheduled backup time to pass
- Verify the automatic backup executed: `grep "Backup cycle" /var/log/tracker-backup.log`
- Check that multiple backup files were created: `ls -lh /opt/torrust/storage/backup/*/`
- Modify a configuration file and wait for the next backup
- Verify the new backup contains the modification
- Wait for retention cleanup to occur (after `retention_days`)
- Verify old backups were deleted
To verify retention cleanup with 7-day retention period:
- Deploy with `retention_days: 7`
- Create manual backups (simulating daily backups): `docker compose run --rm backup` (repeat 8 times)
- Force a manual backup on day 8
- Check `/var/log/tracker-backup.log` for cleanup messages
- Verify the first backup was deleted and the most recent 7 were kept
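The retention rule amounts to deleting backups older than `retention_days`, which can be rehearsed locally with `find -mtime` (a sketch of the idea only; the container's actual cleanup script may differ, and GNU `touch -d` is assumed):

```shell
BACKUP_DIR=$(mktemp -d)
RETENTION_DAYS=7

# Simulate one expired backup (9 days old) and one fresh backup.
touch -d "9 days ago" "$BACKUP_DIR/sqlite_old.db.gz"
touch "$BACKUP_DIR/sqlite_new.db.gz"

# Delete anything older than the retention window.
find "$BACKUP_DIR" -name '*.gz' -mtime +"$RETENTION_DAYS" -delete

# Only the fresh backup should remain.
ls "$BACKUP_DIR"
```

Backdating file mtimes this way also lets you test the retention cleanup on the VM without waiting 8 real days.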
After verifying the backup service works correctly:
- Test backup restoration (manual process) - Future enhancement
- Implement automated retention testing
- Monitor disk space usage with production workloads
- Test backup functionality with different retention periods
- Manual E2E Testing Guide - Complete deployment workflow
- Tracker Verification - Tracker-specific tests
- MySQL Verification - MySQL-specific tests