This guide helps you resolve common installation and runtime issues. Most can be fixed in 5-10 minutes.
Quick Links: Common Issues | GPU/CUDA | Database | Network | Diagnostics
## Common Issues

### FFmpeg Not Found

Symptoms:
- Error: `ffmpeg: command not found`
- Video processing fails with "FFmpeg not available"

Solution:

Linux (Ubuntu/Debian):

```bash
sudo apt update
sudo apt install ffmpeg
```

macOS:

```bash
brew install ffmpeg
```

Windows:
1. Download FFmpeg from ffmpeg.org
2. Extract to `C:\ffmpeg`
3. Add to PATH:
   - Open System Properties → Environment Variables
   - Edit the `Path` variable
   - Add `C:\ffmpeg\bin`
4. Restart the terminal and verify: `ffmpeg -version`

Verification:

```bash
ffmpeg -version
# Should show FFmpeg version info
```
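If you are scripting around VideoAnnotator, the same check can be run from Python before any video work starts. A minimal sketch using only the standard library (the function name is illustrative, not part of the package):

```python
import shutil
import subprocess

def ffmpeg_available() -> bool:
    """Return True when an ffmpeg binary is on PATH and runs cleanly."""
    path = shutil.which("ffmpeg")
    if path is None:
        return False
    # "ffmpeg -version" exits with code 0 on a healthy install
    return subprocess.run([path, "-version"], capture_output=True).returncode == 0

if not ffmpeg_available():
    print("FFmpeg missing - install it before submitting video jobs")
```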
### Python Version Too Old

Symptoms:
- Error: `Python 3.12+ required`
- Import errors with type hints

Solution:

Check your Python version:

```bash
python --version
# or
python3 --version
```

If the version is < 3.12, install Python 3.12:

Linux (Ubuntu):

```bash
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.12 python3.12-venv
```

macOS:

```bash
brew install python@3.12
```

Windows: Download and install Python 3.12+ from python.org.

With uv (recommended):

```bash
# uv automatically uses the correct Python version
uv sync
```
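The same version gate can be checked from inside Python. A small sketch (the 3.12 floor matches the error above; the helper name is illustrative):

```python
import sys

REQUIRED = (3, 12)  # VideoAnnotator's minimum, per the error above

def python_ok(version=None) -> bool:
    """Return True when the (major, minor) pair meets the required floor."""
    version = version or sys.version_info[:2]
    return tuple(version) >= REQUIRED

if not python_ok():
    print(f"Python {REQUIRED[0]}.{REQUIRED[1]}+ required, found "
          f"{sys.version_info.major}.{sys.version_info.minor}")
```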
### Module Import Errors

Symptoms:
- `ModuleNotFoundError: No module named 'videoannotator'`
- `ImportError: cannot import name 'X' from 'Y'`

Solution:

1. Ensure dependencies are installed:

```bash
# Sync all dependencies
uv sync

# If that fails, try a fresh install
rm -rf .venv
uv sync
```

2. Verify the installation:

```bash
uv run python -c "import videoannotator; print(videoannotator.__version__)"
```

3. Check that you're using `uv run`:

```bash
# Wrong - uses system Python
python my_script.py

# Correct - uses the project environment
uv run python my_script.py
```

4. Editable install issues:

```bash
# Reinstall in editable mode
uv pip install -e .
```
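When reporting import errors, it helps to confirm which interpreter and environment are actually active. A stdlib sketch (the function name is illustrative):

```python
import sys

def environment_summary() -> dict:
    """Collect interpreter details useful for import-error reports."""
    return {
        "executable": sys.executable,
        "version": "%d.%d.%d" % sys.version_info[:3],
        # True when running inside a virtual environment such as .venv
        "in_venv": sys.prefix != sys.base_prefix,
    }

print(environment_summary())
```

If `in_venv` is `False` when you expected the project environment, you are probably running the system Python instead of `uv run`.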
### Port Already in Use

Symptoms:
- Error: `Address already in use: 18011`
- API server won't start

Solution:

Option 1 - Find and kill the process:

Linux/Mac:

```bash
# Find the process using port 18011
lsof -i :18011

# Kill the process
kill -9 <PID>
```

Windows (PowerShell):

```powershell
# Find the process
netstat -ano | findstr :18011

# Kill the process
taskkill /PID <PID> /F
```

Option 2 - Use a different port:

```bash
# Set a custom port
export API_PORT=18012
uv run videoannotator server --host 0.0.0.0 --port 18012

# Or run uvicorn directly
uvicorn videoannotator.api.main:app --port 18012
```
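You can also probe the port from Python before starting the server. A sketch using only the standard library (the candidate ports are illustrative):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True when something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds
        return sock.connect_ex((host, port)) == 0

for candidate in (18011, 18012, 18013):
    if not port_in_use(candidate):
        print(f"Port {candidate} is free")
        break
```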
### Disk Space Issues

Symptoms:
- Error: `No space left on device`
- Database writes fail
- Model downloads incomplete

Solution:

1. Check disk usage:

```bash
df -h
```

2. Clean up:

```bash
# Remove old logs
rm -rf logs/*.log

# Clean test artifacts
rm -rf test_storage/

# Remove cached models (they re-download when needed)
rm -rf ~/.cache/huggingface/
rm -rf models/

# Clean the uv cache
uv cache clean
```

3. Move storage to a larger disk:

```bash
# Set a custom storage path
export STORAGE_BASE_PATH=/path/to/larger/disk/storage
```
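A cross-platform equivalent of `df -h` is available in the standard library, handy for a pre-flight check before large model downloads. A sketch (the 10 GiB threshold is illustrative, not a documented requirement):

```python
import shutil

def free_gb(path: str = ".") -> float:
    """Free space on the filesystem containing `path`, in GiB."""
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)

# Warn before large model downloads; 10 GiB is an arbitrary example threshold
if free_gb() < 10:
    print("Low disk space - clean caches or move STORAGE_BASE_PATH")
```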
### Permission Errors

Symptoms:
- `PermissionError: [Errno 13] Permission denied`
- Cannot write to the database
- Cannot create directories

Solution:

1. Check file ownership:

```bash
ls -la custom_storage/
ls -la tokens/
```

2. Fix ownership (Linux/Mac):

```bash
# Fix ownership of the storage directories
sudo chown -R $USER:$USER custom_storage/
sudo chown -R $USER:$USER tokens/
sudo chown -R $USER:$USER logs/
```

3. Fix permissions:

```bash
chmod -R u+rw custom_storage/
chmod -R u+rw tokens/
chmod -R u+rw logs/
```

Windows: Run the terminal as Administrator if permission issues persist.
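The same check works from Python on any platform via `os.access`. A sketch over the storage directories used above (adjust the paths to your layout):

```python
import os

def writable(path: str) -> bool:
    """Return True when the current user can write to `path`."""
    return os.access(path, os.W_OK)

# The directory names match the storage layout described above
for p in ("custom_storage", "tokens", "logs"):
    if os.path.exists(p) and not writable(p):
        print(f"{p}: not writable - fix ownership/permissions")
```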
### Verification Script Failures

Symptoms:
- `scripts/verify_installation.py` reports failures

Solution:

Run with verbose output to see the specific issues:

```bash
uv run python scripts/verify_installation.py --verbose
```

Common fixes:
- Python check fails: Install Python 3.12+
- FFmpeg check fails: Install FFmpeg (see above)
- Import check fails: Run `uv sync`
- Database check fails: Fix permissions on `custom_storage/`
- GPU check fails: See the GPU/CUDA section below
- Video test fails: Use `--skip-video-test` if no video processing is needed
## GPU/CUDA Issues

### CUDA Not Available

Symptoms:
- `torch.cuda.is_available()` returns `False`
- Models run slowly (CPU mode)
- Warning: "CUDA not available"

Solution:

1. Verify an NVIDIA GPU exists:

Linux:

```bash
lspci | grep -i nvidia
# or
nvidia-smi
```

Windows:

```powershell
nvidia-smi
```

If `nvidia-smi` is not found, install NVIDIA drivers from nvidia.com.

2. Check the CUDA installation:

```bash
nvcc --version
```

If not found, install CUDA Toolkit 12.4+:
- Linux: Follow the NVIDIA CUDA Installation Guide
- Windows: Download from NVIDIA CUDA Toolkit

3. Install CUDA-enabled PyTorch:

```bash
# Remove CPU-only PyTorch
uv remove torch torchvision torchaudio

# Install the CUDA 12.4 build
uv add "torch==2.8.*+cu124" "torchvision==0.21.*+cu124" "torchaudio==2.8.*+cu124" --index-url https://download.pytorch.org/whl/cu124
```

4. Verify GPU access:

```bash
uv run python -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"None\"}')"
```
### CUDA Out of Memory

Symptoms:
- Error: `RuntimeError: CUDA out of memory`
- Process crashes during model inference

Solution:

1. Check GPU memory:

```bash
nvidia-smi
```

2. Reduce the batch size in pipeline configs:

```yaml
# In the config file
batch_size: 1  # Reduce from the default
```

3. Use mixed precision:

```yaml
# Enable FP16 inference
use_fp16: true
```

4. Clear the GPU cache between runs:

```python
import torch
torch.cuda.empty_cache()
```

5. Upgrade the GPU if you consistently run out of memory:
- Minimum: GTX 1060 6GB
- Recommended: RTX 3060 12GB or better
### CUDA Version Mismatch

Symptoms:
- Error: `CUDA version mismatch`
- PyTorch fails to initialize CUDA

Solution:

1. Check the installed CUDA version:

```bash
nvidia-smi     # look at the top right corner
nvcc --version
```

2. Match the PyTorch build to your CUDA version:

For CUDA 12.4:

```bash
uv add "torch==2.8.*+cu124" "torchvision==0.21.*+cu124" --index-url https://download.pytorch.org/whl/cu124
```

For CUDA 12.1:

```bash
uv add "torch==2.8.*+cu121" "torchvision==0.21.*+cu121" --index-url https://download.pytorch.org/whl/cu121
```

For CPU only (no GPU):

```bash
uv add torch torchvision torchaudio
```

### macOS and Apple Silicon

Note: CUDA is not available on macOS. Apple Silicon (M1/M2/M3) uses Metal Performance Shaders (MPS), but VideoAnnotator currently runs in CPU mode on macOS.

Solution: Use Linux or Windows for GPU acceleration, or accept CPU-only performance on macOS.
## Database Issues

### Database Is Locked

Symptoms:
- Error: `database is locked`
- SQLite operational error

Solution:

1. Check for concurrent access:

```bash
# Find processes accessing the database
lsof custom_storage/jobs.db
```

2. Wait for long-running jobs to complete.

3. Restart the API server:

```bash
# Stop the server (Ctrl+C), then start fresh
uv run videoannotator server --host 0.0.0.0 --port 18011
```

4. Use write-ahead logging (WAL mode):

```bash
sqlite3 custom_storage/jobs.db "PRAGMA journal_mode=WAL;"
```
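From Python, the same mitigations are a connection busy-timeout plus the WAL pragma. A sketch using an in-memory database as a stand-in for `custom_storage/jobs.db` (the helper name is illustrative):

```python
import sqlite3

def open_db(path: str) -> sqlite3.Connection:
    """Open SQLite with a busy timeout and WAL journaling to reduce lock errors."""
    # timeout: wait up to 30 s for a competing writer instead of failing at once
    conn = sqlite3.connect(path, timeout=30.0)
    conn.execute("PRAGMA journal_mode=WAL;")
    return conn

conn = open_db(":memory:")  # use custom_storage/jobs.db in practice
print(conn.execute("PRAGMA journal_mode;").fetchone()[0])
```

Note that an in-memory database reports its own journal mode; WAL takes effect on file-backed databases.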
### Database Corruption

Symptoms:
- Error: `database disk image is malformed`
- Cannot query jobs

Solution:

1. Back up the existing database:

```bash
cp custom_storage/jobs.db custom_storage/jobs.db.backup
```

2. Attempt SQLite recovery:

```bash
sqlite3 custom_storage/jobs.db ".recover" | sqlite3 custom_storage/jobs_recovered.db
mv custom_storage/jobs_recovered.db custom_storage/jobs.db
```

3. If recovery fails, start fresh:

```bash
rm custom_storage/jobs.db
# The database is recreated on the next API start
uv run videoannotator server --host 0.0.0.0 --port 18011
```

Prevention - take regular backups:

```bash
# Add to cron/scheduled task
sqlite3 custom_storage/jobs.db ".backup 'backups/jobs_$(date +%Y%m%d).db'"
```
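Python's `sqlite3` module also exposes the online-backup API, which takes a consistent snapshot even while the server holds the database open. A sketch (the function name and example paths are illustrative):

```python
import sqlite3
from datetime import date

def backup_jobs_db(src_path: str, dest_path: str) -> None:
    """Copy a live SQLite database safely using the online backup API."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)  # consistent snapshot, safe on a live database
    src.close()
    dest.close()

# e.g. backup_jobs_db("custom_storage/jobs.db",
#                     f"backups/jobs_{date.today():%Y%m%d}.db")
```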
### Read-Only Database

Symptoms:
- Error: `attempt to write a readonly database`
- Job submission fails

Solution:

1. Check permissions:

```bash
ls -la custom_storage/
```

2. Fix ownership and permissions:

```bash
# Linux/Mac
sudo chown -R $USER:$USER custom_storage/
chmod u+w custom_storage/jobs.db
```

```powershell
# Windows - run as Administrator
icacls custom_storage\jobs.db /grant %USERNAME%:F
```

3. Check disk space:

```bash
df -h
```
## Network Issues

### Authentication Failures (401)

Symptoms:
- All API requests return 401
- Error: "Invalid or missing API key"

Solution:

1. Check whether authentication is enabled:

```bash
# Check the environment
echo $AUTH_REQUIRED  # should be "true" or empty (defaults to true)
```

2. Get your API key:

```bash
uv run python scripts/auth/get_api_key.py
```

3. Use the API key in requests:

```bash
export API_KEY="va_api_xxx..."
curl -X GET "http://localhost:18011/api/v1/jobs" \
  -H "X-API-Key: $API_KEY"
```

4. Disable authentication for local testing (not recommended for production):

```bash
uv run videoannotator server --dev --host 0.0.0.0 --port 18011
```

See the Authentication Guide for details.
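The same authenticated request can be made from Python with only the standard library. A sketch (the key value is a placeholder, and the helper names are illustrative):

```python
import json
import urllib.request

def jobs_request(api_key: str,
                 base_url: str = "http://localhost:18011") -> urllib.request.Request:
    """Build a GET /api/v1/jobs request carrying the X-API-Key header."""
    return urllib.request.Request(
        f"{base_url}/api/v1/jobs",
        headers={"X-API-Key": api_key},
    )

def list_jobs(api_key: str) -> dict:
    """Send the request; a wrong or missing key surfaces as HTTPError 401."""
    with urllib.request.urlopen(jobs_request(api_key)) as resp:
        return json.load(resp)

# e.g. list_jobs("va_api_xxx...")
```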
### Connection Refused

Symptoms:
- Error: `Connection refused` / `curl: (7) Failed to connect`

Solution:

1. Verify the server is running:

```bash
# Check the process
ps aux | grep "videoannotator.api.main" | grep -v grep

# Check the port
netstat -an | grep 18011
# or
lsof -i :18011
```

2. Start the server if it is not running:

```bash
uv run videoannotator server --host 0.0.0.0 --port 18011
```

3. Check the firewall (if accessing remotely):

Linux:

```bash
sudo ufw allow 18011
```

Windows:

```powershell
netsh advfirewall firewall add rule name="VideoAnnotator" dir=in action=allow protocol=TCP localport=18011
```

4. Check the host binding. The server should bind to 0.0.0.0 for external access:

```python
uvicorn.run("videoannotator.api.main:app", host="0.0.0.0", port=18011)
```
### CORS Errors

Symptoms:
- Browser console: `CORS policy: No 'Access-Control-Allow-Origin' header`
- Fetch requests fail from a web app
- API works in Postman/curl but not in the browser

Quick Solution:

```bash
# Standard usage: the official client (port 19011) works automatically
uv run videoannotator

# Custom client development: use dev mode (allows all CORS origins)
uv run videoannotator --dev
```

What's allowed by default (v1.3.0+) - the server automatically allows:
- `http://localhost:19011` (video-annotation-viewer, the official client)
- `http://localhost:18011` (server port, same-origin requests)

Custom client ports - if you are developing a custom client on a different port:

```bash
# Option 1: Set a specific origin
export CORS_ORIGINS="http://localhost:YOUR_PORT"
uv run videoannotator

# Option 2: Allow all origins (development only)
uv run videoannotator --dev
```

Production configuration:

```bash
# Only allow your production domain
export CORS_ORIGINS="https://viewer.example.com"
uv run videoannotator
```

Verification - check the CORS headers in the browser console:

```javascript
// In the browser console
fetch('http://localhost:18011/health')
  .then(r => console.log(r.headers.get('access-control-allow-origin')))
```

See the CORS Guide for detailed configuration.
## Diagnostics

Use these commands to gather information for troubleshooting.

System information:

```bash
# Python version
python --version

# uv version
uv --version

# FFmpeg version
ffmpeg -version

# CUDA version (if installed)
nvcc --version
nvidia-smi

# Disk space
df -h

# Memory usage
free -h   # Linux
vm_stat   # macOS
```

Package checks:

```bash
# Package version
uv run python -c "import videoannotator; print(videoannotator.__version__)"

# Installation verification
uv run python scripts/verify_installation.py --verbose

# Check imports
uv run python -c "
import torch
import cv2
import whisper
print(f'PyTorch: {torch.__version__}')
print(f'CUDA: {torch.cuda.is_available()}')
print(f'OpenCV: {cv2.__version__}')
print('All imports successful')
"
```

API health:

```bash
# Basic health
curl http://localhost:18011/health

# Detailed health with API key
export API_KEY="va_api_xxx..."
curl -X GET "http://localhost:18011/api/v1/system/health" \
  -H "X-API-Key: $API_KEY"

# Check specific endpoints
curl -X GET "http://localhost:18011/api/v1/pipelines" \
  -H "X-API-Key: $API_KEY"
```

Database checks:

```bash
# Check the database file
ls -lh custom_storage/jobs.db

# Count jobs
sqlite3 custom_storage/jobs.db "SELECT COUNT(*) FROM jobs;"

# Recent jobs
sqlite3 custom_storage/jobs.db "SELECT id, status, created_at FROM jobs ORDER BY created_at DESC LIMIT 5;"

# Check for locks
lsof custom_storage/jobs.db
```

Logs:

```bash
# View recent logs
tail -f logs/videoannotator.log

# Search for errors
grep ERROR logs/videoannotator.log | tail -20

# Search for a specific job
grep "job_abc123" logs/videoannotator.log

# Check API request logs
grep "POST /api/v1/jobs" logs/videoannotator.log
```

Debug logging - start the server with verbose output:

```bash
uv run python api_server.py --log-level debug
```

Or modify `src/utils/logging_config.py`:

```python
logger.setLevel(logging.DEBUG)
```

Individual pipeline testing - run pipelines in isolation to identify issues:
```bash
# Test the OpenFace pipeline
uv run python examples/test_individual_pipelines.py --pipeline openface3_identity --video test.mp4

# Test the Whisper pipeline
uv run python examples/test_individual_pipelines.py --pipeline whisper_transcription --video test.mp4
```

Memory profiling - track memory usage to identify leaks:

```bash
# Install the memory profiler
uv add memory-profiler

# Profile a script
uv run python -m memory_profiler your_script.py
```

Network debugging - capture request/response details for API issues:

```bash
# Use httpie for better debugging
uv add httpie
http GET http://localhost:18011/api/v1/jobs X-API-Key:$API_KEY

# Or curl with verbose output
curl -v -X GET "http://localhost:18011/api/v1/jobs" \
  -H "X-API-Key: $API_KEY"
```

## Getting Help

If you can't resolve the issue:
1. Check GitHub Issues: github.com/InfantLab/VideoAnnotator/issues

2. Gather diagnostic information:

```bash
# Run the full diagnostic
uv run python scripts/verify_installation.py --verbose > diagnostic.txt 2>&1

# Include system info
uname -a >> diagnostic.txt
python --version >> diagnostic.txt
nvidia-smi >> diagnostic.txt
```

3. Create a detailed issue including:
   - OS and version
   - Python version
   - VideoAnnotator version
   - Steps to reproduce
   - Error messages (full traceback)
   - Diagnostic output
   - What you've tried

4. Check the documentation.
## Typical Resolution Times

| Issue Category | Typical Resolution Time |
|---|---|
| FFmpeg/Python version | 5-10 minutes |
| Import errors | 2-5 minutes |
| Port conflicts | 1-2 minutes |
| Permission errors | 3-5 minutes |
| GPU/CUDA setup | 15-30 minutes |
| Database issues | 5-10 minutes |
| Network/CORS | 5-10 minutes |
| Authentication | 2-5 minutes |
Target: 80% of issues resolved in <15 minutes ✅