WebQA Agent supports three deployment methods, in order of increasing complexity:
| Method | Use Case | Agent Isolation | Prerequisites |
|---|---|---|---|
| Local Development | Personal dev & debugging | None (subprocess) | Python, Node.js, PostgreSQL, Redis |
| Docker Compose | Single-machine / Team trial | Container-level | Docker |
| Kubernetes | Production cluster | Pod-level | K8s cluster |
Suitable for daily development and debugging. The backend launches the agent as a subprocess.
- Python 3.11+
- Node.js 18+ & npm
- PostgreSQL 14+
- Redis 6+
- Playwright Chromium (`uv run playwright install chromium`)
```bash
# macOS
brew install postgresql@14 redis
brew services start postgresql@14
brew services start redis
createdb webqa
```

```bash
# Linux (Ubuntu/Debian)
sudo apt install postgresql redis-server
sudo systemctl start postgresql redis
sudo -u postgres createdb webqa
```

```bash
cd backend

# Configure environment variables
cp env.example .env
# Edit .env: fill in LLM_API_KEY and other required values

# Install dependencies
pip install -r requirements.txt

# Initialize database
alembic upgrade head

# Start (dev mode with hot reload)
python run.py
```

The backend runs at http://localhost:8000, with API docs at http://localhost:8000/docs.
```bash
cd frontend
npm install
npm run dev
```

The frontend runs at http://localhost:5173.
The agent runs as a subprocess and needs dependencies installed in the same environment:
```bash
# From project root
pip install -r webqa_agent/requirements.txt
playwright install chromium

# Optional: install Lighthouse for performance testing
cd webqa_agent && npm install && cd ..
```

```
Browser → frontend (localhost:5173)
    ↓ API
backend (localhost:8000)
    ↓ subprocess
agent (same process)
    ↓
PostgreSQL + Redis (localhost)
```
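In local mode, the backend spawning the agent as a subprocess can be sketched as below. The entrypoint module and CLI flags are assumptions for illustration, not the project's actual interface:

```python
import sys

def build_agent_command(job_id: str, target_url: str) -> list[str]:
    """Build the child-process command line for one test execution.

    The `-m webqa_agent` entrypoint and the flag names are assumed,
    illustrative values.
    """
    return [
        sys.executable, "-m", "webqa_agent",
        "--job-id", job_id,
        "--url", target_url,
    ]

# The backend would then start the child, e.g.:
#   proc = subprocess.Popen(build_agent_command("job-1", "https://example.com"))
# The child inherits the backend's Python environment, which is why the
# agent's dependencies must be installed alongside the backend's.
```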
Suitable for single-machine deployment. Backend and agent run in isolated containers.
- Docker 24+
- Docker Compose V2
```bash
cd deploy/docker-compose

# 1. Configure environment variables
cp .env.example .env
# Edit .env: fill in your LLM API Key

# 2. Build & start with one command
./start.sh
```

`start.sh` handles environment checks, image building (backend + agent + frontend), and service startup automatically.
Manual steps (without script)
```bash
# Build all images (including agent)
docker compose --profile build-only build

# Start services (db, redis, backend, frontend)
docker compose up -d
```

After startup:
- Frontend: http://localhost
```bash
# View logs
docker compose logs -f webqa-be

# List running agent containers
docker ps --filter "label=app=webqa-agent"

# Stop services
docker compose down

# Stop and clean all data (use with caution)
docker compose down -v
```

```
Browser → frontend (:80)
    ↓ API
webqa-be (:8000)
    ↓ Docker API (docker.sock)
webqa-agent (created on-demand, isolated container)
    ↓ HTTP callback
webqa-be
    ↓
PostgreSQL + Redis (containers)
    ↓
shared volume (/shared)
```
The backend creates agent containers on-demand via Docker API. Each test execution runs in its own isolated container. When the agent finishes, it notifies the backend via HTTP callback, and reports are shared through the Docker volume.
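The on-demand container creation can be sketched as the equivalent `docker run` invocation the backend would issue through the Docker API. The `app=webqa-agent` label and the `/shared` volume match the compose setup described above; the image tag, volume name, environment-variable names, and callback URL are assumptions:

```python
def build_agent_run_args(job_id: str, callback_url: str) -> list[str]:
    """Sketch of the container-create call as a `docker run` command line.

    Only the label and the /shared mount point come from the compose
    setup; everything else here is an illustrative placeholder.
    """
    return [
        "docker", "run", "--detach", "--rm",
        "--label", "app=webqa-agent",        # lets `docker ps --filter` find agents
        "--volume", "webqa-shared:/shared",  # reports land on the shared volume
        "--env", f"JOB_ID={job_id}",         # assumed variable names
        "--env", f"CALLBACK_URL={callback_url}",
        "webqa-agent:latest",                # assumed image tag
    ]
```

When the agent finishes, it would POST its result to `CALLBACK_URL`, and the backend reads the report from the shared volume.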
Key environment variables (configured in .env):
| Variable | Description | Default |
|---|---|---|
| `LLM_API_KEY` | LLM API key | Required |
| `LLM_BASE_URL` | LLM API endpoint | `https://api.openai.com/v1` |
| `LLM_AVAILABLE_MODELS` | Available models (comma-separated) | `gpt-4.1-mini-2025-04-14,...` |
| `LLM_DEFAULT_MODEL` | Default model | `gpt-5-mini-2025-08-07` |
| `JOB_TIMEOUT_SECONDS` | Execution timeout (seconds) | `7200` |
| `MAX_CONCURRENT_JOBS` | Max concurrent executions | `5` |
Built-in service variables (database, Redis, execution mode, etc.) are pre-configured in docker-compose.yml and generally don't need modification.
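How the backend might resolve these settings can be sketched as follows, mirroring the variable names and defaults from the table above (the helper itself is illustrative, not the project's actual config code):

```python
def load_llm_settings(env: dict[str, str]) -> dict[str, object]:
    """Resolve LLM settings from an environment mapping.

    Names and defaults follow the table above; this helper is a sketch.
    """
    api_key = env.get("LLM_API_KEY")
    if not api_key:
        raise RuntimeError("LLM_API_KEY is required")
    return {
        "api_key": api_key,
        "base_url": env.get("LLM_BASE_URL", "https://api.openai.com/v1"),
        "models": env.get("LLM_AVAILABLE_MODELS", "gpt-4.1-mini-2025-04-14").split(","),
        "default_model": env.get("LLM_DEFAULT_MODEL", "gpt-5-mini-2025-08-07"),
        "timeout_s": int(env.get("JOB_TIMEOUT_SECONDS", "7200")),
        "max_jobs": int(env.get("MAX_CONCURRENT_JOBS", "5")),
    }
```

Calling it with `dict(os.environ)` after loading `.env` would give the effective configuration; a missing `LLM_API_KEY` fails fast instead of surfacing later as an opaque LLM error.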
For production cluster deployments, see deploy/k8s/README.md.
WebQA Agent provides a flexible extension mechanism, allowing teams to customize and integrate their internal infrastructure (such as SSO, OSS object storage, internal LLMs, etc.).
If your team uses an internal SSO, you can:
- Implement your SSO login and cookie generation logic in `backend/app/api/environments.py` or the relevant Auth Provider modules.
- In the frontend (`frontend/src/components/BusinessManager.tsx`), keep or modify the existing SSO form fields (username, password, environment, etc.) to adapt to your internal SSO API.
By default, test reports are saved locally. If you want to upload reports to an internal OSS:
- Implement the report upload logic in `backend/app/api/internal.py`.
- In `backend/app/providers/__init__.py`, the system supports auto-detecting internal deployment implementations. Place your internal OSS client code in a specific provider directory, and the system will automatically load and use it.
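The auto-detection idea can be sketched with a dynamic import: try to load an internal provider package and fall back to the open-source default (local storage) when it is absent. The package path and attribute name below are assumptions, not the project's actual layout:

```python
import importlib

def load_report_uploader(provider_pkg: str = "app.providers.internal"):
    """Return the internal upload function if the package exists,
    otherwise a local-storage fallback. Names are illustrative."""
    try:
        module = importlib.import_module(provider_pkg)
        return getattr(module, "upload_report")  # assumed attribute name
    except (ImportError, AttributeError):
        def save_locally(report_path: str) -> str:
            # Open-source default: leave the report where it was written.
            return report_path
        return save_locally
```

This keeps the open-source tree free of internal dependencies: dropping the provider package into the deployment is enough to switch behavior, with no code changes in the caller.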
If your team uses internally deployed LLMs:
- Configure the corresponding environment variables in `.env` (e.g., `LLM_API_KEY_INTERNAL_MODEL`, `LLM_BASE_URL_INTERNAL_MODEL`).
- Add or modify the model mapping logic in `backend/app/config.py` to ensure internal models are selectable in the frontend and routed correctly.
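The routing side of that mapping can be sketched as below. The env-var names follow the pattern mentioned above; the internal model name and the function itself are hypothetical:

```python
def resolve_llm_endpoint(model: str, env: dict[str, str]) -> tuple[str, str]:
    """Return (base_url, api_key) for a model name.

    Internal models route to their own endpoint/key pair; everything
    else uses the default LLM settings. Names are illustrative.
    """
    internal_models = {"internal-model"}  # hypothetical internal model name
    if model in internal_models:
        return (env["LLM_BASE_URL_INTERNAL_MODEL"],
                env["LLM_API_KEY_INTERNAL_MODEL"])
    return (env.get("LLM_BASE_URL", "https://api.openai.com/v1"),
            env["LLM_API_KEY"])
```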
Tip: The codebase provides an auto-load mechanism in `backend/app/providers/__init__.py` to distinguish between "open-source" and "internal deployment" versions.