The Tusk Drift Python SDK is a Python library that enables recording and replaying of both outbound and inbound network calls. This allows you to capture real API interactions during development and replay them during testing, ensuring consistent and reliable test execution without external dependencies.
The SDK instruments various Python libraries (requests, httpx, psycopg, redis, etc.) and web frameworks (Flask, FastAPI, Django) to intercept and record network traffic. During replay mode, the SDK matches incoming requests against recorded traces and returns the previously captured responses.
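Conceptually, replay works like a lookup table keyed by request attributes: record the response once, serve it from the store on subsequent matches. A minimal sketch of the idea (hypothetical — the `TraceStore` class and its `(method, url, body)` matching key are illustrative only; the real SDK's matching is richer and spans multiple protocols):

```python
class TraceStore:
    """Toy record/replay store keyed by (method, url, body).
    Hypothetical sketch only, not the SDK's actual implementation."""

    def __init__(self):
        self._traces = {}

    def record(self, method, url, body, response):
        # RECORD mode: persist the outbound call and its response.
        self._traces[(method, url, body)] = response

    def replay(self, method, url, body):
        # REPLAY mode: return the captured response instead of
        # touching the network.
        key = (method, url, body)
        if key not in self._traces:
            raise KeyError(f"no recorded trace for {method} {url}")
        return self._traces[key]

store = TraceStore()
store.record("GET", "https://api.example.com/weather", None, {"temp_c": 21})
print(store.replay("GET", "https://api.example.com/weather", None))  # {'temp_c': 21}
```

A replay miss (no recorded trace for the request) surfaces as an error rather than a live network call, which is what makes tests deterministic.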
This guide provides step-by-step instructions for iterating on SDK instrumentations when debugging E2E tests. Use this when:
- An E2E test endpoint is failing
- You need to debug or fix instrumentation code
- You want to verify that SDK changes work correctly
E2E tests are located in drift/instrumentation/{instrumentation}/e2e-tests/:
Each test directory contains:
- `src/` - Test application source code
  - `app.py` - The test application
  - `test_requests.py` - HTTP requests to execute during recording
- `Dockerfile` - Container configuration (builds on `python-e2e-base`)
- `docker-compose.yml` - Container orchestration
- `.tusk/` - Traces and logs directory
  - `config.yaml` - Tusk CLI configuration
  - `traces/` - Recorded network traces
  - `logs/` - Service execution logs
- `entrypoint.py` - Test orchestrator (runs inside container)
- `run.sh` - External test runner (starts containers)
- `requirements.txt` - Python dependencies
Before running any E2E test, you must build the shared Python e2e base image:
```bash
cd drift-python-sdk
docker build -t python-e2e-base:latest -f drift/instrumentation/e2e_common/Dockerfile.base .
```

This image contains:
- Python 3.12
- Tusk CLI (for running replay tests)
- System utilities (curl, postgresql-client)
Important: You only need to rebuild the base image when:
- The Tusk CLI version needs to be updated
- System dependencies change
- Python version needs to be updated
```bash
cd drift/instrumentation/{instrumentation}/e2e-tests
```

Example:

```bash
cd drift/instrumentation/flask/e2e-tests
```

Before running a new test iteration, delete existing traces and logs to ensure only current test data is present:

```bash
rm -rf .tusk/traces/*
rm -rf .tusk/logs/*
```

This prevents confusion from old test runs and makes it easier to identify current issues.
Build the test container (first time only, or when requirements.txt changes):
```bash
docker compose build
```

Start the container in interactive mode for debugging:

```bash
docker compose run --rm app /bin/bash
```

This drops you into a shell inside the container where you can run commands manually.
Inside the container, start the application server in RECORD mode to capture network traffic:
```bash
TUSK_DRIFT_MODE=RECORD python src/app.py
```

The server will start and wait for requests. You should see output indicating that the SDK initialized and the app is running.
Open a new terminal, exec into the running container, and use curl to make requests to the endpoints you want to test:
```bash
# Find the container name
docker compose ps

# Exec into the container
docker compose exec app /bin/bash

# Make requests
curl -s http://localhost:8000/api/weather-activity
```

Tip: Check the test's `src/app.py` file to see all available endpoints.
Wait a few seconds to ensure all traces are written to local storage:
```bash
sleep 3
```

Stop the Python server by pressing Ctrl+C in the terminal where it's running, or:

```bash
pkill -f "python src/app.py"
```

Run the Tusk CLI to replay the recorded traces:

```bash
TUSK_ANALYTICS_DISABLED=1 tusk drift run --print --output-format "json" --enable-service-logs
```

Flags explained:

- `--print` - Print test results to stdout
- `--output-format "json"` - Output results in JSON format
- `--enable-service-logs` - Write detailed service logs to `.tusk/logs/` for debugging
To see all available flags, run:
```bash
tusk drift run --help
```

Interpreting Results:
The output will be JSON with test results:
```json
{
  "test_id": "test-1",
  "passed": true,
  "duration": 150
}
{
  "test_id": "test-2",
  "passed": false,
  "duration": 200
}
```

- `"passed": true` - Test passed successfully
- `"passed": false` - Test failed (mismatch between recording and replay)
- Check `.tusk/logs/` for detailed error messages and debugging information
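When several tests run at once, it can help to reduce the output to a quick pass/fail summary. A small sketch that assumes only the fields shown above (`test_id`, `passed`); it uses `json.JSONDecoder.raw_decode` so it handles both pretty-printed and one-object-per-line output:

```python
import json

def iter_results(output: str):
    """Yield JSON objects from concatenated CLI output, whether
    pretty-printed or emitted one per line."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(output):
        # Skip whitespace between objects.
        while idx < len(output) and output[idx].isspace():
            idx += 1
        if idx >= len(output):
            break
        obj, idx = decoder.raw_decode(output, idx)
        yield obj

sample = """
{
  "test_id": "test-1",
  "passed": true,
  "duration": 150
}
{
  "test_id": "test-2",
  "passed": false,
  "duration": 200
}
"""
failed = [r["test_id"] for r in iter_results(sample) if not r["passed"]]
print("failed:", failed)  # failed: ['test-2']
```

Pipe the CLI output into a script like this to get the list of failing test IDs at a glance.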
If tests fail, check the service logs for detailed error information:
```bash
ls .tusk/logs/
cat .tusk/logs/<log-file>
```

You can also view the traces recorded in the `.tusk/traces/` directory:

```bash
cat .tusk/traces/*.jsonl | python -m json.tool --json-lines
```

When you need to fix instrumentation code:
- Make changes to the SDK source code in your editor
- NO need to rebuild Docker containers - the SDK is mounted as a volume, so changes propagate automatically
- Clean up traces and logs (Step 2)
- Restart the server in RECORD mode (Step 4)
- Hit the endpoints again (Steps 5-7)
- Run the CLI tests (Step 8)
- Repeat until tests pass
When you're done testing, clean up the Docker containers:
```bash
docker compose down -v
```

Each E2E test directory has a `run.sh` script that automates the entire workflow:

```bash
./run.sh
```

This script:
- Builds containers
- Runs the entrypoint (which handles setup, recording, testing, and cleanup)
- Displays results with colored output
- Exits with code 0 (success) or 1 (failure)
The actual test orchestration happens inside the container via entrypoint.py, which:
- Installs Python dependencies
- Starts app in RECORD mode
- Executes test requests
- Stops app, verifies traces
- Runs the `tusk drift run` CLI
- Checks for socket instrumentation warnings
- Returns exit code
Use run.sh for full test runs, and use the manual steps above for iterative debugging.
The Docker Compose configuration mounts the SDK source code as a read-only volume:
```yaml
volumes:
  - ../../../..:/sdk  # SDK source mounted at /sdk
```

This means:
- SDK changes propagate automatically - no need to rebuild containers
- Fast iteration - just edit the SDK code and restart the app
- Must rebuild only when - requirements.txt changes or base image needs updating
- Traces (`.tusk/traces/`) - Recorded network interactions in JSONL format
- Logs (`.tusk/logs/`) - Detailed service logs when `--enable-service-logs` is used
- Always clean these before re-running tests to avoid confusion
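A quick sanity check after recording is to confirm each trace file actually contains records. A small sketch that assumes only that each line of a `.jsonl` file is one JSON object — no particular trace schema is assumed:

```python
import json
from pathlib import Path

def summarize_traces(trace_dir=".tusk/traces"):
    """Return a {filename: record_count} map for every trace file.
    Parses each line to catch truncated or corrupt records early."""
    counts = {}
    for path in sorted(Path(trace_dir).glob("*.jsonl")):
        lines = [ln for ln in path.read_text().splitlines() if ln.strip()]
        for ln in lines:
            json.loads(ln)  # raises on malformed JSON
        counts[path.name] = len(lines)
    return counts

print(summarize_traces())
```

An empty result after a recording run is the first thing to investigate before digging into the logs.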
- Check service logs first - Most issues are explained in `.tusk/logs/`
- Verify traces were created - Check that `.tusk/traces/` has files after recording
- Test one endpoint at a time - Easier to isolate issues
- Check for socket warnings - Indicates missing instrumentation for a library
The SDK monitors for unpatched dependencies - libraries that make network calls without proper instrumentation. If you see this warning in logs:
```
[SocketInstrumentation] TCP connect() called from inbound request context, likely unpatched dependency
```
This indicates a library is making TCP calls that aren't being instrumented. You should either:
- Investigate which library is making the unpatched calls
- Add instrumentation for that library
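To track these down across a run, you can grep the service logs for the warning. A minimal sketch — no log file naming scheme is assumed, so every file under the log directory is scanned:

```python
from pathlib import Path

WARNING = "[SocketInstrumentation] TCP connect() called"

def find_socket_warnings(log_dir=".tusk/logs"):
    """Return (filename, line) pairs for unpatched-dependency warnings."""
    hits = []
    for path in Path(log_dir).rglob("*"):
        if not path.is_file():
            continue
        for line in path.read_text(errors="replace").splitlines():
            if WARNING in line:
                hits.append((path.name, line.strip()))
    return hits

for name, line in find_socket_warnings():
    print(f"{name}: {line}")
```

The surrounding log context (which endpoint was being served, which host the TCP connect targeted) is usually enough to identify the offending library.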
To run all E2E tests across all instrumentations:
```bash
# From SDK root directory

# Sequential (default)
./run-all-e2e-tests.sh

# 2 tests in parallel
./run-all-e2e-tests.sh -c 2

# All tests in parallel (unlimited)
./run-all-e2e-tests.sh -c 0

# Run only single-instrumentation e2e tests
./run-all-e2e-tests.sh --instrumentation-only

# Run only stack tests
./run-all-e2e-tests.sh --stack-only
```

```bash
# Build base image (first time only, or when updating CLI/Python version)
docker build -t python-e2e-base:latest -f drift/instrumentation/e2e_common/Dockerfile.base .

# Navigate to test directory
cd drift/instrumentation/flask/e2e-tests

# Clean traces and logs
rm -rf .tusk/traces/* .tusk/logs/*

# Build test container (first time only, or when requirements change)
docker compose build

# Run automated test
./run.sh

# Start container interactively for debugging
docker compose run --rm app /bin/bash

# Inside container: Start server in RECORD mode
TUSK_DRIFT_MODE=RECORD python src/app.py

# Inside container: Run test requests
python src/test_requests.py

# Inside container: Run Tusk CLI tests
TUSK_ANALYTICS_DISABLED=1 tusk drift run --print --output-format "json" --enable-service-logs

# View traces
cat .tusk/traces/*.jsonl | python -m json.tool --json-lines

# View logs
cat .tusk/logs/*

# Clean up containers
docker compose down -v

# Run all E2E tests
./run-all-e2e-tests.sh
```