Commit 35d04cc

add e2e testing guide prompt
1 parent 8d55f35 commit 35d04cc

9 files changed: 360 additions & 0 deletions

E2E_TESTING_GUIDE.md (344 additions & 0 deletions)
# E2E Testing Guide for Tusk Drift Python SDK

## Overview

The Tusk Drift Python SDK is a Python library that enables recording and replaying of both outbound and inbound network calls. This allows you to capture real API interactions during development and replay them during testing, ensuring consistent and reliable test execution without external dependencies.

The SDK instruments various Python libraries (requests, httpx, psycopg, redis, etc.) and web frameworks (Flask, FastAPI, Django) to intercept and record network traffic. During replay mode, the SDK matches incoming requests against recorded traces and returns the previously captured responses.
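Conceptually, replay-time matching is a lookup from request attributes to a recorded response. The sketch below is a simplified illustration of that idea, not the SDK's actual matcher; the trace shape and field names here are assumptions:

```python
# Illustrative only: match an incoming request against recorded traces
# by method and path, returning the captured response if one exists.
# The real SDK matches on richer request attributes than this.

def match_trace(traces, method, path):
    """Return the recorded response for (method, path), or None."""
    for trace in traces:
        if trace["method"] == method and trace["path"] == path:
            return trace["response"]
    return None  # no recording: the SDK would report a mismatch

# Example recorded trace (hypothetical shape)
traces = [
    {
        "method": "GET",
        "path": "/api/weather-activity",
        "response": {"status": 200, "body": '{"forecast": "sunny"}'},
    },
]
```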
## Purpose of This Guide

This guide provides step-by-step instructions for iterating on SDK instrumentations when debugging E2E tests. Use this when:

- An E2E test endpoint is failing
- You need to debug or fix instrumentation code
- You want to verify that SDK changes work correctly
## E2E Test Structure

E2E tests are located in `drift/instrumentation/{instrumentation}/e2e-tests/`.

Each test directory contains:

- `src/` - Test application source code
  - `app.py` - The test application
  - `test_requests.py` - HTTP requests to execute during recording
- `Dockerfile` - Container configuration (builds on `python-e2e-base`)
- `docker-compose.yml` - Container orchestration
- `.tusk/` - Traces and logs directory
  - `config.yaml` - Tusk CLI configuration
  - `traces/` - Recorded network traces
  - `logs/` - Service execution logs
- `entrypoint.py` - Test orchestrator (runs inside container)
- `run.sh` - External test runner (starts containers)
- `requirements.txt` - Python dependencies
## Prerequisites

### Build the Base Image

Before running any E2E test, you must build the shared Python e2e base image:

```bash
cd drift-python-sdk
docker build -t python-e2e-base:latest -f drift/instrumentation/e2e_common/Dockerfile.base .
```

This image contains:

- Python 3.12
- Tusk CLI (for running replay tests)
- System utilities (curl, postgresql-client)

**Important:** You only need to rebuild the base image when:

- The Tusk CLI version needs to be updated
- System dependencies change
- The Python version needs to be updated
## Quick Iteration Workflow

### Step 1: Navigate to the E2E Test Directory

```bash
cd drift/instrumentation/{instrumentation}/e2e-tests
```

Example:

```bash
cd drift/instrumentation/flask/e2e-tests
```
### Step 2: Clean Up Previous Test Data

Before running a new test iteration, delete existing traces and logs to ensure only current test data is present:

```bash
rm -rf .tusk/traces/*
rm -rf .tusk/logs/*
```

This prevents confusion from old test runs and makes it easier to identify current issues.
### Step 3: Build and Start Docker Container

Build the test container (first time only, or when `requirements.txt` changes):

```bash
docker compose build
```

Start the container in interactive mode for debugging:

```bash
docker compose run --rm app /bin/bash
```

This drops you into a shell inside the container where you can run commands manually.

### Step 4: Start Server in RECORD Mode

Inside the container, start the application server in RECORD mode to capture network traffic:

```bash
TUSK_DRIFT_MODE=RECORD python src/app.py
```

The server will start and wait for requests. You should see output indicating the SDK initialized and the app is running.
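Recording is toggled purely through the environment variable. As a rough illustration of how a mode switch like this is typically read (the default and any mode names other than `RECORD` are assumptions here, not the SDK's documented behavior):

```python
import os

# Hypothetical sketch: read the drift mode from the environment,
# treating an unset variable as "disabled".
def drift_mode() -> str:
    return os.environ.get("TUSK_DRIFT_MODE", "DISABLED").upper()

def is_recording() -> bool:
    return drift_mode() == "RECORD"
```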
### Step 5: Hit the Endpoint(s) You Want to Record

Open a new terminal, exec into the running container, and use `curl` to make requests to the endpoints you want to test:

```bash
# Find the container name
docker compose ps

# Exec into the container
docker compose exec app /bin/bash

# Make requests
curl -s http://localhost:8000/api/weather-activity
```

**Tip:** Check the test's `src/app.py` file to see all available endpoints.
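Instead of curling by hand, you can script the requests in the style of `test_requests.py`. A minimal sketch (the port and endpoint path are taken from the curl example above and may differ per test; `build_urls` and `run` are helpers invented here):

```python
import urllib.request

BASE_URL = "http://localhost:8000"
ENDPOINTS = ["/api/weather-activity"]  # check src/app.py for the real list

def build_urls(base, paths):
    """Join the base URL with each endpoint path."""
    return [base.rstrip("/") + p for p in paths]

def run():
    # Hit each endpoint so the SDK records a trace for it.
    for url in build_urls(BASE_URL, ENDPOINTS):
        with urllib.request.urlopen(url) as resp:
            print(url, resp.status)

# Call run() inside the container while the app is in RECORD mode.
```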
### Step 6: Wait Before Stopping the Server

Wait a few seconds to ensure all traces are written to local storage:

```bash
sleep 3
```

### Step 7: Stop the Server Process

Stop the Python server by pressing `Ctrl+C` in the terminal where it's running, or:

```bash
pkill -f "python src/app.py"
```
### Step 8: Run the Tusk CLI to Execute Tests

Run the Tusk CLI to replay the recorded traces:

```bash
TUSK_ANALYTICS_DISABLED=1 tusk run --print --output-format "json" --enable-service-logs
```

**Flags explained:**

- `--print` - Print test results to stdout
- `--output-format "json"` - Output results in JSON format
- `--enable-service-logs` - Write detailed service logs to `.tusk/logs/` for debugging

To see all available flags, run:

```bash
tusk run --help
```

**Interpreting Results:**

The output will be JSON with test results:

```json
{
  "test_id": "test-1",
  "passed": true,
  "duration": 150
}
{
  "test_id": "test-2",
  "passed": false,
  "duration": 200
}
```

- `"passed": true` - Test passed successfully
- `"passed": false` - Test failed (mismatch between recording and replay)
- Check `.tusk/logs/` for detailed error messages and debugging information
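Since the results arrive as a stream of JSON objects, a few lines of Python can tally them. A sketch assuming only the `test_id`/`passed` fields shown above (the real output may carry more fields):

```python
import json

def summarize(output: str):
    """Tally a stream of concatenated JSON test-result objects."""
    decoder = json.JSONDecoder()
    text = output.strip()
    idx, total, failed = 0, 0, []
    while idx < len(text):
        result, end = decoder.raw_decode(text, idx)
        total += 1
        if not result.get("passed"):
            failed.append(result["test_id"])
        while end < len(text) and text[end].isspace():
            end += 1  # skip whitespace between objects
        idx = end
    return total, failed
```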
### Step 9: Review Logs for Issues

If tests fail, check the service logs for detailed error information:

```bash
ls .tusk/logs/
cat .tusk/logs/<log-file>
```

You can also view the traces recorded in the `.tusk/traces/` directory (`--json-lines` lets `json.tool` pretty-print multi-record JSONL files):

```bash
cat .tusk/traces/*.jsonl | python -m json.tool --json-lines
```
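To inspect traces programmatically rather than eyeballing pretty-printed JSON, you can load every JSONL record in one pass. A small sketch (the per-record schema is SDK-defined and not shown here):

```python
import glob
import json

def load_traces(pattern=".tusk/traces/*.jsonl"):
    """Parse every line of every matching JSONL trace file."""
    records = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    records.append(json.loads(line))
    return records
```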
### Step 10: Iterate on SDK Code

When you need to fix instrumentation code:

1. **Make changes to the SDK source code** in your editor
2. **No need to rebuild Docker containers** - the SDK is mounted as a volume, so changes propagate automatically
3. **Clean up traces and logs** (Step 2)
4. **Restart the server in RECORD mode** (Step 4)
5. **Hit the endpoints again** (Steps 5-7)
6. **Run the CLI tests** (Step 8)
7. **Repeat until tests pass**

### Step 11: Clean Up Docker Containers

When you're done testing, clean up the Docker containers:

```bash
docker compose down -v
```
## Automated Testing

Each E2E test directory has a `run.sh` script that automates the entire workflow:

```bash
./run.sh
```

This script:

1. Builds containers
2. Runs the entrypoint (which handles setup, recording, testing, and cleanup)
3. Displays results with colored output
4. Exits with code 0 (success) or 1 (failure)

The actual test orchestration happens inside the container via `entrypoint.py`, which:

1. Installs Python dependencies
2. Starts the app in RECORD mode
3. Executes test requests
4. Stops the app and verifies traces
5. Runs the `tusk run` CLI
6. Checks for socket instrumentation warnings
7. Returns an exit code

Use `run.sh` for full test runs, and use the manual steps above for iterative debugging.
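As a rough picture of what that orchestration looks like, here is a stripped-down sketch in the spirit of `entrypoint.py` — not the actual file; the command strings, timings, and flush wait are assumptions:

```python
import os
import subprocess
import sys
import time

def record_and_replay() -> int:
    """Record traffic from the test app, then replay it with the CLI."""
    env = {**os.environ, "TUSK_DRIFT_MODE": "RECORD"}
    app = subprocess.Popen([sys.executable, "src/app.py"], env=env)
    try:
        time.sleep(2)  # crude wait for the server to come up
        # Execute the recorded requests against the running app.
        subprocess.run([sys.executable, "src/test_requests.py"], check=True)
        time.sleep(3)  # let traces flush to .tusk/traces/
    finally:
        app.terminate()
        app.wait()
    # The CLI's exit code decides pass/fail for the whole run.
    replay = subprocess.run(["tusk", "run", "--print", "--output-format", "json"])
    return replay.returncode

# Meant to be invoked inside the container, e.g. sys.exit(record_and_replay())
```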
## Important Notes

### SDK Volume Mounting

The Docker Compose configuration mounts the SDK source code as a volume:

```yaml
volumes:
  - ../../../..:/sdk  # SDK source mounted at /sdk
```

This means:

- **SDK changes propagate automatically** - no need to rebuild containers
- **Fast iteration** - just edit the SDK code and restart the app
- **Rebuild only when** `requirements.txt` changes or the base image needs updating

### Traces and Logs

- **Traces** (`.tusk/traces/`) - Recorded network interactions in JSONL format
- **Logs** (`.tusk/logs/`) - Detailed service logs when `--enable-service-logs` is used
- **Always clean these before re-running tests** to avoid confusion

### Debugging Tips

1. **Check service logs first** - Most issues are explained in `.tusk/logs/`
2. **Verify traces were created** - Check that `.tusk/traces/` has files after recording
3. **Test one endpoint at a time** - Easier to isolate issues
4. **Check for socket warnings** - They indicate missing instrumentation for a library
### Socket Instrumentation Warnings

The SDK monitors for unpatched dependencies - libraries that make network calls without proper instrumentation. If you see this warning in the logs:

```
[SocketInstrumentation] TCP connect() called from inbound request context, likely unpatched dependency
```

it means a library is making TCP calls that aren't being instrumented. You should:

- Investigate which library is making the unpatched calls
- Add instrumentation for that library
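Because the warning has a fixed prefix, it is easy to grep for mechanically when post-processing `.tusk/logs/`. A small helper (the marker string is copied from the log line above; `find_socket_warnings` is a name invented here):

```python
WARNING_MARKER = "[SocketInstrumentation] TCP connect() called"

def find_socket_warnings(log_text: str):
    """Return the log lines that flag unpatched TCP connections."""
    return [line for line in log_text.splitlines() if WARNING_MARKER in line]
```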
## Running All Tests

To run all E2E tests across all instrumentations:

```bash
# From the SDK root directory

# Sequential (default)
./run-all-e2e-tests.sh

# 2 tests in parallel
./run-all-e2e-tests.sh 2

# All tests in parallel (unlimited)
./run-all-e2e-tests.sh 0
```
## Quick Reference Commands

```bash
# Build base image (first time only, or when updating CLI/Python version)
docker build -t python-e2e-base:latest -f drift/instrumentation/e2e_common/Dockerfile.base .

# Navigate to test directory
cd drift/instrumentation/flask/e2e-tests

# Clean traces and logs
rm -rf .tusk/traces/* .tusk/logs/*

# Build test container (first time only, or when requirements change)
docker compose build

# Run automated test
./run.sh

# Start container interactively for debugging
docker compose run --rm app /bin/bash

# Inside container: start server in RECORD mode
TUSK_DRIFT_MODE=RECORD python src/app.py

# Inside container: run test requests
python src/test_requests.py

# Inside container: run Tusk CLI tests
TUSK_ANALYTICS_DISABLED=1 tusk run --print --output-format "json" --enable-service-logs

# View traces
cat .tusk/traces/*.jsonl | python -m json.tool --json-lines

# View logs
cat .tusk/logs/*

# Clean up containers
docker compose down -v

# Run all E2E tests
./run-all-e2e-tests.sh
```

drift/instrumentation/django/e2e-tests/docker-compose.yml (2 additions & 0 deletions)

```diff
@@ -12,6 +12,8 @@ services:
       - DJANGO_SETTINGS_MODULE=settings
     working_dir: /app
     volumes:
+      # Mount SDK source for hot reload (no rebuild needed for SDK changes)
+      - ../../../..:/sdk
       # Mount app source for development
       - ./src:/app/src
       # Mount .tusk folder to persist traces
```

drift/instrumentation/fastapi/e2e-tests/docker-compose.yml (2 additions & 0 deletions)

```diff
@@ -11,6 +11,8 @@ services:
       - PYTHONUNBUFFERED=1
     working_dir: /app
     volumes:
+      # Mount SDK source for hot reload (no rebuild needed for SDK changes)
+      - ../../../..:/sdk
       # Mount app source for development
       - ./src:/app/src
       # Mount .tusk folder to persist traces
```

drift/instrumentation/flask/e2e-tests/docker-compose.yml (2 additions & 0 deletions)

```diff
@@ -11,6 +11,8 @@ services:
       - PYTHONUNBUFFERED=1
     working_dir: /app
     volumes:
+      # Mount SDK source for hot reload (no rebuild needed for SDK changes)
+      - ../../../..:/sdk
       # Mount app source for development
       - ./src:/app/src
       # Mount .tusk folder to persist traces
```

drift/instrumentation/httpx/e2e-tests/docker-compose.yml (2 additions & 0 deletions)

```diff
@@ -11,6 +11,8 @@ services:
      - PYTHONUNBUFFERED=1
     working_dir: /app
     volumes:
+      # Mount SDK source for hot reload (no rebuild needed for SDK changes)
+      - ../../../..:/sdk
       # Mount app source for development
       - ./src:/app/src
       # Mount .tusk folder to persist traces
```

drift/instrumentation/psycopg/e2e-tests/docker-compose.yml (2 additions & 0 deletions)

```diff
@@ -31,6 +31,8 @@ services:
       - PYTHONUNBUFFERED=1
     working_dir: /app
     volumes:
+      # Mount SDK source for hot reload (no rebuild needed for SDK changes)
+      - ../../../..:/sdk
       # Mount app source for development
       - ./src:/app/src
       # Mount .tusk folder to persist traces
```

drift/instrumentation/psycopg2/e2e-tests/docker-compose.yml (2 additions & 0 deletions)

```diff
@@ -31,6 +31,8 @@ services:
       - PYTHONUNBUFFERED=1
     working_dir: /app
     volumes:
+      # Mount SDK source for hot reload (no rebuild needed for SDK changes)
+      - ../../../..:/sdk
       # Mount app source for development
       - ./src:/app/src
       # Mount .tusk folder to persist traces
```
