# Benchmarking guide for Tusk Drift Python SDK

## Overview
The benchmarking methodology is the same across all of our SDKs. It
piggybacks off the end-to-end tests, so test cases are discovered
automatically and we don't have to set up separate apps in multiple places
just to run benchmarks.

The simplest way to get started is to run
```
./run-all-benchmarks.sh
```
or, to run a single instrumentation (or a few):
```
./run-all-benchmarks.sh -f flask,fastapi
```

Within each instrumentation's e2e tests, you can also run the benchmark
directly:
```
BENCHMARKS=1 ./drift/instrumentation/aiohttp/e2e-tests/run.sh
```
but this is less convenient. The run-all script essentially automates this
for you (and sets a few other settings automatically).

## Methodology

Benchmarks run with a warm-up period (3 seconds by default) followed by 10
seconds of actual measurement. The warm-up is especially important for tests
that make HTTP or filesystem calls, since it also warms up all the system
caches (think ARP, DNS, buffers, etc.). Disabling warm-up *will* produce
nonsensical results.
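The warm-up-then-measure loop described above can be sketched roughly as
follows. This is a minimal illustration, not the SDK's actual harness; the
function name and parameters are made up for this example:

```python
import time


def measure_ops(fn, warmup_s=3.0, measure_s=10.0):
    """Call fn in a loop: a warm-up phase (results discarded), then a
    timed measurement phase. Returns the achieved ops/second."""
    # Warm-up: exercises the same code path so system caches
    # (ARP, DNS, buffers, ...) are hot before we start measuring.
    deadline = time.monotonic() + warmup_s
    while time.monotonic() < deadline:
        fn()

    # Measurement: count completed calls over the fixed interval.
    ops = 0
    start = time.monotonic()
    deadline = start + measure_s
    while time.monotonic() < deadline:
        fn()
        ops += 1
    return ops / (time.monotonic() - start)
```

Note the rate is computed from the actual elapsed time rather than the
nominal interval, so the last in-flight call doesn't skew the result.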

The benchmarking process itself is simple. We start the app exactly as the
e2e tests would, but the make_request helper function used by the e2e tests
is hijacked to run continuously instead of returning after a single call.
We then measure the ops/second achieved during the 10-second measurement
interval and print the results. We do this first with the SDK disabled, and
then again with the SDK enabled. The SDK is enabled/disabled through the
standard `TUSK_DRIFT_MODE` envvar.
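The two-pass structure (SDK disabled, then enabled) comes down to running the
same workload under different environments. A small sketch of building that
environment: `TUSK_DRIFT_MODE` and `BENCHMARKS` are the real variables from
this guide, but the concrete mode value you pass is an assumption here --
check your SDK configuration for the accepted values:

```python
import os


def benchmark_env(mode: str) -> dict:
    """Build the environment for one benchmark pass.

    `TUSK_DRIFT_MODE` controls whether the SDK is active; the concrete
    values (e.g. "DISABLED") are placeholders in this sketch.
    """
    env = dict(os.environ)
    env["TUSK_DRIFT_MODE"] = mode
    env["BENCHMARKS"] = "1"  # same flag used by the per-suite run.sh
    return env
```

You would then launch the app once per mode with something like
`subprocess.run(["./run.sh"], env=benchmark_env(mode))` and compare the two
ops/second figures.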

## Implementing benchmarks

If you've read the above, you'll realise that adding benchmarks is easy:
simply add a new endpoint to an existing e2e test, or add a new e2e test
entirely in the same format -- the run-all script auto-discovers suites
based on the run.sh script in each e2e test folder.
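The auto-discovery step could look something like the sketch below. The glob
pattern is an assumption inferred from the `drift/instrumentation/<name>/e2e-tests/run.sh`
layout shown earlier; the real run-all script is a shell script and may
differ in detail:

```python
from pathlib import Path


def discover_suites(root: str = "drift/instrumentation"):
    """Find every e2e suite that ships a run.sh script.

    Mirrors (as a sketch) how the run-all script discovers benchmarks:
    any folder matching <root>/<instrumentation>/e2e-tests/run.sh counts.
    """
    return sorted(Path(root).glob("*/e2e-tests/run.sh"))
```

Any new e2e test following the same folder layout is picked up with no extra
registration step.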