
Commit ddcdc07

docs(contributing): clarify the container-vs-fixture test strategy
A container test anchors the plugin against a real service and catches vendor-side format drift, but it almost never reproduces the interesting edge cases that plugins actually have to handle in production: stale caches, half-configured clusters, 503 responses, overflowed counters, semantically broken configs. Those states typically only occur on a live system under real load, not inside a freshly started clean container. Add a Rules-of-Thumb bullet that spells out the combined pattern: one testcontainers scenario for the happy path (to notice when the vendor changes their API) plus a handful of fixture-based testcases for the weird states, side by side in the same `unit-test/run`.
1 parent c4cad2e commit ddcdc07

File tree

1 file changed (+1, −0 lines)


CONTRIBUTING.md

Lines changed: 1 addition & 0 deletions
@@ -143,6 +143,7 @@ Checklist:
 * Mainly return WARN. Only return CRIT if the operators want to or have to wake up at night. CRIT means "react immediately".
 * EAFP: Easier to ask for forgiveness than permission. This common Python coding style assumes the existence of valid keys or attributes and catches exceptions if the assumption proves false. This clean and fast style is characterized by the presence of many try and except statements.
 * **Pick the right unit-test flavor.** If the plugin parses the output of a shell command, the body of a file, or an HTTP endpoint that returns a stable text format, write fixture-based tests driven by `lib.lftest.run()` and a `TESTS` list. They run in a fraction of a second, are fully reproducible, and cover the full `tox` / Python matrix. Only reach for container-based tests (via `lib.lftest.run_container()` and testcontainers-python) when the check's behaviour really depends on the live runtime state of the service (log markers, cluster topology, write-then-read flows, version-dependent API responses that cannot be captured statically).
+* **Combine container tests with fixtures for real coverage.** Container tests anchor the happy path against a real service, but they rarely expose the interesting edge cases: a service that just crashed, a stale cache, a half-configured cluster, a component that responds with a 503, a counter that overflowed, a config that is syntactically valid but semantically broken. Those behaviours show up almost exclusively in real operation, not in a freshly started clean container. The pragmatic pattern is one testcontainers scenario for the nominal state (so we notice when the vendor changes their API), plus a handful of fixture-based testcases that capture the weird states, ideally recorded from real incidents or synthesised from the plugin code and the vendor's documentation. Both flavours live side by side in the same `unit-test/run` file; `tools/run-unit-tests --no-container` picks the fixture path for the fast matrix, and `tools/run-container-tests` picks the live scenarios for the integration runner.


 ### Return Codes
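The combined pattern from the new bullet can be sketched in a self-contained way. This is an illustrative mock-up, not the real `lib.lftest` API: `parse_version`, the fixture strings, and the `RUN_CONTAINER_TESTS` gate are all invented for this example; actual plugins drive their fixtures through `lib.lftest.run()` and their live scenarios through `lib.lftest.run_container()`.

```python
# Hypothetical sketch of a combined unit-test/run: fixture-based testcases
# for the weird states next to one (gated) container scenario for the
# happy path. All names below are invented for illustration.
import os

STATE_OK, STATE_WARN, STATE_UNKNOWN = 0, 1, 3

def parse_version(stdout):
    """Toy check logic: WARN on a 503, UNKNOWN on unparsable output."""
    if stdout.startswith('HTTP/1.1 503'):
        return STATE_WARN, 'Service Unavailable'
    try:
        version = stdout.split()[1]
    except IndexError:
        return STATE_UNKNOWN, 'Unparsable output'
    return STATE_OK, 'Version {}'.format(version)

# Fixture-based testcases: fast and reproducible; these capture the weird
# states (recorded from incidents or synthesised from the vendor docs).
TESTS = [
    ('version 9.2.1\n', (STATE_OK, 'Version 9.2.1')),
    ('HTTP/1.1 503 Service Unavailable\n', (STATE_WARN, 'Service Unavailable')),
    ('', (STATE_UNKNOWN, 'Unparsable output')),
]

def run_fixture_tests():
    """Run every fixture; return the number of testcases executed."""
    for stdout, expected in TESTS:
        assert parse_version(stdout) == expected
    return len(TESTS)

def run_container_tests():
    """One nominal-state scenario against a real service. Gated so the
    fast matrix (the --no-container path) skips it entirely."""
    if os.environ.get('RUN_CONTAINER_TESTS') != '1':
        return 0
    # Here the real runner would start the container via testcontainers,
    # fetch live output, and feed it to parse_version().
    return 1

if __name__ == '__main__':
    print('fixtures passed: {}'.format(run_fixture_tests()))
    print('container scenarios run: {}'.format(run_container_tests()))
```

The point of keeping both flavours in one file is that the fixture loop and the container scenario exercise the same parsing function, so a vendor-side format change caught by the container test can be frozen into a new fixture on the spot.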
