File: docs/issues/1525-01-persistence-test-coverage.md

# Subissue Draft for #1525-01: Add DB Compatibility Matrix

## Goal

Establish a compatibility matrix that exercises persistence-layer tests across supported database
versions before any refactoring begins.

## Why First

The later refactors change persistence architecture, async behavior, schema setup, and backend
implementations. Running the tests against multiple database versions first establishes a baseline
that detects regressions early and lets review focus on observed behavior rather than guesswork.

## Scope

- Bash is acceptable for low-complexity orchestration.
- Focus only on the database compatibility matrix; end-to-end real-client testing is covered by
subissue #1525-02.

## Testing Principles

The implementation must follow these quality rules for all new and modified tests.

- **Isolation**: Each test run must be independent. Tests that spin up database containers via
`testcontainers` already get their own ephemeral container; the bash matrix script achieves
isolation by running one matrix cell at a time in a fresh process, each with an exclusively
allocated container.
- **Independent system resources**: Tests must not hard-code host ports. `testcontainers` binds
containers to random free host ports automatically — do not override this with fixed bindings.
Temporary files or directories, if needed, must be created under a `tempfile`-managed path so
they are always removed on exit.
- **Cleanup**: After each test (success or failure) all containers, volumes, and temporary files
must be released. `testcontainers` handles containers automatically when the handle is dropped;
ensure `Drop` is not suppressed.
- **Behavior, not implementation**: Tests must assert observable outcomes (e.g. the driver
correctly inserts and retrieves a torrent entry) rather than internal state (e.g. a specific SQL
query was issued).
- **Verified before done**: No test is considered complete until it has been executed and passes
in a clean environment. Include confirmation of a passing run in the PR description.
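
The temporary-path rule can be sketched with a std-only guard. The real tests would more likely use the `tempfile` crate's `TempDir`; `ScratchDir` here is a hypothetical stand-in showing the drop-based cleanup the principle asks for:

```rust
use std::path::PathBuf;
use std::{env, fs, process};

/// Hypothetical guard: a per-run scratch directory removed on drop,
/// mirroring what `tempfile::TempDir` gives tests for free.
struct ScratchDir(PathBuf);

impl ScratchDir {
    fn new(prefix: &str) -> std::io::Result<Self> {
        let path = env::temp_dir().join(format!("{prefix}-{}", process::id()));
        fs::create_dir_all(&path)?;
        Ok(Self(path))
    }

    fn path(&self) -> &std::path::Path {
        &self.0
    }
}

impl Drop for ScratchDir {
    fn drop(&mut self) {
        // Best-effort cleanup on success, failure, or panic-unwind.
        let _ = fs::remove_dir_all(&self.0);
    }
}

fn main() -> std::io::Result<()> {
    let dir = ScratchDir::new("db-matrix-test")?;
    fs::write(dir.path().join("config.toml"), "# generated test config\n")?;
    let existed = dir.path().join("config.toml").exists();
    drop(dir); // directory and contents are removed here
    println!("config existed before drop: {existed}");
    Ok(())
}
```

Because cleanup lives in `Drop`, it also runs when an assertion panics mid-test, which is the point of the principle.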

## Reference QA Workflow

The PR #1695 review branch includes a QA script that defines the expected behavior:

- `run-db-compatibility-matrix.sh`:
executes a compatibility matrix across SQLite, multiple MySQL versions, and multiple PostgreSQL
versions.

This should be treated as a reference prototype, not a production artifact. The goal is to
re-implement it in a form that integrates with the repository's normal test strategy.

## Dependency Note

PostgreSQL is not implemented yet, so this subissue cannot require successful execution against
PostgreSQL. The structure should make it easy to add PostgreSQL combinations in subissue
`#1525-08` once the driver exists.

## Proposed Branch

- `1525-01-db-compatibility-matrix`

## Tasks

### 1) Port the compatibility matrix workflow

Add a low-complexity bash compatibility-matrix runner that exercises persistence-related tests
across supported database versions.

Tests to orchestrate:

- `cargo check --workspace --all-targets`
- configuration coverage for PostgreSQL connection settings
- large-download counter saturation tests in the HTTP protocol layer
- large-download counter saturation tests in the UDP protocol layer
- SQLite driver tests
- MySQL driver tests across selected MySQL versions

Note: PostgreSQL version-matrix execution is deferred to subissue #1525-08, once the
PostgreSQL driver exists.

Steps:

- Modify current DB driver tests so the DB image version can be injected through environment
variables:
- MySQL: `TORRUST_TRACKER_CORE_MYSQL_DRIVER_IMAGE_TAG`
- PostgreSQL (reserved for subissue #1525-08): `TORRUST_TRACKER_CORE_POSTGRES_DRIVER_IMAGE_TAG`

When `TORRUST_TRACKER_CORE_MYSQL_DRIVER_IMAGE_TAG` is not set, the test falls back to the
current hardcoded default (e.g. `8.0`), preserving existing behavior. The matrix script sets
this variable explicitly for each version in the loop, so unset means "run as today" and the
matrix just expands that into multiple combinations.

- Add `contrib/dev-tools/qa/run-db-compatibility-matrix.sh` modeled after the PR prototype:
- `set -euo pipefail`
- define default version sets from env vars:
- `MYSQL_VERSIONS` defaulting to at least `8.0 8.4`
- `POSTGRES_VERSIONS` reserved for subissue #1525-08
- run pre-checks once (`cargo check --workspace --all-targets`)
- run protocol/configuration tests once
- run SQLite driver tests once
- loop MySQL versions: `docker pull mysql:<version>`, then run MySQL driver tests with
`TORRUST_TRACKER_CORE_RUN_MYSQL_DRIVER_TEST=1` and
`TORRUST_TRACKER_CORE_MYSQL_DRIVER_IMAGE_TAG=<version>`
- print a clear heading for each backend/version before executing tests
- fail fast on first failure with the failing backend/version visible in logs
- keep script complexity intentionally low; avoid re-implementing test logic already in test
functions
- Replace the current single MySQL `database` step in `.github/workflows/testing.yaml` with
execution of the new script.
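
The fallback behavior described above can be sketched as a small test helper (the function names are hypothetical; only the environment variable name and the `8.0` default come from this document):

```rust
use std::env;

/// Hypothetical helper: pick the MySQL image tag, falling back to the current
/// hardcoded default so an unconfigured run behaves exactly as today.
fn mysql_image_tag_from(tag: Option<String>) -> String {
    tag.unwrap_or_else(|| "8.0".to_string())
}

fn mysql_image_tag() -> String {
    mysql_image_tag_from(env::var("TORRUST_TRACKER_CORE_MYSQL_DRIVER_IMAGE_TAG").ok())
}

fn main() {
    // The matrix script exports the variable for each cell in its loop;
    // locally it is normally unset, so this prints the default tag.
    println!("mysql:{}", mysql_image_tag());
}
```

Keeping the pure `mysql_image_tag_from` function separate makes the fallback itself unit-testable without touching process environment.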

Acceptance criteria:

- [ ] DB image version injection is supported via `TORRUST_TRACKER_CORE_MYSQL_DRIVER_IMAGE_TAG`
(and a reserved `POSTGRES` equivalent for subissue #1525-08).
- [ ] `contrib/dev-tools/qa/run-db-compatibility-matrix.sh` exists and runs successfully.
- [ ] The script exercises SQLite and at least two MySQL versions by default.
- [ ] Failures identify the backend/version combination that broke.
- [ ] The `database` job step in `.github/workflows/testing.yaml` runs the matrix script instead
of a single-version MySQL command.
- [ ] The script structure allows PostgreSQL to be added in subissue #1525-08 without a redesign.
- [ ] Tests do not hard-code host ports; `testcontainers` assigns random ports automatically.
- [ ] All containers started by tests are removed unconditionally on test completion or failure.

### 2) Document the workflow

Steps:

- Document the local invocation command for the matrix script.
- Document that the CI `database` step runs the same script.

Acceptance criteria:

- [ ] The matrix script is documented and runnable without ad hoc manual steps.

## Out of Scope

- qBittorrent end-to-end testing (covered by subissue #1525-02).
- Adding PostgreSQL support itself.
- Refactoring the production persistence interfaces.
- Performance benchmarking, before/after comparison, and benchmark reporting.

## Definition of Done

- [ ] `cargo test --workspace --all-targets` passes.
- [ ] `linter all` exits with code `0`.
- [ ] The matrix script has been executed successfully in a clean environment; a passing run log
is included in the PR description.

## References

- EPIC: #1525
- Reference PR: #1695
- Reference implementation branch: `josecelano:pr-1684-review` — see EPIC for checkout
instructions (`docs/issues/1525-overhaul-persistence.md`)
- Reference script: `contrib/dev-tools/qa/run-db-compatibility-matrix.sh`
---

File: docs/issues/1525-02-qbittorrent-e2e.md

# Subissue Draft for #1525-02: Add qBittorrent End-to-End Test

## Goal

Add a high-level end-to-end test that validates tracker behavior through a complete torrent-sharing
scenario using real containerized BitTorrent clients, covering scenarios that lower-level unit and
integration tests cannot reach.

## Why Before the Refactor

The persistence refactor changes storage behavior underneath the tracker. Having a real-client
scenario that exercises a full download cycle (seeder uploads → leecher downloads → tracker
records completion) gives a regression backstop that is not possible with protocol-level tests
alone.

## Scope

- Follow the same pattern as the existing `e2e_tests_runner` binary
(`src/console/ci/e2e/runner.rs`): a Rust binary that drives the whole scenario using
`std::process::Command` to invoke `docker compose` and any container-side commands.
- Use SQLite as the database backend; database compatibility across multiple versions is already
covered by subissue #1525-01.
- Cover one complete scenario: a seeder sharing a torrent that a leecher downloads in full.
- The binary is responsible for scaffolding (generating a temporary config and torrent file),
starting the services, sending commands into the qBittorrent containers (via their WebUI API
or `docker exec`), polling for completion, asserting the result, and tearing down.
- Do not re-test things already covered at a lower level: announce parsing, scrape format,
whitelist/key logic, or multi-database compatibility.

## Testing Principles

The implementation must follow these quality rules.

- **Isolation**: Each run of the E2E binary must be isolated from any other concurrently running
instance. Achieve this by using a unique Docker Compose project name per run (e.g.
`--project-name qbt-e2e-<random-suffix>`) so container names, networks, and volumes never
collide with a parallel run.
- **Independent system resources**: Do not bind services to fixed host ports. Let Docker assign
ephemeral host ports and discover them from the compose output, so two simultaneous runs cannot
conflict. Place all temporary files (tracker config, payload, `.torrent` file) in a
`tempfile`-managed directory created at runner start and deleted on exit.
- **Cleanup**: `docker compose down --volumes` must be called unconditionally — on success, on
assertion failure, and on panic. Use a Rust `Drop` guard or equivalent to guarantee teardown
even when the runner exits unexpectedly.
- **Mock time when possible**: Use a configurable timeout (CLI argument or env var) for the
leecher-completion poll rather than a hard-coded sleep. If any logic depends on wall-clock time
(e.g. stale peer detection), inject a mockable clock consistent with the `clock` package used
elsewhere in the codebase.
- **Behavior, not implementation**: Assert the outcome the user cares about — the leecher holds a
complete, byte-identical copy of the payload — not which internal tracker counters changed or
which announce endpoints were called.
- **Verified before done**: The binary must be executed end-to-end and produce a passing result in
a clean environment before the subissue is closed. Include a run log in the PR description.

## Reference QA Workflow

`contrib/dev-tools/qa/run-qbittorrent-e2e.py` in the PR #1695 review branch demonstrates the
scenario (seeder + leecher + tracker via Python subprocess). Treat it as a behavioral reference
only; the implementation here will use `docker compose` instead of manual container management.

## Proposed Branch

- `1525-02-qbittorrent-e2e`

## Tasks

### 1) Add a docker compose file for the E2E scenario

Add a compose file (e.g., `compose.qbittorrent-e2e.yaml`) that defines:

- the tracker service configured with SQLite
- a qbittorrent-seeder container
- a qbittorrent-leecher container

Steps:

- Define a tracker service mounting a SQLite config file (generated by the runner).
- Define seeder and leecher services using a suitable qBittorrent image.
- Configure a shared network so all containers can reach each other and the tracker.
- Define any volumes needed to mount the payload and torrent file into each client container.
- Ensure `docker compose up --wait` exits cleanly when services are healthy.
- Ensure `docker compose down --volumes` removes all containers and volumes.

Acceptance criteria:

- [ ] `docker compose -f compose.qbittorrent-e2e.yaml up --wait` starts all services without error.
- [ ] `docker compose -f compose.qbittorrent-e2e.yaml down --volumes` leaves no orphaned resources.

### 2) Implement the Rust runner binary

Add a new binary (e.g., `src/bin/qbittorrent_e2e_runner.rs`) that follows the same structure as
`src/console/ci/e2e/runner.rs`:

- Parses CLI arguments or environment variables (compose file path, payload size, timeout).
- Generates scaffolding: a temporary tracker config (SQLite) and a small deterministic payload
with its `.torrent` file.
- Calls `docker compose up` via `std::process::Command`.
- Seeds the payload: injects the torrent and payload into the seeder container via the qBittorrent
WebUI REST API (or `docker exec` as a fallback) and starts seeding.
- Leeches the payload: injects the `.torrent` file into the leecher container and starts
  downloading.
- Polls for completion: queries the leecher's WebUI API until the torrent reaches a completed
  state (e.g. `uploading`, 100% downloaded) or a timeout expires.
- Asserts payload integrity: compares the downloaded file against the original (hash or byte
comparison).
- Calls `docker compose down --volumes` unconditionally (even on assertion failure), mirroring
the cleanup pattern in `tracker_container.rs`.
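
The completion-polling step can be sketched with a generic helper (`wait_until` is a hypothetical name; the closure stands in for the WebUI query):

```rust
use std::time::{Duration, Instant};

/// Hypothetical polling helper: run `check` until it reports completion or
/// the (configurable) timeout expires, instead of a hard-coded sleep.
fn wait_until(timeout: Duration, interval: Duration, mut check: impl FnMut() -> bool) -> bool {
    let deadline = Instant::now() + timeout;
    loop {
        if check() {
            return true;
        }
        if Instant::now() >= deadline {
            return false;
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    // Stand-in for "query the leecher's WebUI until the torrent is complete".
    let mut polls = 0;
    let done = wait_until(Duration::from_secs(1), Duration::from_millis(10), || {
        polls += 1;
        polls >= 3
    });
    println!("completed: {done} after {polls} polls");
}
```

Taking the timeout as a parameter keeps the wall-clock dependency at the edge, in line with the mock-time principle above.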

Steps:

- Add a shared `docker compose` wrapper at `src/console/ci/compose.rs` (see below). This
module is not specific to qBittorrent and is reused by the benchmark runner in subissue
`#1525-03`.
- Add a `qbittorrent` module under `src/console/ci/` (parallel to `e2e/`) containing:
- `runner.rs` — main orchestration logic
- `qbittorrent_client.rs` — HTTP calls to the qBittorrent WebUI API
- **`src/console/ci/compose.rs` wrapper** — mirrors `docker.rs` but targets `docker compose`
subcommands. Design it around a `DockerCompose` struct that holds the compose file path and
project name:
- `DockerCompose::new(file: &Path, project: &str) -> Self`
- `up(&self) -> io::Result<()>` — runs `docker compose -f <file> -p <project> up --wait --detach`
- `down(&self) -> io::Result<()>` — runs `docker compose -f <file> -p <project> down --volumes`
- `port(&self, service: &str, container_port: u16) -> io::Result<u16>` — runs
`docker compose -f <file> -p <project> port <service> <port>` and parses the host port so
the runner never hard-codes ports
- `exec(&self, service: &str, cmd: &[&str]) -> io::Result<Output>` — wraps
`docker compose -f <file> -p <project> exec <service> <cmd…>` for injecting commands into
running containers
- Implement `Drop` on a `RunningCompose` guard returned by `up` that calls `down`
unconditionally, matching the `RunningContainer::drop` pattern in `docker.rs`
- Use `tracing` for progress output consistent with the rest of the runner
- Generate a fixed small payload (e.g., 1 MiB of deterministic bytes) at runtime; store the
`.torrent` file in a `tempfile` directory so it is cleaned up automatically.
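
A minimal sketch of the proposed `compose.rs` wrapper, following the design above. Nothing here exists in the repository yet; error handling, `port`, and `exec` are elided or simplified, and the `main` only demonstrates argument construction and port parsing so it runs without Docker:

```rust
use std::io;
use std::path::{Path, PathBuf};
use std::process::Command;

pub struct DockerCompose {
    file: PathBuf,
    project: String,
}

impl DockerCompose {
    pub fn new(file: &Path, project: &str) -> Self {
        Self { file: file.to_path_buf(), project: project.to_string() }
    }

    /// `docker compose -f <file> -p <project> <tail…>`, shared by all subcommands.
    fn args(&self, tail: &[&str]) -> Vec<String> {
        let mut v = vec![
            "compose".to_string(),
            "-f".to_string(), self.file.display().to_string(),
            "-p".to_string(), self.project.clone(),
        ];
        v.extend(tail.iter().map(|s| s.to_string()));
        v
    }

    pub fn up(&self) -> io::Result<RunningCompose<'_>> {
        run("docker", &self.args(&["up", "--wait", "--detach"]))?;
        Ok(RunningCompose { compose: self })
    }

    pub fn down(&self) -> io::Result<()> {
        run("docker", &self.args(&["down", "--volumes"]))
    }
}

/// Guard returned by `up`: tears the stack down unconditionally, even when the
/// runner panics, mirroring the `RunningContainer::drop` pattern in `docker.rs`.
pub struct RunningCompose<'a> {
    compose: &'a DockerCompose,
}

impl Drop for RunningCompose<'_> {
    fn drop(&mut self) {
        let _ = self.compose.down();
    }
}

/// Parse the host port from `docker compose port` output, e.g. "0.0.0.0:49154".
fn parse_host_port(line: &str) -> Option<u16> {
    line.trim().rsplit(':').next()?.parse().ok()
}

fn run(program: &str, args: &[String]) -> io::Result<()> {
    let status = Command::new(program).args(args).status()?;
    if status.success() {
        Ok(())
    } else {
        Err(io::Error::new(io::ErrorKind::Other, format!("{program} failed: {status}")))
    }
}

fn main() {
    let dc = DockerCompose::new(Path::new("compose.qbittorrent-e2e.yaml"), "qbt-e2e-demo");
    println!("{:?}", dc.args(&["up", "--wait", "--detach"]));
    println!("host port: {:?}", parse_host_port("0.0.0.0:49154"));
}
```

Centralizing the `-f`/`-p` flags in one place is what lets every subcommand, including the `Drop`-driven `down`, stay scoped to the per-run project.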

Acceptance criteria:

- [ ] The runner completes a full seeder → leecher download using the containerized tracker.
- [ ] Payload integrity is verified after download (hash or byte comparison).
- [ ] The runner can be executed repeatedly without manual setup or teardown.
- [ ] No orphaned containers or volumes remain on success or failure.
- [ ] The binary is documented in the top-level module doc comment with an example invocation.
- [ ] Each invocation uses a unique compose project name so parallel runs do not conflict.
- [ ] All temporary files are placed in a managed temp directory and deleted on exit.
- [ ] No fixed host ports are used; ports are discovered dynamically from the compose output.
- [ ] `docker compose down --volumes` is called unconditionally via a `Drop` guard.

### 3) Document the E2E workflow

Steps:

- Document the local invocation command (e.g., `cargo run --bin qbittorrent_e2e_runner`).
- Document any prerequisites (Docker, image availability, open ports).
- Clarify that this test is not run in the standard `cargo test` suite due to resource
requirements and describe how it is triggered in CI (opt-in env var or separate job).

Acceptance criteria:

- [ ] The test is documented and runnable without ad hoc manual steps.

## Out of Scope

- Testing multiple database backends (covered by subissue #1525-01).
- Testing announce or scrape protocol correctness at the protocol level.
- UDP tracker E2E (can be added later without redesigning the compose setup).

## Definition of Done

- [ ] `cargo test --workspace --all-targets` passes (or the E2E test is explicitly excluded with a
documented opt-in flag).
- [ ] `linter all` exits with code `0`.
- [ ] The E2E runner has been executed successfully in a clean environment; a passing run log is
included in the PR description.

## References

- EPIC: #1525
- Reference PR: #1695
- Reference implementation branch: `josecelano:pr-1684-review` — see EPIC for checkout
instructions (`docs/issues/1525-overhaul-persistence.md`)
- Reference script: `contrib/dev-tools/qa/run-qbittorrent-e2e.py`
- Existing runner pattern: `src/console/ci/e2e/runner.rs`
- Docker command wrapper: `src/console/ci/e2e/docker.rs`
- Existing container wrapper patterns: `src/console/ci/e2e/tracker_container.rs`