Testimony is a single Go service that accepts Allure result archives, stores raw and generated artifacts in S3-compatible object storage, records metadata in SQLite, generates reports asynchronously through the Allure CLI, and serves both a lightweight browse UI and the generated report assets.
This document describes the runtime that exists in the repository today. It is intentionally grounded in the current code and packaging surfaces rather than future ideas.
At startup, cmd/testimony/main.go composes the runtime in this order:
- load environment-driven configuration from `internal/config`
- build the JSON logger from `internal/server`
- open the SQLite metadata store from `internal/db`
- optionally seed and enable API-key auth from `internal/auth`
- open the S3-compatible object store from `internal/storage`
- build the report generator and asynchronous generation service from `internal/generate`
- build the upload handler from `internal/upload`
- build the browse UI and report proxy from `internal/serve`
- start the retention worker from `internal/retention`
- mount the router and start the HTTP server from `internal/server`
The result is one process with two persistent backends:
- SQLite for projects, reports, statuses, timestamps, and API-key hashes
- S3-compatible storage for uploaded archives and generated report files
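A minimal sketch of opening those two backends, assuming mattn/go-sqlite3 and minio-go v7 (the repository's actual driver choices are not stated here), with placeholder paths, endpoint, and credentials:

```go
// Backend-opening sketch; drivers, paths, and credentials are assumptions.
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// SQLite: projects, reports, statuses, timestamps, API-key hashes.
	meta, err := sql.Open("sqlite3", "/var/lib/testimony/testimony.db") // path assumed
	if err != nil {
		log.Fatal(err)
	}
	defer meta.Close()

	// S3-compatible storage: uploaded archives and generated report files.
	// Endpoint matches the Compose MinIO host mapping; credentials assumed.
	objects, err := minio.New("127.0.0.1:19000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = objects
}
```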
internal/server/server.go mounts:
- `GET /healthz` — liveness
- `GET /readyz` — readiness
Readiness is driven by a shared health state plus live dependency checks against SQLite and S3. When readiness changes, the runtime logs a structured readiness changed event. When a dependency probe fails, it logs readiness probe failed with the failing dependency name.
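A sketch of that readiness logic, assuming an `atomic.Bool` health flag and a hypothetical `BucketProber` interface for the S3 probe; the repository's real types may differ:

```go
package server

import (
	"context"
	"database/sql"
	"net/http"
	"sync/atomic"
	"time"
)

// BucketProber is a hypothetical stand-in for the S3 dependency check,
// e.g. a HEAD against the configured bucket.
type BucketProber interface {
	Probe(ctx context.Context) error
}

// Readyz combines the shared health flag with live dependency probes.
func Readyz(ready *atomic.Bool, meta *sql.DB, objects BucketProber) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second) // budget assumed
		defer cancel()
		if !ready.Load() || meta.PingContext(ctx) != nil || objects.Probe(ctx) != nil {
			http.Error(w, "not ready", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}
```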
`POST /api/v1/projects/{slug}/upload` is mounted under `/api/v1` in internal/server/server.go and handled by internal/upload/handler.go.
The upload path:
- validates the project slug from the route
- accepts either:
  - `multipart/form-data` with a file part, or
  - a raw `application/zip`/`application/gzip` request body
- stages the upload into the runtime temp directory
- validates and extracts the archive with `PrepareArchive`
- uploads the original archive to S3 under a report-scoped key
- creates a `pending` report row in SQLite
- moves the extracted results into a report-scoped generation workspace
- enqueues asynchronous report generation
- returns `202 Accepted` with JSON containing the `report_id`, `project_slug`, status, format, and archive object key
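A client-side sketch of that call, using the endpoint and 202 response fields described above; the base URL (from the Compose host mapping), the `file` multipart field name, and the `TESTIMONY_API_KEY` variable are assumptions:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	// Build the multipart body; the "file" field name is an assumption.
	var body bytes.Buffer
	mw := multipart.NewWriter(&body)
	part, err := mw.CreateFormFile("file", "allure-results.zip")
	if err != nil {
		log.Fatal(err)
	}
	f, err := os.Open("allure-results.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := io.Copy(part, f); err != nil {
		log.Fatal(err)
	}
	if err := mw.Close(); err != nil {
		log.Fatal(err)
	}

	req, err := http.NewRequest(http.MethodPost,
		"http://127.0.0.1:18080/api/v1/projects/demo/upload", &body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", mw.FormDataContentType())
	// Only needed when TESTIMONY_AUTH_ENABLED=true; this env name is assumed.
	if key := os.Getenv("TESTIMONY_API_KEY"); key != "" {
		req.Header.Set("Authorization", "Bearer "+key)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusAccepted {
		log.Fatalf("unexpected status: %s", resp.Status)
	}
	var out struct {
		ReportID    string `json:"report_id"`
		ProjectSlug string `json:"project_slug"`
		Status      string `json:"status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("report %s for %s: %s\n", out.ReportID, out.ProjectSlug, out.Status)
}
```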
Current raw archive object keys follow this pattern:
- `projects/{slug}/reports/{reportID}/archive.zip`
- `projects/{slug}/reports/{reportID}/archive.tar.gz`
If metadata persistence fails after the archive upload, the handler attempts an object-storage rollback so the runtime does not leave an orphaned archive behind.
internal/auth/middleware.go provides bearer-token middleware.
Current behavior:
- uploads are wrapped with auth middleware when `TESTIMONY_AUTH_ENABLED=true`
- browse UI and report viewing are also wrapped when `TESTIMONY_AUTH_REQUIRE_VIEWER=true`
- auth failures return `401 Unauthorized` and log structured `auth rejected` events without printing secret values
Bootstrap API keys are seeded through the configured runtime and stored as hashes in SQLite.
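A sketch of what such middleware can look like, assuming SHA-256 as the stored hash form (the repository's actual hashing scheme is not specified here):

```go
package auth

import (
	"crypto/sha256"
	"crypto/subtle"
	"log/slog"
	"net/http"
	"strings"
)

// Middleware rejects requests whose bearer token does not hash to one of the
// stored API-key hashes, logging a structured event without the secret value.
func Middleware(hashes [][]byte, logger *slog.Logger, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token, ok := strings.CutPrefix(r.Header.Get("Authorization"), "Bearer ")
		if ok {
			sum := sha256.Sum256([]byte(token)) // hash algorithm assumed
			for _, h := range hashes {
				if subtle.ConstantTimeCompare(sum[:], h) == 1 {
					next.ServeHTTP(w, r)
					return
				}
			}
		}
		logger.Warn("auth rejected", "path", r.URL.Path) // never log the token
		http.Error(w, "unauthorized", http.StatusUnauthorized)
	})
}
```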
After upload acceptance, internal/generate/service.go runs report generation in the background.
The generation service:
- marks the report as `processing`
- loads ready reports for the same project from SQLite
- asks the generator to fetch usable history from prior report output in S3
- executes the selected Allure CLI variant
- uploads the generated HTML tree back to S3
- marks the report `ready` or `failed`
The service uses a semaphore sized by `TESTIMONY_GENERATE_MAX_CONCURRENCY`, so uploads can return quickly while generation stays bounded.
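A sketch of that pattern, where a buffered channel serves as the semaphore; names other than the environment variable are illustrative:

```go
package generate

import "context"

type Service struct {
	sem chan struct{}
}

// NewService sizes the semaphore from the parsed
// TESTIMONY_GENERATE_MAX_CONCURRENCY value.
func NewService(maxConcurrency int) *Service {
	return &Service{sem: make(chan struct{}, maxConcurrency)}
}

// Enqueue returns immediately so the upload response is not blocked; the
// goroutine acquires a slot before doing any generation work. The caller is
// expected to supply a long-lived (non-request) context.
func (s *Service) Enqueue(ctx context.Context, reportID string, generate func(context.Context, string)) {
	go func() {
		s.sem <- struct{}{}        // acquire a generation slot
		defer func() { <-s.sem }() // release it when done
		generate(ctx, reportID)
	}()
}
```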
Current status lifecycle in SQLite:
`pending` → `processing` → `ready` or `failed`
When generation succeeds, the primary generated object key is recorded as:
`projects/{slug}/reports/{reportID}/html/index.html`
The rest of the generated report is uploaded under the same `html/` prefix, preserving Allure asset paths.
When generation fails, the service logs `report_generation_failed` and stores a truncated error message on the report row so later inspection is possible without scraping logs alone.
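A sketch of the truncation step, assuming a fixed byte budget (the repository's real limit is not stated here):

```go
package generate

// truncateError bounds the message stored on the report row. The byte budget
// is an assumption, and this simple byte slice may cut a multi-byte rune.
func truncateError(err error, maxBytes int) string {
	msg := err.Error()
	if len(msg) <= maxBytes {
		return msg
	}
	return msg[:maxBytes] + "..."
}
```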
internal/serve/ui.go mounts the lightweight server-rendered browse UI:
- `GET /` — list projects
- `GET /projects/{slug}` — list reports for one project
The UI reads from SQLite and only links to report pages when a report is ready.
internal/serve/proxy.go mounts the report-serving surface:
- `GET /reports/{slug}/{reportID}` — redirect to the trailing-slash root
- `GET /reports/{slug}/{reportID}/`
- `GET /reports/{slug}/{reportID}/*`
The proxy resolves the report in SQLite, verifies that it belongs to the requested project and is ready, then streams the requested asset directly from S3 back to the client. This keeps generated reports in object storage while the Go service remains the stable HTTP entrypoint.
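A sketch of that proxy flow, with a hypothetical object-store interface; only the `html/` key layout comes from this document:

```go
package serve

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// ObjectStore is a hypothetical stand-in for the S3 client surface.
type ObjectStore interface {
	Get(ctx context.Context, key string) (body io.ReadCloser, contentType string, err error)
}

// serveAsset streams one generated report file from object storage after the
// caller has verified the report belongs to the project and is ready.
func serveAsset(store ObjectStore, w http.ResponseWriter, r *http.Request, slug, reportID, rel string) {
	if rel == "" {
		rel = "index.html" // the trailing-slash root
	}
	key := fmt.Sprintf("projects/%s/reports/%s/html/%s", slug, reportID, rel)
	body, contentType, err := store.Get(r.Context(), key)
	if err != nil {
		http.NotFound(w, r)
		return
	}
	defer body.Close()
	if contentType != "" {
		w.Header().Set("Content-Type", contentType)
	}
	io.Copy(w, body) // stream directly from object storage to the client
}
```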
internal/retention/worker.go starts as part of normal runtime startup.
On each pass it:
- asks SQLite for expired reports using the global retention default plus any per-project overrides
- lists generated objects under the report's `html/` prefix
- deletes S3 objects first
- deletes the SQLite row last
This ordering prevents metadata from disappearing before storage cleanup succeeds. Cleanup outcomes are logged with structured success/failure events so operators can see whether a report was deleted or retained for retry.
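A sketch of that ordering, with illustrative interfaces rather than the repository's real API:

```go
package retention

import (
	"context"
	"fmt"
)

type ObjectStore interface {
	List(ctx context.Context, prefix string) ([]string, error)
	Delete(ctx context.Context, key string) error
}

type MetaStore interface {
	DeleteReport(ctx context.Context, reportID string) error
}

// cleanupReport deletes generated objects first and the metadata row last, so
// a failed S3 delete leaves the row behind for the next retention pass.
func cleanupReport(ctx context.Context, objects ObjectStore, meta MetaStore, slug, reportID string) error {
	prefix := fmt.Sprintf("projects/%s/reports/%s/html/", slug, reportID)
	keys, err := objects.List(ctx, prefix)
	if err != nil {
		return err
	}
	for _, key := range keys {
		if err := objects.Delete(ctx, key); err != nil {
			return err // keep the SQLite row so a later pass can retry
		}
	}
	return meta.DeleteReport(ctx, reportID) // metadata goes last
}
```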
| Package | Responsibility |
|---|---|
| `cmd/testimony` | Compose the runtime, start HTTP serving, manage readiness transitions, and perform graceful shutdown. |
| `internal/config` | Load and validate the `TESTIMONY_*` environment contract. |
| `internal/server` | Build the JSON logger, request logging middleware, health/readiness surfaces, and HTTP server. |
| `internal/upload` | Accept uploads, validate/extract archives, persist raw archives, create report rows, and dispatch generation. |
| `internal/storage` | Talk to S3-compatible backends and ensure the configured bucket exists. |
| `internal/generate` | Merge history, invoke the Allure CLI, upload generated output, and persist report status transitions. |
| `internal/serve` | Render the project/report browse pages and proxy generated report assets from S3. |
| `internal/auth` | Validate bearer tokens against stored API-key hashes. |
| `internal/retention` | Delete expired report artifacts and metadata on a background schedule. |
| `internal/db` | Persist projects, reports, statuses, timestamps, and API-key hashes in SQLite. |
The runtime is configured through environment variables loaded in internal/config/config.go.
Important characteristics of the current contract:
- the HTTP server binds with `TESTIMONY_SERVER_*`
- S3 access is controlled with `TESTIMONY_S3_*`
- SQLite uses `TESTIMONY_SQLITE_*`
- auth uses `TESTIMONY_AUTH_*`
- report generation uses `TESTIMONY_GENERATE_*`
- cleanup and shutdown use `TESTIMONY_RETENTION_*` and `TESTIMONY_SHUTDOWN_*`
Both Docker Compose and Helm package the same binary by translating their settings into this existing env contract rather than inventing a second configuration layer.
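As an illustration, a sketch of consuming that contract; only the prefixes and `TESTIMONY_GENERATE_MAX_CONCURRENCY` appear in this document, so the other concrete variable names below are assumptions:

```go
package config

import (
	"fmt"
	"os"
	"strconv"
)

// Config mirrors the documented TESTIMONY_* prefixes; field names and the
// exact variable names (other than the concurrency knob) are assumptions.
type Config struct {
	ListenAddr     string
	S3Endpoint     string
	SQLitePath     string
	MaxConcurrency int
}

func Load() (Config, error) {
	cfg := Config{
		ListenAddr: os.Getenv("TESTIMONY_SERVER_ADDR"), // assumed name
		S3Endpoint: os.Getenv("TESTIMONY_S3_ENDPOINT"), // assumed name
		SQLitePath: os.Getenv("TESTIMONY_SQLITE_PATH"), // assumed name
	}
	n, err := strconv.Atoi(os.Getenv("TESTIMONY_GENERATE_MAX_CONCURRENCY"))
	if err != nil || n < 1 {
		return Config{}, fmt.Errorf("invalid TESTIMONY_GENERATE_MAX_CONCURRENCY")
	}
	cfg.MaxConcurrency = n
	return cfg, nil
}
```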
docker-compose.yml packages the real runtime for local development:
- builds the image from `Dockerfile`
- starts MinIO alongside Testimony
- maps Testimony `8080` to host `18080`
- maps the MinIO API/console to `19000` and `19001`
- mounts persistent data under `/var/lib/testimony`
- wires Testimony to MinIO through `TESTIMONY_S3_*` variables
- probes `http://127.0.0.1:8080/readyz` inside the Testimony container
This is the easiest way to exercise the same upload, generation, browse, and proxy flow that contributors and new users see first.
chart/ packages the same runtime for Kubernetes:
- `chart/values.yaml` exposes runtime settings under a structured values surface
- the Deployment template maps those values back onto `TESTIMONY_*` environment variables
- credentials come from a generated Secret by default or an existing Secret when configured
- the container still serves the same health and readiness paths: `/healthz` and `/readyz`
- persistence mounts the same runtime path used by the container image
In other words, Helm is a packaging layer over the existing runtime contract, not a separate implementation.
cmd/testimony/main.go handles SIGINT and SIGTERM.
On shutdown the process:
- marks readiness false
- logs `shutdown requested` and `shutdown drain started`
- waits for the configured drain delay
- shuts down the HTTP server with a timeout
- stops the retention worker
- logs `shutdown complete` when the server exits cleanly
This behavior is important for Kubernetes rollouts because /readyz stops advertising readiness before in-flight work is drained.
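A sketch of that sequence, assuming an `atomic.Bool` readiness flag shared with the `/readyz` handler; the wiring details are illustrative:

```go
package main

import (
	"context"
	"log/slog"
	"net/http"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

// serveUntilSignal runs the server, then drains on SIGINT/SIGTERM in the
// order described above: flip readiness, wait, shut down with a timeout.
func serveUntilSignal(srv *http.Server, ready *atomic.Bool, drain, timeout time.Duration, logger *slog.Logger) {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	go func() { _ = srv.ListenAndServe() }()

	<-ctx.Done()
	logger.Info("shutdown requested")
	ready.Store(false) // /readyz stops advertising readiness first
	logger.Info("shutdown drain started")
	time.Sleep(drain) // give load balancers time to observe the flip

	shutdownCtx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		logger.Error("shutdown error", "err", err)
		return
	}
	logger.Info("shutdown complete")
}
```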
Useful built-in inspection surfaces:
- HTTP status surfaces — `GET /healthz` and `GET /readyz`
- Request logs — `http request completed` includes request ID, method, path, status, and duration
- Startup logs — `sqlite store ready`, `s3 store ready`, `http server listening`
- Upload/generation logs — `upload accepted`, `report_generation_started`, `report_generation_completed`, `report_generation_failed`
- Retention logs — `retention worker started`, `retention cleanup succeeded`, `retention cleanup failed`
- Manual verification scripts — `scripts/verify-compose-e2e.sh` and `scripts/verify-helm-chart.sh`
For local packaging regressions, prefer the scripts in scripts/ because they surface failure context automatically:
- the Compose smoke script prints `docker compose ps` and both service logs on failure
- the Helm verification script fails on specific rendered-manifest assertions after `helm lint`/`helm template`
The current repository does not implement a separate worker service, a message queue, or an admin API for dynamic API-key management. The runtime today is one Go process with background goroutines and packaging for local Docker Compose and Kubernetes deployment.