These manifests are the inspectable, low-level starting point for self-serve Kubernetes deployment. The published Durable Workflow Helm chart is the recommended path for most operators; the raw manifests stay supported for teams that intentionally do not want Helm in the rollout.
Both paths share the same external-persistence contract, the same singleton
scheduler invariant, and the same /api/ready-based readiness contract.
Pick one or the other per environment, not both.
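The singleton-scheduler invariant mentioned above is typically enforced at the CronJob level. The sketch below shows the relevant fields; the schedule, backoff, and resource names here are illustrative assumptions, and the authoritative values live in `k8s/scheduler-cronjob.yaml`:

```yaml
# Illustrative sketch: forbid overlapping scheduler runs so at most one
# maintenance pod exists at a time. Values are assumptions, not the
# shipped defaults from k8s/scheduler-cronjob.yaml.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: durable-workflow-scheduler
  namespace: durable-workflow
spec:
  schedule: "* * * * *"        # assumed cadence
  concurrencyPolicy: Forbid    # enforces the singleton invariant
  jobTemplate:
    spec:
      backoffLimit: 0          # assumed: avoid overlapping retry storms
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: scheduler
              image: durableworkflow/server:0.2
```

`concurrencyPolicy: Forbid` makes the control plane skip a scheduled run while the previous one is still active, which is the standard way to keep a CronJob effectively singleton.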
The default image is pinned to the public Docker Hub release tag `durableworkflow/server:0.2`.
Before production use, patch every workload image to the exact published tag or digest you intend to run:
```shell
kubectl set image -n durable-workflow deploy/durable-workflow-server \
  server=durableworkflow/server:0.2
kubectl set image -n durable-workflow deploy/durable-workflow-worker \
  worker=durableworkflow/server:0.2
kubectl set image -n durable-workflow cronjob/durable-workflow-scheduler \
  scheduler=durableworkflow/server:0.2
```

GitHub Container Registry publishes the same release line at `ghcr.io/durable-workflow/server:0.2`. Digest pinning is preferred for strict change control.
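Digest pinning can also be expressed declaratively instead of via `kubectl set image`. A minimal Kustomize sketch follows; the `sha256` value is a placeholder, not a published digest, so substitute the digest of the release you actually verified:

```yaml
# kustomization.yaml — sketch: pin every workload image to an immutable
# digest. The sha256 below is a placeholder; replace it with the digest
# of the release you intend to run.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: durable-workflow
resources:
  - k8s/migration-job.yaml
  - k8s/server-deployment.yaml
  - k8s/worker-deployment.yaml
  - k8s/scheduler-cronjob.yaml
  - k8s/secret.yaml
images:
  - name: durableworkflow/server
    digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With the `images` transformer, every reference to `durableworkflow/server` in the listed manifests is rewritten to the digest form, so a tag can never silently move under you.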
The manifests expect you to provide:
- an external MySQL or PostgreSQL database;
- external Redis or another supported lock-capable cache backend;
- real database, Redis, worker, operator, and admin secrets;
- an ingress, gateway, or load balancer owned by your cluster platform;
- backup, restore, monitoring, and rollout procedures for your environment.
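As an illustration of the externally managed credentials the list above expects, the sketch below shows one plausible Secret shape. The Secret name, key names, and connection strings here are hypothetical; the authoritative keys are defined by `k8s/secret.yaml` and your secret-manager integration:

```yaml
# Hypothetical externally managed Secret. Key names and values are
# illustrative only — consult k8s/secret.yaml for the real contract.
apiVersion: v1
kind: Secret
metadata:
  name: durable-workflow-database   # assumed name
  namespace: durable-workflow
type: Opaque
stringData:
  DATABASE_URL: postgres://workflow:change-me@db.internal:5432/workflow
  REDIS_URL: redis://:change-me@redis.internal:6379/0
```

In production these values would typically be injected by a secret-manager integration rather than committed as plain manifests.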
The included contract is deliberately bounded:
- `k8s/migration-job.yaml` runs `server-bootstrap` before workloads start;
- `k8s/server-deployment.yaml` exposes `/api/health` for liveness and `/api/ready` for usable readiness;
- `k8s/worker-deployment.yaml` runs the queue worker;
- `k8s/scheduler-cronjob.yaml` runs recurring schedule, timeout, and retention maintenance;
- `k8s/secret.yaml` separates public config from app-level secrets and refers to externally managed database and Redis credentials.
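The liveness/readiness split above maps onto standard Kubernetes probe configuration. A sketch for the server container follows; the port and timing values are illustrative assumptions, and the actual probe block lives in `k8s/server-deployment.yaml`:

```yaml
# Illustrative probe block for the server container. Port and timings
# are assumptions; the authoritative values are in
# k8s/server-deployment.yaml.
livenessProbe:
  httpGet:
    path: /api/health   # process is alive
    port: 8080          # assumed container port
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /api/ready    # only succeeds once the service is actually usable
    port: 8080
  periodSeconds: 5
```

Keeping liveness on `/api/health` and readiness on `/api/ready` means a temporarily unusable pod is pulled from the Service endpoints without being restarted.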
Helm charts are now self-serve via `helm/durable-workflow/`; managed-Kubernetes provider validation, advanced HA, active/active multi-region, custom operators, storage classes, network policies, and environment-specific security hardening are support-led or tracked separately.
Active/passive multi-region with operator-driven regional failover follows the contract in `docs/multi-region-validation.md`; each region still runs the documented single-region or small-cluster shape, and this manifest contract does not add automatic cross-region orchestration. Use overlays or direct patches for namespace, image, resource, replica, ingress, and secret-manager integration choices.
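An environment overlay of the kind described above might look like the following sketch. The namespace, replica count, and directory layout are illustrative assumptions, not part of the shipped manifest contract:

```yaml
# overlays/prod/kustomization.yaml — sketch of a per-environment overlay.
# Namespace, paths, and replica count are assumptions for illustration.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: durable-workflow-prod   # assumed environment namespace
resources:
  - ../../k8s                      # assumed base manifest directory
patches:
  - target:
      kind: Deployment
      name: durable-workflow-worker
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 4                   # assumed worker count for this environment
```

Each environment then gets its own overlay directory, leaving the base manifests untouched and diffable against upstream releases.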