## Description
Dokploy preview deployments can drift out of sync with Docker Swarm when PR lifecycle events race the preview build/create path.
In our self-hosted Dokploy v0.28.8 setup, we observed two failure modes:
- Dokploy marks a preview deployment record as `running` before the Swarm service exists.
- If the PR is then closed, Dokploy removes the preview deployment record but leaves the Swarm service running, creating an orphan preview service.
This leaves live preview services on the host that are no longer represented in `preview_deployments` and are therefore not cleaned up by Dokploy.
## Environment
- Dokploy v0.28.8
- GitHub integration enabled for preview deployments
- Docker Swarm on a single Hetzner host
- Preview deployments using wildcard subdomains
## Controlled reproduction
- Open a temporary PR against the tracked integration branch.
- Dokploy authorizes preview creation and inserts a row in `preview_deployments` with status `running`.
- While or after the preview image build completes, inspect Swarm:
  - the expected preview service name is recorded in the Dokploy DB preview record
  - the actual Swarm services initially do not include that service, even though the DB row already says `running`
- Close the PR without merge.
- Observe that:
  - the `preview_deployments` row for that PR disappears from the Dokploy DB
  - but the Swarm service appears afterwards and remains running as an orphan
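For concreteness, the Swarm-side check in the reproduction above can be scripted roughly as follows (the service name is a hypothetical placeholder; use the name recorded in the `preview_deployments` row for the PR):

```shell
# Hypothetical check: does the service Dokploy marked as "running"
# actually exist in Swarm yet?
# PR_SERVICE is a placeholder; read the real name from preview_deployments.
PR_SERVICE="preview-myapp-pr-123"

if docker service ls --format '{{.Name}}' | grep -qx "$PR_SERVICE"; then
  echo "service exists in Swarm"
else
  echo "DB row says running, but the Swarm service does not exist yet"
fi
```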
## Actual result
Dokploy preview record is deleted, but the preview service remains live in Swarm.
In our controlled test:
- the Dokploy preview record was created while the PR was open
- after closing the PR, the Dokploy DB row was gone
- the corresponding preview service still existed in `docker service ls` for more than one minute after close
We also found broader drift before cleanup:
- 17 preview Swarm services on the host
- only 3 `preview_deployments` records in the Dokploy DB
So this is not just a one-off transient timing issue.
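The drift counts above were gathered with a quick comparison along these lines (the `preview-` name prefix and the database access details are assumptions about our setup, not Dokploy internals):

```shell
# Hypothetical drift check: count preview services in Swarm vs rows in
# Dokploy's preview_deployments table. The "preview-" prefix and the
# $DOKPLOY_DB_URL connection string are assumptions.
swarm_count=$(docker service ls --format '{{.Name}}' | grep -c '^preview-' || true)
db_count=$(psql "$DOKPLOY_DB_URL" -At -c 'SELECT count(*) FROM preview_deployments;')
echo "swarm previews: $swarm_count  dokploy records: $db_count"
```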
## Expected result
Preview teardown should be idempotent and self-healing:
- if a PR is closed, both the Dokploy preview record and the Swarm service should be removed
- if create/delete events race, Dokploy should reconcile preview DB state against actual Swarm state
- previews should never be left running without a matching Dokploy preview record
## Additional notes
We did not find an exact existing Dokploy issue for this specific failure mode. The closest related issue we found was about preview lifecycle/database isolation rather than teardown drift:
As an immediate containment step on our side, we added a host-side reconciliation script that compares Dokploy `preview_deployments` against `docker service ls` and removes unmanaged preview services. But this should not be necessary if Dokploy teardown is reliable.
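A minimal sketch of that reconciliation, assuming a `preview-` service-name prefix, an `app_name` column, and `psql` access via `$DOKPLOY_DB_URL` (all assumptions about our environment, not Dokploy internals):

```shell
#!/bin/sh
# Hypothetical host-side reconciliation: remove Swarm preview services
# that have no matching row in preview_deployments.
# The "preview-" prefix, the app_name column, and $DOKPLOY_DB_URL are
# assumptions; adjust to your setup.
set -eu

# Preview services currently live in Swarm, sorted for comm(1).
docker service ls --format '{{.Name}}' | grep '^preview-' | sort > /tmp/swarm_previews.txt

# Preview services Dokploy still tracks.
psql "$DOKPLOY_DB_URL" -At -c 'SELECT app_name FROM preview_deployments;' | sort > /tmp/db_previews.txt

# Lines present only in the Swarm list are orphans: remove them.
comm -23 /tmp/swarm_previews.txt /tmp/db_previews.txt | while read -r svc; do
  echo "removing orphan preview service: $svc"
  docker service rm "$svc"
done
```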