What versions & operating system are you using?
- wrangler: 4.84.1 (also reproduced on 4.72.0)
- miniflare: 4.20260421.0
- workerd: 1.20260421.1
- Docker CE: 29.4.1 (native daemon, not Docker Desktop)
- OS: Ubuntu 24.04 / Linux 6.17.0
Describe the Bug
On a host without cloudflare/proxy-everything pre-cached, a worker with containers + egress interception fails at runtime with:
✘ [ERROR] Uncaught Error: No such image available named cloudflare/proxy-everything:3cb1195@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8. Please ensure the container egress interceptor image is built and available.
…despite wrangler logging ⎔ Preparing container image(s)... / Downloaded newer image / ⎔ Container image(s) ready during startup. The pull appears to succeed, but workerd still can't find the image.
Root cause
pullEgressInterceptorImage invokes:
await runDockerCmd(dockerPath, ["pull", image, "--platform", "linux/amd64"]);
// image = "cloudflare/proxy-everything:3cb1195@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8"
When docker is handed a composite name:tag@digest reference, it fetches by digest and silently drops the tag — the image lands in the local store as an untagged entry, addressable only by digest. workerd then asks docker for the original name:tag@digest and gets "no such image" because there's no name:tag in the repo at all.
Minimal reproduction
$ docker pull cloudflare/proxy-everything:3cb1195@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8 --platform linux/amd64
...
Status: Downloaded newer image for cloudflare/proxy-everything@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8
docker.io/cloudflare/proxy-everything:3cb1195@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8
$ docker images cloudflare/proxy-everything
IMAGE ID DISK USAGE CONTENT SIZE EXTRA
# (empty — no tagged image)
$ docker image inspect cloudflare/proxy-everything@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8 --format '{{.RepoTags}} {{.RepoDigests}}'
[] [cloudflare/proxy-everything@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8]
Note: RepoTags=[]. The 3cb1195 tag was not applied.
Confirming the workaround:
$ docker pull cloudflare/proxy-everything:3cb1195 --platform linux/amd64
...
$ docker images cloudflare/proxy-everything
cloudflare/proxy-everything:3cb1195 7cdb883de642 15.2MB
Now wrangler dev no longer hits the "No such image available" error.
Impact
Any worker using containers with egress interception (e.g. allowedHosts / interceptOutboundHttp) breaks in wrangler dev on any host that doesn't already have the cloudflare/proxy-everything:3cb1195 tag cached. Machines with a warm cache happen to dodge the bug; clean machines hit it every time. The failure therefore looks flaky and is hard to diagnose, since the pull itself appears to succeed.
Suggested fix
Either:
- Drop the digest pinning from the pull command but keep it elsewhere (pull as name:tag, and keep the name:tag@digest form for workerd's lookup):
const pullRef = image.replace(/@sha256:[a-f0-9]+$/, "");
await runDockerCmd(dockerPath, ["pull", pullRef, "--platform", "linux/amd64"]);
Digest verification at pull time is lost, but workerd's lookup still pins by digest so content is still verified before use.
- After pulling by digest, explicitly tag the image:
const digestOnly = image.replace(/:[^@]+@/, "@"); // strip the tag piece
const tagOnly = image.replace(/@sha256:[a-f0-9]+$/, "");
await runDockerCmd(dockerPath, ["pull", digestOnly, "--platform", "linux/amd64"]);
await runDockerCmd(dockerPath, ["tag", digestOnly, tagOnly]);
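Both options boil down to splitting the composite reference into its two usable forms. A minimal sketch of that parsing (splitImageRef is a hypothetical helper name, not part of wrangler's actual code; it assumes references match the name:tag@sha256:<64 hex> shape shown above):

```typescript
// Split "name:tag@sha256:<digest>" into the digest-only reference (safe to
// pull) and the name:tag reference (what workerd looks up / what to re-tag).
function splitImageRef(image: string): { digestRef: string; tagRef: string } {
  // Greedy (.+) keeps registry-host colons (e.g. localhost:5000/img) in the
  // name; the tag itself cannot contain "@".
  const m = image.match(/^(.+):([^@]+)@(sha256:[a-f0-9]{64})$/);
  if (!m) throw new Error(`not a name:tag@digest reference: ${image}`);
  const [, name, tag, digest] = m;
  return { digestRef: `${name}@${digest}`, tagRef: `${name}:${tag}` };
}
```

With this in hand, option 2 becomes: pull digestRef, then docker tag digestRef tagRef, so the original name:tag@digest lookup succeeds.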
Happy to send a PR with whichever direction you prefer.
Workaround
For anyone else hitting this while a fix is pending: miniflare reads MINIFLARE_CONTAINER_EGRESS_IMAGE as an override for both the pull and the workerd config. Setting it to just the tag (no digest) works around both halves of the bug:
export MINIFLARE_CONTAINER_EGRESS_IMAGE=cloudflare/proxy-everything:3cb1195
Please provide any relevant error logs
[wrangler:info] ⎔ Preparing container image(s)...
docker.io/cloudflare/proxy-everything@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8: Pulling from cloudflare/proxy-everything
bc1da058f299: Pull complete
c27657e384b3: Pull complete
93bb89c5c4e0: Pull complete
Status: Downloaded newer image for cloudflare/proxy-everything@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8
⎔ Container image(s) ready
[wrangler:info] Ready on http://0.0.0.0:8788
✘ [ERROR] Uncaught Error: No such image available named cloudflare/proxy-everything:3cb1195@sha256:0ef6716c52430096900b150d84a3302057d6cd2319dae7987128c85d0733e3c8. Please ensure the container egress interceptor image is built and available.
Error checking if container is ready: connect(): Container ingress proxy is not running.
(repeats for every sandbox request)