Fix: blocking PATCH requests and implement true streaming for chunked… #120
Open
njuptlzf wants to merge 3 commits into cloudflare:main from
Conversation
Author
push

before:
# time regctl image copy --fast app.test.com/test/image/app:7.1.001 r2.test.site/test/image/app:7.1.001
time=2026-02-24T16:54:02.797+08:00 level=WARN msg="API field has been deprecated" api=default host=r2.test.site
time=2026-02-24T16:54:02.797+08:00 level=WARN msg="Changing reqPerSec settings for registry" orig=3 new=4 host=r2.test.site
time=2026-02-24T16:54:02.819+08:00 level=WARN msg="failed to setup CA pool" err="failed to load host specific ca (registry: r2.test.site): pem.Decode is nil: system"
Manifests: 5/5 | Blobs: 1.980GB copied, 32.000B skipped | Elapsed: 984s
r2.test.site/test/image/app:7.1.001

after:
# time regctl image copy --fast app.test.com/test/image/app:7.1.003.009 r2.test.site/test/image/app:7.1.003.009
time=2026-02-25T18:03:48.703+08:00 level=WARN msg="API field has been deprecated" api=default host=r2.test.site
time=2026-02-25T18:03:48.704+08:00 level=WARN msg="Changing reqPerSec settings for registry" orig=3 new=4 host=r2.test.site
time=2026-02-25T18:03:48.727+08:00 level=WARN msg="failed to setup CA pool" err="failed to load host specific ca (registry: r2.test.site): pem.Decode is nil: system"
Manifests: 5/5 | Blobs: 1.013GB copied, 833.410MB skipped | Elapsed: 330s
r2.test.site/test/image/app:7.1.003.009
real 5m30.821s
user 0m13.727s
sys 0m12.754s
Force-pushed c0b5bc5 to 889cc09
roshanjonah added a commit to roshanjonah/serverless-registry that referenced this pull request on Mar 14, 2026
Apply streaming fixes from upstream PR cloudflare#120:
- Eliminate blocking await req.blob() in PATCH handler by extracting size from Content-Length/Content-Range headers via getStreamSize()
- Fix limit() to use TransformStream for proper backpressure handling
- Parallel tee consumption for pull-through layer copies
- Run R2 part upload and helper object write in parallel

Additional push tool improvements:
- Restore chunk size to 95MB (was 10MB, causing excessive round-trips)
- Reduce push concurrency from 5 to 2 to avoid R2 contention
- Add exponential backoff on retry (5s, 10s, 20s)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
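The retry schedule mentioned in the commit message (5s, 10s, 20s) is a classic exponential backoff: a base delay doubled after each failed attempt. A minimal sketch of such a helper is below; `pushWithRetry` and its parameters are hypothetical illustrations, not code from this commit:

```typescript
// Hypothetical helper illustrating the 5s/10s/20s backoff schedule:
// delay = baseDelayMs * 2^attempt, so attempts 0, 1, 2 wait 5s, 10s, 20s.
async function pushWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 5_000,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // No point sleeping after the final failed attempt.
      if (attempt < maxRetries - 1) {
        const delay = baseDelayMs * 2 ** attempt;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastErr;
}
```

Doubling the delay gives transient R2 contention progressively more time to clear, which complements the concurrency reduction from 5 to 2.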
Context
Previously, the Registry Worker suffered from significant performance and stability issues during large image pushes. When a client (e.g., Docker, Podman, or regctl) sent a PATCH request without a Content-Length header (common in chunked uploads), the Worker would call await req.blob().

The Problem

await req.blob() buffers the entire chunk into the Worker's memory. Large chunks (50MB+) often hit the memory limit, causing OOM.

Changes
- Added getStreamSize in src/utils.ts. It extracts the chunk size from Content-Length or Content-Range using robust regex, avoiding body consumption.
- Updated src/router.ts to pass req.body (a ReadableStream) directly to the R2 client.
- Rewrote the limit function in src/chunk.ts to use a TransformStream pattern, ensuring proper backpressure handling and avoiding stream hangs.
- Updated PUSH_COMPATIBILITY_MODE in src/registry/r2.ts to consume teed streams concurrently using Promise.all, preventing deadlocks.
- Fixed Content-Range parsing to handle the bytes prefix and ensured the Range response header follows the standard 0-N format.
- On GET /v2/:name/blobs/:digest, when PUSH_COMPATIBILITY_MODE !== "none", immediately start the R2 upload in parallel with returning the response to the client. This reduces backpressure recovery time from 10–30 seconds to 1–5 seconds during large-layer pulls.

Impact
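The two stream helpers described in the Changes list can be sketched as follows. This is a minimal illustration assuming the behavior described in the PR; the actual signatures in src/utils.ts and src/chunk.ts may differ:

```typescript
// Sketch of getStreamSize: derive the chunk size from headers instead of
// buffering the body with await req.blob(). Returns null if neither
// header is usable.
function getStreamSize(headers: Headers): number | null {
  const contentLength = headers.get("Content-Length");
  if (contentLength !== null && /^\d+$/.test(contentLength)) {
    return Number(contentLength);
  }
  // Accept both "bytes 0-1023/2048" and the bare "0-1023" form.
  const contentRange = headers.get("Content-Range");
  const m = contentRange?.match(/^(?:bytes\s+)?(\d+)-(\d+)/);
  if (m) {
    return Number(m[2]) - Number(m[1]) + 1;
  }
  return null;
}

// Sketch of a TransformStream-based limit(): pass at most maxBytes
// through. Because pipeThrough propagates backpressure, the source is
// only pulled as fast as the consumer reads, instead of buffering.
function limit(
  stream: ReadableStream<Uint8Array>,
  maxBytes: number,
): ReadableStream<Uint8Array> {
  let seen = 0;
  return stream.pipeThrough(
    new TransformStream<Uint8Array, Uint8Array>({
      transform(chunk, controller) {
        const remaining = maxBytes - seen;
        if (remaining <= 0) return; // drop anything past the limit
        const slice =
          chunk.byteLength > remaining ? chunk.subarray(0, remaining) : chunk;
        seen += slice.byteLength;
        controller.enqueue(slice);
      },
    }),
  );
}
```

Extracting the size from headers lets the handler forward req.body as a live ReadableStream, and the TransformStream keeps the pipeline demand-driven rather than callback-buffered.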