Ship a built-in HTTP-based rolling-deploy adapter so users can warm previous bundle hashes during a rolling deploy without provisioning any external artifact store (no S3 bucket, no IAM, no aws-sdk dep). The currently-deployed Rails server already has the bundles + companion assets sitting on disk; mount a small authenticated controller that streams them, and ship a matching Http adapter that points at the previous deployment's URL. upload becomes a no-op — the running Rails server is the artifact store.
This is a follow-up to #3173 (which lands the rolling_deploy_adapter protocol + S3 / Control Plane / Filesystem reference implementations). The protocol stays load-bearing — this issue adds a fourth adapter that ships in the box and is the recommended default for users whose CI can reach their production URL.
Closes the gap between "the protocol exists" and "rolling-deploy seeding works out of the box."
Motivation

The S3 reference adapter from #3173 is a complete solution, but adoption requires:
Provision an S3 bucket (or GCS, R2, Azure Blob equivalent).
Wire IAM credentials into both build CI and runtime.
Add aws-sdk-s3 (or equivalent) to the Gemfile.
Decide on retention policy.
Copy the reference class into the app, adapt it, test it.
Most teams running rolling deploys on Heroku / Fly / Render / Cloud Run / Kamal / Control Plane / Kubernetes can reach their previous deployment's URL from their build CI. For those teams, the bundles are already on the previous deployment's disk — staging them through a separate object store is wasted ceremony.
A built-in HTTP adapter lowers the adoption story to:
Set ROLLING_DEPLOY_TOKEN (shared between Rails runtime and build CI).
Set ROLLING_DEPLOY_PREVIOUS_URL (or auto-detect from a known env var).
Set config.rolling_deploy_adapter = ReactOnRailsPro::RollingDeployAdapters::Http.
That's it. No bucket. No IAM. No new gem.
Background

rolling_deploy_adapter is a duck-typed protocol with three class methods: previous_bundle_hashes is called during pre-seeding to determine which historical hashes to warm; fetch(hash) downloads the bundle + companion assets to local disk; upload runs after assets:precompile to publish the new build.
See docs/pro/rolling-deploy-adapters.md for the full protocol contract, edge-case table, and three reference implementations.
Proposed design
Two pieces, shipped together:
1. Server side — mountable controller (ReactOnRailsPro::RollingDeploy::BundlesController)
A Rails controller mounted by the engine at a configurable path (default /react_on_rails_pro/rolling_deploy), exposing two endpoints:
GET /react_on_rails_pro/rolling_deploy/manifest
Returns the current deployment's bundle hash(es) so the next deploy's build CI can discover what to fetch.

Response (JSON):

{ "hashes": ["abc123...", "def456..."], "rsc_enabled": true, "generated_at": "2026-05-02T12:34:56Z" }

hashes always contains the server bundle hash; when RSC is enabled, also the RSC bundle hash. Order is [server_bundle_hash, rsc_bundle_hash] for stability.
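For illustration, a minimal sketch of how the server side might assemble this payload (the helper name and signature are assumptions; only the field names come from this issue):

```ruby
require "json"
require "time"

# Hypothetical helper that builds the manifest payload described above.
# Field names match the issue; everything else is illustrative.
def manifest_payload(server_bundle_hash, rsc_bundle_hash = nil, now = Time.now.utc)
  {
    "hashes" => [server_bundle_hash, rsc_bundle_hash].compact, # stable order
    "rsc_enabled" => !rsc_bundle_hash.nil?,
    "generated_at" => now.iso8601
  }
end
```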
GET /react_on_rails_pro/rolling_deploy/bundles/:hash
Returns a tarball (application/x-tar, gzipped) containing the bundle + companion assets for the requested hash.
Tarball layout:
./bundle.js
./loadable-stats.json
./react-client-manifest.json # if RSC enabled
./react-server-client-manifest.json # if RSC enabled
The hash in the URL must match either the current server bundle hash or the RSC bundle hash. Any other hash returns 404. (The server only knows about its own hashes; it can't serve historical hashes from prior deploys it never ran.)
Why tarball over multipart or per-file requests:
Single round-trip per hash.
Atomic: client either gets the full set or fails.
Streamable on both sides; no need to buffer in memory.
gzip compresses bundles ~3-5× (server bundles are mostly text).
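As a sketch of the tarball approach, here is one way to build the gzipped tarball with Ruby's stdlib (Gem::Package::TarWriter plus Zlib::GzipWriter). The helper name and buffering into a string are illustrative; a real controller would stream into the response body instead, and the issue's layout uses "./"-prefixed entry names, omitted here for simplicity:

```ruby
require "rubygems/package"
require "zlib"
require "stringio"

# Hypothetical helper: pack the bundle + companion assets into one tar.gz.
# files maps entry name => path on disk, e.g. { "bundle.js" => "/app/.../bundle.js" }.
def build_tarball(files)
  io = StringIO.new
  Zlib::GzipWriter.wrap(io) do |gz|
    Gem::Package::TarWriter.new(gz) do |tar|
      files.each do |name, path|
        contents = File.binread(path)
        tar.add_file_simple(name, 0o644, contents.bytesize) do |entry|
          entry.write(contents)
        end
      end
    end
  end
  io.string
end
```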
2. Client side — adapter class (ReactOnRailsPro::RollingDeployAdapters::Http)
```ruby
module ReactOnRailsPro
  module RollingDeployAdapters
    class Http
      def self.previous_bundle_hashes
        return [] unless previous_url

        response = get_with_auth("#{previous_url}/manifest")
        return [] unless response.code == "200"

        JSON.parse(response.body).fetch("hashes", [])
      rescue StandardError => e
        warn_and_return("manifest fetch failed: #{e.class}: #{e.message}", [])
      end

      def self.fetch(hash)
        return nil unless previous_url

        response = get_with_auth("#{previous_url}/bundles/#{hash}", stream: true)
        return nil unless response.code == "200"

        dir = Rails.root.join("tmp/rolling-deploy", hash)
        FileUtils.mkdir_p(dir)
        extract_tarball(response.body, dir)

        bundle = dir.join("bundle.js").to_s
        return nil unless File.exist?(bundle)

        asset_names = %w[loadable-stats.json react-client-manifest.json react-server-client-manifest.json]
        assets = asset_names.map { |n| dir.join(n).to_s }.select { |p| File.exist?(p) }

        { bundle: bundle, assets: assets }
      rescue StandardError => e
        warn_and_return("fetch(#{hash}) failed: #{e.class}: #{e.message}", nil)
      end

      def self.upload(_hash, bundle:, assets:)
        # No-op. The running Rails server IS the artifact store.
        # Files are already on local disk where the controller serves them.
      end

      # ... private helpers: previous_url, token, get_with_auth, extract_tarball ...
    end
  end
end
```
Authentication & security
This endpoint serves the application's compiled server bundle. It must not be open. Specifics:
Token-based auth (shared secret)
New env var: ROLLING_DEPLOY_TOKEN (configurable via config.rolling_deploy_token).
Sent by client in Authorization: Bearer <token> header.
Server compares with ActiveSupport::SecurityUtils.secure_compare (constant-time).
On mismatch: 401 Unauthorized, no body. Same response for missing/malformed/wrong token (don't leak which).
Rails refuses to mount the controller unless a token is set with min length (e.g., 32 chars). Fail loud on misconfiguration rather than ship an open endpoint.
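A dependency-free sketch of the comparison (in Rails you would call ActiveSupport::SecurityUtils.secure_compare directly, which internally hashes both sides to a fixed length before comparing; the helper name and the placement of the min-length check here are assumptions):

```ruby
require "openssl"
require "digest"

MIN_TOKEN_LENGTH = 32 # from the proposal above

# Constant-time token check. Hashing both sides to a fixed length first
# avoids leaking token length through the comparison.
def valid_rolling_deploy_token?(provided, expected)
  return false if provided.nil? || expected.nil?
  return false if expected.length < MIN_TOKEN_LENGTH # refuse weak configuration

  OpenSSL.fixed_length_secure_compare(
    Digest::SHA256.digest(provided),
    Digest::SHA256.digest(expected)
  )
end
```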
Open question: should we reuse RENDERER_PASSWORD?
Recommendation: no, but allow opt-in fallback.
RENDERER_PASSWORD is a runtime concern (Rails ↔ Node renderer). The new endpoint is hit by build CI, a different trust boundary.
Reusing means rotating either secret rotates both — coupling that's hard to reason about.
A dedicated ROLLING_DEPLOY_TOKEN documents intent at the call site.
Compromise: if ROLLING_DEPLOY_TOKEN isn't set, fall back to RENDERER_PASSWORD with a deprecation-style warning logged once at boot. Lets users start with one secret and graduate to two.
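The fallback could look roughly like this (the method name is an assumption, and a real implementation would deduplicate the warning so it logs only once at boot):

```ruby
# Resolve the shared secret with the opt-in fallback described above.
# warn_proc is injectable so the behavior is easy to test.
def resolve_rolling_deploy_token(env = ENV, warn_proc = method(:warn))
  token = env["ROLLING_DEPLOY_TOKEN"]
  return token if token && !token.empty?

  fallback = env["RENDERER_PASSWORD"]
  if fallback && !fallback.empty?
    warn_proc.call("ROLLING_DEPLOY_TOKEN not set; falling back to RENDERER_PASSWORD (deprecated)")
    return fallback
  end
  nil
end
```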
Rate limiting
Optional but recommended: enforce N requests per minute per IP (default: 60/min).
Use Rack::Attack if present, otherwise a simple in-memory throttle.
Counter resets on app restart (acceptable — this isn't a DDoS-grade protection).
Configurable via config.rolling_deploy_rate_limit (set to nil to disable).
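A sketch of the in-memory fallback throttle (not thread-safe as written; the class name and the sliding-window choice are assumptions):

```ruby
# Minimal per-key sliding-window throttle. Counters live in process memory,
# so they reset on restart, matching the behavior described above.
class SimpleThrottle
  def initialize(limit:, period: 60.0)
    @limit = limit
    @period = period
    @hits = Hash.new { |h, k| h[k] = [] }
  end

  # Returns true if the request is allowed, false if it should get a 429.
  def allow?(key, now = Process.clock_gettime(Process::CLOCK_MONOTONIC))
    hits = @hits[key]
    hits.reject! { |t| now - t >= @period } # drop timestamps outside the window
    return false if hits.size >= @limit

    hits << now
    true
  end
end
```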
Path traversal
The :hash URL param is matched against an allowlist (the current deployment's actual hashes), not used to construct file paths directly.
Even so, validate the param matches /\A[0-9a-f]+\z/ before doing anything.
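The two layers combine into a check like this (helper name is an assumption):

```ruby
# Format check first, then membership in the current deployment's hash set.
# The param never reaches filesystem code unless both pass.
VALID_HASH_FORMAT = /\A[0-9a-f]+\z/

def serveable_hash?(param, current_hashes)
  return false unless param.is_a?(String) && param.match?(VALID_HASH_FORMAT)

  current_hashes.include?(param)
end
```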
TLS
Document that the endpoint must be served over HTTPS in production.
Adapter logs a warning (not an error — some test setups need plain HTTP) if the previous URL is http:// and not localhost.
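A sketch of that warn-don't-fail check (the helper name and the exact localhost list are assumptions):

```ruby
require "uri"

# Warn, but do not raise, when the previous deployment URL is plain HTTP
# and not a local address. Some test setups legitimately use plain HTTP.
def warn_if_plain_http(previous_url, warn_proc = method(:warn))
  uri = URI.parse(previous_url)
  return unless uri.scheme == "http"
  return if ["localhost", "127.0.0.1", "::1"].include?(uri.host)

  warn_proc.call("rolling_deploy_previous_url uses plain HTTP: #{previous_url}")
end
```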
What's exposed if the token leaks
An attacker can download the server bundle and asset manifests for the current deployment.
Server bundles contain compiled JS, source maps (depending on webpack config), and any constants webpack inlines (env vars, feature flags, API keys baked into the build — this is already a JS-bundle hygiene issue, but worth flagging in docs).
The endpoint does not expose runtime data, secrets in Rails.application.credentials, DB contents, etc.
Rotate the token like any other credential.
Configuration surface
```ruby
# config/initializers/react_on_rails_pro.rb
ReactOnRailsPro.configure do |config|
  config.rolling_deploy_adapter = ReactOnRailsPro::RollingDeployAdapters::Http

  # Required in production.
  config.rolling_deploy_token = ENV["ROLLING_DEPLOY_TOKEN"]

  # Required at build time (where to fetch from).
  # Auto-detected from common env vars: PREVIOUS_DEPLOYMENT_URL,
  # HEROKU_RELEASE_PREV_URL, etc. Falls back to explicit config.
  config.rolling_deploy_previous_url = ENV["ROLLING_DEPLOY_PREVIOUS_URL"]

  # Optional: where to mount the controller. Default below.
  config.rolling_deploy_mount_path = "/react_on_rails_pro/rolling_deploy"

  # Optional: rate limit (requests/min/IP). Default 60. nil disables.
  config.rolling_deploy_rate_limit = 60

  # Optional: max bundle/tarball size to accept. Default 200 MB.
  config.rolling_deploy_max_size = 200 * 1024 * 1024
end
```
Engine mount
Engine auto-mounts the controller iff config.rolling_deploy_adapter == ReactOnRailsPro::RollingDeployAdapters::Http (or a subclass). Avoids surprising users who configure a custom adapter and don't expect the endpoint to appear.
Document an explicit-mount form for users who want to mount under a custom path or behind their own auth middleware.
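A hedged sketch of what the documented explicit mount might look like (the path, module option, and action names here are assumptions; only BundlesController and the two endpoints appear in this issue):

```ruby
# config/routes.rb -- hypothetical explicit mount under a custom path.
Rails.application.routes.draw do
  scope "/internal/rolling_deploy", module: "react_on_rails_pro/rolling_deploy" do
    get "manifest", to: "bundles#manifest"
    # Rails anchors segment constraints automatically; no \A..\z needed here.
    get "bundles/:hash", to: "bundles#show", constraints: { hash: /[0-9a-f]+/ }
  end
end
```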
Streaming: server uses Rack::Files or ActionController::Live to avoid buffering the whole tarball in memory. Adapter streams to disk in 64KB chunks.
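The adapter's chunked write can be sketched independently of HTTP (with Net::HTTP, response.read_body yields chunks as they arrive; this shows only the disk-write loop, with names assumed):

```ruby
CHUNK_SIZE = 64 * 1024 # 64KB, as described above

# Copy src to dst chunk by chunk, never holding the whole payload in memory.
# Returns the total number of bytes copied.
def stream_copy(src, dst, chunk_size = CHUNK_SIZE)
  bytes = 0
  while (buf = src.read(chunk_size))
    dst.write(buf)
    bytes += buf.bytesize
  end
  bytes
end
```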
Error responses
Code | Condition
401 | Missing/invalid token
404 | Hash not in current deployment's hash set
413 | (Defensive) bundle exceeds max_size
429 | Rate limit exceeded
500 | Unexpected server error (don't leak stack traces)
Adapter treats all non-200 responses as "skip this hash" — never raises, always logs. Consistent with the protocol's error contract (per rolling-deploy-adapters.md).
Edge cases & error handling
Scenario | Behavior
previous_url not configured | previous_bundle_hashes returns [], log info once. No-op.
previous_url returns connection refused | [] (or nil from fetch). Skip seeding; don't fail the build.
Request exceeds the caller's timeout | Respects Timeout.timeout. Warn and skip.
Tarball missing loadable-stats.json | fetch still returns the bundle with whichever companion assets exist.
manifest returns {"hashes": []} | Adapter returns []. No-op.
Http adapter configured but no token set | Raise at ReactOnRailsPro.configure time.
Token shorter than 32 chars | Raise at ReactOnRailsPro.configure time.

Tests

Server-side (controller specs)
404 for path-traversal attempt (../../../etc/passwd).
Rate limiter returns 429 after threshold.
RSC enabled: both server + RSC hashes returned and fetchable.
RSC disabled: only server hash returned; RSC hash request 404s.
Client-side (adapter specs)
previous_bundle_hashes returns parsed hashes on 200.
Returns [] on 401/404/500/timeout/connection refused.
fetch writes bundle + assets to tmp/rolling-deploy/<hash>/ and returns paths.
fetch returns nil on tarball corruption.
fetch returns nil when bundle.js missing from tarball.
upload is a no-op (returns truthy without doing anything).
Token sent in Authorization: Bearer header.
Token fallback to RENDERER_PASSWORD with deprecation warning when set.
Respects Timeout.timeout from caller.
Cleans up partial extraction on failure.
Integration
End-to-end: spin up a Rails server with the controller mounted, run the adapter against it, verify cache directory is populated correctly.
Run the existing rolling-deploy-cache-stager spec suite with the HTTP adapter swapped in.
Security
Path traversal regression test (specific test case for ../).
Hash whitelist test (any hash not in current set → 404).
Documentation
New page
docs/pro/rolling-deploy-http-adapter.md — full setup walkthrough:
When to use this vs. S3 / Control Plane / custom adapters (decision matrix).
Generate a token (SecureRandom.hex(32)).
Set env vars on Rails runtime + build CI.
Add initializer config.
Confirm via react_on_rails:doctor.
Verify by triggering a deploy and tailing renderer logs.
Rotation procedure.
Security model: what's exposed, what isn't, how to lock down further.
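The token-generation step from the walkthrough above is just stdlib:

```ruby
require "securerandom"

# 32 random bytes rendered as 64 lowercase hex characters, satisfying the
# 32-char minimum-length requirement from the security section.
token = SecureRandom.hex(32)
```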
Updates to existing pages
docs/pro/rolling-deploy-adapters.md: add Http as the fourth reference impl (with a "this is the easiest option for most users" callout) and update the comparison table.
docs/pro/node-renderer.md: add a "want zero-config rolling deploy?" mention.
CHANGELOG.md: feature entry.
Decision matrix (for the docs page)
Adapter | Best for | Requires
Http (built-in) | Most rolling-deploy users | CI can reach prod URL + token
S3 / GCS / R2 | Network-isolated build runners; multi-region | Object store + IAM
ControlPlane | Image-based deploys on cpln | cpln CLI in build env
Filesystem | Volume-mounted deploys; testing | Shared filesystem
Custom | Anything else | Implement the 3-method protocol
Out of scope (future follow-ups)
Mutual TLS auth instead of bearer token. Real ask for some shops; defer until requested.
Signed URLs with short TTL instead of long-lived bearer token. Nice-to-have; the bearer pattern is well-understood and matches RENDERER_PASSWORD.
Multi-region: an HTTP adapter that hits N URLs and picks the first that responds. Meaningful for multi-region rolling deploys; defer.
Diff / patch download: only fetch what changed since last deploy. Premature optimization — bundles are already small after gzip.
Auto-detect previous URL from cloud provider: Heroku / Fly / Render all expose previous-release URLs via env vars. Worth doing as a follow-up after the base impl ships.
Open questions
Engine vs. controller-only. Mounting an engine gives namespace isolation; mounting a single controller is lighter. Engine wins if we add more endpoints (health, version) later. Lean engine.
Tarball format: tar.gz vs. zip vs. multipart? Tar.gz is more portable in Ruby and streams naturally; zip needs the full payload before the central directory can be read.
Should upload ever do anything? Could optionally POST the new hash to a "register myself" endpoint so multi-version setups can track hashes server-side. Probably YAGNI — the manifest endpoint already infers this from running state.
Caching: should the controller cache the tarball on disk after first build (immutable per hash)? Avoids re-tar-gzipping on every fetch. Marginal — fetches are once per deploy. Defer.
Backpressure: if 50 build runners all hit /bundles/:hash simultaneously (e.g., parallel CI matrix), should we serialize? Probably handled by rate limiter + Rails request concurrency; revisit if reported.
Related: #3122 (parent: eliminate Node Renderer cold-start latency), #3173 (rolling_deploy_adapter protocol).
Wire format details

GET /manifest: returns the manifest JSON described above; a protocol_version field lets us evolve the wire format without breaking older build CIs.

GET /bundles/:hash: returns the gzipped tarball described above, streamed as the response body.
Doctor integration

Extend react_on_rails:doctor (already updated in #3173) to detect the HTTP adapter specifically:
Check rolling_deploy_token is set and meets the length requirement.
Check rolling_deploy_previous_url is set.
Warn when the previous URL is http:// (not localhost).
Probe GET /manifest against localhost to confirm the controller is mounted and reachable.

Doctor still must not call fetch or upload.
Implementation checklist

ReactOnRailsPro::RollingDeploy::Engine (or BundlesController mounted directly).
ReactOnRailsPro::RollingDeployAdapters::Http adapter class.
Config options: config.rolling_deploy_token, rolling_deploy_previous_url, rolling_deploy_mount_path, rolling_deploy_rate_limit, rolling_deploy_max_size.
Token validation at configure time when adapter is Http.
Format validation of the :hash param.
RENDERER_PASSWORD fallback with deprecation log.
Auto-mount only when the Http adapter is configured.
Doctor checks for Http-specific configuration.
docs/pro/rolling-deploy-http-adapter.md walkthrough.
Update the docs/pro/rolling-deploy-adapters.md comparison table + add as 4th reference impl.

Acceptance criteria
A new ROR Pro user with a Rails app deployed on Heroku can enable rolling-deploy seeding by:
Running bin/rails generate react_on_rails_pro:rolling_deploy_token (or SecureRandom.hex(32) from a console).
Setting ROLLING_DEPLOY_TOKEN on the Heroku app + their CI.
Adding the adapter config to config/initializers/react_on_rails_pro.rb.
No bucket. No IAM. No new gem.
References
#3173 — rolling_deploy_adapter protocol + S3 / Control Plane / Filesystem refs
docs/pro/rolling-deploy-adapters.md — protocol contract
react_on_rails_pro/lib/react_on_rails_pro/rolling_deploy_cache_stager.rb — staging implementation that consumes the adapter
react_on_rails/lib/react_on_rails/doctor.rb — adapter probe (extend for Http specifics)