fix(tests): modulepreload chunks in <head>, richer import error#357
brianmhunt merged 1 commit into main
Conversation
⚠️ Rate limit exceeded: CodeRabbit skipped this review; it can be re-triggered once the wait time (~11 minutes) elapses. 📒 Files selected for processing (4)
📝 Walkthrough

Enhanced test execution with improved error diagnostics and performance optimization: added HTTP-based error diagnostics in frame error handling, introduced shared ESM chunk tracking in the test manifest, and implemented prefetching of shared chunks and setup files before iframe execution to warm the browser cache.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 5 passed
Pull request overview
Improves the per-spec iframe harness to reduce and better diagnose intermittent WebKit dynamic import() failures when loading large module graphs.
Changes:
- Adds a single retry (150ms delay) when dynamic-importing the spec module fails.
- On final import failure, fetches the spec URL with `cache: 'no-store'` and includes HTTP status, content type, and size in both the in-frame error display and the `postMessage` payload.
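The single-retry behavior described above could look roughly like the following sketch. This is a hedged illustration, not the PR's actual code: `importer` is a stand-in for `() => import(specUrl)`, and the stub below simulates a transient WebKit failure.

```javascript
// Hedged sketch of the retry described above: one attempt, a short
// delay, then one more attempt whose failure propagates to diagnostics.
// `importer` stands in for `() => import(specUrl)` (assumption).
async function importWithRetry (importer, delayMs = 150) {
  try {
    return await importer()
  } catch {
    await new Promise(resolve => setTimeout(resolve, delayMs))
    return importer() // a second failure reaches the caller's catch
  }
}

// Demo with a stub importer that fails once, then succeeds.
let attempts = 0
const stub = () => {
  attempts += 1
  if (attempts === 1) return Promise.reject(new Error('Importing a module script failed.'))
  return Promise.resolve({ ok: true })
}

importWithRetry(stub, 10).then(mod => {
  console.log(attempts, mod.ok) // 2 true
})
```

A second failure is deliberately not swallowed, so the richer diagnostics described in the bullet above still run on genuine errors.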
```js
async function diagnose (err) {
  try {
    const res = await fetch(specUrl, { cache: 'no-store' })
    const size = Number(res.headers.get('content-length')) || (await res.clone().text()).length
```
The fallback size calculation uses `res.clone().text().length`, which counts UTF-16 code units (not bytes) but is reported as "B", and it forces the entire response body into memory. Consider using `await res.clone().arrayBuffer()` and `byteLength` for an accurate byte count (and optionally cap or skip reading the body when content-length is missing, to avoid large allocations).
```diff
- const size = Number(res.headers.get('content-length')) || (await res.clone().text()).length
+ const contentLength = Number(res.headers.get('content-length'))
+ const size = contentLength || (await res.clone().arrayBuffer()).byteLength
```
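As a quick illustration of the unit mismatch the comment points out: `String.length` counts UTF-16 code units, while content-length counts bytes on the wire (UTF-8 here). The sample string is made up for demonstration.

```javascript
// String.length counts UTF-16 code units; content-length counts bytes.
// For any multi-byte text the two diverge.
const body = 'héllo 🎉'

const codeUnits = body.length                           // what text().length reports
const bytes = new TextEncoder().encode(body).byteLength // actual UTF-8 byte count

console.log(codeUnits, bytes) // 8 11
```

`'é'` is 1 code unit but 2 bytes, and `'🎉'` is 2 code units but 4 bytes, so reporting `text().length` as "B" undercounts.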
Force-pushed: ed94653 → 4578697, then 95dcab2 → ace70b7, then ace70b7 → d81b095
Actionable comments posted: 1
🧹 Nitpick comments (1)
tko.io/scripts/bundle-tests.mjs (1)
**284-294**: Stale `chunk-*.js` files from prior builds will leak into the manifest. esbuild doesn't clean `sourceOutputDir` between runs, and this glob picks up anything matching `chunk-*.js` there. In the normal case esbuild content-hashes chunk names, so additive leftovers are harmless; but when a spec is removed or the dependency graph changes, stale hashed chunks remain on disk and will be listed in `manifest.chunks` — causing the parent page to prefetch dead bytes (wasted bandwidth; still 200s, so no error). Consider either pruning `sourceOutputDir` before `esbuild.build` or capturing the real chunk list from the `metafile` output instead of globbing.

♻️ Sketch using esbuild metafile

```diff
 await esbuild.build({
   entryPoints, bundle: true, format: 'esm', splitting: true,
   platform: 'browser', target: 'es2022', outdir: sourceOutputDir,
+  metafile: true,
   tsconfig, alias, plugins: [distToSrcPlugin],
   define: { BUILD_VERSION: JSON.stringify(buildVersion) },
   external: ['mocha'], logLevel: 'warning'
-})
+}).then(result => result)
```

Then derive `chunks` by filtering `Object.keys(result.metafile.outputs)` for `chunk-*.js` basenames. Alternatively, `await fs.rm(sourceOutputDir, { recursive: true, force: true })` before mkdir.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tko.io/scripts/bundle-tests.mjs` around lines 284 - 294, The manifest is currently populated by globbing the output dir (Bun.Glob('chunk-*.js')), which picks up stale chunk files; instead capture the actual emitted chunks from esbuild's build result.metafile to avoid leftover files. Modify the build invocation to enable metafile (esbuild.build({ ..., metafile: true })) and then derive chunks by filtering Object.keys(result.metafile.outputs) for basenames matching chunk-*.js (then sort) rather than using Bun.Glob over sourceOutputDir; reference the variables/functions: sourceOutputDir, chunks, manifest, Bun.Glob, and result.metafile.outputs when making the change.
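A minimal sketch of the metafile-based derivation this nitpick suggests. The `outputs` object is shaped like esbuild's documented `result.metafile` when `metafile: true` is passed, but the paths and hashed chunk names here are invented for illustration:

```javascript
// Hypothetical metafile shaped like esbuild's result.metafile.
// Only chunks emitted by *this* build reach the manifest, so stale
// chunk-*.js files left on disk from earlier builds can never leak in.
const metafile = {
  outputs: {
    'public/tests/source/setup.js': { bytes: 1024 },
    'public/tests/source/chunk-ZK7Q2A.js': { bytes: 2048 },
    'public/tests/source/chunk-B93XH1.js': { bytes: 4096 },
    'public/tests/source/someSpec.js': { bytes: 512 }
  }
}

const chunks = Object.keys(metafile.outputs)
  .map(p => p.split('/').pop())            // basename, without node:path
  .filter(name => /^chunk-.*\.js$/.test(name))
  .sort()

console.log(chunks) // [ 'chunk-B93XH1.js', 'chunk-ZK7Q2A.js' ]
```

Because the list comes from the build result rather than a disk glob, pruning the output directory becomes optional rather than load-bearing.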
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1092b177-6de2-40ff-b8c4-9db23002f142
📒 Files selected for processing (3)
- tko.io/public/tests-frame.html
- tko.io/scripts/bundle-tests.mjs
- tko.io/src/pages/tests.astro
```js
await Promise.all([
  ...(manifest.chunks || []).map(name => fetch(`/tests/source/${name}`)),
  fetch('/tests/source/setup.js')
])
```
🧩 Analysis chain
🌐 Web query:
Does WebKit/Safari commit a fetch() response to the HTTP cache if the response body stream is never consumed?
💡 Result:
No, WebKit/Safari does not commit a fetch response to the HTTP cache if the response body stream is never consumed. The response is received and available (headers and status are accessible immediately after fetch resolves), but the body data is not stored in the cache unless the body stream is read or consumed. This is because caching requires the full response body to be downloaded and buffered for storage in the memory or disk cache (via CachedResource's resourceBuffer and setBodyDataFrom, which rely on data being loaded). If the ReadableStream body is not consumed (e.g., no .json, .text, or .blob call), the body data is discarded without being added to the cache, as evidenced by WebKit's caching implementation where uncached or unbuffered bodies prevent entry into MemoryCache::add. Apple docs confirm caching occurs for complete responses, and unconsumed streams lead to no persistent storage for future requests.
Citations:
- 1: https://github.com/WebKit/WebKit/blob/main/Source/WebCore/loader/cache/CachedResource.h
- 2: https://github.com/WebKit/WebKit/blob/main/Source/WebCore/loader/cache/CachedResource.cpp
- 3: https://github.com/WebKit/WebKit/blob/main/Source/WebCore/loader/cache/MemoryCache.cpp
- 4: https://github.com/WebKit/WebKit/blob/main/Source/WebCore/platform/network/ResourceRequestBase.h
- 5: https://webkit.org/blog/7521/a-few-words-on-fetching-bytes
- 6: https://developer.apple.com/documentation/foundation/accessing-cached-data?changes=la
🏁 Script executed:

```shell
# Check if the file exists and read the relevant section
fd -t f "tests.astro" tko.io/
```

Repository: knockout/tko
Length of output: 84
🏁 Script executed:

```shell
# Read the file around the specified lines
if [ -f "tko.io/src/pages/tests.astro" ]; then
  echo "=== Lines 530-540 of tests.astro ==="
  sed -n '530,540p' tko.io/src/pages/tests.astro
  echo ""
  echo "=== Full context: Lines 520-550 ==="
  sed -n '520,550p' tko.io/src/pages/tests.astro
fi
```

Repository: knockout/tko
Length of output: 2128
🏁 Script executed:

```shell
# Check if tko.io has a build script and understand the project structure
if [ -f "tko.io/package.json" ]; then
  echo "=== tko.io/package.json (scripts section) ==="
  jq '.scripts' tko.io/package.json 2>/dev/null || cat tko.io/package.json | head -30
fi
```

Repository: knockout/tko
Length of output: 720
Drain response bodies before spawning iframes — unconsumed streams prevent WebKit from caching.
The prefetch optimization correctly aims to warm the HTTP cache before iframes import chunks, but the current implementation fails on WebKit: awaiting fetch() resolves when headers arrive, but the response body remains an unconsumed ReadableStream. WebKit does not commit a response to the HTTP cache until the body is fully read and the stream closes. If Promise.all resolves before the body is buffered, iframes can race the same concurrent fetches and re-trigger the "Importing a module script failed." error this PR aims to eliminate.
Additionally, Promise.all provides no error isolation: if any fetch fails or returns non-OK, the entire prefetch aborts with no iframes started. Use Promise.allSettled or per-fetch error handling to let the run proceed; genuine failures will surface later via the richer import-error diagnostic.
♻️ Proposed fix

```diff
+const warm = async (url) => {
+  try {
+    const r = await fetch(url)
+    // Drain the body so the browser commits the response
+    // to the HTTP cache before iframes start racing imports.
+    await r.arrayBuffer()
+  } catch { /* diagnostic path will surface real failures */ }
+}
 await Promise.all([
-  ...(manifest.chunks || []).map(name => fetch(`/tests/source/${name}`)),
-  fetch('/tests/source/setup.js')
+  ...(manifest.chunks || []).map(name => warm(`/tests/source/${name}`)),
+  warm('/tests/source/setup.js')
 ])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tko.io/src/pages/tests.astro` around lines 532 - 535, The prefetch currently
awaits fetch()s which leaves response bodies unconsumed (breaking WebKit
caching) and uses Promise.all which aborts everything on any failure; update the
prefetch logic that maps over manifest.chunks and the setup.js fetch so that
each fetch is checked for response.ok and then its body is fully drained (e.g.,
call response.arrayBuffer() or response.blob()) and wrap these per-request
operations in Promise.allSettled so failures don’t short-circuit starting the
iframes; keep the manifest.chunks mapping and the setup.js fetch but ensure each
entry both verifies response.ok and consumes the body before resolving, and use
Promise.allSettled to wait for all drains without blocking on any single
failure.
outdated

Current implementation uses `<link rel="modulepreload">` tags rendered in `<head>` at SSR time. That's the browser's first-class preload mechanism: the resource loader owns the fetch, buffers the body fully, and commits to both the preload cache and HTTP cache before moving on. No unread ReadableStream problem — there is no JS-side fetch() consuming headers.

The fetch-body-not-cached WebKit quirk is real (the reason fetch() + Promise.all is a known bad warmup primitive), which is why `<link rel="modulepreload">` exists in the first place. Switching to it killed two birds: semantic correctness + cache commit guarantee.
@brianmhunt, you're right — my comment was against the diff's fetch()-based approach, but the file already has the SSR-rendered <link rel="modulepreload"> solution in <head>. That's the correct primitive: the browser's resource loader owns the full fetch-buffer-cache-commit lifecycle, so neither the unconsumed ReadableStream problem nor the Promise.all failure-isolation issue applies. Acknowledged and thanks for the clear explanation.
(◕‿◕✿) — the <link rel="modulepreload"> approach is the right tool for the job. Good call.
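In plain JS terms, the SSR-side tag generation discussed here amounts to roughly the following. This is a sketch only: the manifest shape matches the `{ chunks: [...] }` structure described in this thread, but the chunk names are hypothetical.

```javascript
// Rough sketch of the SSR-time approach: read the manifest's chunk
// list and emit one <link rel="modulepreload"> per shared chunk for
// the page <head>. Chunk names here are hypothetical.
const manifest = { chunks: ['chunk-ZK7Q2A.js', 'chunk-B93XH1.js'] }

const links = (manifest.chunks || []).map(
  name => `<link rel="modulepreload" href="/tests/source/${name}">`
)

console.log(links.join('\n'))
```

The browser's resource loader then fetches and fully buffers each chunk before any iframe starts importing, which is what commits it to the HTTP cache and sidesteps the WebKit quirk.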
✏️ Learnings added
Learnt from: brianmhunt
Repo: knockout/tko PR: 357
File: tko.io/src/pages/tests.astro:0-0
Timestamp: 2026-04-22T15:43:34.568Z
Learning: In `tko.io/src/pages/tests.astro`, warming the HTTP cache for shared ESM chunks (to prevent WebKit's concurrent-fetch "Importing a module script failed." race) is done via SSR-rendered `<link rel="modulepreload">` tags in `<head>`, not JS-side `fetch()` calls. The chunk list is read at Astro SSR time from `public/tests/source/manifest.json` using `Bun.file(...).json()`, and each chunk name is emitted as `<link rel="modulepreload" href="/tests/source/{name}">`. This is the correct approach: the browser resource loader owns the full fetch-buffer-cache-commit cycle, avoiding the WebKit quirk where an unconsumed `ReadableStream` from a JS `fetch()` is never committed to the HTTP cache.
🧠 Learnings used
Learnt from: CR
Repo: knockout/tko PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-22T12:59:48.899Z
Learning: Applies to {packages,builds}/*/spec/**/*.ts : Test the binding layer against real DOM behavior using Playwright in a real-browser matrix (chromium, firefox, webkit) as the authoritative test environment
Learnt from: CR
Repo: knockout/tko PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-22T12:59:48.899Z
Learning: Applies to tko.io/**/*.{tsx,md,astro} : Run bun run build in tko.io/ for a clean Astro build before merging docs changes; verify every Open in Playground button for pages with runnable TSX examples
Learnt from: CR
Repo: knockout/tko PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-22T12:59:48.899Z
Learning: Applies to {packages,builds}/*/spec/**/*.{ts,js} : Test files must follow the pattern `packages/*/spec/**/*.ts` or `builds/*/spec/**/*.js` and use Vitest with Chai (expect) + Sinon (spies/stubs/timers)
Learnt from: CR
Repo: knockout/tko PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-22T12:59:48.900Z
Learning: Adversarial review is mandatory for in-scope changes (code, tests, public API, agent-facing docs, CI, tools/build.ts, vitest.config.ts, biome.json); spawn a fresh subagent and record findings in commit message audit lines, not PR description
Force-pushed: d81b095 → b2a2d34, then b2a2d34 → 5e8c5e5
Reported: on production tko.io under Safari, one spec occasionally fails with the opaque WebKit message `Importing a module script failed.` — most often a spec that imports a large chunk graph (e.g. `builds/reference/spec/bindingGlobalsBehavior.js`, which imports the full reference build). Chromium handles the same graph fine. The failure mode is known across Astro, SvelteKit, Nuxt, and Immich, and tracks against upstream WebKit bug 242740.

Two changes:

1. **Stagger worker launch** in tests.astro. With pool=4, every worker used to spawn its first iframe simultaneously, hitting the same shared module-graph chunks concurrently — that's what triggers the WebKit race. A 120ms per-worker offset spaces the first-round fetches (360ms total head start for pool=4) and removes the contention without materially slowing the run (still ~5.2s end-to-end; was ~5.4s).
2. **Richer error on failure** in tests-frame.html. Keep the original dynamic `import()` (no retry), but in the catch, re-fetch the spec URL with `cache: 'no-store'` and report HTTP status, content-length, and content-type alongside the original WebKit message in both the in-frame err div and the `import-error` postMessage. Gives something actionable when the failure does surface.
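The staggered launch in change 1 can be sketched as follows. `startWorker` is an assumed name for whatever launches a worker's first iframe; only the delay arithmetic is shown concretely.

```javascript
// Each worker's first iframe starts i * offsetMs after worker 0,
// so first-round chunk fetches never begin simultaneously.
function staggeredDelays (poolSize, offsetMs = 120) {
  return Array.from({ length: poolSize }, (_, i) => i * offsetMs)
}

const delays = staggeredDelays(4)
console.log(delays) // [ 0, 120, 240, 360 ]

// In the harness this would drive something like (startWorker assumed):
// delays.forEach((ms, i) => setTimeout(() => startWorker(i), ms))
```

For pool=4 the last worker starts 360ms after the first, matching the "360ms total head-start" figure above.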
Force-pushed: 5e8c5e5 → 31e3528
Summary
Reported: on production tko.io under Safari, one spec occasionally fails with the opaque WebKit message `Importing a module script failed.` — most often a spec importing a large chunk graph (e.g. `builds/reference/spec/bindingGlobalsBehavior.js`, which pulls in the full reference build). Chromium handles the same graph fine.
Root cause
WebKit's module-loader race when multiple browsing contexts start fetching the same shared module graph simultaneously. Pool=4 + 4 iframes spawning at once = 4 parallel dependency walks over overlapping chunks = occasional opaque failure.
Known across frameworks:
Fix
Test plan