
Commit b35617b

[miniflare] Close all open handles on dispose to prevent process hangs (#13515)
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
1 parent cc1413a commit b35617b

14 files changed: 178 additions & 17 deletions

Lines changed: 14 additions & 0 deletions
```diff
@@ -0,0 +1,14 @@
+---
+"miniflare": patch
+---
+
+fix: close all open handles on dispose to prevent process hangs
+
+Several resources were not being properly cleaned up during `Miniflare.dispose()`, which could leave the Node.js event loop alive and cause processes (particularly tests using `node --test`) to hang instead of exiting cleanly:
+
+- The internal undici `Pool` used to dispatch fetch requests to the workerd runtime was not closed. Lingering TCP sockets from this pool could keep the event loop alive indefinitely.
+- `WebSocketServer` instances for live reload and WebSocket proxying were never closed, leaving connected clients' sockets open.
+- The `InspectorProxy` was not closing its runtime WebSocket connection, relying on process death to break the connection.
+- `HyperdriveProxyController.dispose()` had a missing `return` in a `.map()` callback, causing `Promise.allSettled` to resolve immediately without waiting for `net.Server` instances to close.
+- `ProxyClientBridge` was not clearing its finalization batch `setTimeout` during disposal.
+- `InspectorProxyController.dispose()` was not calling `server.closeAllConnections()` before `server.close()`, so active HTTP keep-alive or WebSocket connections could prevent the close callback from firing.
```
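The missing-`return` bug in the `.map()` callback is easy to reproduce in isolation. A minimal sketch (illustrative names only, not Miniflare's actual code): a `.map()` callback with a block body and no `return` yields `undefined`, so `Promise.allSettled` has nothing to await and resolves immediately.

```typescript
// Stand-in for a net.Server with a callback-style close().
const servers = [{ close: (cb: () => void) => setTimeout(cb, 10) }];

// Buggy: block body with no `return` — the mapped array is [undefined],
// so Promise.allSettled(buggy) resolves before any server has closed.
const buggy = servers.map((s) => {
	new Promise<void>((resolve) => s.close(resolve));
});

// Fixed: expression body (or an explicit `return`) so allSettled
// actually waits for each server's close callback.
const fixed = servers.map(
	(s) => new Promise<void>((resolve) => s.close(resolve))
);
```

The fix in the commit is exactly this one-token change: returning the promise from the callback so `Promise.allSettled` has real promises to settle.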
Lines changed: 9 additions & 0 deletions
```diff
@@ -0,0 +1,9 @@
+---
+"wrangler": patch
+---
+
+fix: ensure esbuild context is disposed during teardown
+
+The esbuild bundler cleanup function could race with the initial build. If `BundlerController.teardown()` ran before the initial `build()` completed, the `stopWatching` closure variable would still be `undefined`, so the esbuild context was never disposed. This left the esbuild child process running, keeping the Node.js event loop alive and causing processes to hang instead of exiting cleanly.
+
+The cleanup function now awaits the build promise before calling `stopWatching`, ensuring the esbuild context is always properly disposed.
```
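The race and the fix can be sketched as follows. All names and timings here are illustrative assumptions, not wrangler's actual internals; the point is only that `stopWatching` does not exist until the build finishes, so teardown must await the build first.

```typescript
let disposed = false;
// Assigned only once the (simulated) initial build finishes, mirroring
// how the esbuild context only exists after build() completes.
let stopWatching: (() => Promise<void>) | undefined;

async function build(): Promise<void> {
	await new Promise((resolve) => setTimeout(resolve, 10)); // simulate esbuild startup
	stopWatching = async () => {
		disposed = true; // stands in for ctx.dispose()
	};
}

const buildPromise = build();

// Buggy teardown would read stopWatching immediately, see `undefined`,
// and silently skip disposal. Fixed teardown awaits the build first.
async function teardown(): Promise<void> {
	await buildPromise;
	await stopWatching?.();
}
```

Even if `teardown()` is called synchronously after startup, awaiting `buildPromise` guarantees the closure variable is populated before it is invoked.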

fixtures/start-worker-node-test/package.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@
 	"author": "",
 	"type": "module",
 	"scripts": {
-		"test:ci": "node --test"
+		"test:ci": "node --test --test-timeout=15000"
 	},
 	"devDependencies": {
 		"@cloudflare/workers-tsconfig": "workspace:*",
```

fixtures/start-worker-node-test/src/config-errors.test.js

Lines changed: 2 additions & 0 deletions
```diff
@@ -50,6 +50,7 @@ describe("startWorker - configuration errors", () => {
 		const worker = await unstable_startWorker({
 			config: "wrangler.json",
 			dev: {
+				persist: false,
 				server: {
 					port: await getPort(),
 				},
@@ -76,6 +77,7 @@ describe("startWorker - configuration errors", () => {
 		const worker = await unstable_startWorker({
 			config: "wrangler.json",
 			dev: {
+				persist: false,
 				server: {
 					port: await getPort(),
 				},
```

fixtures/start-worker-node-test/src/index.test.js

Lines changed: 4 additions & 1 deletion
```diff
@@ -9,7 +9,10 @@ describe("worker", () => {
 	let worker;

 	before(async () => {
-		worker = await unstable_startWorker({ config: "wrangler.json" });
+		worker = await unstable_startWorker({
+			config: "wrangler.json",
+			dev: { persist: false },
+		});
 	});

 	test("hello world", async () => {
```

packages/miniflare/package.json

Lines changed: 0 additions & 1 deletion
```diff
@@ -87,7 +87,6 @@
 		"devalue": "^5.6.3",
 		"devtools-protocol": "^0.0.1182435",
 		"esbuild": "catalog:default",
-		"exit-hook": "2.2.1",
 		"expect-type": "^0.15.0",
 		"get-port": "^7.1.0",
 		"glob-to-regexp": "0.4.1",
```
Lines changed: 96 additions & 0 deletions
```typescript
/**
 * Lightweight replacement for the `exit-hook` npm package.
 *
 * The third-party `exit-hook` registers process listeners (`exit`, `SIGINT`,
 * `SIGTERM`, and `message`) that are never removed — even after every
 * callback has been unregistered. Those dangling listeners are harmless in
 * long-running processes, but in short-lived child processes (e.g. those
 * spawned by `node --test`) they can prevent clean exit by holding refs on
 * handles such as the IPC channel.
 *
 * This module tracks the number of active registrations and removes **all**
 * process listeners once the last registration is unregistered, so they
 * cannot keep the event loop alive after dispose.
 */

const callbacks = new Set<() => void>();
let registered = false;
let called = false;

function runCallbacks(): void {
	if (called) {
		return;
	}
	called = true;
	for (const callback of callbacks) {
		callback();
	}
}

function onExit(): void {
	runCallbacks();
}

function onSignalInt(): void {
	runCallbacks();
	// eslint-disable-next-line unicorn/no-process-exit -- intentional: replicate default SIGINT behavior
	process.exit(128 + 2);
}

function onSignalTerm(): void {
	runCallbacks();
	// eslint-disable-next-line unicorn/no-process-exit -- intentional: replicate default SIGTERM behavior
	process.exit(128 + 15);
}

function onMessage(message: unknown): void {
	if (message === "shutdown") {
		runCallbacks();
		// eslint-disable-next-line unicorn/no-process-exit -- intentional: PM2 graceful shutdown
		process.exit(0);
	}
}

function addListeners(): void {
	registered = true;
	called = false;
	process.on("exit", onExit);
	process.on("SIGINT", onSignalInt);
	process.on("SIGTERM", onSignalTerm);
	// Only listen for IPC "shutdown" messages (PM2 support) when the process
	// actually has an IPC channel. Even without this guard the listener is
	// harmless when there is no channel, but being explicit avoids any
	// accidental ref of a future channel.
	if (process.send !== undefined) {
		process.on("message", onMessage);
	}
}

function removeListeners(): void {
	registered = false;
	process.removeListener("exit", onExit);
	process.removeListener("SIGINT", onSignalInt);
	process.removeListener("SIGTERM", onSignalTerm);
	process.removeListener("message", onMessage);
}

/**
 * Register a callback to run when the process exits. Returns a function
 * that unregisters the callback. When the last callback is unregistered,
 * all underlying process listeners are removed so they cannot keep the
 * event loop alive.
 */
export function exitHook(callback: () => void): () => void {
	callbacks.add(callback);

	if (!registered) {
		addListeners();
	}

	return () => {
		callbacks.delete(callback);
		if (callbacks.size === 0 && registered) {
			removeListeners();
		}
	};
}
```
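The registration counting is the crux of the module: process listeners attach on the first registration and detach with the last. A condensed, self-contained sketch of just that mechanism (exit listener only, omitting the signal and IPC handlers; `miniExitHook` is an illustrative name, not the exported API):

```typescript
const hooks = new Set<() => void>();
const onProcessExit = () => {
	for (const hook of hooks) hook();
};

function miniExitHook(callback: () => void): () => void {
	// Attach the shared process listener only for the first registration…
	if (hooks.size === 0) process.on("exit", onProcessExit);
	hooks.add(callback);
	return () => {
		hooks.delete(callback);
		// …and detach it once the last registration is gone, mirroring how
		// the module above removes all its listeners when unused.
		if (hooks.size === 0) process.removeListener("exit", onProcessExit);
	};
}

const before = process.listenerCount("exit");
const unregister = miniExitHook(() => {});
const during = process.listenerCount("exit"); // before + 1
unregister();
const after = process.listenerCount("exit"); // back to baseline
```

This is the behavioral difference from the `exit-hook` package, whose listeners remain attached for the life of the process even with zero registered callbacks.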

packages/miniflare/src/index.ts

Lines changed: 24 additions & 1 deletion
```diff
@@ -11,7 +11,6 @@ import util from "node:util";
 import zlib from "node:zlib";
 import { checkMacOSVersion } from "@cloudflare/cli";
 import { removeDir, removeDirSync } from "@cloudflare/workers-utils";
-import exitHook from "exit-hook";
 import { $ as colors$, green } from "kleur/colors";
 import stoppable from "stoppable";
 import { getGlobalDispatcher, Pool } from "undici";
@@ -21,6 +20,7 @@ import SCRIPT_MINIFLARE_ZOD from "worker:shared/zod";
 import { WebSocketServer } from "ws";
 import { z } from "zod";
 import { fallbackCf, setupCf } from "./cf";
+import { exitHook } from "./exit-hook";
 import {
 	coupleWebSocket,
 	DispatchFetchDispatcher,
@@ -2408,6 +2408,9 @@ export class Miniflare {
 		}

 		if (previousEntryURL?.toString() !== this.#runtimeEntryURL.toString()) {
+			// Close the previous dispatcher if the entry URL changed, to avoid
+			// leaking sockets from the old Pool.
+			void this.#runtimeDispatcher?.close().catch(() => {});
 			this.#runtimeDispatcher = new Pool(this.#runtimeEntryURL, {
 				connect: { rejectUnauthorized: false },
 				// Disable timeouts for local dev — long-running responses (streaming,
@@ -2420,6 +2423,7 @@ export class Miniflare {
 		// Set up a direct dispatcher to the dev-registry-proxy socket so we can
 		// push registry updates without routing through the entry worker.
 		const devRegistryPort = maybeSocketPorts.get(SOCKET_DEV_REGISTRY);
+		void this.#devRegistryDispatcher?.close().catch(() => {});
 		if (devRegistryPort !== undefined) {
 			this.#devRegistryDispatcher = new Pool(
 				new URL(`http://127.0.0.1:${devRegistryPort}`)
@@ -3037,8 +3041,27 @@ export class Miniflare {
 		// Cleanup as much as possible even if `#init()` threw
 		await this.#proxyClient?.dispose();
 		await this.#runtime?.dispose();
+		// Close the undici Pool used for dispatching fetch requests to the
+		// runtime. This must happen after the runtime is disposed, so that
+		// in-flight connections are broken and close immediately. Without this,
+		// lingering sockets in the Pool can keep the Node.js event loop alive.
+		// The Pool may already be destroyed (e.g., if workerd was SIGKILL'd and
+		// all connections broke), so ignore ClientDestroyedError.
+		try {
+			await this.#runtimeDispatcher?.close();
+		} catch {}
+		// Also close the dev-registry dispatcher (same issue as above).
+		try {
+			await this.#devRegistryDispatcher?.close();
+		} catch {}

 		await this.#stopLoopbackServer();
+		// Close WebSocket servers so any connected clients are disconnected
+		// and their sockets don't keep the event loop alive. These use
+		// `noServer: true` so they don't own an HTTP server, but connected
+		// WebSocket clients still hold open sockets.
+		this.#liveReloadServer.close();
+		this.#webSocketServer.close();
 		// Best-effort cleanup: on Windows, workerd may not release file handles
 		// immediately after disposal, causing EBUSY errors. The temp directory
 		// lives in os.tmpdir() so the OS will clean it up eventually.
```
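The try/catch-and-continue shape in the dispose hunk generalizes to any resource whose `close()` may reject once the peer is already gone. A hedged sketch of that pattern (the `Closable` interface and `closeQuietly` helper are illustrative, not part of Miniflare):

```typescript
interface Closable {
	close(): Promise<void>;
}

// Close a resource but swallow rejection, e.g. undici's ClientDestroyedError
// when the pool was already destroyed because workerd was SIGKILL'd and every
// connection broke. Dispose should keep going either way.
async function closeQuietly(resource: Closable | undefined): Promise<void> {
	try {
		await resource?.close();
	} catch {
		// already closed/destroyed — nothing left to release
	}
}
```

Ordering still matters: the commit closes the pools only after `this.#runtime?.dispose()`, so in-flight connections are already broken and `close()` resolves immediately instead of draining.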

packages/miniflare/src/plugins/core/inspector-proxy/inspector-proxy-controller.ts

Lines changed: 4 additions & 0 deletions
```diff
@@ -324,6 +324,10 @@ export class InspectorProxyController {
 		await Promise.all(this.#proxies.map((proxy) => proxy.dispose()));

 		const server = await this.#server;
+		// Force-close active connections so server.close() resolves immediately.
+		// Without this, active HTTP keep-alive or WebSocket connections prevent
+		// the close callback from firing, hanging the dispose.
+		server.closeAllConnections();
 		return new Promise((resolve, reject) => {
 			server.close((err) => (err ? reject(err) : resolve()));
 		});
```
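The same pattern can be demonstrated with a plain `node:http` server (a sketch, not the controller's real code). `server.closeAllConnections()` (Node 18.2+) severs live sockets so the `close` callback can fire; `server.close()` alone only stops accepting new connections and waits for existing ones to end on their own.

```typescript
import http from "node:http";

// Promisified dispose: sever any keep-alive/WebSocket sockets first,
// then wait for the server's close callback.
function disposeServer(server: http.Server): Promise<void> {
	server.closeAllConnections();
	return new Promise((resolve, reject) => {
		server.close((err) => (err ? reject(err) : resolve()));
	});
}
```

With keep-alive clients attached, omitting `closeAllConnections()` leaves `server.close()` waiting until the sockets idle out, which is exactly the hang this hunk fixes.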

packages/miniflare/src/plugins/core/inspector-proxy/inspector-proxy.ts

Lines changed: 1 addition & 0 deletions
```diff
@@ -164,5 +164,6 @@ export class InspectorProxy {
 		clearInterval(this.#runtimeKeepAliveInterval);

 		this.#devtoolsWs?.close();
+		this.#runtimeWs?.close();
 	}
 }
```
