feat: add configurable maxPayloadSize for WebSocket #4955
Conversation
- Add webSocketOptions.maxDecompressedMessageSize to DispatcherBase
- Propagate through Agent, Client, Pool
- Increase default from 4 MB to 64 MB
- Add estimated expansion check (10x ratio)
- Fix actual size tracking to use configured limit
- Add tests
Codecov Report: ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
## main #4955 +/- ##
==========================================
+ Coverage 92.93% 93.03% +0.09%
==========================================
Files 112 110 -2
Lines 35725 35786 +61
==========================================
+ Hits 33202 33292 +90
+ Misses 2523 2494 -29
|
KhafraDev left a comment:
My issues are still present. By default, code that works in ws, Bun, Deno, Firefox, Chrome, Safari, and every other browser and environment will fail in Node.
The security model of Firefox, Chrome and Safari is radically different from what we can offer in Node.js. The challenges are sometimes opposite. Out-of-memory errors cannot be easily caught by Node.js and V8 (Chrome can just crash a tab), so they are handled on a best-effort basis. The only safe way to prevent a crash is to avoid allocating that memory in the first place. I will investigate whether Bun and Deno deal with this in some different way.
|
Different security models, but the same spec. |
|
I did a pass over what limits other WebSocket implementations actually expose, because RFC 6455 leaves this up to implementations. A few things seem worth separating:

1) What the spec says

RFC 6455 explicitly says this is implementation-specific (§10.4).

Ref: https://www.rfc-editor.org/rfc/rfc6455#section-10.4

So the spec does not define one universal limit, but it does explicitly expect implementations to protect themselves.

2) Browser-style WebSocket APIs

For Chrome / Firefox / Safari, I couldn't find an official public doc that says "the maximum incoming WebSocket message size is N bytes". What MDN documents for the standard browser WebSocket, and what Deno documents for its browser-style API, is implementation-dependent.

So for browser-style APIs, the public contract appears to be closer to "implementation / memory dependent" than to a stable documented numeric cap.

3) Server/runtime implementations that do expose explicit limits

These are the clearest documented limits I found:

4) What this means for undici

The cross-implementation picture looks like this:

Also, most of the limits above are message-size limits in general, not specifically decompressed-message-size limits. In that sense, what this PR adds is actually more targeted than many existing APIs, because it directly addresses the decompression path. So IMO this PR is well aligned with the ecosystem:
|
|
Note that permessage-deflate is standardized in RFC 7692, not RFC 6455.
|
- Set maxDecompressedMessageSize to 0 to disable the limit
- Default remains 64 MB for decompression bomb protection
- Add test for disabled limit
|
It doesn't matter if it's server or client. If one is receiving data from untrusted sources, they are vulnerable to this attack (in Node.js). I'll keep digging for more proof points.
|
A bit more data after testing other runtimes in the same setup (host
References:
Given that, one question: would it make sense to also consider a more general maxPayloadSize? My intuition is that
|
I'm also OK with raising the limit to 64MB; I was too conservative with 4MB.
|
I'm inclined to think we should be cautious about introducing limits like 64MB that aren’t explicitly in the specification. Since the client can't currently restrict uncompressed messages, limiting only the compressed path might not provide full protection against OOM crashes; raw data could still potentially be streamed up to the 2GB limit. Unless we can implement a unified maxPayloadSize for both paths, it might be more consistent to default to the protocol or platform maximum (approx. 2GB). It may be better to let developers configure specific restrictions themselves, rather than having the library impose caps that aren't standard in browser environments. |
No, it has been there forever; the commit you are referring to just moves a positional argument to an option.
No, it works the same way for both the client and the server: 0 means no limit. The extension is enabled by default on the client and disabled by default on the server due to memory fragmentation issues, see https://github.com/websockets/ws?tab=readme-ov-file#websocket-compression. Anyway, a lot has changed/improved since that default was chosen, and it is now used in many production environments. You can count the size of the chunks in the listener of the
My bad, in jsdoc
This is what the PR is doing. |
Yes, but the class is not instantiated with that value (https://github.com/websockets/ws/blob/84392554/lib/websocket-server.js#L299, https://github.com/websockets/ws/blob/84392554/lib/websocket.js#L761). |
- Apply limit to both compressed and uncompressed payloads
- Add raw payload size check before accepting uncompressed data
- Update types and docs to reflect new option name
- Add test for raw uncompressed payload limit enforcement
|
@tsctx @KhafraDev PTAL |
|
I would need a test that causes node to run out of memory without the limit (on main) before I would approve. Right now, the tests only check that messages > the configured payload size close the connection/error. |
|
I'll send it to you privately, I prefer not to disclose publicly. |
|
Shared. Note that it takes just 130KB to generate 128MB of data; it's the kind of expansion that is classified as DoS.
Can we eventually add the repro in undici? I understand why it is private currently. |
|
Once the vulnerability is disclosed, yes: https://hackerone.com/reports/3481206. There is probably enough here to understand the problem and recreate it tbh, but I prefer not to make things easy.
|
@KhafraDev are you ok for landing this then? |
|
I agree that introducing a limit is reasonable. However, I have concerns about the 64MB default. While it serves as a baseline, it leaves almost no margin for modern, data-intensive applications.

The issue is that this limit cannot be changed without installing undici directly, so 64MB may inadvertently become a bottleneck for typical development rather than a protective safety net. In real-world scenarios, data transfers can easily spike toward 60-70MB. A 64MB limit is low enough to be hit by legitimate usage, while 100MB is high enough to accommodate those spikes and still provide robust protection against genuinely malicious payloads.

Since Deno uses 64MB, I can understand that 64MB is appropriate in that context, but I believe aligning with the 100MB limit in the "ws" library, which you mentioned earlier, is the right direction. It provides a silent safety measure that remains invisible to the average user while avoiding the need to install undici just to handle slightly larger payloads. Also, while 64MB makes sense for Deno, it could cause compatibility issues where code that works in a browser (which generally allows significantly higher message thresholds) fails here.

I initially thought we should allow a much higher limit to match the practical spec, but 100MB feels like a very reasonable compromise. Given that it is difficult for users to change the limit when undici is used as the global implementation, what do you think about setting the default to 100MB so the implementation remains practical without compromising security?
|
I think we can go to 128MB without many problems. The default is 4MB right now, so anything above that would be good.
|
@tsctx @KhafraDev ptal |
```js
if (
  this.#maxPayloadSize > 0 &&
  !isControlFrame(this.#info.opcode) &&
  !this.#info.compressed &&
```
I don't understand why we ignore compressed messages here, but then perform the same logic in the permessage-deflate handler?
we are doing this kind of check in 3 places:
- here for normal packets
- early rejection for compressed packets (estimating a 10x expansion rate, we can avoid trying to decompress things, saves CPU)
- actual/precise validation for compressed packets
The 2nd could be optional, but it saves CPU cycles.
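The second check, as described above, can be sketched as a pure predicate. This is an illustrative sketch only; the names (`shouldRejectCompressedEarly`, `ESTIMATED_EXPANSION_RATIO`) are made up and the 10x figure mirrors the PR's "estimated expansion check", not undici's actual internals:

```js
// Sketch of the early rejection heuristic for compressed frames:
// assume a 10x expansion and bail out before spending CPU on inflation
// when even that estimate would blow past the configured limit.
const ESTIMATED_EXPANSION_RATIO = 10

function shouldRejectCompressedEarly (compressedLength, maxPayloadSize) {
  if (maxPayloadSize === 0) return false // 0 disables the limit entirely
  return compressedLength * ESTIMATED_EXPANSION_RATIO > maxPayloadSize
}

// A 20 MiB compressed frame against a 128 MiB limit: 20 * 10 = 200 MiB
// estimated, so it is rejected without ever touching zlib.
console.log(shouldRejectCompressedEarly(20 * 1024 * 1024, 128 * 1024 * 1024)) // true
```

The estimate is deliberately cheap; frames that pass it still get the precise chunk-by-chunk validation during decompression.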
but we're skipping compressed packets here; it's not much of an early return if you still have to go all the way to the decompression handler.
What I'm asking is: why are we skipping compressed packets here?
We are skipping compressed packets here because the logic is different, as @tsctx asked.
The logic is not different.
Assuming we send an uncompressed and a compressed frame of 129MB each:
- Uncompressed: early exits here
- Compressed: skips this, goes through READ_DATA, goes to permessage deflate handler, attempt to inflate, if payload size > max payload size, exit
I understand that, but why skip them? Couldn't an attacker just send a compressed frame that has an unlimited size, where undici will collect all fragments, attempt to inflate, then fail much later?
The check is done chunk by chunk during decompression; as soon as it goes over the limit, the connection is interrupted.
I’ve looked into it, and while we check the max length per message unit, we seem to be missing a check for the total buffered size of fragmented messages.
We don't actually need a separate check for compression; we should simply trigger an early exit if the accumulated size plus the current payload exceeds the limit. Even with compressed data, we can perform this early check if the payload itself is already too large.
This is already handled for uncompressed fragments, but for the compressed path, we should also pass the current buffered size to the decompression side to ensure the same validation.
I've written some pseudo-code to illustrate the issue more clearly.
```js
createServer((socket) => {
  // Start by fragmenting the message.
  // At this stage, 64MB has been accumulated in the buffer.
  socket.send(alloc(64 * 1024 * 1024), { fin: false });

  // Case 1: Early termination is not triggered because compressed messages
  // are currently being skipped.
  // The pre-compressed size is 100MB. Since 64MB is already buffered,
  // it clearly exceeds the limit and should be terminated immediately.
  socket.send(alloc_post_compressed(100 * 1024 * 1024), { fin: true });
  // alloc_post_compressed: the size represents the "expected final output".

  // Case 2: The current accumulated size is not passed to the decompressor,
  // so this segment is treated as only 100MB, bypassing the decompression limit.
  // We should pass the current buffer size to the decompressor and
  // validate the total size (current buffer + decompressed payload).
  socket.send(alloc_pre_compressed(100 * 1024 * 1024), { fin: true });
  // alloc_pre_compressed: the size represents the "actual memory footprint".
});
```
Thank you @tsctx this is what I was trying to articulate, lol.
The issue is if the compressed data is larger than the maxPayloadSize, we shouldn't bother going to READ_DATA and inflating the data which is already larger than the maxPayloadSize.
The permessage-deflate handler catches it, once the decompressed data is larger than the maxPayloadSize. This only happens once we have a full frame, so an attacker could theoretically send n fragmented frames each with unlimited size that are kept in memory until we receive the entirety of the frame.
From READ_DATA, should be helpful in understanding:
```js
if (this.#byteOffset < this.#info.payloadLength) {
  return callback()
}

const body = this.consume(this.#info.payloadLength)
```
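The fix being discussed amounts to validating "already buffered fragment bytes + declared length of the incoming frame" before reading the frame at all, for compressed and uncompressed frames alike. A minimal standalone sketch (the names mirror the fields in the diff being reviewed, but this is not undici's code):

```js
// Sketch of the cumulative check: reject before buffering a frame whose
// declared payload length, added to fragment bytes already held, would
// exceed the configured limit.
function validatePayloadLength (fragmentsBytes, payloadLength, maxPayloadSize) {
  if (maxPayloadSize === 0) return true // 0 disables the limit
  return fragmentsBytes + payloadLength <= maxPayloadSize
}

// 64 MiB already buffered + a declared 100 MiB continuation frame exceeds
// a 128 MiB limit, so the connection can be torn down before reading data.
console.log(validatePayloadLength(64 * 1024 * 1024, 100 * 1024 * 1024, 128 * 1024 * 1024)) // false
```

Because the check runs on the declared frame length, it fires before any fragment data is consumed, closing the "collect unlimited fragments, fail only at inflate time" window described above.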
@KhafraDev ptal |
|
I'm not sure why compressed messages are being handled separately in this case still |
|
I haven't done extensive testing, but KhafraDev@903a6f0 simplifies this quite a bit (leaving most of the handling to the actual parser, not the permessage deflate handler) and fixes what I was talking about in the unresolved review comment and fixes a few bugs. |
|
Thanks, but I don't think we can take that as-is: it drops cumulative checking for fragmented uncompressed messages, and it also starts rejecting
I think the current direction is safer: keep
|
It should do both and all the permessage-deflate-limit tests pass.
This is what I am talking about in the review: it should do both. Someone can send a compressed frame of unlimited size and undici will not error until we attempt to inflate. |
```js
this.#maxPayloadSize > 0 &&
!isControlFrame(this.#info.opcode) &&
!this.#info.compressed &&
this.#fragmentsBytes + this.#info.payloadLength > this.#maxPayloadSize
```
we're adding the total frame size to the currently buffered frame size
No. The idea is that this check covers "already buffered fragments + current frame".
```js
this.#info.fin = fin
this.#info.fragmented = fragmented

if (this.#state === parserStates.READ_DATA && !this.#validatePayloadLength()) {
```
there's only one branch where state can be set to READ_DATA from here; it should be moved
|
@KhafraDev can you please send a PR against this with the fix you want to see? Your commit also had other changes. |
|
that commit is everything I want to see |
Co-Authored-By: Khafra <maitken033380023@gmail.com>
|
Done |

Fixes #4944
Changes
Usage