
feat: add configurable maxPayloadSize for WebSocket #4955

Merged
mcollina merged 9 commits into main from feature/configurable-max-decompressed-message-size
Apr 13, 2026
Conversation

@mcollina
Member

@mcollina mcollina commented Apr 2, 2026

Fixes #4944

Changes

  • Add dispatcher-level webSocketOptions.maxPayloadSize configuration
  • Default: 128 MB for both compressed and uncompressed payloads
  • Add an estimated-expansion check (10x ratio) for decompression bomb protection
  • Add a raw payload size check before accepting uncompressed data
  • Fix actual size tracking to use the configured limit

Usage

import { Agent, WebSocket } from 'undici'

const agent = new Agent({
  webSocket: {
    maxPayloadSize: 128 * 1024 * 1024 // 128 MB
  }
})

const ws = new WebSocket('wss://example.com', { dispatcher: agent })

- Add webSocketOptions.maxDecompressedMessageSize to DispatcherBase
- Propagate through Agent, Client, Pool
- Increase default from 4 MB to 64 MB
- Add estimated expansion check (10x ratio)
- Fix actual size tracking to use configured limit
- Add tests
@mcollina mcollina changed the title from "feat: add configurable maxDecompressedMessageSize for WebSocket" to "feat: add configurable maxDecompressedMessageSize for WebSocket in Agent" Apr 2, 2026
@mcollina mcollina requested review from KhafraDev, ronag and tsctx April 2, 2026 05:51
@codecov-commenter

codecov-commenter commented Apr 2, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.03%. Comparing base (84f23e2) to head (22fe151).
⚠️ Report is 60 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4955      +/-   ##
==========================================
+ Coverage   92.93%   93.03%   +0.09%     
==========================================
  Files         112      110       -2     
  Lines       35725    35786      +61     
==========================================
+ Hits        33202    33292      +90     
+ Misses       2523     2494      -29     


Member

@KhafraDev KhafraDev left a comment


My issues are still present. By default, code that works in ws, Bun, deno, firefox, chrome, Safari, and every other browser and environment will fail in node.

@mcollina
Member Author

mcollina commented Apr 2, 2026

My issues are still present. By default, code that works in ws, Bun, deno, firefox, chrome, Safari, and every other browser and environment will fail in node.

The security model of Firefox, Chrome and Safari is radically different from what we can offer in Node.js. The challenges are sometimes opposite.

Out-of-memory errors cannot be easily caught by Node.js and V8 (Chrome can just crash a tab) and they are handled as best-effort cases. The only safe way to prevent a crash is to provide a solution to avoid allocating that memory.

I will investigate Bun and Deno if they deal with this in some different way.

@KhafraDev
Member

Different security models, but the same spec. ws added a maxPayloadSize option 2 weeks ago (possibly inspired by undici's CVE?) that is disabled by default as well.

@mcollina
Member Author

mcollina commented Apr 2, 2026

I did a pass over what limits other WebSocket implementations actually expose, because RFC 6455 leaves this up to implementations.

A few things seem worth separating:

  • frame / message limits
  • compressed vs decompressed limits
  • documented public API limits vs implementation details

1) What the spec says

RFC 6455 explicitly says this is implementation-specific. In §10.4:

Implementations that have implementation- and/or platform-specific limitations regarding the frame size or total message size after reassembly from multiple frames MUST protect themselves against exceeding those limits ... Such an implementation SHOULD impose a limit on frame sizes and the total message size after reassembly from multiple frames.

Ref: https://www.rfc-editor.org/rfc/rfc6455#section-10.4

So the spec does not define one universal limit, but it does explicitly expect implementations to protect themselves.

2) Browser-style WebSocket APIs

For Chrome / Firefox / Safari, I couldn't find an official public doc that says “the maximum incoming WebSocket message size is N bytes”. What MDN does document for the standard browser WebSocket API is that it has no backpressure, so it can fill memory if messages arrive faster than the application can process them.

Deno's browser-style WebSocket docs are similar: they expose the standard API, but I couldn't find a documented numeric incoming-message limit there either.

So for browser-style APIs, the public contract appears to be closer to "implementation / memory dependent" than to a stable documented numeric cap.

3) Server/runtime implementations that do expose explicit limits

These are the clearest documented limits I found:

4) What this means for undici

The cross-implementation picture looks like this:

  • The spec expects implementations to defend themselves.
  • Browsers don’t seem to publish a stable numeric limit in their public WebSocket API docs.
  • Server/runtime libraries usually do expose an explicit configurable limit.
  • The defaults vary a lot: from 16 KiB (uWS) to 100 MiB (ws).

Also, most of the limits above are message-size limits in general, not specifically decompressed-message-size limits. In that sense, what this PR adds is actually more targeted than many existing APIs, because it directly addresses the permessage-deflate expansion case instead of only limiting wire payload size.

So IMO this PR is well aligned with the ecosystem:

  1. RFC 6455 says implementations should protect themselves.
  2. Server-side WebSocket stacks commonly expose a configurable limit.
  3. There is no obvious single "everyone uses X" default across the ecosystem anyway.
  4. A configurable decompressed size limit is a reasonable server/runtime safeguard, especially in Node where OOM is process-fatal rather than "just crash a tab".

@KhafraDev
Member

KhafraDev commented Apr 2, 2026

Note that permessage-deflate is standardized in RFC 7692 and not RFC 6455

- Set maxDecompressedMessageSize to 0 to disable the limit
- Default remains 64 MB for decompression bomb protection
- Add test for disabled limit
@mcollina
Member Author

mcollina commented Apr 3, 2026

It doesn't matter whether it's server or client. If one is receiving data from untrusted sources, they are vulnerable to this attack (in Node.js).

I'll keep digging on more proof points.

@mcollina
Member Author

mcollina commented Apr 3, 2026

A bit more data after testing other runtimes in the same setup (host ws server, perMessageDeflate: true, client in a 256MB memory-capped Docker container):

  • Bun 1.3.11 client negotiated permessage-deflate and I could reproduce the same broad failure mode as Node: 64 MiB and 80 MiB succeeded, but 96 MiB and 128 MiB caused the client process to be killed with exit 137. On the server side that showed up as close code 1006.

  • Deno 2.7.11 client behaved differently: it did not negotiate permessage-deflate at all in this repro, so it doesn't exercise the decompression path. Instead it has an explicit size limit and fails cleanly with Error: Frame too large.

    • Empirically, 67108863 bytes succeeds and 67108864 bytes fails.
    • That matches Deno's current implementation, which uses fastwebsockets; fastwebsockets defaults max_message_size to 64 << 20 and rejects when payload_len >= self.max_message_size.
    • It also documents that permessage-deflate is not supported yet.
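The empirical boundary lines up exactly with that default (quick arithmetic check in plain JavaScript):

```javascript
// fastwebsockets rejects when payload_len >= max_message_size,
// so with the default of 64 << 20 the largest accepted payload is one byte less.
const maxMessageSize = 64 << 20
console.log(maxMessageSize)     // first rejected size
console.log(maxMessageSize - 1) // largest accepted size
```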

References:

Given that, one question: would it make sense to also consider a more general maxPayload / maxMessageSize style option, in addition to (or instead of) maxDecompressedMessageSize?

My intuition is that maxDecompressedMessageSize is the right targeted fix for the expansion problem behind this issue, but a generic payload/message limit may be a better overall protection story. At the same time, without compression-driven expansion it probably isn't in the same DoS/CVE class.

@mcollina
Member Author

mcollina commented Apr 3, 2026

I'm also ok with raising the limit to 64MB; I was too conservative with 4MB

@tsctx
Member

tsctx commented Apr 3, 2026

I'm inclined to think we should be cautious about introducing limits like 64MB that aren’t explicitly in the specification. Since the client can't currently restrict uncompressed messages, limiting only the compressed path might not provide full protection against OOM crashes; raw data could still potentially be streamed up to the 2GB limit.

Unless we can implement a unified maxPayloadSize for both paths, it might be more consistent to default to the protocol or platform maximum (approx. 2GB). It may be better to let developers configure specific restrictions themselves, rather than having the library impose caps that aren't standard in browser environments.

@lpinca
Member

lpinca commented Apr 3, 2026

ws added a maxPayloadSize option 2 weeks ago

No, it has been there since forever; the commit you are referring to is just moving a positional argument to an option.

ws' maxPayload option: I believe 100 MB limit is server-only. For the client, it is disabled by default.

No, it works in the same way for both the client and the server. 0 means no limit.

The extension is enabled by default on the client and is disabled by default on the server due to memory fragmentation issues, see https://github.com/websockets/ws?tab=readme-ov-file#websocket-compression. Anyway a lot has changed/improved since the defaults decision and it is now used in many production environments.

You can count the size of the chunks in the listener of the 'data' event of the zlib.InflateRaw stream to prevent "zip bombs", see https://github.com/websockets/ws/blob/84392554/lib/permessage-deflate.js#L482-L506.

@KhafraDev
Member

No, it works in the same way for both the client and the server. 0 means no limit.

My bad, in jsdoc [param=value] means value is the default (https://jsdoc.app/tags-param#optional-parameters-and-default-values)

You can count the size of the chunks in the listener of the 'data' event of the zlib.InflateRaw stream to prevent "zip bombs"

This is what the PR is doing.

@lpinca
Member

lpinca commented Apr 3, 2026

in jsdoc [param=value] means value is the default

Yes, but the class is not instantiated with that value (https://github.com/websockets/ws/blob/84392554/lib/websocket-server.js#L299, https://github.com/websockets/ws/blob/84392554/lib/websocket.js#L761).

- Apply limit to both compressed and uncompressed payloads
- Add raw payload size check before accepting uncompressed data
- Update types and docs to reflect new option name
- Add test for raw uncompressed payload limit enforcement
@mcollina mcollina changed the title from "feat: add configurable maxDecompressedMessageSize for WebSocket in Agent" to "feat: add configurable maxPayloadSize for WebSocket" Apr 3, 2026
@mcollina
Member Author

mcollina commented Apr 7, 2026

@tsctx @KhafraDev PTAL

@KhafraDev
Member

KhafraDev commented Apr 7, 2026

I would need a test that causes node to run out of memory without the limit (on main) before I would approve. Right now, the tests only check that messages > the configured payload size close the connection/error.

@mcollina
Member Author

mcollina commented Apr 7, 2026

I'll send it to you privately, I prefer not to disclose publicly.

@mcollina
Member Author

mcollina commented Apr 7, 2026

Shared. Note that it takes just 130KB to generate 128MB of data; it's the kind of expansion that is classified as a DoS.

@KhafraDev
Member

I'll send it to you privately, I prefer not to disclose publicly.

Can we eventually add the repro in undici? I understand why it is private currently.

@mcollina
Member Author

mcollina commented Apr 7, 2026

once the vulnerability is disclosed, yes.

https://hackerone.com/reports/3481206

There is probably enough here to understand the problem and recreate it tbh, but I prefer to not make things easy.

@mcollina
Member Author

mcollina commented Apr 8, 2026

@KhafraDev are you ok for landing this then?

@tsctx
Member

tsctx commented Apr 8, 2026

I agree that introducing a limit is reasonable. However, I have concerns about the 64MB default. While it serves as a baseline, it leaves almost no margin for modern, data-intensive applications.

The issue is that since this limit cannot be changed without installing undici, 64MB may inadvertently become a bottleneck for typical development rather than a protective safety net. In real-world scenarios, data transfers can easily spike toward 60-70MB. A 64MB limit is low enough to be hit by legitimate usage, yet 100MB is high enough to accommodate those spikes while still providing robust protection against genuinely malicious payloads.

Since Deno uses 64MB, I can understand that 64MB is appropriate in that context, but I believe that aligning with the 100MB limit in the "ws" library—which you mentioned earlier—is the right direction. It provides a "silent safety measure" that remains invisible to the average user while preventing the need to install undici just to handle slightly larger payloads.

Also, while 64MB makes sense for Deno, it could cause compatibility issues where code that works in a browser (which generally allows for significantly higher message thresholds) fails here. I initially thought we should allow a much higher limit to match the practical spec, but 100MB feels like a very reasonable compromise.

Given that it is difficult for users to change the limit when used as a global implementation, what do you think about setting the default to 100MB to ensure the implementation of undici remains practical without compromising security?

Comment thread lib/web/websocket/receiver.js Outdated
@mcollina
Member Author

mcollina commented Apr 8, 2026

I think we can go to 128MB without many problems. The default is 4MB right now, so anything above that would be good.

@mcollina
Member Author

mcollina commented Apr 9, 2026

@tsctx @KhafraDev ptal

Member

@tsctx tsctx left a comment


lgtm

Comment thread lib/web/websocket/receiver.js Outdated
if (
this.#maxPayloadSize > 0 &&
!isControlFrame(this.#info.opcode) &&
!this.#info.compressed &&
Member


I don't understand why we ignore compressed messages here, but then perform the same logic in the permessage-deflate handler?

Member Author


we are doing this kind of check in 3 places:

  1. here for normal packets
  2. early rejection for compressed packets (estimating a 10x expansion rate, we can avoid trying to decompress things, saves CPU)
  3. actual/precise validation for compressed packets

The 2nd could be optional, but it saves CPU cycles.
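The early-rejection estimate in step 2 can be sketched as a pure function (names hypothetical; the real check lives inside the receiver, and the actual 10x constant is the one described in this PR's summary):

```javascript
// Assumed 10x expansion ratio: if even a 10x-inflated payload would exceed
// the limit, the compressed frame can be rejected without spending CPU on
// inflate. A maxPayloadSize of 0 disables the limit entirely.
const ESTIMATED_EXPANSION_RATIO = 10

function shouldRejectCompressedFrame (compressedPayloadLength, maxPayloadSize) {
  if (maxPayloadSize === 0) return false
  return compressedPayloadLength * ESTIMATED_EXPANSION_RATIO > maxPayloadSize
}
```

Frames that pass this heuristic still go through the precise chunk-by-chunk check during inflation (step 3), so the estimate only saves work, it never replaces the real limit.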

Member

@KhafraDev KhafraDev Apr 9, 2026


but we're skipping compressed packets here, it's not much of an early return if you still have to go all the way to the decompression handler

what I'm asking for is why we're skipping compressed packets here

Member Author


We are skipping compressed packets here because it's a different logic as @tsctx has asked.

Member

@KhafraDev KhafraDev Apr 10, 2026


The logic is not different.

Assuming we send an uncompressed and compressed frame that's 129MB:

  • Uncompressed: early exits here
  • Compressed: skips this, goes through READ_DATA, goes to permessage deflate handler, attempt to inflate, if payload size > max payload size, exit

Member


I understand that, but why skip them? Couldn't an attacker just send a compressed frame that has an unlimited size, where undici will collect all fragments, attempt to inflate, then fail much later?

Member Author


The check is done chunk by chunk on compression, as soon as it goes over the limit, the connection is interrupted.

Member


I’ve looked into it, and while we check the max length per message unit, we seem to be missing a check for the total buffered size of fragmented messages.

We don't actually need a separate check for compression; we should simply trigger an early exit if the accumulated size plus the current payload exceeds the limit. Even with compressed data, we can perform this early check if the payload itself is already too large.

This is already handled for uncompressed fragments, but for the compressed path, we should also pass the current buffered size to the decompression side to ensure the same validation.

Member


I've written some pseudo-code to illustrate the issue more clearly.

createServer((socket) => {
    // Start by fragmenting the message.
    // At this stage, 64MB has been accumulated in the buffer.
    socket.send(alloc(64 * 1024 * 1024), { fin: false }); 
    
    // Case 1: Early termination is not triggered because compressed messages 
    // are currently being skipped.
    // The pre-compressed size is 100MB. Since 64MB is already buffered, 
    // it clearly exceeds the limit and should be terminated immediately.
    socket.send(alloc_post_compressed(100 * 1024 * 1024), { fin: true }); 
    // alloc_post_compressed: The size represents the "expected final output."

    // Case 2: The current accumulated size is not passed to the decompressor, 
    // so this segment is treated as only 100MB, bypassing the decompression limit.
    // We should pass the current buffer size to the decompressor and 
    // validate the total size (current buffer + decompressed payload).
    socket.send(alloc_pre_compressed(100 * 1024 * 1024), { fin: true }); 
    // alloc_pre_compressed: The size represents the "actual memory footprint."
});

Member

@KhafraDev KhafraDev Apr 11, 2026


Thank you @tsctx this is what I was trying to articulate, lol.

The issue is if the compressed data is larger than the maxPayloadSize, we shouldn't bother going to READ_DATA and inflating the data which is already larger than the maxPayloadSize.

The permessage-deflate handler catches it, once the decompressed data is larger than the maxPayloadSize. This only happens once we have a full frame, so an attacker could theoretically send n fragmented frames each with unlimited size that are kept in memory until we receive the entirety of the frame.

From READ_DATA, should be helpful in understanding:

        if (this.#byteOffset < this.#info.payloadLength) {
          return callback()
        }

        const body = this.consume(this.#info.payloadLength)

Comment thread lib/web/websocket/permessage-deflate.js Outdated
Comment thread lib/web/websocket/permessage-deflate.js Outdated
@mcollina
Member Author

@KhafraDev ptal

@KhafraDev
Member

#4955 (comment)

I'm not sure why compressed messages are being handled separately in this case still

@KhafraDev
Member

I haven't done extensive testing, but KhafraDev@903a6f0 simplifies this quite a bit (leaving most of the handling to the actual parser, not the permessage deflate handler) and fixes what I was talking about in the unresolved review comment and fixes a few bugs.

@mcollina
Member Author

Thanks, but I don’t think we can take that as-is: it drops cumulative checking for fragmented uncompressed messages, and it also starts rejecting compressed frames based on compressed wire size rather than actual inflated payload size.

I think the current direction is safer: keep uncompressed header checks in the parser, and enforce compressed limits against cumulative inflated bytes.

@KhafraDev
Member

KhafraDev commented Apr 12, 2026

It should do both and all the permessage-deflate-limit tests pass.

compressed frames based on compressed wire size rather than actual inflated payload size.

This is what I am talking about in the review: it should do both. Someone can send a compressed frame of unlimited size and undici will not error until we attempt to inflate.

Comment thread lib/web/websocket/receiver.js Outdated
this.#maxPayloadSize > 0 &&
!isControlFrame(this.#info.opcode) &&
!this.#info.compressed &&
this.#fragmentsBytes + this.#info.payloadLength > this.#maxPayloadSize
Member

@KhafraDev KhafraDev Apr 12, 2026


This is completely wrong. #fragmentsBytes is the actual amount buffered. payloadLength is a field telling us the final frame size, so we're adding the total frame size to the currently buffered frame size...

Member Author


we're adding the total frame size to the currently buffered frame size

No. The idea is that this check interprets "already buffered fragments + current frame".

Comment thread lib/web/websocket/receiver.js Outdated
this.#info.fin = fin
this.#info.fragmented = fragmented

if (this.#state === parserStates.READ_DATA && !this.#validatePayloadLength()) {
Member


there's only one branch where state can be set to READ_DATA from here, it should be moved

Comment thread lib/web/websocket/receiver.js
@mcollina
Member Author

@KhafraDev can you please send a PR against this with the fix you want to see? Your commit also had other changes.

@KhafraDev
Member

that commit is everything I want to see

Co-Authored-By: Khafra <maitken033380023@gmail.com>
@mcollina
Member Author

Done

@mcollina mcollina merged commit bd91f86 into main Apr 13, 2026
35 of 36 checks passed
@mcollina mcollina deleted the feature/configurable-max-decompressed-message-size branch April 13, 2026 06:59
@github-actions github-actions bot mentioned this pull request Apr 13, 2026
Successfully merging this pull request may close these issues.

Allow configuring maxDecompressedMessageSize

5 participants