DebertaV2 fix for running with large batches #846
vrdn-23 wants to merge 2 commits into huggingface:main
Conversation
This is the load test I ran to replicate the issue; it is easy to see the error from main when running it.
@vrdn-23 How is this not caught during warmup? Isn't that what warmup is for?
I think it's because warmup always creates batches of the same size, and this particular branch of the padding/masking code only gets activated when we have batches of unequal length. It might also be helpful to have warmup run batches of the same size (the max size, to ensure the GPU has enough memory) as well as batches of unequal sizes (to help catch padding issues). Or maybe that is overkill. Any thoughts @michaelfeil?
@alvarobartt can we merge this in before the next release, since it is a straightforward bug fix? Let me know if there is any more information I can provide to help validate the issue.
What does this PR do?
Fix shape mismatch in DeBERTa v2 embeddings mask during batched inference
Problem
DeBERTa v2 models fail with a shape mismatch in `broadcast_mul` under concurrent load when requests get batched together (batch_size > 1 with padding):

```json
{ "level": "ERROR", "message": "shape mismatch in broadcast_mul, lhs: [2, 348, 768], rhs: [2, 348, 1, 1]" }
```

At 50 concurrent users, 91% of requests fail with this error. Single requests always succeed because they bypass the padding/masking path.
Root Cause
In `DebertaV2Embeddings::forward`, the mask reshape guard at line 179 compared shape values instead of rank (number of dimensions). When batch_size > 1 with padding, the attention mask is created as `[B, L, 1]` (3D) and the embeddings are `[B, L, H]` (3D): same rank, different values. The condition incorrectly evaluates to true, causing an unnecessary `unsqueeze(2)` that produces a 4D tensor `[B, L, 1, 1]`, which cannot broadcast with the 3D embeddings `[B, L, H]`. This only affects DeBERTa v2; no other model applies a mask inside the embeddings layer.
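For reference, the surrounding logic in the candle-based backend looks roughly like this (a sketch of the pre-fix code; variable names and the exact structure may differ from the repository):

```rust
// Sketch of the buggy guard (pre-fix). `dims()` returns the shape
// values as &[usize], so a [B, L, 1] mask compares unequal to
// [B, L, H] embeddings even though both tensors are rank 3.
if mask.dims() != embeddings.dims() {
    if mask.dims().len() == 4 {
        // Collapse a [B, 1, 1, L]-style mask down to [B, L].
        mask = mask.squeeze(1)?.squeeze(1)?;
    }
    // For an already-3D [B, L, 1] mask this wrongly produces
    // [B, L, 1, 1], which cannot broadcast against [B, L, H].
    mask = mask.unsqueeze(2)?;
}
let embeddings = embeddings.broadcast_mul(&mask.to_dtype(embeddings.dtype())?)?;
```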
Fix
Compare tensor rank instead of shape values:
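In candle terms this is a one-line change to the guard (again a sketch; the actual diff may differ cosmetically):

```rust
// Fixed guard: compare rank (number of dimensions) rather than the
// shape values themselves; equivalently, `mask.rank() != embeddings.rank()`.
// A [B, L, 1] mask is now left untouched and broadcasts with [B, L, H].
if mask.dims().len() != embeddings.dims().len() {
    if mask.dims().len() == 4 {
        mask = mask.squeeze(1)?.squeeze(1)?;
    }
    mask = mask.unsqueeze(2)?;
}
```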
A 3D mask `[B, L, 1]` already broadcasts correctly with `[B, L, H]`. The reshape block is only needed when the mask has a different number of dimensions (e.g., a 2D `[B, L]` mask needs an `unsqueeze` to become `[B, L, 1]`).
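A quick standalone check of this broadcasting behavior, assuming the `candle-core` crate (shapes are illustrative):

```rust
use candle_core::{DType, Device, Tensor};

fn main() -> candle_core::Result<()> {
    let dev = Device::Cpu;
    let emb = Tensor::ones((2, 4, 8), DType::F32, &dev)?; // [B, L, H]
    let mask3 = Tensor::ones((2, 4, 1), DType::F32, &dev)?; // [B, L, 1]
    let mask4 = mask3.unsqueeze(2)?; // [B, L, 1, 1]

    // Same rank: broadcasts fine.
    assert!(emb.broadcast_mul(&mask3).is_ok());
    // Higher-rank mask: fails with "shape mismatch in broadcast_mul".
    assert!(emb.broadcast_mul(&mask4).is_err());
    Ok(())
}
```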
Verification
Load tested with k6 ramping to 50 concurrent users.
Before submitting
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@alvarobartt @kozistr