[ET-VK] Fix softmax NaN and depthwise conv correctness bugs#17848
meta-codesync[bot] merged 1 commit into gh/SS-JIA/457/base
Fix three bugs causing incorrect output when running the edgeTAM model
with the Vulkan backend. Together these fixes bring the model from
producing all-NaN output to matching the reference within fp32 tolerance.
**Bug 1 — softmax_packed_dim OOB max contamination (softmax.glsl)**
In `softmax_packed_dim`, each workgroup uses NWORKERS=4 threads to
collaboratively reduce along the packed dimension. Before the main loop,
each worker initializes `max_elements` by loading from texel index
`tid.x`. When NWORKERS exceeds the number of texels (e.g., a 12-element
dim has only 3 texels, but worker 3 tries to load texel index 3), the
load is out-of-bounds and returns 0 per Vulkan spec. This 0 enters the
cross-worker max reduction, so for any row where all actual values are
negative, the computed max becomes 0 instead of the true (negative) max.
Then `exp(value - 0)` underflows to 0 for all elements, giving
denominator=0 and NaN output.
Fixed by initializing `max_elements = vec4(-3.402823e+38)` (i.e.,
-FLT_MAX) so that workers with no valid texels contribute -FLT_MAX
(effectively -inf) to the reduction, which can never win the max. Also
added a `safe_denominator = max(denominator, 1e-37)` clamp as a
secondary safety net against any remaining underflow edge cases.
This affected the edgeTAM attention softmax over 12 key positions, where
~15% of query rows had all-negative attention scores and produced NaN.
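The failure mode can be reproduced numerically. Below is a minimal sketch in plain Python standing in for the GLSL reduction (`reduce_max` and `exp_denominator` are illustrative names, not shader functions): a 0-initialized out-of-bounds worker contaminates the row max, the exponentials underflow, and the denominator collapses to 0, while the -FLT_MAX initialization and the clamp avoid both.

```python
import math

FLT_MAX = 3.402823e+38

def reduce_max(row, oob_worker_init):
    # Cross-worker max reduction: a worker with no valid texels
    # contributes only its initialization value to the max.
    return max(max(row), oob_worker_init)

def exp_denominator(row, row_max):
    # Sum of exp(value - max) over the row, as in the shader.
    return sum(math.exp(v - row_max) for v in row)

row = [-1000.0] * 12  # all-negative attention scores; exp(-1000) underflows to 0.0

bad_max = reduce_max(row, 0.0)        # OOB texel load returns 0 per Vulkan spec
good_max = reduce_max(row, -FLT_MAX)  # fixed initialization cannot win the max

bad_denom = exp_denominator(row, bad_max)    # 0.0, so the GPU computes 0/0 = NaN
good_denom = exp_denominator(row, good_max)  # 12.0: exp(0) for every element
safe_denom = max(bad_denom, 1e-37)           # the secondary clamp from the fix
```

Note that the clamp alone would turn NaN into all-zero output rather than a correct softmax; the initialization fix is the primary correction.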
**Bug 1b — softmax_nonpacked_dim defensive hardening (softmax.glsl)**
Applied similar defensive fixes to `softmax_nonpacked_dim`:
- Clamped denominator via `max(denominators, vec4(1e-37))` to prevent
0/0 = NaN if all exp values underflow.
- Added IEEE 754 bit-level NaN/Inf → 0 sanitization on output texels.
This uses `floatBitsToUint`/`uintBitsToFloat` with exponent-bit
masking rather than `isnan()` or `x != x`, which may not work reliably
on all GPU drivers due to OpIsNan bugs and ordered comparison
semantics.
- Added `memoryBarrierImage()` after the output write loop to flush
imageStore writes so they're visible to subsequent GPU operations.
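The bit-level sanitization can be illustrated in host code. This is a hypothetical Python translation of the exponent-mask test (`struct` stands in for the shader's `floatBitsToUint`/`uintBitsToFloat` round-trip): a float32 is NaN or Inf exactly when all eight exponent bits are set.

```python
import struct

def sanitize_f32(x: float) -> float:
    # Reinterpret the value as float32 bits (shader: floatBitsToUint).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    # An all-ones exponent field (mask 0x7F800000) means NaN or +/-Inf,
    # regardless of sign bit or mantissa; no isnan()/ordered compare needed.
    if (bits & 0x7F800000) == 0x7F800000:
        return 0.0
    return x
```

Because the check is pure integer arithmetic, it is immune to drivers that constant-fold `x != x` to false or mis-implement OpIsNan.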
**Bug 2 — conv2d_dw parameter binding mismatch (Convolution.cpp)**
The depthwise convolution code path in `add_conv2d_node` unconditionally
passed kernel parameters (stride, padding, dilation, etc.) via push
constants. However, the base `conv2d_dw.glsl` shader (used for non-3x3
and non-5x5 kernels, such as 1x1 depthwise convolutions) declares these
parameters as UBOs at binding points 4–8, not as push constants. The
`_output_tile` shader variants do use push constants, so 3x3 and 5x5
depthwise convolutions worked correctly.
For 1x1 depthwise convolutions, the shader read from unbound UBOs,
getting zeros for stride, padding, dilation, and overlay_region. With
stride=0 and overlay_region=(0,0), the convolution loop never executed,
producing output equal to just the bias (effectively zero for small
biases).
Fixed by checking whether the selected shader name contains
`_output_tile`. If not, parameters are passed via UBOs (matching the
shader's declarations) instead of push constants.
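The dispatch decision can be sketched as follows (hypothetical function name; the real logic lives in `add_conv2d_node` in Convolution.cpp and passes actual parameter structs rather than a string tag):

```python
def param_binding_for(shader_name: str) -> str:
    # The _output_tile variants declare kernel params as push constants;
    # the base conv2d_dw shader declares them as UBOs at bindings 4-8.
    # The host-side binding method must match the selected shader's layout.
    if "_output_tile" in shader_name:
        return "push_constants"
    return "ubos"
```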
**Bug 3 — conv2d_dw workgroup size mismatch (Convolution.cpp)**
The base `conv2d_dw.glsl` shader uses a fully 1D thread mapping where
`gl_GlobalInvocationID.x` encodes all three output dimensions:
`pos.x = gid.x % W`, `pos.y = (gid.x / W) % H`,
`pos.z = gid.x / (W * H)`. The `_output_tile` variants use a 2D mapping
with spatial tiles in `.x` and channels in `.y`.
The `conv2d_global_wg_size` callback was dispatching all depthwise
shaders with workgroup size `{W*H, C_packed, 1}`, which is correct for
`_output_tile` but wrong for the base shader. With this size, all
threads have `gid.x < W*H`, so `pos.z = gid.x / (W*H) = 0` — only
channel texel 0 (channels 0–3 out of e.g. 192) gets computed.
Fixed by dispatching `{W*H*C_packed, 1, 1}` for the base shader so
that `gid.x` ranges over all spatial × channel positions.
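The two index mappings, and why the old dispatch size misses channels, can be checked with a small sketch (Python standing in for the shader's address arithmetic; `decode_base` mirrors the base shader's 1D decode):

```python
def decode_base(gid_x: int, W: int, H: int):
    # Base conv2d_dw shader: fully 1D mapping of gid.x to (pos.x, pos.y, pos.z).
    return (gid_x % W, (gid_x // W) % H, gid_x // (W * H))

W, H, C_packed = 4, 3, 2  # small illustrative output extents

# Old dispatch {W*H, C_packed, 1}: the base shader reads only gid.x,
# which stays below W*H, so pos.z is always 0 (channel texel 0 only).
old_z = {decode_base(gx, W, H)[2] for gx in range(W * H)}

# Fixed dispatch {W*H*C_packed, 1, 1}: gid.x spans every spatial and
# channel position, so pos.z covers all channel texels.
new_z = {decode_base(gx, W, H)[2] for gx in range(W * H * C_packed)}
```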
Differential Revision: [D95217947](https://our.internmc.facebook.com/intern/diff/D95217947/)
Merged commit bd536d3 into gh/SS-JIA/457/base.
ghstack-source-id: 347411472
Pull Request resolved: #17848