Fix quantized_conv1d_ncl padding bug in HiFi kernel (#18783)
meta-codesync[bot] merged 1 commit into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18783
Note: Links to docs will display an error until the docs builds have been completed. ⏳ 1 Pending, 2 Unrelated Failures as of commit 2c626b7 with merge base 21d9c64 (BROKEN TRUNK — the following jobs failed but were present on the merge base). 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@ethansfng has exported this pull request. If you are a Meta employee, you can view the originating Diff in D100069524.
Summary:

The HiFi optimized int8 path in `op_quantized_conv1d_ncl.cpp` simulates 1D convolution as 2D (height=1) using `xa_nn_conv2d_per_chan_sym8sxasym8s`. The NNLib convention is x=width, y=height, but the code had them swapped: x_stride/x_padding were set to 1/0 (height values) while y_stride/y_padding held the actual 1D stride/padding (width values). This caused any conv1d with padding > 0, dilation == 1, stride == 1 to compute without padding, producing incorrect results.

This was introduced by D95279330, which added the quantized_conv1d_ncl kernel. Models with conv1d at dilation=1 and padding>0 (e.g., the first TemporalBlock in TCN-based models like microgestures) hit this bug.

Fix: swap the axes so x (width) gets the actual 1D stride/padding and y (height) gets the trivial 1/0 values.

Differential Revision: D100069524