
Fix quantized_conv1d_ncl padding bug in HiFi kernel (#18783)

Merged
meta-codesync[bot] merged 1 commit into pytorch:main from ethansfng:export-D100069524
Apr 9, 2026

Conversation

@ethansfng
Contributor

@ethansfng commented Apr 8, 2026

Summary:

The HiFi optimized int8 path in `op_quantized_conv1d_ncl.cpp` simulates 1D
convolution as 2D (height=1) using `xa_nn_conv2d_per_chan_sym8sxasym8s`.
The NNLib convention is x=width, y=height, but the code had them swapped:
x_stride/x_padding were set to 1/0 (height values) while y_stride/y_padding
held the actual 1D stride/padding (width values). This caused any conv1d with
padding > 0, dilation == 1, stride == 1 to compute without padding, producing
incorrect results.

This was introduced by D95279330 which added the quantized_conv1d_ncl kernel.
Models with conv1d at dilation=1 and padding>0 (e.g., first TemporalBlock in
TCN-based models like microgestures) hit this bug.

Fix: swap the axes so x (width) gets the actual 1D stride/padding and y
(height) gets the trivial 1/0 values.

Differential Revision: D100069524

@pytorch-bot

pytorch-bot Bot commented Apr 8, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18783

Note: Links to docs will display an error until the docs builds have been completed.

⏳ 1 Pending, 2 Unrelated Failures

As of commit 2c626b7 with merge base 21d9c64:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla Bot added the CLA Signed label Apr 8, 2026
@meta-codesync
Contributor

meta-codesync Bot commented Apr 8, 2026

@ethansfng has exported this pull request. If you are a Meta employee, you can view the originating Diff in D100069524.

@github-actions

github-actions Bot commented Apr 8, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

meta-codesync Bot changed the title from "Fix quantized_conv1d_ncl padding bug in HiFi kernel" to "Fix quantized_conv1d_ncl padding bug in HiFi kernel (#18783)" Apr 8, 2026
ethansfng added a commit to ethansfng/executorch that referenced this pull request Apr 8, 2026
ethansfng force-pushed the export-D100069524 branch from fd409fd to 74d3d64 on April 8, 2026 at 23:02
ethansfng force-pushed the export-D100069524 branch from 74d3d64 to 2c626b7 on April 8, 2026 at 23:06
aliafzal self-requested a review on April 8, 2026 at 23:09
meta-codesync Bot merged commit 0ee0f67 into pytorch:main Apr 9, 2026
162 of 165 checks passed
jpiat pushed a commit to jpiat/executorch that referenced this pull request Apr 14, 2026
Differential Revision: D100069524

Pull Request resolved: pytorch#18783

Labels

CLA Signed, fb-exported, meta-exported
