[ET-VK][qlinear] Look through output view_copy when detecting output quantization#18014

Merged
meta-codesync[bot] merged 4 commits into gh/SS-JIA/461/base from gh/SS-JIA/461/head on Mar 10, 2026
Conversation

@SS-JIA (Contributor) commented Mar 9, 2026

Stack from ghstack (oldest at bottom):

When `aten.linear` has 3D+ inputs, it decomposes into
`view_copy -> mm -> view_copy`. The output `view_copy` between `mm` and the
subsequent `quantize_per_tensor` node was preventing the pattern matcher
from detecting output quantization, causing the match to fall through
to `linear_q8ta_q8csw` instead of `q8ta_linear_gemv`. This caused a
dtype mismatch during FakeTensor re-tracing in `FusePatternsPass`, because
`linear_q8ta_q8csw`'s composite implementation does not dequantize its
input, producing int8 output where float32 was expected.

Mirror the existing input-side `view_copy` handling (lines 99-104) on the
output side so the quantize node is found through the `view_copy`.
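The lookup described above can be illustrated with a minimal sketch. Note that the `Node` class, target strings, and helper name below are assumptions for illustration only, not the actual ExecuTorch matcher code (which operates on `torch.fx` nodes): when walking from the `mm` node to its consumers, an intervening `view_copy` is looked through before checking for the quantize node.

```python
# Minimal sketch of "looking through" an output view_copy when searching
# for the quantize node. All names here are hypothetical illustrations;
# the real ExecuTorch matcher operates on torch.fx graph nodes.

VIEW_COPY = "aten.view_copy.default"
QUANTIZE = "quantized_decomposed.quantize_per_tensor.default"


class Node:
    """Tiny stand-in for a torch.fx.Node with just a target and its users."""

    def __init__(self, target, users=()):
        self.target = target
        self.users = list(users)


def find_output_quantize(mm_node):
    """Return the quantize_per_tensor consumer of mm, if any.

    Mirrors the input-side handling: if the immediate user is a
    view_copy (as inserted by the 3D+ aten.linear decomposition),
    look through it and inspect the view_copy's own users instead.
    """
    for user in mm_node.users:
        if user.target == QUANTIZE:
            return user
        if user.target == VIEW_COPY:
            for view_user in user.users:
                if view_user.target == QUANTIZE:
                    return view_user
    return None


# mm -> view_copy -> quantize_per_tensor (the 3D+ linear decomposition)
quant = Node(QUANTIZE)
view = Node(VIEW_COPY, users=[quant])
mm = Node("aten.mm.default", users=[view])
print(find_output_quantize(mm) is quant)  # True: found through the view_copy
```

With a lookup like this, the pattern would still match `q8ta_linear_gemv` and detect output quantization even when the decomposition inserts a trailing `view_copy` between `mm` and the quantize node.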

Differential Revision: D95807075

cc @manuelcandales @digantdesai @cbilgin

@pytorch-bot

pytorch-bot Bot commented Mar 9, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18014

Note: Links to docs will display an error until the docs builds have been completed.

❌ 5 New Failures, 1 Cancelled Job

As of commit d7eda9c with merge base f09bd55:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@github-actions

github-actions Bot commented Mar 9, 2026

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ssjia added 2 commits March 9, 2026 12:11
@digantdesai digantdesai added the module: vulkan Issues related to the Vulkan delegate and code under backends/vulkan/ label Mar 10, 2026
@meta-codesync meta-codesync Bot merged commit 420ce2c into gh/SS-JIA/461/base Mar 10, 2026
213 of 221 checks passed
@meta-codesync meta-codesync Bot deleted the gh/SS-JIA/461/head branch March 10, 2026 08:53
@meta-codesync meta-codesync Bot temporarily deployed to cherry-pick-bot March 10, 2026 08:53 Inactive
SS-JIA pushed a commit that referenced this pull request Mar 10, 2026
…quantization

Pull Request resolved: #18014

ghstack-source-id: 349646653
@exported-using-ghexport

Differential Revision: [D95807075](https://our.internmc.facebook.com/intern/diff/D95807075/)

Labels

- CLA Signed — This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
- fb-exported
- meta-exported
- module: vulkan — Issues related to the Vulkan delegate and code under backends/vulkan/


3 participants