Add a16w8 per-op test for var (#19596)
Conversation
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.exp` on Ethos-U55 and Ethos-U85.

## Context

The `exp` op is part of the softmax decomposition (`softmax(x) = exp(x) / sum(exp(x))`, sketched below), which is used in the attention mechanism of EMG2Pose Conformer models. This op was identified as the root cause of the U85 SNR regression investigated in SEV T267939669: without dedicated a16w8 per-op coverage, the numerics issue was only visible at the full-model level. Adding per-op tests allows us to catch int16 precision regressions at operator granularity before they propagate to end-to-end model accuracy.

## Changes

- Add `a16w8_exp_test_parameters` dict with 3 test configurations covering rank-1, rank-2, and rank-3 tensors
- Add `test_exp_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_exp_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_exp.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532358
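For readers unfamiliar with the decomposition, here is a minimal sketch of the softmax identity referenced above, in plain PyTorch. This is illustrative only; it is not the Arm backend's actual lowering, and `softmax_via_exp` is a hypothetical name.

```python
import torch


def softmax_via_exp(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # softmax(x) = exp(x) / sum(exp(x)); any quantization error in exp
    # propagates into every softmax output, which is why per-op a16w8
    # coverage on exp can catch regressions before full-model SNR drops.
    e = torch.exp(x)
    return e / e.sum(dim=dim, keepdim=True)


x = torch.randn(2, 5)
assert torch.allclose(softmax_via_exp(x), torch.softmax(x, dim=-1), atol=1e-6)
```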
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.reciprocal` on Ethos-U55 and Ethos-U85.

## Context

The `reciprocal` op is the second half of the softmax decomposition (`softmax(x) = exp(x) * reciprocal(sum(exp(x)))`, sketched below), paired with `exp`. Together they form the attention mechanism in EMG2Pose Conformer models. Like `exp`, this op was implicated in the U85 SNR regression (SEV T267939669): the division-by-reciprocal path can amplify quantization error when the denominator is itself quantized at int16. Adding dedicated a16w8 coverage isolates reciprocal numerics from the rest of the softmax pipeline.

## Changes

- Add `a16w8_reciprocal_test_parameters` dict with 3 test configurations covering rank-1, rank-2, and rank-3 tensors (all shifted by +0.1 to avoid division near zero)
- Add `test_reciprocal_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_reciprocal_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_reciprocal.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532357
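A minimal sketch of the reciprocal half of the decomposition, plus the +0.1 input shift mentioned in the parameter list; plain PyTorch again, with `softmax_via_reciprocal` as a hypothetical name for illustration.

```python
import torch


def softmax_via_reciprocal(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # softmax(x) = exp(x) * reciprocal(sum(exp(x))); quantization error in
    # reciprocal is multiplied into every exp(x) term, so it can dominate
    # the end-to-end error when the denominator is quantized at int16.
    e = torch.exp(x)
    return e * torch.reciprocal(e.sum(dim=dim, keepdim=True))


x = torch.randn(2, 5)
assert torch.allclose(softmax_via_reciprocal(x), torch.softmax(x, dim=-1), atol=1e-6)

# Test inputs shifted by +0.1 so reciprocal is never evaluated near zero,
# where its quantized output error grows fastest.
shifted = torch.rand(3, 4) + 0.1
assert shifted.min().item() >= 0.1
```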
Summary: Add int16 activation / int8 weight (a16w8) quantization tests for `aten.mean.dim` on Ethos-U55 and Ethos-U85.

## Context

The `mean_dim` op is a core component of the LayerNorm decomposition (`LayerNorm = (x - mean) / sqrt(var + eps) * gamma + beta`, sketched below). It is used across multiple EMG production models, including CC, CASCADE, HW, WAKE, and BTD. Despite this wide usage, no a16w8 per-op coverage existed: the int16 quantization path was only exercised indirectly through end-to-end model tests, making it difficult to isolate mean-specific numerics issues from other LayerNorm components.

## Changes

- Add `a16w8_mean_test_parameters` dict with 11 test configurations covering keepdim/no-keepdim, positive/negative dims, dim=None, and ranks 1-4
- Add `test_mean_dim_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_mean_dim_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_mean_dim.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532361
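A minimal sketch of the LayerNorm decomposition above, checked against `torch.nn.functional.layer_norm`; plain PyTorch, with `layer_norm_decomposed` as a hypothetical name and `gamma`/`beta` standing in for the affine parameters.

```python
import torch
import torch.nn.functional as F


def layer_norm_decomposed(x, gamma, beta, eps=1e-5):
    # LayerNorm = (x - mean) / sqrt(var + eps) * gamma + beta, reducing over
    # the last dim. mean.dim feeds both the centering step and the variance,
    # so int16 error in it touches every LayerNorm output.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, keepdim=True, correction=0)  # biased variance, as LayerNorm uses
    return (x - mean) / torch.sqrt(var + eps) * gamma + beta


x = torch.randn(2, 8)
gamma, beta = torch.ones(8), torch.zeros(8)
expected = F.layer_norm(x, (8,), weight=gamma, bias=beta)
assert torch.allclose(layer_norm_decomposed(x, gamma, beta), expected, atol=1e-5)
```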
🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19596

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.

❌ 2 New Failures, 40 Pending, 1 Unrelated Failure, as of commit 76e5af8 with merge base 58b4f26.

NEW FAILURES - The following jobs have failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 8989a2c to 6c91b26 (Compare)
This PR needs a `release notes:` label.
Force-pushed from 8989a2c to 6b40fe6 (Compare)
Force-pushed from d004872 to 578a5dd (Compare)
@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532362.
Force-pushed from f1d1cf7 to 7aa22ac (Compare)
Summary: Pull Request resolved: pytorch#19596

Add int16 activation / int8 weight (a16w8) quantization tests for `aten.var` on Ethos-U55 and Ethos-U85.

## Changes

- Add `a16w8_var_test_parameters` dict with 4 test configurations covering keepdim/no-keepdim and correction values 0, 0.5, and 1 (see the sketch below)
- Add `test_var_a16w8_u55_INT` using `EthosU55PipelineINT` with `a16w8_quantization=True, symmetric_io_quantization=True, qtol=128, epsilon=2**-16`
- Add `test_var_a16w8_u85_INT` using `EthosU85PipelineINT` with the same kwargs
- Register `ops/test_var.py` in `fbcode/` and `xplat/` `targets.bzl`

bypass-pytorch-oss-checks

Differential Revision: D104532362
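For orientation, a hedged sketch of the general shape such a per-op test takes. Only the pipeline kwargs (`a16w8_quantization`, `symmetric_io_quantization`, `qtol`, `epsilon`) and the class name `EthosU55PipelineINT` come from this summary; the import path, the `Var` wrapper module, the parameter-dict entries, the example shapes, and the `aten_ops` string are assumptions, not the code in this diff.

```python
# Hypothetical sketch, not the actual test file. Everything other than the
# kwargs listed in the summary is assumed for illustration.
import torch

from executorch.backends.arm.test.tester.test_pipeline import (  # assumed path
    EthosU55PipelineINT,
)


class Var(torch.nn.Module):
    """Wraps aten.var so the pipeline lowers a single op."""

    def __init__(self, dim: int, keepdim: bool, correction: float):
        super().__init__()
        self.dim, self.keepdim, self.correction = dim, keepdim, correction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.var(
            x, dim=self.dim, keepdim=self.keepdim, correction=self.correction
        )


# 4 configurations: keepdim vs. no-keepdim crossed with corrections 0, 0.5, 1.
a16w8_var_test_parameters = {
    "keepdim_corr_0": (Var(-1, True, 0), torch.randn(1, 4, 8)),
    "keepdim_corr_0p5": (Var(-1, True, 0.5), torch.randn(1, 4, 8)),
    "no_keepdim_corr_0p5": (Var(-1, False, 0.5), torch.randn(1, 4, 8)),
    "no_keepdim_corr_1": (Var(-1, False, 1), torch.randn(1, 4, 8)),
}


def run_var_a16w8_u55(module: torch.nn.Module, example: torch.Tensor) -> None:
    EthosU55PipelineINT(
        module,
        (example,),
        aten_ops="torch.ops.aten.var.correction",  # assumed op string
        a16w8_quantization=True,
        symmetric_io_quantization=True,
        qtol=128,  # tolerance in quantized units, from the summary
        epsilon=2**-16,  # int16 quantization epsilon, from the summary
    ).run()
```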
Force-pushed from 7aa22ac to 5e7cdc5 (Compare)
@christine-long-meta has exported this pull request. If you are a Meta employee, you can view the originating Diff in D104532362.
3 similar comments
Force-pushed from 5e7cdc5 to 447a40d (Compare)
Force-pushed from 447a40d to 76e5af8 (Compare)