
Exclude small-k and small-n Matmul nodes from Int8 quantization#1256

Open
nv-samcheng wants to merge 2 commits into NVIDIA:main from nv-samcheng:dev-samcheng-filter-small-kn-gemm-int8

Conversation

@nv-samcheng
Contributor

@nv-samcheng nv-samcheng commented Apr 14, 2026

What does this PR do?

Exclude small-dimension MatMul nodes from INT8 quantization. MatMuls with N or K < 16 cannot use INT8 kernels efficiently, so quantizing them causes performance regressions.
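The exclusion rule described above can be sketched as a standalone predicate. This is an illustrative simplification, not the PR's actual code; the function name `should_exclude_matmul` is hypothetical, though the threshold mirrors the `_MIN_MATMUL_DIM_INT8 = 16` constant the PR introduces.

```python
# Illustrative sketch of the exclusion rule; the function name and exact
# structure are hypothetical, not the PR's actual implementation.
_MIN_MATMUL_DIM_INT8 = 16

def should_exclude_matmul(n_dim: int, k_dim: int) -> bool:
    """Return True if a MatMul with these N/K dims should skip INT8 quantization.

    Covers both GEMV patterns (N == 1 or K == 1) and small GEMMs
    (0 < dim < 16). Non-positive dims mean "unknown" and are not excluded.
    """
    for dim in (n_dim, k_dim):
        if dim == 1:  # GEMV: degenerate matmul, no INT8 benefit
            return True
        if 0 < dim < _MIN_MATMUL_DIM_INT8:  # too small for efficient INT8 GEMM
            return True
    return False
```

Unknown dimensions (e.g. symbolic shapes reported as -1) deliberately fall through to "not excluded", leaving the decision to the runtime-inference path.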

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes

    • Refined quantization logic for matrix-multiplication ops to better exclude cases that shouldn't be quantized, including very small-dimension GEMMs and GEMV patterns, using inferred input/output shapes to decide exclusions.
  • Tests

    • Expanded unit tests to cover matrix-multiplication exclusion rules and edge cases, validating shape inference and runtime-determined input dimensions.

@nv-samcheng nv-samcheng requested a review from a team as a code owner April 14, 2026 12:15
@nv-samcheng nv-samcheng requested a review from ajrasane April 14, 2026 12:15
@copy-pr-bot

copy-pr-bot bot commented Apr 14, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai bot commented Apr 14, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: a9cfe281-465c-466c-9687-89f7df5a4bcf

📥 Commits

Reviewing files that changed from the base of the PR and between fb54122 and 4deee67.

📒 Files selected for processing (1)
  • modelopt/onnx/quantization/graph_utils.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • modelopt/onnx/quantization/graph_utils.py

📝 Walkthrough

Walkthrough

Extended MatMul exclusion logic to also exclude "small-gemm" MatMul nodes when inferred N or K is below 16. Added helper _get_inp_b_k_dim to derive the MatMul second-input K dimension from constant initializers, value_info, or runtime outputs; applied these checks during shape- and runtime-based inference.
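The fallback chain described for `_get_inp_b_k_dim` can be sketched in plain Python. The dict parameters stand in for the ONNX initializer, value_info, and runtime-output maps, and all names here are illustrative, not the PR's real signature:

```python
# Hypothetical sketch of the fallback chain: constant initializers first,
# then graph value_info, then runtime-inferred output shapes.
from typing import Mapping, Sequence

def get_inp_b_k_dim(
    name: str,
    initializer_shapes: Mapping[str, Sequence[int]],
    value_info_shapes: Mapping[str, Sequence[int]],
    runtime_output_shapes: Mapping[str, Sequence[int]],
) -> int:
    """Resolve K, the second-to-last dim of MatMul input B.

    Returns -1 when the dimension cannot be determined from any source.
    """
    for shapes in (initializer_shapes, value_info_shapes, runtime_output_shapes):
        shape = shapes.get(name)
        if shape is not None and len(shape) >= 2:
            dim = shape[-2]  # for a plain MatMul, B is (..., K, N)
            if isinstance(dim, int) and dim > 0:
                return dim
    return -1
```

Returning a sentinel (-1) rather than raising lets callers treat an indeterminate K as "do not exclude" and defer to runtime inference.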

Changes

  • MatMul Dimension-Based Exclusion Logic — modelopt/onnx/quantization/graph_utils.py
    Added _MIN_MATMUL_DIM_INT8 = 16 and _get_inp_b_k_dim() to infer the MatMul B/K dimension from initializers, value_info_map, or output_map. Updated _exclude_matmuls_by_shape_inference and _exclude_matmuls_by_inference to exclude a MatMul when N==1 or K==1 (existing) or when 0 < N < 16 or 0 < K < 16 (new). Prevent duplicate outputs and extend model outputs to include non-constant MatMul B for runtime inference; added control-flow continue points.
  • MatMul Exclusion Logic Unit Tests — tests/unit/onnx/quantization/test_graph_utils.py
    Added helpers to build minimal MatMul ONNX models and retrieve nodes. Introduced tests for _get_inp_b_k_dim (constant initializer, runtime output, indeterminate) and for _exclude_matmuls_by_shape_inference covering exclusion when N or K < 16, GEMV cases (N=1), and non-excluded large-dimension cases.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
  • Description Check — ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title accurately and concisely summarizes the main change: excluding MatMul nodes with small K or N dimensions from INT8 quantization.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
  • Security Anti-Patterns — ✅ Passed: No security anti-patterns detected in modified files. Code focuses on MatMul node exclusion logic without use of unsafe deserialization, code evaluation, or other security vulnerabilities.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
tests/unit/onnx/quantization/test_graph_utils.py (1)

119-182: Add targeted tests for Gemm(transB=1) and inference-based exclusion.

Nice coverage for MatMul shape-inference. Please add one case validating K extraction when op="Gemm" with transB=1, plus one test for _exclude_matmuls_by_inference (shared inp_b variable case) to lock in the new runtime-output extension path.
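The transB concern raised here can be sketched as a small axis-selection helper: for a plain MatMul, B is stored as (K, N), while Gemm with transB=1 stores B transposed as (N, K). Names here are illustrative, not the repository's actual code:

```python
# Hypothetical helper showing the axis fix the review asks for:
# respect transB when locating K in the second input's shape.
def k_axis_for_second_input(op_type: str, trans_b: int = 0) -> int:
    """Return the axis of K in a MatMul/Gemm second input.

    MatMul: B is (..., K, N), so K sits at axis -2.
    Gemm with transB=1: B is stored as (N, K), so K sits at axis -1.
    """
    if op_type == "Gemm" and trans_b:
        return -1
    return -2
```

A K-extraction helper that always indexes axis -2 would read N instead of K for Gemm(transB=1), which is exactly the regression the requested tests would catch.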

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/onnx/quantization/test_graph_utils.py` around lines 119 - 182, Add
two unit tests in tests/unit/onnx/quantization/test_graph_utils.py: one that
constructs a Gemm model with op="Gemm" and attribute transB=1 and asserts
_get_inp_b_k_dim on its node returns the correct K (e.g., when B is constant
with shape [..., K, N] transposed), and a second test that exercises
_exclude_matmuls_by_shape_inference where multiple MatMul/Gemm nodes share the
same inp_b Variable (use calibration_shapes only for "A" and provide an
output_map or runtime-output scenario so the code path that reads K from
runtime-output is used) and assert the expected node id is excluded; reference
helpers _make_matmul_model, _get_matmul_nodes, _get_inp_b_k_dim, and
_exclude_matmuls_by_shape_inference to locate relevant setup and ensure
names/ids match existing tests (e.g., "MatMul_0").
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@modelopt/onnx/quantization/graph_utils.py`:
- Around line 1235-1261: The _get_inp_b_k_dim function currently always reads K
from axis -2 which is wrong for Gemm when transB=1; update _get_inp_b_k_dim to
detect transB (default 0 for MatMul) from the node (check for attribute "transB"
on matmul_node) and compute k_axis = -1 if transB > 0 else -2, then use k_axis
when indexing into inp_b.values.shape, inp_b_info.type.tensor_type.shape.dim,
and output_map[inp_b.name].shape so all three fallback paths respect
transposition; also add unit tests that cover Gemm nodes with transB=1 to
prevent regressions.
- Around line 1343-1348: The code adds matmul outputs and second-input Variable
names to model.graph.output without deduplication, which can create duplicate
output names; update the logic (in the block handling matmul_nodes / uses of
matmul_node.outputs[0].name and matmul_node.inputs[1].name) to track
already-added output names (e.g., a set of names) and only call
model.graph.output.extend with onnx.ValueInfoProto for a name if it is not
already present in that set (and add it to the set after extending), ensuring
you still skip Constants by checking isinstance(matmul_node.inputs[1],
Variable).
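The deduplication this comment asks for can be sketched with a seen-set over output names. This is an illustrative sketch operating on plain name lists rather than onnx.ValueInfoProto entries; the function name is hypothetical:

```python
# Hypothetical sketch of duplicate-free output extension: track names
# already present so model.graph.output never holds the same name twice.
def extend_outputs_deduped(existing: list[str], candidates: list[str]) -> list[str]:
    """Append candidate tensor names as graph outputs, skipping duplicates."""
    seen = set(existing)
    result = list(existing)
    for name in candidates:
        if name not in seen:
            seen.add(name)
            result.append(name)
    return result
```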


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: 3a5d8843-1a90-424d-a931-a88d63dc0fa0

📥 Commits

Reviewing files that changed from the base of the PR and between b6c6ec3 and fb54122.

📒 Files selected for processing (2)
  • modelopt/onnx/quantization/graph_utils.py
  • tests/unit/onnx/quantization/test_graph_utils.py

@nv-samcheng nv-samcheng changed the title from "Exclude small-k and small-n Conv nodes from Int8 quantization" to "Exclude small-k and small-n Matmul nodes from Int8 quantization" on Apr 14, 2026