## What does this PR do?
**Type of change:** documentation
**Overview:** Markdown update adding perplexity and KL-divergence benchmark information.
## Before your PR is "*Ready for review*"
<!-- If you haven't finished some of the above items you can still open a
`Draft` PR. -->
- **Make sure you read and follow [Contributor
guidelines](https://github.com/NVIDIA/Model-Optimizer/blob/main/CONTRIBUTING.md)**
and your commits are signed.
- **Is this change backward compatible?**: NA
- **Did you write any new necessary tests?**: NA
- **Did you add or update any necessary documentation?**: Yes
- **Did you update
[Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?**:
NA
<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
* **Documentation**
* Expanded accuracy comparison section with three detailed benchmark
metrics: MMLU scores, Perplexity (PPL), and KL-divergence.
* Added comprehensive tables showing results across models and
quantization configurations.
* Included evaluation guides and references for each metric.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: unknown <ynankani@nvidia.com>
**File changed:** `examples/windows/Benchmark.md` (+55 lines)
@@ -24,6 +24,8 @@ Memory savings and inference speedup are compared to the ONNX FP16 baseline.

### 1.2 Accuracy Comparison

#### 1.2.1 MMLU

For accuracy evaluation, the [Massive Multitask Language Understanding (MMLU)](https://arxiv.org/abs/2009.03300) benchmark has been utilized. Please refer to the [detailed instructions](./accuracy_benchmark/README.md) for running the MMLU accuracy benchmark.

The table below shows the MMLU 5-shot score for some models.
@@ -39,3 +41,56 @@ The table below shows the MMLU 5-shot score for some models.

#### 1.2.2 Perplexity (PPL)

Perplexity measures how well a probability model predicts a sample. Lower perplexity values indicate better model quality. The following table shows perplexity values at an input sequence length of 1024 with a chunk size of 512.

**Learn more about Perplexity:** [Perplexity - Wikipedia](https://en.wikipedia.org/wiki/Perplexity) | [Hugging Face - Perplexity of Fixed-Length Models](https://huggingface.co/docs/transformers/en/perplexity)
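For intuition (this is an illustrative sketch, not the benchmark's actual implementation), perplexity is the exponential of the average per-token negative log-likelihood:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(average negative log-likelihood per token).
    Lower values mean the model assigns higher probability to the text."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy example: probabilities a model assigned to each token of a 4-token sample.
probs = [0.25, 0.5, 0.125, 0.25]
print(perplexity([math.log(p) for p in probs]))  # 4.0
```

Equivalently, perplexity is the geometric mean of the inverse per-token probabilities; the benchmark applies this over chunks of the input sequence.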
- **FP16-MB**: Baseline FP16 GenAI model (Model Builder).
- **Mixed AWQ-MO**: Important linear layers in INT8, rest in INT4 (AWQ), with ModelOpt.
- **Mixed RTN-MO**: Important linear layers in INT8, rest in INT4 (RTN), with ModelOpt.
- **Pure INT4 AWQ-MO**: All linear layers in INT4 (AWQ) with ModelOpt.
- **Pure INT4 RTN-MO**: All linear layers in INT4 (RTN) with ModelOpt.
- **Pure INT8 RTN-MO**: All linear layers in INT8 (RTN) with ModelOpt.
- **Pure INT8 AWQ-MO**: All linear layers in INT8 (AWQ) with ModelOpt.

For detailed instructions on evaluating perplexity, please refer to the [Perplexity Evaluation Guide](./accuracy_benchmark/perplexity_metrics/README.md).
#### 1.2.3 KL-divergence

KL-divergence (Kullback-Leibler divergence) quantifies the distributional difference between the quantized model and the baseline model. Lower KL-divergence values indicate that the quantized model's output distribution is closer to the original model's.

**Learn more about KL-divergence:** [KL Divergence - Wikipedia](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence) | [Understanding KL Divergence](https://www.countbayesie.com/blog/2017/5/9/kullback-leibler-divergence-explained)
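As a minimal sketch of the formula, D_KL(P‖Q) = Σᵢ pᵢ log(pᵢ/qᵢ), applied here to a toy next-token distribution (the probability values are made up for illustration, not benchmark data):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) in nats. Here P would be the baseline (FP16) model's
    next-token distribution and Q the quantized model's."""
    # Terms with p_i == 0 contribute nothing to the sum.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

baseline  = [0.7, 0.2, 0.1]    # hypothetical FP16 next-token probabilities
quantized = [0.6, 0.25, 0.15]  # hypothetical quantized-model probabilities
print(kl_divergence(baseline, quantized))  # small positive value (~0.023)
```

A value of 0 means the two distributions are identical; the benchmark averages such per-position divergences over an evaluation set.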
**Supported backends:** PyTorch, onnxruntime-cuda, and onnxruntime-trt-rtx-ep are all supported for evaluation.

- **Baseline model**: Hugging Face FP16 model.
- **Quantized models**: Models where quantization is simulated (a.k.a. fake quantization), typically evaluated with the PyTorch-CUDA backend. In fake quantization, weights are quantized and immediately dequantized to simulate quantization effects. The inference backend column in the table below indicates whether the reported results come from PyTorch simulation or ONNX Runtime-based inference.

*All KL-divergence results above are obtained via PyTorch fake quantization simulation unless otherwise noted. Inference with ONNX Runtime can also be evaluated.*
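To make "fake quantization" concrete, here is a simplified sketch of symmetric per-tensor quantize-then-dequantize; it is illustrative only, as the actual quantization uses calibrated, finer-grained scales:

```python
def fake_quantize(weights, num_bits=4):
    """Simulate INT quantization: map weights onto the signed integer grid,
    round, clip, then immediately dequantize back to float."""
    qmax = 2 ** (num_bits - 1) - 1                    # 7 for INT4, 127 for INT8
    scale = max(abs(w) for w in weights) / qmax       # per-tensor scale

    def round_trip(w):
        q = min(max(round(w / scale), -qmax - 1), qmax)  # quantize + clip
        return q * scale                                  # dequantize

    return [round_trip(w) for w in weights]

w = [0.12, -0.5, 0.33, 0.07]
print(fake_quantize(w))  # each value snapped to a multiple of 0.5/7
```

The dequantized weights carry the rounding error a true low-bit deployment would have, so running the full-precision model with these weights approximates the quantized model's output distribution.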
For detailed instructions on computing KL-divergence, please refer to the [KL-divergence Evaluation Guide](./accuracy_benchmark/kl_divergence_metrics/README.md).
**File changed:** `examples/windows/onnx_ptq/genai_llm/README.md` (+76 lines)
@@ -82,6 +82,82 @@ Note:

Please refer to `quantize.py` for further details on command-line parameters.
#### Mixed Precision Quantization (INT4 + INT8)

ModelOpt-Windows supports **mixed precision quantization**, where different layers in the model can be quantized to different bit-widths. This approach combines INT4 quantization for most layers (for maximum compression and speed) with INT8 quantization for important or sensitive layers (to preserve accuracy).

##### Why Use Mixed Precision?

Mixed precision quantization provides an optimal balance between:
- **Model Size**: Predominantly INT4 layers keep the model small
- **Inference Speed**: INT4 layers run faster
- **Accuracy Preservation**: Keeping critical layers in INT8 maintains model quality
Based on benchmark results, mixed precision quantization shows significant advantages:

- **Compatibility**: Works with both the `awq_lite` and `rtn_dq` algorithms
- **Automatic Detection**: Passing the `--layers_8bit` option automatically enables mixed quantization
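The selection idea behind `--layers_8bit` can be pictured with a short sketch; the layer names and the substring-matching rule below are hypothetical illustrations, while the real handling lives in `quantize.py`:

```python
def assign_bit_widths(layer_names, layers_8bit):
    """Mixed precision plan: layers matching an 'important' pattern get INT8,
    everything else falls back to INT4. Illustrative only."""
    return {name: 8 if any(tag in name for tag in layers_8bit) else 4
            for name in layer_names}

# Hypothetical layer names and 8-bit selection.
layers = ["model.layers.0.self_attn.q_proj",
          "model.layers.0.mlp.down_proj",
          "lm_head"]
plan = assign_bit_widths(layers, ["down_proj", "lm_head"])
print(plan)  # down_proj and lm_head -> 8 bits, the rest -> 4 bits
```

Keeping a handful of accuracy-sensitive layers at 8 bits costs little model size while recovering most of the quality gap to pure INT4, which is the trade-off the benchmark tables quantify.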
For more benchmark results and detailed accuracy metrics, refer to the [Benchmark Guide](../../Benchmark.md).
### Evaluate the Quantized Model

To evaluate the quantized model, please refer to the [accuracy benchmarking](../../accuracy_benchmark/README.md) and [onnxruntime-genai performance benchmarking](https://github.com/microsoft/onnxruntime-genai/tree/main/benchmark/python).