Commit 4f3ee41 (1 parent: 132ac9d)
modelopt/onnx/autocast/convert.py
@@ -220,7 +220,7 @@ def convert_to_f16(
             infer_shapes. This is a workaround (WAR) when only type inference is
             needed without shape inference. Default: False.
         opset: Target ONNX opset version. If None, uses default minimum opset based on precision type
-            (22 for bf16, 13 for fp16) and Q/DQ node requirements. The opset may be automatically
+            (22 for bf16, 19 for fp16) and Q/DQ node requirements. The opset may be automatically
             increased if Q/DQ nodes in the model require a higher version (e.g., FP8 requires 19,
             INT4 requires 21, NVFP4 requires 23).
     """
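The docstring describes a two-step opset-selection rule: start from the precision type's default minimum opset, then raise it if any Q/DQ node in the model demands a higher version. A minimal sketch of that rule, assuming the opset values quoted in the diff — the function and dictionary names here are hypothetical illustrations, not the actual modelopt implementation:

```python
# Hypothetical sketch of the opset-selection rule from the docstring.
# Default minimum opset per target precision (per the corrected diff line).
DEFAULT_MIN_OPSET = {"bf16": 22, "fp16": 19}

# Minimum opset required by each Q/DQ quantization type (per the docstring).
QDQ_MIN_OPSET = {"fp8": 19, "int4": 21, "nvfp4": 23}

def resolve_opset(precision, qdq_types, requested=None):
    """Pick the target opset: the requested (or default minimum) opset,
    automatically raised to satisfy any Q/DQ node requirements."""
    base = requested if requested is not None else DEFAULT_MIN_OPSET[precision]
    required = max((QDQ_MIN_OPSET[t] for t in qdq_types), default=base)
    return max(base, required)
```

For example, an fp16 model with INT4 Q/DQ nodes would be bumped from opset 19 to 21, while a bf16 model with FP8 nodes stays at 22 since 22 already exceeds FP8's minimum of 19.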