Commit 84b4ec8 (parent df090ec)

Fix ruff format in fold_qdq_scale_fp16_to_fp32_casts

Signed-off-by: ajrasane <arasane@nvidia.com>
Signed-off-by: ajrasane <131806219+ajrasane@users.noreply.github.com>

1 file changed: modelopt/onnx/utils.py (1 addition, 3 deletions)
@@ -1666,9 +1666,7 @@ def fold_qdq_scale_fp16_to_fp32_casts(onnx_model: onnx.ModelProto) -> onnx.Model
         # other ops would silently receive FP16 instead of the FP32 they requested.
         cast_output = cast_node.output[0]
         consumers = consumer_map.get(cast_output, [])
-        if not consumers or not all(
-            c.op_type in qdq_ops and i == 1 for c, i in consumers
-        ):
+        if not consumers or not all(c.op_type in qdq_ops and i == 1 for c, i in consumers):
             continue

         # Bypass the cast so the scale stays FP16
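The condition being reformatted gates the cast-folding: the Cast may only be bypassed when every consumer of its output is a Q/DQ node reading the value as its scale input (input index 1). A minimal sketch of that check, using stand-in node objects rather than real ONNX protobufs (the helper name `all_consumers_are_qdq_scales` and the `Node` tuple are illustrative assumptions, not ModelOpt API):

```python
# Sketch of the consumer check from the diff, with stand-in node objects.
# `consumers` is a list of (consumer_node, input_index) pairs, mirroring
# the (c, i) shape used in the original condition.
from collections import namedtuple

Node = namedtuple("Node", ["op_type"])

QDQ_OPS = ("QuantizeLinear", "DequantizeLinear")


def all_consumers_are_qdq_scales(consumers, qdq_ops=QDQ_OPS):
    """Return True only if every consumer is a Q/DQ node that reads the
    value at input index 1 (the scale input)."""
    if not consumers:
        return False
    return all(c.op_type in qdq_ops and i == 1 for c, i in consumers)


# A Cast feeding only Q/DQ scale inputs is safe to fold away:
print(all_consumers_are_qdq_scales([(Node("QuantizeLinear"), 1)]))    # True
# A consumer at a different input index (e.g. the data input, 0) blocks folding:
print(all_consumers_are_qdq_scales([(Node("DequantizeLinear"), 0)]))  # False
```

In the commit itself, the folding loop `continue`s (skips the cast) whenever this condition is false, which is why an empty consumer list must also fail the check.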
