Commit 7802809
Fix ConvertMmToBmmPass for quantized (int8/int16) mm ops (#18974)
Summary:
This diff is experimental, but it appears to address incomplete support for integer (int8/int16) pathways in bmm. TBD.
The pass converts rank-2 mm to rank-3 bmm (required by TOSA spec) via
unsqueeze/bmm/squeeze. Previously it called super().call() to re-trace
the graph on FakeTensors for shape propagation, but aten.bmm rejects
int8/int16 FakeTensors, causing failures for any quantized mm ops.
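The rank-2 to rank-3 rewrite the pass performs can be sketched with NumPy (used here as a stand-in, since torch FakeTensors can't be shown in a self-contained example; shapes and semantics match `aten.mm`/`aten.bmm`):

```python
import numpy as np

# Rank-2 matrix multiply (mm): (M, K) @ (K, N) -> (M, N)
a = np.arange(6, dtype=np.int8).reshape(2, 3)
b = np.arange(12, dtype=np.int8).reshape(3, 4)
mm_out = a @ b

# Equivalent rank-3 batched multiply (bmm) with a batch dim of 1:
# unsqueeze -> (1, M, K) @ (1, K, N) -> (1, M, N) -> squeeze -> (M, N)
bmm_out = (a[None] @ b[None])[0]

assert mm_out.shape == (2, 4)
assert np.array_equal(mm_out, bmm_out)
```

The unsqueeze/bmm/squeeze sequence is numerically a no-op; only the intermediate rank changes, which is what lets the pass satisfy the TOSA rank-3 requirement.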
Since mm→bmm is a pure shape transformation (adding a batch dim of 1),
we can set the output metadata directly: unsqueeze the mm's FakeTensor
for the bmm node, and use the original for the squeeze. No need to
re-execute the op.
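Because the transformation only prepends a batch dimension of 1, the output metadata can be derived purely from shapes, without running `aten.bmm` on integer FakeTensors. A minimal sketch of the idea (`bmm_output_meta` is a hypothetical helper, not the pass's actual API):

```python
def bmm_output_meta(mm_shape):
    """Derive the bmm node's output shape from the mm node's output
    shape by prepending a batch dim of 1 -- no re-execution needed."""
    return (1, *mm_shape)

# mm output (M, N) -> bmm output (1, M, N); the squeeze restores (M, N)
assert bmm_output_meta((2, 4)) == (1, 2, 4)
```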
Reviewed By: digantdesai
Differential Revision: D998571371
Parent: 063f9c9
1 file changed: 12 additions & 4 deletions