Commit 94e1305
Cast int input to fp32 in torch reciprocal converter
`torch.reciprocal` returns a float for int inputs in PyTorch, but
`mb.inverse` only accepts fp16/fp32. As a result, common patterns like
`1 / x.shape[0]` (which TorchScript traces as
`reciprocal(prim::NumToTensor(int))`) failed conversion with:
```
Op (op_type: inverse) Input x expects tensor or scalar of dtype
from type domain ['fp16', 'fp32'] but got tensor[1, int32]
```
Insert an fp32 cast before `mb.inverse` when the input dtype is integer,
mirroring the pattern already used by `log`, `sqrt`, and other unary
ops that share the same MIL dtype constraint.
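The fix can be sketched as follows. This is an illustrative NumPy stand-in for the converter logic, not the patch itself: `mb.cast` and `mb.inverse` are the real MIL builder ops, and the NumPy calls below only mimic their dtype behavior.

```python
import numpy as np

def convert_reciprocal(x):
    # torch.reciprocal promotes integer inputs to float, but
    # mb.inverse only accepts fp16/fp32. So, as in the patch, an
    # explicit fp32 cast is inserted when the input dtype is integer.
    x = np.asarray(x)
    if np.issubdtype(x.dtype, np.integer):
        # corresponds to mb.cast(x=x, dtype="fp32") in the converter
        x = x.astype(np.float32)
    # corresponds to mb.inverse(x=x)
    return 1.0 / x

convert_reciprocal(np.array([4], dtype=np.int32))  # float32 result, [0.25]
```

Without the cast, the int32 tensor would reach `mb.inverse` unchanged and trigger the type-domain error above; with it, integer reciprocals match PyTorch's float output.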
Verified end-to-end on the issue repro and a representative RoPE-style
inverse-frequency expression.
Fixes #2579.

1 parent: e95804f
2 files changed: 37 additions & 1 deletion
(diff content not captured in this page scrape — one line, old line 7326, replaced by new lines 7326-7333)
Lines changed: 29 additions & 0 deletions
(diff content not captured in this page scrape — new lines 6679-6707 added after line 6678)