2026-04-20  Jakub Jelinek  <jakub@redhat.com>

	PR middle-end/123635
	* gimple-lower-bitint.cc (bitint_large_huge::lower_shift_stmt): In the
	RSHIFT_EXPR case, use p2 in two LE_EXPR conditions rather than just
	one.  In the LSHIFT_EXPR case, use signed RSHIFT_EXPR instead of
	unsigned.
	(bitint_large_huge::lower_muldiv_stmt): For unsigned MULT_EXPR with
	bitint_extended, if prec is not a multiple of limb_prec, clear padding
	bits after the libgcc call.
	(bitint_large_huge::lower_float_conv_stmt): Use signed RSHIFT_EXPR
	instead of unsigned.

2026-04-20  Jakub Jelinek  <jakub@redhat.com>

	PR middle-end/123635
	* gimple-lower-bitint.cc (bitint_precision_kind): Assert the current
	assumptions, that bitint_ext_full for abi_limb_prec > limb_prec is
	supported only when abi_limb_prec is limb_prec * 2 and it is not
	big endian in that case.
	(bitint_large_huge::lower_mergeable_stmt): Don't set separate_ext
	for bitint_ext_full for bit-field stores.  Guard the condition
	on an extra limb of padding bits to be extended rather than including
	earlier extensions in that too.  If already sign extending before
	and the type is unsigned, set zero_ms_limb instead and handle it later.
	(bitint_large_huge::lower_shift_stmt): Handle bitint_ext_full.

2026-04-20  Soumya AR  <soumyaa@nvidia.com>

	* config/aarch64/aarch64-narrow-gp-writes.cc (narrow_dimode_src):
	Remove redundant checks.  Don't recurse when an operand remains
	DImode.
	(narrow_gp_writes::optimize_compare_arith_insn): Use
	HOST_WIDE_INT_PRINT_HEX.
	(narrow_gp_writes::optimize_single_set_insn): Likewise.

2026-04-19  Richard Sandiford  <rdsandiford@googlemail.com>

	PR rtl-optimization/124643