We generally allow merging mergeable stmts with some final cast (but not
further casts or mergeable operations after the cast). As some casts
are handled conditionally, if (idx < cst) handle_operand (idx); else if
(idx == cst) handle_operand (cst); else ..., we must make sure that e.g.
the mergeable PLUS_EXPR/MINUS_EXPR/NEGATE_EXPR never appear in
handle_operand called from such casts, both because that ICEs on invalid
SSA_NAME form (that part could be fixed by adding further PHIs) and
because we'd need to correctly propagate the overflow flags from the if
to the else if.
So instead, lower_mergeable_stmt handles an outermost widening cast (or a
widening cast feeding the outermost store) specially.
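For illustration (this example is mine, not part of the patch), such a
mergeable chain ending in a single widening cast looks like:

  _BitInt(256) x, y;

  _BitInt(512)
  widen_sum (void)
  {
    /* One mergeable op (the addition) followed by a final widening cast,
       with no further casts or mergeable ops after the cast; the addition
       can be lowered limb by limb and the result sign-extended
       afterwards.  */
    return (_BitInt(512)) (x + y);
  }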
The problem here is similar to PR113408: for VIEW_CONVERT_EXPR the
conversion tree is present in gimple_assign_rhs1, while for
NOP_EXPR/CONVERT_EXPR the rhs1 is the operand itself. The checks deciding
whether the outermost cast should be handled specially therefore didn't
cover the VCE case, and handle_plus_minus was called from the conditional
handle_cast.
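To make the representation difference concrete, the following is a minimal
sketch (a hypothetical helper, not code from the patch) of unwrapping the
conversion operand in both forms using the standard GIMPLE accessors:

  /* For a conversion such as
       lhs_2 = (_BitInt(512)) op_1;
     gimple_assign_rhs1 returns op_1 directly, while for
       lhs_2 = VIEW_CONVERT_EXPR<_BitInt(464)>(op_1);
     it returns the whole VIEW_CONVERT_EXPR tree, with the actual operand
     in TREE_OPERAND (rhs1, 0).  */
  static tree
  conversion_operand (gimple *stmt)
  {
    tree rhs1 = gimple_assign_rhs1 (stmt);
    if (TREE_CODE (rhs1) == VIEW_CONVERT_EXPR)
      rhs1 = TREE_OPERAND (rhs1, 0);
    return rhs1;
  }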
2024-01-27  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113568
	* gimple-lower-bitint.cc (bitint_large_huge::lower_mergeable_stmt):
	For VIEW_CONVERT_EXPR use first operand of rhs1 instead of rhs1
	in the widening extension checks.

	* gcc.dg/bitint-78.c: New test.
--- a/gcc/gimple-lower-bitint.cc
+++ b/gcc/gimple-lower-bitint.cc
 	  rhs1 = gimple_assign_rhs1 (store_operand
 				     ? SSA_NAME_DEF_STMT (store_operand)
 				     : stmt);
+	  if (TREE_CODE (rhs1) == VIEW_CONVERT_EXPR)
+	    rhs1 = TREE_OPERAND (rhs1, 0);
 	  /* Optimize mergeable ops ending with widening cast to _BitInt
 	     (or followed by store).  We can lower just the limbs of the
 	     cast operand and widen afterwards.  */
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/bitint-78.c
+/* PR tree-optimization/113568 */
+/* { dg-do compile { target bitint } } */
+/* { dg-options "-O2 -std=c23" } */
+
+signed char c;
+#if __BITINT_MAXWIDTH__ >= 464
+_BitInt(464) g;
+
+void
+foo (void)
+{
+ _BitInt(464) a[2] = {};
+ _BitInt(464) b;
+ while (c)
+ {
+ b = g + 1;
+ g = a[0];
+ a[0] = b;
+ }
+}
+#endif