This removes a (broken) simplification from fold which is already handled
in match. It was broken because wi::to_wide was used on the RHS of the
rotate, which could be 2 different types even though the LHS was the same
type. Since it is already handled in match (by the patterns for
`Turn (a OP c1) OP c2 into a OP (c1+c2).`), it can be removed without
losing any optimizations.
Bootstrapped and tested on x86_64-linux-gnu.
PR middle-end/117492
gcc/ChangeLog:
* fold-const.cc (fold_binary_loc): Remove `Two consecutive rotates adding up
to the some integer` simplification.
gcc/testsuite/ChangeLog:
* gcc.dg/torture/pr117492-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
arg01, arg1));
}
- /* Two consecutive rotates adding up to the some integer
- multiple of the precision of the type can be ignored. */
- if (code == RROTATE_EXPR && TREE_CODE (arg1) == INTEGER_CST
- && TREE_CODE (arg0) == RROTATE_EXPR
- && TREE_CODE (TREE_OPERAND (arg0, 1)) == INTEGER_CST
- && wi::umod_trunc (wi::to_wide (arg1)
- + wi::to_wide (TREE_OPERAND (arg0, 1)),
- prec) == 0)
- return fold_convert_loc (loc, type, TREE_OPERAND (arg0, 0));
-
return NULL_TREE;
case MIN_EXPR:
--- /dev/null
+/* { dg-do compile } */
+/* PR middle-end/117492 */
+
+/* This code would ICE in fold due to code which was using wi::to_wide with different types
+ and adding them. */
+
+typedef unsigned u;
+
+u
+foo(u x)
+{
+ return
+ __builtin_stdc_rotate_left((unsigned)
+ __builtin_stdc_rotate_right(x, 0x100000001ull),
+ 1);
+}