aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
function call is null before converting it to an assign statement. To avoid
returning an invalid GIMPLE statement in this case, we instead assign the
expression result to a new (unused) variable.
This change only affects code that:
1) Calls an intrinsic function that has no side effects;
2) Does not use or store the value returned by the intrinsic;
3) Uses parameters that prevent the front end from eliminating the call prior
to gimplification.
The ICE is unlikely to have occurred in the wild, as it relies on the presence
of a redundant intrinsic call.
gcc/ChangeLog:
* config/aarch64/aarch64-builtins.cc
(aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
default:
break;
}
+
+ /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
+ created an assign statement with a null lhs, then fix this by assigning
+ to a new (and subsequently unused) variable. */
+ if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
+ {
+ tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
+ gimple_assign_set_lhs (new_stmt, new_lhs);
+ }
+
return new_stmt;
}
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
+/* { dg-do compile { target { aarch64*-*-* } } } */
+
+#include <arm_neon.h>
+
+int8_t *bar();
+
+void foo() {
+ __builtin_aarch64_ld1v16qi(bar());
+}