aarch64: remove extra XTN in vector concatenation
GIMPLE code which performs a narrowing truncation on the result of a
vector concatenation currently results in an unnecessary XTN being
emitted after the UZP1 that concatenates the operands. In such cases,
UZP1 should instead use a smaller arrangement specifier, making the
XTN instruction redundant. This is seen in GIMPLE examples such as:
int32x2_t foo (svint64_t a, svint64_t b)
{
  vector(2) int vect__2.8;
  long int _1;
  long int _3;
  vector(2) long int _12;

  <bb 2> [local count: 1073741824]:
  _1 = svaddv_s64 ({ -1, 0, 0, 0, 0, 0, 0, 0, ... }, a_6(D));
  _3 = svaddv_s64 ({ -1, 0, 0, 0, 0, 0, 0, 0, ... }, b_7(D));
  _12 = {_1, _3};
  vect__2.8_13 = (vector(2) int) _12;
  return vect__2.8_13;
}
Original assembly generated:
foo:
        ptrue   p3.b, all
        uaddv   d0, p3, z0.d
        uaddv   d1, p3, z1.d
        uzp1    v0.2d, v0.2d, v1.2d
        xtn     v0.2s, v0.2d
        ret
This patch therefore defines the *aarch64_trunc_concat<mode> insn, which
truncates the result of the concatenation rather than concatenating the
already-truncated operands (as *aarch64_narrow_trunc<mode> does),
resulting in the following optimised assembly being emitted:
foo:
        ptrue   p3.b, all
        uaddv   d0, p3, z0.d
        uaddv   d1, p3, z1.d
        uzp1    v0.2s, v0.2s, v1.2s
        ret
This patch passes all regression tests on aarch64 with no new failures.
A supporting test for this optimisation is also included and passes.
OK for master? I do not have commit rights, so I cannot push the patch
myself.
gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (*aarch64_trunc_concat<mode>):
	New insn definition.
gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sve/truncated_concatenation_1.c: New test
	for the above example and other modes covered by the new insn
	definition.