aarch64: Prevent simd tests from being optimised away
The intrinsic calls in the vqdml[as]l[hs]_laneq_* tests were folded at
compile time, meaning that we didn't have any Advanced SIMD instructions
in the assembly.
Kyrill's preference was to use wrapper functions, so this patch does
that for the failing tests and for others whose scan-assembler
directives matched inline intrinsic calls. (There were some tests that
already
used wrapper functions, some that used volatile, some that used
inline asm barriers, and some that had no separation.)
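For reference, the wrapper pattern looks roughly like this (a minimal
sketch; the wrapper name and lane number are illustrative rather than
taken from any particular test):

  #include <arm_neon.h>

  /* Because the wrapper is not static, GCC must emit an out-of-line
     copy in which the operands are unknown, so the intrinsic cannot
     be folded and the expected instruction is there for
     scan-assembler to match.  */
  int32_t
  wrap_vqdmlalh_laneq_s16 (int32_t a, int16_t b, int16x8_t v)
  {
    return vqdmlalh_laneq_s16 (a, b, v, 7);
  }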
Doing that for vqdmulhs_lane_s32.c meant that we generated the scalar
form of the instruction, rather than a vector instruction operating
on lane 0. That seems fair enough, so the patch keeps that test but
adds a second one for lane 1.
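To illustrate that point (again only a sketch, with made-up wrapper
names), a lane 0 wrapper can legitimately become the pure scalar
instruction, whereas a lane 1 wrapper has to keep the by-element form:

  #include <arm_neon.h>

  int32_t
  wrap_lane0 (int32_t a, int32x2_t v)
  {
    /* Lane 0 of a 64-bit vector is the same bits as the S view of the
       register, so GCC is entitled to use the scalar form
       (sqdmulh Sd, Sn, Sm) here.  */
    return vqdmulhs_lane_s32 (a, v, 0);
  }

  int32_t
  wrap_lane1 (int32_t a, int32x2_t v)
  {
    /* Lane 1 has no scalar register equivalent, so this must stay as
       the by-element form (sqdmulh Sd, Sn, Vm.S[1]).  */
    return vqdmulhs_lane_s32 (a, v, 1);
  }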
gcc/testsuite/
* gcc.target/aarch64/simd/vfma_f64.c: Use a wrapper function
rather than an asm barrier.
* gcc.target/aarch64/simd/vfms_f64.c: Likewise.
* gcc.target/aarch64/simd/vmul_f64_1.c: Use a wrapper function
rather than volatile.
* gcc.target/aarch64/simd/vmul_n_f64_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmlalh_laneq_s16_1.c: Use a wrapper
function. Remove -fno-inline.
* gcc.target/aarch64/simd/vqdmlals_laneq_s32_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmlslh_laneq_s16_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmlsls_laneq_s32_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmulhh_lane_s16.c: Likewise.
* gcc.target/aarch64/simd/vqdmulhh_laneq_s16_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmulhs_laneq_s32_1.c: Likewise.
* gcc.target/aarch64/simd/vqrdmulhh_lane_s16.c: Likewise.
* gcc.target/aarch64/simd/vqrdmulhh_laneq_s16_1.c: Likewise.
* gcc.target/aarch64/simd/vqrdmulhs_lane_s32.c: Likewise.
* gcc.target/aarch64/simd/vqrdmulhs_laneq_s32_1.c: Likewise.
* gcc.target/aarch64/simd/vqdmulhs_lane_s32.c: Likewise.
Allow the scalar form to be used when operating on lane 0.
Add a test for lane 1.