From: Richard Sandiford
Date: Tue, 9 May 2023 17:57:23 +0000 (+0100)
Subject: aarch64: Improve register allocation for lane instructions
X-Git-Tag: basepoints/gcc-15~9517
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=6d25ea520f7ed58568c9a0031409bc8e38b673f3;p=thirdparty%2Fgcc.git

aarch64: Improve register allocation for lane instructions

REG_ALLOC_ORDER is much less important than it used to be, but it is
still used as a tie-breaker when multiple registers in a class are
equally good.

Previously aarch64 used the default approach of allocating in order of
increasing register number.  But as the comment in the patch says, it's
better to allocate FP and predicate registers in the opposite order, so
that we don't eat into smaller register classes unnecessarily.  This
fixes some existing FIXMEs and improves the register allocation for
some Arm ACLE code.

Doing this also showed that *vcond_mask_ (predicated MOV/SEL)
unnecessarily required p0-p7 rather than p0-p15 for the unpredicated
movprfx alternatives.  Only the predicated movprfx alternative requires
p0-p7 (due to the movprfx itself, rather than due to the main
instruction).

gcc/
	* config/aarch64/aarch64-protos.h (aarch64_adjust_reg_alloc_order):
	Declare.
	* config/aarch64/aarch64.h (REG_ALLOC_ORDER): Define.
	(ADJUST_REG_ALLOC_ORDER): Likewise.
	* config/aarch64/aarch64.cc (aarch64_adjust_reg_alloc_order): New
	function.
	* config/aarch64/aarch64-sve.md (*vcond_mask_): Use Upa rather
	than Upl for unpredicated movprfx alternatives.

gcc/testsuite/
	* gcc.target/aarch64/sve/acle/asm/abd_f16.c: Remove XFAILs.
	* gcc.target/aarch64/sve/acle/asm/abd_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/abd_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/add_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/and_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/asr_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/asr_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/bic_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/div_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/divr_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/dot_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/eor_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsr_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/lsr_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mad_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/max_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/min_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mla_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mls_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/msb_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulh_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulx_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulx_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mulx_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmad_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmad_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmad_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmla_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmla_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmla_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmls_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmls_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmls_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmsb_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmsb_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/nmsb_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/orr_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/scale_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/scale_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/scale_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/sub_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_s8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/subr_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/bcax_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qadd_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsub_u8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c: Likewise.
	* gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c: Likewise.
---

diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
index b138494384b5..2f055a26f927 100644
--- a/gcc/config/aarch64/aarch64-protos.h
+++ b/gcc/config/aarch64/aarch64-protos.h
@@ -1067,4 +1067,6 @@ extern bool aarch64_harden_sls_blr_p (void);
 extern void aarch64_output_patchable_area (unsigned int, bool);
 
+extern void aarch64_adjust_reg_alloc_order ();
+
 #endif /* GCC_AARCH64_PROTOS_H */
diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
index 4b4c02c90fec..2898b85376b8 100644
--- a/gcc/config/aarch64/aarch64-sve.md
+++ b/gcc/config/aarch64/aarch64-sve.md
@@ -7624,7 +7624,7 @@
 (define_insn "*vcond_mask_"
   [(set (match_operand:SVE_ALL 0 "register_operand" "=w, w, w, w, ?w, ?&w, ?&w")
	(unspec:SVE_ALL
-	  [(match_operand: 3 "register_operand" "Upa, Upa, Upa, Upa, Upl, Upl, Upl")
+	  [(match_operand: 3 "register_operand" "Upa, Upa, Upa, Upa, Upl, Upa, Upa")
	   (match_operand:SVE_ALL 1 "aarch64_sve_reg_or_dup_imm" "w, vss, vss, Ufc, Ufc, vss, Ufc")
	   (match_operand:SVE_ALL 2 "aarch64_simd_reg_or_zero" "w, 0, Dz, 0, Dz, w, w")]
	   UNSPEC_SEL))]
diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 546cb1213315..bf3d1b39d26d 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -27501,6 +27501,44 @@ aarch64_output_load_tp (rtx dest)
   return "";
 }
 
+/* Set up the value of REG_ALLOC_ORDER from scratch.
+
+   It was previously good practice to put call-clobbered registers
+   ahead of call-preserved registers, but that isn't necessary these
+   days.  IRA's model of register save/restore costs is much more
+   sophisticated than the model that a simple ordering could provide.
+   We leave HONOR_REG_ALLOC_ORDER undefined so that we can get the
+   full benefit of IRA's model.
+
+   However, it is still useful to list registers that are members of
+   multiple classes after registers that are members of fewer classes.
+   For example, we have:
+
+   - FP_LO8_REGS: v0-v7
+   - FP_LO_REGS: v0-v15
+   - FP_REGS: v0-v31
+
+   If, as a tie-breaker, we allocate FP_REGS in the order v0-v31,
+   we run the risk of starving other (lower-priority) pseudos that
+   require FP_LO8_REGS or FP_LO_REGS.  Allocating FP_LO_REGS in the
+   order v0-v15 could similarly starve pseudos that require FP_LO8_REGS.
+   Allocating downwards rather than upwards avoids this problem, at
+   least in code that has reasonable register pressure.
+
+   The situation for predicate registers is similar.  */
+
+void
+aarch64_adjust_reg_alloc_order ()
+{
+  for (int i = 0; i < FIRST_PSEUDO_REGISTER; ++i)
+    if (IN_RANGE (i, V0_REGNUM, V31_REGNUM))
+      reg_alloc_order[i] = V31_REGNUM - (i - V0_REGNUM);
+    else if (IN_RANGE (i, P0_REGNUM, P15_REGNUM))
+      reg_alloc_order[i] = P15_REGNUM - (i - P0_REGNUM);
+    else
+      reg_alloc_order[i] = i;
+}
+
 /* Target-specific selftests.  */
 
 #if CHECKING_P
diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
index 155cace6afea..801f9ebc5721 100644
--- a/gcc/config/aarch64/aarch64.h
+++ b/gcc/config/aarch64/aarch64.h
@@ -1292,4 +1292,9 @@ extern poly_uint16 aarch64_sve_vg;
      STACK_BOUNDARY / BITS_PER_UNIT) \
    : (crtl->outgoing_args_size + STACK_POINTER_OFFSET))
 
+/* Filled in by aarch64_adjust_reg_alloc_order, which is called before
+   the first relevant use.  */
+#define REG_ALLOC_ORDER {}
+#define ADJUST_REG_ALLOC_ORDER aarch64_adjust_reg_alloc_order ()
+
 #endif /* GCC_AARCH64_H */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c
index c019f248d20a..e84df047b6ec 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_f16_m_tied1, svfloat16_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_f16_m_untied: { xfail *-*-* }
+** abd_1_f16_m_untied:
 **	fmov	(z[0-9]+\.h), #1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fabd	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c
index bff37580c432..f2fcb34216a7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_f32_m_tied1, svfloat32_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_f32_m_untied: { xfail *-*-* }
+** abd_1_f32_m_untied:
 **	fmov	(z[0-9]+\.s), #1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fabd	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c
index c1e5f14e619a..952bd46a3335 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_f64_m_tied1, svfloat64_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_f64_m_untied: { xfail *-*-* }
+** abd_1_f64_m_untied:
 **	fmov	(z[0-9]+\.d), #1\.0(?:e\+0)?
 **	movprfx	z0, z1
 **	fabd	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c
index e2d0c0fb7ef3..7d055eb31ed4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_s16_m_untied: { xfail *-*-* }
+** abd_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	sabd	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s16_m_tied1, svint16_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s16_m_untied: { xfail *-*-* }
+** abd_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	sabd	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c
index 5c95ec04df11..2489b24e379d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s32_m_tied1, svint32_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s32_m_untied: { xfail *-*-* }
+** abd_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	sabd	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c
index 2402ecf2918e..0d324c999371 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s64_m_tied1, svint64_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s64_m_untied: { xfail *-*-* }
+** abd_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	sabd	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c
index 49a2cc388f96..51e4a8aa6ff3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_s8_m_untied: { xfail *-*-* }
+** abd_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	sabd	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_s8_m_tied1, svint8_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_s8_m_untied: { xfail *-*-* }
+** abd_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	sabd	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c
index 60aa9429ea62..89dc58dcc17e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_u16_m_untied: { xfail *-*-* }
+** abd_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	uabd	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u16_m_tied1, svuint16_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u16_m_untied: { xfail *-*-* }
+** abd_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	uabd	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c
index bc24107837c8..4e4d0bc649ac 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u32_m_tied1, svuint32_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u32_m_untied: { xfail *-*-* }
+** abd_1_u32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	uabd	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c
index d2cdaa06a5a6..2aa9937743f4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u64_m_tied1, svuint64_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u64_m_untied: { xfail *-*-* }
+** abd_1_u64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	uabd	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c
index 454ef153cc3c..78a16324a072 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/abd_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (abd_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svabd_m (p0, z0, x0))
 
 /*
-** abd_w0_u8_m_untied: { xfail *-*-* }
+** abd_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	uabd	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (abd_1_u8_m_tied1, svuint8_t,
 		z0 = svabd_m (p0, z0, 1))
 
 /*
-** abd_1_u8_m_untied: { xfail *-*-* }
+** abd_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	uabd	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c
index c0883edf9ab4..85a63f34006e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_s16_m_untied: { xfail *-*-* }
+** add_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	add	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s16_m_tied1, svint16_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s16_m_untied: { xfail *-*-* }
+** add_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	add	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c
index 887038ba3c7d..4ba210cd24b6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s32_m_tied1, svint32_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s32_m_untied: { xfail *-*-* }
+** add_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	add	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c
index aab63ef6211f..ff8cc6d5aade 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s64_m_tied1, svint64_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s64_m_untied: { xfail *-*-* }
+** add_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	add	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c
index 0889c189d596..2e79ba2b12bb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_s8_m_untied: { xfail *-*-* }
+** add_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	add	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_s8_m_tied1, svint8_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_s8_m_untied: { xfail *-*-* }
+** add_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	add	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c
index 25cb90353d3b..85880c8ab53c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_u16_m_untied: { xfail *-*-* }
+** add_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	add	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u16_m_tied1, svuint16_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u16_m_untied: { xfail *-*-* }
+** add_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	add	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c
index ee979489b529..74dfe0cd8d54 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u32_m_tied1, svuint32_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u32_m_untied: { xfail *-*-* }
+** add_1_u32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	add	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c
index 25d2972a695b..efb8820669cb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u64_m_tied1, svuint64_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u64_m_untied: { xfail *-*-* }
+** add_1_u64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	add	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c
index 06b68c97ce8c..812c6a526b64 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/add_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (add_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svadd_m (p0, z0, x0))
 
 /*
-** add_w0_u8_m_untied: { xfail *-*-* }
+** add_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	add	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (add_1_u8_m_tied1, svuint8_t,
 		z0 = svadd_m (p0, z0, 1))
 
 /*
-** add_1_u8_m_untied: { xfail *-*-* }
+** add_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	add	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c
index d54613e915d2..02d830a200cc 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_s16_m_untied: { xfail *-*-* }
+** and_w0_s16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	and	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s16_m_tied1, svint16_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s16_m_untied: { xfail *-*-* }
+** and_1_s16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	and	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c
index 7f4082b327b2..c78c18664ce7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s32_m_tied1, svint32_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s32_m_untied: { xfail *-*-* }
+** and_1_s32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	and	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c
index 8868258dca65..8ef1f63c6073 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s64_m_tied1, svint64_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s64_m_untied: { xfail *-*-* }
+** and_1_s64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	and	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c
index 61d168d3fdf8..a2856cd0b0f5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_s8_m_untied: { xfail *-*-* }
+** and_w0_s8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	and	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_s8_m_tied1, svint8_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_s8_m_untied: { xfail *-*-* }
+** and_1_s8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	and	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c
index 875a08d71d18..443a2a8b0707 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_u16_m_untied: { xfail *-*-* }
+** and_w0_u16_m_untied:
 **	mov	(z[0-9]+\.h), w0
 **	movprfx	z0, z1
 **	and	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u16_m_tied1, svuint16_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u16_m_untied: { xfail *-*-* }
+** and_1_u16_m_untied:
 **	mov	(z[0-9]+\.h), #1
 **	movprfx	z0, z1
 **	and	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c
index 80ff503963ff..07d251e8b6fa 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u32_m_tied1, svuint32_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u32_m_untied: { xfail *-*-* }
+** and_1_u32_m_untied:
 **	mov	(z[0-9]+\.s), #1
 **	movprfx	z0, z1
 **	and	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c
index 906b19c37353..5e2ee4d1a255 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u64_m_tied1, svuint64_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u64_m_untied: { xfail *-*-* }
+** and_1_u64_m_untied:
 **	mov	(z[0-9]+\.d), #1
 **	movprfx	z0, z1
 **	and	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c
index b0f1c9529f05..373aafe357c3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/and_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (and_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svand_m (p0, z0, x0))
 
 /*
-** and_w0_u8_m_untied: { xfail *-*-* }
+** and_w0_u8_m_untied:
 **	mov	(z[0-9]+\.b), w0
 **	movprfx	z0, z1
 **	and	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (and_1_u8_m_tied1, svuint8_t,
 		z0 = svand_m (p0, z0, 1))
 
 /*
-** and_1_u8_m_untied: { xfail *-*-* }
+** and_1_u8_m_untied:
 **	mov	(z[0-9]+\.b), #1
 **	movprfx	z0, z1
 **	and	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c
index 877bf10685a4..f9ce790da95e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (asr_w0_s16_m_tied1, svint16_t, uint16_t,
 		 z0 = svasr_m (p0, z0, x0))
 
 /*
-** asr_w0_s16_m_untied: { xfail *-*-* }
+** asr_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** asr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c
index 992e93fdef7a..5cf3a712c282 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/asr_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (asr_w0_s8_m_tied1, svint8_t, uint8_t,
 z0 = svasr_m (p0, z0, x0))

 /*
-** asr_w0_s8_m_untied: { xfail *-*-* }
+** asr_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** asr z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c
index c80f5697f5f4..79848b15b858 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_s16_m_tied1, svint16_t, int16_t,
 z0 = svbic_m (p0, z0, x0))

 /*
-** bic_w0_s16_m_untied: { xfail *-*-* }
+** bic_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** bic z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s16_m_tied1, svint16_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_s16_m_untied: { xfail *-*-* }
+** bic_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #-2
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_s16_z_tied1, svint16_t, int16_t,
 z0 = svbic_z (p0, z0, x0))

 /*
-** bic_w0_s16_z_untied: { xfail *-*-* }
+** bic_w0_s16_z_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0\.h, p0/z, z1\.h
 ** bic z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c
index e02c66947d6c..04367a8fad04 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s32_m_tied1, svint32_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_s32_m_untied: { xfail *-*-* }
+** bic_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #-2
 ** movprfx z0, z1
 ** and z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c
index 57c1e535fea3..b4c19d190644 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s64_m_tied1, svint64_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_s64_m_untied: { xfail *-*-* }
+** bic_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #-2
 ** movprfx z0, z1
 ** and z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c
index 0958a3403939..d1ffefa77ee0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_s8_m_tied1, svint8_t, int8_t,
 z0 = svbic_m (p0, z0, x0))

 /*
-** bic_w0_s8_m_untied: { xfail *-*-* }
+** bic_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** bic z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_s8_m_tied1, svint8_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_s8_m_untied: { xfail *-*-* }
+** bic_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #-2
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_s8_z_tied1, svint8_t, int8_t,
 z0 = svbic_z (p0, z0, x0))

 /*
-** bic_w0_s8_z_untied: { xfail *-*-* }
+** bic_w0_s8_z_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0\.b, p0/z, z1\.b
 ** bic z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c
index 30209ffb418f..fb16646e2055 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_u16_m_tied1, svuint16_t, uint16_t,
 z0 = svbic_m (p0, z0, x0))

 /*
-** bic_w0_u16_m_untied: { xfail *-*-* }
+** bic_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** bic z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u16_m_tied1, svuint16_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_u16_m_untied: { xfail *-*-* }
+** bic_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #-2
 ** movprfx z0, z1
 ** and z0\.h, p0/m, z0\.h, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_u16_z_tied1, svuint16_t, uint16_t,
 z0 = svbic_z (p0, z0, x0))

 /*
-** bic_w0_u16_z_untied: { xfail *-*-* }
+** bic_w0_u16_z_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0\.h, p0/z, z1\.h
 ** bic z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c
index 9f08ab40a8c5..764fd1938528 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u32_m_tied1, svuint32_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_u32_m_untied: { xfail *-*-* }
+** bic_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #-2
 ** movprfx z0, z1
 ** and z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c
index de84f3af6ff4..e4399807ad43 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u64_m_tied1, svuint64_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_u64_m_untied: { xfail *-*-* }
+** bic_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #-2
 ** movprfx z0, z1
 ** and z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c
index 80c489b9cdb2..b7528ceac336 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/bic_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (bic_w0_u8_m_tied1, svuint8_t, uint8_t,
 z0 = svbic_m (p0, z0, x0))

 /*
-** bic_w0_u8_m_untied: { xfail *-*-* }
+** bic_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** bic z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (bic_1_u8_m_tied1, svuint8_t,
 z0 = svbic_m (p0, z0, 1))

 /*
-** bic_1_u8_m_untied: { xfail *-*-* }
+** bic_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #-2
 ** movprfx z0, z1
 ** and z0\.b, p0/m, z0\.b, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (bic_w0_u8_z_tied1, svuint8_t, uint8_t,
 z0 = svbic_z (p0, z0, x0))

 /*
-** bic_w0_u8_z_untied: { xfail *-*-* }
+** bic_w0_u8_z_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0\.b, p0/z, z1\.b
 ** bic z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c
index 8bcd094c9960..90f93643a444 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_1_f16_m_tied1, svfloat16_t,
 z0 = svdiv_m (p0, z0, 1))

 /*
-** div_1_f16_m_untied: { xfail *-*-* }
+** div_1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdiv z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c
index 546c61dc7830..7c1894ebe529 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_1_f32_m_tied1, svfloat32_t,
 z0 = svdiv_m (p0, z0, 1))

 /*
-** div_1_f32_m_untied: { xfail *-*-* }
+** div_1_f32_m_untied:
 ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdiv z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c
index 1e24bc264840..93517c5b50f8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_1_f64_m_tied1, svfloat64_t,
 z0 = svdiv_m (p0, z0, 1))

 /*
-** div_1_f64_m_untied: { xfail *-*-* }
+** div_1_f64_m_untied:
 ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdiv z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
index 8e70ae797a72..c49ca1aa5243 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_s32_m_tied1, svint32_t,
 z0 = svdiv_m (p0, z0, 2))

 /*
-** div_2_s32_m_untied: { xfail *-*-* }
+** div_2_s32_m_untied:
 ** mov (z[0-9]+\.s), #2
 ** movprfx z0, z1
 ** sdiv z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
index 439da1f571f0..464dca28d747 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_s64_m_tied1, svint64_t,
 z0 = svdiv_m (p0, z0, 2))

 /*
-** div_2_s64_m_untied: { xfail *-*-* }
+** div_2_s64_m_untied:
 ** mov (z[0-9]+\.d), #2
 ** movprfx z0, z1
 ** sdiv z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
index 8e8e464b7771..232ccacf524f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_u32_m_tied1, svuint32_t,
 z0 = svdiv_m (p0, z0, 2))

 /*
-** div_2_u32_m_untied: { xfail *-*-* }
+** div_2_u32_m_untied:
 ** mov (z[0-9]+\.s), #2
 ** movprfx z0, z1
 ** udiv z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
index fc152e8e57bc..ac7c026eea37 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (div_2_u64_m_tied1, svuint64_t,
 z0 = svdiv_m (p0, z0, 2))

 /*
-** div_2_u64_m_untied: { xfail *-*-* }
+** div_2_u64_m_untied:
 ** mov (z[0-9]+\.d), #2
 ** movprfx z0, z1
 ** udiv z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c
index e293be65a060..ad6eb656b10b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_1_f16_m_tied1, svfloat16_t,
 z0 = svdivr_m (p0, z0, 1))

 /*
-** divr_1_f16_m_untied: { xfail *-*-* }
+** divr_1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdivr z0\.h, p0/m, z0\.h, \1
@@ -85,7 +85,7 @@ TEST_UNIFORM_Z (divr_0p5_f16_m_tied1, svfloat16_t,
 z0 = svdivr_m (p0, z0, 0.5))

 /*
-** divr_0p5_f16_m_untied: { xfail *-*-* }
+** divr_0p5_f16_m_untied:
 ** fmov (z[0-9]+\.h), #(?:0\.5|5\.0e-1)
 ** movprfx z0, z1
 ** fdivr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c
index 04a7ac40bb24..60fd70711ecb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_1_f32_m_tied1, svfloat32_t,
 z0 = svdivr_m (p0, z0, 1))

 /*
-** divr_1_f32_m_untied: { xfail *-*-* }
+** divr_1_f32_m_untied:
 ** fmov (z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdivr z0\.s, p0/m, z0\.s, \1
@@ -85,7 +85,7 @@ TEST_UNIFORM_Z (divr_0p5_f32_m_tied1, svfloat32_t,
 z0 = svdivr_m (p0, z0, 0.5))

 /*
-** divr_0p5_f32_m_untied: { xfail *-*-* }
+** divr_0p5_f32_m_untied:
 ** fmov (z[0-9]+\.s), #(?:0\.5|5\.0e-1)
 ** movprfx z0, z1
 ** fdivr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c
index bef1a9b059cb..f465a27b9415 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_1_f64_m_tied1, svfloat64_t,
 z0 = svdivr_m (p0, z0, 1))

 /*
-** divr_1_f64_m_untied: { xfail *-*-* }
+** divr_1_f64_m_untied:
 ** fmov (z[0-9]+\.d), #1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fdivr z0\.d, p0/m, z0\.d, \1
@@ -85,7 +85,7 @@ TEST_UNIFORM_Z (divr_0p5_f64_m_tied1, svfloat64_t,
 z0 = svdivr_m (p0, z0, 0.5))

 /*
-** divr_0p5_f64_m_untied: { xfail *-*-* }
+** divr_0p5_f64_m_untied:
 ** fmov (z[0-9]+\.d), #(?:0\.5|5\.0e-1)
 ** movprfx z0, z1
 ** fdivr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c
index 75a6c1d979d0..dab18b0fd9f3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_s32_m_tied1, svint32_t,
 z0 = svdivr_m (p0, z0, 2))

 /*
-** divr_2_s32_m_untied: { xfail *-*-* }
+** divr_2_s32_m_untied:
 ** mov (z[0-9]+\.s), #2
 ** movprfx z0, z1
 ** sdivr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c
index 8f4939a91fb9..4668437dce38 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_s64_m_tied1, svint64_t,
 z0 = svdivr_m (p0, z0, 2))

 /*
-** divr_2_s64_m_untied: { xfail *-*-* }
+** divr_2_s64_m_untied:
 ** mov (z[0-9]+\.d), #2
 ** movprfx z0, z1
 ** sdivr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c
index 84c243b44c2e..c6d4b04f5460 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_u32_m_tied1, svuint32_t,
 z0 = svdivr_m (p0, z0, 2))

 /*
-** divr_2_u32_m_untied: { xfail *-*-* }
+** divr_2_u32_m_untied:
 ** mov (z[0-9]+\.s), #2
 ** movprfx z0, z1
 ** udivr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c
index 03bb624726fd..ace600adf037 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/divr_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (divr_2_u64_m_tied1, svuint64_t,
 z0 = svdivr_m (p0, z0, 2))

 /*
-** divr_2_u64_m_untied: { xfail *-*-* }
+** divr_2_u64_m_untied:
 ** mov (z[0-9]+\.d), #2
 ** movprfx z0, z1
 ** udivr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c
index 605bd1b30f25..0d9d6afe2f20 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s32.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_s32_tied1, svint32_t, svint8_t, int8_t,
 z0 = svdot (z0, z4, x0))

 /*
-** dot_w0_s32_untied: { xfail *-*-* }
+** dot_w0_s32_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** sdot z0\.s, z4\.b, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_s32_tied1, svint32_t, svint8_t,
 z0 = svdot (z0, z4, 9))

 /*
-** dot_9_s32_untied: { xfail *-*-* }
+** dot_9_s32_untied:
 ** mov (z[0-9]+\.b), #9
 ** movprfx z0, z1
 ** sdot z0\.s, z4\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c
index b6574740b7e7..a119d9cc94d9 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_s64.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_s64_tied1, svint64_t, svint16_t, int16_t,
 z0 = svdot (z0, z4, x0))

 /*
-** dot_w0_s64_untied: { xfail *-*-* }
+** dot_w0_s64_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** sdot z0\.d, z4\.h, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_s64_tied1, svint64_t, svint16_t,
 z0 = svdot (z0, z4, 9))

 /*
-** dot_9_s64_untied: { xfail *-*-* }
+** dot_9_s64_untied:
 ** mov (z[0-9]+\.h), #9
 ** movprfx z0, z1
 ** sdot z0\.d, z4\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c
index 541e71cc212e..3e57074e6994 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u32.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_u32_tied1, svuint32_t, svuint8_t, uint8_t,
 z0 = svdot (z0, z4, x0))

 /*
-** dot_w0_u32_untied: { xfail *-*-* }
+** dot_w0_u32_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** udot z0\.s, z4\.b, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_u32_tied1, svuint32_t, svuint8_t,
 z0 = svdot (z0, z4, 9))

 /*
-** dot_9_u32_untied: { xfail *-*-* }
+** dot_9_u32_untied:
 ** mov (z[0-9]+\.b), #9
 ** movprfx z0, z1
 ** udot z0\.s, z4\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c
index cc0e853737df..88d9047ba009 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/dot_u64.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (dot_w0_u64_tied1, svuint64_t, svuint16_t, uint16_t,
 z0 = svdot (z0, z4, x0))

 /*
-** dot_w0_u64_untied: { xfail *-*-* }
+** dot_w0_u64_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** udot z0\.d, z4\.h, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (dot_9_u64_tied1, svuint64_t, svuint16_t,
 z0 = svdot (z0, z4, 9))

 /*
-** dot_9_u64_untied: { xfail *-*-* }
+** dot_9_u64_untied:
 ** mov (z[0-9]+\.h), #9
 ** movprfx z0, z1
 ** udot z0\.d, z4\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c
index 7cf73609a1aa..683248d0887f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_s16_m_tied1, svint16_t, int16_t,
 z0 = sveor_m (p0, z0, x0))

 /*
-** eor_w0_s16_m_untied: { xfail *-*-* }
+** eor_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** eor z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s16_m_tied1, svint16_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_s16_m_untied: { xfail *-*-* }
+** eor_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** eor z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c
index d5aecb201330..4c3ba9ab422f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s32_m_tied1, svint32_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_s32_m_untied: { xfail *-*-* }
+** eor_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** eor z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c
index 157128974bf0..83817cc66948 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s64_m_tied1, svint64_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_s64_m_untied: { xfail *-*-* }
+** eor_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** eor z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c
index 083ac2dde06e..91f3ea8459b1 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_s8_m_tied1, svint8_t, int8_t,
 z0 = sveor_m (p0, z0, x0))

 /*
-** eor_w0_s8_m_untied: { xfail *-*-* }
+** eor_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** eor z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_s8_m_tied1, svint8_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_s8_m_untied: { xfail *-*-* }
+** eor_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** eor z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c
index 40b43a5f89b4..875b8d0c4cb0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_u16_m_tied1, svuint16_t, uint16_t,
 z0 = sveor_m (p0, z0, x0))

 /*
-** eor_w0_u16_m_untied: { xfail *-*-* }
+** eor_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** eor z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u16_m_tied1, svuint16_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_u16_m_untied: { xfail *-*-* }
+** eor_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** eor z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c
index 8e46d08caccd..6add2b7c1ebf 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u32_m_tied1, svuint32_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_u32_m_untied: { xfail *-*-* }
+** eor_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** eor z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c
index a82398f919ac..ee0bda271b28 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u64_m_tied1, svuint64_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_u64_m_untied: { xfail *-*-* }
+** eor_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** eor z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c
index 006637699e8b..fdb0fb1022a6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/eor_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (eor_w0_u8_m_tied1, svuint8_t, uint8_t,
 z0 = sveor_m (p0, z0, x0))

 /*
-** eor_w0_u8_m_untied: { xfail *-*-* }
+** eor_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** eor z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (eor_1_u8_m_tied1, svuint8_t,
 z0 = sveor_m (p0, z0, 1))

 /*
-** eor_1_u8_m_untied: { xfail *-*-* }
+** eor_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** eor z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c
index edaaca5f155b..d5c5fd54e791 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_s16_m_tied1, svint16_t, uint16_t,
 z0 = svlsl_m (p0, z0, x0))

 /*
-** lsl_w0_s16_m_untied: { xfail *-*-* }
+** lsl_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** lsl z0\.h, p0/m, z0\.h, \1
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_16_s16_m_tied1, svint16_t,
 z0 = svlsl_m (p0, z0, 16))

 /*
-** lsl_16_s16_m_untied: { xfail *-*-* }
+** lsl_16_s16_m_untied:
 ** mov (z[0-9]+\.h), #16
 ** movprfx z0, z1
 ** lsl z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c
index f98f1f94b449..b5df8a843188 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s32.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_32_s32_m_tied1, svint32_t,
 z0 = svlsl_m (p0, z0, 32))

 /*
-** lsl_32_s32_m_untied: { xfail *-*-* }
+** lsl_32_s32_m_untied:
 ** mov (z[0-9]+\.s), #32
 ** movprfx z0, z1
 ** lsl z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c
index 39753986b1b3..850a798fe1f8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s64.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_64_s64_m_tied1, svint64_t,
 z0 = svlsl_m (p0, z0, 64))

 /*
-** lsl_64_s64_m_untied: { xfail *-*-* }
+** lsl_64_s64_m_untied:
 ** mov (z[0-9]+\.d), #64
 ** movprfx z0, z1
 ** lsl z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c
index 9a9cc959c33d..d8776597129c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_s8_m_tied1, svint8_t, uint8_t,
 z0 = svlsl_m (p0, z0, x0))

 /*
-** lsl_w0_s8_m_untied: { xfail *-*-* }
+** lsl_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** lsl z0\.b, p0/m, z0\.b, \1
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_8_s8_m_tied1, svint8_t,
 z0 = svlsl_m (p0, z0, 8))

 /*
-** lsl_8_s8_m_untied: { xfail *-*-* }
+** lsl_8_s8_m_untied:
 ** mov (z[0-9]+\.b), #8
 ** movprfx z0, z1
 ** lsl z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c
index 57db0fda66af..068e49b88120 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_u16_m_tied1, svuint16_t, uint16_t,
 z0 = svlsl_m (p0, z0, x0))

 /*
-** lsl_w0_u16_m_untied: { xfail *-*-* }
+** lsl_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** lsl z0\.h, p0/m, z0\.h, \1
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_16_u16_m_tied1, svuint16_t,
 z0 = svlsl_m (p0, z0, 16))

 /*
-** lsl_16_u16_m_untied: { xfail *-*-* }
+** lsl_16_u16_m_untied:
 ** mov (z[0-9]+\.h), #16
 ** movprfx z0, z1
 ** lsl z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c
index 8773f15db44b..9c2be1de9675 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u32.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_32_u32_m_tied1, svuint32_t,
 z0 = svlsl_m (p0, z0, 32))

 /*
-** lsl_32_u32_m_untied: { xfail *-*-* }
+** lsl_32_u32_m_untied:
 ** mov (z[0-9]+\.s), #32
 ** movprfx z0, z1
 ** lsl z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c
index 7b12bd43e1ae..0c1e473ce9d3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u64.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_64_u64_m_tied1, svuint64_t,
 z0 = svlsl_m (p0, z0, 64))

 /*
-** lsl_64_u64_m_untied: { xfail *-*-* }
+** lsl_64_u64_m_untied:
 ** mov (z[0-9]+\.d), #64
 ** movprfx z0, z1
 ** lsl z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c
index 894b5513857b..59d386c0f775 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsl_w0_u8_m_tied1, svuint8_t, uint8_t,
 z0 = svlsl_m (p0, z0, x0))

 /*
-** lsl_w0_u8_m_untied: { xfail *-*-* }
+** lsl_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** lsl z0\.b, p0/m, z0\.b, \1
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_8_u8_m_tied1, svuint8_t,
 z0 = svlsl_m (p0, z0, 8))

 /*
-** lsl_8_u8_m_untied: { xfail *-*-* }
+** lsl_8_u8_m_untied:
 ** mov (z[0-9]+\.b), #8
 ** movprfx z0, z1
 ** lsl z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c
index a0207726144b..7244f64fb1df 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s16.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_16_s16_m_tied1, svint16_t,
 z0 = svlsl_wide_m (p0, z0, 16))

 /*
-** lsl_wide_16_s16_m_untied: { xfail *-*-* }
+** lsl_wide_16_s16_m_untied:
 ** mov (z[0-9]+\.d), #16
 ** movprfx z0, z1
 ** lsl z0\.h, p0/m, z0\.h, \1
@@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_16_s16_z_tied1, svint16_t,
 z0 = svlsl_wide_z (p0, z0, 16))

 /*
-** lsl_wide_16_s16_z_untied: { xfail *-*-* }
+** lsl_wide_16_s16_z_untied:
 ** mov (z[0-9]+\.d), #16
 ** movprfx z0\.h, p0/z, z1\.h
 ** lsl z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c
index bd67b7006b5c..04333ce477af 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s32.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_32_s32_m_tied1, svint32_t,
 z0 = svlsl_wide_m (p0, z0, 32))

 /*
-** lsl_wide_32_s32_m_untied: { xfail *-*-* }
+** lsl_wide_32_s32_m_untied:
 ** mov (z[0-9]+\.d), #32
 ** movprfx z0, z1
 ** lsl z0\.s, p0/m, z0\.s, \1
@@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_32_s32_z_tied1, svint32_t,
 z0 = svlsl_wide_z (p0, z0, 32))

 /*
-** lsl_wide_32_s32_z_untied: { xfail *-*-* }
+** lsl_wide_32_s32_z_untied:
 ** mov (z[0-9]+\.d), #32
 ** movprfx z0\.s, p0/z, z1\.s
 ** lsl z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c
index 7eb8627041d9..5847db7bd97f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_s8.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_8_s8_m_tied1, svint8_t,
 z0 = svlsl_wide_m (p0, z0, 8))

 /*
-** lsl_wide_8_s8_m_untied: { xfail *-*-* }
+** lsl_wide_8_s8_m_untied:
 ** mov (z[0-9]+\.d), #8
 ** movprfx z0, z1
 ** lsl z0\.b, p0/m, z0\.b, \1
@@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_8_s8_z_tied1, svint8_t,
 z0 = svlsl_wide_z (p0, z0, 8))

 /*
-** lsl_wide_8_s8_z_untied: { xfail *-*-* }
+** lsl_wide_8_s8_z_untied:
 ** mov (z[0-9]+\.d), #8
 ** movprfx z0\.b, p0/z, z1\.b
 ** lsl z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c
index 482f8d0557ba..2c047b7f7e5c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u16.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_16_u16_m_tied1, svuint16_t,
 z0 = svlsl_wide_m (p0, z0, 16))

 /*
-** lsl_wide_16_u16_m_untied: { xfail *-*-* }
+** lsl_wide_16_u16_m_untied:
 ** mov (z[0-9]+\.d), #16
 ** movprfx z0, z1
 ** lsl z0\.h, p0/m, z0\.h, \1
@@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_16_u16_z_tied1, svuint16_t,
 z0 = svlsl_wide_z (p0, z0, 16))

 /*
-** lsl_wide_16_u16_z_untied: { xfail *-*-* }
+** lsl_wide_16_u16_z_untied:
 ** mov (z[0-9]+\.d), #16
 ** movprfx z0\.h, p0/z, z1\.h
 ** lsl z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c
index 612897d24dfd..1e149633473b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u32.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_32_u32_m_tied1, svuint32_t,
 z0 = svlsl_wide_m (p0, z0, 32))

 /*
-** lsl_wide_32_u32_m_untied: { xfail *-*-* }
+** lsl_wide_32_u32_m_untied:
 ** mov (z[0-9]+\.d), #32
 ** movprfx z0, z1
 ** lsl z0\.s, p0/m, z0\.s, \1
@@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_32_u32_z_tied1, svuint32_t,
 z0 = svlsl_wide_z (p0, z0, 32))

 /*
-** lsl_wide_32_u32_z_untied: { xfail *-*-* }
+** lsl_wide_32_u32_z_untied:
 ** mov (z[0-9]+\.d), #32
 ** movprfx z0\.s, p0/z, z1\.s
 ** lsl z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c
index 6ca2f9e7da22..55f272170779 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsl_wide_u8.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (lsl_wide_8_u8_m_tied1, svuint8_t,
 z0 = svlsl_wide_m (p0, z0, 8))

 /*
-** lsl_wide_8_u8_m_untied: { xfail *-*-* }
+** lsl_wide_8_u8_m_untied:
 ** mov (z[0-9]+\.d), #8
 ** movprfx z0, z1
 ** lsl z0\.b, p0/m, z0\.b, \1
@@ -217,7 +217,7 @@ TEST_UNIFORM_Z (lsl_wide_8_u8_z_tied1, svuint8_t,
 z0 = svlsl_wide_z (p0, z0, 8))

 /*
-** lsl_wide_8_u8_z_untied: { xfail *-*-* }
+** lsl_wide_8_u8_z_untied:
 ** mov (z[0-9]+\.d), #8
 ** movprfx z0\.b, p0/z, z1\.b
 ** lsl z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c
index 61575645fad0..a41411986f79 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsr_w0_u16_m_tied1, svuint16_t, uint16_t,
 z0 = svlsr_m (p0, z0, x0))

 /*
-**
lsr_w0_u16_m_untied: { xfail *-*-* } +** lsr_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** lsr z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c index a049ca90556e..b773eedba7fe 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/lsr_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (lsr_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svlsr_m (p0, z0, x0)) /* -** lsr_w0_u8_m_untied: { xfail *-*-* } +** lsr_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** lsr z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c index 4b3148419c5c..60d23b356982 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_2_f16_m_tied1, svfloat16_t, z0 = svmad_m (p0, z0, z1, 2)) /* -** mad_2_f16_m_untied: { xfail *-*-* } +** mad_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmad z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c index d5dbc85d5a3c..1c89ac8cbf93 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_2_f32_m_tied1, svfloat32_t, z0 = svmad_m (p0, z0, z1, 2)) /* -** mad_2_f32_m_untied: { xfail *-*-* } +** mad_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fmad z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c index 7b5dc22826e4..cc5f8dd90347 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_2_f64_m_tied1, svfloat64_t, z0 = svmad_m (p0, z0, z1, 2)) /* -** mad_2_f64_m_untied: { xfail *-*-* } +** mad_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmad z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c index 02a6d4588b85..4644fa9866c3 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s16.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_s16_m_untied: { xfail *-*-* } +** mad_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s16_m_tied1, svint16_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s16_m_untied: { xfail *-*-* } +** mad_11_s16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c index d676a0c11420..36efef54df72 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s32_m_tied1, svint32_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s32_m_untied: { xfail *-*-* } +** mad_11_s32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** mad z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c index 7aa017536af7..4df7bc417728 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s64_m_tied1, svint64_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s64_m_untied: { xfail *-*-* } +** mad_11_s64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** mad z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c index 90d712686ca5..7e3dd6767998 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_s8.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_s8_m_tied1, svint8_t, int8_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_s8_m_untied: { xfail *-*-* } +** mad_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mad z0\.b, p0/m, z2\.b, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_s8_m_tied1, svint8_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_s8_m_untied: { xfail *-*-* } +** mad_11_s8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** mad z0\.b, p0/m, z2\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c index 1d2ad9c5fc9d..bebb8995c48b 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u16.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_u16_m_untied: { xfail *-*-* } +** mad_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u16_m_tied1, svuint16_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_u16_m_untied: { xfail *-*-* } +** mad_11_u16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** mad z0\.h, p0/m, z2\.h, 
\1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c index 4b51958b176c..3f4486d3f4fa 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u32_m_tied1, svuint32_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_u32_m_untied: { xfail *-*-* } +** mad_11_u32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** mad z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c index c4939093effb..e4d9a73fbac8 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u64_m_tied1, svuint64_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_u64_m_untied: { xfail *-*-* } +** mad_11_u64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** mad z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c index 0b4b1b8cfe6e..01ce99845ae5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mad_u8.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mad_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svmad_m (p0, z0, z1, x0)) /* -** mad_w0_u8_m_untied: { xfail *-*-* } +** mad_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mad z0\.b, p0/m, z2\.b, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mad_11_u8_m_tied1, svuint8_t, z0 = svmad_m (p0, z0, z1, 11)) /* -** mad_11_u8_m_untied: { xfail *-*-* } +** mad_11_u8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** mad z0\.b, p0/m, z2\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c index 6a2167522827..637715edb329 100644 --- 
a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmax_m (p0, z0, x0)) /* -** max_w0_s16_m_untied: { xfail *-*-* } +** max_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** smax z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s16_m_tied1, svint16_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_s16_m_untied: { xfail *-*-* } +** max_1_s16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** smax z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c index 07402c7a9019..428709fc74fd 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s32_m_tied1, svint32_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_s32_m_untied: { xfail *-*-* } +** max_1_s32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** smax z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c index 66f00fdf170a..284e097de030 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s64_m_tied1, svint64_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_s64_m_untied: { xfail *-*-* } +** max_1_s64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** smax z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c index c651a26f0d1a..123f1a96ea6f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_s8_m_tied1, svint8_t, int8_t, z0 = 
svmax_m (p0, z0, x0)) /* -** max_w0_s8_m_untied: { xfail *-*-* } +** max_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** smax z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_s8_m_tied1, svint8_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_s8_m_untied: { xfail *-*-* } +** max_1_s8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** smax z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c index 9a0b9543169d..459f89a1f0bb 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmax_m (p0, z0, x0)) /* -** max_w0_u16_m_untied: { xfail *-*-* } +** max_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** umax z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u16_m_tied1, svuint16_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_u16_m_untied: { xfail *-*-* } +** max_1_u16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** umax z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c index 91eba25c1316..1ed5c28b9415 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u32_m_tied1, svuint32_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_u32_m_untied: { xfail *-*-* } +** max_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** umax z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c index 5be4c9fb77ff..47d7c8398d7f 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u64.c @@ -64,7 +64,7 @@ 
TEST_UNIFORM_Z (max_1_u64_m_tied1, svuint64_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_u64_m_untied: { xfail *-*-* } +** max_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** umax z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c index 04c9ddb36a23..4301f3eb6410 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/max_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (max_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svmax_m (p0, z0, x0)) /* -** max_w0_u8_m_untied: { xfail *-*-* } +** max_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** umax z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (max_1_u8_m_tied1, svuint8_t, z0 = svmax_m (p0, z0, 1)) /* -** max_1_u8_m_untied: { xfail *-*-* } +** max_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** umax z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c index 14dfcc4c333b..a6c41cce07c7 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmin_m (p0, z0, x0)) /* -** min_w0_s16_m_untied: { xfail *-*-* } +** min_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** smin z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s16_m_tied1, svint16_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_s16_m_untied: { xfail *-*-* } +** min_1_s16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** smin z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c index cee2b649d4f7..ae9d13e342a9 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c +++ 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s32_m_tied1, svint32_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_s32_m_untied: { xfail *-*-* } +** min_1_s32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** smin z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c index 0d20bd0b28d6..dc2150040b07 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s64_m_tied1, svint64_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_s64_m_untied: { xfail *-*-* } +** min_1_s64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** smin z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c index 714b1576d5c6..0c0107e3ce2b 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_s8_m_tied1, svint8_t, int8_t, z0 = svmin_m (p0, z0, x0)) /* -** min_w0_s8_m_untied: { xfail *-*-* } +** min_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** smin z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_s8_m_tied1, svint8_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_s8_m_untied: { xfail *-*-* } +** min_1_s8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** smin z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c index df35cf1135ec..97c22427eb34 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmin_m (p0, z0, x0)) /* -** min_w0_u16_m_untied: { xfail *-*-* } 
+** min_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** umin z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u16_m_tied1, svuint16_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_u16_m_untied: { xfail *-*-* } +** min_1_u16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** umin z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c index 7f84d099d611..e5abd3c56192 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u32_m_tied1, svuint32_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_u32_m_untied: { xfail *-*-* } +** min_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** umin z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c index 06e6e5099204..b8b6829507bc 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u64_m_tied1, svuint64_t, z0 = svmin_m (p0, z0, 1)) /* -** min_1_u64_m_untied: { xfail *-*-* } +** min_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** umin z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c index 2ca274278a29..3179dad35dd6 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/min_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (min_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svmin_m (p0, z0, x0)) /* -** min_w0_u8_m_untied: { xfail *-*-* } +** min_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** umin z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (min_1_u8_m_tied1, svuint8_t, z0 = svmin_m (p0, z0, 1)) /* -** 
min_1_u8_m_untied: { xfail *-*-* } +** min_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** umin z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c index d32ce5845d10..a1d06c098719 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_2_f16_m_tied1, svfloat16_t, z0 = svmla_m (p0, z0, z1, 2)) /* -** mla_2_f16_m_untied: { xfail *-*-* } +** mla_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmla z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c index d10ba69a53ef..8741a3523b7a 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_2_f32_m_tied1, svfloat32_t, z0 = svmla_m (p0, z0, z1, 2)) /* -** mla_2_f32_m_untied: { xfail *-*-* } +** mla_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmla z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c index 94c1e0b07532..505f77a871c0 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_2_f64_m_tied1, svfloat64_t, z0 = svmla_m (p0, z0, z1, 2)) /* -** mla_2_f64_m_untied: { xfail *-*-* } +** mla_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fmla z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c index f3ed191db6ab..9905f6e3ac35 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s16.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_s16_m_tied1, svint16_t, int16_t, z0 = svmla_m (p0, z0, z1, x0)) /* -** mla_w0_s16_m_untied: { xfail *-*-* } +** mla_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mla z0\.h, p0/m, z2\.h, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s16_m_tied1, svint16_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_s16_m_untied: { xfail *-*-* } +** mla_11_s16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** mla z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c index 5e8001a71d81..a9c32cca1ba2 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s32_m_tied1, svint32_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_s32_m_untied: { xfail *-*-* } +** mla_11_s32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** mla z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c index 7b619e521195..ed2693b01b42 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s64_m_tied1, svint64_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_s64_m_untied: { xfail *-*-* } +** mla_11_s64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** mla z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c 
b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c index 47468947d78b..151cf6547b67 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_s8.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_s8_m_tied1, svint8_t, int8_t, z0 = svmla_m (p0, z0, z1, x0)) /* -** mla_w0_s8_m_untied: { xfail *-*-* } +** mla_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mla z0\.b, p0/m, z2\.b, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_s8_m_tied1, svint8_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_s8_m_untied: { xfail *-*-* } +** mla_11_s8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** mla z0\.b, p0/m, z2\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c index 7238e428f686..36c60ba7264c 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u16.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svmla_m (p0, z0, z1, x0)) /* -** mla_w0_u16_m_untied: { xfail *-*-* } +** mla_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** mla z0\.h, p0/m, z2\.h, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u16_m_tied1, svuint16_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_u16_m_untied: { xfail *-*-* } +** mla_11_u16_m_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** mla z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c index 7a68bce3d1f5..69503c438c86 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u32_m_tied1, svuint32_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_u32_m_untied: { xfail *-*-* } +** mla_11_u32_m_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** mla z0\.s, p0/m, 
z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c index 6233265c8303..5fcbcf6f69f6 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u64_m_tied1, svuint64_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_u64_m_untied: { xfail *-*-* } +** mla_11_u64_m_untied: ** mov (z[0-9]+\.d), #11 ** movprfx z0, z1 ** mla z0\.d, p0/m, z2\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c index 832ed41410e3..ec92434fb7a7 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mla_u8.c @@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mla_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svmla_m (p0, z0, z1, x0)) /* -** mla_w0_u8_m_untied: { xfail *-*-* } +** mla_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** mla z0\.b, p0/m, z2\.b, \1 @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mla_11_u8_m_tied1, svuint8_t, z0 = svmla_m (p0, z0, z1, 11)) /* -** mla_11_u8_m_untied: { xfail *-*-* } +** mla_11_u8_m_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** mla z0\.b, p0/m, z2\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c index b58104d5eafe..1b217dcea3b4 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f16.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_2_f16_m_tied1, svfloat16_t, z0 = svmls_m (p0, z0, z1, 2)) /* -** mls_2_f16_m_untied: { xfail *-*-* } +** mls_2_f16_m_untied: ** fmov (z[0-9]+\.h), #2\.0(?:e\+0)? 
** movprfx z0, z1 ** fmls z0\.h, p0/m, z2\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c index 7d6e60519b0c..dddfb2cfbecf 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f32.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_2_f32_m_tied1, svfloat32_t, z0 = svmls_m (p0, z0, z1, 2)) /* -** mls_2_f32_m_untied: { xfail *-*-* } +** mls_2_f32_m_untied: ** fmov (z[0-9]+\.s), #2\.0(?:e\+0)? ** movprfx z0, z1 ** fmls z0\.s, p0/m, z2\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c index a6ed28eec5c3..1836674ac976 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_f64.c @@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_2_f64_m_tied1, svfloat64_t, z0 = svmls_m (p0, z0, z1, 2)) /* -** mls_2_f64_m_untied: { xfail *-*-* } +** mls_2_f64_m_untied: ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)? 
 ** movprfx	z0, z1
 ** fmls	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c
index e199829c4adc..1cf387c38f8c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_s16_m_untied: { xfail *-*-* }
+** mls_w0_s16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** mls	z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s16_m_tied1, svint16_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s16_m_untied: { xfail *-*-* }
+** mls_11_s16_m_untied:
 ** mov	(z[0-9]+\.h), #11
 ** movprfx	z0, z1
 ** mls	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c
index fe386d01cd9e..35c3cc248a10 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s32_m_tied1, svint32_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s32_m_untied: { xfail *-*-* }
+** mls_11_s32_m_untied:
 ** mov	(z[0-9]+\.s), #11
 ** movprfx	z0, z1
 ** mls	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c
index 2998d733fbc2..2c51d530341f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s64_m_tied1, svint64_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s64_m_untied: { xfail *-*-* }
+** mls_11_s64_m_untied:
 ** mov	(z[0-9]+\.d), #11
 ** movprfx	z0, z1
 ** mls	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c
index c60c431455f0..c1151e9299d7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_s8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_s8_m_untied: { xfail *-*-* }
+** mls_w0_s8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** mls	z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_s8_m_tied1, svint8_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_s8_m_untied: { xfail *-*-* }
+** mls_11_s8_m_untied:
 ** mov	(z[0-9]+\.b), #11
 ** movprfx	z0, z1
 ** mls	z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c
index e8a9f5cd94c6..48aabf85e566 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_u16_m_untied: { xfail *-*-* }
+** mls_w0_u16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** mls	z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u16_m_tied1, svuint16_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u16_m_untied: { xfail *-*-* }
+** mls_11_u16_m_untied:
 ** mov	(z[0-9]+\.h), #11
 ** movprfx	z0, z1
 ** mls	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c
index 47e885012efb..4748372a3989 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u32_m_tied1, svuint32_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u32_m_untied: { xfail *-*-* }
+** mls_11_u32_m_untied:
 ** mov	(z[0-9]+\.s), #11
 ** movprfx	z0, z1
 ** mls	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c
index 4d441b759206..25a43a549018 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u64_m_tied1, svuint64_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u64_m_untied: { xfail *-*-* }
+** mls_11_u64_m_untied:
 ** mov	(z[0-9]+\.d), #11
 ** movprfx	z0, z1
 ** mls	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c
index 0489aaa7cf96..5bf03f5a42e4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mls_u8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (mls_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svmls_m (p0, z0, z1, x0))

 /*
-** mls_w0_u8_m_untied: { xfail *-*-* }
+** mls_w0_u8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** mls	z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (mls_11_u8_m_tied1, svuint8_t,
 		z0 = svmls_m (p0, z0, z1, 11))

 /*
-** mls_11_u8_m_untied: { xfail *-*-* }
+** mls_11_u8_m_untied:
 ** mov	(z[0-9]+\.b), #11
 ** movprfx	z0, z1
 ** mls	z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c
index 894961a9ec58..b8be34459ff6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_2_f16_m_tied1, svfloat16_t,
 		z0 = svmsb_m (p0, z0, z1, 2))

 /*
-** msb_2_f16_m_untied: { xfail *-*-* }
+** msb_2_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmsb	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c
index 0d0915958a3d..d1bd768dca23 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_2_f32_m_tied1, svfloat32_t,
 		z0 = svmsb_m (p0, z0, z1, 2))

 /*
-** msb_2_f32_m_untied: { xfail *-*-* }
+** msb_2_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmsb	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c
index 52dc3968e247..902558807bca 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_2_f64_m_tied1, svfloat64_t,
 		z0 = svmsb_m (p0, z0, z1, 2))

 /*
-** msb_2_f64_m_untied: { xfail *-*-* }
+** msb_2_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmsb	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c
index 56347cfb9182..e2b8e8b5352c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_s16_m_untied: { xfail *-*-* }
+** msb_w0_s16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** msb	z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s16_m_tied1, svint16_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s16_m_untied: { xfail *-*-* }
+** msb_11_s16_m_untied:
 ** mov	(z[0-9]+\.h), #11
 ** movprfx	z0, z1
 ** msb	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c
index fb7a7815b57e..afb4d5e8cb5c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s32_m_tied1, svint32_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s32_m_untied: { xfail *-*-* }
+** msb_11_s32_m_untied:
 ** mov	(z[0-9]+\.s), #11
 ** movprfx	z0, z1
 ** msb	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c
index 6829fab36550..c3343aff20f2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s64_m_tied1, svint64_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s64_m_untied: { xfail *-*-* }
+** msb_11_s64_m_untied:
 ** mov	(z[0-9]+\.d), #11
 ** movprfx	z0, z1
 ** msb	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c
index d7fcafdd0dfa..255535e41b4c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_s8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_s8_m_untied: { xfail *-*-* }
+** msb_w0_s8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** msb	z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_s8_m_tied1, svint8_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_s8_m_untied: { xfail *-*-* }
+** msb_11_s8_m_untied:
 ** mov	(z[0-9]+\.b), #11
 ** movprfx	z0, z1
 ** msb	z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c
index 437a96040e12..d7fe8f081b6c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u16.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_u16_m_untied: { xfail *-*-* }
+** msb_w0_u16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** msb	z0\.h, p0/m, z2\.h, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u16_m_tied1, svuint16_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u16_m_untied: { xfail *-*-* }
+** msb_11_u16_m_untied:
 ** mov	(z[0-9]+\.h), #11
 ** movprfx	z0, z1
 ** msb	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c
index aaaf0344aeac..99b61193f2e9 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u32_m_tied1, svuint32_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u32_m_untied: { xfail *-*-* }
+** msb_11_u32_m_untied:
 ** mov	(z[0-9]+\.s), #11
 ** movprfx	z0, z1
 ** msb	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c
index 5c5d33073786..a7aa611977b5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u64_m_tied1, svuint64_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u64_m_untied: { xfail *-*-* }
+** msb_11_u64_m_untied:
 ** mov	(z[0-9]+\.d), #11
 ** movprfx	z0, z1
 ** msb	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c
index 5665ec9e3207..17ce5e99aa42 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/msb_u8.c
@@ -54,7 +54,7 @@ TEST_UNIFORM_ZX (msb_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svmsb_m (p0, z0, z1, x0))

 /*
-** msb_w0_u8_m_untied: { xfail *-*-* }
+** msb_w0_u8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** msb	z0\.b, p0/m, z2\.b, \1
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (msb_11_u8_m_tied1, svuint8_t,
 		z0 = svmsb_m (p0, z0, z1, 11))

 /*
-** msb_11_u8_m_untied: { xfail *-*-* }
+** msb_11_u8_m_untied:
 ** mov	(z[0-9]+\.b), #11
 ** movprfx	z0, z1
 ** msb	z0\.b, p0/m, z2\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c
index ef3de0c59532..fd9753b0ee24 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_1_f16_m_tied1, svfloat16_t,
 		z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f16_m_untied: { xfail *-*-* }
+** mul_1_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmul	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c
index 481fe999c47c..6520aa8601a3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f16_notrap.c
@@ -65,7 +65,7 @@ TEST_UNIFORM_Z (mul_1_f16_m_tied1, svfloat16_t,
 		z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f16_m_untied: { xfail *-*-* }
+** mul_1_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmul	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c
index 5b3df6fde9af..3c6433753595 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_1_f32_m_tied1, svfloat32_t,
 		z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f32_m_untied: { xfail *-*-* }
+** mul_1_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmul	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c
index eb2d240efd63..137fb054d738 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f32_notrap.c
@@ -65,7 +65,7 @@ TEST_UNIFORM_Z (mul_1_f32_m_tied1, svfloat32_t,
 		z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f32_m_untied: { xfail *-*-* }
+** mul_1_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmul	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c
index f5654a9f19dc..00a46c22d1df 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_1_f64_m_tied1, svfloat64_t,
 		z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f64_m_untied: { xfail *-*-* }
+** mul_1_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmul	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c
index d865618d4659..0a6b92a26861 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_f64_notrap.c
@@ -65,7 +65,7 @@ TEST_UNIFORM_Z (mul_1_f64_m_tied1, svfloat64_t,
 		z0 = svmul_m (p0, z0, 1))

 /*
-** mul_1_f64_m_untied: { xfail *-*-* }
+** mul_1_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmul	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
index aa08bc274050..80295f7bec3a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svmul_m (p0, z0, x0))

 /*
-** mul_w0_s16_m_untied: { xfail *-*-* }
+** mul_w0_s16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** mul	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s16_m_tied1, svint16_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_s16_m_untied: { xfail *-*-* }
+** mul_2_s16_m_untied:
 ** mov	(z[0-9]+\.h), #2
 ** movprfx	z0, z1
 ** mul	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
index 7acf77fdbbff..01c224932d99 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s32_m_tied1, svint32_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_s32_m_untied: { xfail *-*-* }
+** mul_2_s32_m_untied:
 ** mov	(z[0-9]+\.s), #2
 ** movprfx	z0, z1
 ** mul	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
index 549105f1efd1..c3cf581a0a4f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s64_m_tied1, svint64_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_s64_m_untied: { xfail *-*-* }
+** mul_2_s64_m_untied:
 ** mov	(z[0-9]+\.d), #2
 ** movprfx	z0, z1
 ** mul	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
index 012e6f250989..4ac4c8eeb2aa 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svmul_m (p0, z0, x0))

 /*
-** mul_w0_s8_m_untied: { xfail *-*-* }
+** mul_w0_s8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** mul	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_s8_m_tied1, svint8_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_s8_m_untied: { xfail *-*-* }
+** mul_2_s8_m_untied:
 ** mov	(z[0-9]+\.b), #2
 ** movprfx	z0, z1
 ** mul	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
index 300987eb6e63..affee965005d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svmul_m (p0, z0, x0))

 /*
-** mul_w0_u16_m_untied: { xfail *-*-* }
+** mul_w0_u16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** mul	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u16_m_tied1, svuint16_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_u16_m_untied: { xfail *-*-* }
+** mul_2_u16_m_untied:
 ** mov	(z[0-9]+\.h), #2
 ** movprfx	z0, z1
 ** mul	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
index 288d17b163ce..38b4bc71b401 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u32_m_tied1, svuint32_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_u32_m_untied: { xfail *-*-* }
+** mul_2_u32_m_untied:
 ** mov	(z[0-9]+\.s), #2
 ** movprfx	z0, z1
 ** mul	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
index f6959dbc7235..ab655554db7f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u64_m_tied1, svuint64_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_u64_m_untied: { xfail *-*-* }
+** mul_2_u64_m_untied:
 ** mov	(z[0-9]+\.d), #2
 ** movprfx	z0, z1
 ** mul	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
index b2745a48f506..ef0a5220dc08 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mul_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svmul_m (p0, z0, x0))

 /*
-** mul_w0_u8_m_untied: { xfail *-*-* }
+** mul_w0_u8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** mul	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mul_2_u8_m_tied1, svuint8_t,
 		z0 = svmul_m (p0, z0, 2))

 /*
-** mul_2_u8_m_untied: { xfail *-*-* }
+** mul_2_u8_m_untied:
 ** mov	(z[0-9]+\.b), #2
 ** movprfx	z0, z1
 ** mul	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c
index a81532f5d898..576aedce8dd4 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_s16_m_tied1, svint16_t, int16_t,
 		z0 = svmulh_m (p0, z0, x0))

 /*
-** mulh_w0_s16_m_untied: { xfail *-*-* }
+** mulh_w0_s16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** smulh	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s16_m_tied1, svint16_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_s16_m_untied: { xfail *-*-* }
+** mulh_11_s16_m_untied:
 ** mov	(z[0-9]+\.h), #11
 ** movprfx	z0, z1
 ** smulh	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c
index 078feeb6a322..331a46fad762 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s32_m_tied1, svint32_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_s32_m_untied: { xfail *-*-* }
+** mulh_11_s32_m_untied:
 ** mov	(z[0-9]+\.s), #11
 ** movprfx	z0, z1
 ** smulh	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c
index a87d4d5ce0b1..c284bcf789d9 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s64_m_tied1, svint64_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_s64_m_untied: { xfail *-*-* }
+** mulh_11_s64_m_untied:
 ** mov	(z[0-9]+\.d), #11
 ** movprfx	z0, z1
 ** smulh	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c
index f9cd01afdc96..43271097e12d 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_s8_m_tied1, svint8_t, int8_t,
 		z0 = svmulh_m (p0, z0, x0))

 /*
-** mulh_w0_s8_m_untied: { xfail *-*-* }
+** mulh_w0_s8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** smulh	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_s8_m_tied1, svint8_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_s8_m_untied: { xfail *-*-* }
+** mulh_11_s8_m_untied:
 ** mov	(z[0-9]+\.b), #11
 ** movprfx	z0, z1
 ** smulh	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c
index e9173eb243ec..7f239984ca83 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_u16_m_tied1, svuint16_t, uint16_t,
 		z0 = svmulh_m (p0, z0, x0))

 /*
-** mulh_w0_u16_m_untied: { xfail *-*-* }
+** mulh_w0_u16_m_untied:
 ** mov	(z[0-9]+\.h), w0
 ** movprfx	z0, z1
 ** umulh	z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u16_m_tied1, svuint16_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_u16_m_untied: { xfail *-*-* }
+** mulh_11_u16_m_untied:
 ** mov	(z[0-9]+\.h), #11
 ** movprfx	z0, z1
 ** umulh	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c
index de1f24f090cd..2c187d620418 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u32_m_tied1, svuint32_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_u32_m_untied: { xfail *-*-* }
+** mulh_11_u32_m_untied:
 ** mov	(z[0-9]+\.s), #11
 ** movprfx	z0, z1
 ** umulh	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c
index 0d7e12a7c841..1176a31317e5 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u64_m_tied1, svuint64_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_u64_m_untied: { xfail *-*-* }
+** mulh_11_u64_m_untied:
 ** mov	(z[0-9]+\.d), #11
 ** movprfx	z0, z1
 ** umulh	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c
index db7b1be1bdf9..5bd1009a2840 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulh_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (mulh_w0_u8_m_tied1, svuint8_t, uint8_t,
 		z0 = svmulh_m (p0, z0, x0))

 /*
-** mulh_w0_u8_m_untied: { xfail *-*-* }
+** mulh_w0_u8_m_untied:
 ** mov	(z[0-9]+\.b), w0
 ** movprfx	z0, z1
 ** umulh	z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulh_11_u8_m_tied1, svuint8_t,
 		z0 = svmulh_m (p0, z0, 11))

 /*
-** mulh_11_u8_m_untied: { xfail *-*-* }
+** mulh_11_u8_m_untied:
 ** mov	(z[0-9]+\.b), #11
 ** movprfx	z0, z1
 ** umulh	z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c
index b8d6bf5d92c8..174c10e83dcc 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f16.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulx_1_f16_m_tied1, svfloat16_t,
 		z0 = svmulx_m (p0, z0, 1))

 /*
-** mulx_1_f16_m_untied: { xfail *-*-* }
+** mulx_1_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmulx	z0\.h, p0/m, z0\.h, \1
@@ -85,7 +85,7 @@ TEST_UNIFORM_Z (mulx_0p5_f16_m_tied1, svfloat16_t,
 		z0 = svmulx_m (p0, z0, 0.5))

 /*
-** mulx_0p5_f16_m_untied: { xfail *-*-* }
+** mulx_0p5_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #(?:0\.5|5\.0e-1)
 ** movprfx	z0, z1
 ** fmulx	z0\.h, p0/m, z0\.h, \1
@@ -106,7 +106,7 @@ TEST_UNIFORM_Z (mulx_2_f16_m_tied1, svfloat16_t,
 		z0 = svmulx_m (p0, z0, 2))

 /*
-** mulx_2_f16_m_untied: { xfail *-*-* }
+** mulx_2_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmulx	z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c
index b8f5c1310d76..8baf4e849d23 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulx_1_f32_m_tied1, svfloat32_t,
 		z0 = svmulx_m (p0, z0, 1))

 /*
-** mulx_1_f32_m_untied: { xfail *-*-* }
+** mulx_1_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmulx	z0\.s, p0/m, z0\.s, \1
@@ -85,7 +85,7 @@ TEST_UNIFORM_Z (mulx_0p5_f32_m_tied1, svfloat32_t,
 		z0 = svmulx_m (p0, z0, 0.5))

 /*
-** mulx_0p5_f32_m_untied: { xfail *-*-* }
+** mulx_0p5_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #(?:0\.5|5\.0e-1)
 ** movprfx	z0, z1
 ** fmulx	z0\.s, p0/m, z0\.s, \1
@@ -106,7 +106,7 @@ TEST_UNIFORM_Z (mulx_2_f32_m_tied1, svfloat32_t,
 		z0 = svmulx_m (p0, z0, 2))

 /*
-** mulx_2_f32_m_untied: { xfail *-*-* }
+** mulx_2_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmulx	z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c
index 746cc94143dc..1ab13caba56b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mulx_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (mulx_1_f64_m_tied1, svfloat64_t,
 		z0 = svmulx_m (p0, z0, 1))

 /*
-** mulx_1_f64_m_untied: { xfail *-*-* }
+** mulx_1_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #1\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmulx	z0\.d, p0/m, z0\.d, \1
@@ -85,7 +85,7 @@ TEST_UNIFORM_Z (mulx_0p5_f64_m_tied1, svfloat64_t,
 		z0 = svmulx_m (p0, z0, 0.5))

 /*
-** mulx_0p5_f64_m_untied: { xfail *-*-* }
+** mulx_0p5_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #(?:0\.5|5\.0e-1)
 ** movprfx	z0, z1
 ** fmulx	z0\.d, p0/m, z0\.d, \1
@@ -106,7 +106,7 @@ TEST_UNIFORM_Z (mulx_2_f64_m_tied1, svfloat64_t,
 		z0 = svmulx_m (p0, z0, 2))

 /*
-** mulx_2_f64_m_untied: { xfail *-*-* }
+** mulx_2_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fmulx	z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c
index 92e0664e6476..b280f2685ff0 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmad_2_f16_m_tied1, svfloat16_t,
 		z0 = svnmad_m (p0, z0, z1, 2))

 /*
-** nmad_2_f16_m_untied: { xfail *-*-* }
+** nmad_2_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmad	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c
index cef731ebcfe8..f8c91b5b52f2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmad_2_f32_m_tied1, svfloat32_t,
 		z0 = svnmad_m (p0, z0, z1, 2))

 /*
-** nmad_2_f32_m_untied: { xfail *-*-* }
+** nmad_2_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmad	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c
index 43b97c0de50e..4ff6471b2e16 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmad_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmad_2_f64_m_tied1, svfloat64_t,
 		z0 = svnmad_m (p0, z0, z1, 2))

 /*
-** nmad_2_f64_m_untied: { xfail *-*-* }
+** nmad_2_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmad	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c
index 75d0ec7d3ab3..cd5bb6fd5bab 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmla_2_f16_m_tied1, svfloat16_t,
 		z0 = svnmla_m (p0, z0, z1, 2))

 /*
-** nmla_2_f16_m_untied: { xfail *-*-* }
+** nmla_2_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmla	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c
index da594d3eb955..f8d44fd4d250 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmla_2_f32_m_tied1, svfloat32_t,
 		z0 = svnmla_m (p0, z0, z1, 2))

 /*
-** nmla_2_f32_m_untied: { xfail *-*-* }
+** nmla_2_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmla	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c
index 73f15f417627..4e599be327c2 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmla_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmla_2_f64_m_tied1, svfloat64_t,
 		z0 = svnmla_m (p0, z0, z1, 2))

 /*
-** nmla_2_f64_m_untied: { xfail *-*-* }
+** nmla_2_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmla	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c
index ccf7e51ffc99..dc8b1fea7c5a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmls_2_f16_m_tied1, svfloat16_t,
 		z0 = svnmls_m (p0, z0, z1, 2))

 /*
-** nmls_2_f16_m_untied: { xfail *-*-* }
+** nmls_2_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmls	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c
index 10d345026f70..84e74e13aa62 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmls_2_f32_m_tied1, svfloat32_t,
 		z0 = svnmls_m (p0, z0, z1, 2))

 /*
-** nmls_2_f32_m_untied: { xfail *-*-* }
+** nmls_2_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmls	z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c
index bf2a4418a9fe..27d4682d28ff 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmls_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmls_2_f64_m_tied1, svfloat64_t,
 		z0 = svnmls_m (p0, z0, z1, 2))

 /*
-** nmls_2_f64_m_untied: { xfail *-*-* }
+** nmls_2_f64_m_untied:
 ** fmov	(z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmls	z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c
index 5311ceb4408f..c485fb6b6545 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f16.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmsb_2_f16_m_tied1, svfloat16_t,
 		z0 = svnmsb_m (p0, z0, z1, 2))

 /*
-** nmsb_2_f16_m_untied: { xfail *-*-* }
+** nmsb_2_f16_m_untied:
 ** fmov	(z[0-9]+\.h), #2\.0(?:e\+0)?
 ** movprfx	z0, z1
 ** fnmsb	z0\.h, p0/m, z2\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c
index 6f1407a8717e..1c1294d5458f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f32.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmsb_2_f32_m_tied1, svfloat32_t,
 		z0 = svnmsb_m (p0, z0, z1, 2))

 /*
-** nmsb_2_f32_m_untied: { xfail *-*-* }
+** nmsb_2_f32_m_untied:
 ** fmov	(z[0-9]+\.s), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fnmsb z0\.s, p0/m, z2\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c
index 5e4e1dd7ea67..50c55a093064 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/nmsb_f64.c
@@ -75,7 +75,7 @@ TEST_UNIFORM_Z (nmsb_2_f64_m_tied1, svfloat64_t,
 		z0 = svnmsb_m (p0, z0, z1, 2))
 
 /*
-** nmsb_2_f64_m_untied: { xfail *-*-* }
+** nmsb_2_f64_m_untied:
 ** fmov (z[0-9]+\.d), #2\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fnmsb z0\.d, p0/m, z2\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c
index 62b707a9c696..f91af0a2494a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svorr_m (p0, z0, x0))
 
 /*
-** orr_w0_s16_m_untied: { xfail *-*-* }
+** orr_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** orr z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s16_m_tied1, svint16_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_s16_m_untied: { xfail *-*-* }
+** orr_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** orr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c
index 2e0e1e8883dd..514e65a788e9 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s32_m_tied1, svint32_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_s32_m_untied: { xfail *-*-* }
+** orr_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** orr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c
index 1538fdd14b13..4f6cad749c5c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s64_m_tied1, svint64_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_s64_m_untied: { xfail *-*-* }
+** orr_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** orr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c
index b6483b6e76ec..d8a175b9a03b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svorr_m (p0, z0, x0))
 
 /*
-** orr_w0_s8_m_untied: { xfail *-*-* }
+** orr_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** orr z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_s8_m_tied1, svint8_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_s8_m_untied: { xfail *-*-* }
+** orr_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** orr z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c
index 000a0444c9b0..4f2e28d10dcf 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svorr_m (p0, z0, x0))
 
 /*
-** orr_w0_u16_m_untied: { xfail *-*-* }
+** orr_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** orr z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u16_m_tied1, svuint16_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_u16_m_untied: { xfail *-*-* }
+** orr_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** orr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c
index 8e2351d162b8..0f155c6e9d74 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u32_m_tied1, svuint32_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_u32_m_untied: { xfail *-*-* }
+** orr_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** orr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c
index 323e2101e472..eec5e98444bb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u64_m_tied1, svuint64_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_u64_m_untied: { xfail *-*-* }
+** orr_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** orr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c
index efe5591b4728..17be109914de 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/orr_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (orr_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svorr_m (p0, z0, x0))
 
 /*
-** orr_w0_u8_m_untied: { xfail *-*-* }
+** orr_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** orr z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (orr_1_u8_m_tied1, svuint8_t,
 		z0 = svorr_m (p0, z0, 1))
 
 /*
-** orr_1_u8_m_untied: { xfail *-*-* }
+** orr_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** orr z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c
index 9c554255b443..cb4225c9a477 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (scale_w0_f16_m_tied1, svfloat16_t, int16_t,
 		 z0 = svscale_m (p0, z0, x0))
 
 /*
-** scale_w0_f16_m_untied: { xfail *-*-* }
+** scale_w0_f16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** fscale z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (scale_3_f16_m_tied1, svfloat16_t,
 		z0 = svscale_m (p0, z0, 3))
 
 /*
-** scale_3_f16_m_untied: { xfail *-*-* }
+** scale_3_f16_m_untied:
 ** mov (z[0-9]+\.h), #3
 ** movprfx z0, z1
 ** fscale z0\.h, p0/m, z0\.h, \1
@@ -127,7 +127,7 @@ TEST_UNIFORM_ZX (scale_w0_f16_z_tied1, svfloat16_t, int16_t,
 		 z0 = svscale_z (p0, z0, x0))
 
 /*
-** scale_w0_f16_z_untied: { xfail *-*-* }
+** scale_w0_f16_z_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0\.h, p0/z, z1\.h
 ** fscale z0\.h, p0/m, z0\.h, \1
@@ -149,7 +149,7 @@ TEST_UNIFORM_Z (scale_3_f16_z_tied1, svfloat16_t,
 		z0 = svscale_z (p0, z0, 3))
 
 /*
-** scale_3_f16_z_untied: { xfail *-*-* }
+** scale_3_f16_z_untied:
 ** mov (z[0-9]+\.h), #3
 ** movprfx z0\.h, p0/z, z1\.h
 ** fscale z0\.h, p0/m, z0\.h, \1
@@ -211,7 +211,7 @@ TEST_UNIFORM_ZX (scale_w0_f16_x_tied1, svfloat16_t, int16_t,
 		 z0 = svscale_x (p0, z0, x0))
 
 /*
-** scale_w0_f16_x_untied: { xfail *-*-* }
+** scale_w0_f16_x_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** fscale z0\.h, p0/m, z0\.h, \1
@@ -232,7 +232,7 @@ TEST_UNIFORM_Z (scale_3_f16_x_tied1, svfloat16_t,
 		z0 = svscale_x (p0, z0, 3))
 
 /*
-** scale_3_f16_x_untied: { xfail *-*-* }
+** scale_3_f16_x_untied:
 ** mov (z[0-9]+\.h), #3
 ** movprfx z0, z1
 ** fscale z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c
index 12a1b1d8686b..5079ee364937 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (scale_3_f32_m_tied1, svfloat32_t,
 		z0 = svscale_m (p0, z0, 3))
 
 /*
-** scale_3_f32_m_untied: { xfail *-*-* }
+** scale_3_f32_m_untied:
 ** mov (z[0-9]+\.s), #3
 ** movprfx z0, z1
 ** fscale z0\.s, p0/m, z0\.s, \1
@@ -149,7 +149,7 @@ TEST_UNIFORM_Z (scale_3_f32_z_tied1, svfloat32_t,
 		z0 = svscale_z (p0, z0, 3))
 
 /*
-** scale_3_f32_z_untied: { xfail *-*-* }
+** scale_3_f32_z_untied:
 ** mov (z[0-9]+\.s), #3
 ** movprfx z0\.s, p0/z, z1\.s
 ** fscale z0\.s, p0/m, z0\.s, \1
@@ -232,7 +232,7 @@ TEST_UNIFORM_Z (scale_3_f32_x_tied1, svfloat32_t,
 		z0 = svscale_x (p0, z0, 3))
 
 /*
-** scale_3_f32_x_untied: { xfail *-*-* }
+** scale_3_f32_x_untied:
 ** mov (z[0-9]+\.s), #3
 ** movprfx z0, z1
 ** fscale z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c
index f6b117185848..4d6235bfbaf3 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/scale_f64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (scale_3_f64_m_tied1, svfloat64_t,
 		z0 = svscale_m (p0, z0, 3))
 
 /*
-** scale_3_f64_m_untied: { xfail *-*-* }
+** scale_3_f64_m_untied:
 ** mov (z[0-9]+\.d), #3
 ** movprfx z0, z1
 ** fscale z0\.d, p0/m, z0\.d, \1
@@ -149,7 +149,7 @@ TEST_UNIFORM_Z (scale_3_f64_z_tied1, svfloat64_t,
 		z0 = svscale_z (p0, z0, 3))
 
 /*
-** scale_3_f64_z_untied: { xfail *-*-* }
+** scale_3_f64_z_untied:
 ** mov (z[0-9]+\.d), #3
 ** movprfx z0\.d, p0/z, z1\.d
 ** fscale z0\.d, p0/m, z0\.d, \1
@@ -232,7 +232,7 @@ TEST_UNIFORM_Z (scale_3_f64_x_tied1, svfloat64_t,
 		z0 = svscale_x (p0, z0, 3))
 
 /*
-** scale_3_f64_x_untied: { xfail *-*-* }
+** scale_3_f64_x_untied:
 ** mov (z[0-9]+\.d), #3
 ** movprfx z0, z1
 ** fscale z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c
index aea8ea2b4aa5..5b156a796126 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_s16_m_untied: { xfail *-*-* }
+** sub_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** sub z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s16_m_tied1, svint16_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s16_m_untied: { xfail *-*-* }
+** sub_1_s16_m_untied:
 ** mov (z[0-9]+)\.b, #-1
 ** movprfx z0, z1
 ** add z0\.h, p0/m, z0\.h, \1\.h
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c
index db6f3df90199..344be4fa50bd 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s32_m_tied1, svint32_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s32_m_untied: { xfail *-*-* }
+** sub_1_s32_m_untied:
 ** mov (z[0-9]+)\.b, #-1
 ** movprfx z0, z1
 ** add z0\.s, p0/m, z0\.s, \1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c
index b9184c3a821c..b6eb7f2fc22f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s64_m_tied1, svint64_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s64_m_untied: { xfail *-*-* }
+** sub_1_s64_m_untied:
 ** mov (z[0-9]+)\.b, #-1
 ** movprfx z0, z1
 ** add z0\.d, p0/m, z0\.d, \1\.d
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c
index 0d7ba99aa569..3edd4b09a963 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_s8_m_untied: { xfail *-*-* }
+** sub_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** sub z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_s8_m_tied1, svint8_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_s8_m_untied: { xfail *-*-* }
+** sub_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #-1
 ** movprfx z0, z1
 ** add z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c
index 89620e159bf3..77cf40891c29 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_u16_m_untied: { xfail *-*-* }
+** sub_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** sub z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u16_m_tied1, svuint16_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u16_m_untied: { xfail *-*-* }
+** sub_1_u16_m_untied:
 ** mov (z[0-9]+)\.b, #-1
 ** movprfx z0, z1
 ** add z0\.h, p0/m, z0\.h, \1\.h
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c
index c4b405d4dd4f..0befdd72ec5f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u32_m_tied1, svuint32_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u32_m_untied: { xfail *-*-* }
+** sub_1_u32_m_untied:
 ** mov (z[0-9]+)\.b, #-1
 ** movprfx z0, z1
 ** add z0\.s, p0/m, z0\.s, \1\.s
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c
index fb7f7173a006..3602c112ceae 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u64_m_tied1, svuint64_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u64_m_untied: { xfail *-*-* }
+** sub_1_u64_m_untied:
 ** mov (z[0-9]+)\.b, #-1
 ** movprfx z0, z1
 ** add z0\.d, p0/m, z0\.d, \1\.d
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c
index 4552041910f7..036fca2bb296 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/sub_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (sub_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svsub_m (p0, z0, x0))
 
 /*
-** sub_w0_u8_m_untied: { xfail *-*-* }
+** sub_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** sub z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (sub_1_u8_m_tied1, svuint8_t,
 		z0 = svsub_m (p0, z0, 1))
 
 /*
-** sub_1_u8_m_untied: { xfail *-*-* }
+** sub_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #-1
 ** movprfx z0, z1
 ** add z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c
index 6929b2862184..b4d6f7bdd7eb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (subr_m1_f16_m_tied1, svfloat16_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f16_m_untied: { xfail *-*-* }
+** subr_m1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #-1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fsubr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c
index a31ebd2ef7f3..78985a1311ba 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f16_notrap.c
@@ -103,7 +103,7 @@ TEST_UNIFORM_Z (subr_m1_f16_m_tied1, svfloat16_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f16_m_untied: { xfail *-*-* }
+** subr_m1_f16_m_untied:
 ** fmov (z[0-9]+\.h), #-1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fsubr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c
index 5bf90a391451..a0a4b98675ca 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (subr_m1_f32_m_tied1, svfloat32_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f32_m_untied: { xfail *-*-* }
+** subr_m1_f32_m_untied:
 ** fmov (z[0-9]+\.s), #-1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fsubr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c
index 75ae0dc61641..04aec038aadb 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f32_notrap.c
@@ -103,7 +103,7 @@ TEST_UNIFORM_Z (subr_m1_f32_m_tied1, svfloat32_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f32_m_untied: { xfail *-*-* }
+** subr_m1_f32_m_untied:
 ** fmov (z[0-9]+\.s), #-1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fsubr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c
index 7091c40bbb22..64806b395d2e 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64.c
@@ -102,7 +102,7 @@ TEST_UNIFORM_Z (subr_m1_f64_m_tied1, svfloat64_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f64_m_untied: { xfail *-*-* }
+** subr_m1_f64_m_untied:
 ** fmov (z[0-9]+\.d), #-1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fsubr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c
index 98598dd7702c..7458e5cc66d7 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_f64_notrap.c
@@ -103,7 +103,7 @@ TEST_UNIFORM_Z (subr_m1_f64_m_tied1, svfloat64_t,
 		z0 = svsubr_m (p0, z0, -1))
 
 /*
-** subr_m1_f64_m_untied: { xfail *-*-* }
+** subr_m1_f64_m_untied:
 ** fmov (z[0-9]+\.d), #-1\.0(?:e\+0)?
 ** movprfx z0, z1
 ** fsubr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c
index d3dad62dafeb..a63a9bca7870 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_s16_m_untied: { xfail *-*-* }
+** subr_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** subr z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s16_m_tied1, svint16_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s16_m_untied: { xfail *-*-* }
+** subr_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** subr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c
index ce62e2f210a2..e709abe424f8 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s32_m_tied1, svint32_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s32_m_untied: { xfail *-*-* }
+** subr_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** subr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c
index ada9e977c99f..bafcd8ecd41f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s64_m_tied1, svint64_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s64_m_untied: { xfail *-*-* }
+** subr_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** subr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c
index 90d2a6de9a5f..b9615de6655f 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_s8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_s8_m_untied: { xfail *-*-* }
+** subr_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** subr z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_s8_m_tied1, svint8_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_s8_m_untied: { xfail *-*-* }
+** subr_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** subr z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c
index 379a80fb1897..0c344c4d10f6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u16.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_u16_m_untied: { xfail *-*-* }
+** subr_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** subr z0\.h, p0/m, z0\.h, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u16_m_tied1, svuint16_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u16_m_untied: { xfail *-*-* }
+** subr_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** subr z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c
index 215f8b449221..9d3a69cf9eab 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u32.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u32_m_tied1, svuint32_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u32_m_untied: { xfail *-*-* }
+** subr_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** subr z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c
index 78d94515bd4c..4d48e9446576 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u64.c
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u64_m_tied1, svuint64_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u64_m_untied: { xfail *-*-* }
+** subr_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** subr z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c
index fe5f96da8335..65606b6dda03 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/subr_u8.c
@@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (subr_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svsubr_m (p0, z0, x0))
 
 /*
-** subr_w0_u8_m_untied: { xfail *-*-* }
+** subr_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** subr z0\.b, p0/m, z0\.b, \1
@@ -64,7 +64,7 @@ TEST_UNIFORM_Z (subr_1_u8_m_tied1, svuint8_t,
 		z0 = svsubr_m (p0, z0, 1))
 
 /*
-** subr_1_u8_m_untied: { xfail *-*-* }
+** subr_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** subr z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c
index acad87d96354..5716b89bf712 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s16.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_s16_tied2, svint16_t, int16_t,
 		 z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_s16_untied: { xfail *-*-*}
+** bcax_w0_s16_untied:
 ** mov (z[0-9]+)\.h, w0
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s16_tied2, svint16_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s16_untied: { xfail *-*-*}
+** bcax_11_s16_untied:
 ** mov (z[0-9]+)\.h, #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c
index aeb435746567..161234015553 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s32.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s32_tied2, svint32_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s32_untied: { xfail *-*-*}
+** bcax_11_s32_untied:
 ** mov (z[0-9]+)\.s, #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c
index 2087e5833425..54ca151da23b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s64.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s64_tied2, svint64_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s64_untied: { xfail *-*-*}
+** bcax_11_s64_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1|\1, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c
index 548aafad8573..3e2a0ee77d82 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_s8.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_s8_tied2, svint8_t, int8_t,
 		 z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_s8_untied: { xfail *-*-*}
+** bcax_w0_s8_untied:
 ** mov (z[0-9]+)\.b, w0
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_s8_tied2, svint8_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_s8_untied: { xfail *-*-*}
+** bcax_11_s8_untied:
 ** mov (z[0-9]+)\.b, #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c
index b63a4774ba73..72c40ace3046 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u16.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_u16_tied2, svuint16_t, uint16_t,
 		 z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_u16_untied: { xfail *-*-*}
+** bcax_w0_u16_untied:
 ** mov (z[0-9]+)\.h, w0
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u16_tied2, svuint16_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u16_untied: { xfail *-*-*}
+** bcax_11_u16_untied:
 ** mov (z[0-9]+)\.h, #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c
index d03c938b77e5..ca75164eca29 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u32.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u32_tied2, svuint32_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u32_untied: { xfail *-*-*}
+** bcax_11_u32_untied:
 ** mov (z[0-9]+)\.s, #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c
index e03906214e84..8145a0c6258a 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u64.c
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u64_tied2, svuint64_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u64_untied: { xfail *-*-*}
+** bcax_11_u64_untied:
 ** mov (z[0-9]+\.d), #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1|\1, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c
index 0957d58bd0ec..655d271a92b1 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/bcax_u8.c
@@ -66,7 +66,7 @@ TEST_UNIFORM_ZX (bcax_w0_u8_tied2, svuint8_t, uint8_t,
 		 z0 = svbcax (z1, z0, x0))
 
 /*
-** bcax_w0_u8_untied: { xfail *-*-*}
+** bcax_w0_u8_untied:
 ** mov (z[0-9]+)\.b, w0
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
@@ -99,7 +99,7 @@ TEST_UNIFORM_Z (bcax_11_u8_tied2, svuint8_t,
 		z0 = svbcax (z1, z0, 11))
 
 /*
-** bcax_11_u8_untied: { xfail *-*-*}
+** bcax_11_u8_untied:
 ** mov (z[0-9]+)\.b, #11
 ** movprfx z0, z1
 ** bcax z0\.d, z0\.d, (z2\.d, \1\.d|\1\.d, z2\.d)
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c
index 6330c4265bb1..5c53cac76087 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s16.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qadd_w0_s16_m_tied1, svint16_t, int16_t,
 		 z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_s16_m_untied: { xfail *-*-* }
+** qadd_w0_s16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** sqadd z0\.h, p0/m, z0\.h, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s16_m_tied1, svint16_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s16_m_untied: { xfail *-*-* }
+** qadd_1_s16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** sqadd z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c
index bab4874bc392..bb355c5a76d6 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s32.c
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s32_m_tied1, svint32_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s32_m_untied: { xfail *-*-* }
+** qadd_1_s32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** sqadd z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c
index c2ad92123e5b..8c3509879851 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s64.c
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s64_m_tied1, svint64_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s64_m_untied: { xfail *-*-* }
+** qadd_1_s64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** sqadd z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c
index 61343beacb89..2a514e324801 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_s8.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qadd_w0_s8_m_tied1, svint8_t, int8_t,
 		 z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_s8_m_untied: { xfail *-*-* }
+** qadd_w0_s8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** sqadd z0\.b, p0/m, z0\.b, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_s8_m_tied1, svint8_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_s8_m_untied: { xfail *-*-* }
+** qadd_1_s8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** sqadd z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c
index f6c7ca9e075b..870a91063253 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u16.c
@@ -166,7 +166,7 @@ TEST_UNIFORM_ZX (qadd_w0_u16_m_tied1, svuint16_t, uint16_t,
 		 z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_u16_m_untied: { xfail *-*-* }
+** qadd_w0_u16_m_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 ** uqadd z0\.h, p0/m, z0\.h, \1
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qadd_1_u16_m_tied1, svuint16_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u16_m_untied: { xfail *-*-* }
+** qadd_1_u16_m_untied:
 ** mov (z[0-9]+\.h), #1
 ** movprfx z0, z1
 ** uqadd z0\.h, p0/m, z0\.h, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c
index 7701d13a051d..94c05fdc137c 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u32.c
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qadd_1_u32_m_tied1, svuint32_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u32_m_untied: { xfail *-*-* }
+** qadd_1_u32_m_untied:
 ** mov (z[0-9]+\.s), #1
 ** movprfx z0, z1
 ** uqadd z0\.s, p0/m, z0\.s, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c
index df8c3f8637be..cf5b2d27b740 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u64.c
@@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qadd_1_u64_m_tied1, svuint64_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u64_m_untied: { xfail *-*-* }
+** qadd_1_u64_m_untied:
 ** mov (z[0-9]+\.d), #1
 ** movprfx z0, z1
 ** uqadd z0\.d, p0/m, z0\.d, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c
index 6c856e2871c2..77cb1b71dd4b 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qadd_u8.c
@@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qadd_w0_u8_m_tied1, svuint8_t, uint8_t,
 		 z0 = svqadd_m (p0, z0, x0))
 
 /*
-** qadd_w0_u8_m_untied: { xfail *-*-* }
+** qadd_w0_u8_m_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** uqadd z0\.b, p0/m, z0\.b, \1
@@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qadd_1_u8_m_tied1, svuint8_t,
 		z0 = svqadd_m (p0, z0, 1))
 
 /*
-** qadd_1_u8_m_untied: { xfail *-*-* }
+** qadd_1_u8_m_untied:
 ** mov (z[0-9]+\.b), #1
 ** movprfx z0, z1
 ** uqadd z0\.b, p0/m, z0\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c
index 4d1e90395e21..a37743be9d86 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s16.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalb_w0_s16_tied1, svint16_t, svint8_t, int8_t,
 	      z0 = svqdmlalb (z0, z4, x0))
 
 /*
-** qdmlalb_w0_s16_untied: { xfail *-*-* }
+** qdmlalb_w0_s16_untied:
 ** mov (z[0-9]+\.b), w0
 ** movprfx z0, z1
 ** sqdmlalb z0\.h, z4\.b, \1
@@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalb_11_s16_tied1, svint16_t, svint8_t,
 	     z0 = svqdmlalb (z0, z4, 11))
 
 /*
-** qdmlalb_11_s16_untied: { xfail *-*-* }
+** qdmlalb_11_s16_untied:
 ** mov (z[0-9]+\.b), #11
 ** movprfx z0, z1
 ** sqdmlalb z0\.h, z4\.b, \1
diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c
index 94373773e61e..1c319eaac056 100644
--- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c
+++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s32.c
@@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalb_w0_s32_tied1, svint32_t, svint16_t, int16_t,
 	      z0 = svqdmlalb (z0, z4, x0))
 
 /*
-** qdmlalb_w0_s32_untied: { xfail *-*-* }
+** qdmlalb_w0_s32_untied:
 ** mov (z[0-9]+\.h), w0
 ** movprfx z0, z1
 **
sqdmlalb z0\.s, z4\.h, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalb_11_s32_tied1, svint32_t, svint16_t, z0 = svqdmlalb (z0, z4, 11)) /* -** qdmlalb_11_s32_untied: { xfail *-*-* } +** qdmlalb_11_s32_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** sqdmlalb z0\.s, z4\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c index 8ac848b0b75f..3f2ab8865787 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalb_s64.c @@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalb_11_s64_tied1, svint64_t, svint32_t, z0 = svqdmlalb (z0, z4, 11)) /* -** qdmlalb_11_s64_untied: { xfail *-*-* } +** qdmlalb_11_s64_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** sqdmlalb z0\.d, z4\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c index d591db3cfb8d..e21d31fdbab8 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s16.c @@ -54,7 +54,7 @@ TEST_DUAL_ZX (qdmlalbt_w0_s16_tied1, svint16_t, svint8_t, int8_t, z0 = svqdmlalbt (z0, z4, x0)) /* -** qdmlalbt_w0_s16_untied: { xfail *-*-*} +** qdmlalbt_w0_s16_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** sqdmlalbt z0\.h, z4\.b, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalbt_11_s16_tied1, svint16_t, svint8_t, z0 = svqdmlalbt (z0, z4, 11)) /* -** qdmlalbt_11_s16_untied: { xfail *-*-*} +** qdmlalbt_11_s16_untied: ** mov (z[0-9]+\.b), #11 ** movprfx z0, z1 ** sqdmlalbt z0\.h, z4\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c index e8326fed6171..32978e0913e5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s32.c @@ -54,7 +54,7 @@ 
TEST_DUAL_ZX (qdmlalbt_w0_s32_tied1, svint32_t, svint16_t, int16_t, z0 = svqdmlalbt (z0, z4, x0)) /* -** qdmlalbt_w0_s32_untied: { xfail *-*-*} +** qdmlalbt_w0_s32_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** sqdmlalbt z0\.s, z4\.h, \1 @@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalbt_11_s32_tied1, svint32_t, svint16_t, z0 = svqdmlalbt (z0, z4, 11)) /* -** qdmlalbt_11_s32_untied: { xfail *-*-*} +** qdmlalbt_11_s32_untied: ** mov (z[0-9]+\.h), #11 ** movprfx z0, z1 ** sqdmlalbt z0\.s, z4\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c index f29e4de18dc2..22886bca5047 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qdmlalbt_s64.c @@ -75,7 +75,7 @@ TEST_DUAL_Z (qdmlalbt_11_s64_tied1, svint64_t, svint32_t, z0 = svqdmlalbt (z0, z4, 11)) /* -** qdmlalbt_11_s64_untied: { xfail *-*-*} +** qdmlalbt_11_s64_untied: ** mov (z[0-9]+\.s), #11 ** movprfx z0, z1 ** sqdmlalbt z0\.d, z4\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c index c102e58ed910..624f8bc3dce5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s16.c @@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qsub_w0_s16_m_tied1, svint16_t, int16_t, z0 = svqsub_m (p0, z0, x0)) /* -** qsub_w0_s16_m_untied: { xfail *-*-* } +** qsub_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** sqsub z0\.h, p0/m, z0\.h, \1 @@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s16_m_tied1, svint16_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_s16_m_untied: { xfail *-*-* } +** qsub_1_s16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** sqsub z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c index 
e703ce9be7c9..b435f692b8cc 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s32.c @@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s32_m_tied1, svint32_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_s32_m_untied: { xfail *-*-* } +** qsub_1_s32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** sqsub z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c index e901013f7faa..07eac9d0bdc5 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s64.c @@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s64_m_tied1, svint64_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_s64_m_untied: { xfail *-*-* } +** qsub_1_s64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** sqsub z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c index 067ee6e6cb10..71eec645eebd 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_s8.c @@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qsub_w0_s8_m_tied1, svint8_t, int8_t, z0 = svqsub_m (p0, z0, x0)) /* -** qsub_w0_s8_m_untied: { xfail *-*-* } +** qsub_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** sqsub z0\.b, p0/m, z0\.b, \1 @@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_s8_m_tied1, svint8_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_s8_m_untied: { xfail *-*-* } +** qsub_1_s8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** sqsub z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c index 61be74634723..a544d8cfcf84 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c +++ 
b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u16.c @@ -166,7 +166,7 @@ TEST_UNIFORM_ZX (qsub_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svqsub_m (p0, z0, x0)) /* -** qsub_w0_u16_m_untied: { xfail *-*-* } +** qsub_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** uqsub z0\.h, p0/m, z0\.h, \1 @@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qsub_1_u16_m_tied1, svuint16_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_u16_m_untied: { xfail *-*-* } +** qsub_1_u16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** uqsub z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c index d90dcadb263e..20c95d22ccec 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u32.c @@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qsub_1_u32_m_tied1, svuint32_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_u32_m_untied: { xfail *-*-* } +** qsub_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** uqsub z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c index b25c6a569ba3..a5a0d2428212 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u64.c @@ -187,7 +187,7 @@ TEST_UNIFORM_Z (qsub_1_u64_m_tied1, svuint64_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_u64_m_untied: { xfail *-*-* } +** qsub_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** uqsub z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c index 686b2b425fb5..cdcf039bbaac 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsub_u8.c @@ -163,7 +163,7 @@ TEST_UNIFORM_ZX (qsub_w0_u8_m_tied1, svuint8_t, uint8_t, z0 
= svqsub_m (p0, z0, x0)) /* -** qsub_w0_u8_m_untied: { xfail *-*-* } +** qsub_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** uqsub z0\.b, p0/m, z0\.b, \1 @@ -184,7 +184,7 @@ TEST_UNIFORM_Z (qsub_1_u8_m_tied1, svuint8_t, z0 = svqsub_m (p0, z0, 1)) /* -** qsub_1_u8_m_untied: { xfail *-*-* } +** qsub_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** uqsub z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c index 577310d9614b..ed315171d3b6 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_s16_m_tied1, svint16_t, int16_t, z0 = svqsubr_m (p0, z0, x0)) /* -** qsubr_w0_s16_m_untied: { xfail *-*-* } +** qsubr_w0_s16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** sqsubr z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s16_m_tied1, svint16_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_s16_m_untied: { xfail *-*-* } +** qsubr_1_s16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** sqsubr z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c index f6a06c380610..810e01e829af 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s32_m_tied1, svint32_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_s32_m_untied: { xfail *-*-* } +** qsubr_1_s32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** sqsubr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c index 12b06356a6c6..03a4eebd31dd 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c +++ 
b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s64_m_tied1, svint64_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_s64_m_untied: { xfail *-*-* } +** qsubr_1_s64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** sqsubr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c index ce814a8393e9..88c5387506b8 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_s8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_s8_m_tied1, svint8_t, int8_t, z0 = svqsubr_m (p0, z0, x0)) /* -** qsubr_w0_s8_m_untied: { xfail *-*-* } +** qsubr_w0_s8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** sqsubr z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_s8_m_tied1, svint8_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_s8_m_untied: { xfail *-*-* } +** qsubr_1_s8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** sqsubr z0\.b, p0/m, z0\.b, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c index f406bf2ed86c..974e564ff106 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u16.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_u16_m_tied1, svuint16_t, uint16_t, z0 = svqsubr_m (p0, z0, x0)) /* -** qsubr_w0_u16_m_untied: { xfail *-*-* } +** qsubr_w0_u16_m_untied: ** mov (z[0-9]+\.h), w0 ** movprfx z0, z1 ** uqsubr z0\.h, p0/m, z0\.h, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u16_m_tied1, svuint16_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u16_m_untied: { xfail *-*-* } +** qsubr_1_u16_m_untied: ** mov (z[0-9]+\.h), #1 ** movprfx z0, z1 ** uqsubr z0\.h, p0/m, z0\.h, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c 
b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c index 5c4bc9ee1979..54c9bdabc648 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u32.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u32_m_tied1, svuint32_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u32_m_untied: { xfail *-*-* } +** qsubr_1_u32_m_untied: ** mov (z[0-9]+\.s), #1 ** movprfx z0, z1 ** uqsubr z0\.s, p0/m, z0\.s, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c index d0d146ea5e65..75769d5aa572 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u64.c @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u64_m_tied1, svuint64_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u64_m_untied: { xfail *-*-* } +** qsubr_1_u64_m_untied: ** mov (z[0-9]+\.d), #1 ** movprfx z0, z1 ** uqsubr z0\.d, p0/m, z0\.d, \1 diff --git a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c index 7b487fd93b19..279d611af275 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve2/acle/asm/qsubr_u8.c @@ -43,7 +43,7 @@ TEST_UNIFORM_ZX (qsubr_w0_u8_m_tied1, svuint8_t, uint8_t, z0 = svqsubr_m (p0, z0, x0)) /* -** qsubr_w0_u8_m_untied: { xfail *-*-* } +** qsubr_w0_u8_m_untied: ** mov (z[0-9]+\.b), w0 ** movprfx z0, z1 ** uqsubr z0\.b, p0/m, z0\.b, \1 @@ -64,7 +64,7 @@ TEST_UNIFORM_Z (qsubr_1_u8_m_tied1, svuint8_t, z0 = svqsubr_m (p0, z0, 1)) /* -** qsubr_1_u8_m_untied: { xfail *-*-* } +** qsubr_1_u8_m_untied: ** mov (z[0-9]+\.b), #1 ** movprfx z0, z1 ** uqsubr z0\.b, p0/m, z0\.b, \1