From: Kewen Lin
Date: Wed, 2 Sep 2020 02:12:32 +0000 (-0500)
Subject: rs6000: Backport fixes for PR92923 and PR93136
X-Git-Tag: releases/gcc-9.4.0~720
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=8c18220564f8372f4d45ed1a4df3fc7f71928654;p=thirdparty%2Fgcc.git

rs6000: Backport fixes for PR92923 and PR93136

This patch backports the fix for PR92923 and its subsequent fix for
PR93136.  We found that built-in functions which needlessly apply
VIEW_CONVERT_EXPRs to their operands can cause significant performance
degradation, especially when they are used in a hotspot.  One typical
case is ...github.com/antonblanchard/crc32-vpmsum/blob/master/vec_crc32.c.
With this patch, the execution time improves by 47.81%.

Apart from the original fixes, this patch also updates the two test
cases below.  During regression testing they failed because, with this
patch, ICF optimization becomes applicable and the function bodies are
turned into tail calls, so the expected assembly instructions are gone.

  gcc.target/powerpc/fold-vec-logical-ands-longlong.c
  gcc.target/powerpc/fold-vec-logical-ors-longlong.c

Bootstrapped/regtested on powerpc64{,le}-linux-gnu P8.

2019-12-30  Peter Bergner

gcc/ChangeLog

	PR target/92923
	* config/rs6000/rs6000-builtin.def (VAND, VANDC, VNOR, VOR,
	VXOR): Delete.
	(EQV_V16QI_UNS, EQV_V8HI_UNS, EQV_V4SI_UNS, EQV_V2DI_UNS,
	EQV_V1TI_UNS, NAND_V16QI_UNS, NAND_V8HI_UNS, NAND_V4SI_UNS,
	NAND_V2DI_UNS, NAND_V1TI_UNS, ORC_V16QI_UNS, ORC_V8HI_UNS,
	ORC_V4SI_UNS, ORC_V2DI_UNS, ORC_V1TI_UNS, VAND_V16QI_UNS,
	VAND_V16QI, VAND_V8HI_UNS, VAND_V8HI, VAND_V4SI_UNS, VAND_V4SI,
	VAND_V2DI_UNS, VAND_V2DI, VAND_V4SF, VAND_V2DF, VANDC_V16QI_UNS,
	VANDC_V16QI, VANDC_V8HI_UNS, VANDC_V8HI, VANDC_V4SI_UNS,
	VANDC_V4SI, VANDC_V2DI_UNS, VANDC_V2DI, VANDC_V4SF, VANDC_V2DF,
	VNOR_V16QI_UNS, VNOR_V16QI, VNOR_V8HI_UNS, VNOR_V8HI,
	VNOR_V4SI_UNS, VNOR_V4SI, VNOR_V2DI_UNS, VNOR_V2DI, VNOR_V4SF,
	VNOR_V2DF, VOR_V16QI_UNS, VOR_V16QI, VOR_V8HI_UNS, VOR_V8HI,
	VOR_V4SI_UNS, VOR_V4SI, VOR_V2DI_UNS, VOR_V2DI, VOR_V4SF,
	VOR_V2DF, VXOR_V16QI_UNS, VXOR_V16QI, VXOR_V8HI_UNS, VXOR_V8HI,
	VXOR_V4SI_UNS, VXOR_V4SI, VXOR_V2DI_UNS, VXOR_V2DI, VXOR_V4SF,
	VXOR_V2DF): Add definitions.
	* config/rs6000/rs6000-c.c (altivec_overloaded_builtins)
	: Remove.  : Add definitions.  : Change unsigned usages to use
	the new *_UNS definition names.
	* config/rs6000/rs6000.c (rs6000_gimple_fold_builtin) : Use new
	definition names.
	(builtin_function_type) : Handle unsigned builtins.

2019-12-30  Peter Bergner
2020-02-08  Peter Bergner

gcc/testsuite/ChangeLog

	* gcc.target/powerpc/fold-vec-logical-ands-longlong.c: Adjust.
	* gcc.target/powerpc/fold-vec-logical-ors-longlong.c: Likewise.

	PR target/92923
	* gcc.target/powerpc/pr92923-1.c: New test.
	* gcc.target/powerpc/pr92923-2.c: Likewise.

	PR target/93136
	* gcc.dg/vmx/ops.c: Add -flax-vector-conversions to dg-options.
	* gcc.target/powerpc/vsx-vector-6.h: Split tests into smaller
	functions.
	* gcc.target/powerpc/vsx-vector-6.p7.c: Adjust scan-assembler-times
	regex directives.  Adjust expected instruction counts.
	* gcc.target/powerpc/vsx-vector-6.p8.c: Likewise.
	* gcc.target/powerpc/vsx-vector-6.p9.c: Likewise.
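To illustrate the issue at the source level, here is a minimal sketch modelled
on the new pr92923-1.c test below (the typedef spelling, built-in call and dump
options are taken from that test; the exact SSA names in the comment are
illustrative, not literal dump output):

typedef __attribute__ ((altivec (vector__))) unsigned long long ullvec_t;

ullvec_t
and_ull (ullvec_t x, ullvec_t y)
{
  /* Compile with -maltivec -O2 -fdump-tree-gimple.  Before this fix, vec_and
     on unsigned V2DI was routed through the single V4SI built-in, so the
     gimple dump wrapped both operands (and the result) in conversions:
       _1 = VIEW_CONVERT_EXPR<__vector signed int>(x);
       _2 = VIEW_CONVERT_EXPR<__vector signed int>(y);
       _3 = __builtin_altivec_vand (_1, _2);
     With the per-type VAND_V2DI_UNS built-in it should fold to a plain
     bitwise AND on the original type, with no VIEW_CONVERT_EXPRs.  */
  return __builtin_vec_and (x, y);
}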
(cherry picked from commit 4559be2358020714ec7521c80589992716d23035) (cherry picked from commit 4b39d801b2698d0f756231f6f8fa0be5a36f0c05) --- diff --git a/gcc/config/rs6000/rs6000-builtin.def b/gcc/config/rs6000/rs6000-builtin.def index 554316d35ad5..bc929d5e6f67 100644 --- a/gcc/config/rs6000/rs6000-builtin.def +++ b/gcc/config/rs6000/rs6000-builtin.def @@ -1001,8 +1001,26 @@ BU_ALTIVEC_2 (VADDUHS, "vadduhs", CONST, altivec_vadduhs) BU_ALTIVEC_2 (VADDSHS, "vaddshs", CONST, altivec_vaddshs) BU_ALTIVEC_2 (VADDUWS, "vadduws", CONST, altivec_vadduws) BU_ALTIVEC_2 (VADDSWS, "vaddsws", CONST, altivec_vaddsws) -BU_ALTIVEC_2 (VAND, "vand", CONST, andv4si3) -BU_ALTIVEC_2 (VANDC, "vandc", CONST, andcv4si3) +BU_ALTIVEC_2 (VAND_V16QI_UNS, "vand_v16qi_uns", CONST, andv16qi3) +BU_ALTIVEC_2 (VAND_V16QI, "vand_v16qi", CONST, andv16qi3) +BU_ALTIVEC_2 (VAND_V8HI_UNS, "vand_v8hi_uns", CONST, andv8hi3) +BU_ALTIVEC_2 (VAND_V8HI, "vand_v8hi", CONST, andv8hi3) +BU_ALTIVEC_2 (VAND_V4SI_UNS, "vand_v4si_uns", CONST, andv4si3) +BU_ALTIVEC_2 (VAND_V4SI, "vand_v4si", CONST, andv4si3) +BU_ALTIVEC_2 (VAND_V2DI_UNS, "vand_v2di_uns", CONST, andv2di3) +BU_ALTIVEC_2 (VAND_V2DI, "vand_v2di", CONST, andv2di3) +BU_ALTIVEC_2 (VAND_V4SF, "vand_v4sf", CONST, andv4sf3) +BU_ALTIVEC_2 (VAND_V2DF, "vand_v2df", CONST, andv2df3) +BU_ALTIVEC_2 (VANDC_V16QI_UNS,"vandc_v16qi_uns",CONST, andcv16qi3) +BU_ALTIVEC_2 (VANDC_V16QI, "vandc_v16qi", CONST, andcv16qi3) +BU_ALTIVEC_2 (VANDC_V8HI_UNS, "vandc_v8hi_uns", CONST, andcv8hi3) +BU_ALTIVEC_2 (VANDC_V8HI, "vandc_v8hi", CONST, andcv8hi3) +BU_ALTIVEC_2 (VANDC_V4SI_UNS, "vandc_v4si_uns", CONST, andcv4si3) +BU_ALTIVEC_2 (VANDC_V4SI, "vandc_v4si", CONST, andcv4si3) +BU_ALTIVEC_2 (VANDC_V2DI_UNS, "vandc_v2di_uns", CONST, andcv2di3) +BU_ALTIVEC_2 (VANDC_V2DI, "vandc_v2di", CONST, andcv2di3) +BU_ALTIVEC_2 (VANDC_V4SF, "vandc_v4sf", CONST, andcv4sf3) +BU_ALTIVEC_2 (VANDC_V2DF, "vandc_v2df", CONST, andcv2df3) BU_ALTIVEC_2 (VAVGUB, "vavgub", CONST, uavgv16qi3_ceil) BU_ALTIVEC_2 (VAVGSB, "vavgsb", CONST, avgv16qi3_ceil) BU_ALTIVEC_2 (VAVGUH, "vavguh", CONST, uavgv8hi3_ceil) @@ -1058,8 +1076,27 @@ BU_ALTIVEC_2 (VMULOUH, "vmulouh", CONST, vec_widen_umult_odd_v8hi) BU_ALTIVEC_2 (VMULOSH, "vmulosh", CONST, vec_widen_smult_odd_v8hi) BU_P8V_AV_2 (VMULOUW, "vmulouw", CONST, vec_widen_umult_odd_v4si) BU_P8V_AV_2 (VMULOSW, "vmulosw", CONST, vec_widen_smult_odd_v4si) -BU_ALTIVEC_2 (VNOR, "vnor", CONST, norv4si3) -BU_ALTIVEC_2 (VOR, "vor", CONST, iorv4si3) +BU_ALTIVEC_2 (VNOR_V16QI_UNS, "vnor_v16qi_uns", CONST, norv16qi3) +BU_ALTIVEC_2 (VNOR_V16QI, "vnor_v16qi", CONST, norv16qi3) +BU_ALTIVEC_2 (VNOR_V8HI_UNS, "vnor_v8hi_uns", CONST, norv8hi3) +BU_ALTIVEC_2 (VNOR_V8HI, "vnor_v8hi", CONST, norv8hi3) +BU_ALTIVEC_2 (VNOR_V4SI_UNS, "vnor_v4si_uns", CONST, norv4si3) +BU_ALTIVEC_2 (VNOR_V4SI, "vnor_v4si", CONST, norv4si3) +BU_ALTIVEC_2 (VNOR_V2DI_UNS, "vnor_v2di_uns", CONST, norv2di3) +BU_ALTIVEC_2 (VNOR_V2DI, "vnor_v2di", CONST, norv2di3) +BU_ALTIVEC_2 (VNOR_V4SF, "vnor_v4sf", CONST, norv4sf3) +BU_ALTIVEC_2 (VNOR_V2DF, "vnor_v2df", CONST, norv2df3) +BU_ALTIVEC_2 (VOR_V16QI_UNS, "vor_v16qi_uns", CONST, iorv16qi3) +BU_ALTIVEC_2 (VOR_V16QI, "vor_v16qi", CONST, iorv16qi3) +BU_ALTIVEC_2 (VOR_V8HI_UNS, "vor_v8hi_uns", CONST, iorv8hi3) +BU_ALTIVEC_2 (VOR_V8HI, "vor_v8hi", CONST, iorv8hi3) +BU_ALTIVEC_2 (VOR_V4SI_UNS, "vor_v4si_uns", CONST, iorv4si3) +BU_ALTIVEC_2 (VOR_V4SI, "vor_v4si", CONST, iorv4si3) +BU_ALTIVEC_2 (VOR_V2DI_UNS, "vor_v2di_uns", CONST, iorv2di3) +BU_ALTIVEC_2 (VOR_V2DI, "vor_v2di", CONST, iorv2di3) 
+BU_ALTIVEC_2 (VOR_V4SF, "vor_v4sf", CONST, iorv4sf3) +BU_ALTIVEC_2 (VOR_V2DF, "vor_v2df", CONST, iorv2df3) + BU_ALTIVEC_2 (VPKUHUM, "vpkuhum", CONST, altivec_vpkuhum) BU_ALTIVEC_2 (VPKUWUM, "vpkuwum", CONST, altivec_vpkuwum) BU_ALTIVEC_2 (VPKPX, "vpkpx", CONST, altivec_vpkpx) @@ -1106,7 +1143,17 @@ BU_ALTIVEC_2 (VSUM4SHS, "vsum4shs", CONST, altivec_vsum4shs) BU_ALTIVEC_2 (VSUM2SWS, "vsum2sws", CONST, altivec_vsum2sws) BU_ALTIVEC_2 (VSUMSWS, "vsumsws", CONST, altivec_vsumsws) BU_ALTIVEC_2 (VSUMSWS_BE, "vsumsws_be", CONST, altivec_vsumsws_direct) -BU_ALTIVEC_2 (VXOR, "vxor", CONST, xorv4si3) +BU_ALTIVEC_2 (VXOR_V16QI_UNS, "vxor_v16qi_uns", CONST, xorv16qi3) +BU_ALTIVEC_2 (VXOR_V16QI, "vxor_v16qi", CONST, xorv16qi3) +BU_ALTIVEC_2 (VXOR_V8HI_UNS, "vxor_v8hi_uns", CONST, xorv8hi3) +BU_ALTIVEC_2 (VXOR_V8HI, "vxor_v8hi", CONST, xorv8hi3) +BU_ALTIVEC_2 (VXOR_V4SI_UNS, "vxor_v4si_uns", CONST, xorv4si3) +BU_ALTIVEC_2 (VXOR_V4SI, "vxor_v4si", CONST, xorv4si3) +BU_ALTIVEC_2 (VXOR_V2DI_UNS, "vxor_v2di_uns", CONST, xorv2di3) +BU_ALTIVEC_2 (VXOR_V2DI, "vxor_v2di", CONST, xorv2di3) +BU_ALTIVEC_2 (VXOR_V4SF, "vxor_v4sf", CONST, xorv4sf3) +BU_ALTIVEC_2 (VXOR_V2DF, "vxor_v2df", CONST, xorv2df3) + BU_ALTIVEC_2 (COPYSIGN_V4SF, "copysignfp", CONST, vector_copysignv4sf3) /* Altivec ABS functions. */ @@ -1925,26 +1972,41 @@ BU_P8V_AV_2 (VSUBCUQ, "vsubcuq", CONST, altivec_vsubcuq) BU_P8V_AV_2 (VSUBUDM, "vsubudm", CONST, subv2di3) BU_P8V_AV_2 (VSUBUQM, "vsubuqm", CONST, altivec_vsubuqm) +BU_P8V_AV_2 (EQV_V16QI_UNS, "eqv_v16qi_uns",CONST, eqvv16qi3) BU_P8V_AV_2 (EQV_V16QI, "eqv_v16qi", CONST, eqvv16qi3) +BU_P8V_AV_2 (EQV_V8HI_UNS, "eqv_v8hi_uns", CONST, eqvv8hi3) BU_P8V_AV_2 (EQV_V8HI, "eqv_v8hi", CONST, eqvv8hi3) +BU_P8V_AV_2 (EQV_V4SI_UNS, "eqv_v4si_uns", CONST, eqvv4si3) BU_P8V_AV_2 (EQV_V4SI, "eqv_v4si", CONST, eqvv4si3) +BU_P8V_AV_2 (EQV_V2DI_UNS, "eqv_v2di_uns", CONST, eqvv2di3) BU_P8V_AV_2 (EQV_V2DI, "eqv_v2di", CONST, eqvv2di3) +BU_P8V_AV_2 (EQV_V1TI_UNS, "eqv_v1ti_uns", CONST, eqvv1ti3) BU_P8V_AV_2 (EQV_V1TI, "eqv_v1ti", CONST, eqvv1ti3) BU_P8V_AV_2 (EQV_V4SF, "eqv_v4sf", CONST, eqvv4sf3) BU_P8V_AV_2 (EQV_V2DF, "eqv_v2df", CONST, eqvv2df3) -BU_P8V_AV_2 (NAND_V16QI, "nand_v16qi", CONST, nandv16qi3) -BU_P8V_AV_2 (NAND_V8HI, "nand_v8hi", CONST, nandv8hi3) -BU_P8V_AV_2 (NAND_V4SI, "nand_v4si", CONST, nandv4si3) -BU_P8V_AV_2 (NAND_V2DI, "nand_v2di", CONST, nandv2di3) -BU_P8V_AV_2 (NAND_V1TI, "nand_v1ti", CONST, nandv1ti3) -BU_P8V_AV_2 (NAND_V4SF, "nand_v4sf", CONST, nandv4sf3) -BU_P8V_AV_2 (NAND_V2DF, "nand_v2df", CONST, nandv2df3) - +BU_P8V_AV_2 (NAND_V16QI_UNS, "nand_v16qi_uns", CONST, nandv16qi3) +BU_P8V_AV_2 (NAND_V16QI, "nand_v16qi", CONST, nandv16qi3) +BU_P8V_AV_2 (NAND_V8HI_UNS, "nand_v8hi_uns", CONST, nandv8hi3) +BU_P8V_AV_2 (NAND_V8HI, "nand_v8hi", CONST, nandv8hi3) +BU_P8V_AV_2 (NAND_V4SI_UNS, "nand_v4si_uns", CONST, nandv4si3) +BU_P8V_AV_2 (NAND_V4SI, "nand_v4si", CONST, nandv4si3) +BU_P8V_AV_2 (NAND_V2DI_UNS, "nand_v2di_uns", CONST, nandv2di3) +BU_P8V_AV_2 (NAND_V2DI, "nand_v2di", CONST, nandv2di3) +BU_P8V_AV_2 (NAND_V1TI_UNS, "nand_v1ti_uns", CONST, nandv1ti3) +BU_P8V_AV_2 (NAND_V1TI, "nand_v1ti", CONST, nandv1ti3) +BU_P8V_AV_2 (NAND_V4SF, "nand_v4sf", CONST, nandv4sf3) +BU_P8V_AV_2 (NAND_V2DF, "nand_v2df", CONST, nandv2df3) + +BU_P8V_AV_2 (ORC_V16QI_UNS, "orc_v16qi_uns",CONST, orcv16qi3) BU_P8V_AV_2 (ORC_V16QI, "orc_v16qi", CONST, orcv16qi3) +BU_P8V_AV_2 (ORC_V8HI_UNS, "orc_v8hi_uns", CONST, orcv8hi3) BU_P8V_AV_2 (ORC_V8HI, "orc_v8hi", CONST, orcv8hi3) +BU_P8V_AV_2 (ORC_V4SI_UNS, "orc_v4si_uns", 
CONST, orcv4si3) BU_P8V_AV_2 (ORC_V4SI, "orc_v4si", CONST, orcv4si3) +BU_P8V_AV_2 (ORC_V2DI_UNS, "orc_v2di_uns", CONST, orcv2di3) BU_P8V_AV_2 (ORC_V2DI, "orc_v2di", CONST, orcv2di3) +BU_P8V_AV_2 (ORC_V1TI_UNS, "orc_v1ti_uns", CONST, orcv1ti3) BU_P8V_AV_2 (ORC_V1TI, "orc_v1ti", CONST, orcv1ti3) BU_P8V_AV_2 (ORC_V4SF, "orc_v4sf", CONST, orcv4sf3) BU_P8V_AV_2 (ORC_V2DF, "orc_v2df", CONST, orcv2df3) diff --git a/gcc/config/rs6000/rs6000-c.c b/gcc/config/rs6000/rs6000-c.c index cf4341da8f2e..16d68a083cb1 100644 --- a/gcc/config/rs6000/rs6000-c.c +++ b/gcc/config/rs6000/rs6000-c.c @@ -1121,142 +1121,145 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, { ALTIVEC_BUILTIN_VEC_VADDUBS, ALTIVEC_BUILTIN_VADDUBS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + 
{ ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND, + { ALTIVEC_BUILTIN_VEC_AND, ALTIVEC_BUILTIN_VAND_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 
0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI_UNS, 
RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC, + { ALTIVEC_BUILTIN_VEC_ANDC, ALTIVEC_BUILTIN_VANDC_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_AVG, ALTIVEC_BUILTIN_VAVGUB, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, { ALTIVEC_BUILTIN_VEC_AVG, ALTIVEC_BUILTIN_VAVGSB, @@ -2312,110 +2315,112 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { { ALTIVEC_BUILTIN_VEC_NEARBYINT, VSX_BUILTIN_XVRSPI, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, 
ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR, + { ALTIVEC_BUILTIN_VEC_NOR, ALTIVEC_BUILTIN_VNOR_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { 
ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V8HI_UNS, 
RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR, + { ALTIVEC_BUILTIN_VEC_OR, ALTIVEC_BUILTIN_VOR_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_PACK, ALTIVEC_BUILTIN_VPKUHUM, RS6000_BTI_V16QI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, { ALTIVEC_BUILTIN_VEC_PACK, ALTIVEC_BUILTIN_VPKUHUM, @@ -3283,73 +3288,79 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { ~RS6000_BTI_unsigned_V16QI, 0 }, { VSX_BUILTIN_VEC_XL_BE, VSX_BUILTIN_LD_ELEMREV_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_INTSI, ~RS6000_BTI_UINTQI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SF, RS6000_BTI_V4SF, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SF, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI, RS6000_BTI_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DF, RS6000_BTI_V2DF, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DF, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, 
ALTIVEC_BUILTIN_VXOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, - RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, - RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI, + RS6000_BTI_V16QI, 
RS6000_BTI_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, - RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI_UNS, + RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI_UNS, + RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI_UNS, + RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI_UNS, + RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_V16QI, 0 }, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR, + { ALTIVEC_BUILTIN_VEC_XOR, ALTIVEC_BUILTIN_VXOR_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, /* Ternary AltiVec/VSX builtins. */ @@ -4595,15 +4606,15 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI, @@ -4612,15 +4623,15 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI, @@ -4629,15 +4640,15 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 
RS6000_BTI_V4SI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI, @@ -4646,15 +4657,15 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI, + { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, { P8V_BUILTIN_VEC_EQV, P8V_BUILTIN_EQV_V4SF, @@ -4668,16 +4679,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_V8HI, 0 }, @@ -4685,16 +4696,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI, 
+ { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 }, @@ -4702,16 +4713,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, @@ -4719,16 +4730,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI, + { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, { P8V_BUILTIN_VEC_NAND, P8V_BUILTIN_NAND_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 }, @@ -4741,16 +4752,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_bool_V16QI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, RS6000_BTI_V16QI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_bool_V16QI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI_UNS, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, RS6000_BTI_unsigned_V16QI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V16QI_UNS, RS6000_BTI_bool_V16QI, RS6000_BTI_bool_V16QI, 
RS6000_BTI_bool_V16QI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_V8HI, 0 }, @@ -4758,16 +4769,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_bool_V8HI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, RS6000_BTI_V8HI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_bool_V8HI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI_UNS, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, RS6000_BTI_unsigned_V8HI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V8HI_UNS, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, RS6000_BTI_bool_V8HI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_V4SI, 0 }, @@ -4775,16 +4786,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_bool_V4SI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, RS6000_BTI_V4SI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_bool_V4SI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI_UNS, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, RS6000_BTI_unsigned_V4SI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SI_UNS, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, RS6000_BTI_bool_V4SI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_V2DI, 0 }, @@ -4792,16 +4803,16 @@ const struct altivec_builtin_types altivec_overloaded_builtins[] = { RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_bool_V2DI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, RS6000_BTI_V2DI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_bool_V2DI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI_UNS, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, RS6000_BTI_unsigned_V2DI, 0 }, - { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI, + { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V2DI_UNS, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, RS6000_BTI_bool_V2DI, 0 }, { P8V_BUILTIN_VEC_ORC, P8V_BUILTIN_ORC_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, RS6000_BTI_V4SF, 0 }, diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c index fa3e04571b69..d5a6150d6c7e 100644 --- a/gcc/config/rs6000/rs6000.c +++ b/gcc/config/rs6000/rs6000.c @@ -15731,7 +15731,16 @@ rs6000_gimple_fold_builtin 
(gimple_stmt_iterator *gsi) gsi_replace (gsi, g, true); return true; /* Flavors of vec_and. */ - case ALTIVEC_BUILTIN_VAND: + case ALTIVEC_BUILTIN_VAND_V16QI_UNS: + case ALTIVEC_BUILTIN_VAND_V16QI: + case ALTIVEC_BUILTIN_VAND_V8HI_UNS: + case ALTIVEC_BUILTIN_VAND_V8HI: + case ALTIVEC_BUILTIN_VAND_V4SI_UNS: + case ALTIVEC_BUILTIN_VAND_V4SI: + case ALTIVEC_BUILTIN_VAND_V2DI_UNS: + case ALTIVEC_BUILTIN_VAND_V2DI: + case ALTIVEC_BUILTIN_VAND_V4SF: + case ALTIVEC_BUILTIN_VAND_V2DF: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -15740,7 +15749,16 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi) gsi_replace (gsi, g, true); return true; /* Flavors of vec_andc. */ - case ALTIVEC_BUILTIN_VANDC: + case ALTIVEC_BUILTIN_VANDC_V16QI_UNS: + case ALTIVEC_BUILTIN_VANDC_V16QI: + case ALTIVEC_BUILTIN_VANDC_V8HI_UNS: + case ALTIVEC_BUILTIN_VANDC_V8HI: + case ALTIVEC_BUILTIN_VANDC_V4SI_UNS: + case ALTIVEC_BUILTIN_VANDC_V4SI: + case ALTIVEC_BUILTIN_VANDC_V2DI_UNS: + case ALTIVEC_BUILTIN_VANDC_V2DI: + case ALTIVEC_BUILTIN_VANDC_V4SF: + case ALTIVEC_BUILTIN_VANDC_V2DF: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -15754,12 +15772,16 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi) return true; /* Flavors of vec_nand. */ case P8V_BUILTIN_VEC_NAND: + case P8V_BUILTIN_NAND_V16QI_UNS: case P8V_BUILTIN_NAND_V16QI: + case P8V_BUILTIN_NAND_V8HI_UNS: case P8V_BUILTIN_NAND_V8HI: + case P8V_BUILTIN_NAND_V4SI_UNS: case P8V_BUILTIN_NAND_V4SI: + case P8V_BUILTIN_NAND_V2DI_UNS: + case P8V_BUILTIN_NAND_V2DI: case P8V_BUILTIN_NAND_V4SF: case P8V_BUILTIN_NAND_V2DF: - case P8V_BUILTIN_NAND_V2DI: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -15772,7 +15794,16 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi) gsi_replace (gsi, g, true); return true; /* Flavors of vec_or. */ - case ALTIVEC_BUILTIN_VOR: + case ALTIVEC_BUILTIN_VOR_V16QI_UNS: + case ALTIVEC_BUILTIN_VOR_V16QI: + case ALTIVEC_BUILTIN_VOR_V8HI_UNS: + case ALTIVEC_BUILTIN_VOR_V8HI: + case ALTIVEC_BUILTIN_VOR_V4SI_UNS: + case ALTIVEC_BUILTIN_VOR_V4SI: + case ALTIVEC_BUILTIN_VOR_V2DI_UNS: + case ALTIVEC_BUILTIN_VOR_V2DI: + case ALTIVEC_BUILTIN_VOR_V4SF: + case ALTIVEC_BUILTIN_VOR_V2DF: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -15781,12 +15812,16 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi) gsi_replace (gsi, g, true); return true; /* flavors of vec_orc. */ + case P8V_BUILTIN_ORC_V16QI_UNS: case P8V_BUILTIN_ORC_V16QI: + case P8V_BUILTIN_ORC_V8HI_UNS: case P8V_BUILTIN_ORC_V8HI: + case P8V_BUILTIN_ORC_V4SI_UNS: case P8V_BUILTIN_ORC_V4SI: + case P8V_BUILTIN_ORC_V2DI_UNS: + case P8V_BUILTIN_ORC_V2DI: case P8V_BUILTIN_ORC_V4SF: case P8V_BUILTIN_ORC_V2DF: - case P8V_BUILTIN_ORC_V2DI: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -15799,7 +15834,16 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi) gsi_replace (gsi, g, true); return true; /* Flavors of vec_xor. 
*/ - case ALTIVEC_BUILTIN_VXOR: + case ALTIVEC_BUILTIN_VXOR_V16QI_UNS: + case ALTIVEC_BUILTIN_VXOR_V16QI: + case ALTIVEC_BUILTIN_VXOR_V8HI_UNS: + case ALTIVEC_BUILTIN_VXOR_V8HI: + case ALTIVEC_BUILTIN_VXOR_V4SI_UNS: + case ALTIVEC_BUILTIN_VXOR_V4SI: + case ALTIVEC_BUILTIN_VXOR_V2DI_UNS: + case ALTIVEC_BUILTIN_VXOR_V2DI: + case ALTIVEC_BUILTIN_VXOR_V4SF: + case ALTIVEC_BUILTIN_VXOR_V2DF: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -15808,7 +15852,16 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi) gsi_replace (gsi, g, true); return true; /* Flavors of vec_nor. */ - case ALTIVEC_BUILTIN_VNOR: + case ALTIVEC_BUILTIN_VNOR_V16QI_UNS: + case ALTIVEC_BUILTIN_VNOR_V16QI: + case ALTIVEC_BUILTIN_VNOR_V8HI_UNS: + case ALTIVEC_BUILTIN_VNOR_V8HI: + case ALTIVEC_BUILTIN_VNOR_V4SI_UNS: + case ALTIVEC_BUILTIN_VNOR_V4SI: + case ALTIVEC_BUILTIN_VNOR_V2DI_UNS: + case ALTIVEC_BUILTIN_VNOR_V2DI: + case ALTIVEC_BUILTIN_VNOR_V4SF: + case ALTIVEC_BUILTIN_VNOR_V2DF: arg0 = gimple_call_arg (stmt, 0); arg1 = gimple_call_arg (stmt, 1); lhs = gimple_call_lhs (stmt); @@ -17881,6 +17934,41 @@ builtin_function_type (machine_mode mode_ret, machine_mode mode_arg0, case ALTIVEC_BUILTIN_VMINUW: case P8V_BUILTIN_VMAXUD: case P8V_BUILTIN_VMINUD: + case ALTIVEC_BUILTIN_VAND_V16QI_UNS: + case ALTIVEC_BUILTIN_VAND_V8HI_UNS: + case ALTIVEC_BUILTIN_VAND_V4SI_UNS: + case ALTIVEC_BUILTIN_VAND_V2DI_UNS: + case ALTIVEC_BUILTIN_VANDC_V16QI_UNS: + case ALTIVEC_BUILTIN_VANDC_V8HI_UNS: + case ALTIVEC_BUILTIN_VANDC_V4SI_UNS: + case ALTIVEC_BUILTIN_VANDC_V2DI_UNS: + case ALTIVEC_BUILTIN_VNOR_V16QI_UNS: + case ALTIVEC_BUILTIN_VNOR_V8HI_UNS: + case ALTIVEC_BUILTIN_VNOR_V4SI_UNS: + case ALTIVEC_BUILTIN_VNOR_V2DI_UNS: + case ALTIVEC_BUILTIN_VOR_V16QI_UNS: + case ALTIVEC_BUILTIN_VOR_V8HI_UNS: + case ALTIVEC_BUILTIN_VOR_V4SI_UNS: + case ALTIVEC_BUILTIN_VOR_V2DI_UNS: + case ALTIVEC_BUILTIN_VXOR_V16QI_UNS: + case ALTIVEC_BUILTIN_VXOR_V8HI_UNS: + case ALTIVEC_BUILTIN_VXOR_V4SI_UNS: + case ALTIVEC_BUILTIN_VXOR_V2DI_UNS: + case P8V_BUILTIN_EQV_V16QI_UNS: + case P8V_BUILTIN_EQV_V8HI_UNS: + case P8V_BUILTIN_EQV_V4SI_UNS: + case P8V_BUILTIN_EQV_V2DI_UNS: + case P8V_BUILTIN_EQV_V1TI_UNS: + case P8V_BUILTIN_NAND_V16QI_UNS: + case P8V_BUILTIN_NAND_V8HI_UNS: + case P8V_BUILTIN_NAND_V4SI_UNS: + case P8V_BUILTIN_NAND_V2DI_UNS: + case P8V_BUILTIN_NAND_V1TI_UNS: + case P8V_BUILTIN_ORC_V16QI_UNS: + case P8V_BUILTIN_ORC_V8HI_UNS: + case P8V_BUILTIN_ORC_V4SI_UNS: + case P8V_BUILTIN_ORC_V2DI_UNS: + case P8V_BUILTIN_ORC_V1TI_UNS: h.uns_p[0] = 1; h.uns_p[1] = 1; h.uns_p[2] = 1; diff --git a/gcc/testsuite/gcc.dg/vmx/ops.c b/gcc/testsuite/gcc.dg/vmx/ops.c index 6aff80bbd1a9..4a0650c06bdf 100644 --- a/gcc/testsuite/gcc.dg/vmx/ops.c +++ b/gcc/testsuite/gcc.dg/vmx/ops.c @@ -1,5 +1,5 @@ /* { dg-do compile } */ -/* { dg-options "-maltivec -mabi=altivec -std=gnu99 -mno-vsx -Wno-deprecated" } */ +/* { dg-options "-maltivec -mabi=altivec -std=gnu99 -mno-vsx -Wno-deprecated -flax-vector-conversions" } */ #include #include extern char * *var_char_ptr; diff --git a/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ands-longlong.c b/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ands-longlong.c index 76bece11a99e..ad760f55fac2 100644 --- a/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ands-longlong.c +++ b/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ands-longlong.c @@ -3,7 +3,9 @@ /* { dg-do compile } */ /* { dg-require-effective-target powerpc_vsx_ok } */ -/* { dg-options "-mvsx -O2" } */ +/* Disable 
ipa-icf so the compiler does not replace some functions with tail calls to identical ones,
+   which would omit their bodies and lose the expected assembly.  */
+/* { dg-options "-mvsx -O2 -fno-ipa-icf" } */
 #include
diff --git a/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ors-longlong.c b/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ors-longlong.c
index 10c69d3d87b5..9aa3738d77a7 100644
--- a/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ors-longlong.c
+++ b/gcc/testsuite/gcc.target/powerpc/fold-vec-logical-ors-longlong.c
@@ -3,7 +3,9 @@
 /* { dg-do compile } */
 /* { dg-require-effective-target powerpc_p8vector_ok } */
-/* { dg-options "-mpower8-vector -O2" } */
+/* Disable ipa-icf so the compiler does not replace some functions with tail calls to identical ones,
+   which would omit their bodies and lose the expected assembly.  */
+/* { dg-options "-mpower8-vector -O2 -fno-ipa-icf" } */
 #include
diff --git a/gcc/testsuite/gcc.target/powerpc/pr92923-1.c b/gcc/testsuite/gcc.target/powerpc/pr92923-1.c
new file mode 100644
index 000000000000..f901244fcf7a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/powerpc/pr92923-1.c
@@ -0,0 +1,453 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target powerpc_altivec_ok } */
+/* { dg-options "-maltivec -O2 -fdump-tree-gimple" } */
+
+/* Verify that overloaded built-ins for "and", "andc", "nor", "or" and "xor"
+   do not produce VIEW_CONVERT_EXPR operations on their operands.  Like so:
+
+     _1 = VIEW_CONVERT_EXPR<__vector signed int>(x);
+     _2 = VIEW_CONVERT_EXPR<__vector signed int>(y);
+     _3 = __builtin_altivec_vand (_1, _2);
+     D.3245 = VIEW_CONVERT_EXPR(_3);
+*/
+
+typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) char bcvec_t;
+typedef __attribute__((altivec(vector__))) signed char scvec_t;
+typedef __attribute__((altivec(vector__))) unsigned char ucvec_t;
+
+typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) short bsvec_t;
+typedef __attribute__((altivec(vector__))) signed short ssvec_t;
+typedef __attribute__((altivec(vector__))) unsigned short usvec_t;
+
+typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) int bivec_t;
+typedef __attribute__((altivec(vector__))) signed int sivec_t;
+typedef __attribute__((altivec(vector__))) unsigned int uivec_t;
+
+typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) long long bllvec_t;
+typedef __attribute__((altivec(vector__))) signed long long sllvec_t;
+typedef __attribute__((altivec(vector__))) unsigned long long ullvec_t;
+
+typedef __attribute__((altivec(vector__))) double dvec_t;
+typedef __attribute__((altivec(vector__))) float fvec_t;
+
+bcvec_t
+and_0 (bcvec_t x, bcvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+scvec_t
+and_1 (scvec_t x, scvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+ucvec_t
+and_2 (ucvec_t x, ucvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+bsvec_t
+and_3 (bsvec_t x, bsvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+ssvec_t
+and_4 (ssvec_t x, ssvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+usvec_t
+and_5 (usvec_t x, usvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+bivec_t
+and_6 (bivec_t x, bivec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+sivec_t
+and_7 (sivec_t x, sivec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+uivec_t
+and_8 (uivec_t x, uivec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+bllvec_t
+and_9 (bllvec_t x, bllvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+sllvec_t
+and_10 (sllvec_t x, sllvec_t y)
+{
+  return __builtin_vec_and (x, y);
+}
+
+ullvec_t +and_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_and (x, y); +} + +dvec_t +and_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_and (x, y); +} + +fvec_t +and_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_and (x, y); +} + +bcvec_t +andc_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +scvec_t +andc_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +ucvec_t +andc_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +bsvec_t +andc_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +ssvec_t +andc_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +usvec_t +andc_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +bivec_t +andc_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_andc (x, y); +} + +sivec_t +andc_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_andc (x, y); +} + +uivec_t +andc_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_andc (x, y); +} + +bllvec_t +andc_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +sllvec_t +andc_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +ullvec_t +andc_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +dvec_t +andc_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +fvec_t +andc_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_andc (x, y); +} + +bcvec_t +nor_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +scvec_t +nor_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +ucvec_t +nor_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +bsvec_t +nor_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +ssvec_t +nor_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +usvec_t +nor_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +bivec_t +nor_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_nor (x, y); +} + +sivec_t +nor_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_nor (x, y); +} + +uivec_t +nor_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_nor (x, y); +} + +bllvec_t +nor_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +sllvec_t +nor_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +ullvec_t +nor_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +dvec_t +nor_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +fvec_t +nor_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_nor (x, y); +} + +bcvec_t +or_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_or (x, y); +} + +scvec_t +or_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_or (x, y); +} + +ucvec_t +or_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_or (x, y); +} + +bsvec_t +or_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_or (x, y); +} + +ssvec_t +or_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_or (x, y); +} + +usvec_t +or_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_or (x, y); +} + +bivec_t +or_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_or (x, y); +} + +sivec_t +or_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_or (x, y); +} + +uivec_t +or_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_or (x, y); +} + +bllvec_t +or_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_or (x, y); +} + +sllvec_t +or_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_or (x, y); +} + +ullvec_t +or_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_or (x, y); +} + 
+dvec_t +or_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_or (x, y); +} + +fvec_t +or_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_or (x, y); +} + +bcvec_t +xor_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +scvec_t +xor_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +ucvec_t +xor_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +bsvec_t +xor_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +ssvec_t +xor_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +usvec_t +xor_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +bivec_t +xor_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_xor (x, y); +} + +sivec_t +xor_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_xor (x, y); +} + +uivec_t +xor_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_xor (x, y); +} + +bllvec_t +xor_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +sllvec_t +xor_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +ullvec_t +xor_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +dvec_t +xor_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +fvec_t +xor_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_xor (x, y); +} + +/* { dg-final { scan-tree-dump-not "VIEW_CONVERT_EXPR" "gimple" } } */ diff --git a/gcc/testsuite/gcc.target/powerpc/pr92923-2.c b/gcc/testsuite/gcc.target/powerpc/pr92923-2.c new file mode 100644 index 000000000000..ebecb69915f5 --- /dev/null +++ b/gcc/testsuite/gcc.target/powerpc/pr92923-2.c @@ -0,0 +1,285 @@ +/* { dg-do compile } */ +/* { dg-require-effective-target powerpc_p8vector_ok } */ +/* { dg-options "-mdejagnu-cpu=power8 -O2 -fdump-tree-gimple" } */ + +/* Verify that overloaded built-ins for "eqv", "nand" and "orc" do not + produce VIEW_CONVERT_EXPR operations on their operands. 
Like so: + + _1 = VIEW_CONVERT_EXPR<__vector signed int>(x); + _2 = VIEW_CONVERT_EXPR<__vector signed int>(y); + _3 = __builtin_altivec_vand (_1, _2); + D.3245 = VIEW_CONVERT_EXPR(_3); +*/ + +typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) char bcvec_t; +typedef __attribute__((altivec(vector__))) signed char scvec_t; +typedef __attribute__((altivec(vector__))) unsigned char ucvec_t; + +typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) short bsvec_t; +typedef __attribute__((altivec(vector__))) signed short ssvec_t; +typedef __attribute__((altivec(vector__))) unsigned short usvec_t; + +typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) int bivec_t; +typedef __attribute__((altivec(vector__))) signed int sivec_t; +typedef __attribute__((altivec(vector__))) unsigned int uivec_t; + +typedef __attribute__((altivec(vector__))) __attribute__((altivec(bool__))) long long bllvec_t; +typedef __attribute__((altivec(vector__))) signed long long sllvec_t; +typedef __attribute__((altivec(vector__))) unsigned long long ullvec_t; + +typedef __attribute__((altivec(vector__))) double dvec_t; +typedef __attribute__((altivec(vector__))) float fvec_t; + +bcvec_t +eqv_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +scvec_t +eqv_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +ucvec_t +eqv_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +bsvec_t +eqv_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +ssvec_t +eqv_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +usvec_t +eqv_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +bivec_t +eqv_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +sivec_t +eqv_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +uivec_t +eqv_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +bllvec_t +eqv_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +sllvec_t +eqv_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +ullvec_t +eqv_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +dvec_t +eqv_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +fvec_t +eqv_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_eqv (x, y); +} + +bcvec_t +nand_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +scvec_t +nand_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +ucvec_t +nand_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +bsvec_t +nand_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +ssvec_t +nand_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +usvec_t +nand_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +bivec_t +nand_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_nand (x, y); +} + +sivec_t +nand_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_nand (x, y); +} + +uivec_t +nand_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_nand (x, y); +} + +bllvec_t +nand_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +sllvec_t +nand_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +ullvec_t +nand_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +dvec_t +nand_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_nand (x, y); +} + +fvec_t +nand_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_nand 
(x, y); +} + +bcvec_t +orc_0 (bcvec_t x, bcvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +scvec_t +orc_1 (scvec_t x, scvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +ucvec_t +orc_2 (ucvec_t x, ucvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +bsvec_t +orc_3 (bsvec_t x, bsvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +ssvec_t +orc_4 (ssvec_t x, ssvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +usvec_t +orc_5 (usvec_t x, usvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +bivec_t +orc_6 (bivec_t x, bivec_t y) +{ + return __builtin_vec_orc (x, y); +} + +sivec_t +orc_7 (sivec_t x, sivec_t y) +{ + return __builtin_vec_orc (x, y); +} + +uivec_t +orc_8 (uivec_t x, uivec_t y) +{ + return __builtin_vec_orc (x, y); +} + +bllvec_t +orc_9 (bllvec_t x, bllvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +sllvec_t +orc_10 (sllvec_t x, sllvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +ullvec_t +orc_11 (ullvec_t x, ullvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +dvec_t +orc_12 (dvec_t x, dvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +fvec_t +orc_13 (fvec_t x, fvec_t y) +{ + return __builtin_vec_orc (x, y); +} + +/* { dg-final { scan-tree-dump-not "VIEW_CONVERT_EXPR" "gimple" } } */ diff --git a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.h b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.h index a891b64e6faa..0106e8d2901a 100644 --- a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.h +++ b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.h @@ -1,167 +1,154 @@ -/* This test code is included into vsx-vector-6-be.c and vsx-vector-6-le.c. - The two files have the tests for the number of instructions generated for - LE versus BE. */ +/* This test code is included into vsx-vector-6.p7.c, vsx-vector-6.p8.c + and vsx-vector-6.p9.c. The .c files have the tests for the number + of instructions generated for each cpu type. 
*/ #include -void foo (vector double *out, vector double *in, vector long *p_l, vector bool long *p_b, - vector unsigned char *p_uc, int *i, vector float *p_f, - vector bool char *outbc, vector bool int *outbi, - vector bool short *outbsi, vector int *outsi, - vector unsigned int *outui, vector signed char *outsc, - vector unsigned char *outuc) +typedef struct { + vector double d; + vector float f; + vector long sl; + vector int si; + vector short ss; + vector char sc; + vector unsigned int ui; + vector unsigned short int us; + vector unsigned char uc; + vector bool long long bll; + vector bool long bl; + vector bool int bi; + vector bool short bs; + vector bool char bc; +} opnd_t; + +void +func_1op (opnd_t *dst, opnd_t *src) { - vector double in0 = in[0]; - vector double in1 = in[1]; - vector double in2 = in[2]; - vector long inl = *p_l; - vector bool long inb = *p_b; - vector bool long long inbl0; - vector bool long long inbl1; - vector unsigned char uc = *p_uc; - vector float inf0; - vector float inf1; - vector float inf2; - vector char inc0; - vector char inc1; - vector bool char inbc0; - vector bool char inbc1; - vector bool short inbs0; - vector bool short inbs1; - vector bool int inbi0; - vector bool int inbi1; - vector signed short int inssi0, inssi1; - vector unsigned short int inusi0, inusi1; - vector signed int insi0, insi1; - vector unsigned int inui0, inui1; - vector unsigned char inuc0, inuc1; - - *out++ = vec_abs (in0); - *out++ = vec_add (in0, in1); - *out++ = vec_and (in0, in1); - *out++ = vec_and (in0, inb); - *out++ = vec_and (inb, in0); - *out++ = vec_andc (in0, in1); - *out++ = vec_andc (in0, inb); - *out++ = vec_andc (inb, in0); - *out++ = vec_andc (inbl0, in0); - *out++ = vec_andc (in0, inbl0); - - *out++ = vec_ceil (in0); - *p_b++ = vec_cmpeq (in0, in1); - *p_b++ = vec_cmpgt (in0, in1); - *p_b++ = vec_cmpge (in0, in1); - *p_b++ = vec_cmplt (in0, in1); - *p_b++ = vec_cmple (in0, in1); - *out++ = vec_div (in0, in1); - *out++ = vec_floor (in0); - *out++ = vec_madd (in0, in1, in2); - *out++ = vec_msub (in0, in1, in2); - *out++ = vec_max (in0, in1); - *out++ = vec_min (in0, in1); - *out++ = vec_msub (in0, in1, in2); - *out++ = vec_mul (in0, in1); - *out++ = vec_nearbyint (in0); - *out++ = vec_nmadd (in0, in1, in2); - *out++ = vec_nmsub (in0, in1, in2); - *out++ = vec_nor (in0, in1); - *out++ = vec_or (in0, in1); - *out++ = vec_or (in0, inb); - *out++ = vec_or (inb, in0); - *out++ = vec_perm (in0, in1, uc); - *out++ = vec_rint (in0); - *out++ = vec_sel (in0, in1, inl); - *out++ = vec_sel (in0, in1, inb); - *out++ = vec_sub (in0, in1); - *out++ = vec_sqrt (in0); - *out++ = vec_trunc (in0); - *out++ = vec_xor (in0, in1); - *out++ = vec_xor (in0, inb); - *out++ = vec_xor (inb, in0); - - *i++ = vec_all_eq (in0, in1); - *i++ = vec_all_ge (in0, in1); - *i++ = vec_all_gt (in0, in1); - *i++ = vec_all_le (in0, in1); - *i++ = vec_all_lt (in0, in1); - *i++ = vec_all_nan (in0); - *i++ = vec_all_ne (in0, in1); - *i++ = vec_all_nge (in0, in1); - *i++ = vec_all_ngt (in0, in1); - *i++ = vec_all_nle (in0, in1); - *i++ = vec_all_nlt (in0, in1); - *i++ = vec_all_numeric (in0); - *i++ = vec_any_eq (in0, in1); - *i++ = vec_any_ge (in0, in1); - *i++ = vec_any_gt (in0, in1); - *i++ = vec_any_le (in0, in1); - *i++ = vec_any_lt (in0, in1); - *i++ = vec_any_nan (in0); - *i++ = vec_any_ne (in0, in1); - *i++ = vec_any_nge (in0, in1); - *i++ = vec_any_ngt (in0, in1); - *i++ = vec_any_nle (in0, in1); - *i++ = vec_any_nlt (in0, in1); - *i++ = vec_any_numeric (in0); - - *p_f++ = vec_msub (inf0, inf1, 
inf2); - *p_f++ = vec_nmsub (inf0, inf1, inf2); - *p_f++ = vec_nmadd (inf0, inf1, inf2); - *p_f++ = vec_or (inf0, inf1); - *p_f++ = vec_trunc (inf0); - - *out++ = vec_or (inbl0, in0); - *out++ = vec_or (in0, inbl0); - - *out++ = vec_nor (in0, in1); - - *outbc++ = vec_nor (inbc0, inbc1); - *outbc++ = vec_andc (inbc0, inbc1); - *outbc++ = vec_or (inbc0, inbc1); - - *outuc++ = vec_max (inuc0, inuc1); - - *outbi++ = vec_andc (inbi0, inbi1); - *outbsi++ = vec_andc (inbs0, inbs1); - - *outbsi++ = vec_andc (inbs0, inbs1); - - *outbi++ = vec_nor (inbi0, inbi1); - *outbi++ = vec_or (inbi0, inbi1); - - *outbsi++ = vec_nor (inbs0, inbs1); - *outbsi++ = vec_or (inbs0, inbs1); - - *outsi++ = vec_msums(inssi0, inssi1, insi0); - *outui++ = vec_msums(inusi0, inusi1, inui0); - - *p_f++ = vec_nor (inf0, inf1); - - *p_f++ = vec_andc (inf0, inf1); - *p_f++ = vec_andc (inbi0, inf0); - *p_f++ = vec_andc (inf0, inbi0); - - *in++ = vec_andc (inbl0, in1); - *in++ = vec_andc (in0, inbl1); + dst[0].d = vec_abs (src[0].d); + dst[1].d = vec_ceil (src[1].d); + dst[2].d = vec_floor (src[2].d); + dst[3].d = vec_nearbyint (src[3].d); + dst[4].d = vec_rint (src[4].d); + dst[5].d = vec_sqrt (src[5].d); + dst[6].d = vec_trunc (src[6].d); + dst[7].f = vec_trunc (src[7].f); } -int main() +void +func_2op (opnd_t *dst, opnd_t *src0, opnd_t *src1) { - vector double *out; - vector double *in; - vector long *p_l; - vector bool long *p_b; - vector unsigned char *p_uc; - int *i; - vector float *p_f; - vector bool char *outbc; - vector bool int *outbi; - vector bool short *outbsi; - vector int *outsi; - vector unsigned int *outui; - vector signed char *outsc; - vector unsigned char *outuc; - - foo (out, in, p_l, p_b, p_uc, i, p_f, outbc, - outbi, outbsi, outsi, outui, outsc, outuc); + dst[0].d = vec_add (src0[0].d, src1[0].d); + dst[1].d = vec_div (src0[1].d, src1[1].d); + dst[2].d = vec_max (src0[2].d, src1[2].d); + dst[3].uc = vec_max (src0[3].uc, src1[3].uc); + dst[4].d = vec_min (src0[4].d, src1[4].d); + dst[5].d = vec_mul (src0[5].d, src1[5].d); + dst[6].d = vec_sub (src0[6].d, src1[6].d); +} + +void +func_2lop (opnd_t *dst, opnd_t *src0, opnd_t *src1) +{ + dst[0].d = vec_and (src0[0].d, src1[0].d); + dst[1].d = vec_and (src0[1].d, src1[1].bl); + dst[2].d = vec_and (src0[2].bl, src1[2].d); + + dst[3].d = vec_andc (src0[3].d, src1[3].d); + dst[4].d = vec_andc (src0[4].d, src1[4].bl); + dst[5].d = vec_andc (src0[5].bl, src1[5].d); + dst[6].d = vec_andc (src0[6].bll, src1[6].d); + dst[7].d = vec_andc (src0[7].d, src1[7].bll); + dst[8].bi = vec_andc (src0[8].bi, src1[8].bi); + dst[9].bs = vec_andc (src0[9].bs, src1[9].bs); + dst[10].bc = vec_andc (src0[10].bc, src1[10].bc); + dst[11].f = vec_andc (src0[11].f, src1[11].f); + dst[12].f = vec_andc (src0[12].bi, src1[12].f); + dst[13].f = vec_andc (src0[13].f, src1[13].bi); + dst[14].d = vec_andc (src0[14].bll, src1[14].d); + dst[15].d = vec_andc (src0[15].d, src1[15].bll); + + dst[16].d = vec_nor (src0[16].d, src1[16].d); + dst[17].f = vec_nor (src0[17].f, src1[17].f); + dst[18].bi = vec_nor (src0[18].bi, src1[18].bi); + dst[19].bs = vec_nor (src0[19].bs, src1[19].bs); + dst[20].bc = vec_nor (src0[20].bc, src1[20].bc); + + dst[21].d = vec_or (src0[21].d, src1[21].d); + dst[22].d = vec_or (src0[22].d, src1[22].bl); + dst[23].d = vec_or (src0[23].bl, src1[23].d); + dst[24].d = vec_or (src0[24].bll, src1[24].d); + dst[25].d = vec_or (src0[25].d, src1[25].bll); + dst[26].f = vec_or (src0[26].f, src1[26].f); + dst[27].bi = vec_or (src0[27].bi, src1[27].bi); + dst[28].bs = vec_or 
(src0[28].bs, src1[28].bs); + dst[29].bc = vec_or (src0[29].bc, src1[29].bc); + + dst[30].d = vec_xor (src0[30].d, src1[30].d); + dst[31].d = vec_xor (src0[31].d, src1[31].bl); + dst[32].d = vec_xor (src0[32].bl, src1[32].d); +} + +void +func_cmp (opnd_t *dst, opnd_t *src0, opnd_t *src1) +{ + dst[0].bl = vec_cmpeq (src0[0].d, src1[0].d); + dst[1].bl = vec_cmpgt (src0[1].d, src1[1].d); + dst[2].bl = vec_cmpge (src0[2].d, src1[2].d); + dst[3].bl = vec_cmplt (src0[3].d, src1[3].d); + dst[4].bl = vec_cmple (src0[4].d, src1[4].d); +} + +void +func_all_cmp (int *dst, opnd_t *src0, opnd_t *src1) +{ + dst[0] = vec_all_eq (src0[0].d, src1[0].d); + dst[1] = vec_all_ge (src0[1].d, src1[1].d); + dst[2] = vec_all_gt (src0[2].d, src1[2].d); + dst[3] = vec_all_le (src0[3].d, src1[3].d); + dst[4] = vec_all_lt (src0[4].d, src1[4].d); + dst[5] = vec_all_nan (src0[5].d); + dst[6] = vec_all_ne (src0[6].d, src1[6].d); + dst[7] = vec_all_nge (src0[7].d, src1[7].d); + dst[8] = vec_all_ngt (src0[8].d, src1[8].d); + dst[9] = vec_all_nle (src0[9].d, src1[9].d); + dst[10] = vec_all_nlt (src0[10].d, src1[10].d); + dst[11] = vec_all_numeric (src0[11].d); + dst[12] = vec_any_eq (src0[12].d, src1[12].d); + dst[13] = vec_any_ge (src0[13].d, src1[13].d); + dst[14] = vec_any_gt (src0[14].d, src1[14].d); + dst[15] = vec_any_le (src0[15].d, src1[15].d); + dst[16] = vec_any_lt (src0[16].d, src1[16].d); + dst[17] = vec_any_nan (src0[17].d); + dst[18] = vec_any_ne (src0[18].d, src1[18].d); + dst[19] = vec_any_nge (src0[19].d, src1[19].d); + dst[20] = vec_any_ngt (src0[20].d, src1[20].d); + dst[21] = vec_any_nle (src0[21].d, src1[21].d); + dst[22] = vec_any_nlt (src0[22].d, src1[22].d); + dst[23] = vec_any_numeric (src0[23].d); +} + +void +func_3op (opnd_t *dst, opnd_t *src0, opnd_t *src1, opnd_t *src2) +{ + dst[0].d = vec_madd (src0[0].d, src1[0].d, src2[0].d); + dst[1].d = vec_msub (src0[1].d, src1[1].d, src2[1].d); + dst[2].d = vec_nmadd (src0[2].d, src1[2].d, src2[2].d); + dst[3].d = vec_nmsub (src0[3].d, src1[3].d, src2[3].d); + + dst[4].f = vec_madd (src0[4].f, src1[4].f, src2[4].f); + dst[5].f = vec_msub (src0[5].f, src1[5].f, src2[5].f); + dst[6].f = vec_nmsub (src0[6].f, src1[6].f, src2[6].f); + dst[7].f = vec_nmadd (src0[7].f, src1[7].f, src2[7].f); + +#if defined (__BIG_ENDIAN__) || defined (_ARCH_PWR9) + dst[8].d = vec_perm (src0[8].d, src1[8].d, src2[8].uc); +#else + dst[8].d = vec_perm (src0[8].d, src1[8].d, ~src2[8].uc); +#endif + + dst[9].d = vec_sel (src0[9].d, src1[9].d, src2[9].d); + dst[10].d = vec_sel (src0[10].d, src1[10].d, src2[10].bl); + + dst[11].si = vec_msums(src0[11].ss, src1[11].ss, src2[11].si); + dst[12].ui = vec_msums(src0[12].us, src1[12].us, src2[12].ui); } diff --git a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p7.c b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p7.c index 0be7e7c68950..ff560dd8d4f4 100644 --- a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p7.c +++ b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p7.c @@ -1,41 +1,43 @@ -/* { dg-do compile { target { lp64 && be } } } */ +/* { dg-do compile { target lp64 } } */ /* { dg-skip-if "" { powerpc*-*-darwin* } } */ /* { dg-require-effective-target powerpc_vsx_ok } */ -/* { dg-options "-mvsx -O2 -mdejagnu-cpu=power7 -dp" } */ - -/* Expected instruction counts for Power 7 */ - -/* { dg-final { scan-assembler-times "xvabsdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvadddp" 1 } } */ -/* { dg-final { scan-assembler-times "xxlnor" 5 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpeqdp\s} 1 } } */ -/* { dg-final { 
scan-assembler-times {\mxvcmpeqdp\.\s} 5 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgtdp\s} 2 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgtdp\.\s} 5 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgedp\s} 1 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgedp\.\s} 6 } } */ -/* { dg-final { scan-assembler-times "xvrdpim" 1 } } */ -/* { dg-final { scan-assembler-times "xvmaddadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmsubadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvsubdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmaxdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmindp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmuldp" 1 } } */ -/* { dg-final { scan-assembler-times "vperm" 2 } } */ -/* { dg-final { scan-assembler-times "xvrdpic" 2 } } */ -/* { dg-final { scan-assembler-times "xvsqrtdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpiz" 1 } } */ -/* { dg-final { scan-assembler-times "xvmsubasp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmaddasp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmaddadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmsubadp" 1 } } */ -/* { dg-final { scan-assembler-times "vmsumshs" 2 } } */ -/* { dg-final { scan-assembler-times "xxland" 13 } } */ -/* { dg-final { scan-assembler-times "xxlxor" 2 } } */ -/* { dg-final { scan-assembler-times "xxsel" 4 } } */ -/* { dg-final { scan-assembler-times "xvrdpip" 1 } } */ -/* { dg-final { scan-assembler-times "xvdivdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpi" 7 } } */ +/* { dg-options "-O2 -mdejagnu-cpu=power7" } */ /* Source code for the test in vsx-vector-6.h */ #include "vsx-vector-6.h" + +/* { dg-final { scan-assembler-times {\mvmaxub\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvmsumshs\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvmsumuhs\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvperm\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvabsdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvadddp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpeqdp\M} 9 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpgedp\M} 10 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpgtdp\M} 10 } } */ +/* { dg-final { scan-assembler-times {\mxvdivdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmadd[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmadd[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmaxdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmindp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmsub[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmsub[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmuldp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmadd[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmadd[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmsub[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmsub[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpi\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpic\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpim\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpip\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpiz\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrspiz\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvsqrtdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvsubdp\M} 1 } } */ +/* { dg-final { scan-assembler-times 
{\mxxland\M} 3 } } */ +/* { dg-final { scan-assembler-times {\mxxlandc\M} 13 } } */ +/* { dg-final { scan-assembler-times {\mxxlnor\M} 5 } } */ +/* { dg-final { scan-assembler-times {\mxxlor\M} 9 } } */ +/* { dg-final { scan-assembler-times {\mxxlxor\M} 3 } } */ +/* { dg-final { scan-assembler-times {\mxxsel\M} 2 } } */ diff --git a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p8.c b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p8.c index 09a1d96e6be6..a99da6504928 100644 --- a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p8.c +++ b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p8.c @@ -1,49 +1,43 @@ /* { dg-do compile { target lp64 } } */ /* { dg-skip-if "" { powerpc*-*-darwin* } } */ /* { dg-require-effective-target powerpc_vsx_ok } */ -/* { dg-options "-mvsx -O2 -mdejagnu-cpu=power8" } */ - -/* Expected instruction counts for Power 8. */ - -/* { dg-final { scan-assembler-times "xvabsdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvadddp" 1 } } */ -/* { dg-final { scan-assembler-times "xxlnor" 6 { target le } } } */ -/* { dg-final { scan-assembler-times "xxlnor" 5 { target be } } } */ - -/* We generate xxlor instructions for many reasons other than or'ing vector - operands or calling __builtin_vec_or(), which means we cannot rely on - their usage counts being stable. Therefore, we just ensure at least one - xxlor instruction was generated. */ -/* { dg-final { scan-assembler "xxlor" } } */ - -/* { dg-final { scan-assembler-times {\mxvcmpeqdp\s} 1 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpeqdp\.\s} 5 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgtdp\s} 2 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgtdp\.\s} 6 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgedp\s} 2 } } */ -/* { dg-final { scan-assembler-times {\mxvcmpgedp\.\s} 4 } } */ -/* { dg-final { scan-assembler-times "xvrdpim" 1 } } */ -/* { dg-final { scan-assembler-times "xvmaddadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmsubadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvsubdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmaxdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmindp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmuldp" 1 } } */ -/* { dg-final { scan-assembler-times "vperm" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpic" 1 } } */ -/* { dg-final { scan-assembler-times "xvsqrtdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpiz" 1 } } */ -/* { dg-final { scan-assembler-times "xvmsubasp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmaddasp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmaddadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmsubadp" 1 } } */ -/* { dg-final { scan-assembler-times "vmsumshs" 1 } } */ -/* { dg-final { scan-assembler-times "xxland" 13 } } */ -/* { dg-final { scan-assembler-times "xxlxor" 2 } } */ -/* { dg-final { scan-assembler-times "xxsel" 2 } } */ -/* { dg-final { scan-assembler-times "xvrdpip" 1 } } */ -/* { dg-final { scan-assembler-times "xvdivdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpi" 5 } } */ +/* { dg-options "-O2 -mdejagnu-cpu=power8" } */ /* Source code for the test in vsx-vector-6.h */ #include "vsx-vector-6.h" + +/* { dg-final { scan-assembler-times {\mvmaxub\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvmsumshs\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvmsumuhs\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvperm\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvabsdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvadddp\M} 
1 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpeqdp\M} 9 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpgedp\M} 10 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpgtdp\M} 10 } } */ +/* { dg-final { scan-assembler-times {\mxvdivdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmadd[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmadd[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmaxdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmindp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmsub[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmsub[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmuldp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmadd[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmadd[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmsub[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmsub[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpi\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpic\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpim\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpip\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpiz\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrspiz\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvsqrtdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvsubdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxxland\M} 3 } } */ +/* { dg-final { scan-assembler-times {\mxxlandc\M} 13 } } */ +/* { dg-final { scan-assembler-times {\mxxlnor\M} 5 } } */ +/* { dg-final { scan-assembler-times {\mxxlor\M} 9 } } */ +/* { dg-final { scan-assembler-times {\mxxlxor\M} 3 } } */ +/* { dg-final { scan-assembler-times {\mxxsel\M} 2 } } */ diff --git a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p9.c b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p9.c index 5f1bafcde17e..eabdf71a7bec 100644 --- a/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p9.c +++ b/gcc/testsuite/gcc.target/powerpc/vsx-vector-6.p9.c @@ -1,38 +1,42 @@ /* { dg-do compile { target lp64 } } */ /* { dg-skip-if "" { powerpc*-*-darwin* } } */ -/* { dg-require-effective-target powerpc_p9vector_ok } */ -/* { dg-options "-mvsx -O2 -mdejagnu-cpu=power9" } */ - -/* Expected instruction counts for Power9. */ - -/* { dg-final { scan-assembler-times "xvabsdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvadddp" 1 } } */ -/* { dg-final { scan-assembler-times "xxlnor" 5 } } */ - -/* We generate xxlor instructions for many reasons other than or'ing vector - operands or calling __builtin_vec_or(), which means we cannot rely on - their usage counts being stable. Therefore, we just ensure at least one - xxlor instruction was generated. 
*/ -/* { dg-final { scan-assembler "xxlor" } } */ - -/* { dg-final { scan-assembler-times "xvcmpeqdp" 5 } } */ -/* { dg-final { scan-assembler-times "xvcmpgtdp" 8 } } */ -/* { dg-final { scan-assembler-times "xvcmpgedp" 8 } } */ -/* { dg-final { scan-assembler-times "xvrdpim" 1 } } */ -/* { dg-final { scan-assembler-times "xvmaddadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmsubadp" 1 } } */ -/* { dg-final { scan-assembler-times "xvsubdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmaxdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmindp" 1 } } */ -/* { dg-final { scan-assembler-times "xvmuldp" 1 } } */ -/* { dg-final { scan-assembler-times "vperm" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpic" 1 } } */ -/* { dg-final { scan-assembler-times "xvsqrtdp" 1 } } */ -/* { dg-final { scan-assembler-times "xvrdpiz" 1 } } */ -/* { dg-final { scan-assembler-times "xvmsubasp" 1 } } */ -/* { dg-final { scan-assembler-times "xvnmaddasp" 1 } } */ -/* { dg-final { scan-assembler-times "vmsumshs" 1 } } */ -/* { dg-final { scan-assembler-times "xxland" 13 } } */ +/* { dg-require-effective-target powerpc_vsx_ok } */ +/* { dg-options "-O2 -mdejagnu-cpu=power9" } */ /* Source code for the test in vsx-vector-6.h */ #include "vsx-vector-6.h" + +/* { dg-final { scan-assembler-times {\mvmaxub\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvmsumshs\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvmsumuhs\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mvpermr?\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvabsdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvadddp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpeqdp\M} 9 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpgedp\M} 10 } } */ +/* { dg-final { scan-assembler-times {\mxvcmpgtdp\M} 10 } } */ +/* { dg-final { scan-assembler-times {\mxvdivdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmadd[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmadd[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmaxdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmindp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmsub[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmuldp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmadd[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmadd[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmsub[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvnmsub[am]sp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpi\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpic\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpim\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpip\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrdpiz\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvrspiz\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvsqrtdp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxvmsub[am]dp\M} 1 } } */ +/* { dg-final { scan-assembler-times {\mxxland\M} 3 } } */ +/* { dg-final { scan-assembler-times {\mxxlandc\M} 13 } } */ +/* { dg-final { scan-assembler-times {\mxxlnor\M} 5 } } */ +/* { dg-final { scan-assembler-times {\mxxlor\M} 9 } } */ +/* { dg-final { scan-assembler-times {\mxxlxor\M} 3 } } */ +/* { dg-final { scan-assembler-times {\mxxsel\M} 2 } } */
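
For reference, the stand-alone reproducer below (not part of the patch; the file name, function names and exact options are only illustrative) exercises the same folding that the new pr92923 tests cover. Built with a VSX-enabled GCC carrying this fix, the -fdump-tree-gimple output should contain no VIEW_CONVERT_EXPR around these calls, only plain bitwise assignments, which is what the scan-tree-dump-not "VIEW_CONVERT_EXPR" directives in pr92923-1.c and pr92923-2.c check automatically.

/* repro-pr92923.c -- illustrative sketch only, not part of this commit.
   Suggested build: gcc -S -mvsx -O2 -fdump-tree-gimple repro-pr92923.c
   then inspect the generated .gimple dump for VIEW_CONVERT_EXPR.  */
#include <altivec.h>

vector unsigned long long
xor_ull (vector unsigned long long x, vector unsigned long long y)
{
  /* Previously lowered through the single V4SI xor builtin with
     VIEW_CONVERT_EXPRs on both operands and the result; with the per-mode
     builtins it should fold directly to a bitwise xor on the V2DI operands.  */
  return vec_xor (x, y);
}

vector unsigned long long
and_ull (vector unsigned long long x, vector unsigned long long y)
{
  /* Likewise, expected to fold directly to a bitwise and.  */
  return vec_and (x, y);
}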