SVE intrinsics: Add fold_active_lanes_to method to refactor svmul and svdiv.
author Jennifer Schmitz <jschmitz@nvidia.com>
Fri, 27 Sep 2024 15:02:53 +0000 (08:02 -0700)
committer Jennifer Schmitz <jschmitz@nvidia.com>
Fri, 18 Oct 2024 13:12:47 +0000 (15:12 +0200)
commit e69c2e212011f2bfa6f8c3748d902690b7a3639a
tree f4e01c9bcca2477fe0e03593a397287cba4e3a7b
parent 94b95f7a3f188bcfcf45beeef9c472248b1810ef
SVE intrinsics: Add fold_active_lanes_to method to refactor svmul and svdiv.

As suggested in
https://gcc.gnu.org/pipermail/gcc-patches/2024-September/663275.html,
this patch adds the method gimple_folder::fold_active_lanes_to (tree X).
The method folds the active lanes to X and sets the inactive lanes
according to the predication, returning a new gimple statement. This
makes folding SVE intrinsics easier and reduces code duplication in the
svxxx_impl::fold implementations; a source-level sketch follows.
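
In source-level terms, the fold replaces the intrinsic call with its
remaining data argument X and, where the predication requires it, a
select that supplies the inactive lanes. A minimal ACLE sketch of the
three predication cases (illustrative only; the helper functions below
are hypothetical and not part of the patch):

    #include <arm_sve.h>

    /* _x: inactive lanes are "don't care", so the call folds to X.  */
    svint32_t fold_for_x (svbool_t pg, svint32_t x)
    {
      return x;
    }

    /* _m: inactive lanes keep the first data argument; if that
       argument is X itself, the call also folds to plain X.  */
    svint32_t fold_for_m (svbool_t pg, svint32_t x, svint32_t inactive)
    {
      return svsel_s32 (pg, x, inactive);
    }

    /* _z: inactive lanes are zero, i.e. a select against zero.  */
    svint32_t fold_for_z (svbool_t pg, svint32_t x)
    {
      return svsel_s32 (pg, x, svdup_s32 (0));
    }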
Using this new method, svdiv_impl::fold and svmul_impl::fold were refactored.
Additionally, the method was used for two optimizations (sketched below):
1) fold svdiv to the dividend if the divisor is all ones, and
2) fold svmul to the other operand if one of the operands is all ones.
Both optimizations were previously applied at the RTL level for _x and _m
predication, but not for _z, where svdiv/svmul were still emitted.
For both optimizations, this patch improves codegen, for example by
skipping sel instructions with all-same operands and by replacing sel
instructions with mov instructions.
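
A hedged sketch of the kind of source patterns affected (types chosen
arbitrarily; the actual coverage is in the testsuite changes listed
below):

    #include <arm_sve.h>

    /* 1) Division by an all-ones divisor folds to the dividend; with
       _z predication, this becomes a select against zero instead of a
       predicated sdiv.  */
    svint32_t div_by_one (svbool_t pg, svint32_t x)
    {
      return svdiv_z (pg, x, svdup_s32 (1));   /* ~ svsel (pg, x, 0) */
    }

    /* 2) Multiplication by an all-ones operand folds to the other
       operand.  */
    svuint64_t mul_by_one (svbool_t pg, svuint64_t x)
    {
      return svmul_z (pg, svdup_u64 (1), x);   /* ~ svsel (pg, x, 0) */
    }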

The patch was bootstrapped and regtested on aarch64-linux-gnu with no regressions.
OK for mainline?

Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
gcc/
* config/aarch64/aarch64-sve-builtins-base.cc (svdiv_impl::fold):
Refactor using fold_active_lanes_to and fold to the dividend if the
divisor is all ones.
(svmul_impl::fold): Refactor using fold_active_lanes_to and fold
to the other operand, if one of the operands is all ones.
* config/aarch64/aarch64-sve-builtins.h: Declare
gimple_folder::fold_active_lanes_to (tree).
* config/aarch64/aarch64-sve-builtins.cc
(gimple_folder::fold_active_lanes_to): Add new method to fold
active lanes to the given argument and set inactive lanes
according to the predication.

gcc/testsuite/
* gcc.target/aarch64/sve/acle/asm/div_s32.c: Adjust expected outcome.
* gcc.target/aarch64/sve/acle/asm/div_s64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/div_u32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/div_u64.c: Likewise.
* gcc.target/aarch64/sve/fold_div_zero.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_s16.c: New test.
* gcc.target/aarch64/sve/acle/asm/mul_s32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_s64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_s8.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_u16.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_u32.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_u64.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/mul_u8.c: Likewise.
* gcc.target/aarch64/sve/mul_const_run.c: Likewise.
17 files changed:
gcc/config/aarch64/aarch64-sve-builtins-base.cc
gcc/config/aarch64/aarch64-sve-builtins.cc
gcc/config/aarch64/aarch64-sve-builtins.h
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s32.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_s64.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u32.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/div_u64.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c
gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c
gcc/testsuite/gcc.target/aarch64/sve/fold_div_zero.c
gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c