1. This patch optimizes the codegen of the following auto-vectorized code:
void foo (int32_t * __restrict a, int64_t * __restrict b, int64_t * __restrict c, int n)
{
  for (int i = 0; i < n; i++)
    c[i] = (int64_t) a[i] + b[i];
}
Combine the instruction sequence:
...
vsext.vf2
vadd.vv
...
into:
...
vwadd.wv
...
Since for the PLUS operation GCC prefers the following canonical RTL operand order when combining:

(plus: (sign_extend: ...)
       (reg: ...))

instead of:

(plus: (reg: ...)
       (sign_extend: ...))

which differs from the MINUS pattern (MINUS is not commutative, so its operands are never swapped),
I split the vwadd/vwsub patterns and add dedicated patterns for each.
2. This patch not only optimizes the case mentioned in (1) above, but also enhances the vwadd.vv/vwsub.vv
optimization for more complicated PLUS/MINUS code in which both operands need widening; consider the kind of loop sketched below.
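As an illustration (a sketch only; the function name and exact form are assumptions, not necessarily the testcases added by this patch), both operands are narrower than the result and each needs its own extension:

void foo2 (int32_t * __restrict a, int32_t * __restrict b, int64_t * __restrict c, int n)
{
  for (int i = 0; i < n; i++)
    /* Illustrative only: both operands must be widened before the add.  */
    c[i] = (int64_t) a[i] + (int64_t) b[i];
}

Assuming codegen analogous to (1), the expectation is a single vwadd.vv instead of two vsext.vf2 plus a vadd.vv.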
The reason current upstream GCC cannot thoroughly optimize such code into vwadd.vv is that the combine pass
needs an intermediate RTL IR: it first extends one operand and matches the single-widen pattern (vwadd.wv), and only then,
based on that intermediate RTL IR, extends the other operand to generate vwadd.vv.
So the vwadd.wv/vwsub.wv patterns directly help the vwadd.vv/vwsub.vv code optimization.
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc: Change the vwadd.wv/vwsub.wv
intrinsic API expander.
* config/riscv/vector.md
(@pred_single_widen_<plus_minus:optab><any_extend:su><mode>): Remove it.
(@pred_single_widen_sub<any_extend:su><mode>): New pattern.
(@pred_single_widen_add<any_extend:su><mode>): New pattern.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/widen/widen-5.c: New test.
* gcc.target/riscv/rvv/autovec/widen/widen-6.c: New test.
* gcc.target/riscv/rvv/autovec/widen/widen-complicate-1.c: New test.
* gcc.target/riscv/rvv/autovec/widen/widen-complicate-2.c: New test.
* gcc.target/riscv/rvv/autovec/widen/widen_run-5.c: New test.
* gcc.target/riscv/rvv/autovec/widen/widen_run-6.c: New test.