git.ipfire.org Git - thirdparty/gcc.git/log
4 months agoUpdate Copyright year in ChangeLog files
Jakub Jelinek [Wed, 3 Jan 2024 10:35:18 +0000 (11:35 +0100)] 
Update Copyright year in ChangeLog files

2023 -> 2024

4 months agoRotate ChangeLog files.
Jakub Jelinek [Wed, 3 Jan 2024 10:28:42 +0000 (11:28 +0100)] 
Rotate ChangeLog files.

Rotate ChangeLog files for ChangeLogs with yearly cadence.

4 months agoLoongArch: Provide fmin/fmax RTL pattern for vectors
Xi Ruoyao [Sat, 30 Dec 2023 13:40:11 +0000 (21:40 +0800)] 
LoongArch: Provide fmin/fmax RTL pattern for vectors

We already had smin/smax RTL pattern using vfmin/vfmax instructions.
But for smin/smax, the behavior is unspecified if either operand
contains any NaNs.  So we would not vectorize the loop with
-fno-finite-math-only (the default for all optimization levels except
-Ofast).

But the LoongArch vfmin/vfmax instructions are IEEE-754-2008 conformant,
so we can also use them for fmin/fmax and vectorize the loop.

gcc/ChangeLog:

* config/loongarch/simd.md (fmax<mode>3): New define_insn.
(fmin<mode>3): Likewise.
(reduc_fmax_scal_<mode>3): New define_expand.
(reduc_fmin_scal_<mode>3): Likewise.

gcc/testsuite/ChangeLog:

* gcc.target/loongarch/vfmax-vfmin.c: New test.

4 months agoRISC-V: Make liveness be aware of rgroup number of LENS[dynamic LMUL]
Juzhe-Zhong [Tue, 2 Jan 2024 03:37:43 +0000 (11:37 +0800)] 
RISC-V: Make liveness be aware of rgroup number of LENS[dynamic LMUL]

This patch fixes the following situation:
vl4re16.v       v12,0(a5)
...
vl4re16.v       v16,0(a3)
vs4r.v  v12,0(a5)
...
vl4re16.v       v4,0(a0)
vs4r.v  v16,0(a3)
...
vsetvli a3,zero,e16,m4,ta,ma
...
vmv.v.x v8,t6
vmsgeu.vv       v2,v16,v8
vsub.vv v16,v16,v8
vs4r.v  v16,0(a5)
...
vs4r.v  v4,0(a0)
vmsgeu.vv       v1,v4,v8
...
vsub.vv v4,v4,v8
slli    a6,a4,2
vs4r.v  v4,0(a5)
...
vsub.vv v4,v12,v8
vmsgeu.vv       v3,v12,v8
vs4r.v  v4,0(a5)
...

There are many 'vs4r.v' spills.  The root cause is that we don't count
vector register liveness with reference to the rgroup controls.

_29 = _25->iatom[0]; is transformed into the following vect statements with 4 different loop_lens (loop_len_74, loop_len_75, loop_len_76, loop_len_77).

  vect__29.11_78 = .MASK_LEN_LOAD (vectp_sb.9_72, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_74, 0);
  vect__29.12_80 = .MASK_LEN_LOAD (vectp_sb.9_79, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_75, 0);
  vect__29.13_82 = .MASK_LEN_LOAD (vectp_sb.9_81, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_76, 0);
  vect__29.14_84 = .MASK_LEN_LOAD (vectp_sb.9_83, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_77, 0);

which are the LENS number (LOOP_VINFO_LENS (loop_vinfo).length ()).

Count liveness according to LOOP_VINFO_LENS (loop_vinfo).length () to compute liveness more accurately:

vsetivli zero,8,e16,m1,ta,ma
vmsgeu.vi v19,v14,8
vadd.vi v18,v14,-8
vmsgeu.vi v17,v1,8
vadd.vi v16,v1,-8
vlm.v v15,0(a5)
...
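A minimal model of the accounting change (hypothetical names, not the GCC code): the register-pressure estimate has to scale with the number of rgroup length controls, since each loop_len keeps its own register group live.

```c
/* Hypothetical sketch: estimate live vector registers for a statement
   split across 'rgroup_lens' length controls (e.g. the four
   loop_len_74..77 above), each holding its own register group live.  */
static unsigned
estimate_live_vregs (unsigned nregs_per_group, unsigned rgroup_lens)
{
  /* Before the fix the estimate effectively treated rgroup_lens as 1,
     undercounting pressure and so picking too large an LMUL.  */
  return nregs_per_group * rgroup_lens;
}
```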

Tested with no regression, ok for trunk?

PR target/113112

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Add rgroup info.
(max_number_of_live_regs): Ditto.
(has_unexpected_spills_p): Ditto.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/pr113112-5.c: New test.

4 months agolibstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175]
Patrick Palka [Wed, 3 Jan 2024 02:31:20 +0000 (21:31 -0500)] 
libstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175]

The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
inadvertently increased the execution time of this test by over 5x due
to making the two main loops actually run in the signed_p case instead
of being dead code.

To compensate, this patch cuts the relevant loops' range [-1000,1000] by
10x as proposed in the PR.  This shouldn't significantly weaken the test
since the same important edge cases are still checked in the smaller range
and/or elsewhere.  On my machine this reduces the test's execution time by
roughly 10x (and 1.6x relative to before r14-205).

PR testsuite/113175

libstdc++-v3/ChangeLog:

* testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
(test03): Likewise.

4 months agoDaily bump.
GCC Administrator [Wed, 3 Jan 2024 00:17:41 +0000 (00:17 +0000)] 
Daily bump.

4 months agoRISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns
Jun Sha (Joshua) [Fri, 29 Dec 2023 04:10:44 +0000 (12:10 +0800)] 
RISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns

This patch replaces csr_operand by vector_length_operand in the vsetvl
patterns.  This allows future changes in the vector code (i.e. in the
vector_length_operand predicate) without affecting scalar patterns that
use the csr_operand predicate.

gcc/ChangeLog:

* config/riscv/vector.md:
Use vector_length_operand for vsetvl patterns.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
4 months agolibsanitizer: Enable LSan and TSan for riscv64
Andreas Schwab [Mon, 18 Dec 2023 14:19:54 +0000 (15:19 +0100)] 
libsanitizer: Enable LSan and TSan for riscv64

libsanitizer:
* configure.tgt (riscv64-*-linux*): Enable LSan and TSan.

4 months agoaarch64: fortran: Adjust vect-8.f90 for libmvec
Szabolcs Nagy [Wed, 27 Dec 2023 11:12:23 +0000 (11:12 +0000)] 
aarch64: fortran: Adjust vect-8.f90 for libmvec

With a new glibc, one more loop can be vectorized via SIMD exp in libmvec.

Found by the Linaro TCWG CI.

gcc/testsuite/ChangeLog:

* gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.

4 months agoRISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern
Juzhe-Zhong [Tue, 2 Jan 2024 07:26:55 +0000 (15:26 +0800)] 
RISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern

In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx pattern with dummy len and dummy mask using an overly
simple solution, which causes redundant vsetvli in the following case:

vsetvli a5,a2,e8,m1,ta,ma
vle32.v v8,0(a0)
vsetivli zero,16,e32,m4,tu,mu   ----> We should apply VLMAX instead of a CONST_INT AVL
slli a4,a5,2
vand.vv v0,v8,v16
vand.vv v4,v8,v12
vmseq.vi v0,v0,0
sub a2,a2,a5
vneg.v v4,v8,v0.t
vsetvli zero,a5,e32,m4,ta,ma

The root cause above is the following codes:

is_vlmax_len_p (...)
   return poly_int_rtx_p (len, &value)
        && known_eq (value, GET_MODE_NUNITS (mode))
        && !satisfies_constraint_K (len);            ---> incorrect check.

Actually, we should not exclude the VLMAX situation whose AVL is in the range [0,31].
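A simplified model of the corrected predicate (plain integers instead of GCC's rtx/poly_int types, so this is a sketch, not the source): VLMAX is recognized purely by len == NUNITS, without the constraint-K exclusion.

```c
#include <stdbool.h>

/* Simplified sketch: the old version additionally required
   !(0 <= len && len <= 31) (the 'K' constraint), wrongly rejecting
   VLMAX whenever NUNITS happened to fit in [0,31].  */
static bool
is_vlmax_len_p (long len, long mode_nunits)
{
  return len == mode_nunits;
}
```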

After removing the check above, we will have the following issue:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

Since all the following operations (vfadd.vf ... etc.) are COND_LEN_xxx with dummy len and dummy mask,
we also simplify dummy len and dummy mask into the VLMAX TA and MA policy.

So after this patch, both cases have optimal codegen:

case 1:
vsetvli a5,a2,e32,m1,ta,mu
vle32.v v2,0(a0)
slli a4,a5,2
vand.vv v1,v2,v3
vand.vv v0,v2,v4
sub a2,a2,a5
vmseq.vi v0,v0,0
vneg.v v1,v2,v0.t
vse32.v v1,0(a1)

case 2:
vsetivli zero,4,e32,m1,tu,ma
addi a4,a5,400
vlseg4e32.v v12,(a3)
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vlseg4e32.v v4,(a4)
vfadd.vf v2,v14,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5

This patch is just an additional fix for the previously approved patch.
Tested on both RV32 and RV64 newlib with no regression.  Committed.

gcc/ChangeLog:

* config/riscv/riscv-v.cc (is_vlmax_len_p): Remove satisfies_constraint_K.
(expand_cond_len_op): Add simplification of dummy len and dummy mask.

gcc/testsuite/ChangeLog:

* gcc.target/riscv/rvv/base/vf_avl-3.c: New test.

4 months agoaarch64: add 'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA'
Di Zhao [Tue, 2 Jan 2024 04:35:03 +0000 (12:35 +0800)] 
aarch64: add 'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA'

This patch adds a new tuning option
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully
pipelined FMAs in reassociation. Also, set this option by default
for Ampere CPUs.
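Going by the ChangeLog entries below, the flag definition and its use presumably look like the existing entries (a sketch, not the verbatim patch; SET_OPTION_IF_UNSET usage is assumed):

```c
/* aarch64-tuning-flags.def (sketch):  */
AARCH64_EXTRA_TUNING_OPTION ("fully_pipelined_fma", FULLY_PIPELINED_FMA)

/* aarch64.cc, aarch64_override_options_internal (sketch): let the
   tuning flag seed the reassociation parameter unless the user set it.  */
if (tune_params->extra_tuning_flags
    & AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA)
  SET_OPTION_IF_UNSET (opts, &global_options_set,
                       param_fully_pipelined_fma, 1);
```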

gcc/ChangeLog:

* config/aarch64/aarch64-tuning-flags.def
(AARCH64_EXTRA_TUNING_OPTION): New tuning option
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
* config/aarch64/aarch64.cc
(aarch64_override_options_internal): Set
param_fully_pipelined_fma according to tuning option.
* config/aarch64/tuning_models/ampere1.h: Add
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
* config/aarch64/tuning_models/ampere1a.h: Likewise.
* config/aarch64/tuning_models/ampere1b.h: Likewise.

4 months agoRISC-V: Modify copyright year of vector-crypto.md
Feng Wang [Tue, 2 Jan 2024 02:19:49 +0000 (02:19 +0000)] 
RISC-V: Modify copyright year of vector-crypto.md

gcc/ChangeLog:
* config/riscv/vector-crypto.md: Modify copyright year.

4 months agoRISC-V: Declare STMT_VINFO_TYPE (...) as local variable
Juzhe-Zhong [Tue, 2 Jan 2024 01:52:04 +0000 (09:52 +0800)] 
RISC-V: Declare STMT_VINFO_TYPE (...) as local variable

Committed.

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...) to local.

4 months agoLoongArch: Added TLS Le Relax support.
Lulu Cheng [Tue, 12 Dec 2023 08:32:31 +0000 (16:32 +0800)] 
LoongArch: Added TLS Le Relax support.

Check whether the assembler supports TLS LE relax.  If it does, the TLS LE
relax instruction sequence is generated by default.

The original way to obtain the tls le symbol address:
    lu12i.w $rd, %le_hi20(sym)
    ori $rd, $rd, %le_lo12(sym)
    add.{w/d} $rd, $rd, $tp

If the assembler supports tls le relax, the following sequence is generated:

    lu12i.w $rd, %le_hi20_r(sym)
    add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
    addi.{w/d} $rd,$rd,%le_lo12_r(sym)

gcc/ChangeLog:

* config.in: Regenerate.
* config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION): Define.
* config/loongarch/loongarch.cc (loongarch_legitimize_tls_address):
Added TLS Le Relax support.
(loongarch_print_operand_reloc): Add the output string of TLS Le Relax.
* config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New template.
* configure: Regenerate.
* configure.ac: Check if binutils supports TLS le relax.

gcc/testsuite/ChangeLog:

* lib/target-supports.exp: Add a function to check whether binutils
supports TLS Le Relax.
* gcc.target/loongarch/tls-le-relax.c: New test.

4 months agoRISC-V: Add crypto machine descriptions
Feng Wang [Fri, 22 Dec 2023 01:59:36 +0000 (01:59 +0000)] 
RISC-V: Add crypto machine descriptions

Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
gcc/ChangeLog:

* config/riscv/iterators.md: Add rotate insn name.
* config/riscv/riscv.md: Add new insns name for crypto vector.
* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
* config/riscv/vector.md: Add the corresponding attr for crypto vector.
* config/riscv/vector-crypto.md: New file.  The machine descriptions for crypto vector.

4 months agoRISC-V: Count pointer type SSA into RVV regs liveness for dynamic LMUL cost model
Juzhe-Zhong [Fri, 29 Dec 2023 01:21:02 +0000 (09:21 +0800)] 
RISC-V: Count pointer type SSA into RVV regs liveness for dynamic LMUL cost model

This patch fixes the following case of choosing an unexpectedly big LMUL, which causes register spills.

Before this patch, choosing LMUL = 4:

addi sp,sp,-160
addiw t1,a2,-1
li a5,7
bleu t1,a5,.L16
vsetivli zero,8,e64,m4,ta,ma
vmv.v.x v4,a0
vs4r.v v4,0(sp)                        ---> spill to the stack.
vmv.v.x v4,a1
addi a5,sp,64
vs4r.v v4,0(a5)                        ---> spill to the stack.

The root cause is the following codes:

                  if (poly_int_tree_p (var)
                      || (is_gimple_val (var)
                         && !POINTER_TYPE_P (TREE_TYPE (var))))

We count the variable as consuming an RVV reg group when it is not POINTER_TYPE.

It is right for load/store STMT for example:

_1 = (MEM)*addr -->  addr won't be allocated an RVV vector group.

However, we find it is not right for non-load/store STMT:

_3 = _1 == x_8(D);

_1 is pointer type too, but we do allocate an RVV register group for it.
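The corrected rule, as a simplified standalone model (hypothetical helper, not the GCC code):

```c
#include <stdbool.h>

/* Sketch: a pointer-typed SSA name consumes an RVV register group
   unless it is merely the address of a load/store (like 'addr' in
   _1 = *addr).  A pointer used as a value, as _1 in
   '_3 = _1 == x_8(D)', does occupy a group.  */
static bool
consumes_vreg_group (bool is_pointer_type, bool is_load_store_address)
{
  if (!is_pointer_type)
    return true;
  return !is_load_store_address;
}
```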

So after this patch, we are choosing the perfect LMUL for the testcase in this patch:

ble a2,zero,.L17
addiw a7,a2,-1
li a5,3
bleu a7,a5,.L15
srliw a5,a7,2
slli a6,a5,1
add a6,a6,a5
lui a5,%hi(replacements)
addi t1,a5,%lo(replacements)
slli a6,a6,5
lui t4,%hi(.LANCHOR0)
lui t3,%hi(.LANCHOR0+8)
lui a3,%hi(.LANCHOR0+16)
lui a4,%hi(.LC1)
vsetivli zero,4,e16,mf2,ta,ma
addi t4,t4,%lo(.LANCHOR0)
addi t3,t3,%lo(.LANCHOR0+8)
addi a3,a3,%lo(.LANCHOR0+16)
addi a4,a4,%lo(.LC1)
add a6,t1,a6
addi a5,a5,%lo(replacements)
vle16.v v18,0(t4)
vle16.v v17,0(t3)
vle16.v v16,0(a3)
vmsgeu.vi v25,v18,4
vadd.vi v24,v18,-4
vmsgeu.vi v23,v17,4
vadd.vi v22,v17,-4
vlm.v v21,0(a4)
vmsgeu.vi v20,v16,4
vadd.vi v19,v16,-4
vsetvli zero,zero,e64,m2,ta,mu
vmv.v.x v12,a0
vmv.v.x v14,a1
.L4:
vlseg3e64.v v6,(a5)
vmseq.vv v2,v6,v12
vmseq.vv v0,v8,v12
vmsne.vv v1,v8,v12
vmand.mm v1,v1,v2
vmerge.vvm v2,v8,v14,v0
vmv1r.v v0,v1
addi a4,a5,24
vmerge.vvm v6,v6,v14,v0
vmerge.vim v2,v2,0,v0
vrgatherei16.vv v4,v6,v18
vmv1r.v v0,v25
vrgatherei16.vv v4,v2,v24,v0.t
vs1r.v v4,0(a5)
addi a3,a5,48
vmv1r.v v0,v21
vmv2r.v v4,v2
vcompress.vm v4,v6,v0
vs1r.v v4,0(a4)
vmv1r.v v0,v23
addi a4,a5,72
vrgatherei16.vv v4,v6,v17
vrgatherei16.vv v4,v2,v22,v0.t
vs1r.v v4,0(a3)
vmv1r.v v0,v20
vrgatherei16.vv v4,v6,v16
addi a5,a5,96
vrgatherei16.vv v4,v2,v19,v0.t
vs1r.v v4,0(a4)
bne a6,a5,.L4

No spillings, no "sp" register used.

Tested on both RV32 and RV64, no regression.

Ok for trunk?

PR target/113112

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Fix
pointer type liveness count.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/pr113112-4.c: New test.

4 months agoDaily bump.
GCC Administrator [Tue, 2 Jan 2024 00:18:25 +0000 (00:18 +0000)] 
Daily bump.

4 months agoDaily bump.
GCC Administrator [Mon, 1 Jan 2024 00:18:40 +0000 (00:18 +0000)] 
Daily bump.

4 months agoi386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c
Roger Sayle [Sun, 31 Dec 2023 21:37:24 +0000 (21:37 +0000)] 
i386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c

This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, that started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.

The function:

unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
  return x+y;
}

currently generates:

foo:    movq    %rdx, %rcx
        movq    %rdi, %rax
        movq    %rsi, %rdx
        addq    %rcx, %rax
        adcq    $0, %rdx
        ret

and with this patch, we now generate:

foo: movq    %rdi, %rax
        addq    %rdx, %rax
        movq    %rsi, %rdx
        adcq    $0, %rdx

which is optimal.

2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
    Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
PR target/43644
* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
order of instructions after split, to minimize number of moves.

gcc/testsuite/ChangeLog
PR target/43644
* gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.

4 months agolibstdc++ testsuite/20_util/hash/quality.cc: Increase timeout 3x
Hans-Peter Nilsson [Fri, 29 Dec 2023 22:06:45 +0000 (23:06 +0100)] 
libstdc++ testsuite/20_util/hash/quality.cc: Increase timeout 3x

Testing for mmix (a 64-bit target using Knuth's simulator).  The test
is largely pruned for simulators, but still needs 5m57s on my laptop
from 3.5 years ago to run to successful completion.  Perhaps slow
hosted targets could also have problems so increasing the timeout
limit, not just for simulators but for everyone, and by more than a
factor 2.

* testsuite/20_util/hash/quality.cc: Increase timeout by a factor 3.

4 months agolibstdc++: [_Hashtable] Extend the small size optimization
François Dumont [Wed, 27 Sep 2023 04:53:51 +0000 (06:53 +0200)] 
libstdc++: [_Hashtable] Extend the small size optimization

A number of methods were still not using the small size optimization, which
is to prefer an O(N) linear search to a hash computation as long as N is small.
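The idea, as a standalone sketch (not the _Hashtable code; the threshold and names are invented):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// For small N, a linear scan with operator== avoids paying for the
// hash computation; larger containers fall back to hashed lookup.
template <typename Key>
const Key*
small_find(const std::vector<Key>& elems, const Key& key)
{
    constexpr std::size_t small_threshold = 20;  // invented value
    if (elems.size() <= small_threshold)
    {
        for (const auto& e : elems)   // O(N), no hash computed
            if (e == key)
                return &e;
        return nullptr;
    }
    return nullptr;  // hashed path omitted in this sketch
}
```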

libstdc++-v3/ChangeLog:

* include/bits/hashtable.h: Move comment about all equivalent values
being next to each other in the class documentation header.
(_M_reinsert_node, _M_merge_unique): Implement small size optimization.
(_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.

4 months agolibstdc++: [_Hashtable] Enhance performance benches
François Dumont [Wed, 3 May 2023 04:38:16 +0000 (06:38 +0200)] 
libstdc++: [_Hashtable] Enhance performance benches

Add benches on insert with hint and before begin cache.

libstdc++-v3/ChangeLog:

* testsuite/performance/23_containers/insert/54075.cc: Add lookup on unknown entries
w/o copy to see potential impact of memory fragmentation enhancements.
* testsuite/performance/23_containers/insert/unordered_multiset_hint.cc: Enhance hash
functor to make it perfect, exactly 1 entry per bucket. Also use hash functor tagged as
slow or not to bench w/o hash code cache.
* testsuite/performance/23_containers/insert/unordered_set_hint.cc: New test case. Like
previous one but using std::unordered_set.
* testsuite/performance/23_containers/insert/unordered_set_range_insert.cc: New test case.
Check performance of range-insertion compared to individual insertions.
* testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: Add same bench
but after a copy to demonstrate impact of enhancements regarding memory fragmentation.

4 months agoDaily bump.
GCC Administrator [Sun, 31 Dec 2023 00:16:46 +0000 (00:16 +0000)] 
Daily bump.

4 months agoC: Fix type compatibility for structs with variable sized fields.
Martin Uecker [Fri, 22 Dec 2023 16:32:34 +0000 (17:32 +0100)] 
C: Fix type compatibility for structs with variable sized fields.

This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88f
which fails for -march=... because the DECL_FIELD_BIT_OFFSET are set
inconsistently for types with and without variable-sized field.  This
is fixed by testing for DECL_ALIGN instead.  The code is further
simplified by removing some unnecessary conditions, i.e. anon_field is
set unconditionally and all fields are assumed to be DECL_FIELDs.

gcc/c:
* c-typeck.cc (tagged_types_tu_compatible_p): Revise.

gcc/testsuite:
* gcc.dg/c23-tag-9.c: New test.

4 months agoMAINTAINERS: Update my email address
Joseph Myers [Sat, 30 Dec 2023 00:27:32 +0000 (00:27 +0000)] 
MAINTAINERS: Update my email address

There will be another update in January.

* MAINTAINERS: Update my email address.

4 months agoDaily bump.
GCC Administrator [Sat, 30 Dec 2023 00:17:48 +0000 (00:17 +0000)] 
Daily bump.

4 months agoDisable FMADD in chains for Zen4 and generic
Jan Hubicka [Fri, 29 Dec 2023 22:51:03 +0000 (23:51 +0100)] 
Disable FMADD in chains for Zen4 and generic

This patch disables the use of FMA in the matrix multiplication loop for generic
(for x86-64-v3) and Zen4.  I tested this on Zen4 and a Xeon Gold 6212U.

For Intel this is neutral both on the matrix multiplication microbenchmark
(attached) and spec2k17 where the difference was within noise for Core.

On core the micro-benchmark runs as follows:

With FMA:

       578,500,241      cycles:u                         #    3.645 GHz
                ( +-  0.12% )
       753,318,477      instructions:u                   #    1.30  insn per
cycle              ( +-  0.00% )
       125,417,701      branches:u                       #  790.227 M/sec
                ( +-  0.00% )
          0.159146 +- 0.000363 seconds time elapsed  ( +-  0.23% )

No FMA:

       577,573,960      cycles:u                         #    3.514 GHz
                ( +-  0.15% )
       878,318,479      instructions:u                   #    1.52  insn per
cycle              ( +-  0.00% )
       125,417,702      branches:u                       #  763.035 M/sec
                ( +-  0.00% )
          0.164734 +- 0.000321 seconds time elapsed  ( +-  0.19% )

So the cycle count is unchanged and discrete multiply+add takes same time as
FMA.

While on zen:

With FMA:
         484875179      cycles:u                         #    3.599 GHz
             ( +-  0.05% )  (82.11%)
         752031517      instructions:u                   #    1.55  insn per
cycle
         125106525      branches:u                       #  928.712 M/sec
             ( +-  0.03% )  (85.09%)
            128356      branch-misses:u                  #    0.10% of all
branches          ( +-  0.06% )  (83.58%)

No FMA:
         375875209      cycles:u                         #    3.592 GHz
             ( +-  0.08% )  (80.74%)
         875725341      instructions:u                   #    2.33  insn per
cycle
         124903825      branches:u                       #    1.194 G/sec
             ( +-  0.04% )  (84.59%)
          0.105203 +- 0.000188 seconds time elapsed  ( +-  0.18% )

The difference is that Core understands that fmadd does not need
all three operands to start the computation, while Zen cores don't.

Since this seems a noticeable win on Zen and no loss on Core, it seems like a good
default for generic.

/* Includes added for completeness; SIZE is assumed to be supplied at
   compile time, e.g. -DSIZE=1000.  */
#include <stdio.h>
#include <time.h>

float a[SIZE][SIZE];
float b[SIZE][SIZE];
float c[SIZE][SIZE];

void init(void)
{
   int i, j, k;
   for(i=0; i<SIZE; ++i)
   {
      for(j=0; j<SIZE; ++j)
      {
         a[i][j] = (float)i + j;
         b[i][j] = (float)i - j;
         c[i][j] = 0.0f;
      }
   }
}

void mult(void)
{
   int i, j, k;

   for(i=0; i<SIZE; ++i)
   {
      for(j=0; j<SIZE; ++j)
      {
         for(k=0; k<SIZE; ++k)
         {
            c[i][j] += a[i][k] * b[k][j];
         }
      }
   }
}

int main(void)
{
   clock_t s, e;

   init();
   s=clock();
   mult();
   e=clock();
   printf("        mult took %10d clocks\n", (int)(e-s));

   return 0;

}

gcc/ChangeLog:

* config/i386/x86-tune.def (X86_TUNE_AVOID_128FMA_CHAINS,
X86_TUNE_AVOID_256FMA_CHAINS): Enable for znver4 and Core.

4 months agoAArch64: Update costing for vector conversions [PR110625]
Tamar Christina [Fri, 29 Dec 2023 15:58:29 +0000 (15:58 +0000)] 
AArch64: Update costing for vector conversions [PR110625]

In gimple the operation

short _8;
double _9;
_9 = (double) _8;

denotes two operations on AArch64.  First we have to widen from short to
long and then convert this integer to a double.

Currently however we only count the widen/truncate operations:

(double) _5 6 times vec_promote_demote costs 12 in body
(double) _5 12 times vec_promote_demote costs 24 in body

but not the actual conversion operation, which needs an additional 12
instructions in the attached testcase.   Without this the attached testcase ends
up incorrectly thinking that it's beneficial to vectorize the loop at a very
high VF = 8 (4x unrolled).

Because we can't change the mid-end to account for this, the costing code in the
backend now keeps track of whether the previous operation was a
promotion/demotion and adjusts the expected number of instructions as follows:

1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are
   different, double it, since we need to convert and promote.
2. If the previous operation was a demotion/promotion, then reduce the
   cost of the current operation by the amount we added extra in the last one.

with the patch we get:

(double) _5 6 times vec_promote_demote costs 24 in body
(double) _5 12 times vec_promote_demote costs 36 in body

which correctly accounts for 30 operations.

This fixes the 16% regression in imagick in SPECCPU 2017 reported on Neoverse N2
and using the new generic Armv9-a cost model.
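The shape of the affected loop is roughly this (a sketch, not the testcase from the patch):

```c
/* Each (double) conversion of a short element costs both a widening
   step (short -> long) and an integer -> double convert on AArch64;
   the costing previously counted only the widening.  */
void
widen_convert (double *restrict out, const short *restrict in, int n)
{
  for (int i = 0; i < n; i++)
    out[i] = (double) in[i];
}
```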

gcc/ChangeLog:

PR target/110625
* config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
Adjust throughput and latency calculations for vector conversions.
(class aarch64_vector_costs): Add m_num_last_promote_demote.

gcc/testsuite/ChangeLog:

PR target/110625
* gcc.target/aarch64/pr110625_4.c: New test.
* gcc.target/aarch64/sve/unpack_fcvt_signed_1.c: Add
--param aarch64-sve-compare-costs=0.
* gcc.target/aarch64/sve/unpack_fcvt_unsigned_1.c: Likewise.

4 months agoLoongArch: Fix the format of bstrins_<mode>_for_ior_mask condition (NFC)
Xi Ruoyao [Fri, 29 Dec 2023 12:04:34 +0000 (20:04 +0800)] 
LoongArch: Fix the format of bstrins_<mode>_for_ior_mask condition (NFC)

gcc/ChangeLog:

* config/loongarch/loongarch.md (bstrins_<mode>_for_ior_mask):
For the condition, remove unneeded trailing "\" and move "&&" to
follow GNU coding style.  NFC.

4 months agoLoongArch: Replace -mexplicit-relocs=auto simple-used address peephole2 with combine
Xi Ruoyao [Mon, 11 Dec 2023 20:54:21 +0000 (04:54 +0800)] 
LoongArch: Replace -mexplicit-relocs=auto simple-used address peephole2 with combine

The problem with peephole2 is it uses a naive sliding-window algorithm
and misses many cases.  For example:

    float a[10000];
    float t() { return a[0] + a[8000]; }

is compiled to:

    la.local    $r13,a
    la.local    $r12,a+32768
    fld.s       $f1,$r13,0
    fld.s       $f0,$r12,-768
    fadd.s      $f0,$f1,$f0

by trunk.  But as we've explained in r14-4851, the following would be
better with -mexplicit-relocs=auto:

    pcalau12i   $r13,%pc_hi20(a)
    pcalau12i   $r12,%pc_hi20(a+32000)
    fld.s       $f1,$r13,%pc_lo12(a)
    fld.s       $f0,$r12,%pc_lo12(a+32000)
    fadd.s      $f0,$f1,$f0

However the sliding-window algorithm just won't detect the pcalau12i/fld
pair to be optimized.  Use a define_insn_and_rewrite in combine pass
will work around the issue.

gcc/ChangeLog:

* config/loongarch/predicates.md
(symbolic_pcrel_offset_operand): New define_predicate.
(mem_simple_ldst_operand): Likewise.
* config/loongarch/loongarch-protos.h
(loongarch_rewrite_mem_for_simple_ldst): Declare.
* config/loongarch/loongarch.cc
(loongarch_rewrite_mem_for_simple_ldst): Implement.
* config/loongarch/loongarch.md (simple_load<mode>): New
define_insn_and_rewrite.
(simple_load_<su>ext<SUBDI:mode><GPR:mode>): Likewise.
(simple_store<mode>): Likewise.
(define_peephole2): Remove la.local/[f]ld peepholes.

gcc/testsuite/ChangeLog:

* gcc.target/loongarch/explicit-relocs-auto-single-load-store-2.c:
New test.
* gcc.target/loongarch/explicit-relocs-auto-single-load-store-3.c:
New test.

4 months agoi386: Fix TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter [PR113133]
Uros Bizjak [Fri, 29 Dec 2023 08:47:43 +0000 (09:47 +0100)] 
i386: Fix TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter [PR113133]

The post-reload splitter currently allows xmm16+ registers with TARGET_EVEX512.
The splitter changes SFmode of the output operand to V4SFmode, but the vector
mode is currently unsupported in xmm16+ without TARGET_AVX512VL. lowpart_subreg
returns NULL_RTX in this case and the compilation fails with invalid RTX.

The patch removes support for x/ymm16+ registers with TARGET_EVEX512.  The
support should be restored once ix86_hard_regno_mode_ok is fixed to allow
16-byte modes in x/ymm16+ with TARGET_EVEX512.

PR target/113133

gcc/ChangeLog:

* config/i386/i386.md
(TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter):
Do not handle xmm16+ with TARGET_EVEX512.

gcc/testsuite/ChangeLog:

* gcc.target/i386/pr113133-1.c: New test.
* gcc.target/i386/pr113133-2.c: New test.

4 months agoFix gen-vect-26.c testcase after loops with multiple exits [PR113167]
Andrew Pinski [Fri, 29 Dec 2023 04:26:01 +0000 (20:26 -0800)] 
Fix gen-vect-26.c testcase after loops with multiple exits [PR113167]

This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
`#pragma GCC novector` in front of the loop that checks the result.
We only want to test whether the first loop can be vectorized.
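The pattern looks like this (a sketch of the testcase shape, not the actual file):

```c
#define N 16
int a[N], b[N], c[N];

/* The first loop is the vectorization candidate; the checking loop
   is marked novector so the dump scan only sees one vectorized loop.  */
int
compute_and_check (void)
{
  for (int i = 0; i < N; i++)
    c[i] = a[i] + b[i];

  int mismatches = 0;
#pragma GCC novector
  for (int i = 0; i < N; i++)
    if (c[i] != a[i] + b[i])
      mismatches++;
  return mismatches;
}
```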

Committed as obvious after testing on x86_64-linux-gnu with -m32.

gcc/testsuite/ChangeLog:

PR testsuite/113167
* gcc.dg/tree-ssa/gen-vect-26.c: Mark the test/check loop
as novector.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
4 months agoRISC-V: Robustify testcase pr113112-1.c
Juzhe-Zhong [Fri, 29 Dec 2023 01:39:36 +0000 (09:39 +0800)] 
RISC-V: Robustify testcase pr113112-1.c

The redundant dump check is fragile, easily changed, and not necessary.

Tested on both RV32/RV64 no regression.

Remove it and committed.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Remove redundant checks.

4 months agoRISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in...
Juzhe-Zhong [Wed, 27 Dec 2023 02:38:26 +0000 (10:38 +0800)] 
RISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in range [0, 31]

Notice we have this following situation:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when len == NUNITS.
However, we don't need to transform all of them, since when len is in the range [0,31] we
don't need to consume scalar registers.
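A tiny model of the distinction (invented name): lengths that fit vsetivli's 5-bit unsigned immediate can stay as constant AVLs.

```c
#include <stdbool.h>

/* Sketch: an AVL in [0,31] fits the vsetivli immediate, so keeping
   the constant avoids burning a scalar register on a VLMAX AVL.  */
static bool
keep_const_avl_p (long len)
{
  return len >= 0 && len <= 31;
}
```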

After this patch:

vsetivli zero,4,e32,m1,tu,ma
addi a4,a5,400
vlseg4e32.v v12,(a3)
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vlseg4e32.v v4,(a4)
vfadd.vf v2,v14,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5

Tested on both RV32 and RV64 no regression.

Ok for trunk ?

gcc/ChangeLog:

* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
(expand_load_store): Disallow transformation into VLMAX when len is in the range [0,31].
(expand_cond_len_op): Ditto.
(expand_gather_scatter): Ditto.
(expand_lanes_load_store): Ditto.
(expand_fold_extract_last): Ditto.

gcc/testsuite/ChangeLog:

* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.

4 months agoDaily bump.
GCC Administrator [Fri, 29 Dec 2023 00:17:56 +0000 (00:17 +0000)] 
Daily bump.

4 months agoFortran: Add Developer Options mini-section to documentation
Rimvydas Jasinskas [Sat, 23 Dec 2023 18:59:09 +0000 (18:59 +0000)] 
Fortran: Add Developer Options mini-section to documentation

Separate out -fdump-* options to the new section.  Sort by option name.

While there, document -save-temps intermediates.

gcc/fortran/ChangeLog:

PR fortran/81615
* invoke.texi: Add Developer Options section.  Move '-fdump-*'
to it.  Add small examples about changed '-save-temps' behavior.

Signed-off-by: Rimvydas Jasinskas <rimvydas.jas@gmail.com>
4 months agotestsuite: XFAIL linkage testcases on AIX.
David Edelsohn [Thu, 28 Dec 2023 19:42:14 +0000 (14:42 -0500)] 
testsuite: XFAIL linkage testcases on AIX.

The template linkage2.C and linkage3.C testcases expect a
decoration that does not match AIX assembler syntax.  Expect failure.

gcc/testsuite/ChangeLog:
* g++.dg/template/linkage2.C: XFAIL on AIX.
* g++.dg/template/linkage3.C: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agoi386: Cleanup ix86_expand_{unary|binary}_operator issues
Uros Bizjak [Thu, 28 Dec 2023 11:31:30 +0000 (12:31 +0100)] 
i386: Cleanup ix86_expand_{unary|binary}_operator issues

Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.

No functional changes.

gcc/ChangeLog:

* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
and ix86_expand_{unary|binary}_operator prototypes.
* config/i386/i386.md: Cosmetic changes with the usage of
TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
and ix86_{unary|binary}_operator_ok function calls.

4 months agoRISC-V: Make dynamic LMUL cost model more accurate for conversion codes
Juzhe-Zhong [Thu, 28 Dec 2023 01:33:32 +0000 (09:33 +0800)] 
RISC-V: Make dynamic LMUL cost model more accurate for conversion codes

The current dynamic LMUL model is not accurate for conversion codes.
Refine it; with this change, the case below switches from choosing LMUL = 4 to LMUL = 8.

Tested with no regressions, committed.

Before this patch (LMUL = 4):                  After this patch (LMUL = 8):
        lw      a7,56(sp)                             lw a7,56(sp)
        ld      t5,0(sp)                              ld t5,0(sp)
        ld      t1,8(sp)                              ld t1,8(sp)
        ld      t6,16(sp)                             ld t6,16(sp)
        ld      t0,24(sp)                             ld t0,24(sp)
        ld      t3,32(sp)                             ld t3,32(sp)
        ld      t4,40(sp)                             ld t4,40(sp)
        ble     a7,zero,.L5                           ble a7,zero,.L5
.L3:                                               .L3:
        vsetvli a4,a7,e32,m2,ta,ma                    vsetvli a4,a7,e32,m4,ta
        vle8.v  v1,0(a2)                              vle8.v v3,0(a2)
        vle8.v  v4,0(a1)                              vle8.v v16,0(t0)
        vsext.vf4       v8,v1                         vle8.v v7,0(a1)
        vsext.vf4       v2,v4                         vle8.v v12,0(t6)
        vsetvli zero,zero,e8,mf2,ta,ma                vle8.v v2,0(a5)
        vadd.vv v4,v4,v1                              vle8.v v1,0(t5)
        vsetvli zero,zero,e32,m2,ta,ma                vsext.vf4 v20,v3
        vle8.v  v5,0(t0)                              vsext.vf4 v8,v7
        vle8.v  v6,0(t6)                              vadd.vv v8,v8,v20
        vadd.vv v2,v2,v8                              vadd.vv v8,v8,v8
        vadd.vv v2,v2,v2                              vadd.vv v8,v8,v20
        vadd.vv v2,v2,v8                              vsetvli zero,zero,e8,m1
        vsetvli zero,zero,e8,mf2,ta,ma                vadd.vv v15,v12,v16
        vadd.vv v6,v6,v5                              vsetvli zero,zero,e32,m4
        vsetvli zero,zero,e32,m2,ta,ma                vsext.vf4 v12,v15
        vle8.v  v8,0(t5)                              vadd.vv v8,v8,v12
        vle8.v  v9,0(a5)                              vsetvli zero,zero,e8,m1
        vsext.vf4       v10,v4                        vadd.vv v7,v7,v3
        vsext.vf4       v12,v6                        vsetvli zero,zero,e32,m4
        vadd.vv v2,v2,v12                             vsext.vf4 v4,v7
        vadd.vv v2,v2,v10                             vadd.vv v8,v8,v4
        vsetvli zero,zero,e16,m1,ta,ma                vsetvli zero,zero,e16,m2
        vncvt.x.x.w     v4,v2                         vncvt.x.x.w v4,v8
        vsetvli zero,zero,e32,m2,ta,ma                vsetvli zero,zero,e8,m1
        vadd.vv v6,v2,v2                              vncvt.x.x.w v4,v4
        vsetvli zero,zero,e8,mf2,ta,ma                vadd.vv v15,v3,v4
        vncvt.x.x.w     v4,v4                         vadd.vv v2,v2,v4
        vadd.vv v5,v5,v4                              vse8.v v15,0(t4)
        vadd.vv v9,v9,v4                              vadd.vv v3,v16,v4
        vadd.vv v1,v1,v4                              vse8.v v2,0(a3)
        vadd.vv v4,v8,v4                              vadd.vv v1,v1,v4
        vse8.v  v1,0(t4)                              vse8.v v1,0(a6)
        vse8.v  v9,0(a3)                              vse8.v v3,0(t1)
        vsetvli zero,zero,e32,m2,ta,ma                vsetvli zero,zero,e32,m4
        vse8.v  v4,0(a6)                              vsext.vf4 v4,v3
        vsext.vf4       v8,v5                         vadd.vv v4,v4,v8
        vse8.v  v5,0(t1)                              vsetvli zero,zero,e64,m8
        vadd.vv v2,v8,v2                              vsext.vf2 v16,v4
        vsetvli zero,zero,e64,m4,ta,ma                vse64.v v16,0(t3)
        vsext.vf2       v8,v2                         vsetvli zero,zero,e32,m4
        vsetvli zero,zero,e32,m2,ta,ma                vadd.vv v8,v8,v8
        slli    t2,a4,3                               vsext.vf4 v4,v15
        vse64.v v8,0(t3)                              slli t2,a4,3
        vsext.vf4       v2,v1                         vadd.vv v4,v8,v4
        sub     a7,a7,a4                              sub a7,a7,a4
        vadd.vv v2,v6,v2                              vsetvli zero,zero,e64,m8
        vsetvli zero,zero,e64,m4,ta,ma                vsext.vf2 v8,v4
        vsext.vf2       v4,v2                         vse64.v v8,0(a0)
        vse64.v v4,0(a0)                              add a1,a1,a4
        add     a2,a2,a4                              add a2,a2,a4
        add     a1,a1,a4                              add a5,a5,a4
        add     t6,t6,a4                              add t5,t5,a4
        add     t0,t0,a4                              add t6,t6,a4
        add     a5,a5,a4                              add t0,t0,a4
        add     t5,t5,a4                              add t4,t4,a4
        add     t4,t4,a4                              add a3,a3,a4
        add     a3,a3,a4                              add a6,a6,a4
        add     a6,a6,a4                              add t1,t1,a4
        add     t1,t1,a4                              add t3,t3,t2
        add     t3,t3,t2                              add a0,a0,t2
        add     a0,a0,t2                              bne a7,zero,.L3
        bne     a7,zero,.L3                         .L5:
.L5:                                                  ret
        ret

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): Change interface.
(get_live_range): New function.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Adapt test.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.

4 months agoDaily bump.
GCC Administrator [Thu, 28 Dec 2023 00:19:23 +0000 (00:19 +0000)] 
Daily bump.

4 months agoLoongArch: Fix infinite secondary reloading of FCCmode [PR113148]
Xi Ruoyao [Tue, 26 Dec 2023 20:28:56 +0000 (04:28 +0800)] 
LoongArch: Fix infinite secondary reloading of FCCmode [PR113148]

The GCC internal doc says:

     X might be a pseudo-register or a 'subreg' of a pseudo-register,
     which could either be in a hard register or in memory.  Use
     'true_regnum' to find out; it will return -1 if the pseudo is in
     memory and the hard register number if it is in a register.

So "MEM_P (x)" is not enough to check whether we are reloading from/to
memory.  This bug caused the reload pass to stall and finally ICE
complaining with "maximum number of generated reload insns per insn
achieved", since r14-6814.

Check if "true_regnum (x)" is -1 besides "MEM_P (x)" to fix the issue.

gcc/ChangeLog:

PR target/113148
* config/loongarch/loongarch.cc (loongarch_secondary_reload):
Check if regno == -1 besides MEM_P (x) for reloading FCCmode
from/to FPR to/from memory.

gcc/testsuite/ChangeLog:

PR target/113148
* gcc.target/loongarch/pr113148.c: New test.

4 months agoLoongArch: Expand left rotate to right rotate with negated amount
Xi Ruoyao [Sat, 16 Dec 2023 21:38:20 +0000 (05:38 +0800)] 
LoongArch: Expand left rotate to right rotate with negated amount

gcc/ChangeLog:

* config/loongarch/loongarch.md (rotl<mode>3):
New define_expand.
* config/loongarch/simd.md (vrotl<mode>3): Likewise.
(rotl<mode>3): Likewise.

gcc/testsuite/ChangeLog:

* gcc.target/loongarch/rotl-with-rotr.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-d.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-d.c: New test.

4 months agoRISC-V: Make known NITERS loop be aware of dynamic lmul cost model liveness information
Juzhe-Zhong [Wed, 27 Dec 2023 08:16:41 +0000 (16:16 +0800)] 
RISC-V: Make known NITERS loop be aware of dynamic lmul cost model liveness information

Consider this following case:

int f[12][100];

void bad1(int v1, int v2)
{
  for (int r = 0; r < 100; r += 4)
    {
      int i = r + 1;
      f[0][r] = f[1][r] * (f[2][r]) - f[1][i] * (f[2][i]);
      f[0][i] = f[1][r] * (f[2][i]) + f[1][i] * (f[2][r]);
      f[0][r+2] = f[1][r+2] * (f[2][r+2]) - f[1][i+2] * (f[2][i+2]);
      f[0][i+2] = f[1][r+2] * (f[2][i+2]) + f[1][i+2] * (f[2][r+2]);
    }
}

Before this patch, LMUL = 8 (VLS) is picked blindly:

        lui     a4,%hi(f)
        addi    a4,a4,%lo(f)
        addi    sp,sp,-592
        addi    a3,a4,800
        lui     a5,%hi(.LANCHOR0)
        vl8re32.v       v24,0(a3)
        addi    a5,a5,%lo(.LANCHOR0)
        addi    a1,a4,400
        addi    a3,sp,140
        vl8re32.v       v16,0(a1)
        vl4re16.v       v4,0(a5)
        addi    a7,a5,192
        vs4r.v  v4,0(a3)
        addi    t0,a5,64
        addi    a3,sp,336
        li      t2,32
        addi    a2,a5,128
        vsetvli a5,zero,e32,m8,ta,ma
        vrgatherei16.vv v8,v16,v4
        vmul.vv v8,v8,v24
        vl8re32.v       v0,0(a7)
        vs8r.v  v8,0(a3)
        vmsltu.vx       v8,v0,t2
        addi    a3,sp,12
        addi    t2,sp,204
        vsm.v   v8,0(t2)
        vl4re16.v       v4,0(t0)
        vl4re16.v       v0,0(a2)
        vs4r.v  v4,0(a3)
        addi    t0,sp,336
        vrgatherei16.vv v8,v24,v4
        addi    a3,sp,208
        vrgatherei16.vv v24,v16,v0
        vs4r.v  v0,0(a3)
        vmul.vv v8,v8,v24
        vlm.v   v0,0(t2)
        vl8re32.v       v24,0(t0)
        addi    a3,sp,208
        vsub.vv v16,v24,v8
        addi    t6,a4,528
        vadd.vv v8,v24,v8
        addi    t5,a4,928
        vmerge.vvm      v8,v8,v16,v0
        addi    t3,a4,128
        vs8r.v  v8,0(a4)
        addi    t4,a4,1056
        addi    t1,a4,656
        addi    a0,a4,256
        addi    a6,a4,1184
        addi    a1,a4,784
        addi    a7,a4,384
        addi    a4,sp,140
        vl4re16.v       v0,0(a3)
        vl8re32.v       v24,0(t6)
        vl4re16.v       v4,0(a4)
        vrgatherei16.vv v16,v24,v0
        addi    a3,sp,12
        vs8r.v  v16,0(t0)
        vl8re32.v       v8,0(t5)
        vrgatherei16.vv v16,v24,v4
        vl4re16.v       v4,0(a3)
        vrgatherei16.vv v24,v8,v4
        vmul.vv v16,v16,v8
        vl8re32.v       v8,0(t0)
        vmul.vv v8,v8,v24
        vsub.vv v24,v16,v8
        vlm.v   v0,0(t2)
        addi    a3,sp,208
        vadd.vv v8,v8,v16
        vl8re32.v       v16,0(t4)
        vmerge.vvm      v8,v8,v24,v0
        vrgatherei16.vv v24,v16,v4
        vs8r.v  v24,0(t0)
        vl4re16.v       v28,0(a3)
        addi    a3,sp,464
        vs8r.v  v8,0(t3)
        vl8re32.v       v8,0(t1)
        vrgatherei16.vv v0,v8,v28
        vs8r.v  v0,0(a3)
        addi    a3,sp,140
        vl4re16.v       v24,0(a3)
        addi    a3,sp,464
        vrgatherei16.vv v0,v8,v24
        vl8re32.v       v24,0(t0)
        vmv8r.v v8,v0
        vl8re32.v       v0,0(a3)
        vmul.vv v8,v8,v16
        vmul.vv v24,v24,v0
        vsub.vv v16,v8,v24
        vadd.vv v8,v8,v24
        vsetivli        zero,4,e32,m8,ta,ma
        vle32.v v24,0(a6)
        vsetvli a4,zero,e32,m8,ta,ma
        addi    a4,sp,12
        vlm.v   v0,0(t2)
        vmerge.vvm      v8,v8,v16,v0
        vl4re16.v       v16,0(a4)
        vrgatherei16.vv v0,v24,v16
        vsetivli        zero,4,e32,m8,ta,ma
        vs8r.v  v0,0(a4)
        addi    a4,sp,208
        vl4re16.v       v0,0(a4)
        vs8r.v  v8,0(a0)
        vle32.v v16,0(a1)
        vsetvli a5,zero,e32,m8,ta,ma
        vrgatherei16.vv v8,v16,v0
        vs8r.v  v8,0(a4)
        addi    a4,sp,140
        vl4re16.v       v4,0(a4)
        addi    a5,sp,12
        vrgatherei16.vv v8,v16,v4
        vl8re32.v       v0,0(a5)
        vsetivli        zero,4,e32,m8,ta,ma
        addi    a5,sp,208
        vmv8r.v v16,v8
        vl8re32.v       v8,0(a5)
        vmul.vv v24,v24,v16
        vmul.vv v8,v0,v8
        vsub.vv v16,v24,v8
        vadd.vv v8,v8,v24
        vsetvli a5,zero,e8,m2,ta,ma
        vlm.v   v0,0(t2)
        vsetivli        zero,4,e32,m8,ta,ma
        vmerge.vvm      v8,v8,v16,v0
        vse32.v v8,0(a7)
        addi    sp,sp,592
        jr      ra

This patch makes loops with known NITERS aware of liveness estimation; after this patch, LMUL = 4 is chosen:

lui a5,%hi(f)
addi a5,a5,%lo(f)
addi a3,a5,400
addi a4,a5,800
vsetivli zero,8,e32,m2,ta,ma
vlseg4e32.v v16,(a3)
vlseg4e32.v v8,(a4)
vmul.vv v2,v8,v16
addi a3,a5,528
vmv.v.v v24,v10
vnmsub.vv v24,v18,v2
addi a4,a5,928
vmul.vv v2,v12,v22
vmul.vv v6,v8,v18
vmv.v.v v30,v2
vmacc.vv v30,v14,v20
vmv.v.v v26,v6
vmacc.vv v26,v10,v16
vmul.vv v4,v12,v20
vmv.v.v v28,v14
vnmsub.vv v28,v22,v4
vsseg4e32.v v24,(a5)
vlseg4e32.v v16,(a3)
vlseg4e32.v v8,(a4)
vmul.vv v2,v8,v16
addi a6,a5,128
vmv.v.v v24,v10
vnmsub.vv v24,v18,v2
addi a0,a5,656
vmul.vv v2,v12,v22
addi a1,a5,1056
vmv.v.v v30,v2
vmacc.vv v30,v14,v20
vmul.vv v6,v8,v18
vmul.vv v4,v12,v20
vmv.v.v v26,v6
vmacc.vv v26,v10,v16
vmv.v.v v28,v14
vnmsub.vv v28,v22,v4
vsseg4e32.v v24,(a6)
vlseg4e32.v v16,(a0)
vlseg4e32.v v8,(a1)
vmul.vv v2,v8,v16
addi a2,a5,256
vmv.v.v v24,v10
vnmsub.vv v24,v18,v2
addi a3,a5,784
vmul.vv v2,v12,v22
addi a4,a5,1184
vmv.v.v v30,v2
vmacc.vv v30,v14,v20
vmul.vv v6,v8,v18
vmul.vv v4,v12,v20
vmv.v.v v26,v6
vmacc.vv v26,v10,v16
vmv.v.v v28,v14
vnmsub.vv v28,v22,v4
addi a5,a5,384
vsseg4e32.v v24,(a2)
vsetivli zero,1,e32,m2,ta,ma
vlseg4e32.v v16,(a3)
vlseg4e32.v v8,(a4)
vmul.vv v2,v16,v8
vmul.vv v6,v18,v8
vmv.v.v v24,v18
vnmsub.vv v24,v10,v2
vmul.vv v4,v20,v12
vmul.vv v2,v22,v12
vmv.v.v v26,v6
vmacc.vv v26,v16,v10
vmv.v.v v28,v22
vnmsub.vv v28,v14,v4
vmv.v.v v30,v2
vmacc.vv v30,v20,v14
vsseg4e32.v v24,(a5)
ret

Tested on both RV32 and RV64 with no regressions.

PR target/113112

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): New function.
(get_first_lane_point): Ditto.
(get_last_lane_point): Ditto.
(max_number_of_live_regs): Refine live point dump.
(compute_estimated_lmul): Make known NITERS loops be aware of liveness.
(costs::better_main_loop_than_p): Ditto.
* config/riscv/riscv-vector-costs.h (struct stmt_point): Add new member.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Adapt test.
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-3.c: New test.

4 months agoLoongArch: Fix ICE when passing two same vector argument consecutively
Chenghui Pan [Fri, 22 Dec 2023 08:18:44 +0000 (16:18 +0800)] 
LoongArch: Fix ICE when passing two same vector argument consecutively

Following code will cause ICE on LoongArch target:

  #include <lsxintrin.h>

  extern void bar (__m128i, __m128i);

  __m128i a;

  void
  foo ()
  {
    bar (a, a);
  }

It is caused by a missing constraint definition in mov<mode>_lsx.  This
patch fixes the template and removes the unnecessary processing from
the loongarch_split_move () function.

This patch also cleans up redundant definitions in
loongarch_split_move () and loongarch_split_move_p ().

gcc/ChangeLog:

* config/loongarch/lasx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
* config/loongarch/loongarch-protos.h
(loongarch_split_move): Remove unnecessary argument.
(loongarch_split_move_insn_p): Delete.
(loongarch_split_move_insn): Delete.
* config/loongarch/loongarch.cc
(loongarch_split_move_insn_p): Delete.
(loongarch_load_store_insns): Use loongarch_split_move_p
directly.
(loongarch_split_move): Remove the unnecessary processing.
(loongarch_split_move_insn): Delete.
* config/loongarch/lsx.md: Use loongarch_split_move and
loongarch_split_move_p directly.

gcc/testsuite/ChangeLog:

* gcc.target/loongarch/vector/lsx/lsx-mov-1.c: New test.

4 months agoLoongArch: Fix insn output of vec_concat templates for LASX.
Chenghui Pan [Fri, 22 Dec 2023 08:22:03 +0000 (16:22 +0800)] 
LoongArch: Fix insn output of vec_concat templates for LASX.

When investigating the failure of gcc.dg/vect/slp-reduc-sad.c, the
following instruction block is generated by vec_concatv32qi (which is
generated by vec_initv32qiv16qi) at the entry of the foo() function:

  vldx    $vr3,$r5,$r6
  vld     $vr2,$r5,0
  xvpermi.q       $xr2,$xr3,0x20

which swaps the high and low 128-bit parts of the
vec_initv32qiv16qi operation.

Judging from other targets' similar implementations and the LSX
implementation of the following RTL representation, the current
definitions of "vec_concat<mode>" in lasx.md are wrong:

  (set (op0) (vec_concat (op1) (op2)))

For correct behavior, the last argument of xvpermi.q should be 0x02
instead of 0x20.  This patch fixes the issue and cleans up the vec_concat
template implementation.

gcc/ChangeLog:

* config/loongarch/lasx.md (vec_concatv4di): Delete.
(vec_concatv8si): Delete.
(vec_concatv16hi): Delete.
(vec_concatv32qi): Delete.
(vec_concatv4df): Delete.
(vec_concatv8sf): Delete.
(vec_concat<mode>): New template with insn output fixed.

4 months agoLoongArch: Fixed bug in *bstrins_<mode>_for_ior_mask template.
Li Wei [Mon, 25 Dec 2023 03:20:23 +0000 (11:20 +0800)] 
LoongArch: Fixed bug in *bstrins_<mode>_for_ior_mask template.

We found that a GCC built from the latest sources miscompares when
running the SPEC2006 400.perlbench test with -flto enabled.  Further
testing showed that only the LoongArch architecture is affected.
git bisect identified r14-3773-g5b857e87201335 as the first bad commit.
Debugging revealed that the problem was that the split condition of the
*bstrins_<mode>_for_ior_mask template was empty, when it should actually
match the insn condition.

gcc/ChangeLog:

* config/loongarch/loongarch.md: Adjust.

4 months agors6000: Clean up the pre-checkings of expand_block_compare
Haochen Gui [Wed, 27 Dec 2023 02:32:21 +0000 (10:32 +0800)] 
rs6000: Clean up the pre-checkings of expand_block_compare

Remove the P7 CPU test, as only P7 and above can enter this function, and
P7 LE is excluded by the check of targetm.slow_unaligned_access on
word_mode.  Performance testing also shows that the expanded block compare
beats the library call on P7 BE when the length is from 16 to 64 bytes.

gcc/
* config/rs6000/rs6000-string.cc (expand_block_compare): Assert
only P7 and above can enter this function.  Remove the P7 CPU test and
let P7 BE do the expand.

gcc/testsuite/
* gcc.target/powerpc/block-cmp-4.c: New.

4 months agors6000: Call library for block memory compare when optimizing for size
Haochen Gui [Wed, 27 Dec 2023 02:30:56 +0000 (10:30 +0800)] 
rs6000: Call library for block memory compare when optimizing for size

gcc/
* config/rs6000/rs6000.md (cmpmemsi): Fail when optimizing for size.

gcc/testsuite/
* gcc.target/powerpc/block-cmp-3.c: New.

4 months agors6000: Correct definition of macro of fixed point efficient unaligned
Haochen Gui [Wed, 27 Dec 2023 02:30:06 +0000 (10:30 +0800)] 
rs6000: Correct definition of macro of fixed point efficient unaligned

The macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in
rs6000-string.cc to guard platforms on which fixed-point unaligned
load/store is efficient.  It was originally defined via
TARGET_EFFICIENT_UNALIGNED_VSX, which is enabled from P8 on and can be
disabled by the -mno-vsx option, so the definition is improper.  This
patch corrects it and calls slow_unaligned_access to judge whether
fixed-point unaligned load/store is efficient or not.

gcc/
* config/rs6000/rs6000.h (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED):
Remove.
* config/rs6000/rs6000-string.cc (select_block_compare_mode):
Replace TARGET_EFFICIENT_OVERLAPPING_UNALIGNED with
targetm.slow_unaligned_access.
(expand_block_compare_gpr): Likewise.
(expand_block_compare): Likewise.
(expand_strncmp_gpr_sequence): Likewise.

gcc/testsuite/
* gcc.target/powerpc/block-cmp-1.c: New.
* gcc.target/powerpc/block-cmp-2.c: New.

4 months agotestsuite: 32 bit AIX 2 byte wchar
David Edelsohn [Wed, 27 Dec 2023 00:44:34 +0000 (00:44 +0000)] 
testsuite: 32 bit AIX 2 byte wchar

32 bit AIX supports 2 byte wchar.  The wchar-multi1.C testcase assumes
4 byte wchar.  Update the testcase to require 4 byte wchar.

gcc/testsuite/ChangeLog:
* g++.dg/cpp23/wchar-multi1.C: Require 4 byte wchar_t.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agotestsuite: AIX csect section name.
David Edelsohn [Tue, 26 Dec 2023 16:44:09 +0000 (16:44 +0000)] 
testsuite: AIX csect section name.

AIX sections use the csect directive to name a section.  Check for
csect name in attr-section testcases.

gcc/testsuite/ChangeLog:
* g++.dg/ext/attr-section1.C: Test for csect section directive.
* g++.dg/ext/attr-section1a.C: Same.
* g++.dg/ext/attr-section2.C: Same.
* g++.dg/ext/attr-section2a.C: Same.
* g++.dg/ext/attr-section2b.C: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agoDaily bump.
GCC Administrator [Wed, 27 Dec 2023 00:18:20 +0000 (00:18 +0000)] 
Daily bump.

4 months agotestsuite: Skip analyzer out-of-bounds-diagram on AIX.
David Edelsohn [Tue, 26 Dec 2023 16:44:09 +0000 (16:44 +0000)] 
testsuite: Skip analyzer out-of-bounds-diagram on AIX.

The out-of-bounds diagram tests fail on AIX.

gcc/testsuite/ChangeLog:
* gcc.dg/analyzer/out-of-bounds-diagram-17.c: Skip on AIX.
* gcc.dg/analyzer/out-of-bounds-diagram-18.c: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agotestsuite: Skip split DWARF on AIX.
David Edelsohn [Tue, 26 Dec 2023 16:41:40 +0000 (11:41 -0500)] 
testsuite: Skip split DWARF on AIX.

AIX does not support split DWARF.

gcc/testsuite/ChangeLog:
* gcc.dg/pr111409.c: Skip on AIX.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agotestsuite: Disable strub on AIX.
David Edelsohn [Tue, 26 Dec 2023 16:34:11 +0000 (11:34 -0500)] 
testsuite: Disable strub on AIX.

AIX does not support stack scrubbing. Set strub as unsupported on AIX
and ensure that testcases check for strub support.

gcc/testsuite/ChangeLog:
* c-c++-common/strub-unsupported-2.c: Require strub.
* c-c++-common/strub-unsupported-3.c: Same.
* c-c++-common/strub-unsupported.c: Same.
* lib/target-supports.exp (check_effective_target_strub): Return 0
for AIX.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agoRISC-V: Fix typo
Juzhe-Zhong [Tue, 26 Dec 2023 10:53:49 +0000 (18:53 +0800)] 
RISC-V: Fix typo

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Fix typo.

4 months agoRISC-V: Some minor tweaks on dynamic LMUL cost model
Juzhe-Zhong [Tue, 26 Dec 2023 08:42:27 +0000 (16:42 +0800)] 
RISC-V: Some minor tweaks on dynamic LMUL cost model

Tweak some code in the dynamic LMUL cost model to make the computation more predictable and accurate.

Tested on both RV32 and RV64 with no regressions.

Committed.

PR target/113112

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Tweak LMUL estimation.
(has_unexpected_spills_p): Ditto.
(costs::record_potential_unexpected_spills): Ditto.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Add more checks.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-12.c: New test.
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-2.c: New test.

4 months agoFix compile options of pr110279-1.c and pr110279-2.c
Di Zhao [Tue, 26 Dec 2023 08:36:02 +0000 (16:36 +0800)] 
Fix compile options of pr110279-1.c and pr110279-2.c

The two testcases are for targets that support FMA, and
pr110279-2.c assumes a reassoc_width of 4 for FMUL.

This patch adds the missing options to fix regression test failures
on nvptx/GCN (where the default reassoc_width for FMUL is 1) and x86_64
(which needs "-mfma").

gcc/testsuite/ChangeLog:

* gcc.dg/pr110279-1.c: Add "-mcpu=generic" for aarch64; add
"-mfma" for x86_64.
* gcc.dg/pr110279-2.c: Replace "-march=armv8.2-a" with
"-mcpu=generic"; limit the check to be on aarch64.

4 months agotestsuite: Add dg-require-effective-target powerpc_pcrel for testcase [PR110320]
Jeevitha [Tue, 26 Dec 2023 04:17:54 +0000 (22:17 -0600)] 
testsuite: Add dg-require-effective-target powerpc_pcrel for testcase [PR110320]

Add a dg-require-effective-target directive so that the test case runs
only on powerpc_pcrel targets.

2023-12-26  Jeevitha Palanisamy  <jeevitha@linux.ibm.com>

gcc/testsuite/
PR target/110320
* gcc.target/powerpc/pr110320-1.c: Add dg-require-effective-target powerpc_pcrel.

4 months agoDaily bump.
GCC Administrator [Tue, 26 Dec 2023 00:19:10 +0000 (00:19 +0000)] 
Daily bump.

4 months agotestsuite: Skip analyzer tests on AIX.
David Edelsohn [Mon, 25 Dec 2023 17:41:50 +0000 (12:41 -0500)] 
testsuite: Skip analyzer tests on AIX.

Some new analyzer tests fail on AIX.

gcc/testsuite/ChangeLog:
* c-c++-common/analyzer/capacity-1.c: Skip on AIX.
* c-c++-common/analyzer/capacity-2.c: Same.
* c-c++-common/analyzer/fd-glibc-byte-stream-socket.c: Same.
* c-c++-common/analyzer/fd-manpage-getaddrinfo-client.c: Same.
* c-c++-common/analyzer/fd-mappage-getaddrinfo-server.c: Same.
* gcc.dg/analyzer/fd-glibc-byte-stream-connection-server.c: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
4 months agoRISC-V: Move RVV V_REGS liveness computation into analyze_loop_vinfo
Juzhe-Zhong [Mon, 25 Dec 2023 09:17:25 +0000 (17:17 +0800)] 
RISC-V: Move RVV V_REGS liveness computation into analyze_loop_vinfo

Currently, we compute RVV V_REGS liveness during better_main_loop_than_p, which is not the appropriate
time to do so: for example, when the code will finally pick the LMUL = 8 vectorization
factor, we compute liveness for LMUL = 8 multiple times, which is redundant.

Since we leverage the current ARM SVE cost model:

  /* Do one-time initialization based on the vinfo.  */
  loop_vec_info loop_vinfo = dyn_cast<loop_vec_info> (m_vinfo);
  if (!m_analyzed_vinfo)
    {
      if (loop_vinfo)
analyze_loop_vinfo (loop_vinfo);

      m_analyzed_vinfo = true;
    }

the cost model is analyzed only once per vinfo.

So here we move dynamic LMUL liveness information into analyze_loop_vinfo.

/* Do one-time initialization of the costs given that we're
   costing the loop vectorization described by LOOP_VINFO.  */
void
costs::analyze_loop_vinfo (loop_vec_info loop_vinfo)
{
  ...

  /* Detect whether the LOOP has unexpected spills.  */
  record_potential_unexpected_spills (loop_vinfo);
}

This avoids redundant computations, and the dynamic LMUL cost model flow
becomes much more reasonable and consistent with the others.

Tested on RV32 and RV64 with no regressions.

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Allow
fractional vectors.
(preferred_new_lmul_p): Move RVV V_REGS liveness computation into analyze_loop_vinfo.
(has_unexpected_spills_p): New function.
(costs::record_potential_unexpected_spills): Ditto.
(costs::better_main_loop_than_p): Move RVV V_REGS liveness computation into
analyze_loop_vinfo.
* config/riscv/riscv-vector-costs.h: New functions and variables.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul-mixed-1.c: Robustify test.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/no-dynamic-lmul-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/pr111848.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Ditto.

4 months agomiddle-end: explicitly initialize vec_stmts [PR113132]
Tamar Christina [Mon, 25 Dec 2023 10:58:40 +0000 (10:58 +0000)] 
middle-end: explicitly initialize vec_stmts [PR113132]

When configured with --enable-checking=release we get a false
positive on the use of vec_stmts, as the compiler seems unable
to notice that it gets initialized through pass-by-reference.

This patch explicitly initializes the local.

gcc/ChangeLog:

PR bootstrap/113132
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize vec_stmts.

4 months agors6000: Change GPR2 to volatile & non-fixed register for function that does not use...
Jeevitha [Mon, 25 Dec 2023 10:06:54 +0000 (04:06 -0600)] 
rs6000: Change GPR2 to volatile & non-fixed register for function that does not use TOC [PR110320]

Normally, GPR2 is the TOC pointer and is defined as a fixed, non-volatile
register.  However, it can be used as a volatile register for PCREL
addressing.  Therefore, r2 is made non-fixed in FIXED_REGISTERS and set
back to fixed if the target is not PCREL, and also when the user
explicitly requests TOC or fixed.  If r2 is fixed, it is made
non-volatile.  Changes in register preservation roles can be accomplished
with the help of the available target hooks
(TARGET_CONDITIONAL_REGISTER_USAGE).

2023-12-24  Jeevitha Palanisamy  <jeevitha@linux.ibm.com>

gcc/
PR target/110320
* config/rs6000/rs6000.cc (rs6000_conditional_register_usage): Change
GPR2 to volatile and non-fixed register for PCREL.
* config/rs6000/rs6000.h (FIXED_REGISTERS): Modify GPR2 to not fixed.

gcc/testsuite/
PR target/110320
* gcc.target/powerpc/pr110320-1.c: New testcase.
* gcc.target/powerpc/pr110320-2.c: New testcase.
* gcc.target/powerpc/pr110320-3.c: New testcase.

Co-authored-by: Peter Bergner <bergner@linux.ibm.com>
4 months ago  RISC-V: Add one more ASM check in PR113112-1.c
Juzhe-Zhong [Mon, 25 Dec 2023 06:19:52 +0000 (14:19 +0800)] 
RISC-V: Add one more ASM check in PR113112-1.c

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Add one more ASM check.

4 months ago  match: Improve `(a != b) ? (a + b) : (2 * a)` pattern [PR19832]
Andrew Pinski [Sun, 24 Dec 2023 23:51:35 +0000 (15:51 -0800)] 
match: Improve `(a != b) ? (a + b) : (2 * a)` pattern [PR19832]

In the testcase provided, we would match f_plus but not g_plus
due to a missing `:c` on the plus operator. This fixes the oversight
there.
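A hedged sketch of what the two forms likely look like (the function bodies are reconstructed from the description above, not copied from the testcase): without `:c` on `plus`, the pattern matched f_plus but not g_plus, whose addition merely has its operands swapped. Both are equivalent to `a + b`, since when a == b we have 2 * a == a + b.

```c
#include <assert.h>

/* Matched by the pattern even before the fix.  */
int f_plus (int a, int b)
{
  return (a != b) ? (a + b) : (2 * a);
}

/* Same computation with the addition's operands swapped; only matched
   once the plus operator is marked commutative with `:c`.  */
int g_plus (int a, int b)
{
  return (a != b) ? (b + a) : (2 * a);
}
```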

Note this was also reported in https://github.com/llvm/llvm-project/issues/76318 .

Committed as obvious after bootstrap/test on x86_64-linux-gnu.

PR tree-optimization/19832

gcc/ChangeLog:

* match.pd (`(a != b) ? (a + b) : (2 * a)`): Add `:c`
on the plus operator.

gcc/testsuite/ChangeLog:

* gcc.dg/tree-ssa/phi-opt-same-2.c: New test.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
4 months ago  Daily bump.
GCC Administrator [Mon, 25 Dec 2023 00:18:09 +0000 (00:18 +0000)] 
Daily bump.

4 months ago  testsuite: un-xfail TSVC loops that check for exit control flow vectorization
Tamar Christina [Sun, 24 Dec 2023 19:20:08 +0000 (19:20 +0000)] 
testsuite: un-xfail TSVC loops that check for exit control flow vectorization

The following three tests now correctly work for targets that have an
implementation of cbranch for vectors so XFAILs are conditionally removed gated
on vect_early_break support.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/tsvc/vect-tsvc-s332.c: Remove xfail when early break
supported.
* gcc.dg/vect/tsvc/vect-tsvc-s481.c: Likewise.
* gcc.dg/vect/tsvc/vect-tsvc-s482.c: Likewise.

4 months ago  testsuite: Add tests for early break vectorization
Tamar Christina [Sun, 24 Dec 2023 19:19:38 +0000 (19:19 +0000)] 
testsuite: Add tests for early break vectorization

This adds new tests to check for all the early break functionality.
It includes a number of codegen and runtime tests checking the values at
different needles in the array.

They also check the values on different array sizes and peeling positions,
datatypes, VL, ncopies and every other variant I could think of.

Additionally it also contains reduced cases from issues found running over
various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.

The tests on which I have disabled x86_64 are disabled because the target is
missing cbranch for all types.  I think it should be possible to add it for the
missing types since all we care about is whether a bit is set or not.

Bootstrap and Regtest on arm-none-linux-gnueabihf still running
and test on arm-none-eabi -march=armv8.1-m.main+mve -mfpu=auto running.

gcc/ChangeLog:

* doc/sourcebuild.texi (check_effective_target_vect_early_break_hw,
check_effective_target_vect_early_break): Document.

gcc/testsuite/ChangeLog:

* lib/target-supports.exp (add_options_for_vect_early_break,
check_effective_target_vect_early_break_hw,
check_effective_target_vect_early_break): New.
* g++.dg/vect/vect-early-break_1.cc: New test.
* g++.dg/vect/vect-early-break_2.cc: New test.
* g++.dg/vect/vect-early-break_3.cc: New test.
* gcc.dg/vect/vect-early-break-run_1.c: New test.
* gcc.dg/vect/vect-early-break-run_10.c: New test.
* gcc.dg/vect/vect-early-break-run_2.c: New test.
* gcc.dg/vect/vect-early-break-run_3.c: New test.
* gcc.dg/vect/vect-early-break-run_4.c: New test.
* gcc.dg/vect/vect-early-break-run_5.c: New test.
* gcc.dg/vect/vect-early-break-run_6.c: New test.
* gcc.dg/vect/vect-early-break-run_7.c: New test.
* gcc.dg/vect/vect-early-break-run_8.c: New test.
* gcc.dg/vect/vect-early-break-run_9.c: New test.
* gcc.dg/vect/vect-early-break-template_1.c: New test.
* gcc.dg/vect/vect-early-break-template_2.c: New test.
* gcc.dg/vect/vect-early-break_1.c: New test.
* gcc.dg/vect/vect-early-break_10.c: New test.
* gcc.dg/vect/vect-early-break_11.c: New test.
* gcc.dg/vect/vect-early-break_12.c: New test.
* gcc.dg/vect/vect-early-break_13.c: New test.
* gcc.dg/vect/vect-early-break_14.c: New test.
* gcc.dg/vect/vect-early-break_15.c: New test.
* gcc.dg/vect/vect-early-break_16.c: New test.
* gcc.dg/vect/vect-early-break_17.c: New test.
* gcc.dg/vect/vect-early-break_18.c: New test.
* gcc.dg/vect/vect-early-break_19.c: New test.
* gcc.dg/vect/vect-early-break_2.c: New test.
* gcc.dg/vect/vect-early-break_20.c: New test.
* gcc.dg/vect/vect-early-break_21.c: New test.
* gcc.dg/vect/vect-early-break_22.c: New test.
* gcc.dg/vect/vect-early-break_23.c: New test.
* gcc.dg/vect/vect-early-break_24.c: New test.
* gcc.dg/vect/vect-early-break_25.c: New test.
* gcc.dg/vect/vect-early-break_26.c: New test.
* gcc.dg/vect/vect-early-break_27.c: New test.
* gcc.dg/vect/vect-early-break_28.c: New test.
* gcc.dg/vect/vect-early-break_29.c: New test.
* gcc.dg/vect/vect-early-break_3.c: New test.
* gcc.dg/vect/vect-early-break_30.c: New test.
* gcc.dg/vect/vect-early-break_31.c: New test.
* gcc.dg/vect/vect-early-break_32.c: New test.
* gcc.dg/vect/vect-early-break_33.c: New test.
* gcc.dg/vect/vect-early-break_34.c: New test.
* gcc.dg/vect/vect-early-break_35.c: New test.
* gcc.dg/vect/vect-early-break_36.c: New test.
* gcc.dg/vect/vect-early-break_37.c: New test.
* gcc.dg/vect/vect-early-break_38.c: New test.
* gcc.dg/vect/vect-early-break_39.c: New test.
* gcc.dg/vect/vect-early-break_4.c: New test.
* gcc.dg/vect/vect-early-break_40.c: New test.
* gcc.dg/vect/vect-early-break_41.c: New test.
* gcc.dg/vect/vect-early-break_42.c: New test.
* gcc.dg/vect/vect-early-break_43.c: New test.
* gcc.dg/vect/vect-early-break_44.c: New test.
* gcc.dg/vect/vect-early-break_45.c: New test.
* gcc.dg/vect/vect-early-break_46.c: New test.
* gcc.dg/vect/vect-early-break_47.c: New test.
* gcc.dg/vect/vect-early-break_48.c: New test.
* gcc.dg/vect/vect-early-break_49.c: New test.
* gcc.dg/vect/vect-early-break_5.c: New test.
* gcc.dg/vect/vect-early-break_50.c: New test.
* gcc.dg/vect/vect-early-break_51.c: New test.
* gcc.dg/vect/vect-early-break_52.c: New test.
* gcc.dg/vect/vect-early-break_53.c: New test.
* gcc.dg/vect/vect-early-break_54.c: New test.
* gcc.dg/vect/vect-early-break_55.c: New test.
* gcc.dg/vect/vect-early-break_56.c: New test.
* gcc.dg/vect/vect-early-break_57.c: New test.
* gcc.dg/vect/vect-early-break_58.c: New test.
* gcc.dg/vect/vect-early-break_59.c: New test.
* gcc.dg/vect/vect-early-break_6.c: New test.
* gcc.dg/vect/vect-early-break_60.c: New test.
* gcc.dg/vect/vect-early-break_61.c: New test.
* gcc.dg/vect/vect-early-break_62.c: New test.
* gcc.dg/vect/vect-early-break_63.c: New test.
* gcc.dg/vect/vect-early-break_64.c: New test.
* gcc.dg/vect/vect-early-break_65.c: New test.
* gcc.dg/vect/vect-early-break_66.c: New test.
* gcc.dg/vect/vect-early-break_67.c: New test.
* gcc.dg/vect/vect-early-break_68.c: New test.
* gcc.dg/vect/vect-early-break_69.c: New test.
* gcc.dg/vect/vect-early-break_7.c: New test.
* gcc.dg/vect/vect-early-break_70.c: New test.
* gcc.dg/vect/vect-early-break_71.c: New test.
* gcc.dg/vect/vect-early-break_72.c: New test.
* gcc.dg/vect/vect-early-break_73.c: New test.
* gcc.dg/vect/vect-early-break_74.c: New test.
* gcc.dg/vect/vect-early-break_75.c: New test.
* gcc.dg/vect/vect-early-break_76.c: New test.
* gcc.dg/vect/vect-early-break_77.c: New test.
* gcc.dg/vect/vect-early-break_78.c: New test.
* gcc.dg/vect/vect-early-break_79.c: New test.
* gcc.dg/vect/vect-early-break_8.c: New test.
* gcc.dg/vect/vect-early-break_80.c: New test.
* gcc.dg/vect/vect-early-break_81.c: New test.
* gcc.dg/vect/vect-early-break_82.c: New test.
* gcc.dg/vect/vect-early-break_83.c: New test.
* gcc.dg/vect/vect-early-break_84.c: New test.
* gcc.dg/vect/vect-early-break_85.c: New test.
* gcc.dg/vect/vect-early-break_86.c: New test.
* gcc.dg/vect/vect-early-break_87.c: New test.
* gcc.dg/vect/vect-early-break_88.c: New test.
* gcc.dg/vect/vect-early-break_89.c: New test.
* gcc.dg/vect/vect-early-break_9.c: New test.
* gcc.dg/vect/vect-early-break_90.c: New test.
* gcc.dg/vect/vect-early-break_91.c: New test.
* gcc.dg/vect/vect-early-break_92.c: New test.
* gcc.dg/vect/vect-early-break_93.c: New test.

4 months ago  AArch64: Add implementation for vector cbranch for Advanced SIMD
Tamar Christina [Sun, 24 Dec 2023 19:18:53 +0000 (19:18 +0000)] 
AArch64: Add implementation for vector cbranch for Advanced SIMD

Hi All,

This adds an implementation for conditional branch optab for AArch64.

For e.g.

void f1 ()
{
  for (int i = 0; i < N; i++)
    {
      b[i] += a[i];
      if (a[i] > 0)
break;
    }
}

For 128-bit vectors we generate:

        cmgt    v1.4s, v1.4s, #0
        umaxp   v1.4s, v1.4s, v1.4s
        fmov    x3, d1
        cbnz    x3, .L8

and for 64-bit vectors we can omit the compression:

        cmgt    v1.2s, v1.2s, #0
        fmov    x2, d1
        cbz     x2, .L13

gcc/ChangeLog:

* config/aarch64/aarch64-simd.md (cbranch<mode>4): New.

gcc/testsuite/ChangeLog:

* gcc.target/aarch64/sve/vect-early-break-cbranch.c: New test.
* gcc.target/aarch64/vect-early-break-cbranch.c: New test.

4 months ago  middle-end: Support vectorization of loops with multiple exits.
Tamar Christina [Sun, 24 Dec 2023 19:18:12 +0000 (19:18 +0000)] 
middle-end: Support vectorization of loops with multiple exits.

Hi All,

This patch adds initial support for early break vectorization in GCC. In other
words it implements support for vectorization of loops with multiple exits.
The support is added for any target that implements a vector cbranch optab,
this includes both fully masked and non-masked targets.

Depending on the operation, the vectorizer may also require support for boolean
mask reductions using Inclusive OR/Bitwise AND.  This is however only checked
when the comparison would produce multiple statements.

This also fully decouples the vectorizer's notion of exit from the existing loop
infrastructure's exit.  Before this patch the vectorizer always picked the
natural loop latch connected exit as the main exit.

After this patch the vectorizer is free to choose any exit it deems appropriate
as the main exit.  This means that even if the main exit is not countable (i.e.
the termination condition could not be determined) we might still be able to
vectorize should one of the other exits be countable.

In such situations the loop is reflowed, which enables vectorization of many
other loop forms.

Concretely the kind of loops supported are of the forms:

 for (int i = 0; i < N; i++)
 {
   <statements1>
   if (<condition>)
     {
       ...
       <action>;
     }
   <statements2>
 }

where <action> can be:
 - break
 - return
 - goto

Any number of statements can be used before the <action> occurs.
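The supported shape can be sketched concretely.  The following is a hypothetical example (the function names are illustrative, not from the patch) using `return` as the <action>:

```c
#include <assert.h>

#define N 100

/* A loop of the supported form: <statements1> before the condition,
   an early exit via `return` as the <action>, and room for
   <statements2> afterwards.  */
int first_ge (const int *a, int x)
{
  for (int i = 0; i < N; i++)
    {
      int v = a[i];        /* <statements1> */
      if (v >= x)
        return i;          /* <action> */
      /* <statements2> would follow here.  */
    }
  return -1;
}

/* Small driver: a[i] = i, so the first element >= 42 is at index 42.  */
int demo (void)
{
  int a[N];
  for (int i = 0; i < N; i++)
    a[i] = i;
  return first_ge (a, 42);
}
```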

Since this is an initial version for GCC 14 it has the following limitations and
features:

- Only fixed sized iterations and buffers are supported.  That is to say any
  vectors loaded or stored must be to statically allocated arrays with known
  sizes. N must also be known.  This limitation is because our primary target
  for this optimization is SVE.  For VLA SVE we can't easily do cross page
  iteration checks. The result is also likely not to be beneficial. For that
  reason we punt support for variable buffers until we have First-Faulting
  support in GCC 15.
- Any stores in <statements1> should not be to the same objects as in
  <condition>.  Loads are fine as long as they don't have the possibility to
  alias.  More concretely, we block RAW dependencies when the intermediate value
  can't be separated from the store, or the store itself can't be moved.
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported.
- Any number of loop early exits are supported.
- No support for epilogue vectorization.  The only epilogue supported is the
  scalar final one.  Peeling code supports it but the code motion code cannot
  find instructions to make the move in the epilogue.
- Early breaks are only supported for inner loop vectorization.

With the help of IPA and LTO this still gets hit quite often.  During bootstrap
it hit rather frequently.  Additionally TSVC s332, s481 and s482 all pass now
since these are tests for support for early exit vectorization.

This implementation does not support completely handling the early break inside
the vector loop itself but instead supports adding checks such that if we know
that we have to exit in the current iteration then we branch to scalar code to
actually do the final VF iterations, which handle all the code in <action>.

For the scalar loop we know that whatever exit you take you have to perform at
most VF iterations.  For vector code we only care about the state of fully
performed iterations and reset the scalar code to the (partially) remaining loop.

That is to say, the first vector loop executes so long as the early exit isn't
needed.  Once the exit is taken, the scalar code will perform at most VF extra
iterations.  The exact number depends on peeling, the iteration start, and which
exit was taken (natural or early).  For this scalar loop, all early exits are
treated the same.

When we vectorize we move any statement not related to the early break itself
and that would be incorrect to execute before the break (i.e. has side effects)
to after the break.  If this is not possible we decline to vectorize.  The
analysis and code motion also takes into account that it doesn't introduce a RAW
dependency after the move of the stores.

This means that we check at the start of iterations whether we are going to exit
or not.  During the analysis phase we check whether we are allowed to do this
moving of statements.  Also note that we move only the scalar statements, and
do so after peeling but just before we start transforming statements.

With this the vector flow no longer necessarily needs to match that of the
scalar code.  In addition most of the infrastructure is in place to support
general control flow safely, however we are punting this to GCC 15.

Codegen:

for e.g.

unsigned vect_a[N];
unsigned vect_b[N];

unsigned test4(unsigned x)
{
 unsigned ret = 0;
 for (int i = 0; i < N; i++)
 {
   vect_b[i] = x + i;
   if (vect_a[i] > x)
     break;
   vect_a[i] = x;

 }
 return ret;
}

We generate for Adv. SIMD:

test4:
        adrp    x2, .LC0
        adrp    x3, .LANCHOR0
        dup     v2.4s, w0
        add     x3, x3, :lo12:.LANCHOR0
        movi    v4.4s, 0x4
        add     x4, x3, 3216
        ldr     q1, [x2, #:lo12:.LC0]
        mov     x1, 0
        mov     w2, 0
        .p2align 3,,7
.L3:
        ldr     q0, [x3, x1]
        add     v3.4s, v1.4s, v2.4s
        add     v1.4s, v1.4s, v4.4s
        cmhi    v0.4s, v0.4s, v2.4s
        umaxp   v0.4s, v0.4s, v0.4s
        fmov    x5, d0
        cbnz    x5, .L6
        add     w2, w2, 1
        str     q3, [x1, x4]
        str     q2, [x3, x1]
        add     x1, x1, 16
        cmp     w2, 200
        bne     .L3
        mov     w7, 3
.L2:
        lsl     w2, w2, 2
        add     x5, x3, 3216
        add     w6, w2, w0
        sxtw    x4, w2
        ldr     w1, [x3, x4, lsl 2]
        str     w6, [x5, x4, lsl 2]
        cmp     w0, w1
        bcc     .L4
        add     w1, w2, 1
        str     w0, [x3, x4, lsl 2]
        add     w6, w1, w0
        sxtw    x1, w1
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        add     w4, w2, 2
        str     w0, [x3, x1, lsl 2]
        sxtw    x1, w4
        add     w6, w1, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
        add     w2, w2, 3
        cmp     w7, 3
        beq     .L4
        sxtw    x1, w2
        add     w2, w2, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w2, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
.L4:
        mov     w0, 0
        ret
        .p2align 2,,3
.L6:
        mov     w7, 4
        b       .L2

and for SVE:

test4:
        adrp    x2, .LANCHOR0
        add     x2, x2, :lo12:.LANCHOR0
        add     x5, x2, 3216
        mov     x3, 0
        mov     w1, 0
        cntw    x4
        mov     z1.s, w0
        index   z0.s, #0, #1
        ptrue   p1.b, all
        ptrue   p0.s, all
        .p2align 3,,7
.L3:
        ld1w    z2.s, p1/z, [x2, x3, lsl 2]
        add     z3.s, z0.s, z1.s
        cmplo   p2.s, p0/z, z1.s, z2.s
        b.any   .L2
        st1w    z3.s, p1, [x5, x3, lsl 2]
        add     w1, w1, 1
        st1w    z1.s, p1, [x2, x3, lsl 2]
        add     x3, x3, x4
        incw    z0.s
        cmp     w3, 803
        bls     .L3
.L5:
        mov     w0, 0
        ret
        .p2align 2,,3
.L2:
        cntw    x5
        mul     w1, w1, w5
        cbz     w5, .L5
        sxtw    x1, w1
        sub     w5, w5, #1
        add     x5, x5, x1
        add     x6, x2, 3216
        b       .L6
        .p2align 2,,3
.L14:
        str     w0, [x2, x1, lsl 2]
        cmp     x1, x5
        beq     .L5
        mov     x1, x4
.L6:
        ldr     w3, [x2, x1, lsl 2]
        add     w4, w0, w1
        str     w4, [x6, x1, lsl 2]
        add     x4, x1, 1
        cmp     w0, w3
        bcs     .L14
        mov     w0, 0
        ret

On the workloads this work is based on we see between 2-3x performance uplift
using this patch.

Follow up plan:
 - Boolean vectorization has several shortcomings.  I've filed PR110223 with the
   bigger ones that cause vectorization to fail with this patch.
 - SLP support.  This is planned for GCC 15 as for majority of the cases build
   SLP itself fails.  This means I'll need to spend time in making this more
   robust first.  Additionally it requires:
     * Adding support for vectorizing CFG (gconds)
     * Support for CFG to differ between vector and scalar loops.
   Both of which would be disruptive to the tree and I suspect I'll be handling
   fallouts from this patch for a while.  So I plan to work on the surrounding
   building blocks first for the remainder of the year.

Additionally it also contains reduced cases from issues found running over
various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a
 -mcpu=neoverse-v1
 -mcpu=neoverse-n2

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues.

gcc/ChangeLog:

* tree-if-conv.cc (idx_within_array_bound): Expose.
* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New.
(vect_analyze_data_ref_dependences): Use it.
* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
(vect_set_loop_controls_directly,
vect_set_loop_condition_partial_vectors,
vect_set_loop_condition_partial_vectors_avx512,
vect_set_loop_condition_normal): Support multiple exits.
(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling for
multiple exits.
(slpeel_can_duplicate_loop_p): Change vectorizer from looking at BB
count and instead look at loop shape.
(vect_update_ivs_after_vectorizer): Drop asserts.
(vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations.
(vect_do_peeling): Support multiple exits.
(vect_loop_versioning): Likewise.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
early_breaks.
(vect_analyze_loop_form): Support loop flows with more than single BB
loop body.
(vect_create_loop_vinfo): Support niters analysis for multiple exits.
(vect_analyze_loop): Likewise.
(vect_get_vect_def): New.
(vect_create_epilog_for_reduction): Support early exit reductions.
(vectorizable_live_operation_1): New.
(find_connected_edge): New.
(vectorizable_live_operation): Support early exit live operations.
(move_early_exit_stmts): New.
(vect_transform_loop): Use it.
* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
(vect_recog_bitfield_ref_pattern): Support gconds and bools.
(vect_recog_gcond_pattern): New.
(possible_vector_mask_operation_p): Support gcond masks.
(vect_determine_mask_precision): Likewise.
(vect_mark_pattern_stmts): Set gcond def type.
(can_vectorize_live_stmts): Force early break inductions to be live.
* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for
early breaks.
(vect_mark_stmts_to_be_vectorized): Process gcond usage.
(perm_mask_for_reverse): Expose.
(vectorizable_comparison_1): New.
(vectorizable_early_exit): New.
(vect_analyze_stmt): Support early break and gcond.
(vect_transform_stmt): Likewise.
(vect_is_simple_use): Likewise.
(vect_get_vector_types_for_stmt): Likewise.
* tree-vectorizer.cc (pass_vectorize::execute): Update exits for value
numbering.
* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
LOOP_VINFO_EARLY_BRK_VUSES): New.
(is_loop_header_bb_p): Drop assert.
(class loop): Add early_breaks, early_break_stores, early_break_dest_bb,
early_break_vuses.
(vect_iv_increment_position, perm_mask_for_reverse,
ref_within_array_bound): New.
(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.

4 months ago  middle-end: prevent LIM from hoisting vector compares from gconds if target does not...
Tamar Christina [Sun, 24 Dec 2023 19:17:13 +0000 (19:17 +0000)] 
middle-end: prevent LIM from hoisting vector compares from gconds if target does not support it.

LIM notices that in some cases the condition and the results are loop
invariant and tries to move them out of the loop.

While the resulting code is operationally sound, moving the compare out of the
gcond results in generating code that no longer branches, so cbranch is no
longer applicable.  As such I now add code to check during this motion whether
the target supports flag-setting vector comparison as a general operation.

I have tried writing a GIMPLE testcase for this but the GIMPLE FE seems to
have some trouble with the vector types; it fails to parse.

The early break testsuite however has a test for this
(vect-early-break_67.c).
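A minimal sketch, assuming a hypothetical loop shape (the names and sizes are illustrative, not from the patch), of the kind of code where this arises: the early-break condition does not depend on the induction variable, so after vectorization the vector compare feeding the gcond is loop-invariant and LIM would try to hoist it, which is only useful if the target can materialize the compare result as a general operation.

```c
#include <assert.h>

#define N 16

int a[N], b[N];

int f (int x, int limit)
{
  for (int i = 0; i < N; i++)
    {
      b[i] = a[i] + x;
      if (x > limit)   /* loop-invariant early break  */
        return i;
    }
  return N;
}
```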

gcc/ChangeLog:

* tree-ssa-loop-im.cc (determine_max_movement): Import insn-codes.h
and optabs-tree.h and check for vector compare motion out of gcond.

4 months ago  testsuite: Add more pragma novector to new tests
Tamar Christina [Sun, 24 Dec 2023 19:16:40 +0000 (19:16 +0000)] 
testsuite: Add more pragma novector to new tests

This updates the testsuite and adds more #pragma GCC novector to various tests
that would otherwise vectorize the vector result checking code.

This cleans out the testsuite since the last rebase and prepares for the landing
of the early break patch.
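To illustrate the kind of change, a hedged sketch (a hypothetical test shape, not one of the files listed below): the result-checking loop gets `#pragma GCC novector` so it stays scalar and does not show up in the vectorizer dump the test scans; compilers without the pragma simply warn and ignore it.

```c
#include <assert.h>

#define N 32

int a[N], b[N], c[N];

/* Loop under test: the one the testcase wants vectorized.  */
void sum (void)
{
  for (int i = 0; i < N; i++)
    c[i] = a[i] + b[i];
}

/* Checking loop: kept scalar via the pragma so it does not pollute
   the vectorizer dump.  */
int check (void)
{
#pragma GCC novector
  for (int i = 0; i < N; i++)
    if (c[i] != a[i] + b[i])
      return 0;
  return 1;
}

/* Driver: fill the inputs, run the kernel, verify.  */
int run (void)
{
  for (int i = 0; i < N; i++)
    {
      a[i] = i;
      b[i] = 2 * i;
    }
  sum ();
  return check ();
}
```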

gcc/testsuite/ChangeLog:

* gcc.dg/vect/no-scevccp-slp-30.c: Add pragma GCC novector to abort
loop.
* gcc.dg/vect/no-scevccp-slp-31.c: Likewise.
* gcc.dg/vect/no-section-anchors-vect-69.c: Likewise.
* gcc.target/aarch64/vect-xorsign_exec.c: Likewise.
* gcc.target/i386/avx512er-vrcp28ps-3.c: Likewise.
* gcc.target/i386/avx512er-vrsqrt28ps-3.c: Likewise.
* gcc.target/i386/avx512er-vrsqrt28ps-5.c: Likewise.
* gcc.target/i386/avx512f-ceil-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-ceil-vec-1.c: Likewise.
* gcc.target/i386/avx512f-ceilf-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-ceilf-vec-1.c: Likewise.
* gcc.target/i386/avx512f-floor-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-floor-vec-1.c: Likewise.
* gcc.target/i386/avx512f-floorf-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-floorf-vec-1.c: Likewise.
* gcc.target/i386/avx512f-rint-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-rintf-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-round-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-roundf-sfix-vec-1.c: Likewise.
* gcc.target/i386/avx512f-trunc-vec-1.c: Likewise.
* gcc.target/i386/avx512f-truncf-vec-1.c: Likewise.
* gcc.target/i386/vect-alignment-peeling-1.c: Likewise.
* gcc.target/i386/vect-alignment-peeling-2.c: Likewise.
* gcc.target/i386/vect-pack-trunc-1.c: Likewise.
* gcc.target/i386/vect-pack-trunc-2.c: Likewise.
* gcc.target/i386/vect-perm-even-1.c: Likewise.
* gcc.target/i386/vect-unpack-1.c: Likewise.

4 months ago  hppa: Fix pr110279-1.c on hppa
John David Anglin [Sun, 24 Dec 2023 19:03:59 +0000 (19:03 +0000)] 
hppa: Fix pr110279-1.c on hppa

2023-12-24  John David Anglin  <danglin@gcc.gnu.org>

gcc/testsuite/ChangeLog:

* gcc.dg/pr110279-1.c: Add -march=2.0 option on hppa*-*-*.

4 months ago  RISC-V: XFail the signbit-5 run test for RVV
Pan Li [Wed, 20 Dec 2023 12:48:18 +0000 (20:48 +0800)] 
RISC-V: XFail the signbit-5 run test for RVV

This patch would like to XFail the signbit-5 run test case for
RVV.  The case has a limitation noted at the beginning of the test
file: "This test does not work when the truth type does not match
vector type."  That is, the RVV vector truth type is not an integer type.

The target board of riscv-sim as below will pick up `-march=rv64gcv`
when building the run-test ELF.  Thus, RVV cannot bypass this test
case the way aarch64_sve does with the additional option `-march=armv8-a`.

  riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow

For RVV, we leverage dg-xfail-run-if for this case like `amdgcn`.

signbit-5.c passes with the configurations below, but we need
further investigation into the failures of the other configurations.

* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64imafdcv/-mabi=lp64d/-mcmodel=medlow

gcc/testsuite/ChangeLog:

* gcc.dg/signbit-5.c: XFail for the riscv_v.

Signed-off-by: Pan Li <pan2.li@intel.com>
4 months ago  CRIS: Fix PR middle-end/113109; "throw" failing
Hans-Peter Nilsson [Sat, 23 Dec 2023 23:10:32 +0000 (00:10 +0100)] 
CRIS: Fix PR middle-end/113109; "throw" failing

TL;DR: the "dse1" pass removed the eh-return-address store.  The
PA also marks its EH_RETURN_HANDLER_RTX as volatile, for the same
reason, as does Visium.  See PR32769 - it's the same thing on PA.

Conceptually, it's logical that stores to incoming args are
optimized out on the return path or if no loads are seen -
at least before epilogue expansion, when the subsequent load
isn't seen in the RTL, as is the case for the "dse1" pass.

I haven't looked into why this problem, that appeared for the PA
already in 2007, was seen for CRIS only recently (with
r14-6674-g4759383245ac97).

PR middle-end/113109
* config/cris/cris.cc (cris_eh_return_handler_rtx): New function.
* config/cris/cris-protos.h (cris_eh_return_handler_rtx): Prototype.
* config/cris/cris.h (EH_RETURN_HANDLER_RTX): Redefine to call
cris_eh_return_handler_rtx.

4 months ago  Daily bump.
GCC Administrator [Sun, 24 Dec 2023 00:17:37 +0000 (00:17 +0000)] 
Daily bump.

4 months ago  LoongArch: Add sign_extend pattern for 32-bit rotate shift
Xi Ruoyao [Sat, 16 Dec 2023 20:26:23 +0000 (04:26 +0800)] 
LoongArch: Add sign_extend pattern for 32-bit rotate shift

Remove a redundant sign extension.
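The kind of source that benefits is a 32-bit rotate whose result is consumed as a 64-bit value; a sketch (the function and values here are illustrative, not the new testcase):

```c
#include <assert.h>
#include <stdint.h>

/* A 32-bit rotate-right whose result is used as a sign-extended 64-bit
   value; the new rotrsi3_extend pattern lets the rotate and the
   sign_extend be matched together instead of emitting a separate
   extension instruction.  */
int64_t rotr_extend(uint32_t x, int n)
{
  uint32_t r = (x >> (n & 31)) | (x << (-n & 31));
  return (int32_t) r;   /* sign_extend of the SImode rotate result */
}
```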

gcc/ChangeLog:

* config/loongarch/loongarch.md (rotrsi3_extend): New
define_insn.

gcc/testsuite/ChangeLog:

* gcc.target/loongarch/rotrw.c: New test.

4 months ago  LoongArch: Implement FCCmode reload and cstore<ANYF:mode>4
Xi Ruoyao [Thu, 14 Dec 2023 17:49:40 +0000 (01:49 +0800)] 
LoongArch: Implement FCCmode reload and cstore<ANYF:mode>4

We used a branch to load a floating-point comparison result into a GPR.
This is very slow when the branch is not predictable.

Implement movfcc so we can reload FCCmode into GPRs, FPRs, and MEM.
Then implement cstore<ANYF:mode>4.
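The cstore pattern covers source like the following, where a floating-point comparison result is materialized as an integer rather than used to control a branch (a sketch, not the new movcf2gr testcase):

```c
#include <assert.h>

/* A floating-point comparison whose result is stored into a general
   register: previously LoongArch needed a branch to move the FCC
   result into a GPR; cstore<ANYF:mode>4 allows a branchless sequence. */
int fp_less(double a, double b)
{
  return a < b;
}
```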

gcc/ChangeLog:

* config/loongarch/loongarch-tune.h
(loongarch_rtx_cost_data::movcf2gr): New field.
(loongarch_rtx_cost_data::movcf2gr_): New method.
(loongarch_rtx_cost_data::use_movcf2gr): New method.
* config/loongarch/loongarch-def.cc
(loongarch_rtx_cost_data::loongarch_rtx_cost_data): Set movcf2gr
to COSTS_N_INSNS (7) and movgr2cf to COSTS_N_INSNS (15), based
on timing on LA464.
(loongarch_cpu_rtx_cost_data): Set movcf2gr and movgr2cf to
COSTS_N_INSNS (1) for LA664.
(loongarch_rtx_cost_optimize_size): Set movcf2gr and movgr2cf to
COSTS_N_INSNS (1) + 1.
* config/loongarch/predicates.md (loongarch_fcmp_operator): New
predicate.
* config/loongarch/loongarch.md (movfcc): Change to
define_expand.
(movfcc_internal): New define_insn.
(fcc_to_<X:mode>): New define_insn.
(cstore<ANYF:mode>4): New define_expand.
* config/loongarch/loongarch.cc
(loongarch_hard_regno_mode_ok_uncached): Allow FCCmode in GPRs
and FPRs.
(loongarch_secondary_reload): Reload FCCmode via FPR and/or GPR.
(loongarch_emit_float_compare): Call gen_reg_rtx instead of
loongarch_allocate_fcc.
(loongarch_allocate_fcc): Remove.
(loongarch_move_to_gpr_cost): Handle FCC_REGS -> GR_REGS.
(loongarch_move_from_gpr_cost): Handle GR_REGS -> FCC_REGS.
(loongarch_register_move_cost): Handle FCC_REGS -> FCC_REGS,
FCC_REGS -> FP_REGS, and FP_REGS -> FCC_REGS.

gcc/testsuite/ChangeLog:

* gcc.target/loongarch/movcf2gr.c: New test.
* gcc.target/loongarch/movcf2gr-via-fr.c: New test.

4 months ago  GCN, nvptx: Basic '__cxa_guard_{acquire,abort,release}' for C++ static local variable...
Thomas Schwinge [Wed, 20 Dec 2023 11:27:48 +0000 (12:27 +0100)] 
GCN, nvptx: Basic '__cxa_guard_{acquire,abort,release}' for C++ static local variables support

For now, for single-threaded GCN, nvptx target use only; extension for
multi-threaded offloading use is to follow later.  Eventually switch to
libstdc++-v3/libsupc++ proper.
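The guard functions exist to protect dynamic initialization of function-local statics; a sketch of the C++ construct that requires them:

```cpp
#include <cassert>

// Function-local statics with dynamic initializers are what
// __cxa_guard_acquire/__cxa_guard_release protect: the initializer
// must run exactly once, no matter how often the function is entered.
int init_count = 0;   // counts how often the guarded initializer runs

inline int &counter()
{
  static int c = (++init_count, 0);   // dynamic initialization -> guarded
  return c;
}
```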

libgcc/
* c++-minimal/README: New.
* c++-minimal/guard.c: New.
* config/gcn/t-amdgcn (LIB2ADD): Add it.
* config/nvptx/t-nvptx (LIB2ADD): Likewise.

4 months ago  MIPS: Don't add nan2008 option for -mtune=native
YunQiang Su [Sat, 23 Dec 2023 08:40:42 +0000 (16:40 +0800)] 
MIPS: Don't add nan2008 option for -mtune=native

Users may wish to use -mtune=native for performance tuning only.
Let's not cause trouble in that case.

gcc/

* config/mips/driver-native.cc (host_detect_local_cpu):
don't add nan2008 option for -mtune=native.

4 months ago  MIPS: Put the ret to the end of args of reconcat [PR112759]
YunQiang Su [Mon, 18 Dec 2023 23:36:52 +0000 (07:36 +0800)] 
MIPS: Put the ret to the end of args of reconcat [PR112759]

The function `reconcat` cannot append string(s) to NULL,
as the concat process will stop at the first NULL.

Let's always put `ret` at the end, as it may be NULL.
We keep using reconcat here because it makes it easier to add
detection of more hardware features later, for example via
hwcap.
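Why the argument order matters can be seen from how a libiberty-style concat treats its variadic arguments: NULL is the list terminator, so a NULL in the middle silently truncates. A small sketch of that behaviour (not the libiberty implementation itself):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of libiberty-style concat semantics: the first NULL argument
   terminates the list, so any argument that may be NULL (like `ret` in
   the patch) must come last, never first.  */
static char *concat_sketch(const char *first, ...)
{
  size_t len = 0;
  va_list ap;
  const char *s;

  /* First pass: measure, stopping at the first NULL.  */
  va_start(ap, first);
  for (s = first; s != NULL; s = va_arg(ap, const char *))
    len += strlen(s);
  va_end(ap);

  char *ret = malloc(len + 1);
  ret[0] = '\0';

  /* Second pass: copy, again stopping at the first NULL.  */
  va_start(ap, first);
  for (s = first; s != NULL; s = va_arg(ap, const char *))
    strcat(ret, s);
  va_end(ap);
  return ret;
}
```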

gcc/

PR target/112759
* config/mips/driver-native.cc (host_detect_local_cpu):
Put the ret to the end of args of reconcat.

4 months ago  RISC-V: Make PHI initial value occupy live V_REG in dynamic LMUL cost model analysis
Juzhe-Zhong [Fri, 22 Dec 2023 23:07:42 +0000 (07:07 +0800)] 
RISC-V: Make PHI initial value occupy live V_REG in dynamic LMUL cost model analysis

Consider this following case:

foo:
        ble     a0,zero,.L11
        lui     a2,%hi(.LANCHOR0)
        addi    sp,sp,-128
        addi    a2,a2,%lo(.LANCHOR0)
        mv      a1,a0
        vsetvli a6,zero,e32,m8,ta,ma
        vid.v   v8
        vs8r.v  v8,0(sp)                     ---> spill
.L3:
        vl8re32.v       v16,0(sp)            ---> reload
        vsetvli a4,a1,e8,m2,ta,ma
        li      a3,0
        vsetvli a5,zero,e32,m8,ta,ma
        vmv8r.v v0,v16
        vmv.v.x v8,a4
        vmv.v.i v24,0
        vadd.vv v8,v16,v8
        vmv8r.v v16,v24
        vs8r.v  v8,0(sp)                    ---> spill
.L4:
        addiw   a3,a3,1
        vadd.vv v8,v0,v16
        vadd.vi v16,v16,1
        vadd.vv v24,v24,v8
        bne     a0,a3,.L4
        vsetvli zero,a4,e32,m8,ta,ma
        sub     a1,a1,a4
        vse32.v v24,0(a2)
        slli    a4,a4,2
        add     a2,a2,a4
        bne     a1,zero,.L3
        li      a0,0
        addi    sp,sp,128
        jr      ra
.L11:
        li      a0,0
        ret

It picks an unexpected LMUL = 8.

The root cause is that we didn't include the PHI initial value in the dynamic LMUL calculation:

  # j_17 = PHI <j_11(9), 0(5)>                       ---> # vect_vec_iv_.8_24 = PHI <_25(9), { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }(5)>

We didn't count { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } as consuming a vector register, but it does allocate a vector register group for it.

This patch adds the missing count.  After this patch we pick the optimal LMUL (LMUL = M4):

foo:
ble a0,zero,.L9
lui a4,%hi(.LANCHOR0)
addi a4,a4,%lo(.LANCHOR0)
mv a2,a0
vsetivli zero,16,e32,m4,ta,ma
vid.v v20
.L3:
vsetvli a3,a2,e8,m1,ta,ma
li a5,0
vsetivli zero,16,e32,m4,ta,ma
vmv4r.v v16,v20
vmv.v.i v12,0
vmv.v.x v4,a3
vmv4r.v v8,v12
vadd.vv v20,v20,v4
.L4:
addiw a5,a5,1
vmv4r.v v4,v8
vadd.vi v8,v8,1
vadd.vv v4,v16,v4
vadd.vv v12,v12,v4
bne a0,a5,.L4
slli a5,a3,2
vsetvli zero,a3,e32,m4,ta,ma
sub a2,a2,a3
vse32.v v12,0(a4)
add a4,a4,a5
bne a2,zero,.L3
.L9:
li a0,0
ret
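A source loop of roughly this shape (a hypothetical reconstruction, not the actual pr113112-1.c testcase) produces the pattern above: the vectorized loop's induction-variable PHI starts from a zero vector, and that initial value occupies a live vector register group:

```c
#include <assert.h>

#define N 64
int out[N];

/* Hypothetical loop shape: the outer loop vectorizes over i with a
   vector IV (vid.v), and the inner accumulation carries a PHI whose
   initial value is a zero vector -- a live register group that the
   dynamic LMUL model must count.  */
void foo(int n)
{
  for (int i = 0; i < n; i++)
    {
      int s = 0;
      for (int j = 0; j < n; j++)
        s += i + j;
      out[i] = s;
    }
}
```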

Tested with --with-arch=gcv; no regressions.

PR target/113112

gcc/ChangeLog:

* config/riscv/riscv-vector-costs.cc (max_number_of_live_regs): Refine dump information.
(preferred_new_lmul_p): Make PHI initial value into live regs calculation.

gcc/testsuite/ChangeLog:

* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: New test.

4 months ago  Daily bump.
GCC Administrator [Sat, 23 Dec 2023 00:17:03 +0000 (00:17 +0000)] 
Daily bump.

4 months ago  c23: construct composite type for tagged types
Martin Uecker [Sun, 20 Aug 2023 12:08:23 +0000 (14:08 +0200)] 
c23: construct composite type for tagged types

Support for constructing composite types for structs and unions
in C23.

gcc/c:
* c-typeck.cc (composite_type_internal): Adapted from
composite_type to support structs and unions.
(composite_type): New wrapper function.
(build_conditional_operator): Return composite type.
* c-decl.cc (finish_struct): Allow NULL for
enclosing_struct_parse_info.

gcc/testsuite:
* gcc.dg/c23-tag-alias-6.c: New test.
* gcc.dg/c23-tag-alias-7.c: New test.
* gcc.dg/c23-tag-composite-1.c: New test.
* gcc.dg/c23-tag-composite-2.c: New test.
* gcc.dg/c23-tag-composite-3.c: New test.
* gcc.dg/c23-tag-composite-4.c: New test.
* gcc.dg/c23-tag-composite-5.c: New test.
* gcc.dg/c23-tag-composite-6.c: New test.
* gcc.dg/c23-tag-composite-7.c: New test.
* gcc.dg/c23-tag-composite-8.c: New test.
* gcc.dg/c23-tag-composite-9.c: New test.
* gcc.dg/c23-tag-composite-10.c: New test.
* gcc.dg/gnu23-tag-composite-1.c: New test.
* gcc.dg/gnu23-tag-composite-2.c: New test.
* gcc.dg/gnu23-tag-composite-3.c: New test.
* gcc.dg/gnu23-tag-composite-4.c: New test.
* gcc.dg/gnu23-tag-composite-5.c: New test.

4 months ago  OpenMP: Add prettyprinter support for context selectors.
Sandra Loosemore [Fri, 22 Dec 2023 16:35:45 +0000 (16:35 +0000)] 
OpenMP: Add prettyprinter support for context selectors.

With the change to use enumerators instead of strings to represent
context selector and selector-set names, the default tree-list output
for dumping selectors is less helpful for debugging and harder to use
in test cases.  This patch adds support for dumping context selectors
using syntax similar to that used for input to the compiler.

gcc/ChangeLog
* omp-general.cc (omp_context_name_list_prop): Remove static qualifier.
* omp-general.h (omp_context_name_list_prop): Declare.
* tree-cfg.cc (dump_function_to_file): Intercept
"omp declare variant base" attribute for special handling.
* tree-pretty-print.cc: Include omp-general.h.
(dump_omp_context_selector): New.
(print_omp_context_selector): New.
* tree-pretty-print.h (print_omp_context_selector): Declare.

4 months ago  combine: Don't optimize paradoxical SUBREG AND CONST_INT on WORD_REGISTER_OPERATIONS...
Jakub Jelinek [Fri, 22 Dec 2023 11:29:34 +0000 (12:29 +0100)] 
combine: Don't optimize paradoxical SUBREG AND CONST_INT on WORD_REGISTER_OPERATIONS targets [PR112758]

As discussed in the PR, the following testcase is miscompiled on RISC-V
64-bit, because num_sign_bit_copies in one spot pretends the bits in
a paradoxical SUBREG beyond SUBREG_REG SImode are all sign bit copies:
5444              /* For paradoxical SUBREGs on machines where all register operations
5445                 affect the entire register, just look inside.  Note that we are
5446                 passing MODE to the recursive call, so the number of sign bit
5447                 copies will remain relative to that mode, not the inner mode.
5448
5449                 This works only if loads sign extend.  Otherwise, if we get a
5450                 reload for the inner part, it may be loaded from the stack, and
5451                 then we lose all sign bit copies that existed before the store
5452                 to the stack.  */
5453              if (WORD_REGISTER_OPERATIONS
5454                  && load_extend_op (inner_mode) == SIGN_EXTEND
5455                  && paradoxical_subreg_p (x)
5456                  && MEM_P (SUBREG_REG (x)))
and then optimizes based on that in one place, but then the
r7-1077 optimization kicks in, treats all the upper bits in the
paradoxical SUBREG as undefined, and performs another optimization
based on that.  The r7-1077 optimization is done only if SUBREG_REG
is either a REG or a MEM.  From the discussion in the PR it seems
that if it is a REG, the upper bits in a paradoxical SUBREG on
WORD_REGISTER_OPERATIONS targets aren't really undefined, but we
can't tell what values they have because we don't see the operation
which computed that REG.  For a MEM it depends on load_extend_op:
if it is SIGN_EXTEND, the upper bits are sign bit copies and so not
really usable for the optimization; if ZERO_EXTEND, they are zeros
and usable for the optimization; for UNKNOWN I think it is better
to punt as well.

So, the following patch basically disables the r7-1077 optimization
on WORD_REGISTER_OPERATIONS unless we know it is still ok for sure,
which is either if sub_width is >= BITS_PER_WORD because then the
WORD_REGISTER_OPERATIONS rules don't apply, or load_extend_op on a MEM
is ZERO_EXTEND.
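The invariant at stake can be illustrated in plain C (a hypothetical illustration of the target semantics, not the pr112758.c testcase): on a 64-bit WORD_REGISTER_OPERATIONS target with sign-extending loads, an SImode value occupies a word register whose upper 32 bits are sign-bit copies, not undefined and not zero, so a 32-bit masking AND must not be optimized away:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the word-register view: a 32-bit value sits in a 64-bit
   register sign-extended.  The AND with a 32-bit mask is semantically
   required; treating the upper bits as "undefined" and dropping it
   would yield a wrong 64-bit result for negative inputs.  */
uint64_t mask_low32(int32_t v)
{
  uint64_t w = (uint64_t)(int64_t) v;   /* sign-extended word register */
  return w & 0xffffffffu;               /* the AND that must be kept   */
}
```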

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

PR rtl-optimization/112758
* combine.cc (make_compound_operation_int): Optimize AND of a SUBREG
based on nonzero_bits of SUBREG_REG and constant mask on
WORD_REGISTER_OPERATIONS targets only if it is a zero extending
MEM load.

* gcc.c-torture/execute/pr112758.c: New test.

4 months ago  symtab-thunks: Use aggregate_value_p even on is_gimple_reg_type returns [PR112941]
Jakub Jelinek [Fri, 22 Dec 2023 11:28:54 +0000 (12:28 +0100)] 
symtab-thunks: Use aggregate_value_p even on is_gimple_reg_type returns [PR112941]

Large/huge _BitInt types are returned in memory and the bitint lowering
pass right now relies on that.
The gimplification etc. use aggregate_value_p to see if it should be
returned in memory or not and use
  <retval> = _123;
  return <retval>;
rather than
  return _123;
But expand_thunk, used e.g. by IPA-ICF, was performing an optimization,
assuming an is_gimple_reg_type result is always returned in registers and
not calling aggregate_value_p in that case.  The following patch changes it to match
what the gimplification etc. are doing.
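The point is that "register type in GIMPLE" and "returned in registers" are distinct properties: large/huge _BitInt is a register type yet returned in memory. A big aggregate shows the memory-return convention the thunk must respect (an illustration of the calling-convention distinction, not of _BitInt itself):

```c
#include <assert.h>

/* A return value too large for registers: on most ABIs this is
   returned via a hidden memory argument, the same convention the
   bitint lowering pass relies on for large/huge _BitInt.  */
struct big { long v[8]; };

struct big make_big(long x)
{
  struct big b;
  for (int i = 0; i < 8; i++)
    b.v[i] = x + i;
  return b;   /* aggregate_value_p: returned in memory */
}
```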

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

PR tree-optimization/112941
* symtab-thunks.cc (expand_thunk): Check aggregate_value_p regardless
of whether is_gimple_reg_type (restype) or not.

* gcc.dg/bitint-60.c: New test.

4 months ago  lower-bitint: Handle unreleased SSA_NAMEs from earlier passes gracefully [PR113102]
Jakub Jelinek [Fri, 22 Dec 2023 11:28:06 +0000 (12:28 +0100)] 
lower-bitint: Handle unreleased SSA_NAMEs from earlier passes gracefully [PR113102]

On the following testcase, earlier passes leave around an unreleased
SSA_NAME - one with a non-GIMPLE_NOP SSA_NAME_DEF_STMT which isn't in any bb.
The following patch makes bitint lowering resistant against those;
the first hunk is where we'd, for certain kinds of stmts, try to amend
them and the latter is where we'd otherwise try to remove them,
neither of which works.  The other loops over all SSA_NAMEs either
already also check gimple_bb (SSA_NAME_DEF_STMT (s)) or it doesn't
matter that much if we process it or not (worst case it means e.g.
the pass wouldn't return early even when it otherwise could).

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

PR tree-optimization/113102
* gimple-lower-bitint.cc (gimple_lower_bitint): Handle unreleased
large/huge _BitInt SSA_NAMEs.

* gcc.dg/bitint-59.c: New test.

4 months ago  lower-bitint: Fix handle_cast ICE [PR113102]
Jakub Jelinek [Fri, 22 Dec 2023 11:27:05 +0000 (12:27 +0100)] 
lower-bitint: Fix handle_cast ICE [PR113102]

My recent change to use m_data[save_data_cnt] instead of
m_data[save_data_cnt + 1] when inside of a loop (m_bb is non-NULL)
broke the following testcase.  When we create a PHI node on the loop
using prepare_data_in_out, both m_data[save_data_cnt{, + 1}] are
computed and the fix was right, but there are also cases when, in
a loop (m_bb non-NULL), we emit a nested cast with too few limbs and
then just use constant indexes for all accesses - in that case
only m_data[save_data_cnt + 1] is initialized and m_data[save_data_cnt]
is NULL.  In those cases, we want to use the former.

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

PR tree-optimization/113102
* gimple-lower-bitint.cc (bitint_large_huge::handle_cast): Only
use m_data[save_data_cnt] if it is non-NULL.

* gcc.dg/bitint-58.c: New test.

4 months ago  Allow overriding EXPECT
Christophe Lyon [Thu, 21 Dec 2023 10:51:10 +0000 (10:51 +0000)] 
Allow overriding EXPECT

While investigating possible race conditions in the GCC testsuites
caused by buffering issues, I wanted to investigate workarounds
similar to GDB's READ1 [1], and I noticed it was not always possible
to override EXPECT when running 'make check'.

This patch adds the missing support in various Makefiles.

I was not able to test the patch for all the libraries updated here,
but I confirmed it works as intended/needed for libstdc++.

libatomic, libitm, libgomp already work as intended because their
Makefiles do not have:
MAKEOVERRIDES=

Tested on (native) aarch64-linux-gnu, confirmed the patch introduces
the behaviour I want in gcc, g++, gfortran and libstdc++.

I updated (but could not test) libgm2, libphobos, libquadmath and
libssp for consistency since their Makefiles have MAKEOVERRIDES=

libffi, libgo, libsanitizer seem to need a similar update, but they
are imported from their respective upstream repo, so should not be
patched here.

[1] https://github.com/bminor/binutils-gdb/blob/master/gdb/testsuite/README#L269
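For reference, the shape of the change is roughly as follows; the rule names in this Makefile fragment are a hypothetical sketch, not the actual Makefile.am contents:

```make
# Sketch: let "make check EXPECT=read1-wrapper" reach the testsuite.
# EXPECT defaults to plain "expect", but in Makefiles that clear
# MAKEOVERRIDES the value must be passed down to sub-makes explicitly,
# or a command-line override is lost on recursion.
EXPECT = expect

check-recursive:
	$(MAKE) -C testsuite check "EXPECT=$(EXPECT)"
```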

2023-12-21  Christophe Lyon  <christophe.lyon@linaro.org>

gcc/
* Makefile.in: Allow overriding EXPECT.

libgm2/
* Makefile.am: Allow overriding EXPECT.
* Makefile.in: Regenerate.

libphobos/
* Makefile.am: Allow overriding EXPECT.
* Makefile.in: Regenerate.

libquadmath/
* Makefile.am: Allow overriding EXPECT.
* Makefile.in: Regenerate.

libssp/
* Makefile.am: Allow overriding EXPECT.
* Makefile.in: Regenerate.

libstdc++-v3/
* Makefile.am: Allow overriding EXPECT.
* Makefile.in: Regenerate.

4 months ago  c++: testsuite: Remove testsuite_tr1.h includes
Ken Matsui [Wed, 20 Dec 2023 18:47:27 +0000 (10:47 -0800)] 
c++: testsuite: Remove testsuite_tr1.h includes

This patch removes the testsuite_tr1.h dependency from g++.dg/ext/is_*.C
tests since the header is supposed to be used only by libstdc++, not
by the front end.  This also includes test code consistency fixes.

For the record this fixes the test failures reported at
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641058.html

gcc/testsuite/ChangeLog:

* g++.dg/ext/is_array.C: Remove testsuite_tr1.h.  Add necessary
definitions accordingly.  Tweak macros for consistency across
test codes.
* g++.dg/ext/is_bounded_array.C: Likewise.
* g++.dg/ext/is_function.C: Likewise.
* g++.dg/ext/is_member_function_pointer.C: Likewise.
* g++.dg/ext/is_member_object_pointer.C: Likewise.
* g++.dg/ext/is_member_pointer.C: Likewise.
* g++.dg/ext/is_object.C: Likewise.
* g++.dg/ext/is_reference.C: Likewise.
* g++.dg/ext/is_scoped_enum.C: Likewise.

Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org>
Reviewed-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
4 months ago  LoongArch: Add asm modifiers to the LSX and LASX directives in the doc.
chenxiaolong [Tue, 5 Dec 2023 06:44:35 +0000 (14:44 +0800)] 
LoongArch: Add asm modifiers to the LSX and LASX directives in the doc.

gcc/ChangeLog:

* doc/extend.texi: Add modifiers to the vector of asm in the doc.
* doc/md.texi: Refine the description of the modifier 'f' in the doc.

4 months ago  c++: computed goto from catch block [PR81438]
Jason Merrill [Thu, 21 Dec 2023 02:34:49 +0000 (21:34 -0500)] 
c++: computed goto from catch block [PR81438]

As with 37722, we don't clean up the exception object if a computed goto
leaves a catch block, but we can warn about that.

PR c++/81438

gcc/cp/ChangeLog:

* decl.cc (poplevel_named_label_1): Handle leaving catch.
(check_previous_goto_1): Likewise.
(check_goto_1): Likewise.

gcc/testsuite/ChangeLog:

* g++.dg/ext/label15.C: Require indirect_jumps.
* g++.dg/ext/label16.C: New test.

4 months ago  Testsuite: Fix failures in g++.dg/analyzer/placement-new-size.C
Sandra Loosemore [Fri, 22 Dec 2023 02:10:32 +0000 (02:10 +0000)] 
Testsuite: Fix failures in g++.dg/analyzer/placement-new-size.C

This testcase was failing on uses of int8_t, int64_t, etc without
including <stdint.h>.

gcc/testsuite/ChangeLog
* g++.dg/analyzer/placement-new-size.C: Include <stdint.h>.  Also
add missing newline to end of file.

4 months ago  c++: sizeof... mangling with alias template [PR95298]
Jason Merrill [Sat, 18 Nov 2023 19:35:22 +0000 (14:35 -0500)] 
c++: sizeof... mangling with alias template [PR95298]

We were getting sizeof... mangling wrong when the argument after
substitution was a pack expansion that is not a simple T..., such as
list<T>... in variadic-mangle4.C or (A+1)... in variadic-mangle5.C.  In the
former case we ICEd; in the latter case we wrongly mangled it as sZ
<expression>.
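The general shape that exercised the bug looks like this (names here are illustrative, not the variadic-mangle4.C source): sizeof... of a pack that also appears in a non-trivial expansion such as list<T>...:

```cpp
#include <cassert>
#include <cstddef>

// sizeof... where the pack is also used in the expansion list<T>...,
// i.e. not a simple T... after substitution.
template <class... T> struct list {};

template <class... T>
constexpr std::size_t count(list<T>...)   // expansion list<T>...
{
  return sizeof...(T);
}
```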

PR c++/95298

gcc/cp/ChangeLog:

* mangle.cc (write_expression): Handle v18 sizeof... bug.
* pt.cc (tsubst_pack_expansion): Keep TREE_VEC for sizeof...
(tsubst_expr): Don't strip TREE_VEC here.

gcc/testsuite/ChangeLog:

* g++.dg/cpp0x/variadic-mangle2.C: Add non-member.
* g++.dg/cpp0x/variadic-mangle4.C: New test.
* g++.dg/cpp0x/variadic-mangle5.C: New test.
* g++.dg/cpp0x/variadic-mangle5a.C: New test.

4 months ago  testsuite: suppress mangling compatibility aliases
Jason Merrill [Thu, 21 Dec 2023 21:16:37 +0000 (16:16 -0500)] 
testsuite: suppress mangling compatibility aliases

Recently a mangling test failed on a target with no mangling alias support
because I hadn't updated the expected mangling, but it was still passing on
x86_64-pc-linux-gnu because of the alias for the old mangling.  So let's
avoid these aliases in mangling tests.

gcc/testsuite/ChangeLog:

* g++.dg/abi/mangle-arm-crypto.C: Specify -fabi-compat-version.
* g++.dg/abi/mangle-concepts1.C
* g++.dg/abi/mangle-neon-aarch64.C
* g++.dg/abi/mangle-neon.C
* g++.dg/abi/mangle-regparm.C
* g++.dg/abi/mangle-regparm1a.C
* g++.dg/abi/mangle-ttp1.C
* g++.dg/abi/mangle-union1.C
* g++.dg/abi/mangle1.C
* g++.dg/abi/mangle13.C
* g++.dg/abi/mangle15.C
* g++.dg/abi/mangle16.C
* g++.dg/abi/mangle18-1.C
* g++.dg/abi/mangle19-1.C
* g++.dg/abi/mangle20-1.C
* g++.dg/abi/mangle22.C
* g++.dg/abi/mangle23.C
* g++.dg/abi/mangle24.C
* g++.dg/abi/mangle25.C
* g++.dg/abi/mangle26.C
* g++.dg/abi/mangle27.C
* g++.dg/abi/mangle28.C
* g++.dg/abi/mangle29.C
* g++.dg/abi/mangle3-2.C
* g++.dg/abi/mangle3.C
* g++.dg/abi/mangle30.C
* g++.dg/abi/mangle31.C
* g++.dg/abi/mangle32.C
* g++.dg/abi/mangle33.C
* g++.dg/abi/mangle34.C
* g++.dg/abi/mangle35.C
* g++.dg/abi/mangle36.C
* g++.dg/abi/mangle37.C
* g++.dg/abi/mangle39.C
* g++.dg/abi/mangle40.C
* g++.dg/abi/mangle43.C
* g++.dg/abi/mangle44.C
* g++.dg/abi/mangle45.C
* g++.dg/abi/mangle46.C
* g++.dg/abi/mangle47.C
* g++.dg/abi/mangle48.C
* g++.dg/abi/mangle49.C
* g++.dg/abi/mangle5.C
* g++.dg/abi/mangle50.C
* g++.dg/abi/mangle51.C
* g++.dg/abi/mangle52.C
* g++.dg/abi/mangle53.C
* g++.dg/abi/mangle54.C
* g++.dg/abi/mangle55.C
* g++.dg/abi/mangle56.C
* g++.dg/abi/mangle57.C
* g++.dg/abi/mangle58.C
* g++.dg/abi/mangle59.C
* g++.dg/abi/mangle6.C
* g++.dg/abi/mangle60.C
* g++.dg/abi/mangle61.C
* g++.dg/abi/mangle62.C
* g++.dg/abi/mangle62a.C
* g++.dg/abi/mangle63.C
* g++.dg/abi/mangle64.C
* g++.dg/abi/mangle65.C
* g++.dg/abi/mangle66.C
* g++.dg/abi/mangle68.C
* g++.dg/abi/mangle69.C
* g++.dg/abi/mangle7.C
* g++.dg/abi/mangle70.C
* g++.dg/abi/mangle71.C
* g++.dg/abi/mangle72.C
* g++.dg/abi/mangle73.C
* g++.dg/abi/mangle74.C
* g++.dg/abi/mangle75.C
* g++.dg/abi/mangle76.C
* g++.dg/abi/mangle77.C
* g++.dg/abi/mangle78.C
* g++.dg/abi/mangle8.C
* g++.dg/abi/mangle9.C: Likewise.

4 months ago  Daily bump.
GCC Administrator [Fri, 22 Dec 2023 00:18:02 +0000 (00:18 +0000)] 
Daily bump.

4 months ago  libstdc++: implement std::generator
Arsen Arsenović [Thu, 2 Nov 2023 15:14:34 +0000 (16:14 +0100)] 
libstdc++: implement std::generator

libstdc++-v3/ChangeLog:

* include/Makefile.am: Install std/generator, bits/elements_of.h
as freestanding.
* include/Makefile.in: Regenerate.
* include/bits/version.def: Add __cpp_lib_generator.
* include/bits/version.h: Regenerate.
* include/precompiled/stdc++.h: Include <generator>.
* include/std/ranges: Include bits/elements_of.h
* include/bits/elements_of.h: New file.
* include/std/generator: New file.
* testsuite/24_iterators/range_generators/01.cc: New test.
* testsuite/24_iterators/range_generators/02.cc: New test.
* testsuite/24_iterators/range_generators/copy.cc: New test.
* testsuite/24_iterators/range_generators/except.cc: New test.
* testsuite/24_iterators/range_generators/synopsis.cc: New test.
* testsuite/24_iterators/range_generators/subrange.cc: New test.

4 months ago  libstdc++: add missing include in ranges_util.h
Arsen Arsenović [Sun, 5 Nov 2023 12:54:54 +0000 (13:54 +0100)] 
libstdc++: add missing include in ranges_util.h

libstdc++-v3/ChangeLog:

* include/bits/ranges_util.h: Add missing <bits/invoke.h>
include.