David Malcolm [Thu, 4 Jan 2024 14:19:06 +0000 (09:19 -0500)]
analyzer: add sarif properties for checker events
As another followup to r14-6057-g12b67d1e13b3cf, optionally add SARIF
property bags to threadFlowLocation objects when writing out diagnostic
paths, and add analyzer-specific properties to them.
This was useful for debugging PR analyzer/112790.
gcc/analyzer/ChangeLog:
* checker-event.cc: Include "diagnostic-format-sarif.h" and
"tree-logical-location.h".
(checker_event::maybe_add_sarif_properties): New.
(superedge_event::maybe_add_sarif_properties): New.
(superedge_event::superedge_event): Add comment.
* checker-event.h (checker_event::maybe_add_sarif_properties): New
decl.
(superedge_event::maybe_add_sarif_properties): New decl.
gcc/ChangeLog:
* diagnostic-format-sarif.cc
(sarif_builder::make_logical_location_object): Convert to...
(make_sarif_logical_location_object): ...this.
(sarif_builder::set_any_logical_locs_arr): Update for above
change.
(sarif_builder::make_thread_flow_location_object): Call
maybe_add_sarif_properties on each diagnostic_event.
* diagnostic-format-sarif.h (class logical_location): New forward
decl.
(make_sarif_logical_location_object): New decl.
* diagnostic-path.h (class sarif_object): New forward decl.
(diagnostic_event::maybe_add_sarif_properties): New vfunc.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Kuan-Lin Chen [Wed, 20 Dec 2023 07:18:59 +0000 (15:18 +0800)]
RISC-V: Nan-box the result of movhf on soft-fp16
According to spec, fmv.h checks if the input operands are correctly
NaN-boxed. If not, the input value is treated as an n-bit canonical NaN.
This patch fixes the issue that operands returned by the soft-fp16 libgcc
routines (e.g., __truncdfhf2) were not correctly NaN-boxed.
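A minimal C sketch of the kind of code affected (hypothetical; the committed
test is gcc.target/riscv/_Float16-nanboxing.c):
/* With soft-fp16, the conversion goes through the __truncdfhf2 libcall,
   and the value it returns must be NaN-boxed before any fmv.h sees it.  */
_Float16 g;
void
store_half (double d)
{
  g = (_Float16) d;  /* calls __truncdfhf2; result must be NaN-boxed */
}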
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_legitimize_move): Expand movhf
with a NaN-boxed value.
* config/riscv/riscv.md (*movhf_softfloat_unspec): New pattern.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/_Float16-nanboxing.c: New test.
Co-authored-by: Patrick Lin <patrick@andestech.com>
Co-authored-by: Rufus Chen <rufus@andestech.com>
Co-authored-by: Monk Chiang <monk.chiang@sifive.com>
Roger Sayle [Thu, 4 Jan 2024 10:49:33 +0000 (10:49 +0000)]
Improved RTL expansion of field assignments into promoted registers.
This patch fixes PR rtl-optimization/104914 by tweaking/improving the way
the fields are written into a pseudo register that needs to be kept sign
extended.
<bb 5> [local count: 1073741824]:
val ={v} {CLOBBER(eol)};
return;
}
Here four bytes are being sequentially written into the SImode value
val. On some platforms, such as MIPS64, this SImode value is kept in
a 64-bit register, suitably sign-extended. The function expand_assignment
contains logic to handle this via SUBREG_PROMOTED_VAR_P (around line 6264
in expr.cc) which outputs an explicit extension operation after each
store_field (typically insv) to such promoted/extended pseudos.
The first observation is that there's no need to perform sign extension
after each byte in the example above; the extension is only required
after changes to the most significant byte (i.e. to a field that overlaps
the most significant bit).
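For instance, a hedged C sketch of the kind of code involved (not the PR's
exact testcase):
/* val is SImode but lives sign-extended in a 64-bit register on MIPS64.  */
void
write_bytes (int *out, const unsigned char *p)
{
  int val = 0;
  unsigned char *q = (unsigned char *) &val;
  q[0] = p[0];  /* no re-extension needed */
  q[1] = p[1];
  q[2] = p[2];
  q[3] = p[3];  /* overlaps the sign bit: re-extend after this store */
  *out = val;
}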
The bug fix is actually a bit more subtle, but at this point during
code expansion it's not safe to use a SUBREG when sign-extending this
field. Currently, GCC generates (sign_extend:DI (subreg:SI (reg:DI) 0)),
but combine (and other RTL optimizers) later realizes that, because SImode
values are always sign-extended in their 64-bit hard registers,
this is a no-op and eliminates it. The trouble is that it's unsafe to
refer to the SImode lowpart of a 64-bit register using SUBREG at those
critical points when temporarily the value isn't correctly sign-extended,
and the usual backend invariants don't hold. At these critical points,
the middle-end needs to use an explicit TRUNCATE rtx (as this isn't a
TRULY_NOOP_TRUNCATION), so that the explicit sign-extension looks like
(sign_extend:DI (truncate:SI (reg:DI))), which avoids the problem.
2024-01-04 Roger Sayle <roger@nextmovesoftware.com>
Jeff Law <jlaw@ventanamicro.com>
gcc/ChangeLog
PR rtl-optimization/104914
* expr.cc (expand_assignment): When target is SUBREG_PROMOTED_VAR_P
a sign or zero extension is only required if the modified field
overlaps the SUBREG's most significant bit. On MODE_REP_EXTENDED
targets, don't refer to the temporarily incorrectly extended value
using a SUBREG, but instead generate an explicit TRUNCATE rtx.
Juzhe-Zhong [Thu, 4 Jan 2024 08:22:48 +0000 (16:22 +0800)]
RISC-V: Make liveness estimation be aware of .vi variant
Consider this following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] + 15;
int tmp2 = tmp + b[i];
c[i] = tmp2 + b[i];
d[i] = tmp + tmp2 + b[i];
}
}
The current dynamic LMUL cost model chooses LMUL = 4 because we count the "15" as
consuming one vector register group, which is not accurate.
We teach the dynamic LMUL cost model to be aware of the potential .vi-variant
instruction transformation, so that we can choose LMUL = 8 according to a more
accurate cost model.
Kito Cheng [Mon, 25 Dec 2023 08:45:21 +0000 (16:45 +0800)]
RISC-V: Fix misaligned stack offset for interrupt function
An `interrupt` function will back up the fcsr register, but the save slot was
fixed to SImode. That is not a big issue, since fcsr only uses 8 bits so far;
however, the offset should still use UNITS_PER_WORD to prevent the stack offset
from becoming non-8-byte aligned, which causes problems on RV64.
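A minimal sketch of the kind of function affected (illustrative, not the
committed test):
/* On RISC-V, an interrupt handler that may clobber FP state gets fcsr
   saved/restored in its prologue/epilogue; the save slot's offset must
   stay UNITS_PER_WORD-aligned on RV64.  */
extern volatile float scale;
void __attribute__ ((interrupt))
isr (void)
{
  scale *= 2.0f;
}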
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_for_each_saved_reg): Adjust the
offset of fcsr.
chenxiaolong [Fri, 29 Dec 2023 07:48:06 +0000 (15:48 +0800)]
LoongArch: testsuite: Add loongarch to gcc.dg/vect/slp-26.c.
On the LoongArch architecture, GCC supports the vectorization tested
by vect/slp-26.c, but loongarch is not checked for in the dg-final
directives. Add loongarch to the appropriate dg-final directives.
chenxiaolong [Fri, 29 Dec 2023 01:45:15 +0000 (09:45 +0800)]
LoongArch: testsuite: Fix FAIL in lasx-xvstelm.c file.
After the cost model was implemented on the LoongArch architecture, the GCC
compiler turns this feature on by default, which causes the
lasx-xvstelm.c test to fail. Analysis shows that this test case only
generates the vectorized instructions the test scans for after
disabling the cost model with the "-fno-vect-cost-model"
compilation option.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lasx/lasx-xvstelm.c: Add compile
option "-fno-vect-cost-model" to dg-options.
There are currently two versions of the implementations of constant
vector permutation: loongarch_expand_vec_perm_const_1 and
loongarch_expand_vec_perm_const_2. The implementations of the two
versions are different. Currently, only the implementation of
loongarch_expand_vec_perm_const_1 is used for 256-bit vectors. We
hope to streamline the code as much as possible while retaining the
better-performing implementation of the two. By repeatedly testing
spec2006 and spec2017, we arrived at the merged version below.
Compared with the pre-merger version, the number of lines of code
in loongarch.cc has been reduced by 888 lines. At the same time,
the performance of SPECint2006 under Ofast has been improved by 0.97%,
and the performance of SPEC2017 fprate has been improved by 0.27%.
YunQiang Su [Fri, 29 Dec 2023 16:17:52 +0000 (00:17 +0800)]
MIPS: Add pattern insqisi_extended and inshisi_extended
This match pattern allows combining (zero_extract:DI 8, 24, QI)
with a sign extension into a 32-bit INS instruction on TARGET_64BIT.
For SImode, if the sign bit is modified by bitops, we need a
sign-extend operation. The 32-bit INS instruction guarantees that the
result is sign-extended, and the QImode src register is safe for INS, too.
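An illustrative C function for the pattern (a sketch, not the motivating code):
/* Insert a byte at bits [24,31] of a 32-bit value; on a 64-bit MIPS
   target the new pattern lets combine emit a single INS, whose result
   is guaranteed to be sign-extended.  */
unsigned int
put_top_byte (unsigned int x, unsigned char c)
{
  return (x & 0x00ffffff) | ((unsigned int) c << 24);
}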
YunQiang Su [Fri, 29 Dec 2023 17:34:28 +0000 (01:34 +0800)]
MIPS: Implement TARGET_INSN_COSTS
When combining some instructions, the generic `rtx_cost`
may overestimate the cost of the resulting RTL, because
the RTL may be quite complex and `rtx_cost` has no
information that this RTL can be converted to simple
hardware instruction(s).
In this case, let's use `insn_count * perf_ratio` to
estimate the cost if both of them are available.
Otherwise fall back to pattern_cost.
When not optimizing for speed, use the length as the cost.
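A hedged sketch of the shape such a hook might take (the attribute accessors
follow from the insn_count attribute and the perf_ratio attribute in the
companion commit below; the details are illustrative, not the committed code):
static int
mips_insn_cost (rtx_insn *insn, bool speed)
{
  /* When not optimizing for speed, the length is the cost.  */
  if (!speed)
    return get_attr_length (insn);
  /* Use insn_count * perf_ratio when both are available.  */
  int count = get_attr_insn_count (insn);
  int ratio = get_attr_perf_ratio (insn);
  if (count > 0 && ratio > 0)
    return COSTS_N_INSNS (count * ratio);
  /* Otherwise fall back to the generic estimate.  */
  return pattern_cost (PATTERN (insn), speed);
}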
gcc
* config/mips/mips.cc (mips_insn_cost): New function.
gcc/testsuite
* gcc.target/mips/data-sym-multi-pool.c: Skip Os or -O0.
YunQiang Su [Fri, 29 Dec 2023 16:17:52 +0000 (00:17 +0800)]
MIPS: define_attr perf_ratio in mips.md
The accurate cost of a pattern can be obtained with
insn_count * perf_ratio.
The default value is set to 0 instead of 1, since we
need to distinguish the default value from a value that was
really set for a pattern. Since it is not set for most
patterns yet, to use it we need to be sure that its
value is greater than 0.
This attr will be used in `mips_insn_cost`.
gcc
* config/mips/mips.md (perf_ratio): New attribute.
Juzhe-Zhong [Wed, 3 Jan 2024 22:38:43 +0000 (06:38 +0800)]
RISC-V: Fix bug of earliest fusion for infinite loop [VSETVL PASS]
As shown in PR113206 and PR113209, the bug happens in the following situation:
li a4,32
...
vsetvli zero,a4,e8,m8,ta,ma
...
slliw a4,a3,24
sraiw a4,a4,24
bge a3,a1,.L8
sb a4,%lo(e)(a0)
vsetvli zero,a4,e8,m8,ta,ma --> a4 is a polluted value, not the expected "32".
...
.L7:
j .L7 ---> infinite loop.
The root cause is that the infinite loop confuses the earliest computation and lets earliest fusion
happen in an unexpected place.
Disable blocks that belong to an infinite loop to fix this bug, since applying earliest LCM fusion
to an infinite loop seems quite complicated and we don't see any benefit.
Note that disabling earliest fusion on infinite loops doesn't hurt vsetvli performance;
instead, it improves the codegen of some cases.
Tested on both RV32 and RV64, no regressions.
PR target/113206
PR target/113209
gcc/ChangeLog:
* config/riscv/riscv-vsetvl.cc (invalid_opt_bb_p): New function.
(pre_vsetvl::compute_lcm_local_properties): Disable earliest fusion on
blocks belonging to an infinite loop.
(pre_vsetvl::emit_vsetvl): Remove fake edges.
* config/riscv/t-riscv: Add a new include file.
Patrick Palka [Wed, 3 Jan 2024 20:43:28 +0000 (15:43 -0500)]
c++: bad direct reference binding via conv fn [PR113064]
When computing a direct reference binding via a conversion function
yields a bad conversion, reference_binding incorrectly commits to that
conversion instead of trying a conversion via a temporary. This causes
us to reject the first testcase because the bad direct conversion to B&&
via the && conversion operator prevents us from considering the good
conversion via the & conversion operator and a temporary. (Similar
story for the second testcase.)
This patch fixes this by making reference_binding not prematurely commit
to such a bad direct conversion. We still fall back to it if using a
temporary also fails (otherwise the diagnostic for cpp0x/explicit7.C
regresses).
PR c++/113064
gcc/cp/ChangeLog:
* call.cc (reference_binding): Still try a conversion via a
temporary if a direct conversion was bad.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/rv-conv4.C: New test.
* g++.dg/cpp0x/rv-conv5.C: New test.
gcc/c/
* c-parser.cc (c_parser_omp_clause_name): Move handling of indirect
clause to correspond to alphabetical order.
gcc/cp/
* parser.cc (cp_parser_omp_clause_name): Move handling of indirect
clause to correspond to alphabetical order.
gcc/
* tree-core.h (enum omp_clause_code): Move OMP_CLAUSE_INDIRECT to before
OMP_CLAUSE__SIMDUID_.
* tree.cc (omp_clause_num_ops): Update position of entry for
OMP_CLAUSE_INDIRECT to correspond with omp_clause_code.
(omp_clause_code_name): Likewise.
nvptx: Restructure code generating function map labels
This restructures the code generating FUNC_MAP and IND_FUNC_MAP labels
in the assembly code for mkoffload to consume, hopefully making it a
bit clearer and easier to search for.
Jakub Jelinek [Wed, 3 Jan 2024 11:11:32 +0000 (12:11 +0100)]
Small tweaks for update-copyright.py
update-copyright.py --this-year FAILs on two spots in the modula2
directories.
One is gpl_v3_without_node.texi, I think that is similar to other
license files which we already exclude from updates.
And the other is GmcOptions.cc, which has lines like
mcPrintf_printf0 ((const char *) "Copyright ", 10);
mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
which update-copyright.py obviously can't grok. The file is generated
and doesn't contain a normal copyright year which should be updated, so I think
it is also ok to skip it.
Xi Ruoyao [Sat, 30 Dec 2023 13:40:11 +0000 (21:40 +0800)]
LoongArch: Provide fmin/fmax RTL pattern for vectors
We already had smin/smax RTL patterns using the vfmin/vfmax instructions.
But for smin/smax, it's unspecified what will happen if either operand
is a NaN. So we would not vectorize the loop with
-fno-finite-math-only (the default for all optimization levels except
-Ofast).
But the LoongArch vfmin/vfmax instructions are IEEE-754-2008 conformant, so we
can also use them for fmin/fmax and vectorize the loop.
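For example, a loop of this shape (a minimal sketch) is the kind that can now
be vectorized:
#include <math.h>
/* fmaxf has IEEE fmax semantics, so NaN operands are handled safely and
   the IEEE-conformant vfmax can implement the loop body directly.  */
void
vec_fmax (float *restrict c, const float *restrict a,
          const float *restrict b, int n)
{
  for (int i = 0; i < n; i++)
    c[i] = fmaxf (a[i], b[i]);
}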
gcc/ChangeLog:
* config/loongarch/simd.md (fmax<mode>3): New define_insn.
(fmin<mode>3): Likewise.
(reduc_fmax_scal_<mode>3): New define_expand.
(reduc_fmin_scal_<mode>3): Likewise.
Patrick Palka [Wed, 3 Jan 2024 02:31:20 +0000 (21:31 -0500)]
libstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175]
The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
inadvertently increased the execution time of this test by over 5x due
to making the two main loops actually run in the signed_p case instead
of being dead code.
To compensate, this patch cuts the relevant loops' range [-1000,1000] by
10x as proposed in the PR. This shouldn't significantly weaken the test
since the same important edge cases are still checked in the smaller range
and/or elsewhere. On my machine this reduces the test's execution time by
roughly 10x (and 1.6x relative to before r14-205).
PR testsuite/113175
libstdc++-v3/ChangeLog:
* testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
(test03): Likewise.
Jun Sha (Joshua) [Fri, 29 Dec 2023 04:10:44 +0000 (12:10 +0800)]
RISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns
This patch replaces csr_operand by vector_length_operand in the vsetvl
patterns. This allows future changes in the vector code (i.e. in the
vector_length_operand predicate) without affecting scalar patterns that
use the csr_operand predicate.
gcc/ChangeLog:
* config/riscv/vector.md:
Use vector_length_operand for vsetvl patterns.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
Juzhe-Zhong [Tue, 2 Jan 2024 07:26:55 +0000 (15:26 +0800)]
RISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern
In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx patterns with dummy len and dummy mask using an
overly simple solution, which causes a redundant vsetvli in the following case:
vsetvli a5,a2,e8,m1,ta,ma
vle32.v v8,0(a0)
vsetivli zero,16,e32,m4,tu,mu ----> We should apply VLMAX instead of a CONST_INT AVL
slli a4,a5,2
vand.vv v0,v8,v16
vand.vv v4,v8,v12
vmseq.vi v0,v0,0
sub a2,a2,a5
vneg.v v4,v8,v0.t
vsetvli zero,a5,e32,m4,ta,ma
Actually, we should not elide the VLMAX situation that has an AVL in the range [0,31].
After removing the check above, we will have the following issue:
vsetivli zero,4,e32,m1,ta,ma
vlseg4e32.v v4,(a5)
vlseg4e32.v v12,(a3)
vsetvli a5,zero,e32,m1,tu,ma ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5
Since all the following operations (vfadd.vf etc.) are COND_LEN_xxx with dummy len and dummy mask,
we add the simplification of dummy len and dummy mask into the VLMAX TA and MA policy.
So, after this patch, both cases now produce optimal codegen:
case 1:
vsetvli a5,a2,e32,m1,ta,mu
vle32.v v2,0(a0)
slli a4,a5,2
vand.vv v1,v2,v3
vand.vv v0,v2,v4
sub a2,a2,a5
vmseq.vi v0,v0,0
vneg.v v1,v2,v0.t
vse32.v v1,0(a1)
This patch adds a new tuning option
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully
pipelined FMAs in reassociation. Also, set this option by default
for Ampere CPUs.
gcc/ChangeLog:
* config/aarch64/aarch64-tuning-flags.def
(AARCH64_EXTRA_TUNING_OPTION): New tuning option
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
* config/aarch64/aarch64.cc
(aarch64_override_options_internal): Set
param_fully_pipelined_fma according to tuning option.
* config/aarch64/tuning_models/ampere1.h: Add
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
* config/aarch64/tuning_models/ampere1a.h: Likewise.
* config/aarch64/tuning_models/ampere1b.h: Likewise.
* config/riscv/iterators.md: Add rotate insn name.
* config/riscv/riscv.md: Add new insn names for crypto vector.
* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
* config/riscv/vector.md: Add the corresponding attr for crypto vector.
* config/riscv/vector-crypto.md: New file with the machine descriptions
for crypto vector.
Roger Sayle [Sun, 31 Dec 2023 21:37:24 +0000 (21:37 +0000)]
i386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c
This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, that started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.
The function:
unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
return x+y;
}
2023-12-31 Uros Bizjak <ubizjak@gmail.com>
Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/43644
* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
order of instructions after split, to minimize number of moves.
Testing for mmix (a 64-bit target using Knuth's simulator). The test
is largely pruned for simulators, but still needs 5m57s on my laptop
from 3.5 years ago to run to successful completion. Perhaps slow
hosted targets could also have problems, so I'm increasing the timeout
limit, not just for simulators but for everyone, and by more than a
factor of 2.
* testsuite/20_util/hash/quality.cc: Increase timeout by a factor of 3.
libstdc++: [_Hashtable] Extend the small size optimization
A number of methods were still not using the small size optimization, which
is to prefer an O(N) linear search to a hash computation as long as N is small.
libstdc++-v3/ChangeLog:
* include/bits/hashtable.h: Move comment about all equivalent values
being next to each other in the class documentation header.
(_M_reinsert_node, _M_merge_unique): Implement small size optimization.
(_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
Add benchmarks on insert with hint and on the before-begin cache.
libstdc++-v3/ChangeLog:
* testsuite/performance/23_containers/insert/54075.cc: Add lookup on unknown entries
w/o copy to see potential impact of memory fragmentation enhancements.
* testsuite/performance/23_containers/insert/unordered_multiset_hint.cc: Enhance hash
functor to make it perfect, exactly 1 entry per bucket. Also use hash functor tagged as
slow or not to bench w/o hash code cache.
* testsuite/performance/23_containers/insert/unordered_set_hint.cc: New test case. Like
previous one but using std::unordered_set.
* testsuite/performance/23_containers/insert/unordered_set_range_insert.cc: New test case.
Check performance of range-insertion compared to individual insertions.
* testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: Add same bench
but after a copy to demonstrate impact of enhancements regarding memory fragmentation.
Martin Uecker [Fri, 22 Dec 2023 16:32:34 +0000 (17:32 +0100)]
C: Fix type compatibility for structs with variable sized fields.
This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88f,
which fails for -march=... because the DECL_FIELD_BIT_OFFSETs are set
inconsistently for types with and without a variable-sized field. This
is fixed by testing for DECL_ALIGN instead. The code is further
simplified by removing some unnecessary conditions, i.e. anon_field is
set unconditionally and all fields are assumed to be FIELD_DECLs.
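A hypothetical illustration of the kind of construct involved (the committed
test is gcc.dg/gnu23-tag-4.c; this sketch merely shows the shape):
void
f (int n)
{
  struct s { char buf[n]; } *p = 0;
  {
    /* Under the C23 tag rules this redeclaration must be a compatible
       type, which requires consistent field-layout bookkeeping for the
       variable-sized member.  */
    struct s { char buf[n]; } *q = p;
    (void) q;
  }
}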
Tamar Christina [Fri, 29 Dec 2023 15:58:29 +0000 (15:58 +0000)]
AArch64: Update costing for vector conversions [PR110625]
In gimple the operation
short _8;
double _9;
_9 = (double) _8;
denotes two operations on AArch64. First we have to widen from short to
long and then convert this integer to a double.
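A hedged sketch of the kind of loop involved (not the attached testcase):
/* Each short element is first widened to int/long, then that integer
   is converted to double: two operations per element on AArch64.  */
void
widen_convert (double *restrict d, const short *restrict s, int n)
{
  for (int i = 0; i < n; i++)
    d[i] = (double) s[i];
}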
Currently however we only count the widen/truncate operations:
(double) _5 6 times vec_promote_demote costs 12 in body
(double) _5 12 times vec_promote_demote costs 24 in body
but not the actual conversion operation, which needs an additional 12
instructions in the attached testcase. Without this the attached testcase ends
up incorrectly thinking that it's beneficial to vectorize the loop at a very
high VF = 8 (4x unrolled).
Because we can't change the mid-end to account for this, the costing code in the
backend now keeps track of whether the previous operation was a
promotion/demotion and adjusts the expected number of instructions to:
1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are
different, double it, since we need to convert and promote.
2. If the previous operation was a demotion/promotion, reduce the
cost of the current operation by the amount we added extra in the last one.
with the patch we get:
(double) _5 6 times vec_promote_demote costs 24 in body
(double) _5 12 times vec_promote_demote costs 36 in body
which correctly accounts for 30 operations.
This fixes the 16% regression in imagick in SPECCPU 2017 reported on Neoverse N2
and using the new generic Armv9-a cost model.
gcc/ChangeLog:
PR target/110625
* config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
Adjust throughput and latency calculations for vector conversions.
(class aarch64_vector_costs): Add m_num_last_promote_demote.
Xi Ruoyao [Fri, 29 Dec 2023 12:04:34 +0000 (20:04 +0800)]
LoongArch: Fix the format of bstrins_<mode>_for_ior_mask condition (NFC)
gcc/ChangeLog:
* config/loongarch/loongarch.md (bstrins_<mode>_for_ior_mask):
For the condition, remove unneeded trailing "\" and move "&&" to
follow GNU coding style. NFC.
However, the sliding-window algorithm just won't detect the pcalau12i/fld
pair to be optimized. Using a define_insn_and_rewrite in the combine pass
works around the issue.
* gcc.target/loongarch/explicit-relocs-auto-single-load-store-2.c:
New test.
* gcc.target/loongarch/explicit-relocs-auto-single-load-store-3.c:
New test.
The post-reload splitter currently allows xmm16+ registers with TARGET_EVEX512.
The splitter changes SFmode of the output operand to V4SFmode, but the vector
mode is currently unsupported in xmm16+ without TARGET_AVX512VL. lowpart_subreg
returns NULL_RTX in this case and the compilation fails with invalid RTX.
The patch removes support for x/ymm16+ registers with TARGET_EVEX512. The
support should be restored once ix86_hard_regno_mode_ok is fixed to allow
16-byte modes in x/ymm16+ with TARGET_EVEX512.
PR target/113133
gcc/ChangeLog:
* config/i386/i386.md
(TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter):
Do not handle xmm16+ with TARGET_EVEX512.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr113133-1.c: New test.
* gcc.target/i386/pr113133-2.c: New test.
Andrew Pinski [Fri, 29 Dec 2023 04:26:01 +0000 (20:26 -0800)]
Fix gen-vect-26.c testcase after loops with multiple exits [PR113167]
This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
`#pragma GCC novector` in front of the loop that is doing the checking
of the result. We only want to test the first loop to see if it can be
vectorized.
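The fix in sketch form (the pragma keeps the checking loop scalar, so the
tree-dump scan only matches the first loop; names here are illustrative):
#include <stdlib.h>
#define N 128
extern int a[N], b[N];
void
check (void)
{
#pragma GCC novector
  for (int i = 0; i < N; i++)
    if (a[i] != b[i])
      abort ();
}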
Committed as obvious after testing on x86_64-linux-gnu with -m32.
gcc/testsuite/ChangeLog:
PR testsuite/113167
* gcc.dg/tree-ssa/gen-vect-26.c: Mark the test/check loop
as novector.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Juzhe-Zhong [Wed, 27 Dec 2023 02:38:26 +0000 (10:38 +0800)]
RISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in range [0, 31]
Notice we have this following situation:
vsetivli zero,4,e32,m1,ta,ma
vlseg4e32.v v4,(a5)
vlseg4e32.v v12,(a3)
vsetvli a5,zero,e32,m1,tu,ma ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5
The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when len == NUNITS.
However, we don't need to transform all of them, since when len is in the range [0,31] we don't
need to consume a scalar register.
* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
(expand_load_store): Disallow transformation into VLMAX when len is in the range [0,31].
(expand_cond_len_op): Ditto.
(expand_gather_scatter): Ditto.
(expand_lanes_load_store): Ditto.
(expand_fold_extract_last): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.
No functional changes.
gcc/ChangeLog:
* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
and ix86_expand_{unary|binary}_operator prototypes.
* config/i386/i386.md: Cosmetic changes with the usage of
TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
and ix86_{unary|binary}_operator_ok function calls.
Juzhe-Zhong [Thu, 28 Dec 2023 01:33:32 +0000 (09:33 +0800)]
RISC-V: Make dynamic LMUL cost model more accurate for conversion codes
Notice that the current dynamic LMUL cost model is not accurate for conversion codes.
Refine it; an existing case now changes from choosing LMUL = 4 to LMUL = 8.
Xi Ruoyao [Tue, 26 Dec 2023 20:28:56 +0000 (04:28 +0800)]
LoongArch: Fix infinite secondary reloading of FCCmode [PR113148]
The GCC internal doc says:
X might be a pseudo-register or a 'subreg' of a pseudo-register,
which could either be in a hard register or in memory. Use
'true_regnum' to find out; it will return -1 if the pseudo is in
memory and the hard register number if it is in a register.
So "MEM_P (x)" is not enough for checking if we are reloading from/to
the memory. This bug has caused reload pass to stall and finally ICE
complaining with "maximum number of generated reload insns per insn
achieved", since r14-6814.
Check if "true_regnum (x)" is -1 besides "MEM_P (x)" to fix the issue.
gcc/ChangeLog:
PR target/113148
* config/loongarch/loongarch.cc (loongarch_secondary_reload):
Check if regno == -1 besides MEM_P (x) for reloading FCCmode
from/to FPR to/from memory.
gcc/testsuite/ChangeLog:
PR target/113148
* gcc.target/loongarch/pr113148.c: New test.
* gcc.target/loongarch/rotl-with-rotr.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-d.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-d.c: New test.
* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): New function.
(get_first_lane_point): Ditto.
(get_last_lane_point): Ditto.
(max_number_of_live_regs): Refine live point dump.
(compute_estimated_lmul): Make unknown NITERS loop be aware of liveness.
(costs::better_main_loop_than_p): Ditto.
* config/riscv/riscv-vector-costs.h (struct stmt_point): Add new member.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c:
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-3.c: New test.
Chenghui Pan [Fri, 22 Dec 2023 08:18:44 +0000 (16:18 +0800)]
LoongArch: Fix ICE when passing two same vector argument consecutively
The following code causes an ICE on the LoongArch target:
#include <lsxintrin.h>
extern void bar (__m128i, __m128i);
__m128i a;
void
foo ()
{
bar (a, a);
}
It is caused by a missing constraint definition in mov<mode>_lsx. This
patch fixes the template and removes the unnecessary processing from
the loongarch_split_move () function.
This patch also cleans up the redundant definitions in
loongarch_split_move () and loongarch_split_move_p ().
gcc/ChangeLog:
* config/loongarch/lasx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
* config/loongarch/loongarch-protos.h
(loongarch_split_move): Remove unnecessary argument.
(loongarch_split_move_insn_p): Delete.
(loongarch_split_move_insn): Delete.
* config/loongarch/loongarch.cc
(loongarch_split_move_insn_p): Delete.
(loongarch_load_store_insns): Use loongarch_split_move_p
directly.
(loongarch_split_move): Remove the unnecessary processing.
(loongarch_split_move_insn): Delete.
* config/loongarch/lsx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lsx/lsx-mov-1.c: New test.
Chenghui Pan [Fri, 22 Dec 2023 08:22:03 +0000 (16:22 +0800)]
LoongArch: Fix insn output of vec_concat templates for LASX.
When investigating the failure of gcc.dg/vect/slp-reduc-sad.c, the following
instruction block was found to be generated by vec_concatv32qi (which is
generated by vec_initv32qiv16qi) at the entry of the foo() function; it
causes the reversal of the vec_initv32qiv16qi operation's high and
low 128-bit parts.
According to other targets' similar implementations and the LSX implementation
for the following RTL representation, the current definition of
"vec_concat<mode>" in lasx.md is wrong:
(set (op0) (vec_concat (op1) (op2)))
For correct behavior, the last argument of xvpermi.q should be 0x02
instead of 0x20. This patch fixes this issue and cleans up the vec_concat
template implementation.
Li Wei [Mon, 25 Dec 2023 03:20:23 +0000 (11:20 +0800)]
LoongArch: Fixed bug in *bstrins_<mode>_for_ior_mask template.
We found that using the latest compiled gcc causes a miscompare error
when running the spec2006 400.perlbench test with -flto turned on. After testing,
it was found that only the LoongArch architecture reports the error.
The first bad commit was located through git bisect as r14-3773-g5b857e87201335. Through debugging, it was found that the problem
was that the split condition of the *bstrins_<mode>_for_ior_mask template was
empty, when it should actually be consistent with the insn condition.
Haochen Gui [Wed, 27 Dec 2023 02:32:21 +0000 (10:32 +0800)]
rs6000: Clean up the pre-checkings of expand_block_compare
Remove the P7 CPU test, as only P7 and above can enter this function and P7 LE is
excluded by the check of targetm.slow_unaligned_access on word_mode.
Also, performance testing shows the expansion of block compare is better than the
library call on P7 BE when the length is from 16 bytes to 64 bytes.
gcc/
* config/rs6000/rs6000-string.cc (expand_block_compare): Assert
that only P7 and above can enter this function. Remove the P7 CPU
test and let P7 BE do the expansion.
Haochen Gui [Wed, 27 Dec 2023 02:30:06 +0000 (10:30 +0800)]
rs6000: Correct definition of macro of fixed point efficient unaligned
The macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in rs6000-string.cc
to guard the platforms which are efficient at fixed-point unaligned
load/store. It was originally defined by TARGET_EFFICIENT_UNALIGNED_VSX,
which is enabled from P8 and can be disabled by the -mno-vsx option, so the
definition is improper. This patch corrects it and calls
slow_unaligned_access to judge whether fixed-point unaligned load/store is
efficient or not.
Di Zhao [Tue, 26 Dec 2023 08:36:02 +0000 (16:36 +0800)]
Fix compile options of pr110279-1.c and pr110279-2.c
The two testcases are for targets that support FMA, and
pr110279-2.c assumes the reassoc_width of FMUL to be 4.
This patch adds the missing options, to fix regression test failures
on nvptx/GCN (where the default reassoc_width of FMUL is 1) and x86_64
(which needs "-mfma").
gcc/testsuite/ChangeLog:
* gcc.dg/pr110279-1.c: Add "-mcpu=generic" for aarch64; add
"-mfma" for x86_64.
* gcc.dg/pr110279-2.c: Replace "-march=armv8.2-a" with
"-mcpu=generic"; limit the check to be on aarch64.
Juzhe-Zhong [Mon, 25 Dec 2023 09:17:25 +0000 (17:17 +0800)]
RISC-V: Move RVV V_REGS liveness computation into analyze_loop_vinfo
Currently, we compute RVV V_REGS liveness during better_main_loop_than_p, which is not the
appropriate time to do it: for example, when the code will finally pick an LMUL = 8 vectorization
factor, we compute liveness for LMUL = 8 multiple times, which is redundant.
We leverage the current ARM SVE cost model:
/* Do one-time initialization based on the vinfo. */
loop_vec_info loop_vinfo = dyn_cast<loop_vec_info> (m_vinfo);
if (!m_analyzed_vinfo)
{
if (loop_vinfo)
analyze_loop_vinfo (loop_vinfo);
m_analyzed_vinfo = true;
}
The cost model is thus analyzed only once for each vinfo.
So here we move dynamic LMUL liveness information into analyze_loop_vinfo.
/* Do one-time initialization of the costs given that we're
costing the loop vectorization described by LOOP_VINFO. */
void
costs::analyze_loop_vinfo (loop_vec_info loop_vinfo)
{
...
/* Detect whether the LOOP has unexpected spills. */
record_potential_unexpected_spills (loop_vinfo);
}
This way we avoid redundant computations, and the current dynamic LMUL cost model flow is much
more reasonable and consistent with the others.
Tested on RV32 and RV64, no regressions.
gcc/ChangeLog:
* config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Allow
fractional vector.
(preferred_new_lmul_p): Move RVV V_REGS liveness computation into analyze_loop_vinfo.
(has_unexpected_spills_p): New function.
(costs::record_potential_unexpected_spills): Ditto.
(costs::better_main_loop_than_p): Move RVV V_REGS liveness computation into
analyze_loop_vinfo.
* config/riscv/riscv-vector-costs.h: New functions and variables.
When configured with --enable-checking=release we get a false
positive on the use of vec_stmts, as the compiler seems unable
to notice it gets initialized through pass-by-reference.
Jeevitha [Mon, 25 Dec 2023 10:06:54 +0000 (04:06 -0600)]
rs6000: Change GPR2 to volatile & non-fixed register for function that does not use TOC [PR110320]
Normally, GPR2 is the TOC pointer and is defined as a fixed and non-volatile
register. However, it can be used as volatile for PCREL addressing. Therefore,
modified r2 to be non-fixed in FIXED_REGISTERS and set it to fixed if it is not
PCREL and also when the user explicitly requests TOC or fixed. If the register
r2 is fixed, it is made as non-volatile. Changes in register preservation roles
can be accomplished with the help of available target hooks
(TARGET_CONDITIONAL_REGISTER_USAGE).
gcc/
PR target/110320
* config/rs6000/rs6000.cc (rs6000_conditional_register_usage): Change
GPR2 to volatile and non-fixed register for PCREL.
* config/rs6000/rs6000.h (FIXED_REGISTERS): Modify GPR2 to not fixed.
gcc/testsuite/
PR target/110320
* gcc.target/powerpc/pr110320-1.c: New testcase.
* gcc.target/powerpc/pr110320-2.c: New testcase.
* gcc.target/powerpc/pr110320-3.c: New testcase.
Co-authored-by: Peter Bergner <bergner@linux.ibm.com>
Tamar Christina [Sun, 24 Dec 2023 19:20:08 +0000 (19:20 +0000)]
testsuite: un-xfail TSVC loops that check for exit control flow vectorization
The following three tests now work correctly for targets that have an
implementation of cbranch for vectors, so the XFAILs are conditionally removed,
gated on vect_early_break support.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/tsvc/vect-tsvc-s332.c: Remove xfail when early break
supported.
* gcc.dg/vect/tsvc/vect-tsvc-s481.c: Likewise.
* gcc.dg/vect/tsvc/vect-tsvc-s482.c: Likewise.
Tamar Christina [Sun, 24 Dec 2023 19:19:38 +0000 (19:19 +0000)]
testsuite: Add tests for early break vectorization
This adds new tests checking all of the early break functionality.
It includes a number of codegen and runtime tests checking the values at
different needles in the array.
They also check the values on different array sizes and peeling positions,
datatypes, VL, ncopies and every other variant I could think of.
Additionally it also contains reduced cases from issues found running over
various codebases.
Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
Also regtested with:
-march=armv8.3-a+sve
-march=armv8.3-a+nosve
-march=armv9-a
Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
On the tests where I have disabled x86_64, it's because the target is missing
cbranch for all types. I think it should be possible to add them for the
missing types since all we care about is whether a bit is set or not.
Bootstrap and Regtest on arm-none-linux-gnueabihf still running
and test on arm-none-eabi -march=armv8.1-m.main+mve -mfpu=auto running.
* lib/target-supports.exp (add_options_for_vect_early_break,
check_effective_target_vect_early_break_hw,
check_effective_target_vect_early_break): New.
* g++.dg/vect/vect-early-break_1.cc: New test.
* g++.dg/vect/vect-early-break_2.cc: New test.
* g++.dg/vect/vect-early-break_3.cc: New test.
* gcc.dg/vect/vect-early-break-run_1.c: New test.
* gcc.dg/vect/vect-early-break-run_10.c: New test.
* gcc.dg/vect/vect-early-break-run_2.c: New test.
* gcc.dg/vect/vect-early-break-run_3.c: New test.
* gcc.dg/vect/vect-early-break-run_4.c: New test.
* gcc.dg/vect/vect-early-break-run_5.c: New test.
* gcc.dg/vect/vect-early-break-run_6.c: New test.
* gcc.dg/vect/vect-early-break-run_7.c: New test.
* gcc.dg/vect/vect-early-break-run_8.c: New test.
* gcc.dg/vect/vect-early-break-run_9.c: New test.
* gcc.dg/vect/vect-early-break-template_1.c: New test.
* gcc.dg/vect/vect-early-break-template_2.c: New test.
* gcc.dg/vect/vect-early-break_1.c: New test.
* gcc.dg/vect/vect-early-break_10.c: New test.
* gcc.dg/vect/vect-early-break_11.c: New test.
* gcc.dg/vect/vect-early-break_12.c: New test.
* gcc.dg/vect/vect-early-break_13.c: New test.
* gcc.dg/vect/vect-early-break_14.c: New test.
* gcc.dg/vect/vect-early-break_15.c: New test.
* gcc.dg/vect/vect-early-break_16.c: New test.
* gcc.dg/vect/vect-early-break_17.c: New test.
* gcc.dg/vect/vect-early-break_18.c: New test.
* gcc.dg/vect/vect-early-break_19.c: New test.
* gcc.dg/vect/vect-early-break_2.c: New test.
* gcc.dg/vect/vect-early-break_20.c: New test.
* gcc.dg/vect/vect-early-break_21.c: New test.
* gcc.dg/vect/vect-early-break_22.c: New test.
* gcc.dg/vect/vect-early-break_23.c: New test.
* gcc.dg/vect/vect-early-break_24.c: New test.
* gcc.dg/vect/vect-early-break_25.c: New test.
* gcc.dg/vect/vect-early-break_26.c: New test.
* gcc.dg/vect/vect-early-break_27.c: New test.
* gcc.dg/vect/vect-early-break_28.c: New test.
* gcc.dg/vect/vect-early-break_29.c: New test.
* gcc.dg/vect/vect-early-break_3.c: New test.
* gcc.dg/vect/vect-early-break_30.c: New test.
* gcc.dg/vect/vect-early-break_31.c: New test.
* gcc.dg/vect/vect-early-break_32.c: New test.
* gcc.dg/vect/vect-early-break_33.c: New test.
* gcc.dg/vect/vect-early-break_34.c: New test.
* gcc.dg/vect/vect-early-break_35.c: New test.
* gcc.dg/vect/vect-early-break_36.c: New test.
* gcc.dg/vect/vect-early-break_37.c: New test.
* gcc.dg/vect/vect-early-break_38.c: New test.
* gcc.dg/vect/vect-early-break_39.c: New test.
* gcc.dg/vect/vect-early-break_4.c: New test.
* gcc.dg/vect/vect-early-break_40.c: New test.
* gcc.dg/vect/vect-early-break_41.c: New test.
* gcc.dg/vect/vect-early-break_42.c: New test.
* gcc.dg/vect/vect-early-break_43.c: New test.
* gcc.dg/vect/vect-early-break_44.c: New test.
* gcc.dg/vect/vect-early-break_45.c: New test.
* gcc.dg/vect/vect-early-break_46.c: New test.
* gcc.dg/vect/vect-early-break_47.c: New test.
* gcc.dg/vect/vect-early-break_48.c: New test.
* gcc.dg/vect/vect-early-break_49.c: New test.
* gcc.dg/vect/vect-early-break_5.c: New test.
* gcc.dg/vect/vect-early-break_50.c: New test.
* gcc.dg/vect/vect-early-break_51.c: New test.
* gcc.dg/vect/vect-early-break_52.c: New test.
* gcc.dg/vect/vect-early-break_53.c: New test.
* gcc.dg/vect/vect-early-break_54.c: New test.
* gcc.dg/vect/vect-early-break_55.c: New test.
* gcc.dg/vect/vect-early-break_56.c: New test.
* gcc.dg/vect/vect-early-break_57.c: New test.
* gcc.dg/vect/vect-early-break_58.c: New test.
* gcc.dg/vect/vect-early-break_59.c: New test.
* gcc.dg/vect/vect-early-break_6.c: New test.
* gcc.dg/vect/vect-early-break_60.c: New test.
* gcc.dg/vect/vect-early-break_61.c: New test.
* gcc.dg/vect/vect-early-break_62.c: New test.
* gcc.dg/vect/vect-early-break_63.c: New test.
* gcc.dg/vect/vect-early-break_64.c: New test.
* gcc.dg/vect/vect-early-break_65.c: New test.
* gcc.dg/vect/vect-early-break_66.c: New test.
* gcc.dg/vect/vect-early-break_67.c: New test.
* gcc.dg/vect/vect-early-break_68.c: New test.
* gcc.dg/vect/vect-early-break_69.c: New test.
* gcc.dg/vect/vect-early-break_7.c: New test.
* gcc.dg/vect/vect-early-break_70.c: New test.
* gcc.dg/vect/vect-early-break_71.c: New test.
* gcc.dg/vect/vect-early-break_72.c: New test.
* gcc.dg/vect/vect-early-break_73.c: New test.
* gcc.dg/vect/vect-early-break_74.c: New test.
* gcc.dg/vect/vect-early-break_75.c: New test.
* gcc.dg/vect/vect-early-break_76.c: New test.
* gcc.dg/vect/vect-early-break_77.c: New test.
* gcc.dg/vect/vect-early-break_78.c: New test.
* gcc.dg/vect/vect-early-break_79.c: New test.
* gcc.dg/vect/vect-early-break_8.c: New test.
* gcc.dg/vect/vect-early-break_80.c: New test.
* gcc.dg/vect/vect-early-break_81.c: New test.
* gcc.dg/vect/vect-early-break_82.c: New test.
* gcc.dg/vect/vect-early-break_83.c: New test.
* gcc.dg/vect/vect-early-break_84.c: New test.
* gcc.dg/vect/vect-early-break_85.c: New test.
* gcc.dg/vect/vect-early-break_86.c: New test.
* gcc.dg/vect/vect-early-break_87.c: New test.
* gcc.dg/vect/vect-early-break_88.c: New test.
* gcc.dg/vect/vect-early-break_89.c: New test.
* gcc.dg/vect/vect-early-break_9.c: New test.
* gcc.dg/vect/vect-early-break_90.c: New test.
* gcc.dg/vect/vect-early-break_91.c: New test.
* gcc.dg/vect/vect-early-break_92.c: New test.
* gcc.dg/vect/vect-early-break_93.c: New test.
Tamar Christina [Sun, 24 Dec 2023 19:18:12 +0000 (19:18 +0000)]
middle-end: Support vectorization of loops with multiple exits.
Hi All,
This patch adds initial support for early break vectorization in GCC. In other
words it implements support for vectorization of loops with multiple exits.
The support is added for any target that implements a vector cbranch optab,
this includes both fully masked and non-masked targets.
Depending on the operation, the vectorizer may also require support for boolean
mask reductions using Inclusive OR/Bitwise AND. This is however only checked
when the comparison would produce multiple statements.
This also fully decouples the vectorizer's notion of exit from the existing loop
infrastructure's exit. Before this patch the vectorizer always picked the
natural loop latch-connected exit as the main exit.
After this patch the vectorizer is free to choose any exit it deems appropriate
as the main exit. This means that even if the main exit is not countable (i.e.
the termination condition could not be determined) we might still be able to
vectorize should one of the other exits be countable.
In such situations the loop is reflowed, which enables vectorization of many
other loop forms.
Concretely the kind of loops supported are of the forms:
for (int i = 0; i < N; i++)
{
<statements1>
if (<condition>)
{
...
<action>;
}
<statements2>
}
where <action> can be:
- break
- return
- goto
Any number of statements can be used before the <action> occurs.
Since this is an initial version for GCC 14 it has the following limitations and
features:
- Only fixed sized iterations and buffers are supported. That is to say any
vectors loaded or stored must be to statically allocated arrays with known
sizes. N must also be known. This limitation is because our primary target
for this optimization is SVE. For VLA SVE we can't easily do cross-page
iteration checks. The result is likely to also not be beneficial. For that
reason we punt support for variable buffers till we have First-Faulting
support in GCC 15.
- Any stores in <statements1> should not be to the same objects as in
<condition>. Loads are fine as long as they don't have the possibility to
alias. More concretely, we block RAW dependencies when the intermediate value
can't be separated from the store, or the store itself can't be moved.
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported
- Any number of loop early exits are supported.
- No support for epilogue vectorization. The only epilogue supported is the
scalar final one. Peeling code supports it but the code motion code cannot
find instructions to make the move in the epilog.
- Early breaks are only supported for inner loop vectorization.
With the help of IPA and LTO this still gets hit quite often. During bootstrap
it hit rather frequently. Additionally TSVC s332, s481 and s482 all pass now
since these are tests for support for early exit vectorization.
This implementation does not support completely handling the early break inside
the vector loop itself but instead supports adding checks such that if we know
that we have to exit in the current iteration then we branch to scalar code to
actually do the final VF iterations which handles all the code in <action>.
For the scalar loop we know that whatever exit you take you have to perform at
most VF iterations. For vector code we only care about the state of fully
performed iterations and reset the scalar code to the (partially) remaining loop.
That is to say, the first vector loop executes so long as the early exit isn't
needed. Once the exit is taken, the scalar code will perform at most VF extra
iterations. The exact number depending on peeling and iteration start and which
exit was taken (natural or early). For this scalar loop, all early exits are
treated the same.
When we vectorize we move any statement not related to the early break itself
and that would be incorrect to execute before the break (i.e. has side effects)
to after the break. If this is not possible we decline to vectorize. The
analysis and code motion also takes into account that it doesn't introduce a RAW
dependency after the move of the stores.
This means that we check at the start of iterations whether we are going to exit
or not. During the analysis phase we check whether we are allowed to do this
moving of statements. Also note that we only move the scalar statements, but
only do so after peeling but just before we start transforming statements.
With this the vector flow no longer necessarily needs to match that of the
scalar code. In addition most of the infrastructure is in place to support
general control flow safely, however we are punting this to GCC 15.
Codegen:
for example:
unsigned vect_a[N];
unsigned vect_b[N];
unsigned test4(unsigned x)
{
unsigned ret = 0;
for (int i = 0; i < N; i++)
{
vect_b[i] = x + i;
if (vect_a[i] > x)
break;
vect_a[i] = x;
}
return ret;
}
On the workloads this work is based on we see between 2-3x performance uplift
using this patch.
Follow up plan:
- Boolean vectorization has several shortcomings. I've filed PR110223 with the
bigger ones that cause vectorization to fail with this patch.
- SLP support. This is planned for GCC 15 as for the majority of the cases building
SLP itself fails. This means I'll need to spend time making this more
robust first. Additionally it requires:
* Adding support for vectorizing CFG (gconds)
* Support for CFG to differ between vector and scalar loops.
Both of which would be disruptive to the tree and I suspect I'll be handling
fallouts from this patch for a while. So I plan to work on the surrounding
building blocks first for the remainder of the year.
Additionally it also contains reduced cases from issues found running over
various codebases.
Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.
Also regtested with:
-march=armv8.3-a+sve
-march=armv8.3-a+nosve
-march=armv9-a
-mcpu=neoverse-v1
-mcpu=neoverse-n2
Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues.
gcc/ChangeLog:
* tree-if-conv.cc (idx_within_array_bound): Expose.
* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New.
(vect_analyze_data_ref_dependences): Use it.
* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
(vect_set_loop_controls_directly,
vect_set_loop_condition_partial_vectors,
vect_set_loop_condition_partial_vectors_avx512,
vect_set_loop_condition_normal): Support multiple exits.
(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling for
multiple exits.
(slpeel_can_duplicate_loop_p): Change vectorizer from looking at BB
count and instead look at loop shape.
(vect_update_ivs_after_vectorizer): Drop asserts.
(vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations.
(vect_do_peeling): Support multiple exits.
(vect_loop_versioning): Likewise.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
early_breaks.
(vect_analyze_loop_form): Support loop flows with more than single BB
loop body.
(vect_create_loop_vinfo): Support niters analysis for multiple exits.
(vect_analyze_loop): Likewise.
(vect_get_vect_def): New.
(vect_create_epilog_for_reduction): Support early exit reductions.
(vectorizable_live_operation_1): New.
(find_connected_edge): New.
(vectorizable_live_operation): Support early exit live operations.
(move_early_exit_stmts): New.
(vect_transform_loop): Use it.
* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
(vect_recog_bitfield_ref_pattern): Support gconds and bools.
(vect_recog_gcond_pattern): New.
(possible_vector_mask_operation_p): Support gcond masks.
(vect_determine_mask_precision): Likewise.
(vect_mark_pattern_stmts): Set gcond def type.
(can_vectorize_live_stmts): Force early break inductions to be live.
* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for
early breaks.
(vect_mark_stmts_to_be_vectorized): Process gcond usage.
(perm_mask_for_reverse): Expose.
(vectorizable_comparison_1): New.
(vectorizable_early_exit): New.
(vect_analyze_stmt): Support early break and gcond.
(vect_transform_stmt): Likewise.
(vect_is_simple_use): Likewise.
(vect_get_vector_types_for_stmt): Likewise.
* tree-vectorizer.cc (pass_vectorize::execute): Update exits for value
numbering.
* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
LOOP_VINFO_EARLY_BRK_VUSES): New.
(is_loop_header_bb_p): Drop assert.
(class loop): Add early_breaks, early_break_stores, early_break_dest_bb,
early_break_vuses.
(vect_iv_increment_position, perm_mask_for_reverse,
ref_within_array_bound): New.
(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
Tamar Christina [Sun, 24 Dec 2023 19:17:13 +0000 (19:17 +0000)]
middle-end: prevent LIM from hoisting vector compares from gconds if target does not support it.
LIM notices that in some cases the condition and the results are loop
invariant and tries to move them out of the loop.
While the resulting code is operationally sound, moving the compare out of the
gcond results in generating code that no longer branches, so cbranch is no
longer applicable. As such I now add code to check during this motion whether
the target supports flag-setting vector comparison as a general operation.
I have tried writing a GIMPLE testcase for this but the gimple FE seems to be
having some trouble with the vector types. It seems to fail parsing.
The early break code testsuite however has a test for this
(vect-early-break_67.c).
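A hedged sketch of the kind of guard this adds in determine_max_movement
(illustrative; names and placement may differ from the committed hunk):
/* Refuse to hoist a vector comparison feeding a gcond unless the target
   can compute the comparison as a value-producing vector operation.  */
if (is_gimple_assign (stmt)
    && TREE_CODE_CLASS (gimple_assign_rhs_code (stmt)) == tcc_comparison)
  {
    tree op_type = TREE_TYPE (gimple_assign_rhs1 (stmt));
    tree res_type = TREE_TYPE (gimple_assign_lhs (stmt));
    if (VECTOR_TYPE_P (op_type)
        && !expand_vec_cmp_expr_p (op_type, res_type,
                                   gimple_assign_rhs_code (stmt)))
      return false;  /* keep the compare inside the loop */
  }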
gcc/ChangeLog:
* tree-ssa-loop-im.cc (determine_max_movement): Import insn-codes.h
and optabs-tree.h and check for vector compare motion out of gcond.