Ken Matsui [Thu, 7 Dec 2023 05:32:58 +0000 (21:32 -0800)]
c++: Accept the use of built-in trait identifiers
This patch accepts the use of built-in trait identifiers when they are
not actually used as traits. Specifically, we check whether the subsequent
token is '(' for ordinary built-in traits, or '<' only for the special
__type_pack_element built-in trait. If those identifiers are used
differently, the parser treats them as normal identifiers. This allows
us to accept code like: struct __is_pointer {};.
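For instance, both of the following are intended to work after this change (a hedged sketch distilled from the description; the trait use assumes a compiler that provides __is_pointer as a built-in):
  // Not followed by '(', so parsed as a normal identifier:
  struct __is_pointer {};
  // Followed by '(', so parsed as the built-in trait:
  static_assert (__is_pointer (int *), "int* is a pointer type");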
gcc/cp/ChangeLog:
* parser.cc (cp_lexer_lookup_trait): Rename to ...
(cp_lexer_peek_trait): ... this. Handle a subsequent token for
the corresponding built-in trait.
(cp_lexer_lookup_trait_expr): Rename to ...
(cp_lexer_peek_trait_expr): ... this.
(cp_lexer_lookup_trait_type): Rename to ...
(cp_lexer_peek_trait_type): ... this.
(cp_lexer_next_token_is_decl_specifier_keyword): Call
cp_lexer_peek_trait_type.
(cp_parser_simple_type_specifier): Likewise.
(cp_parser_primary_expression): Call cp_lexer_peek_trait_expr.
Ken Matsui [Thu, 7 Dec 2023 05:32:57 +0000 (21:32 -0800)]
c-family, c++: Look up built-in traits via identifier node
Since RID_MAX soon reaches 255 and all built-in traits are used
approximately once in a C++ translation unit, this patch removes
all RID values for built-in traits and uses the identifier node to
look up the specific trait. Rather than holding traits as keywords,
we set all trait identifiers as cik_trait, which is a new
cp_identifier_kind. As cik_reserved_for_udlit was unused and
cp_identifier_kind is 3 bits, we replace the unused value with the new
cik_trait. Also, a later patch handles the token following the
built-in identifier so that we accept the use of non-function-like
built-in trait identifiers.
gcc/c-family/ChangeLog:
* c-common.cc (c_common_reswords): Remove all mappings of
built-in traits.
* c-common.h (enum rid): Remove all RID values for built-in
traits.
gcc/cp/ChangeLog:
* cp-objcp-common.cc (names_builtin_p): Remove all RID value
cases for built-in traits. Check for built-in traits via
the new cik_trait kind.
* cp-tree.h (enum cp_trait_kind): Set its underlying type to
addr_space_t.
(struct cp_trait): New struct to hold trait information.
(cp_traits): New array to hold a mapping to all traits.
(cik_reserved_for_udlit): Rename to ...
(cik_trait): ... this.
(IDENTIFIER_ANY_OP_P): Exclude cik_trait.
(IDENTIFIER_TRAIT_P): New macro to detect cik_trait.
* lex.cc (cp_traits): Define its values, declared in cp-tree.h.
(init_cp_traits): New function to set cik_trait and
IDENTIFIER_CP_INDEX for all built-in trait identifiers.
(cxx_init): Call init_cp_traits function.
* parser.cc (cp_lexer_lookup_trait): New function to look up a
built-in trait by IDENTIFIER_CP_INDEX.
(cp_lexer_lookup_trait_expr): Likewise, look up an
expression-yielding built-in trait.
(cp_lexer_lookup_trait_type): Likewise, look up a type-yielding
built-in trait.
(cp_keyword_starts_decl_specifier_p): Remove all RID value cases
for built-in traits.
(cp_lexer_next_token_is_decl_specifier_keyword): Handle
type-yielding built-in traits.
(cp_parser_primary_expression): Remove all RID value cases for
built-in traits. Handle expression-yielding built-in traits.
(cp_parser_trait): Handle cp_trait instead of enum rid.
(cp_parser_simple_type_specifier): Remove all RID value cases
for built-in traits. Handle type-yielding built-in traits.
Co-authored-by: Patrick Palka <ppalka@redhat.com>
Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org>
Jeff Law [Sun, 10 Dec 2023 17:41:05 +0000 (10:41 -0700)]
[committed] Support uaddv and usubv on the H8
This patch adds uaddv/usubv support on the H8 port to speed up those pesky
builtin-overflow tests. It's a variant of something I'd been running for a
while -- the major change from the old approach is that this version does not
expose the CC register until after reload, to be consistent with the rest of
the H8 port.
The general approach is to first clear the GPR that's going to hold the
overflow status, perform the arithmetic operation (add/sub), then use addx to
move the overflow indicator (in the C bit) into the GPR holding the overflow
status.
That's a significant improvement over the mess of logicals that's generated by
the generic code.
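As a hedged illustration, this is the kind of source the new expanders speed up (uaddv/usubv underlie the unsigned flavors of the overflow builtins):
  #include <stdbool.h>
  /* Expands roughly as: clear the status GPR, add, then addx the
     C bit into the status GPR.  */
  bool add_ovf (unsigned a, unsigned b, unsigned *res)
  {
    return __builtin_add_overflow (a, b, res);
  }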
Handling signed overflow is possible and something I'll probably port to this
scheme at some point. It's a bit more complex because we can't trivially move
the bit from the CCR into the right position in a GPR, among other quirks of the H8.
This has been regression tested on the H8 without problems. Pushing to the
trunk.
gcc/
* config/h8300/addsub.md (uaddv<mode>4, usubv<mode>4): New expanders.
(uaddv): New define_insn_and_split plus post-reload pattern.
Jeff Law [Sun, 10 Dec 2023 17:29:23 +0000 (10:29 -0700)]
[committed] Provide patterns for signed bitfield extractions on H8
Inspired by Roger's work on the ARC port, this patch provides a
define_insn_and_split pattern to optimize sign-extended bitfields starting at
position 0 using an approach that doesn't require shifting.
It then builds on that to provide another define_insn_and_split pattern to
support arbitrary signed bitfield extractions -- it uses a logical right shift
to move the bitfield into position 0, then the specialized pattern above to
sign extend the MSB of the field through the rest of the register.
This is often, but certainly not always, better than a two-shift approach.
The code uses the sizes of the sequences to select between the two-shift
approach and the single-shift-with-extension approach.
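As a hedged illustration (a 5-bit field at position 7 assumed), these are the two shapes being chosen between:
  struct S { int pad : 7; int f : 5; };
  /* A signed bitfield read like this ...  */
  int extract (struct S *s) { return s->f; }
  /* ... is classically open-coded as two shifts; the new patterns can
     instead use one logical right shift plus the shiftless sign-extend.
     The cast back to int relies on GCC's implementation-defined
     conversion behavior.  */
  int two_shift (int x)
  {
    return (int) ((unsigned) x << 20) >> 27;   /* 32-7-5 = 20, 32-5 = 27 */
  }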
There are certainly further improvements that could be made here, but I think
we're getting the bulk of the improvements already.
Regression tested on the H8 port without errors. Installing on the trunk.
gcc/
* config/h8300/h8300-protos.h (use_extvsi): Prototype.
* config/h8300/combiner.md: Two new define_insn_and_split patterns
to implement signed bitfield extractions.
* config/h8300/h8300.cc (use_extvsi): New function.
Jeff Law [Sun, 10 Dec 2023 17:05:18 +0000 (10:05 -0700)]
[committed] Fix length computation of single bit bitfield extraction on H8
Various approaches are used to optimize extracting a sign-extended single-bit
bitfield. The length computation of 10 bytes was conservatively correct, but
inaccurate.
In particular, when the bit we want is in the low half word, we don't need the
move-high-half-to-low-half instruction. Account for that in the length
computation.
This was spotted when looking at regressions in the generalized signed bitfield
extraction pattern.
This has been regression tested on the H8 port.
gcc/
* config/h8300/combiner.md (single bit signed bitfield extraction): Fix
length computation when the bit we want is in the low half word.
Jeff Law [Sun, 10 Dec 2023 16:32:55 +0000 (09:32 -0700)]
[committed] Fix length computation for logical shifts on H8
This fixes the length computation for logical shifts on the H8/SX.
The H8/SX has a richer set of logical shifts compared to earlier parts in the H8
family. It has special 2 byte instructions for shifts by power-of-two
immediate values as well as a special 4 byte shift by other immediate values.
These were never accounted for (AFAICT) in the length computation for shifts.
Until now that's mostly just affected branch shortening. But an upcoming patch
uses instruction lengths to select between two potential sequences and getting
these lengths wrong will cause it to miss optimization opportunities on the
H8/SX.
gcc
* config/h8300/h8300.cc (compute_a_shift_length): Fix computation
of logical shifts on the H8/SX.
Jakub Jelinek [Sat, 9 Dec 2023 20:41:00 +0000 (21:41 +0100)]
phiopt: Fix ICE with large --param l1-cache-line-size= [PR112887]
This function is never called when param_l1_cache_line_size is 0,
but it uses int and unsigned int variables to hold alignment in
bits, so for a large param_l1_cache_line_size the value in bits overflows
to zero and e.g. DECL_ALIGN () % param_align_bits can divide by zero.
Looking at the code, the function uses tree_fits_uhwi_p on the trees
before converting them using tree_to_uhwi to int variables, which
looks just wrong; either it would need to punt if the values don't fit
into those and also check for overflows during the computation,
or use unsigned HOST_WIDE_INT for all of this. The latter also fixes
the division by zero, as the maximum of param_l1_cache_line_size is INT_MAX,
which multiplied by 8 will always fit.
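A minimal sketch of the failure mode (variable names illustrative):
  void sketch (void)
  {
    int param_align = 0x20000000;            /* --param l1-cache-line-size */
    int param_align_bits = param_align * 8;  /* wraps to 0 in a 32-bit int */
    /* ... DECL_ALIGN (decl) % param_align_bits ... now divides by zero.  */
  }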
2023-12-09 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/112887
* tree-ssa-phiopt.cc (hoist_adjacent_loads): Change type of
param_align, param_align_bits, offset1, offset2, size2 and align1
variables from int or unsigned int to unsigned HOST_WIDE_INT.
Jonathan Wakely [Fri, 8 Dec 2023 13:47:04 +0000 (13:47 +0000)]
libstdc++: Fix resolution of LWG 4016 for std::ranges::to [PR112876]
What I implemented in r14-6199-g45630fbcf7875b does not match what I
proposed for LWG 4016, and it imposes additional, unwanted requirements
on the emplace and insert member functions of the container being
populated.
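Ordinary uses are unaffected; the fix matters for containers like the testsuite's Cont4, whose emplace and insert members take an iterator but return void. A standard C++23 usage sketch:
  #include <ranges>
  #include <vector>
  // Populating a std::vector from a range still works as before.
  auto v = std::views::iota (1, 5) | std::ranges::to<std::vector<int>> ();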
libstdc++-v3/ChangeLog:
PR libstdc++/112876
* include/std/ranges (ranges::to): Do not try to use an iterator
returned by the container's emplace or insert member functions.
* testsuite/std/ranges/conv/1.cc (Cont4::emplace, Cont4::insert):
Use the iterator parameter. Do not return an iterator.
Jakub Jelinek [Sat, 9 Dec 2023 09:20:05 +0000 (10:20 +0100)]
c++: Don't diagnose ignoring of attributes if all ignored attributes are attribute_ignored_p
There is another thing I wonder about: with -Wno-attributes= we are
supposed to ignore the attributes altogether, but we are actually still
warning about them when we emit these generic warnings about ignoring
all attributes which appertain to this and that (perhaps with some
exceptions we first remove from the attribute chain), like:
void foo () { [[foo::bar]]; }
with -Wattributes -Wno-attributes=foo::bar
Shouldn't we call some helper function in cases like this and warn
not when std_attrs (or whatever the attribute chain variable is called) is
non-NULL, but when it is non-NULL and contains at least one
non-attribute_ignored_p attribute?
I've kept warnings for cases where the C++ standard explicitly says
attributes aren't ok -
"If an attribute-specifier-seq appertains to a friend declaration, that
declaration shall be a definition."
or
https://eel.is/c++draft/dcl.type.elab#3
or
https://eel.is/c++draft/temp.spec#temp.explicit-3
For some changes I haven't figured out how I could cover them in the
testsuite.
Note, C uses a different strategy, it has c_warn_unused_attributes
function which warns about all the attributes one by one unless they
are ignored (or allowed in certain position).
Though that is just a single diagnostic wording, while the C++ FE just warns
that there are some ignored attributes, doesn't name them individually
(except for namespace and using namespace), and uses different wordings in
different spots.
testsuite: Remove gcc.dg/tree-ssa/scev-3.c -4.c and 5.c
These tests were recently xfailed on ilp32 targets though
passing on almost all ilp32 targets (known exceptions: ia32
and some arm subtargets). They've been changed around too
much to remain useful.
Alexandre Oliva [Sat, 9 Dec 2023 00:41:33 +0000 (21:41 -0300)]
strub: skip emutls after strubm errors
The emutls pass requires PROP_ssa, but if the strubm pass (or any
other pre-SSA pass) issues errors, all of the build_ssa_passes are
skipped, so the property is not set, but emutls still attempts to run,
on targets that use it, despite earlier errors, so it hits the
unsatisfied requirement.
Adjust emutls to be skipped in case of earlier errors.
for gcc/ChangeLog
* tree-emutls.cc: Include diagnostic-core.h.
(pass_ipa_lower_emutls::gate): Skip if errors were seen.
Patrick Palka [Fri, 8 Dec 2023 21:57:13 +0000 (16:57 -0500)]
c++: decltype of (non-captured variable) [PR83167]
For decltype((x)) within a lambda where x is not captured, we dubiously
require that the lambda has a capture default, unlike for decltype(x).
But according to [expr.prim.id.unqual]/3 we should just ignore the lambda
in this case. This patch narrowly fixes this issue by disabling the
capture_decltype handling and falling back to the ordinary handling when
the innermost lambda has no capture-default. In fact, we can restrict
the special handling to only by-copy lambdas since that's what
[expr.prim.id.unqual]/3 is concerned with; for by-ref implicit captures
both code paths should give the same result anyway.
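A hedged sketch of the distinction (the new test presumably exercises something similar):
  void f ()
  {
    int x = 0;
    [] () {                  // lambda with no capture-default
      decltype(x) a = 0;     // always OK: the operand is unevaluated
      decltype((x)) b = a;   // previously rejected; now the lambda is
                             // ignored per [expr.prim.id.unqual]/3 and
                             // b has type int&
    };
  }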
During review some other issues were discovered which are documented in
a new FIXME.
PR c++/83167
gcc/cp/ChangeLog:
* semantics.cc (capture_decltype): Inline into its only caller ...
(finish_decltype_type): ... here. Update nearby comment to refer
to recent standard. Add FIXME. Restrict uncaptured variable type
transformation to happen only for lambdas with a by-copy
capture-default.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/lambda/lambda-decltype4.C: New test.
David Malcolm [Fri, 8 Dec 2023 20:59:43 +0000 (15:59 -0500)]
analyzer: fix ICE on infoleak with poisoned size
gcc/analyzer/ChangeLog:
* region-model.cc (contains_uninit_p): Only check for
svalues that the infoleak warning can handle.
gcc/testsuite/ChangeLog:
* gcc.dg/plugin/infoleak-uninit-size-1.c: New test.
* gcc.dg/plugin/infoleak-uninit-size-2.c: New test.
* gcc.dg/plugin/plugin.exp: Add the new tests.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
[PR112875][LRA]: Fix an assert in lra elimination code
The PR112875 test ran into a wrong assert (gcc_unreachable) during elimination
in a debug insn. The insn seems OK, so I changed the assertion.
To be more accurate, I made it the same as the analogous reload pass code.
Jakub Jelinek [Fri, 8 Dec 2023 19:58:38 +0000 (20:58 +0100)]
c++: Fix parsing [[]][[]];
When working on the previous patch I put [[]] [[]] asm (""); into a
testcase, but was surprised it wasn't parsed.
The problem is that when cp_parser_std_attribute_spec returns NULL, it
can mean 2 different things: either the next token(s) are neither
[[ nor alignas (in that case the caller should break from the loop),
or we parsed something like [[]] - a valid attribute specifier
that just didn't specify any attributes.
The following patch fixes that by using a magic value of void_list_node
for the case where the first tokens are neither [[ nor alignas and so
where cp_parser_std_attribute_spec_seq should stop iterating to differentiate
it from NULL_TREE which stands for some attribute specifier has been parsed,
but it didn't contain any (or any valid) attributes.
2023-12-08 Jakub Jelinek <jakub@redhat.com>
* parser.cc (cp_parser_std_attribute_spec): Return void_list_node
rather than NULL_TREE if token is neither CPP_OPEN_SQUARE nor
RID_ALIGNAS CPP_KEYWORD.
(cp_parser_std_attribute_spec_seq): For attr_spec == void_list_node
break, for attr_spec == NULL_TREE continue.
Jakub Jelinek [Fri, 8 Dec 2023 19:56:48 +0000 (20:56 +0100)]
c++: Unshare folded SAVE_EXPR arguments during cp_fold [PR112727]
The following testcase is miscompiled because two ubsan instrumentations
run into each other.
The first one is the shift instrumentation. Before the C++ FE calls
it, it wraps the 2 shift arguments with cp_save_expr, so that side-effects
in them aren't evaluated multiple times. And, ubsan_instrument_shift
itself uses unshare_expr on any uses of the operands to make sure further
modifications in them don't affect other copies of them (the only not
unshared ones are the one the caller then uses for the actual operation
after the instrumentation, which means there is no tree sharing).
Now, if there are side-effects in the first operand like say function
call, cp_save_expr wraps it into a SAVE_EXPR, and ubsan_instrument_shift
in this mode emits something like
if (..., SAVE_EXPR <foo ()>, SAVE_EXPR <op1> > const)
__ubsan_handle_shift_out_of_bounds (..., SAVE_EXPR <foo ()>, ...);
and caller adds
SAVE_EXPR <foo ()> << SAVE_EXPR <op1>
after it in a COMPOUND_EXPR. So far so good.
If there are no side-effects and cp_save_expr doesn't create SAVE_EXPR,
everything is ok as well because of the unshare_expr.
We have
if (..., SAVE_EXPR <op1> > const)
__ubsan_handle_shift_out_of_bounds (..., ptr->something[i], ...);
and
ptr->something[i] << SAVE_EXPR <op1>
where ptr->something[i] is unshared.
In the testcase below, the !x->s[j] ? 1 : 0 expression is wrapped initially
into a SAVE_EXPR though, and unshare_expr doesn't unshare SAVE_EXPRs nor
anything used in them for obvious reasons, so we end up with:
if (..., SAVE_EXPR <!(bool) VIEW_CONVERT_EXPR<const struct S *>(x)->s[j] ? 1 : 0>, SAVE_EXPR <op1> > const)
__ubsan_handle_shift_out_of_bounds (..., SAVE_EXPR <!(bool) VIEW_CONVERT_EXPR<const struct S *>(x)->s[j] ? 1 : 0>, ...);
and
SAVE_EXPR <!(bool) VIEW_CONVERT_EXPR<const struct S *>(x)->s[j] ? 1 : 0> << SAVE_EXPR <op1>
So far good as well. But later during cp_fold of the SAVE_EXPR we find
out that VIEW_CONVERT_EXPR<const struct S *>(x)->s[j] ? 0 : 1 is actually
invariant (has TREE_READONLY set) and so cp_fold simplifies the above to
if (..., SAVE_EXPR <op1> > const)
__ubsan_handle_shift_out_of_bounds (..., (bool) VIEW_CONVERT_EXPR<const struct S *>(x)->s[j] ? 0 : 1, ...);
and
((bool) VIEW_CONVERT_EXPR<const struct S *>(x)->s[j] ? 0 : 1) << SAVE_EXPR <op1>
with the s[j] ARRAY_REFs and other expressions shared in between the two
uses (and obviously the expression is optimized away from the COMPOUND_EXPR in
the if condition).
Then comes another ubsan instrumentation at genericization time,
this time to instrument the ARRAY_REFs with strict bounds checking,
which replaces the s[j] in there with s[.UBSAN_BOUNDS (0B, SAVE_EXPR<j>, 8), SAVE_EXPR<j>].
As the trees are shared, it does that just once though.
And as the if body is gimplified first, the SAVE_EXPR<j> is evaluated inside
of the if body and when it is used again after the if, it uses a potentially
uninitialized value of j.1 (always uninitialized if the shift count isn't
out of bounds).
The following patch fixes that by unshare_expr unsharing the folded argument
of a SAVE_EXPR if we've folded the SAVE_EXPR into an invariant and it is
used more than once.
2023-12-08 Jakub Jelinek <jakub@redhat.com>
PR sanitizer/112727
* cp-gimplify.cc (cp_fold): If SAVE_EXPR has been previously
folded, unshare_expr what is returned.
Patrick Palka [Fri, 8 Dec 2023 18:34:04 +0000 (13:34 -0500)]
c++: guard more against undiagnosed error_mark_node [PR112658]
This adds a sanity check to cp_parser_expression_statement similar to
the one in finish_expr_stmt added by r6-6795-g0fd9d4921f7ba2, which
effectively downgrades accepts-invalid/wrong-code bugs like this one
into ice-on-invalid/ice-on-valid ones.
PR c++/112658
gcc/cp/ChangeLog:
* parser.cc (cp_parser_expression_statement): If the statement
is error_mark_node, make sure we've seen_error().
Patrick Palka [Fri, 8 Dec 2023 18:33:55 +0000 (13:33 -0500)]
c++: undiagnosed error_mark_node from cp_build_c_cast [PR112658]
When cp_build_c_cast commits to an erroneous const_cast, we neglect to
replay errors from build_const_cast_1 which can lead to us incorrectly
accepting (and "miscompiling") the cast, or triggering the assert in
finish_expr_stmt.
This patch fixes this oversight. This was the original fix for the ICE
in PR112658 before r14-5941-g305a2686c99bf9 made us accept the testcase
there after all. I wasn't able to come up with an alternate testcase for
which this fix has an effect anymore, but below is a reduced version of
the PR112658 testcase (accepted ever since r14-5941) for good measure.
PR c++/112658
PR c++/94264
gcc/cp/ChangeLog:
* typeck.cc (cp_build_c_cast): If we're committed to a const_cast
and the result is erroneous, call build_const_cast_1 a second
time to issue errors. Use complain=tf_none instead of =false.
Robin Dapp [Fri, 1 Dec 2023 09:07:23 +0000 (10:07 +0100)]
RISC-V: Add vectorized strcmp and strncmp.
This patch adds vectorized strcmp and strncmp implementations and
tests. Similar to strlen, expansion is still guarded by
-minline-str(n)cmp.
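A hedged usage sketch (the option spelling follows the guard above; actual expansion depends on the target's vector support):
  /* Compile with e.g. -O2 -minline-strcmp on an RVV-enabled target and
     the call may be expanded to inline vector code.  */
  int same (const char *a, const char *b)
  {
    return __builtin_strcmp (a, b) == 0;
  }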
gcc/ChangeLog:
PR target/112109
* config/riscv/riscv-protos.h (expand_strcmp): Declare.
* config/riscv/riscv-string.cc (riscv_expand_strcmp): Add
strategy handling and delegation to scalar and vector expanders.
(expand_strcmp): Vectorized implementation.
* config/riscv/riscv.md: Add TARGET_VECTOR to strcmp and strncmp
expander.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/builtin/strcmp-run.c: New test.
* gcc.target/riscv/rvv/autovec/builtin/strcmp.c: New test.
* gcc.target/riscv/rvv/autovec/builtin/strncmp-run.c: New test.
* gcc.target/riscv/rvv/autovec/builtin/strncmp.c: New test.
Robin Dapp [Fri, 1 Dec 2023 08:57:15 +0000 (09:57 +0100)]
RISC-V: Add vectorized strlen.
This patch implements a vectorized strlen by re-using and slightly
adjusting the rawmemchr implementation. Rawmemchr returns the address
of the needle while strlen returns the difference between needle address
and start address.
As before, strlen expansion is guarded by -minline-strlen.
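Conceptually the reuse is just pointer arithmetic on top of the rawmemchr result (a hedged sketch; standard memchr stands in here for the internal rawmemchr expansion):
  #include <string.h>
  size_t strlen_sketch (const char *s)
  {
    /* rawmemchr-style search for the terminating NUL ...  */
    const char *needle = (const char *) memchr (s, '\0', (size_t) -1);
    /* ... strlen is the difference between needle and start.  */
    return (size_t) (needle - s);
  }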
While testing with -minline-strlen I encountered a vsetvl problem in
memcpy-chk.c where we didn't insert a vsetvl at the proper spot (after
a setjmp). This needs to be fixed separately and I figured I'd post
this patch as-is.
early-ra's likely_operand_match_p didn't handle relaxed and special
memory constraints, which meant that the pass wasn't able to match
LD1RQ instructions to their constraints, and so backed out of
trying to allocate. This patch fixes that by switching the sense
of the match: does the rtx seem appropriate for the constraint?,
rather than: does the constraint seem appropriate for the rtx?
Also, I came across a case that needed more general equivalence
detection. Previously we would only record equivalences after
the last definition of the source register, but it's worth trying
to handle cases where the destination register's live range is
restricted to a block, and the next definition of the source
occurs only after the end of the destination register's live range.
The patch also fixes a cut-&-pasto that Alex noticed (thanks).
gcc/
* config/aarch64/aarch64-early-ra.cc (allocno_info::chain_next):
Put into an enum with...
(allocno_info::last_def_point): ...new member variable.
(allocno_info::m_current_bb_point): New member variable.
(likely_operand_match_p): Switch based on get_constraint_type,
rather than based on rtx code. Handle relaxed and special memory
constraints.
(early_ra::record_copy): Allow the source of an equivalence to be
assigned to more than once.
(early_ra::record_allocno_use): Invalidate any previous equivalence.
Initialize last_def_point.
(early_ra::record_allocno_def): Set last_def_point.
(early_ra::valid_equivalence_p): New function, split out from...
(early_ra::record_copy): ...here. Use last_def_point to handle
source registers that have a later definition.
(make_pass_aarch64_early_ra): Fix comment.
gcc/testsuite/
* gcc.target/aarch64/sme/strided_2.c: New test.
Tobias Burnus [Fri, 8 Dec 2023 14:18:25 +0000 (15:18 +0100)]
OpenMP/Fortran: Implement omp allocators/allocate for ptr/allocatables
This commit adds -fopenmp-allocators which enables support for
'omp allocators' and 'omp allocate' that are associated with a Fortran
allocate-stmt. If such a construct is encountered, an error is shown,
unless the -fopenmp-allocators flag is present.
With -fopenmp -fopenmp-allocators, those constructs get turned into
GOMP_alloc allocations, while -fopenmp-allocators (also without -fopenmp)
ensures deallocation and reallocation (via intrinsic assignments) are
properly directed to GOMP_free/omp_realloc, whereas normal Fortran
allocations are processed by free/realloc.
In order to distinguish 'malloc'ed from 'GOMP_alloc'ed memory, the
version field of the Fortran array descriptor is (mis)used: 0 indicates
the normal Fortran allocation while 1 denotes GOMP_alloc. For scalars,
there is record keeping in libgomp: GOMP_add_alloc(ptr) will add the
pointer address to a splay_tree while GOMP_is_alloc(ptr) will return
true if it was previously added, but also removes it from the list.
Besides the Fortran FE work, BUILT_IN_GOMP_REALLOC is now part of
omp-builtins.def and libgomp gains the two new functions mentioned above.
Szabolcs Nagy [Fri, 29 Sep 2023 12:55:51 +0000 (13:55 +0100)]
libgcc: aarch64: Add SME unwinder support
To support the ZA lazy save scheme, the PCS requires the unwinder to
reset the SME state to PSTATE.SM=0, PSTATE.ZA=0, TPIDR2_EL0=0 on entry
to an exception handler. We use the __arm_za_disable SME runtime call
unconditionally to achieve this.
https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst#exceptions
The hidden alias is used to avoid a PLT and avoid inconsistent VPCS
marking (we don't rely on special PCS at the call site). In the case of
static linking, the SME runtime init code is linked into code that raises
exceptions.
libgcc/ChangeLog:
* config/aarch64/__arm_za_disable.S: Add hidden alias.
* config/aarch64/aarch64-unwind.h: Reset the SME state before
EH return via the _Unwind_Frames_Extra hook.
Szabolcs Nagy [Tue, 15 Nov 2022 14:08:55 +0000 (14:08 +0000)]
libgcc: aarch64: Add SME runtime support
The call ABI for SME (Scalable Matrix Extension) requires a number of
helper routines which are added to libgcc so they are tied to the
compiler version instead of the libc version. See
https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst#sme-support-routines
The routines are in shared libgcc and static libgcc eh, even though
they are not related to exception handling. This is to avoid linking
a copy of the routines into dynamically linked binaries, because the TPIDR2_EL0
block can be extended in the future, which is better to handle in a
single place per process.
The support routines have to decide if SME is accessible or not. Linux
tells userspace if SME is accessible via AT_HWCAP2; otherwise a new
__aarch64_sme_accessible symbol was introduced that a libc can define.
Due to libgcc and libc build order, the symbol availability cannot be
checked, so for __aarch64_sme_accessible a unistd.h feature test macro
is used, while such a detection mechanism is not available for __getauxval,
so we rely on configure checks based on the target triplet.
Asm helper code is added to make writing the routines easier.
libgcc/ChangeLog:
* config/aarch64/t-aarch64: Add sources to the build.
* config/aarch64/__aarch64_have_sme.c: New file.
* config/aarch64/__arm_sme_state.S: New file.
* config/aarch64/__arm_tpidr2_restore.S: New file.
* config/aarch64/__arm_tpidr2_save.S: New file.
* config/aarch64/__arm_za_disable.S: New file.
* config/aarch64/aarch64-asm.h: New file.
* config/aarch64/libgcc-sme.ver: New file.
Szabolcs Nagy [Mon, 4 Dec 2023 10:52:52 +0000 (10:52 +0000)]
libgcc: aarch64: Configure check for __getauxval
Add a configure check for the __getauxval ABI symbol, which is always
available on aarch64 glibc, and may be available on other linux C
runtimes. For now it is only enabled on glibc; others have to override it via
target_configargs=libgcc_cv_have___getauxval=yes
This is deliberately obscure as it should be auto detected, ideally
via a feature test macro in unistd.h (link time detection is not
possible since the libc may not be installed at libgcc build time),
but currently there is no such feature test mechanism.
Without __getauxval, libgcc cannot do runtime CPU feature detection
and has to assume only the build time known features are available.
Richard Biener [Fri, 8 Dec 2023 08:14:43 +0000 (09:14 +0100)]
tree-optimization/112909 - uninit diagnostic with abnormal copy
The following avoids spurious uninit diagnostics for SSA name
copies which mostly appear when the source is marked as abnormal
which prevents copy propagation.
To prevent regressions I remove the bail out for anonymous SSA
names in the PHI arg place from warn_uninitialized_phi leaving
that to warn_uninit where I handle SSA copies from a SSA name
which isn't anonymous. In theory this might cause more
valid and false positive diagnostics to pop up.
PR tree-optimization/112909
* tree-ssa-uninit.cc (find_uninit_use): Look through a
single level of SSA name copies with single use.
Jiahao Xu [Wed, 29 Nov 2023 03:16:59 +0000 (11:16 +0800)]
LoongArch: Fix lsx-vshuf.c and lasx-xvshuf_b.c tests fail on LA664 [PR112611]
For [x]vshuf instructions, if the index value in the selector exceeds 63, it triggers
undefined behavior on LA464, but not on LA664. To ensure compatibility of these two
tests on both LA464 and LA664, we have modified both tests to ensure that the index
value in the selector does not exceed 63.
gcc/testsuite/ChangeLog:
PR target/112611
* gcc.target/loongarch/vector/lasx/lasx-xvshuf_b.c: Ensure the index is
less than 64.
* gcc.target/loongarch/vector/lsx/lsx-vshuf.c: Ditto.
Jiahao Xu [Wed, 6 Dec 2023 07:04:53 +0000 (15:04 +0800)]
LoongArch: Vectorized loop unrolling is disabled for divf/sqrtf/rsqrtf when -mrecip is enabled.
Using -mrecip generates a sequence of instructions to replace divf, sqrtf and rsqrtf. The number
of generated instructions is close to or exceeds the maximum number of instructions that can be
issued per cycle on LoongArch, so vectorized loop unrolling is not performed on them.
gcc/ChangeLog:
* config/loongarch/loongarch.cc (loongarch_vector_costs::determine_suggested_unroll_factor):
If m_has_recip is true, return 1 as the suggested unroll factor.
(loongarch_vector_costs::add_stmt_cost): Detect the use of approximate instruction sequence.
Jiahao Xu [Wed, 6 Dec 2023 07:04:52 +0000 (15:04 +0800)]
LoongArch: New options -mrecip and -mrecip= with ffast-math.
When both the -mrecip and -mfrecipe options are enabled, use approximate reciprocal
instructions and approximate reciprocal square root instructions with additional
Newton-Raphson steps to implement single precision floating-point division, square
root and reciprocal square root operations, for better performance.
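A hedged sketch of the kind of scalar code affected (flags per this commit; the exact instruction selection depends on the target):
  /* With -ffast-math -mrecip -mfrecipe, operations like these may be
     expanded via frecipe.s/frsqrte.s plus Newton-Raphson refinement
     steps instead of fdiv.s/fsqrt.s.  */
  float scalar_div (float a, float b) { return a / b; }
  float scalar_rsqrt (float a)        { return 1.0f / __builtin_sqrtf (a); }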
gcc/ChangeLog:
* config/loongarch/genopts/loongarch.opt.in (recip_mask): New variable.
(-mrecip, -mrecip=): New options.
* config/loongarch/lasx.md (div<mode>3): New expander.
(*div<mode>3): Rename.
(sqrt<mode>2): New expander.
(*sqrt<mode>2): Rename.
(rsqrt<mode>2): New expander.
* config/loongarch/loongarch-protos.h (loongarch_emit_swrsqrtsf): New prototype.
(loongarch_emit_swdivsf): Ditto.
* config/loongarch/loongarch.cc (loongarch_option_override_internal): Set
recip_mask for -mrecip and -mrecip= options.
(loongarch_emit_swrsqrtsf): New function.
(loongarch_emit_swdivsf): Ditto.
* config/loongarch/loongarch.h (RECIP_MASK_NONE, RECIP_MASK_DIV, RECIP_MASK_SQRT,
RECIP_MASK_RSQRT, RECIP_MASK_VEC_DIV, RECIP_MASK_VEC_SQRT, RECIP_MASK_VEC_RSQRT,
RECIP_MASK_ALL): New bitmasks.
(TARGET_RECIP_DIV, TARGET_RECIP_SQRT, TARGET_RECIP_RSQRT, TARGET_RECIP_VEC_DIV,
TARGET_RECIP_VEC_SQRT, TARGET_RECIP_VEC_RSQRT): New tests.
* config/loongarch/loongarch.md (sqrt<mode>2): New expander.
(*sqrt<mode>2): Rename.
(rsqrt<mode>2): New expander.
* config/loongarch/loongarch.opt (recip_mask): New variable.
(-mrecip, -mrecip=): New options.
* config/loongarch/lsx.md (div<mode>3): New expander.
(*div<mode>3): Rename.
(sqrt<mode>2): New expander.
(*sqrt<mode>2): Rename.
(rsqrt<mode>2): New expander.
* config/loongarch/predicates.md (reg_or_vecotr_1_operand): New predicate.
* doc/invoke.texi (LoongArch Options): Document new options.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/divf.c: New test.
* gcc.target/loongarch/recip-divf.c: New test.
* gcc.target/loongarch/recip-sqrtf.c: New test.
* gcc.target/loongarch/sqrtf.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-divf.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-recip-divf.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-recip-sqrtf.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-recip.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-sqrtf.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-divf.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-recip-divf.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-recip-sqrtf.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-recip.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-sqrtf.c: New test.
Jiahao Xu [Wed, 6 Dec 2023 07:04:51 +0000 (15:04 +0800)]
LoongArch: Redefine pattern for xvfrecip/vfrecip instructions.
Redefine the pattern for [x]vfrecip instructions to use an rtx code instead of an unspec,
and enable [x]vfrecip instructions to be generated during auto-vectorization.
gcc/ChangeLog:
* config/loongarch/lasx.md (lasx_xvfrecip_<flasxfmt>): Renamed to ..
(recip<mode>3): .. this.
* config/loongarch/loongarch-builtins.cc (CODE_FOR_lsx_vfrecip_d): Redefine
to new pattern name.
(CODE_FOR_lsx_vfrecip_s): Ditto.
(CODE_FOR_lasx_xvfrecip_d): Ditto.
(CODE_FOR_lasx_xvfrecip_s): Ditto.
(loongarch_expand_builtin_direct): For the vector recip instructions, construct a
temporary parameter const1_vector.
* config/loongarch/lsx.md (lsx_vfrecip_<flsxfmt>): Renamed to ..
(recip<mode>3): .. this.
* config/loongarch/predicates.md (const_vector_1_operand): New predicate.
Jiahao Xu [Wed, 6 Dec 2023 07:04:49 +0000 (15:04 +0800)]
LoongArch: Add support for LoongArch V1.1 approximate instructions.
This patch adds define_insns, builtins and intrinsics for these instructions, and adds the
option -mfrecipe to control instruction generation.
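A hedged usage sketch of the new scalar intrinsics (prototypes assumed to take and return float):
  #include <larchintrin.h>
  /* Requires -mfrecipe; results are approximate and are meant to be
     refined by Newton-Raphson steps where full precision is needed.  */
  float recip_est (float x) { return __frecipe_s (x); }  /* ~ 1/x       */
  float rsqrt_est (float x) { return __frsqrte_s (x); }  /* ~ 1/sqrt(x) */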
gcc/ChangeLog:
* config/loongarch/genopts/isa-evolution.in (frecipe): Add.
* config/loongarch/larchintrin.h (__frecipe_s): New intrinsic.
(__frecipe_d): Ditto.
(__frsqrte_s): Ditto.
(__frsqrte_d): Ditto.
* config/loongarch/lasx.md (lasx_xvfrecipe_<flasxfmt>): New insn pattern.
(lasx_xvfrsqrte_<flasxfmt>): Ditto.
* config/loongarch/lasxintrin.h (__lasx_xvfrecipe_s): New intrinsic.
(__lasx_xvfrecipe_d): Ditto.
(__lasx_xvfrsqrte_s): Ditto.
(__lasx_xvfrsqrte_d): Ditto.
* config/loongarch/loongarch-builtins.cc (AVAIL_ALL): Add predicates.
(LSX_EXT_BUILTIN): New macro.
(LASX_EXT_BUILTIN): Ditto.
* config/loongarch/loongarch-cpucfg-map.h: Regenerate.
* config/loongarch/loongarch-c.cc: Add builtin macro "__loongarch_frecipe".
* config/loongarch/loongarch-def.cc: Regenerate.
* config/loongarch/loongarch-str.h (OPTSTR_FRECIPE): Regenerate.
* config/loongarch/loongarch.cc (loongarch_asm_code_end): Dump status for TARGET_FRECIPE.
* config/loongarch/loongarch.md (loongarch_frecipe_<fmt>): New insn pattern.
(loongarch_frsqrte_<fmt>): Ditto.
* config/loongarch/loongarch.opt: Regenerate.
* config/loongarch/lsx.md (lsx_vfrecipe_<flsxfmt>): New insn pattern.
(lsx_vfrsqrte_<flsxfmt>): Ditto.
* config/loongarch/lsxintrin.h (__lsx_vfrecipe_s): New intrinsic.
(__lsx_vfrecipe_d): Ditto.
(__lsx_vfrsqrte_s): Ditto.
(__lsx_vfrsqrte_d): Ditto.
* doc/extend.texi: Add documentation for LoongArch new builtins and intrinsics.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/larch-frecipe-builtin.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-frecipe-builtin.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-frecipe-builtin.c: New test.
Richard Biener [Wed, 6 Dec 2023 10:33:10 +0000 (11:33 +0100)]
Shrink out-of-SSA dump
The following removes the second GIMPLE function dump after
remove_ssa_form, which used to rewrite the IL with the coalescing
result but hasn't done so for a long time now.
* tree-outof-ssa.cc (rewrite_out_of_ssa): Dump GIMPLE once only,
after final IL adjustments.
Pan Li [Fri, 8 Dec 2023 06:48:48 +0000 (14:48 +0800)]
RISC-V: Fix ICE for incorrect mode attr in V_F2DI_CONVERT_BRIDGE
The mode attr V_F2DI_CONVERT_BRIDGE converts a floating-point mode
to the widened floating-point mode by design, but we had (RVVM1HF "RVVM2SI")
by mistake.
This patch fixes it by replacing
(RVVM1HF "RVVM2SI") with (RVVM1HF "RVVM2SF") as designed.
gcc/ChangeLog:
* config/riscv/vector-iterators.md: Replace RVVM2SI with RVVM2SF
for mode attr V_F2DI_CONVERT_BRIDGE.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/unop/math-lroundf16-rv64-ice-1.c: New test.
Jiahao Xu [Fri, 17 Nov 2023 09:00:21 +0000 (17:00 +0800)]
LoongArch: Add support for xorsign.
This patch adds support for the xorsign pattern for scalar fp and vectors. The
new expanders uniformly use vector bitwise logical operations to handle xorsign.
On LoongArch64, floating-point registers and vector registers share the same
register file, so this patch also allows conversion between LSX vector modes and
scalar fp modes to avoid unnecessary instruction generation.
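For reference, one idiom GCC typically recognizes as xorsign (a hedged sketch, not taken from this patch's tests):
  /* a with the sign of b applied: only the sign bit of b matters, so
     the whole thing reduces to bitwise logical operations.  */
  float xorsignf (float a, float b)
  {
    return a * __builtin_copysignf (1.0f, b);
  }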
gcc/ChangeLog:
* config/loongarch/lasx.md (xorsign<mode>3): New expander.
* config/loongarch/loongarch.cc (loongarch_can_change_mode_class): Allow
conversion between LSX vector mode and scalar fp mode.
* config/loongarch/loongarch.md (@xorsign<mode>3): New expander.
* config/loongarch/lsx.md (@xorsign<mode>3): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lasx/lasx-xorsign-run.c: New test.
* gcc.target/loongarch/vector/lasx/lasx-xorsign.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-xorsign-run.c: New test.
* gcc.target/loongarch/vector/lsx/lsx-xorsign.c: New test.
* gcc.target/loongarch/xorsign-run.c: New test.
* gcc.target/loongarch/xorsign.c: New test.
Jakub Jelinek [Fri, 8 Dec 2023 08:03:18 +0000 (09:03 +0100)]
lower-bitint: Avoid merging non-mergeable stmt with cast and mergeable stmt [PR112902]
Before bitint lowering, the IL has:
b.0_1 = b;
_2 = -b.0_1;
_3 = (unsigned _BitInt(512)) _2;
a.1_4 = a;
a.2_5 = (unsigned _BitInt(512)) a.1_4;
_6 = _3 * a.2_5;
on the first function. Now, gimple_lower_bitint has an optimization
(when not -O0) that it avoids assigning underlying VAR_DECLs for certain
SSA_NAMEs where it is possible to lower it in a single loop (or straight
line code) rather than in multiple loops.
So, e.g. the multiplication above uses handle_operand_addr, which can deal
with INTEGER_CST arguments, loads but also casts, so it is fine
not to assign an underlying VAR_DECL for SSA_NAMEs a.1_4 and a.2_5, as
the multiplication can handle it fine.
The more problematic case is the other multiplication operand.
It is again a result of a (in this case narrowing) cast, so it is fine
not to assign VAR_DECL for _3. Normally we can merge the load (b.0_1)
with the negation (_2) and even with the following cast (_3). If _3
was used in a mergeable operation like addition, subtraction, negation,
&|^ or equality comparison, all of b.0_1, _2 and _3 could be without
underlying VAR_DECLs.
The problem is that the current code does that even when the cast is used
by a non-mergeable operation, and handle_operand_addr certainly can't handle
the mergeable operations feeding the rhs1 of the cast: for multiplication
we don't emit any loop in which it could appear, and for other operations like
shifts or non-equality comparisons we emit loops, but either in the reverse
direction or with unpredictable indexes (for shifts).
So, in order to lower the above correctly, we need to have an underlying
VAR_DECL for either _2 or _3; if we choose _2, then the load and negation
would be done in one loop and extension handled as part of the
multiplication, if we choose _3, then the load, negation and cast are done
in one loop and the multiplication just uses the underlying VAR_DECL
computed by that.
It is far easier to do this for _3, which is what the following patch
implements.
It actually already had code for most of it, just it did that for widening
casts only (optimize unless the cast rhs1 is not an SSA_NAME, or is an SSA_NAME
defined in some other bb, or with more than one use, etc.).
The patch now falls through into such code even for narrowing or same-precision
casts, unless the cast is used in a mergeable operation.
2023-12-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/112902
* gimple-lower-bitint.cc (gimple_lower_bitint): For a narrowing
or same precision cast don't set SSA_NAME_VERSION in m_names only
if use_stmt is mergeable_op or fall through into the check that
use is a store or rhs1 is not mergeable or other reasons prevent
merging.
Jakub Jelinek [Fri, 8 Dec 2023 08:02:15 +0000 (09:02 +0100)]
vr-values: Avoid ICEs on large _BitInt cast to floating point [PR112901]
For casts from integers to floating point,
simplify_float_conversion_using_ranges uses SCALAR_INT_TYPE_MODE
and queries optabs on the optimization it wants to make.
That doesn't really work for large/huge BITINT_TYPE; those have BLKmode,
which is not a scalar int mode. Querying an optab is not useful for that
either.
I think it is best to just skip this optimization for those bitints,
after all, bitint lowering uses ranges already to determine minimum
precision for bitint operands of the integer to float casts.
2023-12-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/112901
* vr-values.cc
(simplify_using_ranges::simplify_float_conversion_using_ranges):
Return false if rhs1 has BITINT_TYPE type with BLKmode TYPE_MODE.
Jakub Jelinek [Fri, 8 Dec 2023 07:56:33 +0000 (08:56 +0100)]
haifa-sched: Avoid overflows in extend_h_i_d [PR112411]
On Thu, Dec 07, 2023 at 09:36:23AM +0100, Jakub Jelinek wrote:
> Without the dg-skip-if I got on 64-bit host with
> -O3 --param min-nondebug-insn-uid=0x40000000:
> cc1: out of memory allocating 571230784744 bytes after a total of 2772992 bytes
I've looked at this and the problem is in haifa-sched.cc:
9047 h_i_d.safe_grow_cleared (3 * get_max_uid () / 2, true);
get_max_uid () is 0x4000024d with the --param min-nondebug-insn-uid=0x40000000
and so 3 * get_max_uid () / 2 actually overflows to -536870028 but as vec.h
then treats the value as unsigned, it attempts to allocate
0xe0000374U * 152UL bytes, i.e. those 532GB. If the above is fixed to do
3U * get_max_uid () / 2 instead, it will get slightly better and will only
need 0x60000373U * 152UL bytes, i.e. 228GB.
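A small demonstration of the arithmetic (values from the analysis above):
  void sketch (void)
  {
    int uid = 0x4000024d;            /* get_max_uid () in the report */
    /* 3 * uid in int overflows to a negative value, which vec.h then
       treats as a huge unsigned element count.  */
    unsigned fixed = 3U * uid / 2;   /* unsigned arithmetic: 0x60000373 */
    (void) fixed;
  }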
Perhaps more could be helped by making the vector indirect (contain pointers
to haifa_insn_data_def rather than the structures themselves) and pool allocating
those, but the more important question is how sparse uids are in normal
compilations without those large --param min-nondebug-insn-uid= parameters.
Because if they aren't sparse, such a change would increase compile time memory
just to help the unusual case.
2023-12-08 Jakub Jelinek <jakub@redhat.com>
PR middle-end/112411
* haifa-sched.cc (extend_h_i_d): Use 3U instead of 3 in
3 * get_max_uid () / 2 calculation.
Lulu Cheng [Fri, 1 Dec 2023 03:51:51 +0000 (11:51 +0800)]
LoongArch: Remove the definition of ISA_BASE_LA64V110 from the code.
The instructions defined in the LoongArch Reference Manual v1.1 do not form
the v1.1 version of the instruction set. CPUs defined later may only support
some of the instructions in the LoongArch Reference Manual v1.1. Therefore,
the macro ISA_BASE_LA64V110 and related definitions are removed here.
gcc/ChangeLog:
* config/loongarch/genopts/loongarch-strings: Delete STR_ISA_BASE_LA64V110.
* config/loongarch/genopts/loongarch.opt.in: Likewise.
* config/loongarch/loongarch-cpu.cc (ISA_BASE_LA64V110_FEATURES): Delete macro.
(fill_native_cpu_config): Define a new variable hw_isa_evolution to record
the extended instruction set support read from cpucfg.
* config/loongarch/loongarch-def.cc: Set evolution at initialization.
* config/loongarch/loongarch-def.h (ISA_BASE_LA64V100): Delete.
(ISA_BASE_LA64V110): Likewise.
(N_ISA_BASE_TYPES): Likewise.
(defined): Likewise.
* config/loongarch/loongarch-opts.cc: Likewise.
* config/loongarch/loongarch-opts.h (TARGET_64BIT): Likewise.
(ISA_BASE_IS_LA64V110): Likewise.
* config/loongarch/loongarch-str.h (STR_ISA_BASE_LA64V110): Likewise.
* config/loongarch/loongarch.opt: Regenerate.
Xi Ruoyao [Fri, 1 Dec 2023 02:09:33 +0000 (10:09 +0800)]
LoongArch: Switch loongarch-def from C to C++ to make it possible.
We'll use HOST_WIDE_INT in LoongArch static properties in following patches.
To keep the same readability as C99 designated initializers, create a
std::array-like data structure with a position setter function, and add
field setter functions for the structs used in loongarch-def.cc.
Remove unneeded guards #if
!defined(IN_LIBGCC2) && !defined(IN_TARGET_LIBS) && !defined(IN_RTS)
in loongarch-def.h and loongarch-opts.h.
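A hedged C++ sketch of such a structure (names illustrative, not the actual loongarch_def_array interface):
  #include <cstddef>
  template <typename T, std::size_t N>
  struct def_array
  {
    T data[N] {};
    /* Position setter: keeps initialization as readable as a C99
       designated initializer, e.g. arr.set (CPU_LA464, value).  */
    constexpr def_array &set (std::size_t i, T v)
    { data[i] = v; return *this; }
    constexpr const T &operator[] (std::size_t i) const { return data[i]; }
  };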
gcc/ChangeLog:
* config/loongarch/loongarch-def.h: Remove extern "C".
(loongarch_isa_base_strings): Declare as loongarch_def_array
instead of plain array.
(loongarch_isa_ext_strings): Likewise.
(loongarch_abi_base_strings): Likewise.
(loongarch_abi_ext_strings): Likewise.
(loongarch_cmodel_strings): Likewise.
(loongarch_cpu_strings): Likewise.
(loongarch_cpu_default_isa): Likewise.
(loongarch_cpu_issue_rate): Likewise.
(loongarch_cpu_multipass_dfa_lookahead): Likewise.
(loongarch_cpu_cache): Likewise.
(loongarch_cpu_align): Likewise.
(loongarch_cpu_rtx_cost_data): Likewise.
(loongarch_isa): Add a constructor and field setter functions.
* config/loongarch/loongarch-opts.h (loongarch-defs.h): Do not
include for target libraries.
* config/loongarch/loongarch-opts.cc: Comment code that doesn't
run and causes compilation errors.
* config/loongarch/loongarch-tune.h (LOONGARCH_TUNE_H): Likewise.
(struct loongarch_rtx_cost_data): Likewise.
(struct loongarch_cache): Likewise.
(struct loongarch_align): Likewise.
* config/loongarch/t-loongarch: Compile loongarch-def.cc with the
C++ compiler.
* config/loongarch/loongarch-def-array.h: New file for a
std:array like data structure with position setter function.
* config/loongarch/loongarch-def.c: Rename to ...
* config/loongarch/loongarch-def.cc: ... here.
(loongarch_cpu_strings): Define as loongarch_def_array instead
of plain array.
(loongarch_cpu_default_isa): Likewise.
(loongarch_cpu_cache): Likewise.
(loongarch_cpu_align): Likewise.
(loongarch_cpu_rtx_cost_data): Likewise.
(loongarch_cpu_issue_rate): Likewise.
(loongarch_cpu_multipass_dfa_lookahead): Likewise.
(loongarch_isa_base_strings): Likewise.
(loongarch_isa_ext_strings): Likewise.
(loongarch_abi_base_strings): Likewise.
(loongarch_abi_ext_strings): Likewise.
(loongarch_cmodel_strings): Likewise.
(abi_minimal_isa): Likewise.
(loongarch_rtx_cost_optimize_size): Use field setter functions
instead of designated initializers.
(loongarch_rtx_cost_data): Implement default constructor.
Jakub Jelinek [Fri, 8 Dec 2023 07:29:44 +0000 (08:29 +0100)]
Add IntegerRange for -param=min-nondebug-insn-uid= and fix vector growing in LRA and vec [PR112411]
As documented, --param min-nondebug-insn-uid= is very useful in debugging
-fcompare-debug issues in RTL dumps, without it it is really hard to
find differences. With it, DEBUG_INSNs generally use low INSN_UIDs
(1+) and non-DEBUG_INSNs use INSN_UIDs from the parameter up.
For good results, the parameter should be larger than the number of
DEBUG_INSNs in all or at least problematic functions, so I typically
use --param min-nondebug-insn-uid=10000 or --param
min-nondebug-insn-uid=1000.
The PR is about using --param min-nondebug-insn-uid=2147483647 (similar
behavior can be achieved with that minus some epsilon);
INSN_UIDs for the non-debug insns then wrap around and, as they are signed,
all kinds of things break.
option, but functions containing more than 2147483647 insns usually don't
compile much earlier due to getting out of memory.
As it is a debugging option, I'd prefer not to impose any drastically small
limits on it, because if a function has a lot of DEBUG_INSNs it is useful
to still start above them; otherwise the allocation of uids will DTRT
even for DEBUG_INSNs, but there will then be differences in non-DEBUG_INSN
allocations.
So, the following patch uses 0x40000000 limit, half the maximum amount for
DEBUG_INSNs and half for non-DEBUG_INSNs, it will still result in very
unlikely overflows in real world.
Note, using large min-nondebug-insn-uid is very expensive for compile time
memory and compile time, because DF as well as various RTL passes use
arrays indexed by INSN_UIDs, e.g. LRA with sizeof (void *) elements,
ditto df (df->insns).
Now, in LRA I've already run into ICEs with
--param min-nondebug-insn-uid=0x2aaaaaaa
on a 64-bit host. It uses custom vector management and wants to grow the
allocation 1.5x when growing, but all this computation is done in int,
so already 0x2aaaaaab * 3 / 2 + 1 overflows to a negative value. And
unlike the vec.cc growing code, which uses an unsigned int type for the above
(and the + 1 is not there), it also doesn't make sure, if there is an
overflow, that it allocates at least as much as needed; vec.cc
does
  if ...
  else
    /* Grow slower when large.  */
    alloc = (alloc * 3 / 2);
  /* If this is still too small, set it to the right size. */
  if (alloc < desired)
    alloc = desired;
so even if there is overflow during the * 1.5 computation, as long as
desired is still representable in the range of the alloc counter
(31 bits in both vec.h and LRA), it doesn't grow exponentially but
at least works for the current value.
The patch now uses
  lra_insn_recog_data_len = index * 3U / 2;
  if (lra_insn_recog_data_len <= index)
    lra_insn_recog_data_len = index + 1;
there, basically doing what vec.cc does. I thought we could do better for
both vec.cc and LRA on 64-bit hosts even without growing the allocated
counters, but now that I look at it again, perhaps we can't.
The above overflows already with original alloc or lra_insn_recog_data_len
0x55555556, where 0x55555555 * 3U / 2 is still 0x7fffffff
and so representable in 32 bits, but 0x55555556 * 3U / 2 is
1. I thought that we could use alloc * (size_t) 3 / 2 so that on 64-bit
hosts it wouldn't overflow that quickly, but 0x55555556 * (size_t) 3 / 2
there is 0x80000001 which is still ok in unsigned, but given that vec.h
then stores the counter into unsigned m_alloc:31; bit-field, it is too much.
With the lra.cc change, one can actually compile a simple function
with -O0 on a 64-bit host with --param min-nondebug-insn-uid=0x40000000
(i.e. the new limit), but it already needed quite a big part of my 32GB
RAM + 24GB swap.
The patch adds a dg-skip-if for that case though, because such an option
is way too much for 32-bit hosts even at -O0 with an empty function,
and with -O3 on a longer function it is too much for an average 64-bit host
as well. Without the dg-skip-if I got on a 64-bit host:
cc1: out of memory allocating 571230784744 bytes after a total of 2772992 bytes
and
cc1: out of memory allocating 1388 bytes after a total of 2002944 bytes
on 32-bit host. A test requiring more than 532GB of RAM on 64-bit hosts
is just too much for our testsuite.
2023-12-08 Jakub Jelinek <jakub@redhat.com>
PR middle-end/112411
* params.opt (-param=min-nondebug-insn-uid=): Add
IntegerRange(0, 1073741824).
* lra.cc (check_and_expand_insn_recog_data): Use 3U rather than 3
in * 3 / 2 computation and if the result is smaller or equal to
index, use index + 1.
* gcc.dg/params/blocksort-part.c: Add dg-skip-if for
--param min-nondebug-insn-uid=1073741824.
Haochen Jiang [Fri, 10 Nov 2023 02:03:37 +0000 (10:03 +0800)]
i386: Mark Xeon Phi ISAs as deprecated
Since the Knight Landing and Knight Mill microarchitectures are EOL, we
would like to remove their support in GCC 15. In GCC 14, we will first
emit a warning for their usage.
gcc/ChangeLog:
* config/i386/driver-i386.cc (host_detect_local_cpu):
Do not append "-mno-" for Xeon Phi ISAs.
* config/i386/i386-options.cc (ix86_option_override_internal):
Emit a warning for KNL/KNM targets.
* config/i386/i386.opt: Emit a warning for Xeon Phi ISAs.
Hao Liu [Wed, 6 Dec 2023 06:52:19 +0000 (14:52 +0800)]
tree-optimization/112774: extend the SCEV CHREC tree with a nonwrapping flag
The flag is defined as CHREC_NOWRAP(tree), and will be dumped from
"{offset, +, 1}_1" to "{offset, +, 1}<nw>_1" (nw is short for nonwrapping).
Two SCEV interfaces record_nonwrapping_chrec and nonwrapping_chrec_p are
added to set and check the flag respectively.
As resetting the SCEV cache (i.e., the chrec trees) may not reset the
loop->estimate_state, free_numbers_of_iterations_estimates is called
explicitly in loop vectorization to make sure the flag can be
calculated properly by niter.
gcc/ChangeLog:
PR tree-optimization/112774
* tree-pretty-print.cc: If the nonwrapping flag is set, the chrec will be
printed with additional <nw> info.
* tree-scalar-evolution.cc: Add record_nonwrapping_chrec and
nonwrapping_chrec_p to set and check the new flag respectively.
* tree-scalar-evolution.h: Likewise.
* tree-ssa-loop-niter.cc (idx_infer_loop_bounds,
infer_loop_bounds_from_pointer_arith, infer_loop_bounds_from_signedness,
scev_probably_wraps_p): Call record_nonwrapping_chrec before
record_nonwrapping_iv, call nonwrapping_chrec_p to check the flag is
set and return false from scev_probably_wraps_p.
* tree-vect-loop.cc (vect_analyze_loop): Call
free_numbers_of_iterations_estimates explicitly.
* tree-core.h: Document the nothrow_flag usage in CHREC_NOWRAP.
* tree.h: Add CHREC_NOWRAP(NODE); base.nothrow_flag is used to
represent the nonwrapping info.
David Malcolm [Fri, 8 Dec 2023 00:42:45 +0000 (19:42 -0500)]
analyzer: fix ICE for 2 bits before the start of base region [PR112889]
Concrete bindings were using -1 and -2 in the offset field to signify
deleted and empty hash slots, but these are valid values, leading to
assertion failures inside hash_map::put on a debug build, and probable
bugs in a release build.
Fix this by using the size field rather than the offset.
gcc/analyzer/ChangeLog:
PR analyzer/112889
* store.h (concrete_binding::concrete_binding): Strengthen
assertion to require the size to be positive, rather than just
non-zero.
(concrete_binding::mark_deleted): Use size rather than start bit
offset.
(concrete_binding::mark_empty): Likewise.
(concrete_binding::is_deleted): Likewise.
(concrete_binding::is_empty): Likewise.
gcc/testsuite/ChangeLog:
PR analyzer/112889
* c-c++-common/analyzer/ice-pr112889.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Juzhe-Zhong [Thu, 7 Dec 2023 22:09:10 +0000 (06:09 +0800)]
RISC-V: Support interleave vector with different step sequence
This patch fixes 64 ICEs in full coverage testing, since they all happen for the same reason.
Before this patch:
internal compiler error: in expand_const_vector, at config/riscv/riscv-v.cc:1270
appears 400 times in the full coverage testing report.
The root cause is that we didn't support interleaved vectors with different steps.
Here is the story:
We already supported interleaving with a single common step, that is:
e.g. v = { 0, 100, 2, 102, 4, 104, ... }
This sequence can be interpreted as a vector interleaving 2 separate sequences:
sequence1 = { 0, 2, 4, ... } and sequence2 = { 100, 102, 104, ... }.
Their steps are both 2.
However, we didn't support interleaved vectors whose sequences have different
steps, which caused ICEs in such situations.
This patch supports interleaved vectors with different steps for the following
2 situations:
1. When vector can be extended EEW:
Case 1: { 0, 0, 1, 0, 2, 0, ... }
It's interleaved from sequence1 = { 0, 1, 2, ... } and sequence2 = { 0, 0, 0, ... }.
Suppose the original vector can be extended EEW, e.g. mode = RVVM1SImode.
Then such interleaved vector can be achieved with { 1, 2, 3, ... } with RVVM1DImode.
So, for this situation the codegen is pretty efficient and clean.
gcc/ChangeLog:
* config/riscv/riscv-protos.h (expand_vec_series): Adapt function.
* config/riscv/riscv-v.cc (rvv_builder::double_steps_npatterns_p): New function.
(expand_vec_series): Adapt function.
(expand_const_vector): Support new interleave vector with different step.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/slp-interleave-1.c: New test.
* gcc.target/riscv/rvv/autovec/slp-interleave-2.c: New test.
* gcc.target/riscv/rvv/autovec/slp-interleave-3.c: New test.
* gcc.target/riscv/rvv/autovec/slp-interleave-4.c: New test.
Jonathan Wakely [Thu, 7 Dec 2023 12:40:18 +0000 (12:40 +0000)]
libstdc++: Fix misleading typedef name in <format>
This local typedef for uintptr_t was accidentally named uint64_t,
probably from a careless code completion shortcut. We don't need the
typedef at all since it's only used once. Just use __UINTPTR_TYPE__
directly instead.
libstdc++-v3/ChangeLog:
* include/std/format (_Iter_sink<charT, contiguous_iterator>):
Remove uint64_t local type.
Jonathan Wakely [Thu, 7 Dec 2023 11:00:02 +0000 (11:00 +0000)]
libstdc++: Use <cstdint> instead of <stdint.h> in <bits/atomic_wait.h>
In r14-5922-g6c8f2d3a08bc01 I added <stdint.h> to <bits/atomic_wait.h>,
so that uintptr_t is declared if that header is compiled as a header
unit. I used <stdint.h> because that's what <atomic> already includes,
so it seemed simpler to be consistent. However, this means that name
lookup for uintptr_t in <bits/atomic_wait.h> depends on whether
<cstdint> has been included by another header first. Whether name lookup
finds std::uintptr_t or ::uintptr_t will depend on include order. This
causes problems when compiling modules with Clang:
bits/atomic_wait.h:251:7: error: 'std::__detail::__waiter_pool_base' has different definitions in different modules; first difference is defined here found method '_S_for' with body
_S_for(const void* __addr) noexcept
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bits/atomic_wait.h:251:7: note: but in 'tm.<global>' found method '_S_for' with different body
_S_for(const void* __addr) noexcept
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By including <cstdint> we would ensure that name lookup always finds the
name in namespace std. Alternatively, we can stop including <stdint.h>
for those types, so that we don't declare the entire contents of
<stdint.h> when we only need a couple of types from it. This patch does
the former, which is appropriate for backporting.
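The following sketch (illustrative only, with a stand-in namespace; not
the library code) shows the difference the header choice makes:

  // <cstdint> guarantees that std::uintptr_t is declared; <stdint.h>
  // only guarantees ::uintptr_t. With <stdint.h>, an unqualified
  // 'uintptr_t' inside a std-like namespace can bind to either name
  // depending on which headers were included first, so the same
  // function can end up spelled differently across module boundaries.
  #include <cstdint>

  namespace demo // stand-in for std::__detail
  {
    inline std::uintptr_t
    key_for (const void* addr) noexcept
    { return reinterpret_cast<std::uintptr_t> (addr); }
  }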
libstdc++-v3/ChangeLog:
* include/bits/atomic_wait.h: Include <cstdint> instead of
<stdint.h>.
Jonathan Wakely [Wed, 6 Dec 2023 17:21:29 +0000 (17:21 +0000)]
libstdc++: Fix recent changes to __glibcxx_assert [PR112882]
The changes in r14-6198-g5e8a30d8b8f4d7 were broken, as I used
_GLIBCXX17_CONSTEXPR for the 'if _GLIBCXX17_CONSTEXPR (true)' condition,
forgetting that it would also be used for the is_constant_evaluated()
check. Using 'if constexpr (std::is_constant_evaluated())' is a bug.
Additionally, relying on __glibcxx_assert_fail to give a "not a constant
expression" error is a problem because at -O0 an undefined reference to
__glibcxx_assert_fail is present in the compiled code. This means you
can't use libstdc++ headers without also linking to libstdc++ for the
symbol definition.
This fix rewrites the __glibcxx_assert macro again. This still avoids
doing the duplicate checks, once for constexpr and once at runtime (if
_GLIBCXX_ASSERTIONS is defined). When _GLIBCXX_ASSERTIONS is defined we
still rely on __glibcxx_assert_fail to give a "not a constant
expression" error during constant evaluation (because when assertions
are defined it's not a problem to emit a reference to the symbol). But
when that macro is not defined, we use a new inline (but not constexpr)
overload of __glibcxx_assert_fail to cause compilation to fail. That
inline function doesn't cause an undefined reference to a symbol in the
library (and will be optimized away anyway).
We can also add always_inline to the __is_constant_evaluated function,
although this doesn't actually matter for -O0 and it's always inlined
with any optimization enabled.
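A minimal re-creation of the pattern (hypothetical names; the real
macro in <bits/c++config> is more involved):

  // Deliberately NOT constexpr: calling it during constant evaluation
  // makes the enclosing expression non-constant, which surfaces as a
  // compile error; at runtime it is inline and optimizes away, so no
  // out-of-line symbol is referenced.
  inline void assert_fail_constexpr_only () { }

  #define DEMO_ASSERT(cond)                                     \
    do {                                                         \
      if (__builtin_is_constant_evaluated () && !bool (cond))    \
        assert_fail_constexpr_only ();                           \
    } while (false)

  constexpr int
  checked_square (int i)
  {
    DEMO_ASSERT (i < 46341); // 46341 * 46341 overflows 32-bit int
    return i * i;
  }

  static_assert (checked_square (3) == 9, "");
  // constexpr int bad = checked_square (50000); // not a constant expression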
libstdc++-v3/ChangeLog:
PR libstdc++/112882
* include/bits/c++config (__is_constant_evaluated): Add
always_inline attribute.
(_GLIBCXX_DO_ASSERT): Remove macro.
(__glibcxx_assert): Define separately for assertions-enabled and
constexpr-only cases.
This pass adds a simple register allocator for FP & SIMD registers.
Its main purpose is to make use of SME2's strided LD1, ST1 and LUTI2/4
instructions, which require a very specific grouping structure,
and so would be difficult to exploit with general allocation.
The allocator is very simple. It gives up on anything that would
require spilling, or that it might not handle well for other reasons.
The allocator needs to track liveness at the level of individual FPRs.
Doing that fixes a lot of the PRs relating to redundant moves caused by
structure loads and stores. That particular problem is going to be
fixed more generally for GCC 15 by Lehua's RA patches.
However, the early-RA pass runs before scheduling, so it has a chance
to bag a spill-free allocation of vector code before the scheduler moves
things around. It could therefore still be useful for non-SME code
(e.g. for hand-scheduled ACLE code) even after Lehua's patches are in.
The pass is controlled by a tristate switch:
- -mearly-ra=all: run on all functions
- -mearly-ra=strided: run on functions that have access to strided registers
- -mearly-ra=none: don't run on any function
The patch makes -mearly-ra=all the default at -O2 and above for now.
We can revisit this for GCC 15 once Lehua's patches are in;
-mearly-ra=strided might then be more appropriate.
As said previously, the pass is very naive. There's much more that we
could do, such as handling invariants better. The main focus is on not
committing to a bad allocation, rather than on handling as much as
possible.
gcc/
PR rtl-optimization/106694
PR rtl-optimization/109078
PR rtl-optimization/109391
* config.gcc: Add aarch64-early-ra.o for AArch64 targets.
* config/aarch64/t-aarch64 (aarch64-early-ra.o): New rule.
* config/aarch64/aarch64-opts.h (aarch64_early_ra_scope): New enum.
* config/aarch64/aarch64.opt (mearly-ra): New option.
* doc/invoke.texi: Document it.
* common/config/aarch64/aarch64-common.cc
(aarch_option_optimization_table): Use -mearly-ra=all by
default for -O2 and above.
* config/aarch64/aarch64-passes.def (pass_aarch64_early_ra): New pass.
* config/aarch64/aarch64-protos.h (aarch64_strided_registers_p)
(make_pass_aarch64_early_ra): Declare.
* config/aarch64/aarch64-sme.md (@aarch64_sme_lut<LUTI_BITS><mode>):
Add a stride_type attribute.
(@aarch64_sme_lut<LUTI_BITS><mode>_strided2): New pattern.
(@aarch64_sme_lut<LUTI_BITS><mode>_strided4): Likewise.
* config/aarch64/aarch64-sve-builtins-base.cc (svld1_impl::expand)
(svldnt1_impl::expand, svst1_impl::expand, svstnt1_impl::expand): Handle
new way of defining multi-register loads and stores.
* config/aarch64/aarch64-sve.md (@aarch64_ld1<SVE_FULLx24:mode>)
(@aarch64_ldnt1<SVE_FULLx24:mode>, @aarch64_st1<SVE_FULLx24:mode>)
(@aarch64_stnt1<SVE_FULLx24:mode>): Delete.
* config/aarch64/aarch64-sve2.md (@aarch64_<LD1_COUNT:optab><mode>)
(@aarch64_<LD1_COUNT:optab><mode>_strided2): New patterns.
(@aarch64_<LD1_COUNT:optab><mode>_strided4): Likewise.
(@aarch64_<ST1_COUNT:optab><mode>): Likewise.
(@aarch64_<ST1_COUNT:optab><mode>_strided2): Likewise.
(@aarch64_<ST1_COUNT:optab><mode>_strided4): Likewise.
* config/aarch64/aarch64.cc (aarch64_strided_registers_p): New
function.
* config/aarch64/aarch64.md (UNSPEC_LD1_SVE_COUNT): Delete.
(UNSPEC_ST1_SVE_COUNT, UNSPEC_LDNT1_SVE_COUNT): Likewise.
(UNSPEC_STNT1_SVE_COUNT): Likewise.
(stride_type): New attribute.
* config/aarch64/constraints.md (Uwd, Uwt): New constraints.
* config/aarch64/iterators.md (UNSPEC_LD1_COUNT, UNSPEC_LDNT1_COUNT)
(UNSPEC_ST1_COUNT, UNSPEC_STNT1_COUNT): New unspecs.
(optab): Handle them.
(LD1_COUNT, ST1_COUNT): New iterators.
* config/aarch64/aarch64-early-ra.cc: New file.
gcc/testsuite/
PR rtl-optimization/106694
PR rtl-optimization/109078
PR rtl-optimization/109391
* gcc.target/aarch64/ldp_stp_16.c (cons4_4_float): Tighten expected
output test.
* gcc.target/aarch64/sve/shift_1.c: Allow reversed shifts for .s
as well as .d.
* gcc.target/aarch64/sme/strided_1.c: New test.
* gcc.target/aarch64/pr109078.c: Likewise.
* gcc.target/aarch64/pr109391.c: Likewise.
* gcc.target/aarch64/sve/pr106694.c: Likewise.
Ezra Sitorus [Thu, 7 Dec 2023 15:41:06 +0000 (15:41 +0000)]
arm: vld1_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1 intrinsic for the arm port. This patch adds the
_x4 variants of the vld1 intrinsic.
The previous vld1_x4 has been updated to vld1q_x4 to take into
account that it works with 4-word-length types. vld1_x4 is now
only for 2-word-length types.
ISA documents:
https://developer.arm.com/documentation/ddi0487/latest/
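A short usage sketch (hypothetical user code; vld1_u8_x4 is one of the
intrinsics named below, with its standard ACLE signature):

  #include <arm_neon.h>

  // Load four consecutive doubleword (64-bit) uint8x8_t vectors
  // from p in a single call; the result is a 4-vector structure.
  uint8x8x4_t
  load_bytes_x4 (const uint8_t* p)
  {
    return vld1_u8_x4 (p);
  }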
gcc/ChangeLog:
* config/arm/arm_neon.h
(vld1_u8_x4, vld1_u16_x4, vld1_u32_x4, vld1_u64_x4): New.
(vld1_s8_x4, vld1_s16_x4, vld1_s32_x4, vld1_s64_x4): New.
(vld1_f16_x4, vld1_f32_x4): New.
(vld1_p8_x4, vld1_p16_x4, vld1_p64_x4): New.
(vld1_bf16_x4): New.
(vld1q_types_x4): Updated to use vld1q_x4
from arm_neon_builtins.def
* config/arm/arm_neon_builtins.def
(vld1_x4): Updated entries.
(vld1q_x4): New entries, moved from the old vld1_x4.
* config/arm/neon.md (neon_vld1q_x4<mode>):
Updated from neon_vld1_x4<mode>.
gcc/testsuite/ChangeLog:
* gcc.target/arm/simd/vld1_base_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_bf16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_fp16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_p64_xN_1.c: Add new tests.
Ezra Sitorus [Thu, 7 Dec 2023 15:41:05 +0000 (15:41 +0000)]
arm: vld1_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1 intrinsic for the arm port. This patch adds the
_x3 variants of the vld1 intrinsic.
The previous vld1_x3 has been updated to vld1q_x3 to take into
account that it works with 4-word-length types. vld1_x3 is now
only for 2-word-length types.
ISA documents:
https://developer.arm.com/documentation/ddi0487/latest/
gcc/ChangeLog:
* config/arm/arm_neon.h
(vld1_u8_x3, vld1_u16_x3, vld1_u32_x3, vld1_u64_x3): New.
(vld1_s8_x3, vld1_s16_x3, vld1_s32_x3, vld1_s64_x3): New.
(vld1_f16_x3, vld1_f32_x3): New.
(vld1_p8_x3, vld1_p16_x3, vld1_p64_x3): New.
(vld1_bf16_x3): New.
(vld1q_types_x3): Updated to use vld1q_x3 from
arm_neon_builtins.def
* config/arm/arm_neon_builtins.def
(vld1_x3): Updated entries.
(vld1q_x3): New entries, moved from the old vld1_x3.
* config/arm/neon.md (neon_vld1q_x3<mode>): Updated from
neon_vld1_x3<mode>.
gcc/testsuite/ChangeLog:
* gcc.target/arm/simd/vld1_base_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_bf16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_fp16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_p64_xN_1.c: Add new tests.
Ezra Sitorus [Thu, 7 Dec 2023 15:41:04 +0000 (15:41 +0000)]
arm: vld1_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1 intrinsic for the arm port. This patch adds the
_x2 variants of the vld1 intrinsic.
The previous vld1_x2 has been updated to vld1q_x2 to take into
account that it works with 4-word-length types. vld1_x2 is now
only for 2-word-length types.
ISA documents:
https://developer.arm.com/documentation/ddi0487/latest/
gcc/ChangeLog:
* config/arm/arm_neon.h
(vld1_u8_x2, vld1_u16_x2, vld1_u32_x2, vld1_u64_x2): New.
(vld1_s8_x2, vld1_s16_x2, vld1_s32_x2, vld1_s64_x2): New.
(vld1_f16_x2, vld1_f32_x2): New.
(vld1_p8_x2, vld1_p16_x2, vld1_p64_x2): New.
(vld1_bf16_x2): New.
(vld1q_types_x2): Updated to use vld1q_x2 from
arm_neon_builtins.def
* config/arm/arm_neon_builtins.def
(vld1_x2): Updated entries.
(vld1q_x2): New entries, moved from the old vld1_x2.
* config/arm/neon.md
(neon_vld1<VMEMX2_q>_x2<VDQX:mode>): Updated
from neon_vld1_x2<mode>.
gcc/testsuite/ChangeLog:
* gcc.target/arm/simd/vld1_base_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_bf16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_fp16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_p64_xN_1.c: Add new tests.
Ezra Sitorus [Thu, 7 Dec 2023 15:36:52 +0000 (15:36 +0000)]
arm: vst1q_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1q intrinsic for the arm port. This patch adds the
_x4 variants of the vst1q intrinsic.
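A matching store-side usage sketch (hypothetical user code, assuming
the standard ACLE signature of vst1q_u8_x4):

  #include <arm_neon.h>

  // Store four consecutive quadword (128-bit) uint8x16_t vectors
  // to p in a single call.
  void
  store_bytes_x4 (uint8_t* p, uint8x16x4_t val)
  {
    vst1q_u8_x4 (p, val);
  }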
Ezra Sitorus [Thu, 7 Dec 2023 15:36:51 +0000 (15:36 +0000)]
arm: vst1q_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1q intrinsic for the arm port. This patch adds the
_x3 variants of the vst1q intrinsic.
Ezra Sitorus [Thu, 7 Dec 2023 15:36:50 +0000 (15:36 +0000)]
arm: vst1q_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1q intrinsic for the arm port. This patch adds the
_x2 variants of the vst1q intrinsic.
Ezra Sitorus [Thu, 7 Dec 2023 15:28:44 +0000 (15:28 +0000)]
arm: vst1_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1 intrinsic for the arm port. This patch adds the
_x4 variants of the vst1 intrinsic.
Ezra Sitorus [Thu, 7 Dec 2023 15:28:43 +0000 (15:28 +0000)]
arm: vst1_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1 intrinsic for the arm port. This patch adds the
_x3 variants of the vst1 intrinsic.
Ezra Sitorus [Thu, 7 Dec 2023 15:28:42 +0000 (15:28 +0000)]
arm: vst1_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1 intrinsic for the arm port. This patch adds the
_x2 variants of the vst1 intrinsic.