The following properly guards the simplifications that move
operations into VEC_CONDs, in particular when that changes the
type constraints on this operation.
This needed a genmatch fix which was recording spurious implicit fors
when tcc_comparison is used in a C expression.
PR middle-end/114070
* genmatch.cc (parser::parse_c_expr): Do not record operand
lists but only mark operators used.
* match.pd ((c ? a : b) op (c ? d : e) --> c ? (a op d) : (b op e)):
Properly guard the case of tcc_comparison changing the VEC_COND
value operand type.
Jakub Jelinek [Mon, 26 Feb 2024 06:30:05 +0000 (07:30 +0100)]
i386: Fix up x86_function_profiler -masm=intel support [PR114094]
In my r14-8214 changes I apparently forgot one \n at the end of an instruction.
The corresponding AT&T line looks like:
"1:\tcall\t*%s@GOTPCREL(%%rip)\n"
but the Intel variant was
"1:\tcall\t[QWORD PTR %s@GOTPCREL[rip]]"
and the memory operand size is 1 byte. As a result, the remaining 511
bytes are ignored by GCC. Implement the ldtilecfg and sttilecfg intrinsics
with a pointer to XImode to honor the 512-byte memory block.
gcc/ChangeLog:
PR target/114098
* config/i386/amxtileintrin.h (_tile_loadconfig): Use
__builtin_ia32_ldtilecfg.
(_tile_storeconfig): Use __builtin_ia32_sttilecfg.
* config/i386/i386-builtin.def (BDESC): Add
__builtin_ia32_ldtilecfg and __builtin_ia32_sttilecfg.
* config/i386/i386-expand.cc (ix86_expand_builtin): Handle
IX86_BUILTIN_LDTILECFG and IX86_BUILTIN_STTILECFG.
* config/i386/i386.md (ldtilecfg): New pattern.
(sttilecfg): Likewise.
gcc/testsuite/ChangeLog:
PR target/114098
* gcc.target/i386/amxtile-4.c: New test.
Jerry DeLisle [Sun, 25 Feb 2024 22:50:07 +0000 (14:50 -0800)]
libgfortran: Propagate user defined iostat and iomsg.
PR libfortran/105456
libgfortran/ChangeLog:
* io/list_read.c (list_formatted_read_scalar): Add checks
for the case where a user defines their own error codes
and error messages and generate the runtime error.
* io/transfer.c (st_read_done): Whitespace.
Gaius Mulley [Sun, 25 Feb 2024 11:08:37 +0000 (11:08 +0000)]
PR modula2/113749 m2 enabled build times out on i686-gnu-hurd
The bug fix changes the FIO module to use the target O_RDONLY,
O_WRONLY, SEEK_SET and SEEK_END (now available from the module wrapc).
Also rebuilt are the bootstrap tools mc and pge as they have their
own wrapc and C translations of FIO.
vect: Tighten check for impossible SLP layouts [PR113205]
During its forward pass, the SLP layout code tries to calculate
the cost of a layout change on an incoming edge. This is taken
as the minimum of two costs: one in which the source partition
keeps its current layout (chosen earlier during the pass) and
one in which the source partition switches to the new layout.
The latter can sometimes be arranged by the backward pass.
If only one of the costs is valid, the other cost was ignored.
But the PR shows that this is not safe. If the source partition
has layout 0 (the normal layout), we have to be prepared to handle
the case in which that ends up being the only valid layout.
Other code already accounts for this restriction, e.g. see
the code starting with:
/* Reject the layout if it would make layout 0 impossible
for later partitions. This amounts to testing that the
target supports reversing the layout change on edges
to later partitions.
gcc/
PR tree-optimization/113205
* tree-vect-slp.cc (vect_optimize_slp_pass::forward_cost): Reject
the proposed layout if it does not allow a source partition with
layout 2 to keep that layout.
gcc/testsuite/
PR tree-optimization/113205
* gcc.dg/torture/pr113205.c: New test.
Jakub Jelinek [Sat, 24 Feb 2024 11:45:40 +0000 (12:45 +0100)]
Use HOST_WIDE_INT_{C,UC,0,0U,1,1U} macros some more
I've searched for some uses of (HOST_WIDE_INT) constant or (unsigned
HOST_WIDE_INT) constant and turned them into uses of the appropriate
macros.
There are quite a few cases in non-i386 backends, but I've left those out
for now.
The only behavior change is in build_replicated_int_cst where the
left shift was done in HOST_WIDE_INT type but assigned to unsigned
HOST_WIDE_INT, which I've changed into unsigned HOST_WIDE_INT shift.
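Illustrative only (made-up variable names, not lines from the actual patch): the hwint.h macros replace explicit casts of integer constants, e.g.
  unsigned HOST_WIDE_INT mask = (HOST_WIDE_INT_1U << prec) - 1;
      /* instead of:  ((unsigned HOST_WIDE_INT) 1 << prec) - 1  */
  unsigned HOST_WIDE_INT low  = val & ~HOST_WIDE_INT_UC (0xff);
      /* instead of:  val & ~(unsigned HOST_WIDE_INT) 0xff      */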
2024-02-24 Jakub Jelinek <jakub@redhat.com>
gcc/
* builtins.cc (fold_builtin_isascii): Use HOST_WIDE_INT_UC macro.
* combine.cc (make_field_assignment): Use HOST_WIDE_INT_1U macro.
* double-int.cc (double_int::mask): Use HOST_WIDE_INT_UC macros.
* genattrtab.cc (attr_alt_complement): Use HOST_WIDE_INT_1 macro.
(mk_attr_alt): Use HOST_WIDE_INT_0 macro.
* genautomata.cc (bitmap_set_bit, CLEAR_BIT): Use HOST_WIDE_INT_1
macros.
* ipa-strub.cc (can_strub_internally_p): Use HOST_WIDE_INT_1 macro.
* loop-iv.cc (implies_p): Use HOST_WIDE_INT_1U macro.
* pretty-print.cc (test_pp_format): Use HOST_WIDE_INT_C and
HOST_WIDE_INT_UC macros.
* rtlanal.cc (nonzero_bits1): Use HOST_WIDE_INT_UC macro.
* tree.cc (build_replicated_int_cst): Use HOST_WIDE_INT_1U macro.
* tree.h (DECL_OFFSET_ALIGN): Use HOST_WIDE_INT_1U macro.
* tree-ssa-structalias.cc (dump_varinfo): Use ~HOST_WIDE_INT_0U
macros.
* wide-int.cc (divmod_internal_2): Use HOST_WIDE_INT_1U macro.
* config/i386/constraints.md (define_constraint "L"): Use
HOST_WIDE_INT_C macro.
* config/i386/i386.md (movabsq split peephole2): Use HOST_WIDE_INT_C
macro.
(movl + movb peephole2): Likewise.
* config/i386/predicates.md (x86_64_zext_immediate_operand): Likewise.
(const_32bit_mask): Likewise.
gcc/objc/
* objc-encoding.cc (encode_array): Use HOST_WIDE_INT_0 macros.
Jakub Jelinek [Sat, 24 Feb 2024 11:44:34 +0000 (12:44 +0100)]
bitint: Handle VIEW_CONVERT_EXPRs between large/huge BITINT_TYPEs and VECTOR/COMPLEX_TYPE etc. [PR114073]
The following patch implements support for VIEW_CONVERT_EXPRs from/to
large/huge _BitInt to/from vector or complex types or anything else but
integral/pointer types which doesn't need to live in memory.
2024-02-24 Jakub Jelinek <jakub@redhat.com>
PR middle-end/114073
* gimple-lower-bitint.cc (bitint_large_huge::lower_stmt): Handle
VIEW_CONVERT_EXPRs between large/huge _BitInt and non-integer/pointer
types like vector or complex types.
(gimple_lower_bitint): Don't merge VIEW_CONVERT_EXPRs to non-integral
types. Fix up VIEW_CONVERT_EXPR handling. Allow merging
VIEW_CONVERT_EXPR from non-integral/pointer types with a store.
Steve Kargl [Fri, 23 Feb 2024 21:05:04 +0000 (22:05 +0100)]
Fortran: ALLOCATE statement, SOURCE/MOLD expressions with subrefs [PR114024]
PR fortran/114024
gcc/fortran/ChangeLog:
* trans-stmt.cc (gfc_trans_allocate): When a source expression has
substring references, part-refs, or %re/%im inquiries, wrap the
entity in parentheses to force evaluation of the expression.
gcc/testsuite/ChangeLog:
* gfortran.dg/allocate_with_source_27.f90: New test.
* gfortran.dg/allocate_with_source_28.f90: New test.
Robin Dapp [Thu, 22 Feb 2024 12:40:55 +0000 (13:40 +0100)]
RISC-V: Fix vec_init for simple sequences [PR114028].
For a vec_init (_a, _a, _a, _a) with _a of mode DImode we try to
construct a "superword" of two "_a"s. This only works for modes < Pmode
when we can "shift and or" both halves into one Pmode register.
This patch disallows the optimization for inner_mode == Pmode and emits
a simple broadcast in such a case.
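A made-up C example of the shape involved, assuming rv64 where Pmode is DImode so the elements already have Pmode width:
  typedef long long v4di __attribute__ ((vector_size (32)));

  v4di
  dup4 (long long a)
  {
    /* Four identical DImode elements: the expander can now emit a plain
       broadcast instead of trying to build a Pmode "superword".  */
    return (v4di) { a, a, a, a };
  }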
gcc/ChangeLog:
PR target/114028
* config/riscv/riscv-v.cc (rvv_builder::can_duplicate_repeating_sequence_p):
Return false if inner mode is already Pmode.
(rvv_builder::is_all_same_sequence): New function.
(expand_vec_init): Emit broadcast if sequence is all same.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr114028.c: New test.
Jakub Jelinek [Fri, 23 Feb 2024 17:55:12 +0000 (18:55 +0100)]
c++: Fix ICE due to folding a call to constructor on cdtor_returns_this arches (aka arm32) [PR113083]
When targetm.cxx.cdtor_returns_this () (aka on arm32 TARGET_AAPCS_BASED)
the constructor is supposed to return the this pointer, but when we cp_fold such
a call, we don't take that into account and just INIT_EXPR the object,
so we can later ICE during gimplification, because the expression doesn't
have the right type.
2024-02-23 Jakub Jelinek <jakub@redhat.com>
PR c++/113083
* cp-gimplify.cc (cp_fold): For targetm.cxx.cdtor_returns_this ()
wrap r into a COMPOUND_EXPR and return folded CALL_EXPR_ARG (x, 0).
aarch64: Spread out FPR usage between RA regions [PR113613]
early-ra already had code to do regrename-style "broadening"
of the allocation, to promote scheduling freedom. However,
the pass divides the function into allocation regions
and this broadening only worked within a single region.
This meant that if a basic block contained one subblock
of FPR use, followed by a point at which no FPRs were live,
followed by another subblock of FPR use, the two subblocks
would tend to reuse the same registers. This in turn meant
that it wasn't possible to form LDP/STP pairs between them.
The failure to form LDPs and STPs in the testcase was a
regression from GCC 13.
The patch adds a simple heuristic to prefer less recently
used registers in the event of a tie.
gcc/
PR target/113613
* config/aarch64/aarch64-early-ra.cc
(early_ra::m_current_region): New member variable.
(early_ra::m_fpr_recency): Likewise.
(early_ra::start_new_region): Bump m_current_region.
(early_ra::allocate_colors): Prefer less recently used registers
in the event of a tie. Add a comment to explain why we prefer(ed)
higher-numbered registers.
(early_ra::find_oldest_color): Prefer less recently used registers
here too.
(early_ra::finalize_allocation): Update recency information for
allocated registers.
(early_ra::process_blocks): Initialize m_current_region and
m_fpr_recency.
gcc/testsuite/
PR target/113613
* gcc.target/aarch64/pr113613.c: New test.
aarch64: Tighten early-ra chain test for wide registers [PR113295]
Most code in early-ra used is_chain_candidate to check whether we
should chain two allocnos. This included both tests that matter
for correctness and tests for certain heuristics.
Once that test passes for one pair of allocnos, we test whether
it's safe to chain the containing groups (which might contain
multiple allocnos for x2, x3 and x4 modes). This test used an
inline test for correctness only, deliberately skipping the
heuristics. However, this instance of the test was missing
some handling of equivalent allocnos.
This patch fixes things by making is_chain_candidate take a
strictness parameter: correctness only, or correctness + heuristics.
It then makes the group-chaining test use the correctness version
rather than trying to replicate it inline.
gcc/
PR target/113295
* config/aarch64/aarch64-early-ra.cc
(early_ra::test_strictness): New enum.
(early_ra::is_chain_candidate): Add a strictness parameter to
control whether only correctness matters, or whether both correctness
and heuristics should be used. Handle multiple levels of equivalence.
(early_ra::find_related_start): Update call accordingly.
(early_ra::strided_polarity_pref): Likewise.
(early_ra::form_chains): Likewise.
(early_ra::try_to_chain_allocnos): Use is_chain_candidate in
correctness mode rather than trying to inline the test.
gcc/testsuite/
PR target/113295
* gcc.target/aarch64/pr113295-2.c: New test.
416.gamess showed up two wrong-code bugs in early-ra. This patch
fixes the first of them. It was difficult to reduce the source code
to something that would meaningfully show the situation, so the
testcase uses a direct RTL sequence instead.
In the sequence:
(a) register <2> is set more than once
(b) register <2> is copied to a temporary (<4>)
(c) register <2> is the destination of an FCSEL between <4> and
another value (<5>)
(d) <4> and <2> are equivalent for <4>'s live range
(e) <5>'s and <2>'s live ranges do not intersect, and there is
a pseudo-copy between <5> and <2>
On its own, (d) implies that <4> can be treated as equivalent to <2>.
And on its own, (e) implies that <5> can share <2>'s register. But
<4>'s and <5>'s live ranges conflict, meaning that they cannot both
share the register together. A bit of missing bookkeeping meant that
the mechanism for detecting this didn't fire. We therefore ended up
with an FCSEL in which both inputs were the same register.
gcc/
PR target/113295
* config/aarch64/aarch64-early-ra.cc
(early_ra::find_related_start): Account for definitions by shared
registers when testing for a single register definition.
(early_ra::accumulate_defs): New function.
(early_ra::record_copy): If A shares B's register, fold A's
definition information into B's. Fold A's use information into B's.
gcc/testsuite/
PR target/113295
* gcc.dg/rtl/aarch64/pr113295-1.c: New test.
H.J. Lu [Sun, 4 Feb 2024 15:46:35 +0000 (07:46 -0800)]
x86-64: Check R_X86_64_CODE_6_GOTTPOFF support
If the assembler and linker support
add %reg1, name@gottpoff(%rip), %reg2
with R_X86_64_CODE_6_GOTTPOFF, we can generate it instead of
mov name@gottpoff(%rip), %reg2
add %reg1, %reg2
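A hand-written illustration (not the new testcase) of an initial-exec TLS access that produces the name@gottpoff(%rip) addition above; the exact code generation depends on the TLS model and options used:
  extern __thread int counter;

  int *
  counter_addr (void)
  {
    /* Initial-exec TLS: the address is the thread pointer plus the
       GOT-loaded TP offset of `counter', i.e. the add shown above.  */
    return &counter;
  }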
gcc/
* configure.ac (HAVE_AS_R_X86_64_CODE_6_GOTTPOFF): Defined as 1
if R_X86_64_CODE_6_GOTTPOFF is supported.
* config.in: Regenerated.
* configure: Likewise.
* config/i386/predicates.md (apx_ndd_add_memory_operand): Allow
UNSPEC_GOTNTPOFF if R_X86_64_CODE_6_GOTTPOFF is supported.
gcc/testsuite/
* gcc.target/i386/apx-ndd-tls-1b.c: New test.
* lib/target-supports.exp
(check_effective_target_code_6_gottpoff_reloc): New.
Richard Earnshaw [Thu, 22 Feb 2024 16:47:20 +0000 (16:47 +0000)]
arm: fix ICE with vectorized reciprocal division [PR108120]
The expand pattern for reciprocal division was enabled for all math
optimization modes, but the patterns it was generating were not
enabled unless -funsafe-math-optimizations was enabled; this led to
an ICE when the generated pattern could not be recognized.
Fixed by only enabling vector division when doing unsafe math.
gcc:
PR target/108120
* config/arm/neon.md (div<VCVTF:mode>3): Rename from div<mode>3.
Gate with ARM_HAVE_NEON_<MODE>_ARITH.
gcc/testsuite:
PR target/108120
* gcc.target/arm/neon-recip-div-1.c: New file.
Jakub Jelinek [Fri, 23 Feb 2024 10:38:18 +0000 (11:38 +0100)]
expr: Fix REDUCE_BIT_FIELD in multiplication expansion [PR114054]
The following testcase ICEs, because the REDUCE_BIT_FIELD macro uses
the target variable implicitly:
#define REDUCE_BIT_FIELD(expr) (reduce_bit_field \
? reduce_to_bit_field_precision ((expr), \
target, \
type) \
: (expr))
and so when the code below reuses the target variable, documented to be
The value may be stored in TARGET if TARGET is nonzero.
TARGET is just a suggestion; callers must assume that
the rtx returned may not be the same as TARGET.
for something unrelated (the value that should be returned), this misbehaves
(in the testcase target is set to a CONST_INT, which has VOIDmode and
reduce_to_bit_field_precision assert checking doesn't like that).
The documentation does also mention
If TARGET is CONST0_RTX, it means that the value will be ignored.
but expand_expr_real_2 does at the start:
ignore = (target == const0_rtx
|| ((CONVERT_EXPR_CODE_P (code)
|| code == COND_EXPR || code == VIEW_CONVERT_EXPR)
&& TREE_CODE (type) == VOID_TYPE));
/* We should be called only if we need the result. */
gcc_assert (!ignore);
- so such a target is mainly meant for calls and the like in other routines.
The code certainly doesn't expect target to change from not being ignored
initially to being ignored later on, nor other CONST_INT results, nor anything
else which is not an object into which a value can be stored.
So, the following patch fixes that by using a more appropriate temporary
for the result, which other code is already using.
2024-02-23 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/114054
* expr.cc (expand_expr_real_2) <case MULT_EXPR>: Use
temp variable instead of target parameter for result.
The following testcases show 2 bugs in the .{ADD,SUB}_OVERFLOW lowering,
both related to storing of the REALPART_EXPR part of the result.
On the first testcase prec is 255, prec_limbs is 4, and for the second limb
in the loop the REALPART_EXPR of .USUBC (_30) is stored through:
if (_27 <= 3)
goto <bb 12>; [80.00%]
else
goto <bb 15>; [20.00%]
<bb 15> [local count: 1073741824]:
The first check is right: as prec_limbs is 4, we don't want to store
bitint.3[4] or above at all, those limbs are just computed for the overflow
checking and nothing else, so _27 > 3 leads to no store.
But the other condition is the exact opposite of what should be done: if
the current index of the second limb (_27) is < 3, then it should
bitint.3[_27] = _30;
and if it is == 3, it should
MEM[(unsigned long *)&bitint.3 + 24B] = _30;
and (especially important for the targets which would have bitinfo.extended = 1)
it should actually in this case zero extend it from the 63 bits to 64, that is
the handling of the partial limb. The if_then_if_then_else helper, if
there are 2 conditions, sets m_gsi to the start of the
edge_true_false->dest bb, i.e. where the first condition is true and the second
false, and that is where we emit the SSA_NAME indexed limb store, so the
condition needs to be reversed.
The following patch does that and adds the cast as well; the usual
assumption that handle_operand already produces the partial limb type doesn't
have to hold here, because the source operand could have much larger
precision than the REALPART_EXPR of the lhs.
2024-02-23 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/114040
* gimple-lower-bitint.cc (bitint_large_huge::lower_addsub_overflow):
Use EQ_EXPR rather than LT_EXPR for g2 condition and change its
probability from likely to unlikely. When handling the true true
store, first cast to limb_access_type and then to l's type.
* gcc.dg/torture/bitint-60.c: New test.
* gcc.dg/torture/bitint-61.c: New test.
Xi Ruoyao [Wed, 21 Feb 2024 15:54:53 +0000 (23:54 +0800)]
LoongArch: Don't falsely claim gold supported in toplevel configure
The gold linker has never been ported to LoongArch (and it seems
unlikely to be ported in the future as the new architectures are
focusing on lld and/or mold for fast linkers).
Richard Biener [Fri, 23 Feb 2024 07:59:12 +0000 (08:59 +0100)]
Add ia64*-*-* to the list of obsolete targets
The following deprecates ia64*-*-* for GCC 14. Since we plan to
force LRA for GCC 15 and the target only has slim chances of getting
updated, this notifies people in advance. Given that both Linux and
glibc have axed the target, further development is also made difficult.
There is no listed maintainer for ia64 either.
PR target/90785
gcc/
* config.gcc: Add ia64*-*-* to the list of obsoleted targets.
contrib/
* config-list.mk (LIST): --enable-obsolete for ia64*-*-*.
Palmer Dabbelt [Fri, 9 Feb 2024 16:53:24 +0000 (08:53 -0800)]
RISC-V: Point our Python scripts at python3
This builds for me, and I frequently have python-is-python3 type
packages installed so I think I've been implicitly testing it for a
while. Looks like Kito's tested similar configurations, and the
bugzilla indicates we should be moving over.
gcc/ChangeLog:
PR other/109668
* config/riscv/arch-canonicalize: Move to python3.
* config/riscv/multilib-generator: Likewise.
Palmer Dabbelt [Tue, 20 Feb 2024 15:45:38 +0000 (07:45 -0800)]
doc: RISC-V: Document that -mcpu doesn't override -march or -mtune
This came up recently as Edwin was looking through the test suite. A
few of us were talking about this during the patchwork meeting and were
surprised. Looks like this is the desired behavior, so let's at least
document it.
Lulu Cheng [Wed, 21 Feb 2024 03:17:14 +0000 (11:17 +0800)]
LoongArch: When checking whether the assembler supports conditional branch relaxation, add compilation parameter "--fatal-warnings" to the assembler.
In binutils 2.40 and earlier versions, only a warning will be reported
when a relocation immediate value is out of bounds. As a result,
the value of the macro HAVE_AS_COND_BRANCH_RELAXATION will also be
defined as 1 when the assembler does not support conditional branch
relaxation. Therefore, add the compilation option "--fatal-warnings"
to avoid this problem.
gcc/ChangeLog:
* configure: Regenerate.
* configure.ac: Add parameter "--fatal-warnings" to the assembler
when checking whether the assembler supports conditional branch
relaxation.
Jakub Jelinek [Thu, 22 Feb 2024 18:32:02 +0000 (19:32 +0100)]
c: Handle scoped attributes in __has*attribute and scoped attribute parsing changes in -std=c11 etc. modes [PR114007]
We aren't able to parse __has_attribute (vendor::attr) (and __has_c_attribute
and __has_cpp_attribute) in strict C < C23 modes. While in -std=gnu* modes
or in -std=c23 there is a CPP_SCOPE token, in -std=c* (except for -std=c23)
there is just a pair of CPP_COLON tokens.
The c-lex.cc hunk adds support for that.
That leads to the question whether we should return 1 or 0 from
__has_attribute (gnu::unused), because while
[[gnu::unused]] is parsed fine in -std=gnu*/-std=c23 modes (sure, with
pedwarn for < C23), we do not parse it at all in -std=c* (except for
-std=c23), we only parse [[__extension__ gnu::unused]] there. While
the __extension__ in there helps to avoid the pedwarn, I think it is
better to be consistent between GNU and strict C < C23 modes and
parse [[gnu::unused]] too; on the other side, I think parsing
[[__extension__ gnu : : unused]] is too weird and undesirable.
So, the following patch adds a flag during preprocessing at the point
where we normally create CPP_SCOPE tokens out of 2 consecutive colons
on the first CPP_COLON to mark the consecutive case (as we are tight
on the bits, I've reused the PURE_ZERO flag, which is used just by the
C++ FE and only ever set (both C and C++) on CPP_NUMBER tokens, this
new flag has the same value and is only ever used on CPP_COLON tokens)
and instead of checking loose_scope_p argument (i.e. whether it is
[[__extension__ ...]] or not), it just parses CPP_SCOPE or CPP_COLON
with the COLON_SCOPE flag followed by another CPP_COLON the same.
The latter will never appear in >= C23 or -std=gnu* modes, though
guarding its use with, say, flag_iso && !flag_isoc23 doesn't really
work because the __extension__ case temporarily clears the flag_iso flag.
This makes the -std=c11 etc. behavior more similar to -std=gnu11 or
-std=c23, the only difference I'm aware of are the
#define JOIN2(A, B) A##B
[[vendor JOIN2(:,:) attr]]
[[__extension__ vendor JOIN2(:,:) attr]]
cases, which are accepted in the latter modes but result in an error
in -std=c11; the error is emitted during preprocessing, because :: doesn't
form a valid preprocessing token there, which is true, so just don't do that if
you want __STRICT_ANSI__ && __STDC_VERSION__ <= 201710L
compatibility.
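A minimal illustration (made-up identifier) of what now behaves the same way in -std=c11 as in -std=gnu11 or -std=c23, modulo the usual pedwarn for [[]] attributes before C23:
  #if __has_attribute (gnu::unused)          /* now parses in strict C < C23 modes */
  [[gnu::unused]] static int unused_var;     /* now accepted without __extension__ */
  #endif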
2024-02-22 Jakub Jelinek <jakub@redhat.com>
PR c/114007
gcc/
* doc/extend.texi (__extension__): Remove comments about scope
tokens vs. two colons.
gcc/c-family/
* c-lex.cc (c_common_has_attribute): Parse 2 CPP_COLONs with
the first one with COLON_SCOPE flag the same as CPP_SCOPE.
gcc/c/
* c-parser.cc (c_parser_std_attribute): Remove loose_scope_p argument.
Instead of checking it, parse 2 CPP_COLONs with the first one with
COLON_SCOPE flag the same as CPP_SCOPE.
(c_parser_std_attribute_list): Remove loose_scope_p argument, don't
pass it to c_parser_std_attribute.
(c_parser_std_attribute_specifier): Adjust c_parser_std_attribute_list
caller.
gcc/testsuite/
* gcc.dg/c23-attr-syntax-6.c: Adjust testcase for :: being valid
even in -std=c11 even without __extension__ and : : etc. not being
valid anymore even with __extension__.
* gcc.dg/c23-attr-syntax-7.c: Likewise.
* gcc.dg/c23-attr-syntax-8.c: New test.
libcpp/
* include/cpplib.h (COLON_SCOPE): Define to PURE_ZERO.
* lex.cc (_cpp_lex_direct): When lexing CPP_COLON with another
colon after it, if !CPP_OPTION (pfile, scope) set COLON_SCOPE
flag on the first CPP_COLON token.
Andrew Pinski [Thu, 22 Feb 2024 04:12:21 +0000 (20:12 -0800)]
warn-access: Fix handling of unnamed types [PR109804]
This looks like an oversight of handling DEMANGLE_COMPONENT_UNNAMED_TYPE.
DEMANGLE_COMPONENT_UNNAMED_TYPE only has the u.s_number.number set while
the code expected newc.u.s_binary.left would be valid.
So this treats DEMANGLE_COMPONENT_UNNAMED_TYPE like we treat function parameters
(DEMANGLE_COMPONENT_FUNCTION_PARAM) and template parameters (DEMANGLE_COMPONENT_TEMPLATE_PARAM).
Note the code in the demangler does this when it sets DEMANGLE_COMPONENT_UNNAMED_TYPE:
ret->type = DEMANGLE_COMPONENT_UNNAMED_TYPE;
ret->u.s_number.number = num;
Committed as obvious after bootstrap/test on x86_64-linux-gnu
This is because the non-Q variant for indices 0 and 1 is just shuffling values.
There is no perf difference between INS SIMD-to-SIMD and ZIP on Arm uArches, but
preferring the INS alternative has a drawback on all uArches: ZIP, being a three
operand instruction, can be used to tie the result to the return register, whereas
INS would require an fmov.
As such just update the test file for now.
gcc/testsuite/ChangeLog:
PR target/112375
* gcc.target/aarch64/vget_set_lane_1.c: Update test output.
Gaius Mulley [Thu, 22 Feb 2024 15:02:19 +0000 (15:02 +0000)]
PR modula2/114055 improve error message when checking the BY constant
The fix marks a constant created during the default BY clause of the
FOR loop as internal. The type checker will always return true if
checking against an internal const.
gcc/m2/ChangeLog:
PR modula2/114055
* gm2-compiler/M2Check.mod (Import): IsConstLitInternal and
IsConstLit.
(isInternal): New procedure function.
(doCheck): Test for isInternal in either operand and early
return true.
* gm2-compiler/M2Quads.mod (PushOne): Rewrite with extra
parameter internal.
(BuildPseudoBy): Add TRUE parameter to PushOne call.
(BuildIncProcedure): Add FALSE parameter to PushOne call.
(BuildDecProcedure): Add FALSE parameter to PushOne call.
* gm2-compiler/M2Range.mod (ForLoopBeginTypeCompatible):
Uncomment code and tidy up error string.
* gm2-compiler/SymbolTable.def (PutConstLitInternal):
New procedure.
(IsConstLitInternal): New procedure function.
* gm2-compiler/SymbolTable.mod (PutConstLitInternal):
New procedure.
(IsConstLitInternal): New procedure function.
(SymConstLit): New field IsInternal.
(CreateConstLit): Initialize IsInternal to FALSE.
gcc/testsuite/ChangeLog:
PR modula2/114055
* gm2/pim/fail/forloopby.mod: New test.
* gm2/pim/pass/forloopby2.mod: New test.
When we classify a conditional reduction chain as CONST_COND_REDUCTION
we fail to verify all involved conditionals have the same constant.
That's a quite unlikely situation so the following simply disables
such classification when there's more than one reduction statement.
PR tree-optimization/114027
* tree-vect-loop.cc (vectorizable_reduction): Use optimized
condition reduction classification only for single-element
chains.
Jakub Jelinek [Thu, 22 Feb 2024 12:07:25 +0000 (13:07 +0100)]
profile-count: Don't dump through a temporary buffer [PR111960]
The profile_count::dump (char *, struct function * = NULL) const;
method has a single caller, the
profile_count::dump (FILE *f, struct function *fun) const;
method and for that going through a temporary buffer is just slower
and opens doors for buffer overflows, which is exactly why this P1
was filed.
The buffer size is 64 bytes, the previous maximum
"%" PRId64 " (%s)"
would print up to 61 bytes in there (19 bytes for arbitrary uint64_t:61
bitfield printed as signed, "estimated locally, globally 0 adjusted"
i.e. 38 bytes longest %s and 4 other characters).
Now, after the r14-2389 changes, it can be
19 + 38 plus 11 other characters + %.4f, which is worst case
309 chars before decimal point, decimal point and 4 digits after it,
so total 382 bytes.
So, either we could bump the buffer[64] to buffer[400], or the following
patch just drops the indirection through buffer and prints it directly to
stream. After all, having APIs which fill in some buffer without passing
down the size of the buffer is just asking for buffer overflows over time.
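A generic sketch of the before/after shape (made-up names, not the actual GCC code):
  #include <inttypes.h>
  #include <stdio.h>

  /* Before: format into a fixed 64-byte buffer, then print it; the buffer
     size has to track every future change of the format or it can overflow.  */
  static void
  dump_old (FILE *stream, int64_t value, const char *descr)
  {
    char buf[64];
    sprintf (buf, "%" PRId64 " (%s)", value, descr);
    fputs (buf, stream);
  }

  /* After: print straight to the stream, no intermediate buffer.  */
  static void
  dump_new (FILE *stream, int64_t value, const char *descr)
  {
    fprintf (stream, "%" PRId64 " (%s)", value, descr);
  }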
2024-02-22 Jakub Jelinek <jakub@redhat.com>
PR ipa/111960
* profile-count.h (profile_count::dump): Remove overload with
char * first argument.
* profile-count.cc (profile_count::dump): Change overload with char *
first argument which uses sprintf into the overfload with FILE *
first argument and use fprintf instead. Remove overload which wrapped
it.
Jakub Jelinek [Thu, 22 Feb 2024 09:19:15 +0000 (10:19 +0100)]
call-cdce: Add missing BUILT_IN_*F{32,64}X handling and improve BUILT_IN_*L [PR113993]
The following testcase ICEs, because can_test_argument_range
returns true for BUILT_IN_{COSH,SINH,EXP{,M1,2}}{F32X,F64X}
among many other builtins, but get_no_error_domain doesn't handle
those.
float32x_type_node when supported in GCC always has DFmode, so that
case is easy (and call-cdce assumes that SFmode is IEEE float and DFmode
is IEEE double). So *F32X is simply handled by adding those cases
next to *F64.
float64x_type_node when supported in GCC by definition has a mode
with larger precision and exponent range than DFmode, so it can be XFmode,
TFmode or KFmode. I went through all the l/f128 suffixed builtins and
verified that the float128x_type_node no error domain range is actually
identical to the Intel extended long double no error domain range; it isn't
that surprising, both IEEE quad and Intel/Motorola extended have the same
exponent range [-16381, 16384] (well, Motorola -16382 probably because of
different behavior for denormals, but that has nothing to do with
get_no_error_domain which is about large inputs overflowing into +-Inf
or triggering NaN, denormals could in theory do something solely for sqrt
and even that is fine). In theory some target could have different larger
type, so for *F64X the code verifies that
REAL_MODE_FORMAT (TYPE_MODE (float64x_type_node))->emax == 16384
and if so, uses the *F128 domains, otherwise falls back to the non-suffixed
ones (aka *F64), that is certainly the conservative minimum.
While at it, the patch also changes the *L suffixed cases to do pretty much
the same, the comment said that the function just assumes for *L
the *F64 ranges, but that is unnecessarily conservative.
All we currently have for long double is:
1) IEEE quad (emax 16384, *F128 ranges)
2) XFmode Intel/Motorola extended (emax 16384, same as *F128 ranges)
3) IBM extended (double double, emax 1024, the extra precision doesn't
really help and the domains are the same as for *F64)
4) same as double (*F64 again)
So, the patch uses also for *L
REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384
checks and either tail recurses into the *F128 case for that or to
non-suffixed (aka *F64) case otherwise.
BUILT_IN_*F128X not handled because no target has those and it doesn't
seem something is on the horizon and who knows what would be used for that.
Thus, all we get this wrong for are probably VAX floats or something
similar, no intent from me to look at that, that is preexisting issue.
BTW, I'm surprised we don't have BUILT_IN_EXP10F{16,32,64,128,32X,64X,128X}
builtins, seems glibc has those (sure, I think except *16 and *128x).
2024-02-22 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113993
* tree-call-cdce.cc (get_no_error_domain): Handle
BUILT_IN_{COSH,SINH,EXP{,M1,2}}{F32X,F64X}. Handle
BUILT_IN_{COSH,SINH,EXP{,M1,2}}L for
REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384
the same as the F128 suffixed cases, otherwise as non-suffixed ones.
Handle BUILT_IN_{EXP,POW}10L for
REAL_MODE_FORMAT (TYPE_MODE (long_double_type_node))->emax == 16384
as (-inf, 4932).
Currently, bitint_large_huge::lower_mul_overflow uses cnt 1 only if
startlimb == endlimb and in that case doesn't use a loop and handles
everything in a special if:
unsigned cnt;
bool use_loop = false;
if (startlimb == endlimb)
cnt = 1;
else if (startlimb + 1 == endlimb)
cnt = 2;
else if ((end % limb_prec) == 0)
{
cnt = 2;
use_loop = true;
}
else
{
cnt = 3;
use_loop = startlimb + 2 < endlimb;
}
if (cnt == 1)
{
...
}
else
The loop handling for the loop exit condition wants to compare if the
incremented index is equal to endlimb, but that is correct only if
end is not divisible by limb_prec and there will be a straight line
check after the loop as well for the most significant limb. The code
used endlimb + (cnt == 1) for that, but cnt == 1 is never true here,
because cnt is either 2 or 3, so the right check is (cnt == 2).
2024-02-22 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/114038
* gimple-lower-bitint.cc (bitint_large_huge::lower_mul_overflow): Fix
loop exit condition if end is divisible by limb_prec.
YunQiang Su [Thu, 22 Feb 2024 05:05:06 +0000 (13:05 +0800)]
invoke.texi: Fix some skipping UrlSuffix problem for MIPS
The problem is that there are these lines in mips.opt.urls:
; skipping UrlSuffix for 'mabi=' due to finding no URLs
; skipping UrlSuffix for 'mno-flush-func' due to finding no URLs
; skipping UrlSuffix for 'mexplicit-relocs' due to finding no URLs
The following lines are not fixed by this patch because we don't
document these options:
; skipping UrlSuffix for 'mlra' due to finding no URLs
; skipping UrlSuffix for 'mdebug' due to finding no URLs
; skipping UrlSuffix for 'meb' due to finding no URLs
; skipping UrlSuffix for 'mel' due to finding no URLs
gcc
* doc/invoke.texi (MIPS Options): Fix skipping UrlSuffix
problem of mabi=, mno-flush-func, mexplicit-relocs;
add missing leading - of mbranch-cost option.
* config/mips/mips.opt.urls: Regenerate.
As PR109987 and its duplicated bugs show, -mno-power8-vector
(and -mno-power9-vector) cause some problems, and as Segher
pointed out in [1] they are workaround options, so this patch
removes the -m{no-,}power{8,9}-vector options. Like what we did
for option -mdirect-move before, this patch still keeps the
corresponding internal flags and they are automatically set
based on -mcpu. The test suite update takes some effort;
it consists of several aspects:
- effective target powerpc_p{8,9}vector_ok are removed
and replaced with powerpc_vsx_ok.
- Some cases having -mpower{8,9}-vector are updated with
-mvsx, some of them already have -mdejagnu-cpu. For
those that don't have -mdejagnu-cpu, if -mdejagnu-cpu
is needed for the test point, then it's appended;
otherwise, add additional-options -mdejagnu-cpu=power{8,9}
if has_arch_pwr{8,9} isn't satisfied.
- Some test cases are updated with explicit -mvsx.
- Some test cases with those two option mixed are adjusted
to keep the test points, like -mpower8-vector
-mno-power9-vector are updated with -mdejagnu-cpu=power8
-mvsx etc.
- Some test cases with -mno-power{8,9}-vector are updated
by replacing -mno-power{8,9}-vector with -mno-vsx, or
just removing it.
- For some cases, we don't always specify -mdejagnu-cpu, to
avoid restricting the testing coverage; they check
has_arch_pwr{8,9} and append it as needed.
- For vect test cases run, it doesn't specify -mcpu=power9
for power10 and up.
Bootstrapped and regtested on:
- powerpc64-linux-gnu P7/P8/P9 {-m32,-m64}
- powerpc64le-linux-gnu P8/P9/P10
Although it's stage4 now, as per the discussion in PR113115 we
are still eager to neuter these two options, so is it OK
for trunk?
* config/rs6000/constraints.md (we): Update internal doc without
referring to option -mpower9-vector.
* config/rs6000/driver-rs6000.cc (asm_names): Remove mpower9-vector
special handlings.
* config/rs6000/rs6000-cpus.def (OTHER_P9_VECTOR_MASKS,
OTHER_P8_VECTOR_MASKS): Merge to ...
(OTHER_VSX_VECTOR_MASKS): ... here.
* config/rs6000/rs6000.cc (rs6000_option_override_internal): Remove
some error message handlings and explicit option mask adjustments on
explicit option power{8,9}-vector conflicting with other options.
(rs6000_print_isa_options): Update comments.
(rs6000_disable_incompatible_switches): Remove power{8,9}-vector
related array items and handlings.
* config/rs6000/rs6000.h (ASM_CPU_SPEC): Remove mpower9-vector
special handlings.
* config/rs6000/rs6000.opt: Make option power{8,9}-vector as
WarnRemoved.
* doc/extend.texi: Remove documentation referring to option
-mpower8-vector.
* doc/invoke.texi: Remove documentation for option
-mpower{8,9}-vector and adjust some documentation referring to them.
* doc/md.texi: Update documentation for constraint we.
* doc/sourcebuild.texi: Remove documentation for powerpc_p8vector_ok.
libgcc/ChangeLog:
* config/rs6000/t-float128-hw: Replace options -mpower{8,9}-vector
with -mcpu=power9.
* configure.ac: Update use of option -mpower9-vector with
-mcpu=power9.
* configure: Regenerate.
Fangrui Song [Wed, 31 Jan 2024 04:41:12 +0000 (20:41 -0800)]
RISC-V: Add tests for constraints "i" and "s"
The constraints "i" and "s" can be used with a symbol that binds
externally, e.g.
```
namespace ns { extern int var, a[4]; }
void foo() {
asm(".pushsection .xxx,\"aw\"; .dc.a %0; .popsection" :: "s"(&ns::var));
asm(".reloc ., BFD_RELOC_NONE, %0" :: "s"(&ns::a[3]));
}
```
Edwin Lu [Wed, 14 Feb 2024 20:06:38 +0000 (12:06 -0800)]
RISC-V: Quick and simple fixes to testcases that break due to reordering
The following test cases are easily fixed with small updates to the expected
assembly order. Additionally, make the calling-convention testcases more robust.
Edwin Lu [Wed, 14 Feb 2024 20:04:59 +0000 (12:04 -0800)]
RISC-V: Use default cost model for insn scheduling
Use default cost model scheduling on these test cases. All these tests
introduce scan dump failures with -mtune generic-ooo. Since the vector
cost models are the same across all three tunes, some of the tests
in PR113249 will be fixed with this patch series.
Edwin Lu [Wed, 14 Feb 2024 20:03:37 +0000 (12:03 -0800)]
RISC-V: Add vector related pipelines
Creates new generic vector pipeline file common to all cpu tunes.
Moves all vector related pipelines from generic-ooo to generic-vector-ooo.
Creates new vector crypto related insn reservations.
Edwin Lu [Wed, 14 Feb 2024 20:01:22 +0000 (12:01 -0800)]
RISC-V: Add non-vector types to dfa pipelines
This patch adds non-vector related insn reservations and updates/creates
new insn reservations so all non-vector typed instructions have a reservation.
David Faust [Tue, 20 Feb 2024 22:48:33 +0000 (14:48 -0800)]
bpf: add inline memmove and memcpy expansion
BPF programs are not typically linked, which means we cannot fall back
on library calls to implement __builtin_{memmove,memcpy} and should
always expand them inline if possible.
GCC already successfully expands these builtins inline in many cases,
but failed to do so for a few simple cases involving overlapping
memmove in the kernel BPF selftests and was instead emitting a libcall.
This patch implements a simple inline expansion of memcpy and memmove in
the BPF backend in a verifier-friendly way, with the caveat that the
size must be an integer constant, which is also required by clang.
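A made-up example of the kind of small, constant-size, overlapping copy that previously became a memmove libcall and is now expanded inline:
  struct ctx { char buf[16]; };

  void
  shift_right_one (struct ctx *c)
  {
    /* Overlapping move with a constant size: with the inline expansion
       this becomes a short sequence of loads and stores, not a call.  */
    __builtin_memmove (c->buf + 1, c->buf, 8);
  }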
Gaius Mulley [Wed, 21 Feb 2024 16:21:05 +0000 (16:21 +0000)]
PR modula2/114026 Incorrect location during for loop type checking
If a for loop contains an incompatible type expression between the
designator and the second expression then the location
used when generating the error message is set to token 0.
The bug is fixed by extending the range checking
InitForLoopBeginRangeCheck. The range checking is processed after
all types, constants have been resolved (and converted into gcc
trees). The range check will check for assignment compatibility
between des and expr1, expression compatibility between des and expr2.
Separate token positions for des, exp1, expr2 and by are stored in the
Range record and used to create virtual tokens if they are on the same
source line.
gcc/m2/ChangeLog:
PR modula2/114026
* gm2-compiler/M2GenGCC.mod (Import): Remove DisplayQuadruples.
Remove DisplayQuadList.
(MixTypesBinary): Replace check with overflowCheck.
New variable typeChecking.
Use GenQuadOTypetok to retrieve typeChecking.
Use typeChecking to suppress error message.
* gm2-compiler/M2LexBuf.def (MakeVirtual2Tok): New procedure
function.
* gm2-compiler/M2LexBuf.mod (MakeVirtualTok): Improve comment.
(MakeVirtual2Tok): New procedure function.
* gm2-compiler/M2Quads.def (GetQuadOTypetok): New procedure.
* gm2-compiler/M2Quads.mod (QuadFrame): New field CheckType.
(PutQuadO): Rewrite using PutQuadOType.
(PutQuadOType): New procedure.
(GetQuadOTypetok): New procedure.
(BuildPseudoBy): Rewrite.
(BuildForToByDo): Remove type checking.
Add parameters e2, e2tok, BySym, bytok to
InitForLoopBeginRange.
Push the RangeId.
(BuildEndFor): Pop the RangeId.
Use GenQuadOTypetok to generate AddOp without type checking.
Call PutRangeForIncrement with the RangeId and IncQuad.
(GenQuadOtok): Rewrite using GenQuadOTypetok.
(GenQuadOTypetok): New procedure.
* gm2-compiler/M2Range.def (InitForLoopBeginRangeCheck):
Rename d as des, e as expr.
Add expr1, expr1tok, expr2, expr2tok, byconst, byconsttok
parameters.
(PutRangeForIncrement): New procedure.
* gm2-compiler/M2Range.mod (Import): MakeVirtual2Tok.
(Range): Add expr2, byconst, destok, exprtok, expr2tok,
incrementquad.
(InitRange): Initialize expr2 to NulSym.
Initialize byconst to NulSym.
Initialize tokenNo, destok, exprtok, expr2tok, byconst to
UnknownTokenNo.
Initialize incrementquad to 0.
(PutRangeForIncrement): New procedure.
(PutRangeDesExpr2): New procedure.
(InitForLoopBeginRangeCheck): Rewrite.
(ForLoopBeginTypeCompatible): New procedure function.
(CodeForLoopBegin): Call ForLoopBeginTypeCompatible and
only code the for loop assignment if all the type checks
succeed.
gcc/testsuite/ChangeLog:
PR modula2/114026
* gm2/extensions/run/pass/callingc10.mod: New test.
* gm2/extensions/run/pass/callingc11.mod: New test.
* gm2/extensions/run/pass/callingc9.mod: New test.
* gm2/extensions/run/pass/strconst.def: New test.
* gm2/pim/fail/forloop.mod: New test.
* gm2/pim/pass/forloop2.mod: New test.
Martin Jambor [Wed, 21 Feb 2024 14:43:13 +0000 (15:43 +0100)]
ipa: Convert lattices from pure array to vector (PR 113476)
In PR 113476 we have discovered that ipcp_param_lattices is no longer
a POD and should be destructed. In a follow-up discussion it
transpired that their initialization done by memsetting their backing
memory to zero is also invalid because now any write there before
construction can be considered dead. Plus, having them in an array
array is a little bit old-school and does not get the extra checking
offered by vector along with automatic construction and destruction
when necessary.
So this patch converts the array to a vector. That however means that
ipcp_param_lattices cannot be just a forward declared type but must be
known to all code that deals with ipa_node_params and thus to all code
that includes ipa-prop.h. Therefore I have moved ipcp_param_lattices
and the type it depends on to a new header ipa-cp.h which now
ipa-prop.h depends on. Because we have the (IMHO not a very wise)
rule that headers don't include what they need themselves, I had to
add inclusions of ipa-cp.h and sreal.h (on which it depends) to very
many files, which made the patch rather ugly.
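An illustrative include order for an affected file (omitting the usual config.h/system.h/coretypes.h prelude):
  #include "sreal.h"    /* needed by ipa-cp.h */
  #include "ipa-cp.h"   /* defines ipcp_param_lattices and friends */
  #include "ipa-prop.h" /* now depends on ipa-cp.h */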
gcc/lto/ChangeLog:
2024-02-16 Martin Jambor <mjambor@suse.cz>
PR ipa/113476
* lto-common.cc: Include sreal.h and ipa-cp.h.
* lto-partition.cc: Include ipa-cp.h, move inclusion of sreal higher.
* lto.cc: Include sreal.h and ipa-cp.h.
gcc/ChangeLog:
2024-02-16 Martin Jambor <mjambor@suse.cz>
PR ipa/113476
* ipa-prop.h (ipa_node_params): Convert lattices to a vector, adjust
initializers in the constructor.
(ipa_node_params::~ipa_node_params): Release lattices as a vector.
* ipa-cp.h: New file.
* ipa-cp.cc: Include sreal.h and ipa-cp.h.
(ipcp_value_source): Move to ipa-cp.h.
(ipcp_value_base): Likewise.
(ipcp_value): Likewise.
(ipcp_lattice): Likewise.
(ipcp_agg_lattice): Likewise.
(ipcp_bits_lattice): Likewise.
(ipcp_vr_lattice): Likewise.
(ipcp_param_lattices): Likewise.
(ipa_get_parm_lattices): Remove assert that lattices is non-NULL.
(ipa_value_from_jfunc): Adjust a check for empty lattices.
(ipa_context_from_jfunc): Likewise.
(ipa_agg_value_from_jfunc): Likewise.
(merge_agg_lats_step): Do not memset new aggregate lattices to zero.
(ipcp_propagate_stage): Allocate lattices in a vector as opposed to
just in contiguous memory.
(ipcp_store_vr_results): Adjust a check for empty lattices.
* auto-profile.cc: Include sreal.h and ipa-cp.h.
* cgraph.cc: Likewise.
* cgraphclones.cc: Likewise.
* cgraphunit.cc: Likewise.
* config/aarch64/aarch64.cc: Likewise.
* config/i386/i386-builtins.cc: Likewise.
* config/i386/i386-expand.cc: Likewise.
* config/i386/i386-features.cc: Likewise.
* config/i386/i386-options.cc: Likewise.
* config/i386/i386.cc: Likewise.
* config/rs6000/rs6000.cc: Likewise.
* config/s390/s390.cc: Likewise.
* gengtype.cc (open_base_files): Added sreal.h and ipa-cp.h to the
files to be included in gtype-desc.cc.
* gimple-range-fold.cc: Include sreal.h and ipa-cp.h.
* ipa-devirt.cc: Likewise.
* ipa-fnsummary.cc: Likewise.
* ipa-icf.cc: Likewise.
* ipa-inline-analysis.cc: Likewise.
* ipa-inline-transform.cc: Likewise.
* ipa-inline.cc: Include ipa-cp.h, move inclusion of sreal.h higher.
* ipa-modref.cc: Include sreal.h and ipa-cp.h.
* ipa-param-manipulation.cc: Likewise.
* ipa-predicate.cc: Likewise.
* ipa-profile.cc: Likewise.
* ipa-prop.cc: Likewise.
(ipa_node_params_t::duplicate): Assert new lattices remain empty
instead of setting them to NULL.
* ipa-pure-const.cc: Include sreal.h and ipa-cp.h.
* ipa-split.cc: Likewise.
* ipa-sra.cc: Likewise.
* ipa-strub.cc: Likewise.
* ipa-utils.cc: Likewise.
* ipa.cc: Likewise.
* toplev.cc: Likewise.
* tree-ssa-ccp.cc: Likewise.
* tree-ssa-sccvn.cc: Likewise.
* tree-vrp.cc: Likewise.
Tamar Christina [Wed, 21 Feb 2024 11:42:53 +0000 (11:42 +0000)]
AArch64: remove ls64 from being mandatory on armv8.7-a..
The Arm Architectural Reference Manual (Version J.a, section A2.9 on FEAT_LS64)
shows that ls64 is an optional extensions and should not be enabled by default
for Armv8.7-a.
This drops it from the mandatory bits for the architecture and brings GCC in line
with LLVM and the architecture.
Note that we will not be changing binutils to preserve compatibility with older
released compilers.
gcc/ChangeLog:
* config/aarch64/aarch64-arches.def (AARCH64_ARCH): Remove LS64 from
Armv8.7-a.
The sequence to commit a lazy save includes a branch based on
whether TPIDR2_EL0 is zero. The code assumed that CBZ could
be used for this, but that instruction is forbidden when
-mtrack-speculation is being used.
gcc/
* config/aarch64/aarch64.cc (aarch64_mode_emit_local_sme_state):
Use aarch64_gen_compare_zero_and_branch rather than emitting
a CBZ directly.
gcc/testsuite/
* gcc.target/aarch64/sme/locally_streaming_1_ts.c: New test.
* gcc.target/aarch64/sme/sibcall_7_ts.c: Likewise.
foo cannot tail-call bar because foo needs to restore ZT0 after
the call. I'd forgotten to update the ok_for_sibcall rules
to handle this when adding SME2.
Thanks to Sander de Smalen for the spot.
gcc/
* config/aarch64/aarch64.cc (aarch64_function_ok_for_sibcall):
Check that each individual piece of state is shared in the same
way, rather than using an aggregate check for PSTATE.ZA.
gcc/testsuite/
* gcc.target/aarch64/sme/sibcall_9.c: New test.
aarch64: Ensure ZT0 is zeroed in a new-ZT0 function
ACLE guarantees that a function like:
__arm_new("zt0") foo() { ... }
will start with ZT0 equal to zero. I'd forgotten to enforce that
after commiting a lazy save. After such a save, we should zero
ZA iff the function has ZA state and zero ZT0 iff the function
has ZT0 state.
gcc/
* config/aarch64/aarch64.cc (aarch64_mode_emit_local_sme_state):
In the code that commits a lazy save, only zero ZA if the function
has ZA state. Similarly zero ZT0 if the function has ZT0 state.
gcc/testsuite/
* gcc.target/aarch64/sme/zt0_state_5.c (test3): Expect ZT0 rather
than ZA to be zeroed.
(test5): Remove zeroing of ZA.
aarch64: Remove the aarch64_commit_lazy_save pattern
The main purpose of the aarch64_commit_lazy_save pattern
was to defer insertion of a half-diamond until splitting,
since splitting knew how to create the associated basic blocks.
However, the fix for PR113220 means that mode-switching also
knows how to do that. This patch therefore removes the pattern
and emits the subinstructions directly.
On its own, this is actually a slight regression, since it
means we keep an unnecessary zero { za }. But the cases
where that happens are wrong for a different reason, and this
patch is a prerequisite to fixing it.
aarch64: Stack-clash prologues and VG saves [PR113995]
This patch fixes an ICE for a combination of:
- -fstack-clash-protection
- a frame that has SVE save slots
- a frame that has no GPR save slots
- a frame that has a VG save slot
The allocation code was folding the SVE save slot allocation into
the initial frame allocation, so that we had one allocation of
size <size of SVE registers> + 16. But the VG save code itself
expected the allocations to remain separate, since it wants to
store at a constant offset from SP or FP.
The VG save isn't shrink-wrapped and so acts as a probe of the
initial allocations. It should therefore be safe to keep separate
allocations in this case.
The scans in locally_streaming_1.c expect no stack clash protection,
so the patch forces that and adds a separate compile-only test for
when protection is enabled.
gcc/
PR target/113995
* config/aarch64/aarch64.cc (aarch64_expand_prologue): Don't
fold the SVE allocation into the initial allocation if the
initial allocation includes a VG save.
Allow mode-switching to introduce internal loops [PR113220]
In this PR, the SME mode-switching code needs to insert a stack-probe
loop for an alloca. This patch allows the target to do that.
There are two parts to it: allowing loops for insertions in blocks,
and allowing them for insertions on edges. The former can be handled
entirely within mode-switching itself, by recording which blocks have
had new branches inserted. The latter requires an extension to
commit_one_edge_insertion.
I think the extension to commit_one_edge_insertion makes logical sense,
since it already explicitly allows internal loops during RTL expansion.
The single-block find_sub_basic_blocks is a relatively recent addition,
so wouldn't have been available when the code was originally written.
The patch also has a small and obvious fix to make the aarch64 emit
hook cope with labels.
I've added specific -fstack-clash-protection versions of all
aarch64-sme.exp tests that previously failed because of this bug.
I've also added -fno-stack-clash-protection to the original versions
of these tests if they contain scans that assume no protection.
gcc/
PR target/113220
* cfgrtl.cc (commit_one_edge_insertion): Handle sequences that
contain jumps even if called after initial RTL expansion.
* mode-switching.cc: Include cfgbuild.h.
(optimize_mode_switching): Allow the sequence returned by the
emit hook to contain internal jumps. Record which blocks
contain such jumps and split the blocks at the end.
* config/aarch64/aarch64.cc (aarch64_mode_emit): Check for
non-debug insns when scanning the sequence.
gcc/testsuite/
PR target/113220
* gcc.target/aarch64/sme/call_sm_switch_5.c: Add
-fno-stack-clash-protection.
* gcc.target/aarch64/sme/call_sm_switch_5_scp.c: New test.
* gcc.target/aarch64/sme/sibcall_6_scp.c: New test.
* gcc.target/aarch64/sme/za_state_4.c: Add
-fno-stack-clash-protection.
* gcc.target/aarch64/sme/za_state_4_scp.c: New test.
* gcc.target/aarch64/sme/za_state_5.c: Add
-fno-stack-clash-protection.
* gcc.target/aarch64/sme/za_state_5_scp.c: New test.
Tobias Burnus [Wed, 21 Feb 2024 10:31:43 +0000 (11:31 +0100)]
OpenMP/nvptx: support 'arch(nvptx64)' as context selector
The main 'arch' context selector for nvptx is, well, 'nvptx';
however, as 'nvptx64' is used by LLVM, it makes sense
to support it as well.
Note that LLVM has: "The triple architecture can be one of
``nvptx`` (32-bit PTX) or ``nvptx64`` (64-bit PTX)."
GCC effectively only supports the 64-bit variant (at least for
offloading). Thus, GCC's 'nvptx' is not quite the same as LLVM's.
The device-compiler part (nvptx_omp_device_kind_arch_isa) uses
TARGET_ABI64 such that nvptx64 is only defined with -m64.
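A minimal illustration with made-up function names; both the existing nvptx selector and the new nvptx64 selector now match when offloading to nvptx devices:
  void saxpy_nvptx (int n, float a, float *x, float *y);
  #pragma omp declare variant (saxpy_nvptx) match (device={arch(nvptx64)})
  void saxpy (int n, float a, float *x, float *y);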
gcc/ChangeLog:
* config/nvptx/gen-omp-device-properties.sh: Add 'nvptx64' to arch.
* config/nvptx/nvptx.cc (nvptx_omp_device_kind_arch_isa): Likewise.
libgomp/ChangeLog:
* libgomp.texi (OpenMP Context Selectors): Add 'nvptx64' as additional
'arch' value for nvptx.
Ilya Leoshkevich [Mon, 19 Feb 2024 10:51:38 +0000 (11:51 +0100)]
IBM Z: Preserve exceptions in autovec-*-signaling-eq.c tests
DSE, DCE, and other passes are removing redundant signaling comparisons
from these tests, but the whole point is to check that GCC knows how to
emit them. Use -fno-delete-dead-exceptions to prevent that.
The plan to maintain PRU hardware-specific specs in the newlib tree has been
abandoned in favour of a new distinct GIT project. Update the
documentation accordingly.
gcc/ChangeLog:
* doc/invoke.texi (-mmcu): Add information about MCU specs.
pru: Document that arguments are not passed to main with -minrt
The minimal runtime has been documented from the beginning to break some
standard features in order to reduce code size, while keeping
the features required by typical firmware programs. Document one more
imposed restriction - the main() function must take no arguments.
gcc/ChangeLog:
* doc/invoke.texi (-minrt): Clarify that main
must take no arguments.
Iain Sandoe [Sun, 18 Feb 2024 06:52:47 +0000 (06:52 +0000)]
libgcc, aarch64: Allow for BE platforms in heap trampolines.
This arranges that the byte order of the instruction sequences is
independent of the byte order of memory.
libgcc/ChangeLog:
* config/aarch64/heap-trampoline.c
(aarch64_trampoline_insns): Arrange to encode instructions as a
byte array so that the order is independent of memory byte order.
(struct aarch64_trampoline): Likewise.
In _GLIBCXX_DEBUG mode the std::__niter_base can remove 2 layers, the
__gnu_debug::_Safe_iterator<> and the __gnu_cxx::__normal_iterator<>.
When std::__niter_wrap is called to build a __gnu_debug::_Safe_iterator<>
from a __gnu_cxx::__normal_iterator<> we then have a consistency issue
as the difference between the 2 iterators will done on a __normal_iterator
on one side and a C pointer on the other. To avoid this problem call
std::__niter_base on both input iterators.
libstdc++-v3/ChangeLog:
* include/bits/stl_algobase.h (std::__niter_wrap): Add a call to
std::__niter_base on res iterator.
Peter Hill [Tue, 20 Feb 2024 19:42:53 +0000 (20:42 +0100)]
Fortran: fix passing array component ref to polymorphic procedures
PR fortran/105658
gcc/fortran/ChangeLog:
* trans-expr.cc (gfc_conv_intrinsic_to_class): When passing an
array component reference of intrinsic type to a procedure
with an unlimited polymorphic dummy argument, a temporary
should be created.
Georg-Johann Lay [Tue, 20 Feb 2024 13:54:44 +0000 (14:54 +0100)]
AVR: Use types of exact size and signedness in built-ins.
The AVR built-ins used types like "int" or "char" that don't
have exact signedness or size, since these depend on -mint8
and -f[no-][un-]signed-char etc. As the built-ins are modelling
machine instructions of given type sizes and signedness, also
use types of matching size and signedness in their prototypes.
gcc/
* config/avr/builtins.def: Use function prototypes of given size
and signedness.
* config/avr/avr.cc (avr_init_builtins): Adjust types required
by builtins.def.
* doc/extend.texi (AVR Built-in Functions): Adjust accordingly.
aarch64: Fix streaming-compatible code with -mtrack-speculation [PR113805]
This patch makes -mtrack-speculation work on streaming-compatible
functions. There were two related issues. The first is that the
streaming-compatible code was using TB(N)Z unconditionally, whereas
those instructions are not allowed with speculation tracking.
That part can be fixed in a similar way to the recent eh_return
fix (PR112987).
The second issue was that the speculation-tracking pass runs
before some of the conditional branches are inserted. It isn't
safe to insert the branches any earlier, so the patch instead adds
a second speculation-tracking pass that runs afterwards. The new
pass is only used for streaming-compatible functions.
The testcase is adapted from call_sm_switch_1.c.
gcc/
PR target/113805
* config/aarch64/aarch64-passes.def (pass_late_track_speculation):
New pass.
* config/aarch64/aarch64-protos.h (make_pass_late_track_speculation):
Declare.
* config/aarch64/aarch64.md (is_call): New attribute.
(*and<mode>3nr_compare0): Rename to...
(@aarch64_and<mode>3nr_compare0): ...this.
* config/aarch64/aarch64-sme.md (aarch64_get_sme_state)
(aarch64_tpidr2_save, aarch64_tpidr2_restore): Add is_call attributes.
* config/aarch64/aarch64-speculation.cc: Update file comment to
describe the new late pass.
(aarch64_do_track_speculation): Handle is_call insns like other calls.
(pass_track_speculation): Add an is_late member variable.
(pass_track_speculation::gate): Run the late pass for streaming-
compatible functions and the early pass for other functions.
(make_pass_track_speculation): Update accordingly.
(make_pass_late_track_speculation): New function.
* config/aarch64/aarch64.cc (aarch64_gen_test_and_branch): New
function.
(aarch64_guard_switch_pstate_sm): Use it.
gcc/testsuite/
PR target/113805
* gcc.target/aarch64/sme/call_sm_switch_11.c: New test.
Jakub Jelinek [Tue, 20 Feb 2024 09:31:46 +0000 (10:31 +0100)]
testsuite: Fix up analyzer/torture/vector-extract-1.c test for i686 [PR113983]
The testcase fails on i686-linux with
.../gcc/testsuite/gcc.dg/analyzer/torture/vector-extract-1.c:11:1: warning: MMX vector return without MMX enabled changes the ABI [-Wpsabi]
Added -Wno-psabi to silence the warning.
2024-02-20 Jakub Jelinek <jakub@redhat.com>
PR analyzer/113983
* gcc.dg/analyzer/torture/vector-extract-1.c: Add -Wno-psabi as
dg-additional-options.
liuhongt [Mon, 19 Feb 2024 04:19:35 +0000 (12:19 +0800)]
Fix testcase for platform without gnu/stubs-x32.h
The maybe_x32 target check doesn't check whether the platform has
gnu/stubs-x32.h, but that header is included by stdint.h in the testcase.
Adjust testcase: remove stdint.h, use 'typedef long long int64_t'
instead.
Andrew Pinski [Sun, 18 Feb 2024 22:14:23 +0000 (14:14 -0800)]
analyzer: Fix maybe_undo_optimize_bit_field_compare vs non-scalar types [PR113983]
After r14-6419-g4eaaf7f5a378e8, maybe_undo_optimize_bit_field_compare would ICE on
a vector CST, but this function really should be checking whether we have integer types, so
reject non-integral types early on (like it was doing for non-char types before r14-6419-g4eaaf7f5a378e8).
Committed as obvious after build and tested for aarch64-linux-gnu with no regressions.
PR analyzer/113983
gcc/analyzer/ChangeLog:
* region-model-manager.cc (maybe_undo_optimize_bit_field_compare): Reject
non-integral types.
gcc/testsuite/ChangeLog:
* gcc.dg/analyzer/torture/vector-extract-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>