emit-rtl: Allow extra checks for paradoxical subregs [PR119966]
When a paradoxical subreg is detected, validate_subreg exits early, thus
skipping the important checks later in the function.
Fix by continuing with the checks instead of declaring early that the
paradoxical subreg is valid.
One of the newly allowed subsequent checks needed to be disabled for
paradoxical subregs. It turned out that combine attempts to create
a paradoxical subreg of mem even for strict-alignment targets.
That is invalid and should eventually be rejected, but is
temporarily left allowed to prevent regressions for
armv8l-unknown-linux-gnueabihf. See PR120329 for more details.
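For context, a paradoxical subreg is a subreg whose outer mode is wider than
its inner mode, e.g. (subreg:DI (reg:SI 100) 0). A minimal sketch of the shape
of the change, using the real GCC predicate paradoxical_subreg_p but with a
placeholder condition, not the actual patch text:
```
/* Before: paradoxical subregs were declared valid here, skipping
   everything below.  */
if (paradoxical_subreg_p (omode, imode))
  ;  /* was: return true;  now: fall through to the later checks  */

/* After: checks that do not apply to paradoxical subregs are gated
   individually (check_invalid_for_paradoxical is a placeholder).  */
if (!paradoxical_subreg_p (omode, imode)
    && check_invalid_for_paradoxical (omode, imode))
  return false;
```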
Tests I did:
- No regressions were found for C and C++ for the following targets:
- native x86_64-pc-linux-gnu
- cross riscv64-unknown-linux-gnu
- cross riscv32-none-elf
- Sanity checked armv8l-unknown-linux-gnueabihf by cross-building
up to and including libgcc. Linaro CI bot further confirmed there
are no regressions.
- Sanity checked powerpc64-unknown-linux-gnu by building a native
toolchain, but I could not set up qemu-user for DejaGnu testing.
PR target/119966
gcc/ChangeLog:
* emit-rtl.cc (validate_subreg): Do not exit immediately for
paradoxical subregs. Filter subsequent tests which are
not valid for paradoxical subregs.
Co-authored-by: Richard Sandiford <richard.sandiford@arm.com>
Signed-off-by: Dimitar Dimitrov <dimitar@dinux.eu>
Eric Botcazou [Sun, 18 May 2025 17:10:26 +0000 (19:10 +0200)]
Partially lift restriction from loc_list_from_tree_1
The function accepts all handled_component_p expressions and decodes them by
means of get_inner_reference as expected, but bails out on bitfields:
  /* TODO: We can extract value of the small expression via shifting
     even for nonzero bitpos.  */
  if (list_ret == 0)
    return 0;
  if (!multiple_p (bitpos, BITS_PER_UNIT, &bytepos)
      || !multiple_p (bitsize, BITS_PER_UNIT))
    {
      expansion_failed (loc, NULL_RTX,
                        "bitfield access");
      return 0;
    }
This lifts the second part of the restriction, which helps for obscure cases
of packed discriminated record types in Ada, although this requires the very
latest GDB sources.
gcc/
* dwarf2out.cc (loc_list_from_tree_1) <COMPONENT_REF>: Do not bail
out when the size is not a multiple of a byte.
Deal with bit-fields whose size is not a multiple of a byte when
dereferencing an address.
Andrew Pinski [Sun, 18 May 2025 00:21:39 +0000 (17:21 -0700)]
phiopt: Use mark_lhs_in_seq_for_dce instead of doing it inline
Right now phiopt has the same code as mark_lhs_in_seq_for_dce
inlined into match_simplify_replacement. Instead let's use the
function in gimple-fold that does the same thing.
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* gimple-fold.cc (mark_lhs_in_seq_for_dce): Make
non-static.
* gimple-fold.h (mark_lhs_in_seq_for_dce): Declare.
* tree-ssa-phiopt.cc (match_simplify_replacement): Use
mark_lhs_in_seq_for_dce instead of manually looping.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Oleg Endo [Sat, 17 May 2025 16:51:35 +0000 (10:51 -0600)]
[PATCH] libgcc SH: fix alignment for relaxation
From: QBos07 <qubos@outlook.de>
Date: Sat, 10 May 2025 16:56:28 +0000
When relaxation is enabled, we cannot infer the alignment
from the position, as that may change. This should not change
non-relaxed builds as it's already aligned there. This was
the missing piece to building an entire toolchain with -mrelax.
Credit goes to Oleg Endo: https://sourceware.org/bugzilla/show_bug.cgi?id=3298#c4
libgcc/
* config/sh/lib1funcs.S (ashiftrt_r4_32): Increase alignment.
(movemem): Force alignment of the mova instruction.
Jeff Law [Sat, 17 May 2025 15:37:01 +0000 (09:37 -0600)]
[RISC-V] Fix ICE due to bogus use of gen_rtvec
Found this while setting up the risc-v coordination branch off of gcc-15. Not
sure why I didn't use rtvec_alloc directly here since we're going to initialize
the whole vector ourselves. Using gen_rtvec was just wrong as it's walking
down a non-existent varargs list. Under the "right" circumstances it can walk
off a page and fault.
This was seen with a test already in the testsuite (I forget which test), so no
new regression test.
Tested in my tester and verified the failure on the coordination branch is
resolved as well. Waiting on pre-commit CI to render a verdict.
gcc/
* config/riscv/riscv-vect-permconst.cc (vector_permconst::process_bb):
Use rtvec_alloc, not gen_rtvec since we don't want/need to initialize
the vector.
Yuao Ma [Sat, 17 May 2025 13:45:49 +0000 (07:45 -0600)]
[PATCH] gcc: add trigonometric pi-based functions as gcc builtins
I committed the wrong version on Yuao's behalf. This followup adds the
documentation changes -- Jeff.
This patch adds trigonometric pi-based functions as gcc builtins: acospi, asinpi, atan2pi,
atanpi, cospi, sinpi, and tanpi. Latest glibc already provides support for
these functions, which we plan to leverage in future gfortran implementations.
The patch includes two test cases to verify both correct code generation and
function definition.
If approved, I suggest committing this foundational change first. Constant
folding for these builtins will be addressed in subsequent patches.
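As a hedged usage sketch (assuming a glibc recent enough to declare the C23
pi-based functions; this is not part of the patch itself), the new builtins
compute the usual trigonometric functions with the factor pi folded in:
```
#include <math.h>
#include <stdio.h>

int main (void)
{
  printf ("%f\n", sinpi (0.5));        /* sin (pi * 0.5)  == 1.0  */
  printf ("%f\n", cospi (1.0));        /* cos (pi * 1.0)  == -1.0 */
  printf ("%f\n", atan2pi (1.0, 1.0)); /* atan2 (1, 1)/pi == 0.25 */
  return 0;
}
```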
Yuao Ma [Sat, 17 May 2025 13:42:24 +0000 (07:42 -0600)]
[PATCH] gcc: add trigonometric pi-based functions as gcc builtins
This patch adds trigonometric pi-based functions as gcc builtins: acospi, asinpi, atan2pi,
atanpi, cospi, sinpi, and tanpi. Latest glibc already provides support for
these functions, which we plan to leverage in future gfortran implementations.
The patch includes two test cases to verify both correct code generation and
function definition.
If approved, I suggest committing this foundational change first. Constant
folding for these builtins will be addressed in subsequent patches.
Jeff Law [Sat, 17 May 2025 13:16:50 +0000 (07:16 -0600)]
[RISC-V] Avoid setting output object more than once in IOR/XOR synthesis
While evaluating Shreya's logical AND synthesis work on spec2017 I ran into a
code quality regression where combine was failing to eliminate a redundant sign
extension.
I had a hunch the problem would be with the multiple sets of the same pseudo
register in the AND synthesis path. I was right that the problem was multiple
sets of the same pseudo, but it was actually some of the splitters in the
RISC-V backend that were the culprit. Those multiple sets caused the sign bit
tracking code to need to make conservative assumptions thus resulting in
failure to eliminate the unnecessary sign extension.
So before we start moving on the logical AND patch we're going to do some
cleanups.
There are multiple moving parts in play. For example, we have splitters which do
multiple sets of the output register. Fixing some of those independently would
result in a code quality regression. Instead they need some adjustments to or
removal of mvconst_internal. Of course getting rid of mvconst_internal will
trigger all kinds of code quality regressions right now which ultimately lead
back to the need to revamp the logical AND expander. Point being we've got
some circular dependencies and breaking them may result in short term code
quality regressions. I'll obviously try to avoid those as much as possible.
So to start the process this patch adjusts the recently added XOR/IOR synthesis
to avoid re-using the destination register. While the reuse was clearly safe
from a semantic standpoint, various parts of the compiler can do a better job
for pseudos that are only set once.
Given this synthesis path should only be active during initial RTL generation,
we can create new pseudos at will, so we create a new one for each insn. At
the end of the sequence we copy from the last set into the final destination.
This has various trivial impacts on the code generation, but the resulting code
looks no better or worse to me across spec2017.
This has been tested in my tester and is currently bootstrapping on my BPI.
Waiting on data from the pre-commit tester before moving forward...
gcc/
* config/riscv/riscv.cc (synthesize_ior_xor): Avoid writing
operands[0] more than once, use new pseudos instead.
Pan Li [Fri, 16 May 2025 07:34:51 +0000 (15:34 +0800)]
RISC-V: Avoid scalar unsigned SAT_ADD test data duplication
Some of the previous scalar unsigned SAT_ADD test data are
duplicated in different test files. This patch would like to
move them into a shared header file, to avoid the test data
duplication.
The below test suites are passed for this patch series.
* The rv64gcv full regression test.
Pengxuan Zheng [Mon, 12 May 2025 17:21:49 +0000 (10:21 -0700)]
aarch64: Add more vector permute tests for the FMOV optimization [PR100165]
This patch adds more tests for vector permutes which can now be optimized as
FMOV with the generic PERM change and the aarch64 AND patch.
Changes since v1:
* v2: Add -mlittle-endian to the little endian tests explicitly and rename the
tests accordingly.
PR target/100165
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/fmov-3-be.c: New test.
* gcc.target/aarch64/fmov-3-le.c: New test.
* gcc.target/aarch64/fmov-4-be.c: New test.
* gcc.target/aarch64/fmov-4-le.c: New test.
* gcc.target/aarch64/fmov-5-be.c: New test.
* gcc.target/aarch64/fmov-5-le.c: New test.
Pengxuan Zheng [Mon, 12 May 2025 17:12:11 +0000 (10:12 -0700)]
aarch64: Optimize AND with certain vector of immediates as FMOV [PR100165]
We can optimize AND with certain vector of immediates as FMOV if the result of
the AND is as if the upper lane of the input vector is set to zero and the lower
lane remains unchanged. For example, GCC currently generates:
f_v4hi:
movi d31, 0xffffffff
and v0.8b, v0.8b, v31.8b
ret
With this patch, it generates:
f_v4hi:
fmov s0, s0
ret
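A hypothetical source for the f_v4hi example (an assumption about the
testcase shape, not taken from the patch):
```
typedef short v4hi __attribute__ ((vector_size (8)));

/* Keep the two low lanes (the low 32 bits on little-endian) and zero
   the two upper ones; the mask makes the AND behave like fmov s0, s0.  */
v4hi
f_v4hi (v4hi x)
{
  return x & (v4hi) { -1, -1, 0, 0 };
}
```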
Changes since v1:
* v2: Simplify the mask checking logic by using native_decode_int and address a
few other review comments.
PR target/100165
gcc/ChangeLog:
* config/aarch64/aarch64-protos.h (aarch64_output_fmov): New prototype.
(aarch64_simd_valid_and_imm_fmov): Likewise.
* config/aarch64/aarch64-simd.md (and<mode>3<vczle><vczbe>): Allow FMOV
codegen.
* config/aarch64/aarch64.cc (aarch64_simd_valid_and_imm_fmov): New.
(aarch64_output_fmov): Likewise.
* config/aarch64/constraints.md (Df): New constraint.
* config/aarch64/predicates.md (aarch64_reg_or_and_imm): Update
predicate to support FMOV codegen.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/fmov-1-be.c: New test.
* gcc.target/aarch64/fmov-1-le.c: New test.
* gcc.target/aarch64/fmov-2-be.c: New test.
* gcc.target/aarch64/fmov-2-le.c: New test.
Pengxuan Zheng [Wed, 7 May 2025 17:47:37 +0000 (10:47 -0700)]
aarch64: Recognize vector permute patterns which can be interpreted as AND [PR100165]
A permute that blends a vector with zero can be interpreted as an AND with a
mask. This idea was suggested by Richard Sandiford when he was reviewing my
patch which tries to optimize certain vector permutes with the FMOV instruction
for the aarch64 target.
Pengxuan Zheng [Fri, 16 May 2025 00:52:29 +0000 (17:52 -0700)]
aarch64: Fix an oversight in aarch64_evpc_reencode
Some fields (e.g., zero_op0_p and zero_op1_p) of the struct "newd" may be left
uninitialized in aarch64_evpc_reencode. This can cause reading of uninitialized
data. I found this oversight when testing my patches for the AND/FMOV
optimizations. This patch fixes the bug by zero-initializing the struct.
Pushed as obvious after bootstrap/test on aarch64-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_evpc_reencode): Zero initialize
newd.
Patrick Palka [Fri, 16 May 2025 17:06:04 +0000 (13:06 -0400)]
libstdc++: Use __is_invocable/nothrow_invocable builtins more
As a follow-up to r15-1253 and r15-1254 which made us use these builtins
in the standard std::is_invocable/nothrow_invocable class templates, let's
also use them directly in the standard variable templates and our internal
C++11 __is_invocable/nothrow_invocable class templates.
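A hedged sketch of the resulting pattern (not the exact libstdc++ code):
```
#if __has_builtin(__is_invocable)
template<typename _Fn, typename... _Args>
  inline constexpr bool is_invocable_v = __is_invocable(_Fn, _Args...);

template<typename _Fn, typename... _Args>
  inline constexpr bool is_nothrow_invocable_v
    = __is_nothrow_invocable(_Fn, _Args...);
#endif
```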
libstdc++-v3/ChangeLog:
* include/std/type_traits (__is_invocable): Define in terms of
corresponding builtin if available.
(__is_nothrow_invocable): Likewise.
(is_invocable_v): Likewise.
(is_nothrow_invocable_v): Likewise.
Andrew Pinski [Thu, 15 May 2025 03:41:22 +0000 (20:41 -0700)]
Forwprop: add a debug dump after propagate into comparison does something
I noticed that forwprop does not dump when forward_propagate_into_comparison
changes the assign statement.
I am actually using it to help guide changing/improving/adding match patterns
instead of depending on doing a tree "combiner" here.
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-forwprop.cc (forward_propagate_into_comparison): Dump
when replacing statement.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Martin Jambor [Fri, 16 May 2025 15:13:51 +0000 (17:13 +0200)]
ipa: Dump cgraph_node UID instead of order into ipa-clones dump file
Starting from GCC 15, the order is no longer unique for any
symtab_nodes but m_uid is, so I believe we ought to dump the latter in
the ipa-clones dump, if only so that people can reliably match entries
about new clones to those about removed nodes (if any).
This patch also contains fixes to a few other places where we have
so far dumped order to our ordinary dumps and which have been
identified by Michal Jires.
gcc/ChangeLog:
2025-05-16 Martin Jambor <mjambor@suse.cz>
* cgraph.h (symtab_node): Make member function get_uid const.
* cgraphclones.cc (dump_callgraph_transformation): Dump m_uid of the
call graph nodes instead of order.
* cgraph.cc (cgraph_node::remove): Likewise.
* ipa-cp.cc (ipcp_lattice<valtype>::print): Likewise.
* ipa-sra.cc (ipa_sra_summarize_function): Likewise.
* symtab.cc (symtab_node::dump_base): Likewise.
Andrew Pinski [Sat, 10 May 2025 04:13:48 +0000 (21:13 -0700)]
aarch64: Fix narrowing warning in driver-aarch64.cc [PR118603]
Since the AARCH64_CORE defines in aarch64-cores.def all use -1 for
the variant, it is just easier to add the cast to unsigned in the usage
in driver-aarch64.cc.
Built and tested on aarch64-linux-gnu.
gcc/ChangeLog:
PR target/118603
* config/aarch64/driver-aarch64.cc (aarch64_cpu_data): Add cast to unsigned
to VARIANT of the define AARCH64_CORE.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Sat, 10 May 2025 03:56:42 +0000 (20:56 -0700)]
aarch64: Fix narrowing warning in aarch64_detect_vector_stmt_subtype
There is a narrowing warning in aarch64_detect_vector_stmt_subtype
about gather_load_x32_cost and gather_load_x64_cost converting from int to unsigned.
These fields are always unsigned and even the constructor for sve_vec_cost takes
an unsigned. So let's just move the fields over to unsigned.
Built and tested for aarch64-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-protos.h (struct sve_vec_cost): Change gather_load_x32_cost
and gather_load_x64_cost fields to unsigned.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Mon, 21 Apr 2025 19:19:49 +0000 (12:19 -0700)]
forwprop: Move memcpy_to_memset from gimple fold to forwprop
Since this optimization now walks the vops, it is better to only
do it in forwprop rather than all the time in fold_stmt.
The next patch will add the limit to the alias walk.
gcc/ChangeLog:
* gimple-fold.cc (optimize_memcpy_to_memset): Move to
tree-ssa-forwprop.cc.
(gimple_fold_builtin_memory_op): Remove call to
optimize_memcpy_to_memset.
(fold_stmt_1): Likewise.
* tree-ssa-forwprop.cc (optimize_memcpy_to_memset): Move from
gimple-fold.cc.
(simplify_builtin_call): Try to optimize memcpy/memset.
(pass_forwprop::execute): Try to optimize memcpy like assignment
from a previous memset.
gcc/testsuite/ChangeLog:
* gcc.dg/pr78408-1.c: Update scan to forwprop1 only.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Iain Sandoe [Sat, 10 May 2025 16:22:55 +0000 (17:22 +0100)]
c++, coroutines: Allow NVRO in more cases for ramp functions.
The constraints of the C++ coroutines specification require the ramp
to construct a return object early in the function. This will be returned
at some later time. This is implemented as NVRO but requires that copying
be well-formed even though it will be elided. Special-case ramp functions
to allow this.
gcc/cp/ChangeLog:
* typeck.cc (check_return_expr): Suppress conversions for NVRO
in coroutine ramp functions.
Iain Sandoe [Sat, 10 May 2025 16:12:44 +0000 (17:12 +0100)]
c++: Set the outer brace marker for missed cases.
In some cases, a function might be declared as FUNCTION_NEEDS_BODY_BLOCK
but all the content is contained within that block. However, poplevel
is currently assuming that such cases would always contain subblocks.
In the case that we do have a body block but no subblocks, set the
outer brace marker on the body block. This situation occurs
for at least coroutine lambda ramp functions and empty constructors.
gcc/cp/ChangeLog:
* decl.cc (poplevel): Set BLOCK_OUTER_CURLY_BRACE_P on the
body block for functions with no subblocks.
Nathaniel Shead [Fri, 28 Mar 2025 12:30:31 +0000 (23:30 +1100)]
c++/modules: Clean up importer_interface
This patch removes some no longer needed special casing in linkage
determination, and makes the distinction between "always_emit" and
"internal" for better future-proofing.
gcc/cp/ChangeLog:
* module.cc (importer_interface): Adjust flags.
(get_importer_interface): Rename flags.
(trees_out::core_bools): Clean up special casing.
(trees_out::write_function_def): Rename flag.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
Jason Merrill [Fri, 16 May 2025 12:22:08 +0000 (08:22 -0400)]
c++: one more coro test tweak
After my r16-670, running the testsuite with explicit --stds didn't run this
one in C++17 mode, but the default did. Let's remove the { target c++17 }
so it doesn't by default, either.
This patch mops up obvious redundancies that weren't caught by the
automatic regexp replacements in earlier patches. It doesn't do
anything with genemit.cc, since that will be part of a later series.
gcc/
* config/arm/arm.cc (arm_gen_load_multiple_1): Simplify use of
end_sequence.
(arm_gen_store_multiple_1): Likewise.
* expr.cc (gen_move_insn): Likewise.
* gentarget-def.cc (main): Likewise.
The start_sequence/end_sequence interface was a big improvement over
the previous state, but one slightly awkward thing about it is that
you have to call get_insns before end_sequence in order to get the
insn sequence itself:
To get the contents of the sequence just made, you must call
`get_insns' *before* calling here.
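For reference, the idiom under discussion looks like this (a sketch using the
existing GCC-internal API; dest and src stand in for whatever is emitted):
```
start_sequence ();
emit_move_insn (dest, src);
rtx_insn *seq = get_insns ();  /* must be called before end_sequence */
end_sequence ();
emit_insn (seq);               /* e.g. re-emit the captured sequence */
```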
I can see three main potential objections to this:
(1) It isn't obvious whether ending the sequence would return the first
or the last instruction. But although some code reads *both* the
first and the last instruction, I can't think of a specific case
where code would want *only* the last instruction. All the emit
functions take the first instruction rather than the last.
(2) The "end" in end_sequence might imply the C++ meaning of an exclusive
endpoint iterator. But for an insn sequence, the exclusive endpoint
is always the null pointer, so it would never need to be returned.
That said, we could rename the function to something like
"finish_sequence" or "complete_sequence" if this is an issue.
(3) There might have been an intention that start_sequence/end_sequence
could in future reclaim memory for unwanted sequences, and so an
explicit get_insns was used to indicate that the caller does want
the sequence.
But that sort of memory reclamation has never been added,
and now that the codebase is C++, it would be easier to handle
using RAII. I think reclaiming memory would be difficult to do in
any case, since some code records the individual instructions that
they emit, rather than using get_insns.
Jonathan Wakely [Thu, 15 May 2025 15:03:53 +0000 (16:03 +0100)]
libstdc++: Fix proc check_v3_target_namedlocale for "" locale [PR65909]
When the last format argument to a Tcl proc is named 'args' it has
special meaning and is a list that accepts any number of arguments[1].
This means when "" is passed to the proc and then we expand "$args" we
get an empty list formatted as "{}". My r16-537-g3e2b83faeb6b14 change
broke all uses of dg-require-namedlocale with empty locale names, "".
By changing the name of the formal argument to 'locale' we avoid the
special behaviour for 'args' and now it only accepts a single argument
(as was always intended). When expanded as "$locale" we get "" as I
expected.
Pan Li [Tue, 13 May 2025 03:12:53 +0000 (11:12 +0800)]
RISC-V: Adjust vx combine test case to avoid name conflict
Given we will put all vx combine tests for int8 in a single file,
we need to make sure the generated functions for different
types and ops have different names. Thus, refactor
the test helper macros to avoid possible function name
conflicts.
The below test suites are passed for this patch series.
* The rv64gcv full regression test.
Pan Li [Sun, 11 May 2025 08:20:28 +0000 (16:20 +0800)]
RISC-V: Combine vec_duplicate + vsub.vv to vsub.vx on GR2VR cost
This patch would like to combine the vec_duplicate + vsub.vv to the
vsub.vx, as in the example code below. The related pattern will depend
on the cost of vec_duplicate from GR2VR. Then the late-combine will
take action if the cost of GR2VR is zero, and reject the combination
if the GR2VR cost is greater than zero.
Assume we have example code like below, GR2VR cost is 0.
#define DEF_VX_BINARY(T, OP)                                         \
void                                                                  \
test_vx_binary (T * restrict out, T * restrict in, T x, unsigned n)   \
{                                                                     \
  for (unsigned i = 0; i < n; i++)                                    \
    out[i] = in[i] OP x;                                              \
}
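As a hypothetical instantiation (the actual test files may differ),
subtraction on int32_t would look like:
```
#include <stdint.h>

/* Expands to test_vx_binary computing out[i] = in[i] - x, which the
   vectorizer plus late-combine can turn into vsub.vx when the GR2VR
   cost is zero.  */
DEF_VX_BINARY (int32_t, -)
```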
The below test suites are passed for this patch.
* The rv64gcv full regression test.
gcc/ChangeLog:
* config/riscv/autovec-opt.md (*<optab>_vx_<mode>): Add new
pattern to convert vec_duplicate + vsub.vv to vsub.vx.
* config/riscv/riscv.cc (riscv_rtx_costs): Add minus as plus op.
* config/riscv/vector-iterators.md: Add minus to iterator
any_int_binop_no_shift_vx.
Jason Merrill [Sat, 10 May 2025 15:24:38 +0000 (11:24 -0400)]
c++: remove coroutines.exp
coroutines.exp was basically only there to add -std=c++20 to all the tests;
removing it lets us use the general support for running tests under multiple
standards. Doing this revealed that some tests that specifically run in
C++17 mode were relying on -std=c++20 followed by -std=c++17 leaving
flag_coroutines set, which seems unintentional, and different from how we
handle other feature flags. So this changes that, and adds the missing
-fcoroutines to those tests.
Harald Anlauf [Thu, 15 May 2025 19:07:07 +0000 (21:07 +0200)]
Fortran: default-initialization and functions returning derived type [PR85750]
Functions with a non-pointer, non-allocatable result of derived type did
not always get initialized, although the type had default-initialization
and a derived-type component had the allocatable or pointer attribute.
Rearrange the logic when to apply default-initialization.
PR fortran/85750
gcc/fortran/ChangeLog:
* resolve.cc (resolve_symbol): Reorder conditions when to apply
default-initializers.
Andrew MacLeod [Wed, 14 May 2025 15:13:15 +0000 (11:13 -0400)]
Allow bitmask intersection to process unknown masks.
bitmask_intersection should not return immediately if the current mask is
unknown. Unknown may mean it's the default for a range, and this may
interact in interesting ways with the other bitmask.
PR tree-optimization/116546
* value-range.cc (irange::intersect_bitmask): Allow unknown
bitmasks to be processed.
Andrew MacLeod [Wed, 14 May 2025 15:12:22 +0000 (11:12 -0400)]
Improve constant bitmasks.
Bitmasks for constants were created only for trailing zeros. It is no
additional work to also include leading 1's in the value that are also
known. For [5, 7], for instance, bit 2 is set in every value, so it
becomes a known 1:
before : [5, 7] mask 0x7 value 0x0
after  : [5, 7] mask 0x3 value 0x4
PR tree-optimization/116546
* value-range.cc (irange_bitmask::irange_bitmask): Include
leading ones in the bitmask.
Andrew MacLeod [Tue, 13 May 2025 17:23:16 +0000 (13:23 -0400)]
Turn get_bitmask_from_range into an irange_bitmask constructor.
There are other places where this is interesting, so move the static
function into a constructor for class irange_bitmask.
* value-range.cc (irange_bitmask::irange_bitmask): Rename from
get_bitmask_from_range and tweak.
(prange::set): Use new constructor.
(prange::intersect): Use new constructor.
(irange::get_bitmask): Likewise.
* value-range.h (irange_bitmask): New constructor prototype.
Robert Dubner [Thu, 15 May 2025 16:01:12 +0000 (12:01 -0400)]
cobol: Don't display 0xFF HIGH-VALUE characters in testcases. [PR120251]
The tests were displaying 0xFF characters, and the resulting generated
output changed with the system locale. The check_88 test was modified
so that the regex comparisons ignore those character positions. Two
of the other tests were changed to output hexadecimal rather than
character strings.
There is one new test, and the other inspect testcases were edited to
remove an unimportant back-apostrophe that had found its way into the
source code sequence number area.
gcc/testsuite/ChangeLog:
PR cobol/120251
* cobol.dg/group1/check_88.cob: Ignore characters above 0x80.
* cobol.dg/group2/ALLOCATE_Rule_8_OPTION_INITIALIZE_with_figconst.cob:
Output HIGH-VALUE as hex, rather than as characters.
* cobol.dg/group2/ALLOCATE_Rule_8_OPTION_INITIALIZE_with_figconst.out:
Likewise.
* cobol.dg/group2/INSPECT_CONVERTING_TO_figurative_constants.cob: Typo.
* cobol.dg/group2/INSPECT_CONVERTING_TO_figurative_constants.out: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_1.cob: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_2.cob: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_3.cob: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_4.cob: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_5-f.cob: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_6.cob: Likewise.
* cobol.dg/group2/INSPECT_ISO_Example_7.cob: Likewise.
* cobol.dg/group2/Multiple_INDEXED_BY_variables_with_the_same_name.cob: New test.
* cobol.dg/group2/Multiple_INDEXED_BY_variables_with_the_same_name.out: New test.
Luc Grosheintz [Wed, 14 May 2025 19:13:52 +0000 (21:13 +0200)]
libstdc++: Fix class mandate for extents.
The standard states that the IndexType must be a signed or unsigned
integer. This mandate was implemented using `std::is_integral_v`, which
also includes (among others) char and bool, which are neither signed nor
unsigned integers.
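A hedged illustration of the effect (assuming C++23 <mdspan>):
```
#include <mdspan>

std::extents<int, 2, 3> e_ok;     // OK: int is a signed integer type
// std::extents<char, 2> e_bad;   // now rejected: char is neither a signed
//                                // nor an unsigned integer type
// std::extents<bool, 2> e_bad2;  // now rejected for the same reason
```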
libstdc++-v3/ChangeLog:
* include/std/mdspan: Implement the mandate for extents as
signed or unsigned integer and not any interal type. Remove
leading underscores from names in static_assert message.
* testsuite/23_containers/mdspan/extents/class_mandates_neg.cc:
Check that extents<char,...> and extents<bool,...> are invalid.
Adjust dg-prune-output pattern.
* testsuite/23_containers/mdspan/extents/misc.cc: Update
tests to avoid `char` and `bool` as IndexType.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Jonathan Wakely [Thu, 15 May 2025 10:01:05 +0000 (11:01 +0100)]
libstdc++: Fix std::format_kind primary template for Clang [PR120190]
Although Clang trunk has been adjusted to handle our std::format_kind
definition (because they need to be able to compile the GCC 15.1.0
release), it's probably better to not rely on something that they might
start diagnosing again in future.
Define the primary template in terms of an immediately invoked function
expression, so that we can put a static_assert(false) in the body.
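A hedged sketch of the technique (the actual libstdc++ definition may differ
in names and diagnostic text):
```
template<typename _Rg>
  constexpr auto format_kind = [] {
    static_assert(false, "create a specialization of format_kind for your type");
  }();
```
The lambda body is only instantiated when the primary template itself is
used, so programs that only use the provided specializations never trip the
static_assert.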
libstdc++-v3/ChangeLog:
PR libstdc++/120190
* include/std/format (format_kind): Adjust primary template to
not depend on itself.
* testsuite/std/format/ranges/format_kind_neg.cc: Adjust
expected errors. Check more invalid specializations.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Reviewed-by: Daniel Krügler <daniel.kruegler@gmail.com>
Jeff Law [Thu, 15 May 2025 15:03:13 +0000 (09:03 -0600)]
[RISC-V][PR target/120223] Don't use bset/binv for XTHEADBS
T-Head has the XTHEADBB extension which has a lot of overlap with Zbb. I made
the incorrect assumption that XTHEADBS would largely be like Zbs when
generalizing Shreya's work.
As a result we can't use the operation synthesis code for IOR/XOR because we
don't have binv/bset like capabilities. I should have double checked on
XTHEADBS, my bad.
Anyway, the fix is trivial. Don't allow bset/binv based on XTHEADBS.
Already spun in my tester. Spinning in the pre-commit CI system now.
PR target/120223
gcc/
* config/riscv/riscv.cc (synthesize_ior_xor): XTHEADBS does not have
single bit manipulations.
Patrick Palka [Thu, 15 May 2025 15:07:53 +0000 (11:07 -0400)]
c++: unifying specializations of non-primary tmpls [PR120161]
Here unification of P=Wrap<int>::type, A=Wrap<long>::type wrongly
succeeds ever since r14-4112 which made the RECORD_TYPE case of unify
no longer recurse into template arguments for non-primary templates
(since they're a non-deduced context) and so the int/long mismatch that
makes the two types distinct goes unnoticed.
In the case of (comparing specializations of) a non-primary template,
unify should still go on to compare the types directly before returning
success.
PR c++/120161
gcc/cp/ChangeLog:
* pt.cc (unify) <case RECORD_TYPE>: When comparing specializations
of a non-primary template, still perform a type comparison.
Jason Merrill [Fri, 9 May 2025 23:13:49 +0000 (19:13 -0400)]
c++: -fimplicit-constexpr and modules
Import didn't like differences in DECL_DECLARED_CONSTEXPR_P due to implicit
constexpr, breaking several g++.dg/modules tests; we should handle that
along with DECL_MAYBE_DELETED, for which we need to stream the bit.
gcc/cp/ChangeLog:
* module.cc (trees_out::lang_decl_bools): Stream implicit_constexpr.
(trees_in::lang_decl_bools): Likewise.
(trees_in::is_matching_decl): Check it.
Jason Merrill [Wed, 14 May 2025 14:23:32 +0000 (10:23 -0400)]
c++: one more PR99599 tweak
Patrick pointed out that if the parm/arg types aren't complete yet at this
point, it would affect the type_has_converting_constructor and
TYPE_HAS_CONVERSION tests. I don't have a testcase, but it makes sense for
safety.
PR c++/99599
gcc/cp/ChangeLog:
* pt.cc (conversion_may_instantiate_p): Make sure
classes are complete.
Jason Merrill [Thu, 1 May 2025 14:20:25 +0000 (10:20 -0400)]
libstdc++: build testsuite with -Wabi
I added this locally to check whether the PR120012 fix affects libstdc++ (it
doesn't) but it seems more generally useful to catch whether compiler
ABI changes have library impact.
As a followup to PAREN_EXPR verification, let's ensure that CONJ_EXPR is
only used with COMPLEX_TYPE. While at it, move the whole block towards
the end of the switch, because unlike the other entries it needs to
break out of the switch, not immediately return from the function,
as after the switch we check that types of LHS and RHS match.
Refactor a bit to avoid repeated blocks with debug_generic_expr.
gcc/ChangeLog:
* tree-cfg.cc (verify_gimple_assign_unary): Accept only
COMPLEX_TYPE for CONJ_EXPR.
Tobias Burnus [Thu, 15 May 2025 07:15:21 +0000 (09:15 +0200)]
OpenMP/Fortran: Fix allocatable-component mapping of derived-type array comps
The check whether the location expression in a map clause has allocatable
components was failing for some derived-type array expressions such as
map(var%tiles(1))
as the compiler produced
_4 = var.tiles;
MEMREF(_4, _5);
This commit now also handles this case.
gcc/fortran/ChangeLog:
* trans-openmp.cc (gfc_omp_deep_mapping_do): Handle SSA_NAME if
a def_stmt is available.
libgomp/ChangeLog:
* testsuite/libgomp.fortran/alloc-comp-4.f90: New test.
Andrew Pinski [Wed, 14 May 2025 16:01:07 +0000 (09:01 -0700)]
tree: Canonical order for ADDR
This is the followup based on the review at
https://inbox.sourceware.org/gcc-patches/CAFiYyc3xeG75dsWaF63Zbu5uELPEAEoHwGfoGaVyDWouUJ70Mg@mail.gmail.com/
.
We should put ADDR_EXPR last instead of just the is_gimple_invariant_address ones.
Note a few match patterns needed to be updated for this change, but we get a decent
improvement, as forwprop-38.c is now optimized during CCP rather than only at forwprop.
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* fold-const.cc (tree_swap_operands_p): Put ADDR_EXPR last
instead of just is_gimple_invariant_address ones.
* match.pd (`a ptr+ b !=\== ADDR`, `ADDR !=/== ssa_name`):
Move the ADDR to the last operand. Update comment.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Richard Biener [Wed, 14 May 2025 14:45:08 +0000 (16:45 +0200)]
Enhance -fopt-info-vec vectorized loop diagnostic
The following includes whether we vectorize an epilogue, whether
we use loop masking and what vectorization factor (unroll factor)
we use. So it's now
t.c:4:21: optimized: loop vectorized using 64 byte vectors and unroll factor 32
t.c:4:21: optimized: epilogue loop vectorized using masked 64 byte vectors and unroll factor 32
for a masked epilogue with AVX512 and HImode data for example. Rather
than
t.c:4:21: optimized: loop vectorized using 64 byte vectors
t.c:4:21: optimized: loop vectorized using 64 byte vectors
I verified we don't translate opt-info messages and thus excessive
use of %s to compose the strings should be OK.
* tree-vectorizer.cc (vect_transform_loops): When diagnosing
a vectorized loop indicate whether we vectorized an epilogue,
whether we used masked vectors and what unroll factor was
used.
Richard Biener [Wed, 14 May 2025 14:36:29 +0000 (16:36 +0200)]
Fix regression from x86 multi-epilogue tuning
With the avx512_two_epilogues tuning enabled for zen4 and zen5
the gcc.target/i386/vect-epilogues-5.c testcase below regresses
and ends up using AVX2 sized vectors for the masked epilogue
rather than AVX512 sized vectors. The following patch rectifies
this and adds coverage for the intended behavior.
* config/i386/i386.cc (ix86_vector_costs::finish_cost):
Do not suggest a first epilogue mode for AVX512 sized
main loops with X86_TUNE_AVX512_TWO_EPILOGUES as that
interferes with using a masked epilogue.
Simon Martin [Wed, 14 May 2025 18:29:57 +0000 (20:29 +0200)]
c++: Add testcase for issue fixed in GCC 15 [PR120126]
Patrick noticed that this PR's testcase has been fixed by the patch for
PR c++/114292 (r15-7238-gceabea405ffdc8), more specifically the part
that walks the type of DECL_EXPR DECLs.
Tobias Burnus [Wed, 14 May 2025 18:06:49 +0000 (20:06 +0200)]
OpenMP: Fix mapping of zero-sized arrays with non-literal size: map(var[:n]), n = 0
For map(ptr[:0]), the used map kind is GOMP_MAP_ATTACH_ZERO_LENGTH_ARRAY_SECTION
and it is permitted that 'ptr' does not exist. 'ptr' is set to the device
pointee if it exists or to the host value otherwise.
For map(ptr[:3]), the variable is first mapped and then ptr is updated to point
to the just-mapped device data; the attachment uses GOMP_MAP_ATTACH.
For map(ptr[:n]), GCC always generates a GOMP_MAP_ATTACH, but when n == 0,
it was failing with:
"pointer target not mapped for attach"
The solution is not to fail but first to check whether it was mapped before.
It turned out that for the mapping part, GCC adds a run-time check whether
n == 0 and uses GOMP_MAP_ZERO_LEN_ARRAY_SECTION for the mapping.
Thus, we just have to check whether there is such a mapping for the address
for which the GOMP_MAP_ATTACH was requested. And, if there was, the
error diagnostic can be skipped.
Unsurprisingly, this issue occurs in real-world code; it was detected in
code that distributes work via MPI where, for some processes, some bounds
ended up being zero.
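A hedged C analogue of the failing pattern (an illustration, not the actual
testcase):
```
/* n may be zero at run time; previously the GOMP_MAP_ATTACH for ptr then
   failed with "pointer target not mapped for attach".  */
void
use (double *ptr, int n)
{
  #pragma omp target map(ptr[:n])
  {
    if (n > 0)
      ptr[0] = 1.0;
  }
}
```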
libgomp/ChangeLog:
* target.c (gomp_attach_pointer): Return bool; accept additional
bool to optionally silence the fatal pointee-not-found error.
(gomp_map_vars_internal): If the pointee could not be found,
check whether it was mapped as GOMP_MAP_ZERO_LEN_ARRAY_SECTION.
* libgomp.h (gomp_attach_pointer): Update prototype.
* oacc-mem.c (acc_attach_async, goacc_enter_data_internal): Update
calls.
* testsuite/libgomp.c/target-map-zero-sized.c: New test.
* testsuite/libgomp.c/target-map-zero-sized-2.c: New test.
* testsuite/libgomp.c/target-map-zero-sized-3.c: New test.
Richard Biener [Tue, 13 May 2025 08:08:36 +0000 (10:08 +0200)]
Remove the mixed stmt_vec_info/SLP node record_stmt_cost overload
The following changes the record_stmt_cost calls in
vectorizable_load/store to only pass the SLP node when costing
vector stmts. For now we'll still pass the stmt_vec_info,
determined from SLP_TREE_REPRESENTATIVE, so this merely cleans up
the API.
* tree-vectorizer.h (record_stmt_cost): Remove mixed
stmt_vec_info/SLP node inline overload.
* tree-vect-stmts.cc (vectorizable_store): For costing
vector stmts only pass SLP node to record_stmt_cost.
(vectorizable_load): Likewise.
Richard Biener [Tue, 13 May 2025 07:50:36 +0000 (09:50 +0200)]
Use vectype from SLP node for vect_get_{load,store}_cost if possible
The vect_get_{load,store}_cost API is used from both vectorizable_*
where we've done SLP analysis and from alignment peeling analysis,
which is done before this and thus only stmt_vec_infos are available.
The following patch makes sure we pick the vector type relevant
for costing from the SLP node when available.
* tree-vect-stmts.cc (vect_get_store_cost): Compute vectype based
on whether we got SLP node or stmt_vec_info and use the full
record_stmt_cost API.
(vect_get_load_cost): Likewise.
We forgot to initialize m_allow_adding_dup in the constructor of
riscv_subset_list, so it had a random value... which led to random
behavior where -march might accept duplicate extensions.
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc
(riscv_subset_list::riscv_subset_list): Init m_allow_adding_dup.
Reviewed-by: Christoph Müllner <christoph.muellner@vrull.eu>
Jiawei [Tue, 13 May 2025 07:23:39 +0000 (15:23 +0800)]
RISC-V: Add augmented hypervisor series extensions.
The augmented hypervisor extension series 'sha'[1] is a new profile-defined
extension series that captures the full set of features that are mandated to
be supported along with the 'H' extension.
Andrew Pinski [Tue, 13 May 2025 21:27:12 +0000 (14:27 -0700)]
gimple: Move canonicalization of bool==0 and bool!=1 to cleanupcfg
This moves the canonicalization of `bool==0` and `bool!=1` from
forwprop to cleanupcfg. We will still need to call it from forwprop
so we don't need to run forwprop a few times to clean up comparisons in some
cases (forwprop-16.c was originally added for this code even).
This is the first step in removing forward_propagate_into_gimple_cond
and forward_propagate_into_comparison.
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-cfgcleanup.cc (canonicalize_bool_cond): New function.
(cleanup_control_expr_graph): Call canonicalize_bool_cond for GIMPLE_COND.
* tree-cfgcleanup.h (canonicalize_bool_cond): New declaration.
* tree-ssa-forwprop.cc (forward_propagate_into_gimple_cond):
Call canonicalize_bool_cond.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Tue, 13 May 2025 20:50:24 +0000 (13:50 -0700)]
gimple: Add assert for code being a comparison in gimple_cond_set_code
We have code later on that verifies the code is a comparison, so let's
try to catch it earlier, making it easier to debug where the incorrect code
gets set.
Bootstrapped and tested on x86_64-linux-gnu.
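A hedged sketch of the added check (illustrative; the exact GCC code may
differ):
```
static inline void
gimple_cond_set_code (gcond *gs, enum tree_code code)
{
  /* Catch non-comparison codes at the point they are set rather than
     only later during verification.  */
  gcc_assert (TREE_CODE_CLASS (code) == tcc_comparison);
  gimple_set_subcode (gs, code);
}
```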
gcc/ChangeLog:
* gimple.h (gimple_cond_set_code): Add assert of the code
being a comparison.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Tue, 13 May 2025 20:04:32 +0000 (13:04 -0700)]
forwprop: Change an if into an assert
Since the merge of the tuples branch (r0-88576-g726a989a8b74bf), the
if:
```
if (TREE_CODE_CLASS (gimple_cond_code (stmt)) != tcc_comparison)
```
Will always be false so let's change it into an assert.
gcc/ChangeLog:
* tree-ssa-forwprop.cc (forward_propagate_into_gimple_cond): Assert
that gimple_cond_code is always a comparison.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Tue, 13 May 2025 16:56:13 +0000 (09:56 -0700)]
gimple: allow fold_stmt without setting cfun in case of GIMPLE_COND folding
This is the followup mentioned in https://gcc.gnu.org/pipermail/gcc-patches/2025-May/683444.html .
It adds the check for cfun before accessing function specific flags.
We handle the case where !cfun conservatively, i.e. the function might throw.
gcc/ChangeLog:
* gimple-fold.cc (replace_stmt_with_simplification): Check cfun before
accessing function-specific flags.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Mon, 21 Apr 2025 23:33:04 +0000 (16:33 -0700)]
forwprop: Move around the marking bb for eh to after the local non-fold_stmt optimizations
When moving the optimize_memcpy_to_memset optimization to forwprop from fold_stmt, the marking
of the bb to purge for eh cleanup was not happening for the local optimizations but only after
the fold_stmt; this caused g++.dg/torture/except-2.C to fail.
So this patch moves the marking of the bbs for cleanups after the local forwprop optimizations
instead of before.
There was already code to add to to_purge after forward_propagate_into_comparison;
this patch removes that as it is now redundant.
gcc/ChangeLog:
* tree-ssa-forwprop.cc (pass_forwprop::execute): Move marking of to_purge bb
and marking of fixup statements to after the local optimizations.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Tue, 22 Apr 2025 03:15:42 +0000 (20:15 -0700)]
forwprop: Fix looping after fold_stmt and some forwprop local folds happen
r10-2587-gcc19f80ceb27cc added a loop over the current statement if there was
a change. Except in some cases it turns out changed will turn from true to false
because instead of doing |= after the fold_stmt, there was just an `=`.
This fixes that and now we loop even if fold_stmt changed the statement and
there was a local fold that happened.
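A minimal sketch of the bug shape (local_fold is an illustrative placeholder,
not the actual forwprop code):
```
changed = fold_stmt (&gsi, fwprop_ssa_val);
/* Before: a later plain assignment discarded fold_stmt's result.  */
changed = local_fold (&gsi);
/* After: accumulate instead, so the statement is still re-processed.  */
changed |= local_fold (&gsi);
```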
gcc/ChangeLog:
* tree-ssa-forwprop.cc (pass_forwprop::execute): Use `|=` for
changed on the local folding.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Richard Biener [Mon, 12 May 2025 13:02:42 +0000 (15:02 +0200)]
This transitions vect_model_simple_cost to SLP only
As part of the vector cost API cleanup this transitions
vect_model_simple_cost to only record costs with SLP node.
For this to work the patch adds an overload to record_stmt_cost
only passing in the SLP node.
The vect_prologue_cost_for_slp adjustment is one spot that
needs an eye with regard to re-doing the whole thing.
* tree-vectorizer.h (record_stmt_cost): Add overload with
only SLP node and no vector type.
* tree-vect-stmts.cc (record_stmt_cost): Use
SLP_TREE_REPRESENTATIVE for stmt_vec_info.
(vect_model_simple_cost): Do not get stmt_vec_info argument
and adjust.
(vectorizable_call): Adjust.
(vectorizable_simd_clone_call): Likewise.
(vectorizable_conversion): Likewise.
(vectorizable_assignment): Likewise.
(vectorizable_shift): Likewise.
(vectorizable_operation): Likewise.
(vectorizable_condition): Likewise.
(vectorizable_comparison_1): Likewise.
* tree-vect-slp.cc (vect_prologue_cost_for_slp): Use
full-blown record_stmt_cost.
Tomasz Kamiński [Mon, 12 May 2025 09:06:34 +0000 (11:06 +0200)]
libstdc++: Renamed bits/move_only_function.h to bits/funcwrap.h [PR119125]
The file now includes copyable_function in addition to
move_only_function.
PR libstdc++/119125
libstdc++-v3/ChangeLog:
* include/bits/move_only_function.h: Move to...
* include/bits/funcwrap.h: ...here.
* doc/doxygen/stdheader.cc (init_map): Replaced move_only_function.h
with funcwrap.h, and changed include guard to use feature test macro.
Move bits/version.h include before others.
* include/Makefile.am: Likewise.
* include/Makefile.in: Likewise.
* include/std/functional: Likewise.
Reviewed-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
liuhongt [Wed, 18 Dec 2024 06:32:31 +0000 (22:32 -0800)]
Consider frequency in cost estimation when converting scalar to vector.
In some benchmarks, I noticed stv failed due to an unprofitable cost, but the igain
is inside the loop while the sse<->integer conversion is outside the loop; the
current cost model doesn't consider the frequency of those gains/costs.
The patch weights those costs with frequency.
gcc/ChangeLog:
PR target/120215
* config/i386/i386-features.cc
(scalar_chain::mark_dual_mode_def): Weight
cost of integer<->sse move with bb frequency when it's
optimized_for_speed_p.
(general_scalar_chain::compute_convert_gain): Ditto, and
adjust function prototype to return true/false when cost model
is profitable or not.
(timode_scalar_chain::compute_convert_gain): Ditto.
(convert_scalars_to_vector): Adjust after the above two
function prototypes are changed.
* config/i386/i386-features.h (class scalar_chain): Change
n_integer_to_sse/n_sse_to_integer to cost_sse_integer, and add
weighted_cost_sse_integer.
(class general_scalar_chain): Adjust prototype to return bool
instead of int.
(class timode_scalar_chain): Ditto.