Jakub Jelinek [Sat, 21 Jun 2025 14:09:08 +0000 (16:09 +0200)]
value-range: Use int instead of uint for wi::ctz result [PR120746]
uint is some compatibility type in glibc sys/types.h enabled in misc/GNU
modes, so it doesn't exist on many hosts.
Furthermore, wi::ctz returns int rather than unsigned, and the variable is
only used in a comparison with zero or as the second argument of a left
shift, so I think just using int instead of unsigned is better.
2025-06-21 Jakub Jelinek <jakub@redhat.com>
PR middle-end/120746
* value-range.cc (irange::snap): Use int type instead of uint.
Jan Hubicka [Sat, 21 Jun 2025 03:37:24 +0000 (05:37 +0200)]
Extend afdo inliner to introduce speculative calls
This patch makes AFDO's VPT happen during early inlining. This should
make the einline pass inside the afdo pass unnecessary, but some inlining still
happens there - I will need to debug why that happens and will try to drop the
afdo's inliner incrementally.
get_inline_stack_in_node can now be used to produce inline stack out of
callgraph nodes which are marked as inline clones, so we do not need to iterate
tree-inline and IPA decisions phases like old code did. I also added some
debug facilities - dumping of decisions and inline stacks, so one can match
them with data in gcov profile.
The former VPT pass identified all cases where, in the train run, an indirect
call was inlined and the inlined callee collected some samples. In this case it
forced inlining without doing any checks, such as whether inlining is possible.
The new code simply introduces speculative edges into the callgraph and lets
afdo inlining decide. The old code also marked statements that were introduced
during promotion to prevent doing double speculation, i.e. turning
  if (ptr == foo)
    ... inlined foo ...
  else
    ptr ();
to
  if (ptr == foo)
    ... inlined foo ...
  else if (ptr == foo)
    foo (); // for IPA inlining
  else
    ptr ();
Since inlining now happens much earlier, tracking the statements would be quite
hard. Instead I simply remove the targets from the profile data, which should
have the same effect.
I also noticed that there was nothing setting max_count, so all non-0 profile
was considered hot, which I fixed too.
Training with the ref run I now get:
500.perlbench_r 1 160 9.93 * 1 162 9.84 *
502.gcc_r NR NR
505.mcf_r 1 186 8.68 * 1 194 8.34 *
520.omnetpp_r 1 183 7.15 * 1 208 6.32 *
523.xalancbmk_r NR NR
525.x264_r 1 85.2 20.5 * 1 85.8 20.4 *
531.deepsjeng_r 1 165 6.93 * 1 176 6.51 *
541.leela_r 1 268 6.18 * 1 282 5.87 *
548.exchange2_r 1 86.3 30.4 * 1 88.9 29.5 *
557.xz_r 1 224 4.81 * 1 224 4.82 *
Est. SPECrate2017_int_base 9.72
Est. SPECrate2017_int_peak 9.33
Base is without profile feedback and peak is AFDO.
gcc/ChangeLog:
* auto-profile.cc (dump_inline_stack): New function.
(get_inline_stack_in_node): New function.
(get_relative_location_for_stmt): Add FN parameter.
(has_indirect_call): Remove.
(function_instance::find_icall_target_map): Add FN parameter.
(function_instance::remove_icall_target): New function.
(function_instance::read_function_instance): Set sum_max.
(autofdo_source_profile::get_count_info): Add NODE parameter.
(autofdo_source_profile::update_inlined_ind_target): Add NODE parameter.
(autofdo_source_profile::remove_icall_target): New function.
(afdo_indirect_call): Add INDIRECT_EDGE parameter; dump reason
for failure; do not check for recursion; do not inline call.
(afdo_vpt): Add INDIRECT_EDGE parameter.
(afdo_set_bb_count): Do not take PROMOTED set.
(afdo_vpt_for_early_inline): Remove.
(afdo_annotate_cfg): Do not take PROMOTED set.
(auto_profile): Do not call afdo_vpt_for_early_inline.
(afdo_callsite_hot_enough_for_early_inline): Dump count.
(remove_afdo_speculative_target): New function.
* auto-profile.h (afdo_vpt_for_early_inline): Declare.
(remove_afdo_speculative_target): Declare.
* ipa-inline.cc (inline_functions_by_afdo): Do VPT.
(early_inliner): Redirect edges if inlining happened.
* tree-inline.cc (expand_call_inline): Add sanity check.
Jan Hubicka [Wed, 18 Jun 2025 10:10:25 +0000 (12:10 +0200)]
Implement afdo inliner
This patch moves afdo inlining from the early inliner into a specialized one.
The reason is that the early inliner is by design non-recursive, while the afdo
inliner needs to recurse. In the past Google handled it by increasing
early inliner iterations, but it can be done easily and cheaply without
that by simply recursing into inlined functions.
I will also look into moving VPT to the early inliner now.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* auto-profile.cc (get_inline_stack): Add fn parameter.
* ipa-inline.cc (want_early_inline_function_p): Do not care
about AFDO.
(inline_functions_by_afdo): New function.
(early_inliner): Use it.
The select_vl op_1 and op_2 may be the same const_int, like (const_int 32).
And then maybe_legitimize_operands will:
1. First mov the const op_1 to a reg.
2. Reuse the reg of op_1 for op_2, as op_1 and op_2 are equal.
That breaks the assumption that op_2 of select_vl is an immediate,
or something like CONST_INT_POLY.
The below test suites are passed for this patch series.
* The rv64gcv full regression test.
PR target/120652
gcc/ChangeLog:
* config/riscv/autovec.md: Add immediate_operand for
select_vl operand 2.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr120652-1.c: New test.
* gcc.target/riscv/rvv/autovec/pr120652-2.c: New test.
* gcc.target/riscv/rvv/autovec/pr120652-3.c: New test.
* gcc.target/riscv/rvv/autovec/pr120652.h: New test.
Andrew MacLeod [Fri, 20 Jun 2025 12:50:39 +0000 (08:50 -0400)]
Fix range wrap check and enhance verify_range.
When snapping range bounds to satisfy bitmask constraints, the end bound overflow
and underflow checks were not working properly.
Also adjust some comments, and enhance verify_range to make sure range pairs
are sorted properly.
PR tree-optimization/120701
gcc/
* value-range.cc (irange::verify_range): Verify range pairs are
sorted properly.
(irange::snap): Check for over/underflow properly.
Andrew Stubbs [Fri, 20 Jun 2025 16:43:37 +0000 (16:43 +0000)]
amdgcn: allow SImode in VCC_HI [PR120722]
This patch isn't fully tested yet, but it fixes the build failure, so that
will do for now. SImode was not allowed in VCC_HI because there were issues,
way back before the port went upstream, so it's possible we'll find out what
those issues were again soon.
gcc/ChangeLog:
PR target/120722
* config/gcn/gcn.cc (gcn_hard_regno_mode_ok): Allow SImode in VCC_HI.
Jørgen Kvalsvik [Thu, 19 Jun 2025 19:00:07 +0000 (21:00 +0200)]
Use auto_vec in prime paths selftests [PR120634]
The selftests had a bunch of memory leaks that showed up in make
selftest-valgrind as a result of not using auto_vec or otherwise
explicitly calling release. Replacing vec with auto_vec makes the
problem go away. The auto_vec_vec helper is made constructible from a
vec so that objects returned from functions can be automatically
managed too.
H.J. Lu [Wed, 18 Jun 2025 21:03:48 +0000 (05:03 +0800)]
x86: Get the widest vector mode from MOVE_MAX
Since MOVE_MAX defines the maximum number of bytes that an instruction
can move quickly between memory and registers, use it to get the widest
vector mode in the vector loop when inlining memcpy and memset.
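As a hedged illustration (the -mavx2 flag and the 256-byte size are assumptions, not taken from the commit), a copy like the following can then be expanded using the widest vector moves MOVE_MAX permits:

/* Sketch: with -mavx2, where MOVE_MAX allows 32-byte moves, the inlined
   copy loop can use 32-byte vector moves instead of narrower ones.  */
void
copy256 (char *dst, const char *src)
{
  __builtin_memcpy (dst, src, 256);
}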
gcc/
PR target/120708
* config/i386/i386-expand.cc (ix86_expand_set_or_cpymem): Use
MOVE_MAX to get the widest vector mode in vector loop.
Stafford Horne [Thu, 19 Jun 2025 11:17:20 +0000 (12:17 +0100)]
or1k: Improve If-Conversion by delaying cbranch splits
When working on PR120587 I found that the ce1 pass was not able to
properly optimize branches on OpenRISC. This is because of the early
splitting of "compare" and "branch" instructions during the expand pass.
Convert the cbranch* instructions from define_expand to
define_insn_and_split. This delays the instruction split until after
the ce1 pass is done, giving ce1 the best opportunity to perform the
optimizations on the original form of cbranch<mode>4 instructions.
gcc/ChangeLog:
* config/or1k/or1k.cc (or1k_noce_conversion_profitable_p): New
function.
(or1k_is_cmov_insn): New function.
(TARGET_NOCE_CONVERSION_PROFITABLE_P): Define macro.
* config/or1k/or1k.md (cbranchsi4): Convert to insn_and_split.
(cbranch<mode>4): Convert to insn_and_split.
Stafford Horne [Wed, 18 Jun 2025 20:47:03 +0000 (21:47 +0100)]
or1k: Implement *extendbisi* to fix ICE in convert_mode_scalar [PR120587]
After commit 2dcc6dbd8a0 ("emit-rtl: Use simplify_subreg_regno to
validate hardware subregs [PR119966]") the OpenRISC port is broken
again.
Add extend* instruction patterns for the SR_F pseudo registers to avoid
having to use the subreg conversions, which no longer work.
gcc/ChangeLog:
PR target/120587
* config/or1k/or1k.md (zero_extendbisi2_sr_f): New expand.
(extendbisi2_sr_f): New expand.
* config/or1k/predicates.md (sr_f_reg_operand): New predicate.
I really question the value of checking the output that precisely in these
tests -- they're supposed to be checking vsetvl correctness and optimization,
so the ordering and such of scalar ops shouldn't really be important at all.
Regardless, since I don't know these tests at all I resisted the temptation to
rip out the undesirable aspects of the test.
Next up, fix the bogus scan or force the old cost model (rocket). I chose the
latter as a path of least resistance and least surprise.
[PATCH] RISC-V: Use builtin clz/ctz when count_leading_zeros and count_trailing_zeros is used
longlong.h for RISCV should define count_leading_zeros and
count_trailing_zeros and COUNT_LEADING_ZEROS_0 when ZBB is enabled.
The following patch fixes the bug reported in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110181
The divdi3 on riscv32 with the zbb extension generates __clz_tab
instead of generating __builtin_clzll/__builtin_clz, which is
not efficient since a lookup table is emitted.
Updating longlong.h to use __builtin_clzll/__builtin_clz
generates optimized code for the instruction.
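A minimal sketch of the kind of longlong.h definitions described, assuming a 32-bit UWtype; the exact ZBB guards in the committed header may differ:

/* Illustrative only; the real longlong.h guards on __riscv and ZBB.  */
#define count_leading_zeros(count, x)  ((count) = __builtin_clz (x))
#define count_trailing_zeros(count, x) ((count) = __builtin_ctz (x))
#define COUNT_LEADING_ZEROS_0 32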
Pan Li [Thu, 19 Jun 2025 02:44:14 +0000 (10:44 +0800)]
RISC-V: Combine vec_duplicate + vminu.vv to vminu.vx on GR2VR cost
This patch would like to combine the vec_duplicate + vminu.vv into the
vminu.vx. Take the example code below. The related pattern will depend
on the cost of vec_duplicate from GR2VR. Then late-combine will
take action if the cost of GR2VR is zero, and reject the combination
if the GR2VR cost is greater than zero.
Assume we have example code like below, GR2VR cost is 0.
#define DEF_VX_BINARY(T, FUNC) \
void \
test_vx_binary (T * restrict out, T * restrict in, T x, unsigned n) \
{ \
for (unsigned i = 0; i < n; i++) \
out[i] = FUNC (in[i], x); \
}
uint32_t min(uint32_t a, uint32_t b)
{
return a > b ? b : a;
}
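Illustratively (register assignments assumed, not taken from the commit), the loop body changes roughly as follows once the combination fires:

vmv.v.x  v2, a2                ->    vminu.vx v1, v1, a2
vminu.vv v1, v1, v2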
* config/riscv/riscv-v.cc (expand_vx_binary_vec_dup_vec): Add
new case UMIN.
(expand_vx_binary_vec_vec_dup): Ditto.
* config/riscv/riscv.cc (riscv_rtx_costs): Ditto.
* config/riscv/vector-iterators.md: Add new op umin.
Tobias Burnus [Thu, 19 Jun 2025 19:16:42 +0000 (21:16 +0200)]
libgomp/target.c: Fix buffer size for 'omp requires' diagnostic
One of the buffers that printed the list of set 'omp requires'
requirements missed the 'self' clause addition, being potentially
too short when all device-affecting clauses were passed. Solved it
by moving the sizeof (<string of all permitted values>) into a new
'#define' just above the associated gomp_requires_to_name function.
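A hedged sketch of the shape of the fix; the clause list inside the string is an assumption based on the description, not the committed text:

/* Size the buffer after the string listing all permitted values.  */
#define GOMP_REQUIRES_NAME_BUF_LEN \
  sizeof ("unified_address, unified_shared_memory, reverse_offload, self")

char buf[GOMP_REQUIRES_NAME_BUF_LEN];  /* Now large enough for every clause.  */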
libgomp/ChangeLog:
* target.c (GOMP_REQUIRES_NAME_BUF_LEN): Define.
(GOMP_offload_register_ver, gomp_target_init): Use it for the
char buffer size.
* libgomp.texi (omp_init_allocator): Refer to 'Memory allocation'
for available memory spaces.
(OMP_ALLOCATOR): Move list of traits and predefined memspaces
and allocators to ...
(Memory allocation): ... here. Document omp(x)::allocator::*;
minor wording tweaks, be more explicit about memkind, pinned and
pool_size.
Jakub Jelinek [Thu, 19 Jun 2025 12:48:00 +0000 (14:48 +0200)]
expand: Align PARM_DECLs again to at least BITS_PER_WORD if possible [PR120689]
The following testcase shows a regression caused by the r10-577 change
made for cris. Before that change, the MEM holding the (in this case 3-byte)
struct parameter was BITS_PER_WORD aligned; now it is just BITS_PER_UNIT
aligned, and that causes significantly worse generated code.
So, the MAX (DECL_ALIGN (parm), BITS_PER_WORD) extra alignment clearly
helps not just STRICT_ALIGNMENT targets, but other targets as well.
Of course, it isn't worth doing stack realignment in the rare case of
MAX_SUPPORTED_STACK_ALIGNMENT < BITS_PER_WORD targets like cris, so the
patch only bumps the alignment if it won't go the
> MAX_SUPPORTED_STACK_ALIGNMENT path because of that optimization.
The patch keeps the gcc 15 behavior for avr, pru, m68k and cris (at
least some options for those) and restores the behavior before r10-577 on
other targets.
PR target/120689
* function.cc (assign_parm_setup_block): Align parm to at least
word alignment even on !STRICT_ALIGNMENT targets, as long as
BITS_PER_WORD is not larger than MAX_SUPPORTED_STACK_ALIGNMENT.
x86: PR target/103773: Fix wrong-code with -Oz from pop to memory.
added "*mov<mode>_and" and extended "*mov<mode>_or" to transform
"mov $0,mem" to the shorter "and $0,mem" and "mov $-1,mem" to the shorter
"or $-1,mem" for -Oz. But the new pattern:
aren't guarded for -Oz. As a result, "and $0,mem" and "or $-1,mem" are
generated even without -Oz.
1. Change *mov<mode>_and" to define_insn_and_split and split it to
"mov $0,mem" if not -Oz.
2. Change "*mov<mode>_or" to define_insn_and_split and split it to
"mov $-1,mem" if not -Oz.
3. Don't transform "mov $-1,reg" to "push $-1; pop reg" for -Oz since it
should be transformed to "or $-1,reg".
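For illustration (the assembly in the comments is assumed, not taken from the commit), the intended code generation is:

void f0 (int *p) { *p = 0; }   /* -Oz: and $0,(%rdi);  otherwise: mov $0,(%rdi)  */
void fm1 (int *p) { *p = -1; } /* -Oz: or $-1,(%rdi);  otherwise: mov $-1,(%rdi) */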
gcc/
PR target/120427
* config/i386/i386.md (*mov<mode>_and): Changed to
define_insn_and_split. Split it to "mov $0,mem" if not -Oz.
(*mov<mode>_or): Changed to define_insn_and_split. Split it
to "mov $-1,mem" if not -Oz.
(peephole2): Don't transform "mov $-1,reg" to "push $-1; pop reg"
for -Oz since it will be transformed to "or $-1,reg".
gcc/testsuite/
PR target/120427
* gcc.target/i386/cold-attribute-4.c: Compile with -Oz.
* gcc.target/i386/pr120427-1.c: New test.
* gcc.target/i386/pr120427-2.c: Likewise.
* gcc.target/i386/pr120427-3.c: Likewise.
* gcc.target/i386/pr120427-4.c: Likewise.
Dongyan Chen [Wed, 18 Jun 2025 11:47:28 +0000 (19:47 +0800)]
RISC-V: Add generic tune as default.
According to the discussion in
https://gcc.gnu.org/pipermail/gcc-patches/2025-June/686893.html, creating
a -mtune=generic may be a good idea to solve the question regarding the branch
cost.
Changes for v2:
- Delete the code about -mcpu=generic.
Jakub Jelinek [Thu, 19 Jun 2025 06:57:27 +0000 (08:57 +0200)]
dfp: Further decimal_real_to_integer fixes [PR120631]
Unfortunately, the following further testcase shows that there aren't
problems only with very large precisions and large exponents, but with pretty
much anything larger than 64 bits. After all, before _BitInt support dfp
didn't even have {,unsigned }__int128 <-> _Decimal{32,64,128,64x} support,
and the testcase again shows some of the conversions yielding zeros,
while the pr120631.c test worked even without the earlier patch.
So, this patch assumes 64-bit precision at most is ok and for anything
larger it just uses exponent 0 and multiplies afterwards.
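An illustrative example (not the committed testcase) of the affected class of conversions, whose compile-time folding could previously yield zero:

/* dfp to a >64-bit integer type, folded at compile time.  */
unsigned __int128
f (void)
{
  return (unsigned __int128) 1e22DD;  /* _Decimal64 constant */
}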
2025-06-19 Jakub Jelinek <jakub@redhat.com>
PR middle-end/120631
* dfp.cc (decimal_real_to_integer): Use result multiplication not just
when precision > 128 and dn.exponent > 19, but when precision > 64
and dn.exponent > 0.
* gcc.dg/dfp/bitint-10.c: New test.
* gcc.dg/dfp/pr120631.c: New test.
Kito Cheng [Tue, 17 Jun 2025 04:56:17 +0000 (12:56 +0800)]
RISC-V: Adding cost model for zilsd
The motivation of this patch is that we want to use ld/sd if possible when zilsd
is enabled; however, the subreg pass may split that into two lw/sw
instructions because of the cost, and it only checks the cost for 64-bit reg
moves, which is why we need to adjust the cost for 64-bit reg moves as well.
However, even if we adjust the cost model, a 64-bit shift still uses 32-bit
loads because it is already split at expand time; this may need to be fixed
on the expander side, and that apparently needs some more time to
investigate, so I just added a testcase with XFAIL to show the current behavior,
and we can fix that...when we have time.
For the long term, we may add a new field to riscv_tune_param to control
the cost model for that.
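As a hypothetical example (not one of the added testcases), a 64-bit copy on rv32 with zilsd should come out as one ld/sd pair rather than two lw/sw pairs:

/* Expected: one ld and one sd when zilsd is enabled and the cost allows it.  */
void
copy64 (long long *dst, const long long *src)
{
  *dst = *src;
}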
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_cost_model): Add cost model for
zilsd.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zilsd-code-gen-split-subreg-1.c: New test.
* gcc.target/riscv/zilsd-code-gen-split-subreg-2.c: New test.
emit-rtl: Use simplify_subreg_regno to validate hardware subregs [PR119966]
PR119966 showed that combine could generate unfoldable hardware subregs
for pru-unknown-elf. To fix, strengthen the checks performed by
validate_subreg.
simplify_subreg_regno performs more validity checks than
the simple info.representable_p. Most importantly, the
targetm.hard_regno_mode_ok hook is called to ensure the hardware
register is valid in the subreg's outer mode. This fixes the root cause
of PR119966.
The checks for stack-related registers are bypassed because the i386
backend generates them, in this seemingly valid peephole optimization:
Testing done:
* No regressions were detected for C and C++ on x86_64-pc-linux-gnu.
* "contrib/compare-all-tests i386" showed no difference in code
generation.
* No regressions for pru-unknown-elf.
* Reverted r16-809-gf725d6765373f7 to expose the now latent PR119966.
Then ensured the pru-unknown-elf build is ok. Only two cases regressed,
where the rnreg pass transforms a valid hardware subreg into an invalid
one. But I think that is not related to combine's PR119966:
gcc.c-torture/execute/20040709-1.c
gcc.c-torture/execute/20040709-2.c
PR target/119966
gcc/ChangeLog:
* emit-rtl.cc (validate_subreg): Call simplify_subreg_regno
instead of checking info.representable_p.
* rtl.h (simplify_subreg_regno): Add new argument
allow_stack_regs.
* rtlanal.cc (simplify_subreg_regno): Do not reject
stack-related registers if allow_stack_regs is true.
Co-authored-by: Richard Sandiford <richard.sandiford@arm.com>
Co-authored-by: Andrew Pinski <quic_apinski@quicinc.com>
Signed-off-by: Dimitar Dimitrov <dimitar@dinux.eu>
This implements the final piece of the revised CWG2563 wording:
"It exits the scope of promise only if the coroutine completed
without suspending."
Consider the coroutine to be made up of two components: a
'ramp' and a 'body', where the body represents the user's original
code and the ramp is responsible for setting that up and for
returning some object to the original caller.
Coroutine state, and responsibility for its release.
A coroutine has some state that persists across suspensions.
The state has two components:
* State that is specified by the standard and persists for the entire
life of the coroutine.
* Local state that is constructed/destructed as scopes in the original
function body are entered/exited. The destruction of local state is
always the responsibility of the body code.
The persistent state (and the overall storage for the state) must be
managed in two places:
* The ramp function (which allocates and builds this - and can, in some
cases, be responsible for destroying it)
* The re-written function body which can destroy it when that body
completes its final suspend - or when the handle.destroy () is called.
In all cases the ramp holds responsibility for constructing the standard-
mandated persistent state.
There are four ways in which the ramp might be re-entered after starting
the function body:
A The body could suspend (one might expect that to be the 'normal' case
for most coroutines).
B The body might complete either synchronously or via continuations.
C An exception might be thrown during the setup of the initial await
expression, before the initial awaiter resumes.
D An exception might be processed by promise.unhandled_exception () and
that, in turn, might re-throw it (or throw something else). In this
case, the coroutine is considered suspended at the final suspension
point.
Once the coroutine has passed initial suspend (i.e. the initial awaiter
await_resume() has been called) the body is considered to have a use of
the state.
Until the ramp return value has been constructed, the ramp is considered
to have a use of the state.
To manage these interacting conditions we allocate a reference counter
for the frame state. This is initialised to 1 by the ramp as part of its
startup (note that failures/exceptions in the startup code are handled
locally to the ramp).
When the body returns (either normally, or by exception) the ramp releases
its use.
Once the rewritten coroutine body is started, the body is considered to
have a use of the frame. This use (potentially) needs to be released if
an exception is thrown from the body. We implement this using an eh-only
cleanup around the initial await. If we have the case D above, then we
do not release the body use.
In case:
A, typically the ramp would be re-entered with the body holding a use,
and therefore the ramp should not destroy the state.
B, both the body and ramp will have released their uses, and the ramp
should destroy the state.
C, we must arrange for the body to release its use, because we require
the ramp to cleanup in this circumstance.
D is an outlier, since the responsibility for destruction of the state
now rests with the user's code (via a handle.destroy() call).
NOTE: In the case that the body has never suspended before such an
exception occurs, the only reasonable way for the user code to obtain the
necessary handle is if unhandled_exception() throws the handle or some
object that contains the handle. That is outside of the designs here -
if the user code might need this corner-case, then such provision will
have to be made.
In the ramp, we implement destruction for the persistent frame state by
means of cleanups. These are run conditionally when the reference count
is 0 signalling that both the body and the ramp have completed.
In the body, once we pass the final suspend, then we test the use and
delete the state if the use is 0.
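A rough sketch of the described scheme in C-like pseudo code; all names are illustrative and none of them is GCC's internal representation:

frame->refcount = 1;        /* Ramp takes its use during startup.  */
frame->refcount++;          /* Body takes a use once it is started.  */
/* Each party releases its use when done; whoever drops the count to
   zero runs the conditional cleanup that destroys the state.  */
if (--frame->refcount == 0)
  destroy_frame (frame);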
PR c++/115908
PR c++/118074
PR c++/95615
gcc/cp/ChangeLog:
* coroutines.cc (coro_frame_refcount_id): New.
(coro_init_identifiers): Initialise coro_frame_refcount_id.
(build_actor_fn): Set up initial_await_resume_called. Handle
decrementing of the frame reference count. Return directly to
the caller if that is non-zero.
(cp_coroutine_transform::wrap_original_function_body): Use a
conditional eh-only cleanup around the initial await expression
to release the body use on exception before initial await
resume.
(cp_coroutine_transform::build_ramp_function): Wrap the called
body in a cleanup that releases a use of the frame when we
return to the ramp. Implement frame, promise and argument copy
destruction via conditional cleanups when the frame use count
is zero.
gcc/testsuite/ChangeLog:
* g++.dg/coroutines/pr115908.C: Move to...
* g++.dg/coroutines/torture/pr115908.C: ...here.
* g++.dg/coroutines/torture/pr95615-02.C: Move to...
* g++.dg/coroutines/torture/pr95615-01-promise-ctor-throws.C: ...here.
* g++.dg/coroutines/torture/pr95615-03.C: Move to...
* g++.dg/coroutines/torture/pr95615-02-get-return-object-throws.C: ...here.
* g++.dg/coroutines/torture/pr95615-01.C: Move to...
* g++.dg/coroutines/torture/pr95615-03-initial-suspend-throws.C: ...here.
* g++.dg/coroutines/torture/pr95615-04.C: Move to...
* g++.dg/coroutines/torture/pr95615-04-initial-await-ready-throws.C: ...here.
* g++.dg/coroutines/torture/pr95615-05.C: Move to...
* g++.dg/coroutines/torture/pr95615-05-initial-await-suspend-throws.C: ...here.
* g++.dg/coroutines/torture/pr95615.inc: Add more cases and ensure that the
code completes properly when no exceptions are thrown.
* g++.dg/coroutines/torture/pr95615-00-nothing-throws.C: New test.
* g++.dg/coroutines/torture/pr95615-06-initial-await-resume-throws.C: New test.
* g++.dg/coroutines/torture/pr95615-07-body-throws.C: New test.
* g++.dg/coroutines/torture/pr95615-08-initial-suspend-throws-uhe-throws.C: New test.
* g++.dg/coroutines/torture/pr95615-09-body-throws-uhe-throws.C: New test.
Andrew MacLeod [Wed, 28 May 2025 20:27:16 +0000 (16:27 -0400)]
Improve contains_p and intersect with bitmasks.
Improve the way contains_p (wide_int) and intersect behave with
singletons and bitmasks. Also fix a buglet in bitmask_intersect when the
result is a singleton which is not in the current range.
PR tree-optimization/119039
gcc/
* value-range.cc (irange::contains_p): Call wide_int version of
contains_p for singleton ranges.
(irange::intersect): If either range is a singleton, use
contains_p.
Harald Anlauf [Tue, 17 Jun 2025 19:09:32 +0000 (21:09 +0200)]
Fortran: various fixes for STAT/LSTAT/FSTAT intrinsics [PR82480]
The GNU intrinsics STAT/LSTAT/FSTAT were inherited from g77, but changed
the names of some keywords: FILE became NAME, and SARRAY became VALUES,
which are the keywords documented in the gfortran manual.
Adjust code and libgfortran error messages to reflect this change.
Furthermore, add compile-time checking that INTENT(OUT) arguments are
definable, and that array VALUES has at least size 13.
Document that integer arguments are of default kind, and that overflows
in conversion to integer return -1 in VALUES.
* intrinsics/stat.c (stat_i4_sub_0): Fix argument names. Rename
SARRAY to VALUES also in error message. When array VALUES is
KIND=4, get only stat components that do not overflow INT32_MAX,
otherwise set the corresponding VALUES elements to -1.
(stat_i4_sub): Fix argument names.
(lstat_i4_sub): Likewise.
(stat_i8_sub_0): Likewise.
(stat_i8_sub): Likewise.
(lstat_i8_sub): Likewise.
(stat_i4): Likewise.
(stat_i8): Likewise.
(lstat_i4): Likewise.
(lstat_i8): Likewise.
(fstat_i4_sub): Likewise.
(fstat_i8_sub): Likewise.
(fstat_i4): Likewise.
(fstat_i8): Likewise.
Jakub Jelinek [Wed, 18 Jun 2025 06:07:22 +0000 (08:07 +0200)]
dfp, real: Fix up FLOAT_EXPR/FIX_TRUNC_EXPR constant folding between dfp and large _BitInt [PR120631]
The following testcase shows that while at runtime we handle conversions
between _Decimal{64,128} and large _BitInt correctly, at compile time we
mishandle them in both directions: in one direction we end up with an ICE in
a decimal_from_integer callee because the char buffer is too short for the
needed number of decimal digits, and in the conversion of dfp to large _BitInt
we return 0 in the wide_int.
The following patch fixes the ICE by using a larger buffer (XALLOCAVEC
allocated, it will never be larger than 65536 / 3 bytes) in the large
_BitInt case, and fixes the other direction by setting the exponent to exp % 19
and instead multiplying the result by the needed powers of 10^19 (10^19 chosen
as the largest power of ten that can fit into a UHWI).
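In other words, for a decimal value m * 10^e with e > 0 the folding is performed as

m * 10^(e % 19) * (10^19)^(e / 19)

where 10^19 is the largest power of ten representable in a UHWI (10^19 < 2^64 < 10^20).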
2025-06-18 Jakub Jelinek <jakub@redhat.com>
PR middle-end/120631
* real.cc (decimal_from_integer): Add digits argument, if larger than
256, use XALLOCAVEC allocated buffer.
(real_from_integer): Pass val_in's precision divided by 3 to
decimal_from_integer.
* dfp.cc (decimal_real_to_integer): For precision > 128 if finite
and exponent is large, decrease exponent and multiply resulting
wide_int by powers of 10^19.
Pan Li [Tue, 17 Jun 2025 02:00:54 +0000 (10:00 +0800)]
RISC-V: Combine vec_duplicate + vmin.vv to vmin.vx on GR2VR cost
This patch would like to combine the vec_duplicate + vmin.vv into the
vmin.vx. Take the example code below. The related pattern will depend
on the cost of vec_duplicate from GR2VR. Then late-combine will
take action if the cost of GR2VR is zero, and reject the combination
if the GR2VR cost is greater than zero.
Assume we have example code like below, GR2VR cost is 0.
#define DEF_VX_BINARY(T, FUNC) \
void \
test_vx_binary (T * restrict out, T * restrict in, T x, unsigned n) \
{ \
for (unsigned i = 0; i < n; i++) \
out[i] = FUNC (in[i], x); \
}
int32_t min(int32_t a, int32_t b)
{
return a > b ? b : a;
}
* config/riscv/riscv-v.cc (expand_vx_binary_vec_dup_vec): Add
new case SMIN.
(expand_vx_binary_vec_vec_dup): Ditto.
* config/riscv/riscv.cc (riscv_rtx_costs): Ditto.
* config/riscv/vector-iterators.md: Add new op smin.
Lili Cui [Tue, 17 Jun 2025 13:39:38 +0000 (21:39 +0800)]
x86: Enable separate shrink wrapping
This commit implements the target macros (TARGET_SHRINK_WRAP_*) that
enable separate shrink wrapping for function prologues/epilogues in
x86.
When performing separate shrink wrapping, we choose to use mov instead
of push/pop, because with push/pop it is more complicated to handle the rsp
adjustment and performance may be lost, so here we choose to use mov, which
has a small impact on code size but guarantees performance.
Using mov means we need to use sub/add to maintain the stack frame. In
some special cases, we need to use lea to avoid affecting EFLAGS:
inserting a sub between a test-je-jle sequence would change EFLAGS, so
lea should be used there (see the sketch below).
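An assumed (not taken from the commit) sketch of the situation:

test %eax, %eax
je .L1
lea -24(%rsp), %rsp   /* "sub $24, %rsp" here would clobber EFLAGS before jle */
jle .L2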
Tested against SPEC CPU 2017, this change always has a net-positive
effect on the dynamic instruction count. See the following table for
the breakdown on how this reduces the number of dynamic instructions
per workload on a like-for-like (with/without this commit):
Iain Sandoe [Mon, 16 Jun 2025 06:12:29 +0000 (09:12 +0300)]
c++, coroutines: Remove use of coroutine handle in the frame.
We have been keeping a copy of coroutine_handle<promise> in the state
frame, as it was expected to be efficient to use this to initialize the
argument to await_suspend. This does not turn out to be the case and
intializing the value is obstructive to CGW2563 fixes. This removes
the use.
gcc/cp/ChangeLog:
* coroutines.cc (struct coroutine_info): Update comments.
(struct coro_aw_data): Remove self_handle and add in
information to create the handle in lowering.
(expand_one_await_expression): Build a temporary coroutine
handle.
(build_actor_fn): Remove reference to the frame copy of the
coroutine handle.
(cp_coroutine_transform::wrap_original_function_body): Remove
reference to the frame copy of the coroutine handle.
Gaius Mulley [Tue, 17 Jun 2025 16:41:21 +0000 (17:41 +0100)]
PR modula2/120673: Mutually dependent types crash the compiler
This patch fixes an ICE which will occur if cyclic dependent types
are used when declaring a variable. This patch detects the
cyclic dependency and issues an error message for each outstanding
component.
gcc/m2/ChangeLog:
PR modula2/120673
* gm2-compiler/M2GCCDeclare.mod (ErrorDepList): New
global variable set containing every errant dependency symbol.
(mystop): Remove.
(EmitCircularDependancyError): Replace with ...
(EmitCircularDependencyError): ... this.
(AssertAllTypesDeclared): Rewrite.
(DoVariableDeclaration): Ditto.
(TypeDependentsDeclared): New procedure function.
(PrepareGCCVarDeclaration): Ditto.
(DeclareVariable): Remove assert.
(DeclareLocalVariable): Ditto.
(Constructor): Initialize ErrorDepList.
* gm2-compiler/M2MetaError.mod (doErrorScopeProc): Rewrite
and ensure that a symbol with a module scope does not lookup
from a definition module.
* gm2-compiler/P2SymBuild.mod (BuildType): Rewrite so that
a synonym type is created using the token referring to the name
on the lhs.
gcc/testsuite/ChangeLog:
PR modula2/120673
* gm2/pim/fail/badmodvar.mod: New test.
* gm2/pim/fail/cyclictypes.mod: New test.
* gm2/pim/fail/cyclictypes2.mod: New test.
* gm2/pim/fail/cyclictypes4.mod: New test.
Jan Hubicka [Tue, 17 Jun 2025 15:26:18 +0000 (17:26 +0200)]
Improve static and AFDO profile combination
This patch makes afdo_adjust_guessed_profile more aggressive in finding scales
on the boundaries of connected components with no annotation. Originally I
looked for edges into or out of the component with known AFDO counts, and I also
handled edges from basic blocks with a known AFDO count and a known static
probability estimate.
A common problem is components not containing any in edges, but only out
edges (i.e. those with ENTRY_BLOCK). For this case I added logic that looks
for edges out of the component to BBs with known AFDO counts. If all flow into
such a BB is either from the component or has an AFDO count, we can determine
the scale precisely. It may happen that there are edges from other components.
In this case we know an upper bound and use it, since it is better than nothing.
I also noticed that some components have a 0 count in the whole profile, and then
scaling gives up, which is fixed. I also optimized the code a bit by replacing
the map holding the current component with an array holding component IDs, and
broke out the scaling logic into separate functions.
The patch fixes the perl regression I introduced in the last change.
According to
https://lnt.opensuse.org/db_default/v4/SPEC/67674
there were improvements (percentage is runtime change).
This is a bit wild, but I hope things will settle down once we chase out
obvious problems (such as losing the profile of functions that have not been
inlined).
gcc/ChangeLog:
* auto-profile.cc (afdo_indirect_call): Compute speculative edge
probability.
(add_scale): Break out from ...
(scale_bbs): Break out from ...
(afdo_adjust_guessed_profile): ... here; use component array instead of
current_component hash_map; handle components with only 0 profile;
be more aggressive in finding scales along the boundary.
Jan Hubicka [Tue, 17 Jun 2025 15:20:04 +0000 (17:20 +0200)]
Fix cgraph_node::apply_scale
While working on auto-FDO I noticed that we may run into an ICE because we
inline a function with count profile_count::zero into a call site with
profile_count::zero. What may go wrong is that the caller has a local profile
while the callee may have an IPA profile.
We used to turn all such counts to 0, but that was changed by a short circuit
I introduced recently. Fixed thus.
* cgraph.cc (cgraph_node::apply_scale): Special case scaling
to profile_count::zero ().
(cgraph_node::verify_node): Add extra compatibility check.
Iain Sandoe [Mon, 9 Jun 2025 10:26:01 +0000 (11:26 +0100)]
c++,coroutines: Handle await expressions in assume attributes.
Here we have an expression that is not evaluated but is still seen
as potentially-evaluated. We handle this by determining if the
operand has side-effects, producing a warning that the assume has
been ignored and eliding it.
gcc/cp/ChangeLog:
* coroutines.cc (analyze_expression_awaits): Elide assume
attributes containing await expressions, since these have
side effects. Emit a diagnostic that this has been done.
Jason Merrill [Wed, 20 Nov 2024 15:20:52 +0000 (16:20 +0100)]
c++: modules and #pragma diagnostic
To respect the #pragma diagnostic lines in libstdc++ headers when compiling
with module std, we need to represent them in the module.
I think it's reasonable to give serializers direct access to the underlying
data, as here with get_classification_history. This is a different approach
from how Jakub made PCH streaming members of diagnostic_option_classifier,
but it seems to me that modules handling belongs in module.cc.
libcpp/ChangeLog:
* line-map.cc (linemap_location_from_module_p): Add.
* include/line-map.h: Declare it.
Jakub Jelinek [Tue, 17 Jun 2025 11:20:11 +0000 (13:20 +0200)]
crc: Fix up ICE from optimize_crc_loop [PR120677]
The following testcase ICEs, because optimize_crc_loop inserts a call
statement before labels instead of after labels.
Fixed thusly (plus fixed other issues noticed around it).
2025-06-17 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/120677
* gimple-crc-optimization.cc (crc_optimization::optimize_crc_loop):
Insert before gsi_after_labels instead of gsi_start_bb. Use
gimple_bb (output_crc) instead of output_crc->bb. Formatting fix.
aarch64: Add vec_set/extract for tuple modes [PR113027]
We generated inefficient code for bitfield references to Advanced
SIMD structure modes. In RTL, these modes are just extra-long
vectors, and so inserting and extracting an element is simply
a vec_set or vec_extract operation.
For the record, I don't think these modes should ever become fully
fledged vector modes. We shouldn't provide add, etc. for them.
But vec_set and vec_extract are the vector equivalent of insv
and extv. From that point of view, they seem closer to moves
than to arithmetic.
and the latter, libstdc++-v3/include/bits/ostream.tcc, has:
// Inhibit implicit instantiations for required instantiations,
// which are defined via explicit instantiations elsewhere.
#if _GLIBCXX_EXTERN_TEMPLATE
extern template class basic_ostream<char>;
extern template ostream& endl(ostream&);
Before this commit, omp_discover_declare_target_tgt_fn_r marked 'endl'
as (implicitly) declare target - but not the calls in it due to the
'extern' (DECL_EXTERNAL).
Thanks to inlining, 'endl' is (therefore) not used and, hence,
discarded by the linker; thus, it works with -O0 and -O1. However,
as the (unused) function still exists, IPA CP (enabled by -O2) will try
to do constant-value propagation and fails, as the definition of 'widen'
is not available.
The solution is to still walk 'endl' despite it being an 'extern(al)' decl;
this has been restricted for now to DECL_DECLARED_INLINE_P.
gcc/ChangeLog:
* omp-offload.cc (omp_discover_declare_target_tgt_fn_r): Also
walk external functions that are declared inline (and have a
DECL_SAVED_TREE).
libgomp/ChangeLog:
* testsuite/libgomp.c++/declare_target-2.C: New test.
Some of the lookup code is expecting to find a valid (not UNKNOWN)
location, which triggers in the reported case. To avoid this, we are
reverting the change to use UNKNOWN_LOCATION for synthesizing the
wrapper, and instead using the start and end locations of the original
function.
PR c++/120273
gcc/cp/ChangeLog:
* coroutines.cc
(cp_coroutine_transform::wrap_original_function_body): Use
function start and end locations when synthesizing code.
(cp_coroutine_transform::cp_coroutine_transform): Set the
function end location.
James K. Lowden [Mon, 16 Jun 2025 15:43:35 +0000 (11:43 -0400)]
cobol: Some 1000 small changes in answer to cppcheck diagnostics.
Constification per cppcheck. Use STRICT_WARN and fix reported
diagnostics. Ignore [shadowVariable] in general. Use std::vector to
avoid exposing arrays as raw pointers.
Spencer Abson [Mon, 16 Jun 2025 19:31:30 +0000 (19:31 +0000)]
aarch64: Add support for unpacked SVE FP conversions
This patch introduces expanders for FP<-FP conversions that leverage
partial vector modes. We also extend the INT<-FP and FP<-INT conversions
using the same approach.
The ACLE enables vectorized conversions like the following:
fcvt z0.h, p7/m, z1.s
modelling the source vector as VNx4SF:
... | SF| SF| SF| SF|
and the destination as a VNx8HF, where this operation would yield:
... | 0 | HF| 0 | HF| 0 | HF| 0 | HF|
hence the useful results are stored unpacked, i.e.
... | X | HF| X | HF| X | HF| X | HF| (VNx4HF)
This patch allows the vectorizer to use this variant of fcvt as a
conversion from VNx4SF to VNx4HF. The same idea applies to widening
conversions, and between vectors with FP and integer base types.
If the source itself had been unpacked, e.g.
... | X | SF| X | SF| (VNx2SF)
The result would yield
... | X | X | X | HF| X | X | X | HF| (VNx2HF)
The upper bits of each container here are undefined, it's important to
avoid interpreting them during FP operations - doing so could introduce
spurious traps. The obvious route we've taken here is to mask undefined
lanes using the operation's predicate if we have flag_trapping_math.
The VPRED predicate mode (e.g. VNx2BI here) cannot do this; to ensure
correct behavior, we need a predicate mode that can control the data as if
it were fully-packed (VNx4BI).
Both VNx2BI and VNx4BI must be recognised as legal governing predicate modes
by the corresponding FP insns. In general, the governing predicate mode for
an insn could be any such with at least as many significant lanes as the data
mode. For example, addvnx4hf3 could be controlled by any of VNx{4,8,16}BI.
We implement 'aarch64_predicate_operand', a new define_special_predicate, to
achieve this.
gcc/ChangeLog:
* config/aarch64/aarch64-protos.h (aarch64_sve_valid_pred_p):
Declare helper for aarch64_predicate_operand.
(aarch64_sve_packed_pred): Declare helper for new expanders.
(aarch64_sve_fp_pred): Likewise.
* config/aarch64/aarch64-sve.md (<optab><mode><v_int_equiv>2):
Extend into...
(<optab><SVE_HSF:mode><SVE_HSDI:mode>2): New expander for converting
vectors of HF,SF to vectors of HI,SI,DI.
(<optab><VNx2DF_ONLY:mode><SVE_2SDI:mode>2): New expander for converting
vectors of SI,DI to vectors of DF.
(*aarch64_sve_<optab>_nontrunc<SVE_PARTIAL_F:mode><SVE_HSDI:mode>):
New pattern to match those we've added here.
(@aarch64_sve_<optab>_trunc<VNx2DF_ONLY:mode><VNx4SI_ONLY:mode>): Extend
into...
(@aarch64_sve_<optab>_trunc<VNx2DF_ONLY:mode><SVE_SI:mode>): Match both
VNx2SI<-VNx2DF and VNx4SI<-VNx4DF.
(<optab><v_int_equiv><mode>2): Extend into...
(<optab><SVE_HSDI:mode><SVE_F:mode>2): New expander for converting vectors
of HI,SI,DI to vectors of HF,SF,DF.
(*aarch64_sve_<optab>_nonextend<SVE_HSDI:mode><SVE_PARTIAL_F:mode>): New
pattern to match those we've added here.
(trunc<SVE_SDF:mode><SVE_PARTIAL_HSF:mode>2): New expander to handle
narrowing ('truncating') FP<-FP conversions.
(*aarch64_sve_<optab>_trunc<SVE_SDF:mode><SVE_PARTIAL_HSF:mode>): New
pattern to handle those we've added here.
(extend<SVE_PARTIAL_HSF:mode><SVE_SDF:mode>2): New expander to handle
widening ('extending') FP<-FP conversions.
(*aarch64_sve_<optab>_nontrunc<SVE_PARTIAL_HSF:mode><SVE_SDF:mode>): New
pattern to handle those we've added here.
* config/aarch64/aarch64.cc (aarch64_sve_packed_pred): New function.
(aarch64_sve_fp_pred): Likewise.
(aarch64_sve_valid_pred_p): Likewise.
* config/aarch64/iterators.md (SVE_PARTIAL_HSF): New mode iterator.
(SVE_HSF): Likewise.
(SVE_SDF): Likewise.
(SVE_SI): Likewise.
(SVE_2SDI): Likewise.
(self_mask): Extend to all integer/FP vector modes.
(narrower_mask): Likewise (excluding QI).
* config/aarch64/predicates.md (aarch64_predicate_operand): New special
predicate to handle narrower predicate modes.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/pack_fcvt_signed_1.c: Disable the aarch64 vector
cost model to preserve this test.
* gcc.target/aarch64/sve/pack_fcvt_unsigned_1.c: Likewise.
* gcc.target/aarch64/sve/pack_float_1.c: Likewise.
* gcc.target/aarch64/sve/unpack_float_1.c: Likewise.
* gcc.target/aarch64/sve/unpacked_cvtf_1.c: New test.
* gcc.target/aarch64/sve/unpacked_cvtf_2.c: Likewise.
* gcc.target/aarch64/sve/unpacked_cvtf_3.c: Likewise.
* gcc.target/aarch64/sve/unpacked_fcvt_1.c: Likewise.
* gcc.target/aarch64/sve/unpacked_fcvt_2.c: Likewise.
* gcc.target/aarch64/sve/unpacked_fcvtz_1.c: Likewise.
* gcc.target/aarch64/sve/unpacked_fcvtz_2.c: Likewise.
Spencer Abson [Mon, 16 Jun 2025 16:43:07 +0000 (16:43 +0000)]
aarch64: Extend iterator support for partial SVE FP modes
Define new iterators for partial floating-point modes, and cover these
in some existing mode_attrs. This patch serves as a starting point for
an effort to extend support for unpacked floating-point operations.
To differentiate between BFloat mode iterators that need to test
TARGET_SSVE_B16B16, and those that don't (see LOGICALF), this patch
enforces the following naming convention:
- _BF: BF16 modes will not test TARGET_SSVE_B16B16.
- _B16B16: BF16 modes will test TARGET_SSVE_B16B16.
gcc/ChangeLog:
* config/aarch64/aarch64-sve.md: Replace uses of SVE_FULL_F_BF
with SVE_FULL_F_B16B16.
Replace use of SVE_F with SVE_F_BF.
* config/aarch64/iterators.md (SVE_PARTIAL_F): New iterator for
partial SVE FP modes.
(SVE_FULL_F_BF): Rename to SVE_FULL_F_B16B16.
(SVE_PARTIAL_F_B16B16): New iterator (BF16 included) for partial
SVE FP modes.
(SVE_F_B16B16): New iterator for all SVE FP modes.
(SVE_BF): New iterator for all SVE BF16 modes.
(SVE_F): Redefine to exclude BF16 modes.
(SVE_F_BF): New iterator to replace the previous SVE_F.
(VPRED): Describe the VPRED mapping for partial vector modes.
(b): Cover partial FP modes.
(is_bf16): Likewise.
Harald Anlauf [Sun, 15 Jun 2025 19:09:28 +0000 (21:09 +0200)]
Fortran: fix checking of MOLD= in ALLOCATE statements [PR51961]
In ALLOCATE statements where the MOLD= argument is present and is not
scalar, and the allocate-object has an explicit-shape-spec, the standard
does not require the ranks to agree. In that case we skip the rank check,
but emit a warning if -Wsurprising is given.
PR fortran/51961
gcc/fortran/ChangeLog:
* resolve.cc (conformable_arrays): Use modified rank check when
MOLD= expression is given.
Jason Merrill [Thu, 12 Jun 2025 15:19:19 +0000 (11:19 -0400)]
c++: add -Wsfinae-incomplete
We already error about a type or function definition causing a concept check
to change value, but it would be useful to diagnose this for other SFINAE
contexts as well; the memoization problem also affects templates. So
-Wsfinae-incomplete remembers if we've failed a requirement for a complete
type/deduced return type in a non-tf_error context, and later warns if the
type/function becomes complete.
This warning is enabled by default; I think the signal-to-noise ratio is
high enough to warrant that, and it catches things that are likely to make
the program "ill-formed, no diagnostic required".
friend87.C is an interesting case; this could be considered a false positive
because it is using friend injection to define the auto function to
implement a compile-time counter. I think this is sufficiently pathological
that it's fine to expect people who want to play this sort of game to
suppress the warning.
The data for this warning uses GTY((cache)) to persist through GC, but allow
entries to be discarded if the key is not otherwise marked.
I don't think it's desirable to export/import this information in modules,
it makes sense for it to be local to a single TU.
-Wsfinae-incomplete=2 adds a warning at the point of failure, which is
primarily intended to help with debugging warnings from the default mode.
Pan Li [Sun, 15 Jun 2025 08:28:38 +0000 (16:28 +0800)]
RISC-V: Refine VX combine test case 0 to avoid code duplication
The case 0 for the vx combine def functions is mostly the same across
the different test files. Thus, rearrange them in one place to
avoid code duplication.
Matthieu Longo [Wed, 11 Sep 2024 15:11:55 +0000 (16:11 +0100)]
aarch64: add support for AEABI Build Attributes
GCS (Guarded Control Stack, an Armv9.4-a extension) requires some
caution at runtime. The runtime linker needs to reason about the
compatibility of a set of relocable object files that might not
have been compiled with the same compiler.
Up until now, those metadata, used for the previously mentioned
runtime checks, have been provided to the runtime linker via GNU
properties which are stored in the ELF section ".note.gnu.property".
However, GNU properties are limited in their expressibility, and a
long-term commitment was taken in the ABI for the Arm architecture
[1] to provide Build Attributes (a.k.a. BAs).
This patch adds support for emitting AArch64 Build Attributes.
This support includes generating two new assembler directives:
.aeabi_subsection and .aeabi_attribute. These directives are generated
as per the syntax mentioned in spec "Build Attributes for the Arm®
64-bit Architecture (AArch64)" available at [1].
gcc/configure.ac now includes a new check to test whether the
assembler being used to build the toolchain supports these new
directives.
Two behaviors can be observed when -mbranch-protection=[standard|...]
is passed:
- If the assembler supports BAs, GCC emits the BA directives and
no GNU properties. Note: the static linker will derive the values
of GNU properties from the BAs, and will emit both BAs and GNU
properties into the output object.
- If the assembler does not support them, only the .note.gnu.property
section will contain the relevant information.
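For illustration only, the emitted directives take roughly the following shape; the exact subsection and tag spellings are assumptions based on the spec at [1], not copied from the commit:

.aeabi_subsection aeabi_feature_and_bits, optional, ULEB128
.aeabi_attribute Tag_Feature_BTI, 1
.aeabi_attribute Tag_Feature_PAC, 1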
Bootstrapped on aarch64-none-linux-gnu, and no regression found.
* config.in: Regenerate.
* config/aarch64/aarch64-elf-metadata.h
(class aeabi_subsection): New class for BAs.
* config/aarch64/aarch64-protos.h
(aarch64_pacret_enabled): New function.
* config/aarch64/aarch64.cc
(HAVE_AS_AEABI_BUILD_ATTRIBUTES): New definition.
(aarch64_file_end_indicate_exec_stack): Emit BAs.
(aarch64_pacret_enabled): New function.
(aarch64_start_file): Indent.
* configure: Regenerate.
* configure.ac: New configure check for BAs support in binutils.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp:
(check_effective_target_aarch64_gas_has_build_attributes): New checker.
* gcc.target/aarch64/build-attributes/aarch64-build-attributes.exp: New DejaGNU file.
* gcc.target/aarch64/build-attributes/build-attribute-bti.c: New test.
* gcc.target/aarch64/build-attributes/build-attribute-gcs.c: New test.
* gcc.target/aarch64/build-attributes/build-attribute-pac.c: New test.
* gcc.target/aarch64/build-attributes/build-attribute-standard.c: New test.
* gcc.target/aarch64/build-attributes/no-build-attribute-bti.c: New test.
* gcc.target/aarch64/build-attributes/no-build-attribute-gcs.c: New test.
* gcc.target/aarch64/build-attributes/no-build-attribute-pac.c: New test.
* gcc.target/aarch64/build-attributes/no-build-attribute-standard.c: New test.
Matthieu Longo [Wed, 4 Jun 2025 11:10:05 +0000 (12:10 +0100)]
aarch64: encapsulate note.gnu.property emission into a class
The code emitting the GNU properties was moved to a separate file to
improve modularity and "relieve" the 31000-line-long aarch64.cc file
of a few lines.
It introduces a new namespace "aarch64::" for the AArch64 backend, which
reduces the length of function names by not prepending 'aarch64_' to
each of them.
gcc/ChangeLog:
* Makefile.in: Add missing declaration of BACKEND_H.
* config.gcc: Add aarch64-elf-metadata.o to extra_objs.
* config/aarch64/aarch64-elf-metadata.h: New file
* config/aarch64/aarch64-elf-metadata.cc: New file.
* config/aarch64/aarch64.cc
(GNU_PROPERTY_AARCH64_FEATURE_1_AND): Removed.
(GNU_PROPERTY_AARCH64_FEATURE_1_BTI): Likewise.
(GNU_PROPERTY_AARCH64_FEATURE_1_PAC): Likewise.
(GNU_PROPERTY_AARCH64_FEATURE_1_GCS): Likewise.
(aarch64_file_end_indicate_exec_stack): Move GNU properties code to
aarch64-elf-metadata.cc
* config/aarch64/t-aarch64: Declare target aarch64-elf-metadata.o
yxj-github-437 [Wed, 4 Jun 2025 13:18:45 +0000 (21:18 +0800)]
c++: ICE with unexpanded pack in asm
Here an unexpanded parameter pack is passed to asm_operand, which doesn't
expect to see an operand without a type. So use check_for_bare_parameter_packs
to remedy that.
gcc/cp/ChangeLog:
* parser.cc (cp_parser_asm_operand_list): Check for unexpanded
parameter packs.
Matthieu Longo [Mon, 23 Sep 2024 13:38:57 +0000 (14:38 +0100)]
aarch64: add debug comments to feature properties in .note.gnu.property
GNU properties are emitted to provide some information about the features
used in the generated code, like BTI, GCS, or PAC. However, no debug
comments are emitted in the generated assembly even if -dA is provided.
It makes understanding the information stored in the .note.gnu.property
section more difficult than needed.
This patch adds assembly comments (if -dA is provided) next to the GNU
properties. For instance, if BTI and PAC are enabled, it will emit:
.word 0x3 // GNU_PROPERTY_AARCH64_FEATURE_1_AND (BTI, PAC)
Jan Hubicka [Mon, 16 Jun 2025 08:19:05 +0000 (10:19 +0200)]
Combine static and afdo branch predictions
Currently afdo reads the profile and annotates basic blocks containing
statements which have samples in the profile data. For basic blocks which have
been fully optimized out (for example, basic blocks controlling loops that have
been fully unrolled) it has no data, which it then tries to determine in
afdo_propagate using Kirchhoff's law.
The problem is that often there is not enough info to solve it. In that
case a few tricks are applied and then the algorithm gives up.
In all cases where it gave up, the count was then set to AFDO 0, and consequently
we ended up with basic blocks having 0 counts in hot regions of the program, and
we cannot trust those 0s much when optimizing.
This patch attempts to preserve the static profile in regions where we have no
info. After the propagation, connected regions are identified and the existing
profile is scaled to fit the profile data. For single-entry single-exit regions
this is the correct answer. For other regions we could theoretically try to
adjust the static profile, but no attempt is made to do so, to keep things simple.
The static profile has quality GUESSED while AFDO data has quality AFDO, which
makes it possible to distinguish them later.
afdo_adjust_guessed_profile does the profile adjustment. The rest of the changes
prevent the code from tampering with the counts of basic blocks that cannot
be fully determined. The propagation has some tricks to compute a lower bound
for some basic blocks on the boundary of annotated regions, and I am not trying
to preserve that.
We can end up with connected components where we cannot determine the count.
This happens in practice in hot code, i.e. for the SPEC2017 perl benchmark. I
plan to handle this incrementally. The current code will simply set the profile
as undefined in those regions, which works worse than 0, and thus we get a
regression in perl. With this changed to 0, I now get the same SPEC2017 score
as without profiling.
The patch also makes gcc completely ignore info about basic blocks which
do have statements with actual 0 AFDO profile info. Since the profile
generation tool cuts the profile at 2%, I think we should keep a low guessed
profile there instead of 0. This is another step I plan to work on incrementally
this week.
Bootstrapped/regtested x86-64, committed.
gcc/ChangeLog:
* auto-profile.cc (edge_set): Remove unused typedef.
(is_bb_annotated): Sanity check that annotated BBs have
quality AFDO and non-annotated BBs non-AFDO. Exceptions are
zeros.
(set_bb_annotated): Verify that a BB set annotated has
an AFDO profile.
(afdo_set_bb_count): Do not return true for 0 counts.
(afdo_find_equiv_class): Fix formatting;
do not combine profiles of annotated and non-annotated BBs.
(afdo_propagate_edge): Fix variable names; dump info
about changes; do not change non-annotated BB profiles;
if all flow out of a BB was decided on, annotate remaining
edges with 0.
(afdo_propagate): Dump info about copied BB counts
and number of iterations used.
(cmp): New function.
(afdo_adjust_guessed_profile): New function.
(afdo_calculate_branch_prob): Do not initialize loop
optimizer here; call afdo_adjust_guessed_profile.
(afdo_annotate_cfg): Initialize profile here;
annotate entry/exit blocks only if the profile is non-0.
* profile-count.h: (profile_count::force_guessed): New.
* tree-cfg.cc (gimple_verify_flow_info): Fix typo.
Since there are no unwanted reg-reg moves during DFmode input reloads in
recent GCCs, the previously committed patch
"xtensa: eliminate unwanted reg-reg moves during DFmode input reloads"
(commit cfad4856fa46abc878934a9433d0bfc2482ccf00) is no longer necessary
and is therefore being reverted.
gcc/ChangeLog:
* config/xtensa/predicates.md (reload_operand):
Remove.
* config/xtensa/xtensa.md:
Remove the peephole2 pattern that was previously added.
Due to improved register allocation for GP registers whose modes have been
changed by paradoxical SUBREGs, the previously committed patch
"xtensa: eliminate unnecessary general-purpose reg-reg moves"
(commit f83e76c3f998c8708fe2ddca16ae3f317c39c37a) is no longer necessary
and is therefore reverted.
gcc/ChangeLog:
* config/xtensa/xtensa.md:
Remove the peephole2 pattern that was previously added.
Jiawei [Fri, 13 Jun 2025 10:25:56 +0000 (18:25 +0800)]
simplify-rtx.cc: Simplify XOR(AND(ROTATE(~1) A) ASHIFT(1 A)) to IOR.
This patch adds a new simplification rule to `simplify-rtx.cc` that
handles a common bit manipulation pattern involving a single-bit set
and clear followed by XOR.
The transformation targets RTL of the form:
(xor (and (rotate (~1) A) B) (ashift 1 A))
which is semantically equivalent to:
B | (1 << A)
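A C-level rendering of the idiom this RTL corresponds to:

/* (b & ~(1u << a)) clears bit a; XORing with (1u << a) then sets it,
   so the whole expression simplifies to b | (1u << a).  */
unsigned
set_bit (unsigned b, unsigned a)
{
  return (b & ~(1u << a)) ^ (1u << a);
}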
- v3 log:
Update RTL format, remove commas.
Only apply on SHIFT_COUNT_TRUNCATED targets.
Check '!side_effects_p' on XEXP (op1, 1).
gcc/ChangeLog:
* simplify-rtx.cc (simplify_context::simplify_binary_operation_1): Handle
more logical simplifications.
Pan Li [Sat, 14 Jun 2025 14:29:40 +0000 (22:29 +0800)]
RISC-V: Combine vec_duplicate + vmaxu.vv to vmaxu.vx on GR2VR cost
This patch would like to combine the vec_duplicate + vmaxu.vv into the
vmaxu.vx. Take the example code below. The related pattern will depend
on the cost of vec_duplicate from GR2VR. Then late-combine will
take action if the cost of GR2VR is zero, and reject the combination
if the GR2VR cost is greater than zero.
Assume we have example code like below, GR2VR cost is 0.
#define DEF_VX_BINARY(T, OP) \
void \
test_vx_binary (T * restrict out, T * restrict in, T x, unsigned n) \
{ \
for (unsigned i = 0; i < n; i++) \
out[i] = in[i] OP x; \
}
* config/riscv/riscv-v.cc (expand_vx_binary_vec_dup_vec): Add new
case UMAX.
(expand_vx_binary_vec_vec_dup): Ditto.
* config/riscv/riscv.cc (riscv_rtx_costs): Ditto.
* config/riscv/vector-iterators.md: Add new op umax.
Georg-Johann Lay [Sat, 14 Jun 2025 17:57:18 +0000 (19:57 +0200)]
AVR: Fix PR120423 / PR116389.
The problem with PR120423 and PR116389 is that reload might assign an invalid
hard register to a paradoxical subreg. For example with the test case from
the PR, it assigns (REG:QI 31) to the inner of (subreg:HI (QI) 0) which is
valid, but the subreg will be turned into (REG:HI 31) which is invalid
and triggers an ICE in postreload.
The problem only occurs with the old reload pass.
The patch maps the paradoxical subregs to zero-extends, which will be
allocated correctly. For the 120423 testcases, the code is the same as
with -mlra (which doesn't implement the fix), so the patch doesn't even
introduce a performance penalty.
The patch is only needed for v15: v14 is not affected, and in v16 reload
will be removed.
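Schematically, the split turns something like
  (ashift:HI (subreg:HI (reg:QI) 0) (const_int N))
into
  (ashift:HI (zero_extend:HI (reg:QI)) (const_int N))
(an illustrative shape only, not the exact pattern added to avr.md).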
PR rtl-optimization/120423
PR rtl-optimization/116389
gcc/
* config/avr/avr.md [-mno-lra]: Add pre-reload split to transform
(left shift of) a paradoxical subreg to a (left shift of) zero-extend.
gcc/testsuite/
* gcc.target/avr/torture/pr120423-1.c: New test.
* gcc.target/avr/torture/pr120423-2.c: New test.
* gcc.target/avr/torture/pr120423-116389.c: New test.
Iain Sandoe [Thu, 29 May 2025 15:50:44 +0000 (16:50 +0100)]
c++, coroutines: Improve diagnostics for awaiter/promise.
At present, we can issue diagnostics about missing or malformed
awaiter or promise methods when we encounter their uses in the
body of a user's function. We might then re-issue the same
diagnostics when processing the initial or final await expressions.
This change avoids such duplication, and also attempts to
identify issues with the initial or final expressions specifically
since diagnostics for those do not have any useful line number.
gcc/cp/ChangeLog:
* coroutines.cc (build_co_await): Identify diagnostics
for initial and final await expressions.
(cp_coroutine_transform::wrap_original_function_body): Do
not handle initial and final await expressions here ...
(cp_coroutine_transform::apply_transforms): ... handle them
here and avoid duplicate diagnostics.
* coroutines.h: Declare initial and final await expressions
in the transform class. Save the function closing brace
location.
gcc/testsuite/ChangeLog:
* g++.dg/coroutines/coro1-missing-await-method.C: Adjust for
improved diagnostics.
* g++.dg/coroutines/coro-missing-final-suspend.C: Likewise.
* g++.dg/coroutines/pr104051.C: Move to...
* g++.dg/coroutines/pr104051-0.C: ...here.
* g++.dg/coroutines/pr104051-1.C: New test.
Since the folding of this builtin happens after the main coroutine FE
lowering, we need to account for await expressions in that lowering.
Since these expressions have the property of not being evaluated, but do
not have the full constraints of an unevaluated context, we want to
apply the checks and then remove the await expressions so that they no
longer participate in the analysis and lowering.
When a builtin_constant_p call is encountered, and the operand contains
any await expression, we check to see if the operand can be a constant
and replace the call with its result.
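A hedged illustration of the handled situation (the task/awaitable types
below are hypothetical, not from the patch or its testsuite): the operand
of __builtin_constant_p contains an await expression, so the call is
folded to its result and the operand is discarded before lowering.
#include <coroutine>
struct task
{
  struct promise_type
  {
    task get_return_object () { return {}; }
    std::suspend_never initial_suspend () { return {}; }
    std::suspend_never final_suspend () noexcept { return {}; }
    void return_value (int) {}
    void unhandled_exception () {}
  };
};
struct awaitable
{
  bool await_ready () { return true; }
  void await_suspend (std::coroutine_handle<>) {}
  int await_resume () { return 42; }
};
task f ()
{
  /* The operand is not evaluated; since the awaited value is not a
     compile-time constant, the call folds to 0 and the await
     expression no longer participates in coroutine lowering.  */
  if (__builtin_constant_p (co_await awaitable{}))
    co_return 1;
  co_return 0;
}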
PR c++/116775
gcc/cp/ChangeLog:
* coroutines.cc (analyze_expression_awaits): When we see
a builtin_constant_p call, and that contains one or more
await expressions, then replace the call with its result
and discard the unevaluated operand.
Iain Sandoe [Sat, 7 Jun 2025 16:01:15 +0000 (17:01 +0100)]
c++, coroutines: Ensure that the resumer is marked as can_throw.
We must flag that the resumer might throw (since the wrapping of the
original function body unconditionally adds a try-catch/rethrow). We
also add code that might throw - even when the original function body
would not.
TODO: We could improve code-gen by recognising cases where the combined
body + initial await expressions cannot throw and omitting the unneeded
try/catch/rethrow wrapper.
Jakub Jelinek [Fri, 13 Jun 2025 21:17:17 +0000 (23:17 +0200)]
expand: Add a helper function for edge splitting [PR120629]
On Fri, Jun 13, 2025 at 08:52:55AM +0100, Richard Sandiford wrote:
> But now that there are two instances, I wonder if it would
> be worth hiding this detail in a helper function?
Here it is.
2025-06-13 Jakub Jelinek <jakub@redhat.com>
PR middle-end/120629
* cfgexpand.cc (expand_split_edge): New function.
(expand_gimple_cond, construct_init_block): Use it.
Jonathan Wakely [Thu, 22 May 2025 14:42:45 +0000 (15:42 +0100)]
libstdc++: Fix std::uninitialized_value_construct for arrays [PR120397]
The std::uninitialized_{value,default}_construct{,_n} algorithms should
be able to create arrays, but that currently fails because when an
exception happens they clean up using std::_Destroy and in C++17 that
doesn't support destroying arrays. (For C++20 and later, std::destroy
does handle destroying arrays.)
This commit adjusts the _UninitDestroyGuard RAII type used by those
algos so that in C++17 mode it recursively destroys each rank of an
array type, only using std::_Destroy for the last rank when it's
destroying non-array objects.
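A rough sketch of that recursive destruction (a simplified stand-in
assuming C++17; not the actual _UninitDestroyGuard code):
#include <type_traits>
template <typename T>
void destroy_one (T &obj)
{
  if constexpr (std::is_array_v<T>)
    for (auto &elem : obj)   // peel one array rank per recursion level
      destroy_one (elem);
  else
    obj.~T ();               // last rank: destroy a non-array object
}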
libstdc++-v3/ChangeLog:
PR libstdc++/120397
* include/bits/stl_uninitialized.h (_UninitDestroyGuard<I,void>):
Add new member function _S_destroy and call it from the
destructor (for C++17 only).
* testsuite/20_util/specialized_algorithms/uninitialized_default_construct/120397.cc:
New test.
* testsuite/20_util/specialized_algorithms/uninitialized_value_construct/120397.cc:
New test.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Tomasz Kamiński [Fri, 13 Jun 2025 11:28:30 +0000 (13:28 +0200)]
libstdc++: Format %r, %x and %X using locale's time_put facet [PR120648]
Similarly to the issue reported for %c in PR117214, the format string for
locale-specific time (%r, %X) and date (%x) representations may contain
specifiers not accepted by chrono-spec, leading to an exception being
thrown. This happened for the following conversion specifier and locale
combinations:
* %r, %X for aa_DJ.UTF-8, ar_SA.UTF-8
* %x for ca_AD.UTF-8, my_MM.UTF-8
This fix follows the approach from r15-8490-gc24a1d5, and uses time_put
to emit the localized date format. The existing _M_c is reworked to
handle all locale-dependent conversion specifiers, by accepting them as
an argument. It is also renamed to _M_c_r_x_X.
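For reference, a minimal standalone illustration of the underlying idea
of delegating a conversion specifier to the locale's time_put facet (not
the chrono_io.h code itself):
#include <ctime>
#include <iostream>
#include <locale>
#include <sstream>
int main ()
{
  std::tm t{};
  t.tm_hour = 13; t.tm_min = 5; t.tm_sec = 7;
  std::ostringstream os;
  os.imbue (std::locale (""));  // the user's locale; output varies
  /* Let the facet expand %X using the locale's own time format
     string, whatever specifiers that string happens to contain.  */
  std::use_facet<std::time_put<char>> (os.getloc ())
    .put (os, os, os.fill (), &t, 'X');
  std::cout << os.str () << '\n';
}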
PR libstdc++/120648
libstdc++-v3/ChangeLog:
* include/bits/chrono_io.h (__formatter_chrono::_M_format_to):
Handle %c, %r, %x and %X by passing them to _M_c_r_x_X.
(__formatter_chrono::_M_c_r_x_X): Reworked from _M_c.
(__formatter_chrono::_M_c): Renamed into above.
(__formatter_chrono::_M_r, __formatter_chrono::_M_x)
(__formatter_chrono::_M_X): Removed.
* testsuite/std/time/format/pr117214.cc: New tests for %r, %x,
%X with date, time and durations.
Patrick Palka [Fri, 13 Jun 2025 15:03:19 +0000 (11:03 -0400)]
libstdc++: Optimize __make_comp/pred_proj for empty/scalar types
When creating a composite comparator/predicate that invokes a given
projection function, we don't need to capture a scalar (such as a
function pointer or member pointer) or an empty object by reference;
instead we capture it by value and use [[no_unique_address]] to elide
its storage (in the empty case). This makes using __make_comp_proj
zero-cost in the common case where both functions are empty/scalars.
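A rough sketch of the idea (simplified, with hypothetical names; not the
actual libstdc++ code):
#include <functional>
#include <type_traits>
#include <utility>
/* Store empty or scalar callables by value, anything else by
   reference.  */
template <typename F>
using by_ref_or_value
  = std::conditional_t<std::is_empty_v<F> || std::is_scalar_v<F>, F, F &>;
template <typename Comp, typename Proj>
struct comp_proj
{
  [[no_unique_address]] by_ref_or_value<Comp> comp;
  [[no_unique_address]] by_ref_or_value<Proj> proj;
  template <typename T, typename U>
  constexpr bool
  operator() (T &&t, U &&u) const
  {
    return std::invoke (comp, std::invoke (proj, std::forward<T> (t)),
                        std::invoke (proj, std::forward<U> (u)));
  }
};
With an empty comparator and an empty projection, [[no_unique_address]]
lets both members occupy no storage, so passing the composite around
costs nothing.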
libstdc++-v3/ChangeLog:
* include/bits/ranges_algo.h (__detail::__by_ref_or_value_fn): New.
(__detail::_Comp_proj): New.
(__detail::__make_comp_proj): Use it instead.
(__detail::_Pred_proj): New.
(__detail::__make_pred_proj): Use it instead.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
libstdc++: add a workaround for format_kind<optional<T>> [PR120644]
The specialization of format_kind for optional is causing a problem when
optional is imported and included. The comments on the PR strongly
suggest that this is a frontend bug; this commit just works around the
issue by specifying the type of format_kind<optional<T>> to be
`range_format`, rather than letting the compiler deduce it via `auto`.
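Schematically (a paraphrase of the shape of the change, not the exact
declaration in the header):
  before: template<typename T> constexpr auto format_kind<optional<T>> = ...;
  after:  template<typename T> constexpr range_format format_kind<optional<T>> = ...;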
PR c++/120644
libstdc++-v3/ChangeLog:
* include/std/optional (format_kind): Do not use `auto`.