While looking into PR 110252 a few years back, I noticed this missed
optimization in code from sel-sched.cc. I only realized today that
I could generalize it from handling just the constant 1 to handling
all positive values.
This adds the pattern to optimize:
signed < 0 ? positive : min<signed, positive>
into:
unsigned ts = signed;
unsigned ps = positive;
unsigned ru = min<ts, ps>;
(signed)ru
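A minimal C sketch of the semantics being matched (the function names are illustrative, not from the patch); it assumes `p` is known non-negative:

```c
#include <assert.h>

/* Original form: signed < 0 ? positive : min (signed, positive).  */
static int
before (int s, int p)
{
  return s < 0 ? p : (s < p ? s : p);
}

/* Transformed form: a single unsigned min.  A negative 's' becomes a
   large unsigned value and therefore never wins the min against a
   non-negative 'p'.  */
static int
after (int s, int p)
{
  unsigned ts = (unsigned) s;
  unsigned ps = (unsigned) p;
  unsigned ru = ts < ps ? ts : ps;
  return (int) ru;
}
```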
gcc:
* doc/install.texi (Prerequisites): Use Binutils over binutils to
refer to that project.
(Downloading the source): Ditto.
(Configuration): Ditto.
(Building): Ditto.
(Specific): Ditto.
Clearly a permutation of a permutation is another permutation, so
the above expression can be simplified/canonicalized. Conveniently,
there's already code in simplify-rtx.cc to spot that a vec_select of
a vec_select is an identity; this patch extends that functionality to
simplify a vec_select of a vec_select into a single vec_select.
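The composition rule can be sketched in C (a hypothetical helper, not the actual simplify-rtx.cc code): the combined selector indexes the inner selector with the outer one.

```c
#include <assert.h>

/* For (vec_select (vec_select x inner) outer), the equivalent single
   vec_select uses combined[i] = inner[outer[i]].  */
static void
compose (const int *inner, const int *outer, int *combined, int n)
{
  for (int i = 0; i < n; i++)
    combined[i] = inner[outer[i]];
}
```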
With this transformation in simplify-rtx.cc, combine now reports:
2026-04-26 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* simplify-rtx.cc (simplify_context::simplify_binary_operation_1)
<case VEC_SELECT>: Simplify a (non-identity) vec_select of a
vec_select.
gcc/testsuite/ChangeLog
* gcc.target/i386/sse2-pshufd-2.c: New test case.
Roger Sayle [Sun, 26 Apr 2026 09:56:43 +0000 (10:56 +0100)]
PR tree-optimization/124715: pow(0,-1) sets errno with -fmath-errno
This patch addresses PR tree-optimization/124715, where it is unsafe for
GCC (specifically match.pd) to transform pow(x,-1) into 1.0/x if x may be
zero, which sets errno, unless -fno-math-errno (included in -ffast-math)
is specified.
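For reference, a C sketch of the errno behaviour at issue (whether errno is actually set depends on math_errhandling; glibc sets ERANGE for this pole error):

```c
#include <assert.h>
#include <errno.h>
#include <math.h>

/* pow (+0.0, -1.0) is a pole error: it returns +HUGE_VAL, and on
   implementations where math_errhandling includes MATH_ERRNO it also
   sets errno to ERANGE.  Folding it to 1.0/x loses that side effect,
   which is why the fold must be guarded on -fno-math-errno.  */
static double
pole (void)
{
  errno = 0;
  return pow (0.0, -1.0);
}
```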
2026-04-26 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR tree-optimization/124715
* match.pd (simplify pows): Check flag_errno_math before simplifying
pow(x,-1) -> 1/x when x could be zero.
gcc/testsuite/ChangeLog
PR tree-optimization/124715
* gcc.dg/no-math-errno-5.c: New test case.
* gcc.dg/no-math-errno-6.c: Likewise.
Roger Sayle [Sun, 26 Apr 2026 09:53:20 +0000 (10:53 +0100)]
i386: Refactor AVX512 comparisons in machine description sse.md.
This patch refactors/tidies up the define_insns for vector comparisons
on 512-bit vectors in sse.md. The motivation is that the current
organization (accidentally) introduces dubious instructions such as
avx512f_cmpv16si3_mask_round and avx512vl_cmpv2di3_mask_round, which
are integer comparisons that specify a floating point rounding mode!?
The problem is caused by the decomposition of mode iterators.
Currently, sse.md uses four patterns: (1) for signed comparisons
of floating point and large integer modes (V48H), (2) for signed
comparisons of small integer modes (VI12), (3) for unsigned
comparisons of small integer modes (VI12) and (4) for unsigned
comparisons of large integer modes (VI48). The first pattern
also allows for variants specifying the FP rounding mode.
The refactoring below uses a more sensible decomposition into
only three patterns: (1) for [signed] comparisons of floating
point modes (VFH), (2) for signed comparisons of integers (VI1248)
and (3) for unsigned comparisons of integers (VI1248).
For the record, to show this produces the same coverage:
The simplification also allows a clean-up of predicates
(for operand[3]) as there are 8 integer comparison operators
and 32 floating point comparison operators, and we no longer
need cmp_imm_predicate to restrict the range based upon <mode>.
There are no changes other than removing the nonsensical patterns
from insn-emit, insn-recog and friends.
2026-04-26 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/sse.md
(<avx512>_cmp<mode>3<mask_scalar_merge_name><round_saeonly_name>):
Change mode iterator from V48H_AVX512VL to VFH_AVX512VL and op3's
predicate from <cmp_imm_predicate> to const_0_to_31_operand.
(<avx512>_cmp<mode>3<mask_scalar_merge_name>): Change mode
iterator from VI12_AVX512VL to VI1248_AVX512VLBW.
(<avx512>_ucmp<mode>3<mask_scalar_merge_name>): Likewise.
Jeff Law [Sun, 26 Apr 2026 00:12:27 +0000 (18:12 -0600)]
[RISC-V][PR rtl-optimization/56096] Improve equality comparisons of logical AND expressions
This BZ shows that we can improve certain comparisons for RISC-V. In
particular if we are testing the result of a logical AND for equality and one
operand of the AND requires synthesis, we may be able to do better if we right
shift away any trailing zeros from the constant and shift the other input as
well. This wins when the shifted constant does not require synthesis.
That may in turn allow improvement of a select of 0 and 2^n based on the
zero/nonzero status of a logical AND. Essentially we can rewrite the sequence
to remove a data dependency.
Concretely:
>
> unsigned f1 (unsigned x, unsigned m)
> {
> x >>= ((m & 0x008080) ? 8 : 0);
> return x;
> }
Compiles into:
> li a5,32768
> addi a5,a5,128
> and a1,a1,a5
> snez a1,a1
> slliw a1,a1,3
> srlw a0,a0,a1
> ret
But after this patch we generate this instead:
> srai a5,a1,7
> andi a5,a5,257
> li a4,8
> czero.eqz a1,a4,a5
> srlw a0,a0,a1
> ret
It's just one less instruction, but the li can issue whenever the uarch wants
before the srlw as it has no incoming dependency. So we're slightly more dense
on encoding and slightly more efficient as well. Much like 57650, I'm focused
on the low level RISC-V codegen issues, not the broader issues that are raised
in the PR.
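The equivalence being exploited can be sketched in C (function names are illustrative): shifting both the constant and the other input right by the constant's trailing-zero count preserves the zero/nonzero result.

```c
#include <assert.h>

/* 0x8080 has 7 trailing zeros; 0x8080 >> 7 == 0x101, which fits a
   12-bit immediate (andi) on RISC-V, while 0x8080 itself requires
   constant synthesis.  */
static int
nonzero_orig (unsigned m)
{
  return (m & 0x8080u) != 0;
}

static int
nonzero_shifted (unsigned m)
{
  return ((m >> 7) & 0x101u) != 0;
}
```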
This has been in my tree for a while, so it's been tested on riscv32-elf,
riscv64-elf and bootstrapped on the BPI which has support for czero. Waiting
on pre-commit CI before moving forward.
PR rtl-optimization/56096
gcc/
* config/riscv/riscv.md: Add new patterns to optimize certain cases with
a logical AND feeding an equality test against zero.
Andrew Pinski [Tue, 10 Feb 2026 17:41:48 +0000 (09:41 -0800)]
scev/niter: Use INTEGRAL_NB_TYPE_P instead of direct comparison to INTEGER_TYPE [PR124061]
I noticed this while looking into PR 124052. This is not the first time
we have had a direct type comparison against INTEGER_TYPE that should
have been broader. As mentioned in PR 124052, I didn't want to include
bool types, so I needed a new macro to simplify things.
Bootstrapped and tested on x86_64-linux-gnu.
PR tree-optimization/124061
gcc/ChangeLog:
* tree-scalar-evolution.cc (interpret_rhs_expr): Use
INTEGRAL_NB_TYPE_P instead of comparing the code to INTEGER_TYPE.
* tree-ssa-loop-niter.cc (number_of_iterations_ne): Likewise.
(number_of_iterations_cltz): Likewise.
(number_of_iterations_exit_assumptions): Likewise.
* tree.h (INTEGRAL_NB_TYPE_P): New macro.
gcc/testsuite/ChangeLog:
* g++.dg/opt/enum-loop-1.C: New test.
* gcc.dg/tree-ssa/bitint-loop-opt-1.c: New test.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
Jeff Law [Sat, 25 Apr 2026 18:18:34 +0000 (12:18 -0600)]
[RISC-V][PR target/123904] Improve bit masking of shifted values
If we are masking off bits on the upper and lower part of a register on riscv,
depending on the precise mask it may be best implemented as a shift triplet.
ie, shift left to clear upper bits, shift right to clear lower bits, shift left
again to put the bits into their proper position.
If the input value is already left shifted and the shift count corresponds to
the low mask bits, then we can get away with just two shifts. We shift left to
clear the relevant high bits, then shift right to put them into their proper
position.
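The two cases can be sketched in C for the mask 0x0000ff00 on a 32-bit value (the mask and shift counts here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* General case: three shifts.  Left to clear the upper bits, right to
   clear the lower bits, left again to reposition.  */
static uint32_t
mask_triplet (uint32_t x)
{
  return ((x << 16) >> 24) << 8;        /* == x & 0xff00 */
}

/* If the input is already left-shifted by the mask's trailing-zero
   count, two shifts suffice.  */
static uint32_t
mask_pair (uint32_t v)
{
  return (v << 24) >> 16;               /* == (v << 8) & 0xff00 */
}
```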
This likely came from spec or coremark given it was reported to me by the RAU
team a while back. But the testcase didn't include enough breadcrumbs to know
for sure.
This has been repeatedly bootstrapped and regression tested on the Pioneer and
BPI as well as regularly regression tested on the riscv32-elf and riscv64-elf
embedded targets.
I'll wait for pre-commit CI to spin before pushing to the trunk.
PR target/123904
gcc/
* config/riscv/riscv.md (masking shifted value): New splitter to
optimize certain masking operations on shifted values.
gcc/testsuite/
* gcc.target/riscv/pr123904.c: New test.
Jeff Law [Sat, 25 Apr 2026 17:40:38 +0000 (11:40 -0600)]
[RISC-V][PR target/123838] Improve code generated for shifts with counts 31-N or 63-N
A shift count expressed as 31 - n ends up generating code like this:
li a5,31
subw a5,a5,a1
sllw a0,a0,a5
ret
Note how we had to load 31 into a constant for the subtraction. But instead of
using 31 - n we can use a bit-not as it'll do precisely what we need in the
bits that the shift instruction actually uses. This results in:
not a1, a1
sllw a0, a0, a1
ret
The core idea we're exploiting here is that the processor implements
SHIFT_COUNT_TRUNCATED semantics, so an SI shift only cares about the low
5 bits and a DI shift about the low 6 bits of the shift count. And if we
think about what bit
pattern -1 would be in those cases we get 31 and 63. We then exploit the
identity
-x = ~x + 1 // identity
-1 - x = ~x // a tiny bit of algebra
So in these limited cases we can replace the -1 - x with ~x.
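The identity can be checked exhaustively in C (a standalone check, not the splitter itself):

```c
#include <assert.h>

/* Under 5-bit shift-count truncation, 31 - n and ~n agree in the low
   five bits for every n, since 31 == -1 (mod 32) and ~n == -1 - n.  */
static unsigned
shl_sub (unsigned x, unsigned n)
{
  return x << ((31u - n) & 31u);
}

static unsigned
shl_not (unsigned x, unsigned n)
{
  return x << (~n & 31u);
}
```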
I didn't implement this in simplify-rtx. It wasn't actually going to help
because while the RISC-V chip implements SHIFT_COUNT_TRUNCATED semantics, it
doesn't define SHIFT_COUNT_TRUNCATED for "reasons".
So there are two patterns. One for an X mode destination, naturally the shift
count is 31/63 - n for SI/DI respectively. It's a bit odd that the subtraction
is always SImode, but that's probably narrowing happening somewhere.
The second pattern covers the "w" forms for rv64.
This trick probably works for the zbs instructions as well. That's going to be
a whole lot more patterns and I haven't seen this idiom show up anywhere in
practice, so it doesn't seem like a good cost/benefit analysis.
This spun overnight on riscv32-elf and riscv64-elf and on the Pioneer without
regressions. I'll wait for pre-commit CI to do its thing before pushing.
PR target/123838
gcc/
* config/riscv/riscv.md: Use splitters to simplify shifts where
the shift count is 31-N or 63-N.
gcc/testsuite
* gcc.target/riscv/pr123838.c: New test.
Pan Li [Tue, 13 Jan 2026 02:03:46 +0000 (10:03 +0800)]
RISC-V: Combine vec_duplicate + vmsle.vv to vmsle.vx on GR2VR cost
This patch would like to combine vec_duplicate + vmsle.vv into
vmsle.vx, as shown in the example below. The related pattern depends
on the cost of a vec_duplicate from GR2VR: late-combine will take
action if the GR2VR cost is zero, and reject the combination if the
GR2VR cost is greater than zero.
Assume we have asm code like the below, where the GR2VR cost is 0.
After this patch:
11 beq a3,zero,.L8
...
14 .L3:
15 vsetvli a5,a3,e32,m1,ta,ma
...
20 vmsle.vx v1,a2,v3
...
23 bne a3,zero,.L3
gcc/ChangeLog:
* config/riscv/predicates.md: Add ge to the swappable
cmp operator iterator.
* config/riscv/riscv-v.cc (get_swapped_cmp_rtx_code): Take
care of the swapped rtx code as well.
Daniel Barboza [Wed, 18 Feb 2026 13:29:50 +0000 (10:29 -0300)]
match.pd: remove bit set/bit clear branch mispredict [PR64567]
Add two patterns to eliminate mispredicts in the following bit ops
scenarios:
- checking if a single bit is not set, and in this case set it: always
set the bit;
- checking if a bitmask is set (even partially), and in this case clear
it: always clear the bitmask.
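The two shapes being rewritten, sketched in C (names illustrative):

```c
#include <assert.h>

/* Check-then-set a single bit: the branch is redundant, an
   unconditional OR gives the same result.  */
static unsigned
set_bit_branchy (unsigned x, unsigned bit)
{
  if (!(x & bit))
    x |= bit;
  return x;
}

static unsigned
set_bit_flat (unsigned x, unsigned bit)
{
  return x | bit;
}

/* Check-then-clear a bitmask (even if only partially set): likewise
   an unconditional AND with the complement.  */
static unsigned
clear_branchy (unsigned x, unsigned mask)
{
  if (x & mask)
    x &= ~mask;
  return x;
}

static unsigned
clear_flat (unsigned x, unsigned mask)
{
  return x & ~mask;
}
```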
Bootstrapped and tested with x86_64-pc-linux-gnu.
PR tree-optimization/64567
gcc/ChangeLog:
* match.pd (`cond (bit_and A IMM) (bit_or A IMM) A`): New
pattern.
(`cond (bit_and A IMM) (bit_and A ~IMM) A`): New pattern.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr64567-2.c: New test.
* gcc.dg/tree-ssa/pr64567.c: New test.
tree-ssa-strlen: Use gimple_build/gimple_convert_to_ptrofftype [PR122989]
Replace convert_to_ptrofftype, force_gimple_operand_gsi,
gimple_build_assign, and gsi_insert_before with
gimple_convert_to_ptrofftype and gimple_build.
gcc/ChangeLog:
PR tree-optimization/122989
* tree-ssa-strlen.cc (get_string_length): Use
gimple_convert_to_ptrofftype and gimple_build instead of
convert_to_ptrofftype/force_gimple_operand_gsi/gimple_build_assign.
As shown in the PR, we can trigger an RTL checking abort when classifying thead
specific addressing modes. As far as I can tell, the code is supposed to be
extracting the constant value from the multiply operation, but instead
references the wrong object.
The fix is trivial. I don't think this is anywhere near serious enough to try
to get into the imminent gcc-16 release. So after pre-commit testing is done
I'll push to the trunk, then backport in a week or so after the gcc-16 release
has been made.
This has been regression tested on riscv64-elf and riscv32-elf. While it will
spin on the Pioneer overnight, which has the relevant thead extensions, they
aren't enabled by default, so I don't really expect any meaningful improvements
to coverage.
PR target/124984
gcc/
* config/riscv/thead.cc (th_memidx_classify_address_index): Extract
constant multiplicand value from the right object.
gcc/testsuite
* gcc.target/riscv/pr124984.c: New test.
Jeff Law [Fri, 24 Apr 2026 20:58:00 +0000 (14:58 -0600)]
[RISC-V][PR rtl-optimization/80770] Canonicalize extending byte loads for RISC-V
In the process of debugging pr80770 with Shreya it became apparent that a
failure to CSE certain memory references was inhibiting Shreya's RTL
simplification from firing in all the cases we cared about as the simplifier
requires two operands to be the same pseudo.
The failure to CSE stems from having two QI loads which are sign extended to
different sized destinations. As it turns out the code to fix that was
something I already had in flight as it's a small piece of eliminating a few
define_insn_and_split patterns (or simplifying them down to just a
define_split).
To expose the missed CSE what we really want to do is extend the value out to
word mode in a temporary, then use a lowpart extraction to set the real
destination. The key being we haven't changed the size of the load, just how
widely it gets extended. Think of it as canonicalization for the purposes of
CSE.
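The canonical shape can be sketched at the C level (names illustrative): both extensions share one sign-extending byte load once each is expressed as a word-mode extension plus a lowpart truncation.

```c
#include <assert.h>
#include <stdint.h>

static int16_t
ext_direct (const int8_t *p)
{
  return (int16_t) *p;
}

/* Canonical form: extend the byte out to word mode first, then take
   the low part.  The load width is unchanged; only the extension
   width is, which lets differently-sized extensions CSE.  */
static int16_t
ext_canonical (const int8_t *p)
{
  int64_t word = (int64_t) *p;  /* sign-extending byte load */
  return (int16_t) word;        /* lowpart extraction */
}
```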
This isn't the full set of changes I had in flight in that space, but does
clean things up enough for QImode loads to get CSE'd better and is enough to
trigger Shreya's pr80770 changes consistently for the testcodes we have on
RISC-V.
This has been spinning in my tester for a while. So it's clean on riscv64-elf,
riscv32-elf as well as bootstrapped and regression tested on the Pioneer and
BPI-F3. I'll wait for the pre-commit tester to do its thing before pushing to
the trunk.
In case it's not obvious, I'm focused on trickling RISC-V target improvements
right now so as not to potentially interfere with the release process. So this
doesn't include Shreya's simplify-rtx.cc changes.
PR rtl-optimization/80770
gcc/
* config/riscv/riscv.md (zero_extendqi<SUPERQI:mode>2): Always extend
out to a word and use a subreg lowpart extraction to get the right bits.
(extend<SHORT:mode><SUPERQI:mode>2): Similarly.
Carter Rennick [Fri, 3 Apr 2026 13:07:38 +0000 (13:07 +0000)]
mips: Fix ICE on mips64-elf by removing MAX_FIXED_MODE_SIZE override [PR120144]
The definition of MAX_FIXED_MODE_SIZE did not account for MIPS supporting
TImode, which causes an internal compiler error when building libstdc++. Upon further
investigation, this definition appears to be a historical mistake.
This patch removes the MAX_FIXED_MODE_SIZE override, which fixes the error.
Eikansh Gupta [Tue, 31 Mar 2026 11:21:00 +0000 (16:51 +0530)]
tree-ssa-dce: eliminate dead relaxed atomic loads with no LHS [PR123966]
A relaxed atomic load whose result is never used has no observable
effect: the value is discarded and __ATOMIC_RELAXED provides no
inter-thread synchronisation guarantee.
Fix this by adding an early-return check for
BUILT_IN_ATOMIC_LOAD_1/2/4/8/16 calls that have no LHS and a
compile-time-constant relaxed memory order.
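In source terms (using the __atomic_load_n builtin that the BUILT_IN_ATOMIC_LOAD_* codes correspond to):

```c
#include <assert.h>

static int counter;

/* Result unused, relaxed order: no observable effect, so DCE may
   delete the load.  */
static void
dead_load (void)
{
  __atomic_load_n (&counter, __ATOMIC_RELAXED);
}

/* Result used: the load stays necessary.  */
static int
live_load (void)
{
  return __atomic_load_n (&counter, __ATOMIC_RELAXED);
}
```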
PR tree-optimization/123966
gcc/ChangeLog:
* tree-ssa-dce.cc (mark_stmt_if_obviously_necessary):
Don't mark a relaxed atomic load with no LHS as necessary.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr123966.c: New test.
Milan Tripkovic [Fri, 24 Apr 2026 15:01:51 +0000 (09:01 -0600)]
[PATCH] RISC-V: Add vector cost model for Spacemit-X60
This patch implements a dedicated vector cost model for the Spacemit-X60
core. The cost values are derived from micro-benchmarking
data provided by the Camel CDR project.
Following discussions during the RISC-V Patchwork Meeting and based on
the upstream review process, this model applies clamping to
long-latency instructions. Specifically, all long reservations
are capped at 7 cycles.
As we do not have access to the SPEC CPU benchmark suite, no testing
was performed using that suite. The implementation is based on the
cycle counts reported in the linked data source.
Data source:
https://camel-cdr.github.io/rvv-bench-results/spacemit_x60/index.html
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_sched_adjust_cost): Enable
TARGET_ADJUST_LMUL_COST for spacemit_x60.
* config/riscv/spacemit-x60.md: Add vector pipeline model
for Spacemit-X60.
Co-authored-by: Dusan Stojkovic <Dusan.Stojkovic@rt-rk.com>
Co-authored-by: Nikola Ratkovac <Nikola.Ratkovac@rt-rk.com>
When P2165R4 updated __has_tuple_element in C++23 to reuse the
__tuple_like concept, it dropped the requirement that get be valid,
assuming that for a tuple-like type of size N, get<I> on an lvalue is
well-formed for any I < N.
This however does not hold for ranges::subrange (a tuple-like of size 2)
with a move-only iterator, for which get can only be applied to an
rvalue. In consequence, the constraints allowed instantiating
elements_view for a range of such subranges, but instantiating its
iterator led to a hard error from the iterator_category computation.
This patch applies the requirements on validity of get also in C++23 and
later standard modes.
libstdc++-v3/ChangeLog:
* include/std/ranges (__detail::__has_tuple_element): Check
if std::get<_Nm>(__t) returns referenceable type also for C++23
and later.
* testsuite/std/ranges/adaptors/elements.cc: Add test covering
vector of ranges::subrange with move-only iterator.
Reviewed-by: Patrick Palka <ppalka@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
Richard Biener [Thu, 26 Feb 2026 14:27:10 +0000 (15:27 +0100)]
Some TLC to vect_create_new_slp_node APIs
The following properly documents the overloads of vect_create_new_slp_node
and adjusts callers in tree-vect-slp-patterns.cc
* tree-vect-slp.cc (vect_create_new_slp_node): Assert that 'code'
is either ERROR_MARK or VEC_PERM_EXPR. Document properly.
* tree-vect-slp-patterns.cc (vect_build_swap_evenodd_node):
Use lane_permutation_t.
(vect_build_combine_node): Likewise. Pass VEC_PERM_EXPR
as code.
[RISC-V][V2][PR target/123839] Improve subset of constant permutes for RISC-V
There's a set of constant permutes that are currently implemented
via vslideup+vcompress which requires a mask (and setup of the
mask), but which can be implemented via vslideup+vslidedown.
This has been tested on riscv{32,64}-elf as well as in a BPI-F3 which
is configured to use V by default.
PR target/123839
gcc/
* config/riscv/riscv-v.cc (shuffle_slide_patterns): Use a
vslideup+vslidedown pair rather than a vcompressed based
sequence.
gcc/testsuite
* gcc.target/riscv/rvv/autovec/binop/vcompress-avlprop-1.c: Adjust
expected output.
* gcc.target/riscv/rvv/autovec/pr123839.c: New test.
Jakub Jelinek [Fri, 24 Apr 2026 12:50:23 +0000 (14:50 +0200)]
rs6000: Don't fold stuff for C++ during targetm.resolve_overloaded_builtin [PR124133]
The following testcase ICEs starting with the removal of NON_DEPENDENT_EXPR
in GCC 14. The problem is that while parsing templates if all the arguments
of the overloaded builtins are non-dependent types,
targetm.resolve_overloaded_builtin can be called on it. And trying to
fold_convert or fold_build2 subexpressions of such arguments can ICE,
because they can contain various FE specific trees, or standard trees
with NULL_TREE types, or e.g. type mismatches in binary tree operands etc.
All that goes away later when the trees are instantiated and
targetm.resolve_overloaded_builtin is called again, but if it ICEs while
doing that, it won't reach that point. And the reason to call that
hook in that case if none of the arguments are type dependent is to figure
out if the result type is also non-dependent.
Given the general desire to fold stuff in the FE during parsing as little
as possible and fold it only during cp_fold later on and because from the
target *-c.cc files it isn't easily possible to find out if it is
processing_template_decl or not, the following patch just stops folding
anything in the arguments, calls convert instead of fold_convert and
just build2 instead of fold_build2 etc. when in C++ (and keeps doing what
it did for C).
2026-04-24 Jakub Jelinek <jakub@redhat.com>
PR target/124133
* config/rs6000/rs6000-c.cc (c_fold_convert): New function.
(c_fold_build2_loc): Likewise.
(fully_fold_convert): Use c_fold_convert instead of fold_convert.
(altivec_build_resolved_builtin): Likewise. Use c_fold_build2_loc
instead of fold_build2.
(resolve_vec_mul, resolve_vec_adde_sube, resolve_vec_addec_subec):
Use c_fold_build2_loc instead of fold_build2_loc.
(resolve_vec_splats, resolve_vec_extract): Use c_fold_convert instead
of fold_convert.
(resolve_vec_insert): Use c_fold_build2_loc instead of fold_build2.
(altivec_resolve_overloaded_builtin): Use c_fold_convert instead
of fold_convert.
* g++.target/powerpc/pr124133-1.C: New test.
* g++.target/powerpc/pr124133-2.C: New test.
Reviewed-by: Michael Meissner <meissner@linux.ibm.com>
Jakub Jelinek [Fri, 24 Apr 2026 12:36:29 +0000 (14:36 +0200)]
bitintlower: Padding bit fixes, part 5 [PR123635]
The following patch is hopefully the last missing part of the _BitInt
bitint_extended padding bit fixes, this time for
__builtin_{add,sub,mul}_overflow. For __builtin_{add,sub}_overflow,
the extension in the padding bits of a partial limb (if any) is already
done in some cases during the handling of the limbs (and the last
hunk in gimple-lower-bitint.cc just adds it to one spot where it was
missing). The extension in the padding bits of a full limb of padding
bits (if any) and for __builtin_mul_overflow partial limb too is done
in finish_arith_overflow. If both var and obj are NULL, it is
__builtin_*_overflow_p or __builtin_*_overflow that ignores the result
of the operation and only cares about whether it overflowed or not; in
that case there is nothing to extend.
2026-04-24 Jakub Jelinek <jakub@redhat.com>
PR middle-end/123635
PR tree-optimization/124988
* gimple-lower-bitint.cc (bitint_large_huge::finish_arith_overflow):
Handle bitint_extend.
(bitint_large_huge::lower_addsub_overflow): Fix up comment spelling.
For bitint_extended extend the partial limb if any.
* gcc.dg/torture/bitint-91.c: New test.
* gcc.dg/torture/bitint-92.c: New test.
* gcc.dg/torture/bitint-93.c: New test.
* gcc.dg/torture/bitint-94.c: New test.
* gcc.dg/torture/bitint-95.c: New test.
Tomasz Kamiński [Fri, 24 Apr 2026 11:02:22 +0000 (13:02 +0200)]
libstdc++: Reject using views::iota on iota_view.
Resolves LWG4096, views::iota(views::iota(0)) should be rejected.
For __e of type _Tp that is a specialization of iota_view, the
CTAD-based expression iota_view(__e) is well-formed and creates a copy
of __e. As iota_view<decay_t<_Tp>> is ill-formed in this case (iota_view
is not weakly_incrementable), using that type explicitly in the return
type removes the overload from overload resolution.
The (now redundant) __detail::__can_iota_view constraint in the template
head is preserved to provide error messages consistent with adaptors for
other non-incrementable types.
libstdc++-v3/ChangeLog:
* include/std/ranges (_Iota::operator()(_Tp&&)): Replace
auto return type and CTAD with iota_view<decay_t<_Tp>>.
* testsuite/std/ranges/iota/iota_view.cc: Tests if
views::iota(iota_view) is rejected.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
Tomasz Kamiński [Fri, 24 Apr 2026 09:58:39 +0000 (11:58 +0200)]
libstdc++: Constrain views::adjacent(_transform)?<0> to forward_ranges.
This resolves LWG 4098, "views::adjacent<0> should reject non-forward ranges"
which was approved in Sofia 2024.
libstdc++-v3/ChangeLog:
* include/std/ranges (_AdjacentTransform::operator())
(_Adjacent::operator()): Require forward_range for N == 0.
* testsuite/std/ranges/adaptors/adjacent/1.cc: Test if input_ranges
are rejected.
* testsuite/std/ranges/adaptors/adjacent_transform/1.cc: Likewise.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
Tomasz Kamiński [Fri, 24 Apr 2026 09:13:02 +0000 (11:13 +0200)]
libstdc++: Add _GLIBCXX_RESOLVE_LIB_DEFECTS comment for LWG4083.
The LWG4083, "views::as_rvalue should reject non-input ranges" is resolved,
as input_range<_Range> is implied by __detail::__can_as_rvalue_view<_Range>.
can use integer load. Use inner mode as the scalar mode for CONST_VECTOR
load source.
gcc/
PR target/125009
* config/i386/i386-features.cc (ix86_place_single_vector_set):
Support CONST_VECTOR load no larger than integer register.
(ix86_broadcast_inner): Use inner mode as the scalar mode for
CONST_VECTOR load source.
(pass_x86_cse::x86_cse): Generate CONST_VECTOR broadcast source
for CONST_VECTOR load no larger than integer register.
gcc/testsuite/
PR target/125009
* g++.target/i386/pr125009.C: New test.
* gcc.target/i386/pr125009.c: Likewise.
Richard Biener [Wed, 15 Apr 2026 09:10:56 +0000 (11:10 +0200)]
tree-optimization/124843 - vectorize inversion of scalar bools
Scalar bool inversion vectorization fails due to bools having
bit precision. The following adds a pattern to rewrite it
to the corresponding BIT_XOR_EXPR operation which we can vectorize
just fine.
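The rewrite in C terms (for a scalar bool holding 0 or 1):

```c
#include <assert.h>

static _Bool
inv_not (_Bool b)
{
  return !b;
}

/* Equivalent form with XOR, which corresponds to BIT_XOR_EXPR and
   vectorizes directly.  */
static _Bool
inv_xor (_Bool b)
{
  return b ^ 1;
}
```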
PR tree-optimization/124843
* tree-vect-patterns.cc (vect_recog_bool_pattern): Recognize
BIT_NOT_EXPR of scalar bools and rewrite with BIT_XOR_EXPR.
Richard Biener [Thu, 9 Apr 2026 13:18:16 +0000 (15:18 +0200)]
SLP pattern TLC
The following removes STMT_VINFO_SLP_VECT_ONLY_PATTERN which only
exists so we can do some cleanup that doesn't seem to be necessary.
We've been cleaning the original to pattern stmt link, but
add_pattern_stmt never sets that up - it only sets up the pattern
to original stmt link, so the SLP pattern is only reachable from
the pattern SLP nodes representative.
* tree-vectorizer.h (_stmt_vec_info::slp_vect_pattern_only_p):
Remove.
(STMT_VINFO_SLP_VECT_ONLY_PATTERN): Likewise.
* tree-vectorizer.cc (vec_info::new_stmt_vec_info): Do not
initialize STMT_VINFO_SLP_VECT_ONLY_PATTERN.
* tree-vect-loop.cc (vect_analyze_loop_2): Nothing to do
for SLP pattern stmts that are not reachable from scalar
stmts anyway. Remove dead code.
* tree-vect-slp-patterns.cc (complex_pattern::build): Do not
set STMT_VINFO_SLP_VECT_ONLY_PATTERN.
(addsub_pattern::build): Likewise.
* tree-vect-slp.cc (vect_free_slp_tree): Remove dead code.
Richard Biener [Thu, 9 Apr 2026 13:01:14 +0000 (15:01 +0200)]
SLP pattern TLC
The following removes setting of STMT_VINFO_REDUC_DEF on pattern
stmts - those are only ever checked on original scalar stmts now.
But for that to work we have to make the related stmt of the new
SLP pattern stmts the original stmt of a possible pattern.
The only valid SLP_TREE_CODE are VEC_PERM_EXPR and ERROR_MARK,
do not set it to CALL_EXPR.
* tree-vect-slp-patterns.cc (complex_pattern::build):
Add pattern for the original stmt, do not set
STMT_VINFO_REDUC_DEF.
(addsub_pattern::build): Likewise.
ieee.exp tries to inherit flags from DEFAULT_CFLAGS, which is sometimes set and sometimes unset
When it is set, it is set to "-ansi -pedantic-errors", which causes spurious failures.
Introduce a new variable, DEFAULT_IEEE_CFLAGS, which is independent of
DEFAULT_CFLAGS, but which boards may still override if needed.
It includes the default of "-w -fno-inline" as it was in the old style testcases.
The target-specific flags should not be stored in DEFAULT_IEEE_CFLAGS,
but they are still needed for the default flags passed to the compiler.
They can't be stored in DEFAULT_IEEE_CFLAGS because, for x86, depending
on whether -m32 or -m64 comes first, -ffloat-store might or might not be
included for -m64, and we don't want it there for -m64.
PR testsuite/125003
gcc/testsuite/ChangeLog:
* gcc.c-torture/execute/ieee/ieee.exp: Rewrite the default flags
and set DEFAULT_IEEE_CFLAGS if not already set.
Co-authored-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
match.pd: x != -CST ? x + CST : 0 -> x + CST [PR122996]
This patch simplifies expressions of the form x != CST1 ? x + CST2 : 0
into x + CST2 when CST1 == -CST2. This comes up, for example, when
dealing with 'rtrim'-style operations.
Bootstrapped and regression tested on x86_64-pc-linux-gnu.
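A C sketch of the pattern with CST1 = -5 and CST2 = 5 (constants illustrative):

```c
#include <assert.h>

/* When x == -5 the addition already yields 0, so the selection is
   redundant and x + 5 alone suffices.  */
static int
cond_form (int x)
{
  return x != -5 ? x + 5 : 0;
}

static int
folded_form (int x)
{
  return x + 5;
}
```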
PR tree-optimization/122996
gcc/ChangeLog:
* match.pd (x != CST1 ? x + CST2 : 0 -> x + CST2): New pattern.
Patrick Palka [Thu, 23 Apr 2026 22:31:53 +0000 (18:31 -0400)]
c++/modules: PTRMEM_CST member considered unused [PR124981]
Here in _b.C the needed specialization A<B, &B::g> has already been
instantiated in module M, so we stream it in rather than instantiate it.
We then proceed to instantiate A<B, &B::g>::f() whose definition invokes
the pointer-to-member &B::g but it turns out that nothing has marked
B::g as used in this TU so we neglect to emit it and linking fails.
We do mark B::g as used during instantiation of A<B, &B::g> via
mark_template_arguments_used, but this instantiaton happens in module M
not the importer, and TREE_USED is deliberately not streamed.
This patch fixes this by setting TREE_USED on PTRMEM_CST_MEMBER during
stream-in, via the RTU macro, which seems sufficient to ensure B::g gets
emitted. This macro is already used for streaming in subexpressions and
BASELINK_FUNCTIONS so using it for PTRMEM_CST_MEMBER doesn't seem too
out of place.
PR c++/124981
gcc/cp/ChangeLog:
* module.cc (trees_in::core_vals) <case PTRMEM_CST>: Use RTU
instead of RT to stream PTRMEM_CST_MEMBER.
gcc/testsuite/ChangeLog:
* g++.dg/modules/ptrmem-1_a.C: New test.
* g++.dg/modules/ptrmem-1_b.C: New test.
Marek Polacek [Tue, 20 Jan 2026 21:16:33 +0000 (16:16 -0500)]
c++: add lk_module
During Reflection review it came up that we don't have lk_module.
Instead, we're checking lk_external && DECL_MODULE_ATTACH_P &&
!DECL_MODULE_EXPORT_P. This patch adds lk_module which allows further
cleanups.
I'm not sure the cp_parser_template_argument change is required.
gcc/cp/ChangeLog:
* cp-tree.h (enum linkage_kind): Add lk_module.
* module.cc (check_module_decl_linkage): Use DECL_EXTERNAL_LINKAGE_P.
* name-lookup.cc (check_can_export_using_decl): Don't check for
attachment.
* parser.cc (cp_parser_template_argument): Check that linkage isn't
lk_module.
* reflect.cc (eval_has_module_linkage): Check lk_module.
(eval_has_external_linkage): Use DECL_EXTERNAL_LINKAGE_P.
* tree.cc (decl_linkage): Return lk_module if appropriate.
Marek Polacek [Wed, 22 Apr 2026 21:17:05 +0000 (17:17 -0400)]
c++/reflection: erroneous access check on dependent splice [PR124989]
When processing &[:R:] in cp_parser_splice_expression, we call
build_offset_ref with access checking turned off via push_ and
pop_deferring_access_checks, but the same pair of calls is not
present around the call to build_offset_ref in tsubst_splice_expr
and so the following test fails to compile due to access control
checking failures.
PR c++/124989
gcc/cp/ChangeLog:
* pt.cc (tsubst_splice_expr): Turn off access checking for the
build_offset_ref call.
Ben Wu [Tue, 21 Apr 2026 00:08:49 +0000 (20:08 -0400)]
c++: revert fix for PR41127 [PR118374]
Previously, we did not parse definitely in cp_parser_enum_specifier
after seeing CPP_COLON, since we allowed for bitfield widths to follow
"enum identifier :" in member-declarations. However, ISO says that in
such a situation, the colon should be parsed as an enum-base
([dcl.enum]/a), which means bitfield widths are not allowed. This
patch reverts the changes which allowed for bitfield widths, since
parsing definitely improves diagnostics for errant underlying types.
This reverts SVN r151246.
PR c++/118374
PR c++/41127
gcc/cp/ChangeLog:
* parser.cc (cp_parser_enum_specifier): Parse definitely
before cp_parser_type_specifier_seq.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/enum1.C: Update test.
* g++.dg/parse/enum5.C: Expect error with bitfield width
and enum-key in member.
* g++.dg/cpp0x/enum45.C: New test.
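A minimal illustration of the grammar point (not part of the patch): after an enum-key and identifier, the colon must introduce an enum-base, so a bitfield-width reading is unavailable.

```cpp
#include <cassert>
#include <type_traits>

struct S {
  // After "enum E", the colon introduces an enum-base per [dcl.enum]:
  // it must be followed by a type, never a bitfield width.
  enum E : unsigned { A = 1 };
  // enum F : 3;  // ill-formed under the reverted-to reading: 3 is not a type
};

static_assert(std::is_same<std::underlying_type<S::E>::type, unsigned>::value,
              "the colon introduced an enum-base");
```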
c++: Add support for [[gnu::trivial_abi]] attribute [PR107187]
Implement the trivial_abi attribute for GCC to fix ABI compatibility
issues with Clang. Currently, GCC silently ignores this attribute,
causing calling convention mismatches when linking GCC-compiled code with
Clang-compiled object files that use trivial_abi types.
This attribute allows types to be treated as trivial for ABI purposes,
enabling passing in registers instead of by invisible reference. The
attribute is supported with `__attribute__((trivial_abi))` and
`[[clang::trivial_abi]]` spellings.
PR c++/107187
gcc/cp/ChangeLog:
* cp-tree.h (has_trivial_abi_attribute): New function.
(validate_trivial_abi_attribute): Declare.
(classtype_has_non_deleted_copy_or_move_ctor): Declare.
(cxx_clang_attribute_table): Declare.
* tree.cc (handle_trivial_abi_attribute): New function.
(handle_gnu_trivial_abi_attribute): New function.
(classtype_has_trivial_abi): New function.
(validate_trivial_abi_attribute): New function.
(cxx_gnu_attributes): Add trivial_abi entry.
(cxx_clang_attributes): New table for [[clang::trivial_abi]].
* class.cc (finish_struct_bits): Skip BLKmode for types with
trivial_abi attribute.
(classtype_has_non_deleted_copy_or_move_ctor): New function.
(finish_struct_1): Call validate_trivial_abi_attribute before
finish_struct_bits.
* cp-objcp-common.h (cp_objcp_attribute_table): Register
cxx_clang_attribute_table.
* decl.cc (store_parm_decls): Register cleanups for trivial_abi
parameters.
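A hypothetical usage sketch (the type and members are invented for illustration): with trivial_abi honored, such a type is passed and returned in registers despite its non-trivial destructor; compilers that ignore the attribute only warn, and the program behaves the same.

```cpp
#include <cassert>

// UniqueFd has a non-trivial destructor, which would normally force
// pass-by-invisible-reference; [[clang::trivial_abi]] asks for the
// trivial (in-register) calling convention anyway.
struct [[clang::trivial_abi]] UniqueFd {
  int fd;
  explicit UniqueFd(int f) : fd(f) {}
  UniqueFd(UniqueFd &&o) : fd(o.fd) { o.fd = -1; }
  ~UniqueFd() {}  // non-trivial, yet no invisible reference is forced
};

static int take(UniqueFd h) { return h.fd; }  // by-value parameter
```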
Jason Merrill [Thu, 23 Apr 2026 13:20:57 +0000 (09:20 -0400)]
c++: fix typo in consteval, array, modules [PR124973]
Argh, I must have typoed when I realized that we wanted to check
ff_genericize here rather than !ff_only_non_odr. And didn't notice the
problem because I also forgot the -O in the testcase.
Philipp Tomsich [Thu, 23 Apr 2026 13:31:32 +0000 (07:31 -0600)]
RISC-V: Add SUBREG_PROMOTED annotation to min/max si3 expansion
The <bitmanip_optab>si3 expansion for smin/smax/umin/umax sign-extends
both inputs and then performs the DImode min/max, which returns one of
its inputs unchanged. The result is therefore always sign-extended,
but the missing SUBREG_PROMOTED annotation on the lowpart caused GCC
to emit a redundant sext.w.
Add the SUBREG_PROMOTED_VAR_P / SUBREG_PROMOTED_SET(SRP_SIGNED)
annotation, matching rotrsi3, rotlsi3, and other si3 expansions.
gcc/ChangeLog:
* config/riscv/bitmanip.md (<bitmanip_optab>si3): Add
SUBREG_PROMOTED annotation to lowpart result.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zbb-min-max-05.c: New test.
* gcc.target/riscv/zbb-min-max-06.c: New test.
* gcc.target/riscv/zbb-min-max-07-run.c: New test.
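The affected source pattern can be sketched as follows (the behavior is checked here; the codegen claim is that on RV64 with Zbb each function is now a single min/max with no trailing sext.w, since the annotation tells the optimizers the result is already sign-extended):

```cpp
#include <cassert>

// The DImode min/max returns one of its sign-extended inputs unchanged,
// so the SImode result needs no further extension.
static int smin32(int a, int b) { return a < b ? a : b; }
static int smax32(int a, int b) { return a > b ? a : b; }
static unsigned umin32(unsigned a, unsigned b) { return a < b ? a : b; }
```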
Andrew Pinski [Wed, 22 Apr 2026 19:29:26 +0000 (13:29 -0600)]
[PR target/124029][RISC-V] Adjust cost of comparisons
> Given this is a relatively straightforward define_split, it is likely a good
> case for Austin to chase down.
Actually it is easier than that
The middle-end has a costing mechanism for this already:
```
;; cmp: le, old cst: (const_int 268435455 [0xfffffff]) new cst: (const_int 268435456 [0x10000000])
;; old cst cost: 4, new cst cost: 4
```
You need to implement a COMPARE cost in riscv_rtx_costs like it is
done for aarch64_rtx_costs.
It won't be 100% exact because in the riscv case there is no COMPARE
instruction, but at least it captures the relative cost of materializing
each candidate constant.
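The transform behind the quoted dump is the usual constant canonicalization: `x <= C` is equivalent to `x < C + 1` (for C below the type maximum), so the middle-end can pick whichever constant is cheaper to generate on the target. A minimal sketch:

```cpp
#include <cassert>
#include <cstdint>

// The two forms from the dump: old cst 0x0fffffff (le) vs
// new cst 0x10000000 (lt). They are semantically identical; the
// cost hook decides which constant is cheaper to materialize.
static bool le_form(int32_t x) { return x <= 0x0fffffff; }
static bool lt_form(int32_t x) { return x < 0x10000000; }
```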
Marek Polacek [Wed, 22 Apr 2026 15:36:37 +0000 (11:36 -0400)]
c++/reflection: reflect on dependent class template [PR124926]
Here we issue a bogus error for
^^Cls<T>::template Inner
where Inner turns out to be a class type, but we created a SCOPE_REF
because we can't know in advance what it will substitute into, and
^^typename Cls<T>::template Inner
is invalid. The typename can only be used in
^^typename Cls<T>::template Inner<int>
We're taking a reflection so both types and non-types are valid, so
I think we shouldn't give the error for ^^, and take the reflection
of the TEMPLATE_DECL.
PR c++/124926
gcc/cp/ChangeLog:
* pt.cc (tsubst_qualified_id): Rename name_lookup_p parameter to
reflecting_p. Check !reflecting_p instead of name_lookup_p. Do
not give the "instantiation yields a type" error when reflecting_p
is true.
(tsubst_expr) <case REFLECT_EXPR>: Adjust the call to
tsubst_qualified_id.
and all other "(assembler options)" tests in gcc.misc-tests/options.exp.
This happens because my builds use something like
--with-as=/vol/gcc/bin/gas-2.46 instead of relying on a random bundled
version of gas. Therefore the configured assembler name doesn't end in
"as".
The assembler options check in options.exp (check_for_all_options) looks
for
" *as(\\.exe)? .*$as_pattern"
in the gcc -v output, with an empty as_pattern.
While gcc was configured with --with-gnu-as, the gcc -v output starts
with
* include/bits/indirect.h (indirect::operator==): Adjust
noexcept specification.
* testsuite/std/memory/indirect/relops.cc: New test for noexcept
specification.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
Tomasz Kamiński [Tue, 21 Apr 2026 12:34:40 +0000 (14:34 +0200)]
libstdc++: Implement __integral_constant_like in terms of __constexpr_wrapper_like.
This implements LWG4486 (integral-constant-like and constexpr-wrapper-like
exposition-only concept duplication).
libstdc++-v3/ChangeLog:
* include/bits/simd_details.h (simd::__constexpr_wrapper_like):
Move to...
* include/std/concepts (std::__constexpr_wrapper_like): Moved
from bits/simd_details.h.
* include/std/span (std::__integral_constant_like): Define in
terms of __constexpr_wrapper_like.
* testsuite/std/simd/traits_impl.cc: Added using declaration
for std::__constexpr_wrapper_like.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
changed the x86_cse pass to also remove redundant TLS calls. Remove the
SSE2 check in x86_cse::gate so that redundant TLS calls are removed when
SSE is disabled.
gcc/
PR target/124994
* config/i386/i386-features.cc (x86_cse::gate): Drop TARGET_SSE2.
gcc/testsuite/
PR target/124994
* gcc.target/i386/pr124994.c: New test.
From: "Bohan Lei"<garthlei@linux.alibaba.com>
Date: Thu, Jan 8, 2026, 10:49
Subject: [PATCH] RISC-V: Remove redundant CALL_P check
To: <gcc-patches@gcc.gnu.org> Cc: <juzhe.zhong@rivai.ai>, <pan2.li@intel.com>, "Bohan Lei"<garthlei@linux.alibaba.com>
Since we are using `reg_set_p` to check VXRM definition, the `CALL_P`
check has become redundant. VXRM is marked as call-used in riscv.h, and
`reg_set_p` in `vxrm_unknown_p` should always return true when a call is
encountered.
Jason Merrill [Wed, 22 Apr 2026 18:53:14 +0000 (14:53 -0400)]
c++: consteval, array, modules [PR124973]
Here the consteval holder constructor calls the defaulted element_array
constructor, which uses a VEC_INIT_EXPR to call the defaulted element
constructor.
When we read in the holder constructor, we need to clone it, so we call
finish_function, which calls cp_fold_function_non_odr_use, which tries to
constant-evaluate the call to the element_array constructor. This
eventually wants to evaluate the VEC_INIT_EXPR, which wants to call the
element constructor (complete object clone). But we haven't cloned the
element constructor yet, so mark_used tries to synthesize it again, which
breaks because the constructor is already defined, just not cloned yet.
We should have cloned the element constructor first, but we didn't know that
the element_array constructor depends on it because VEC_INIT_EXPR doesn't
express that; build_vec_init_expr calls build_vec_init_elt and then throws
it away. Perhaps we want to add the elt_init as an additional operand that
is used to express dependencies, but ignored in expansion?
It would also be nice not to repeat all the finish_function passes when
loading a function from a module; we already did
cp_fold_function_non_odr_use and such for this function before writing out
the module, doing it again is a waste of time.
But also, trying to constant-evaluate the element_array constructor is wrong
for _non_odr_use; it shouldn't be doing any optimization folding.
Furthermore, since the TARGET_EXPR is wrapped in an INIT_EXPR, we should
never have tried to fold it by itself, before cp_genericize_init_expr has a
chance to elide it. So let's only do that folding when ff_genericize, like
the other TARGET_EXPR transformations. This is a much simpler fix for this
testcase.
While we're at it, let's also suppress the other flag_no_inline-conditional
folding when ff_only_non_odr.
PR c++/124973
PR c++/120502
PR c++/120005
gcc/cp/ChangeLog:
* cp-gimplify.cc (cp_fold_r) <case TARGET_EXPR>: Only
do optimization folding when ff_genericize.
(cp_fold) <case CALL_EXPR>: Don't do
optimization folding when ff_only_non_odr.
gcc/testsuite/ChangeLog:
* g++.dg/modules/consteval-1_a.C: New test.
* g++.dg/modules/consteval-1_b.C: New test.
Currently, trying to do "make install-html" after a build results in an
error:
$ make install-html
⋮
Doing install-html in gcc
make[2]: Entering directory '/tmp/build/gcc'
make[2]: *** No rule to make target '/tmp/build/gcc/HTML/gcc-16.0.1/ga68-coding-guidelines.info', needed by 'algol68.install-html'. Stop.
make[2]: Leaving directory '/tmp/build/gcc'
make[1]: *** [Makefile:5054: install-html-gcc] Error 1
make[1]: Leaving directory '/tmp/build'
make: *** [Makefile:1929: do-install-html] Error 2
The problem is a typo in a dependency of the algol68.install-html rule.
Fix it by removing the ".info" suffix.
With this change, "make install-html" succeeds but ga68-internals and
ga68-coding-guidelines don't get installed. Assuming this is
unintentional, extend the for loop to also install them.
gcc/algol68/ChangeLog:
* Make-lang.in (algol68.install-html): Fix
ga68-coding-guidelines dependency. Install all dependencies.
Andrew Pinski [Wed, 15 Apr 2026 20:39:51 +0000 (13:39 -0700)]
cfghooks: Pass data to callback function of make_forwarder_block
This makes a cleanup that is way overdue and should have been done
years ago. Instead of setting some global/static variables for the
callback function to check here, we pass down the data to the callback
function. This reduces the number of global variables (which should help
with the Parallel GCC project). Plus, since mfb_keep_just was exported outside
of cfgloopmanip.cc (it was used in tree-ssa-threadupdate.cc), it also reduces
what is shared between files.
I found this useful when working on PR 123113 as I needed a new callback
function.
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* cfghooks.cc (make_forwarder_block): New data argument,
pass it down to redirect_edge_p.
* cfghooks.h (make_forwarder_block): Add void* argument.
* cfgloop.cc (mfb_reis_set): Remove.
(mfb_redirect_edges_in_set): Add new data argument.
Use it instead of mfb_reis_set.
(form_subloop): Create a local variable instead of
mfb_reis_set. Update call to make_forwarder_block.
(merge_latch_edges): Likewise.
* cfgloopmanip.cc (mfb_kj_edge): Remove.
(mfb_keep_just): Add new data argument.
Use it instead of mfb_kj_edge.
(create_preheader): Use local variable instead of
mfb_kj_edge. Update call to make_forwarder_block.
* cfgloopmanip.h (mfb_keep_just): Add void* argument.
* tree-cfgcleanup.cc (mfb_keep_latches): Add unused void* argument.
(cleanup_tree_cfg_noloop): Update call to make_forwarder_block.
* tree-ssa-threadupdate.cc
(fwd_jt_path_registry::thread_through_loop_header): Use local
variable instead of mfb_kj_edge. Update call to make_forwarder_block.
Signed-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
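A self-contained sketch of the cleanup pattern (names are stand-ins, not GCC's actual types): instead of the predicate reading a file-scope variable, the caller threads a void* data argument through to it.

```cpp
#include <cassert>

typedef int edge_t;  // stand-in for GCC's edge

// The callback now receives its data explicitly...
typedef bool (*redirect_edge_p)(edge_t e, void *data);

static bool keep_just(edge_t e, void *data)
{
  // ...so it reads the edge to keep through data, not a global.
  return e != *static_cast<edge_t *>(data);
}

// Analogue of make_forwarder_block passing data down to the predicate.
static int count_redirected(const edge_t *edges, int n,
                            redirect_edge_p pred, void *data)
{
  int redirected = 0;
  for (int i = 0; i < n; ++i)
    if (pred(edges[i], data))
      ++redirected;
  return redirected;
}
```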
Andrew Pinski [Wed, 15 Apr 2026 00:57:49 +0000 (17:57 -0700)]
cfghooks: Remove new_bb_cbk callback from make_forwarder_block
This callback seems to be unused since it was allowed to be NULL
in r0-78960-g89f8f30f356532 (19 years ago), so let's just remove it.
This is also the first step in changing the callback passed to make_forwarder_block.
Alice Carlotti [Tue, 21 Apr 2026 18:53:57 +0000 (19:53 +0100)]
aarch64 testsuite: Merge exts_sve2 into exts
Now that we support enabling +sme without +sve2, we no longer need to
include armv9-a when checking assembler support for SME extensions.
Merge exts_sve2 back into exts, and remove the separate handling for
exts_sve2. This is a partial revert of r16-2660-g9793ffce933234.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp: Merge exts_sve2 handling into exts.
Alice Carlotti [Tue, 21 Apr 2026 18:31:22 +0000 (19:31 +0100)]
aarch64 testsuite: Fix gating of sme-lutv2 asm tests
These tests were configured to try assembling whenever the assembler
supports sme2. Add dg-do directives to restrict this to assemblers that
support sme-lutv2 (and otherwise just compile the test).
aarch64: PR124908 Fix ICE in svld1rq fold with -msve-vector-bits=128
svld1rq is a replicated-quadword load: it loads 16 bytes and
replicates them to fill the SVE register. When -msve-vector-bits=128
the instruction can be folded to a normal load.
The GIMPLE fold for svld1rq transforms the intrinsic into a 128-bit
memory load followed by a VEC_PERM_EXPR that replicates the loaded
value. When VL == 128, the VEC_PERM_EXPR becomes an identity
permutation. The checking assertion that validates the permutation
(can_vec_perm_const_p) fails for this degenerate case because the
vec_perm_const hook does not recognise the cross-mode identity
permutation (e.g. V16QI -> VNx16QI).
Fix by detecting when the SVE vector has the same number of elements as
the 128-bit quadword (known_eq (lhs_len, source_nelts)) and emitting a
VIEW_CONVERT_EXPR instead of a VEC_PERM_EXPR.
Bootstrapped and tested on aarch64-none-linux-gnu.
PR target/124908
* config/aarch64/aarch64-sve-builtins-base.cc
(svld1rq_impl::fold): When the SVE vector length equals the
quadword width, emit VIEW_CONVERT_EXPR instead of VEC_PERM_EXPR.
gcc/testsuite/ChangeLog:
PR target/124908
* gcc.target/aarch64/sve/acle/general/ld1rq_2.c: New test.
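The degenerate case can be sketched in plain scalar code (illustrative only, no SVE intrinsics): replicating a 16-byte quadword into a destination that is itself exactly 16 bytes copies the data unchanged, which is why a view-convert suffices in place of the permutation.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// General VEC_PERM-style replication of one 16-byte quadword into a
// vec_bytes-wide vector. When vec_bytes == 16 this is the identity
// permutation, i.e. equivalent to a plain reinterpretation of src.
static void replicate_q(const uint8_t *src, uint8_t *dst, size_t vec_bytes)
{
  for (size_t i = 0; i < vec_bytes; ++i)
    dst[i] = src[i % 16];
}
```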
Jakub Jelinek [Wed, 22 Apr 2026 13:44:28 +0000 (15:44 +0200)]
Update crontab and git_update_version.py
2026-04-22 Jakub Jelinek <jakub@redhat.com>
maintainer-scripts/
* crontab: Snapshots from trunk are now GCC 17 related.
Add GCC 16 snapshots from the respective branch.
contrib/
* gcc-changelog/git_update_version.py (active_refs): Add
releases/gcc-16.
Jakub Jelinek [Wed, 22 Apr 2026 13:03:48 +0000 (15:03 +0200)]
c++, libstdc++: Bump __cpp_impl_reflection and __cpp_lib_reflection
Both __cpp_impl_reflection and __cpp_lib_reflection were increased from
202506L to 202603L post Croydon, I assume to show that P3795R2 (and maybe
some issues too) has been implemented.
Now, we do implement P3795R2 except for the is_applicable_type,
is_nothrow_applicable_type and apply_result metafunctions, but Jonathan says
there is agreement in LWG that to test for availability of those one should
test __cpp_lib_reflection >= 202603L && __cpp_lib_apply >= 202603L.
So, this patch bumps both FTMs.
2026-04-22 Jakub Jelinek <jakub@redhat.com>
gcc/c-family/
* c-cppbuiltin.cc (c_cpp_builtins): Bump __cpp_impl_reflection value
from 202506L to 202603L.
gcc/testsuite/
* g++.dg/DRs/dr2581-2.C: Adjust for __cpp_impl_reflection bump from
202506L to 202603L.
* g++.dg/reflect/feat1.C: Likewise. Also adjust for
__cpp_lib_reflection bump from 202506L to 202603L.
* g++.dg/reflect/feat2.C: Likewise.
* g++.dg/reflect/feat3.C: Likewise.
libstdc++-v3/
* include/bits/version.def (reflection): Bump 202506L to 202603L
for both v and in extra_cond.
* include/bits/version.h: Regenerate.
* include/std/meta: Compare __glibcxx_reflection against
202603L rather than 202506L.
* src/c++23/std.cc.in: Likewise.
Reviewed-by: Jason Merrill <jason@redhat.com>
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
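The availability test described above can be sketched as follows; on current toolchains (and without including <version> or the relevant headers) the combined check is false, so the function returns false.

```cpp
// Guard for the P3795R2 metafunctions (is_applicable_type,
// is_nothrow_applicable_type, apply_result): per LWG, use them only when
// both feature-test macros are at least 202603L.
static bool reflection_apply_available()
{
#if defined(__cpp_lib_reflection) && defined(__cpp_lib_apply) \
    && __cpp_lib_reflection >= 202603L && __cpp_lib_apply >= 202603L
  return true;
#else
  return false;
#endif
}
```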