Gaius Mulley [Fri, 21 Apr 2023 12:19:54 +0000 (13:19 +0100)]
PR modula2/109586 cc1gm2 ICE when compiling large source files.
The function m2block_RememberConstant calls m2tree_IsAConstant.
However IsAConstant does not recognise TREE_CODE(t) ==
CONSTRUCTOR as a constant. Without this patch CONSTRUCTOR
constants are garbage collected (and not preserved), resulting in
a corrupt tree and a crash.
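A sketch of the shape of the fix (the exact set of codes checked in
m2tree.cc may differ; this is an illustration, not the verbatim patch):

bool
m2tree_IsAConstant (tree t)
{
  return TREE_CODE (t) == INTEGER_CST || TREE_CODE (t) == REAL_CST
         || TREE_CODE (t) == STRING_CST
         /* Recognise CONSTRUCTOR nodes as constants too so that
            m2block_RememberConstant preserves them from GC.  */
         || TREE_CODE (t) == CONSTRUCTOR;
}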
Richard Biener [Fri, 21 Apr 2023 10:57:17 +0000 (12:57 +0200)]
tree-optimization/109573 - avoid ICEing on unexpected live def
The following relaxes the assert in vectorizable_live_operation
where we catch currently unhandled cases: it now also allows an
intermediate copy, as happens here, and demotes the assert to
checking only.
PR tree-optimization/109573
* tree-vect-loop.cc (vectorizable_live_operation): Allow
unhandled SSA copy as well. Demote assert to checking only.
Richard Biener [Fri, 21 Apr 2023 10:02:28 +0000 (12:02 +0200)]
Use correct CFG orders for DF worklist processing
This adjusts the remaining three RPO computes in DF. The DF_FORWARD
problems should use a RPO on the forward graph, the DF_BACKWARD
problems should use a RPO on the inverted graph.
Conveniently now inverted_rev_post_order_compute computes a RPO.
We still use post_order_compute and reverse its order for its
side-effect of deleting unreachable blocks.
This results in an overall reduction of visited blocks on cc1files by 5.2%.
Because on the reverse CFG most regions are irreducible, there are
a few cases where the number of visited blocks increases. For the set
of cc1files I have, this is the case for et-forest.i, graph.i, hwint.i,
tree-ssa-dom.i, tree-ssa-loop-ch.i and tree-ssa-threadedge.i. For
tree-ssa-dse.i it's above the noise; I investigated more closely
and figured it is really bad luck due to the irreducibility.
* df-core.cc (df_analyze): Compute RPO on the reverse graph
for DF_BACKWARD problems.
(loop_post_order_compute): Rename to ...
(loop_rev_post_order_compute): ... this, compute a RPO.
(loop_inverted_post_order_compute): Rename to ...
(loop_inverted_rev_post_order_compute): ... this, compute a RPO.
(df_analyze_loop): Use RPO on the forward graph for DF_FORWARD
problems, RPO on the inverted graph for DF_BACKWARD.
Richard Biener [Fri, 21 Apr 2023 07:40:01 +0000 (09:40 +0200)]
change inverted_post_order_compute to inverted_rev_post_order_compute
The following changes the inverted_post_order_compute API back to
a plain C array interface and computing a reverse post order since
that's what's always required. It will make massaging DF to use
the correct iteration orders easier. Elsewhere it requires replacing
backward iteration over the computed order with forward iteration.
* cfganal.h (inverted_rev_post_order_compute): Rename
from ...
(inverted_post_order_compute): ... this. Add struct function
argument, change allocation to a C array.
* cfganal.cc (inverted_rev_post_order_compute): Likewise.
* lcm.cc (compute_antinout_edge): Adjust.
* lra-lives.cc (lra_create_live_ranges_1): Likewise.
* tree-ssa-dce.cc (remove_dead_stmt): Likewise.
* tree-ssa-pre.cc (compute_antic): Likewise.
Richard Biener [Fri, 21 Apr 2023 09:40:23 +0000 (11:40 +0200)]
change DF to use the proper CFG order for DF_FORWARD problems
This changes DF to use RPO on the forward graph for DF_FORWARD
problems. While that naturally maps to pre_and_rev_postorder_compute
we use the existing (wrong) CFG order for DF_BACKWARD problems
computed by post_order_compute since that provides the required
side-effect of deleting unreachable blocks.
The change requires turning the inconsistent vec<int> vs int * back
to consistent int *. A followup patch will change the
inverted_post_order_compute API and change the DF_BACKWARD problem
to use the correct RPO on the backward graph together with statistics
I produced last year for the combined effect.
* df.h (df_d::postorder_inverted): Change back to int *,
clarify comments.
* df-core.cc (rest_of_handle_df_finish): Adjust.
(df_analyze_1): Likewise.
(df_analyze): For DF_FORWARD problems use RPO on the forward
graph. Adjust.
(loop_inverted_post_order_compute): Adjust API.
(df_analyze_loop): Adjust.
(df_get_n_blocks): Likewise.
(df_get_postorder): Likewise.
Consider the following testcase:
void f (void * restrict in, void * restrict out, int l, int n, int m)
{
for (int i = 0; i < l; i++){
for (int j = 0; j < m; j++){
for (int k = 0; k < n; k++)
{
vint8mf8_t v = __riscv_vle8_v_i8mf8 (in + i + j, 17);
__riscv_vse8_v_i8mf8 (out + i + j, v, 17);
}
}
}
}
Compile option: -O3
Before this patch:
mv a7,a2
mv a6,a0
mv t1,a1
mv a2,a3
vsetivli zero,17,e8,mf8,ta,ma
ble a7,zero,.L1
ble a4,zero,.L1
ble a3,zero,.L1
...
After this patch:
mv a7,a2
mv a6,a0
mv t1,a1
mv a2,a3
ble a7,zero,.L1
ble a4,zero,.L1
ble a3,zero,.L1
add a1,a0,a4
li a0,0
vsetivli zero,17,e8,mf8,ta,ma
...
This issue is a missed optimization produced by Phase 3 global backward demand
fusion, not by LCM.
This patch fixes the poor placement of the vsetvl.
The placement is selected not by LCM but by Phase 3 (VL/VTYPE demand info
backward fusion and propagation), which
I introduced into the VSETVL PASS to enhance LCM and improve vsetvl instruction
performance.
This patch suppresses Phase 3's overly aggressive backward fusion and
propagation to the top of the function
when there is no instruction defining the AVL (the AVL is a 0 ~ 31 imm, since
the vsetivli instruction allows an imm value instead of a reg).
You may want to ask why we need Phase 3 to do this job.
Well, there are many situations that pure LCM fails to optimize; here I can
show a simple case to demonstrate it:
void f (void * restrict in, void * restrict out, int n, int m, int cond)
{
size_t vl = 101;
for (size_t j = 0; j < m; j++){
if (cond) {
for (size_t i = 0; i < n; i++)
{
vint8mf8_t v = __riscv_vle8_v_i8mf8 (in + i + j, vl);
__riscv_vse8_v_i8mf8 (out + i, v, vl);
}
} else {
for (size_t i = 0; i < n; i++)
{
vint32mf2_t v = __riscv_vle32_v_i32mf2 (in + i + j, vl);
v = __riscv_vadd_vv_i32mf2 (v,v,vl);
__riscv_vse32_v_i32mf2 (out + i, v, vl);
}
}
}
}
You can see:
The first inner loop needs vsetvli e8 mf8 for vle+vse.
The second inner loop needs vsetvli e32 mf2 for vle+vadd+vse.
If we don't have Phase 3 (only LCM, Phase 4), we will end up with:
outerloop:
...
vsetvli e8mf8
inner loop 1:
....
vsetvli e32mf2
inner loop 2:
....
However, if we have Phase 3, Phase 3 is going to fuse the vsetvli e32 mf2 of
inner loop 2 into vsetvli e8 mf8, then we will end up with this result after
phase 3:
outerloop:
...
inner loop 1:
vsetvli e32mf2
....
inner loop 2:
vsetvli e32mf2
....
Then, this demand information after phase 3 will be well optimized after phase 4
(LCM), after Phase 4 result is:
vsetvli e32mf2
outerloop:
...
inner loop 1:
....
inner loop 2:
....
You can see this is the optimal codegen after the current VSETVL PASS
(Phase 3: demand backward fusion and propagation + Phase 4: LCM). This
was a known issue when I started to implement the VSETVL PASS.
Robin Dapp [Fri, 21 Apr 2023 07:38:06 +0000 (09:38 +0200)]
riscv: Fix <bitmanip_insn> fallout.
PR109582: Since r14-116, generic.md uses standard names instead of the
types defined in the <bitmanip_insn> iterator (which match instruction
names). Change this.
gcc/ChangeLog:
PR target/109582
* config/riscv/generic.md: Change standard names to insn names.
rs6000: xfail float128 comparison test case that fails on powerpc64.
This patch xfails a float128 comparison test case on powerpc64 that
fails due to a longstanding issue with floating-point compares.
See PR58684 for more information.
When float128 hardware is enabled (-mfloat128-hardware), xscmpuqp is
generated for comparison which is unexpected. When float128 software
emulation is enabled (-mno-float128-hardware), we still have to xfail
the hardware version (__lekf2_hw) which finally generates xscmpuqp.
Richard Biener [Thu, 20 Apr 2023 11:56:21 +0000 (13:56 +0200)]
Fix LCM dataflow CFG order
The following fixes the initial order the LCM dataflow routines process
BBs. For a forward problem you want reverse postorder, for a backward
problem you want reverse postorder on the inverted graph.
The LCM iteration has very many other issues but this allows to
turn inverted_post_order_compute into computing a reverse postorder
more easily.
* lcm.cc (compute_antinout_edge): Use RPO on the inverted graph.
(compute_laterin): Use RPO.
(compute_available): Likewise.
* update_web_docs_git: Add a mechanism to override makeinfo,
texi2dvi and texi2pdf, and default them to
/home/gccadmin/texinfo/install-git/bin/${tool}, if present.
Patrick Palka [Thu, 20 Apr 2023 19:16:59 +0000 (15:16 -0400)]
c++: simplify TEMPLATE_TYPE_PARM level lowering
1. Don't bother recursing when level lowering a cv-qualified type
template parameter.
2. Get rid of the recursive loop breaker when level lowering a
constrained auto, and enable the TEMPLATE_PARM_DESCENDANTS cache in
this case too. This should be safe to do now that we no longer
substitute constraints on an auto.
gcc/cp/ChangeLog:
* pt.cc (tsubst) <case TEMPLATE_TYPE_PARM>: Don't recurse when
level lowering a cv-qualified type template parameter. Remove
recursive loop breaker in the level lowering case for constrained
autos. Use the TEMPLATE_PARM_DESCENDANTS cache in this case as
well.
Patrick Palka [Thu, 20 Apr 2023 19:00:06 +0000 (15:00 -0400)]
c++: use TREE_VEC for trailing args of variadic built-in traits
This patch makes us use TREE_VEC instead of TREE_LIST to represent the
trailing arguments of a variadic built-in trait. These built-ins are
typically passed a simple pack expansion as the second argument, e.g.
__is_constructible(T, Ts...)
and the main benefit of this representation change is that substituting
into this argument list is now basically free since tsubst_template_args
makes sure we reuse the TREE_VEC of the corresponding ARGUMENT_PACK when
expanding such a pack expansion. In the previous TREE_LIST representation
we would need to convert the expanded pack expansion into a TREE_LIST
(via tsubst_tree_list).
Note that an empty set of trailing arguments is now represented as an
empty TREE_VEC instead of NULL_TREE, so now TRAIT_TYPE/EXPR_TYPE2 will
be empty only for unary traits.
gcc/cp/ChangeLog:
* constraint.cc (diagnose_trait_expr): Convert a TREE_VEC
of arguments into a TREE_LIST for sake of pretty printing.
* cxx-pretty-print.cc (pp_cxx_trait): Handle TREE_VEC
instead of TREE_LIST of trailing variadic trait arguments.
* method.cc (constructible_expr): Likewise.
(is_xible_helper): Likewise.
* parser.cc (cp_parser_trait): Represent trailing variadic trait
arguments as a TREE_VEC instead of TREE_LIST.
* pt.cc (value_dependent_expression_p): Handle TREE_VEC
instead of TREE_LIST of trailing variadic trait arguments.
* semantics.cc (finish_type_pack_element): Likewise.
(check_trait_type): Likewise.
Patrick Palka [Thu, 20 Apr 2023 19:00:04 +0000 (15:00 -0400)]
c++: make strip_typedefs generalize strip_typedefs_expr
Currently if we have a TREE_VEC of types that we want to strip of typedefs,
we unintuitively need to call strip_typedefs_expr instead of strip_typedefs
since only strip_typedefs_expr handles TREE_VEC, and it also dispatches
to strip_typedefs when given a type. But this seems backwards: arguably
strip_typedefs_expr should be the more specialized function, which
strip_typedefs dispatches to (and thus generalizes).
So this patch makes strip_typedefs subsume strip_typedefs_expr rather
than vice versa, which allows for some simplifications.
gcc/cp/ChangeLog:
* tree.cc (strip_typedefs): Move TREE_LIST handling to
strip_typedefs_expr. Dispatch to strip_typedefs_expr for
non-type 't'.
<case TYPENAME_TYPE>: Remove manual dispatching to
strip_typedefs_expr.
<case TRAIT_TYPE>: Likewise.
(strip_typedefs_expr): Replaces calls to strip_typedefs_expr
with strip_typedefs throughout. Don't dispatch to strip_typedefs
for type 't'.
<case TREE_LIST>: Replace this with the better version from
strip_typedefs.
Andrew MacLeod [Thu, 20 Apr 2023 17:10:40 +0000 (13:10 -0400)]
Do not ignore UNDEFINED ranges when determining PHI equivalences.
Do not ignore UNDEFINED name arguments when registering two-way equivalences
from PHIs.
PR tree-optimization/109564
gcc/
* gimple-range-fold.cc (fold_using_range::range_of_phi): Do not ignore
UNDEFINED range names when deciding if all PHI arguments are the same.
Jakub Jelinek [Thu, 20 Apr 2023 17:44:27 +0000 (19:44 +0200)]
tree-vect-patterns: One small vect_recog_ctz_ffs_pattern tweak [PR109011]
I've noticed I made a typo: ifn this late in this function
is always only IFN_CTZ or IFN_FFS, never IFN_CLZ.
Due to this typo, we weren't using the originally intended
.CTZ (X) = .POPCOUNT ((X - 1) & ~X)
but
.CTZ (X) = PREC - .POPCOUNT (X | -X)
instead when we want to emit __builtin_ctz*/.CTZ using .POPCOUNT.
Both compute the same value, both are defined at 0 with the
same value (PREC), both have same number of GIMPLE statements,
but I think the former ought to be preferred, because lots of targets
have andn as a single operation rather than two, and also putting
a -1 constant into a vector register is often cheaper than a vector
with the power-of-two value PREC broadcast.
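Both expansions can be modelled with scalar code (PREC == 32 here;
an illustration only, not part of the patch):

unsigned ctz_andn (unsigned x)
{
  /* .CTZ (X) = .POPCOUNT ((X - 1) & ~X); yields 32 at x == 0.  */
  return __builtin_popcount ((x - 1) & ~x);
}

unsigned ctz_orneg (unsigned x)
{
  /* .CTZ (X) = PREC - .POPCOUNT (X | -X); also yields 32 at x == 0.  */
  return 32 - __builtin_popcount (x | -x);
}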
2023-04-20 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109011
* tree-vect-patterns.cc (vect_recog_ctz_ffs_pattern): Use
.CTZ (X) = .POPCOUNT ((X - 1) & ~X) in preference to
.CTZ (X) = PREC - .POPCOUNT (X | -X).
Jakub Jelinek [Thu, 20 Apr 2023 17:26:17 +0000 (19:26 +0200)]
c: Avoid -Wenum-int-mismatch warning for redeclaration of builtin acc_on_device [PR107041]
The new -Wenum-int-mismatch warning triggers with -Wsystem-headers in
<openacc.h>, for obvious reasons the builtin acc_on_device uses int
type argument rather than enum which isn't defined yet when the builtin
is created, while the OpenACC spec requires it to have acc_device_t
enum argument. The header makes sure it has int underlying type by using
negative and __INT_MAX__ enumerators.
I've tried to make the builtin type-generic or just varargs, but that
changes behavior e.g. when one calls it with some C++ class which has
cast operator to acc_device_t, so the following patch instead disables
the warning for this builtin.
[LRA]: Exclude some hard regs for multi-reg inout reload pseudos used in asm in different mode
See gcc.c-torture/execute/20030222-1.c. Consider the code for a 32-bit (e.g. BE) target:
int i, v; long x; x = v; asm ("" : "=r" (i) : "0" (x));
We generate the following RTL with reload insns:
1. subreg:si(x:di, 0) = 0;
2. subreg:si(x:di, 4) = v:si;
3. t:di = x:di, dead x;
4. asm ("" : "=r" (subreg:si(t:di,4)) : "0" (t:di))
5. i:si = subreg:si(t:di,4);
If we assign the hard reg of x to t, dead code elimination will remove insn #2
and we will use an uninitialized hard reg. So exclude the hard reg of x for t.
We could ignore this problem for a non-empty asm using the whole value of x,
but it is hard to check that the asm is expanded into insns really using x
and setting r.
The old reload pass used the same approach.
gcc/ChangeLog
* lra-constraints.cc (match_reload): Exclude some hard regs for
multi-reg inout reload pseudos used in asm in different mode.
[PR target/108248] [RISC-V] Break down some bitmanip insn types
This is primarily Raphael's work. All I did was adjust it to apply to the
trunk and add the new types to generic.md's scheduling model.
The basic idea here is to make sure we have the ability to schedule the
bitmanip instructions with a finer degree of control. Some of the bitmanip
instructions are likely to have differing scheduler characteristics across
different implementations.
So rather than assign these instructions a generic "bitmanip" type, this
patch assigns them a type based on their RTL code by using the <bitmanip_insn>
iterator for the type. Naturally we have to add a few new types. It affects
clz, ctz, cpop, min, max.
We didn't do this for things like shNadd, single bit manipulation, etc. We
certainly could if the need presents itself.
I threw all the new types into the generic_alu bucket in the generic
scheduling model. Seems as good a place as any. Someone who knows the
sifive uarch should probably add these types (and bitmanip) to the sifive
scheduling model.
We also noticed that the recently added orc.b didn't have a type at all.
So we added it as a generic bitmanip type.
This has been bootstrapped in a gcc-12 base and I've built and run the
testsuite without regressions on the trunk.
Given it was primarily Raphael's work I could probably approve & commit it.
But I'd like to give the other RISC-V folks a chance to chime in.
PR target/108248
gcc/
* config/riscv/bitmanip.md (clz, ctz, pcnt, min, max patterns): Use
<bitmanip_insn> as the type to allow for fine grained control of
scheduling these insns.
* config/riscv/generic.md (generic_alu): Add bitmanip, clz, ctz, pcnt,
min, max.
* config/riscv/riscv.md (type attribute): Add types for clz, ctz,
pcnt, signed and unsigned min/max.
The redundant register spilling is eliminated.
However, there is one more issue that needs to be addressed, which is the
redundant move instruction "vmv8r.v". That is another story, and it will be
fixed by another patch (fine tuning the RVV machine description RA constraint).
RISC-V: Fix wrong check of register occurrences [PR109535]
count_occurrences will only count the same RTX (same code and same mode),
but what we want to track is the occurrences of a register: a register
might appear in the insn with a different mode or be contained in a SUBREG.
Testcase coming from Kito.
gcc/ChangeLog:
PR target/109535
* config/riscv/riscv-vsetvl.cc (count_regno_occurrences): New function.
(pass_vsetvl::cleanup_insns): Fix bug.
gcc/testsuite/ChangeLog:
PR target/109535
* g++.target/riscv/rvv/base/pr109535.C: New test.
* gcc.target/riscv/rvv/base/pr109535.c: New test.
Jakub Jelinek [Thu, 20 Apr 2023 11:02:52 +0000 (13:02 +0200)]
tree: Add 3+ argument fndecl_built_in_p
On Wed, Feb 22, 2023 at 09:52:06AM +0000, Richard Biener wrote:
> > The following testcase ICEs because we still have some spots that
> > treat BUILT_IN_UNREACHABLE specially but not BUILT_IN_UNREACHABLE_TRAP
> > the same.
This patch uses (fndecl_built_in_p (node, BUILT_IN_UNREACHABLE)
|| fndecl_built_in_p (node, BUILT_IN_UNREACHABLE_TRAP))
a lot, and from grepping around we do something like that in lots of
other places, or in some spots instead as
(fndecl_built_in_p (node, BUILT_IN_NORMAL)
&& (DECL_FUNCTION_CODE (node) == BUILT_IN_WHATEVER1
|| DECL_FUNCTION_CODE (node) == BUILT_IN_WHATEVER2))
The following patch adds an overload for this case, so we can write
it in a shorter way, using C++11 argument packs so that it supports
as many codes as one needs.
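With the new overload, the check above can be written as (usage sketch):

  if (fndecl_built_in_p (node, BUILT_IN_UNREACHABLE,
                         BUILT_IN_UNREACHABLE_TRAP))
    ...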
2023-04-20 Jakub Jelinek <jakub@redhat.com>
Jonathan Wakely <jwakely@redhat.com>
* tree.h (built_in_function_equal_p): New helper function.
(fndecl_built_in_p): Turn into variadic template to support
1 or more built_in_function arguments.
* builtins.cc (fold_builtin_expect): Use 3 argument fndecl_built_in_p.
* gimplify.cc (goa_stabilize_expr): Likewise.
* cgraphclones.cc (cgraph_node::create_clone): Likewise.
* ipa-fnsummary.cc (compute_fn_summary): Likewise.
* omp-low.cc (setjmp_or_longjmp_p): Likewise.
* cgraph.cc (cgraph_edge::redirect_call_stmt_to_callee,
cgraph_update_edges_for_call_stmt_node,
cgraph_edge::verify_corresponds_to_fndecl,
cgraph_node::verify_node): Likewise.
* tree-stdarg.cc (optimize_va_list_gpr_fpr_size): Likewise.
* gimple-ssa-warn-access.cc (matching_alloc_calls_p): Likewise.
* ipa-prop.cc (try_make_edge_direct_virtual_call): Likewise.
Jakub Jelinek [Thu, 20 Apr 2023 09:55:16 +0000 (11:55 +0200)]
tree-vect-patterns: Pattern recognize ctz or ffs using clz, popcount or ctz [PR109011]
The following patch allows vectorizing __builtin_ffs*/.FFS even if
we just have vector .CTZ support, or __builtin_ffs*/.FFS/__builtin_ctz*/.CTZ
if we just have vector .CLZ or .POPCOUNT support.
It uses various expansions from Hacker's Delight book as well as GCC's
expansion, in particular:
.CTZ (X) = PREC - .CLZ ((X - 1) & ~X)
.CTZ (X) = .POPCOUNT ((X - 1) & ~X)
.CTZ (X) = (PREC - 1) - .CLZ (X & -X)
.FFS (X) = PREC - .CLZ (X & -X)
.CTZ (X) = PREC - .POPCOUNT (X | -X)
.FFS (X) = (PREC + 1) - .POPCOUNT (X | -X)
.FFS (X) = .CTZ (X) + 1
where the first one can be only used if both CTZ and CLZ have value
defined at zero (kind 2) and both have value of PREC there.
If the original has value defined at zero and the latter doesn't
for other forms or if it doesn't have matching value for that case,
a COND_EXPR is added for that afterwards.
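A scalar model of one such fix-up (PREC == 32; a hypothetical helper,
not taken from the patch):

int ffs_via_popcount (unsigned x)
{
  /* .FFS (X) = (PREC + 1) - .POPCOUNT (X | -X) only matches ffs for
     x != 0; the added COND_EXPR supplies the required 0 at zero.  */
  return x ? 33 - __builtin_popcount (x | -x) : 0;
}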
The patch also modifies vect_recog_popcount_clz_ctz_ffs_pattern
such that the two can work together.
2023-04-20 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109011
* tree-vect-patterns.cc (vect_recog_ctz_ffs_pattern): New function.
(vect_recog_popcount_clz_ctz_ffs_pattern): Move vect_pattern_detected
call later. Don't punt for IFN_CTZ or IFN_FFS if it doesn't have
direct optab support, but has instead IFN_CLZ, IFN_POPCOUNT or
for IFN_FFS IFN_CTZ support, use vect_recog_ctz_ffs_pattern for that
case.
(vect_vect_recog_func_ptrs): Add ctz_ffs entry.
* gcc.dg/vect/pr109011-1.c: Remove -mpower9-vector from
dg-additional-options.
(baz, qux): Remove functions and corresponding dg-final.
* gcc.dg/vect/pr109011-2.c: New test.
* gcc.dg/vect/pr109011-3.c: New test.
* gcc.dg/vect/pr109011-4.c: New test.
* gcc.dg/vect/pr109011-5.c: New test.
Richard Biener [Mon, 20 Feb 2023 14:02:43 +0000 (15:02 +0100)]
Remove duplicate DFS walks from DF init
The following removes unused CFG order computes from
rest_of_handle_df_initialize. The CFG orders are computed from df_analyze ().
This also removes code duplication that would have to be kept in sync.
* df-core.cc (rest_of_handle_df_initialize): Remove
computation of df->postorder, df->postorder_inverted and
df->n_blocks.
Haochen Jiang [Fri, 10 Mar 2023 05:40:09 +0000 (13:40 +0800)]
i386: Share AES xmm intrin with VAES
Currently in GCC, the 128 bit intrin for instruction vaes{enc,dec}{last,}
is under the AES ISA. Because there is no dependency between the ISA sets AES
and VAES, the 128 bit intrin is not available when we use the compiler flags
-mvaes -mavx512vl, and there is no other way to use that intrin. But it
should be, according to the Intel SDM.
Although VAES aims to be a VEX/EVEX promotion for AES, it only covers part
of it. Therefore, we share the AES xmm intrin with VAES.
Also, since -mvaes indicates that we could use VEX encoding for ymm, we
should imply AVX for VAES.
gcc/ChangeLog:
* common/config/i386/i386-common.cc
(OPTION_MASK_ISA2_AVX_UNSET): Add OPTION_MASK_ISA2_VAES_UNSET.
(ix86_handle_option): Set AVX flag for VAES.
* config/i386/i386-builtins.cc (ix86_init_mmx_sse_builtins):
Add OPTION_MASK_ISA2_VAES_UNSET.
(def_builtin): Share builtin between AES and VAES.
* config/i386/i386-expand.cc (ix86_check_builtin_isa_match):
Ditto.
* config/i386/i386.md (aes): New isa attribute.
* config/i386/sse.md (aesenc): Add pattern for VAES with xmm.
(aesenclast): Ditto.
(aesdec): Ditto.
(aesdeclast): Ditto.
* config/i386/vaesintrin.h: Remove redundant avx target push.
* config/i386/wmmintrin.h (_mm_aesdec_si128): Change to macro.
(_mm_aesdeclast_si128): Ditto.
(_mm_aesenc_si128): Ditto.
(_mm_aesenclast_si128): Ditto.
Haochen Jiang [Fri, 10 Mar 2023 02:38:50 +0000 (10:38 +0800)]
i386: Add PCLMUL dependency for VPCLMULQDQ
Currently in GCC, the 128 bit intrin for instruction vpclmulqdq is
under the PCLMUL ISA. Because there is no dependency between the ISA sets
PCLMUL and VPCLMULQDQ, the 128 bit intrin is not available when we just use
the compiler flag -mvpclmulqdq. But it should be, according to the Intel SDM.
Since VPCLMULQDQ is a VEX/EVEX promotion for PCLMUL, it is natural to
add dependency between them.
Also, with -mvpclmulqdq, we can use ymm under VEX encoding, so
VPCLMULQDQ should imply AVX.
Haochen Jiang [Thu, 15 Dec 2022 03:10:16 +0000 (11:10 +0800)]
i386: Use macro to wrap up share builtin exceptions in builtin isa check
gcc/ChangeLog:
* config/i386/i386-expand.cc
(ix86_check_builtin_isa_match): Correct wrong comments.
Add a new macro SHARE_BUILTIN and refactor the current if
clauses to macro.
Max Filippov [Tue, 28 Feb 2023 13:46:29 +0000 (05:46 -0800)]
gcc: xtensa: add -m[no-]strict-align option
gcc/
* config/xtensa/xtensa-opts.h: New header.
* config/xtensa/xtensa.h (STRICT_ALIGNMENT): Redefine as
xtensa_strict_align.
* config/xtensa/xtensa.cc (xtensa_option_override): When
-m[no-]strict-align is not specified in the command line set
xtensa_strict_align to 0 if the hardware supports both unaligned
loads and stores or to 1 otherwise.
* config/xtensa/xtensa.opt (mstrict-align): New option.
* doc/invoke.texi (Xtensa Options): Document -m[no-]strict-align.
Patrick Palka [Wed, 19 Apr 2023 19:36:34 +0000 (15:36 -0400)]
c++: Define built-in for std::tuple_element [PR100157]
This adds a new built-in to replace the recursive class template
instantiations done by traits such as std::tuple_element and
std::variant_alternative. The purpose is to select the Nth type from a
list of types, e.g. __type_pack_element<1, char, int, float> is int.
We implement it as a special kind of TRAIT_TYPE.
For a pathological example tuple_element_t<1000, tuple<2000 types...>>
the compilation time is reduced by more than 90% and the memory used by
the compiler is reduced by 97%. In realistic examples the gains will be
much smaller, but still relevant.
Unlike the other built-in traits, __type_pack_element uses template-id
syntax instead of call syntax and is SFINAE-enabled, matching Clang's
implementation. And like the other built-in traits, it's not mangleable
so we can't use it directly in function signatures.
N.B. Clang seems to implement __type_pack_element as a first-class
template that can e.g. be used as a template-template argument. For
simplicity we implement it in a more ad-hoc way.
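A minimal usage sketch (illustration):

using T = __type_pack_element<1, char, int, float>;  // T is int
static_assert (__is_same (T, int), "selects the type at index 1");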
Co-authored-by: Jonathan Wakely <jwakely@redhat.com>
PR c++/100157
gcc/cp/ChangeLog:
* cp-trait.def (TYPE_PACK_ELEMENT): Define.
* cp-tree.h (finish_trait_type): Add complain parameter.
* cxx-pretty-print.cc (pp_cxx_trait): Handle
CPTK_TYPE_PACK_ELEMENT.
* parser.cc (cp_parser_constant_expression): Document default
arguments.
(cp_parser_trait): Handle CPTK_TYPE_PACK_ELEMENT. Pass
tf_warning_or_error to finish_trait_type.
* pt.cc (tsubst) <case TRAIT_TYPE>: Handle non-type first
argument. Pass complain to finish_trait_type.
* semantics.cc (finish_type_pack_element): Define.
(finish_trait_type): Add complain parameter. Handle
CPTK_TYPE_PACK_ELEMENT.
* tree.cc (strip_typedefs): Handle non-type first argument.
Pass tf_warning_or_error to finish_trait_type.
* typeck.cc (structural_comptypes) <case TRAIT_TYPE>: Use
cp_tree_equal instead of same_type_p for the first argument.
libstdc++-v3/ChangeLog:
* include/bits/utility.h (_Nth_type): Conditionally define in
terms of __type_pack_element if available.
* testsuite/20_util/tuple/element_access/get_neg.cc: Prune
additional errors from the new built-in.
gcc/testsuite/ChangeLog:
* g++.dg/ext/type_pack_element1.C: New test.
* g++.dg/ext/type_pack_element2.C: New test.
* g++.dg/ext/type_pack_element3.C: New test.
Patrick Palka [Wed, 19 Apr 2023 17:07:46 +0000 (13:07 -0400)]
c++: bad ggc_free in try_class_unification [PR109556]
Aside from correcting how try_class_unification copies multi-dimensional
'targs', r13-377-g3e948d645bc908 also made it ggc_free this copy as an
optimization. But this is wrong since the call to unify within might've
captured the args in persistent memory such as the satisfaction cache
(as part of constrained auto deduction).
PR c++/109556
gcc/cp/ChangeLog:
* pt.cc (try_class_unification): Don't ggc_free the copy of
'targs'.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/concepts-placeholder13.C: New test.
Adjust scan-tree-dump patterns so that they do not accidentally match a
valid path.
gcc/testsuite/ChangeLog:
PR testsuite/83904
PR fortran/100297
* gfortran.dg/allocatable_function_1.f90: Use "__builtin_free "
instead of the naive "free".
* gfortran.dg/reshape_8.f90: Extend pattern from a simple "data".
Andrew Pinski [Thu, 13 Apr 2023 00:40:40 +0000 (00:40 +0000)]
i386: Add new pattern for zero-extend cmov
After a phiopt change, I got a failure of cmov9.c.
The RTL IR has zero_extend on the outside of
the if_then_else rather than on the side. Both
ways are considered canonical as mentioned in
PR 66588.
This fixes the failure I got and also adds a testcase
which fails before even my phiopt patch but will pass
with this patch.
OK? Bootstrapped and tested on x86_64-linux-gnu with
no regressions.
gcc/ChangeLog:
* config/i386/i386.md (*movsicc_noc_zext_1): New pattern.
gcc/testsuite/ChangeLog:
* gcc.target/i386/cmov10.c: New test.
* gcc.target/i386/cmov11.c: New test.
My earlier patch for 108099 made us accept this non-standard pattern but
messed up the semantics, so that e.g. unsigned __int128_t was not a 128-bit
type.
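The pattern in question, for illustration (a GCC extension, not standard
C++):

unsigned __int128_t u;                  // must be a 128-bit unsigned type
static_assert (sizeof (u) == 16, "");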
PR c++/108099
gcc/cp/ChangeLog:
* decl.cc (grokdeclarator): Keep typedef_decl for __int128_t.
RISC-V provides different VLEN configurations via different ISA
extensions like `zve32x`, `zve64x` and `v`:
zve32x just guarantees the minimal VLEN is 32 bits,
zve64x guarantees the minimal VLEN is 64 bits,
and v guarantees the minimal VLEN is 128 bits.
Current status (without this patch):
Zve32x: the mode for one vector register is VNx1SImode, and VNx1DImode
is an invalid mode
- one vector register could hold 1 + 1x SImode where x is 0~n, so it
might hold just one SI
Zve64x: the mode for one vector register is VNx1DImode or VNx2SImode
- one vector register could hold 1 + 1x DImode where x is 0~n, so it
might hold just one DI.
- one vector register could hold 2 + 2x SImode where x is 0~n, so it
might hold just two SI.
However the `v` extension guarantees the minimal VLEN is 128 bits.
We introduce another type/mode mapping for this configuration:
v: the mode for one vector register is VNx2DImode or VNx4SImode
- one vector register could hold 2 + 2x DImode where x is 0~n, so it
will hold at least two DI
- one vector register could hold 4 + 4x SImode where x is 0~n, so it
will hold at least four SI
This patch models the modes more precisely for RVV and helps some
middle-end optimizations that assume the number of elements must be a
multiple of two.
Pan Li [Wed, 19 Apr 2023 09:18:20 +0000 (17:18 +0800)]
RISC-V: Align IOR optimization MODE_CLASS condition to AND.
This patch aligns the MODE_CLASS condition of the IOR with that of the AND.
Then mode classes besides SCALAR_INT are able to perform the optimization
A | (~A) -> -1, similar to the AND operator. For example, see the sample code below.
Before this patch:
vsetvli a5,zero,e8,mf4,ta,ma
vlm.v v24,0(a1)
vsetvli zero,a2,e8,mf4,ta,ma
vmorn.mm v24,v24,v24
vsetvli a5,zero,e8,mf4,ta,ma
vsm.v v24,0(a0)
ret
After this patch:
vsetvli zero,a2,e8,mf4,ta,ma
vmset.m v24
vsetvli a5,zero,e8,mf4,ta,ma
vsm.v v24,0(a0)
ret
Or in RTL's perspective,
from:
(ior:VNx2BI (reg/v:VNx2BI 137 [ v1 ]) (not:VNx2BI (reg/v:VNx2BI 137 [ v1 ])))
to:
(const_vector:VNx2BI repeat [ (const_int 1 [0x1]) ])
The similar optimization for VMANDN has been enabled already. There should
be no difference except the operator when comparing VMORN and VMANDN
for this kind of optimization. The patch aligns the IOR MODE_CLASS condition
of the simplification with that of the AND operator.
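A source-level sketch of the shape that produces the code above (intrinsic
names per the v0.11 RVV intrinsic naming; assumed, not the exact testcase):

#include <stdint.h>
#include <riscv_vector.h>

void f (uint8_t *in, uint8_t *out, size_t n)
{
  vbool32_t v1 = __riscv_vlm_v_b32 (in, n);
  /* vmorn computes op1 | ~op2, so this is v1 | ~v1, i.e. all ones.  */
  vbool32_t v2 = __riscv_vmorn_mm_b32 (v1, v1, n);
  __riscv_vsm_v_b32 (out, v2, n);
}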
gcc/ChangeLog:
* simplify-rtx.cc (simplify_context::simplify_binary_operation_1):
Align IOR (A | (~A) -> -1) optimization MODE_CLASS condition to AND.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/mask_insn_shortcut.c: Update check
condition.
* gcc.target/riscv/simplify_ior_optimization.c: New test.
the compare could use high register %ah instead of %dil:
movl %edi, %eax
cmpb ts(%rsi), %ah
setl %al
ret
Use the any_extract code iterator to handle signed and unsigned extracts
from the high register and introduce peephole2 patterns to propagate
the norex memory operand into the compare insn.
gcc/ChangeLog:
PR target/78904
PR target/78952
* config/i386/i386.md (*cmpqi_ext<mode>_1_mem_rex64): New insn pattern.
(*cmpqi_ext<mode>_1): Use nonimmediate_operand predicate
for operand 0. Use any_extract code iterator.
(*cmpqi_ext<mode>_1 peephole2): New peephole2 pattern.
(*cmpqi_ext<mode>_2): Use any_extract code iterator.
(*cmpqi_ext<mode>_3_mem_rex64): New insn pattern.
(*cmpqi_ext<mode>_1): Use general_operand predicate
for operand 1. Use any_extract code iterator.
(*cmpqi_ext<mode>_3 peephole2): New peephole2 pattern.
(*cmpqi_ext<mode>_4): Use any_extract code iterator.
gcc/testsuite/ChangeLog:
PR target/78904
PR target/78952
* gcc.target/i386/pr78952-3.c: New test.
aarch64: Factorise widening add/sub high-half expanders with iterators
I noticed these define_expand are almost identical modulo some string substitutions.
This patch compresses them together with a couple of code iterators.
No functional change intended.
Bootstrapped and tested on aarch64-none-linux-gnu.
Richard Biener [Tue, 14 Mar 2023 13:39:17 +0000 (14:39 +0100)]
Use solve_add_graph_edge in more places
The following makes sure to use solve_add_graph_edge, honoring
special cases, especially edges from escaped, in the remaining places
where the solver adds edges.
* tree-ssa-structalias.cc (do_ds_constraint): Use
solve_add_graph_edge.
Richard Biener [Wed, 22 Mar 2023 13:13:02 +0000 (14:13 +0100)]
Remove odd code from gimple_can_merge_blocks_p
The following removes a special case to not merge a block with
only a non-local label. We have a restriction that non-local labels
must be the first statement (and label) in a block, but otherwise nothing;
if the last stmt of A is a non-local label then it will still be
the first statement of the combined A + B. In particular we'd
happily merge when there's a stmt after that label.
The check originates from the tree-ssa merge.
Bootstrapped and tested on x86_64-unknown-linux-gnu with all
languages.
* tree-cfg.cc (gimple_can_merge_blocks_p): Remove condition
rejecting the merge when A contains only a non-local label.
Introduce VIRTUAL_REGISTER_P and VIRTUAL_REGISTER_NUM_P predicates
These two predicates are similar to existing HARD_REGISTER_P and
HARD_REGISTER_NUM_P predicates and return 1 if the given register
corresponds to a virtual register.
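By analogy with HARD_REGISTER_P, the new predicates have this shape
(a sketch of the rtl.h definitions):

#define VIRTUAL_REGISTER_P(X) \
  (REG_P (X) && VIRTUAL_REGISTER_NUM_P (REGNO (X)))
#define VIRTUAL_REGISTER_NUM_P(REG_NO) \
  IN_RANGE ((REG_NO), FIRST_VIRTUAL_REGISTER, LAST_VIRTUAL_REGISTER)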
gcc/ChangeLog:
* rtl.h (VIRTUAL_REGISTER_P): New predicate.
(VIRTUAL_REGISTER_NUM_P): Ditto.
(REGNO_PTR_FRAME_P): Use VIRTUAL_REGISTER_NUM_P predicate.
* expr.cc (force_operand): Use VIRTUAL_REGISTER_P predicate.
* function.cc (instantiate_decl_rtl): Ditto.
* rtlanal.cc (rtx_addr_can_trap_p_1): Ditto.
(nonzero_address_p): Ditto.
(refers_to_regno_p): Use VIRTUAL_REGISTER_NUM_P predicate.
Richard Biener [Wed, 19 Apr 2023 07:45:55 +0000 (09:45 +0200)]
Transform more gmp/mpfr uses to use RAII
The following picks up the coccinelle generated patch from Bernhard,
leaving out the fortran frontend parts and fixing up the rest.
In particular both gmp.h and mpfr.h contain macros like
#define mpfr_inf_p(_x) ((_x)->_mpfr_exp == __MPFR_EXP_INF)
for which I add operator-> overloads to the auto_* classes.
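A sketch of the added accessor for auto_mpfr (member name assumed; the
auto_mpz case is analogous):

class auto_mpfr
{
public:
  /* Let mpfr.h macros such as mpfr_inf_p look into the wrapped value;
     mpfr_t is a one-element array, so it decays to mpfr_ptr.  */
  mpfr_ptr operator-> () { return m_mpfr; }
  /* ... constructor, destructor, conversion as before ... */
private:
  mpfr_t m_mpfr;
};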
* system.h (auto_mpz::operator->()): New.
* realmpfr.h (auto_mpfr::operator->()): New.
* builtins.cc (do_mpfr_lgamma_r): Use auto_mpfr.
* real.cc (real_from_string): Likewise.
(dconst_e_ptr): Likewise.
(dconst_sqrt2_ptr): Likewise.
* tree-ssa-loop-niter.cc (refine_value_range_using_guard):
Use auto_mpz.
(bound_difference_of_offsetted_base): Likewise.
(number_of_iterations_ne): Likewise.
(number_of_iterations_lt_to_ne): Likewise.
* ubsan.cc: Include realmpfr.h.
(ubsan_instrument_float_cast): Use auto_mpfr.
Richard Biener [Tue, 14 Mar 2023 13:39:32 +0000 (14:39 +0100)]
Avoid non-unified nodes on the topological sorting for PTA solving
Since we do not update successor edges when merging nodes we have
to deal with this in the users. The following avoids putting those
on the topo order vector.
* tree-ssa-structalias.cc (topo_visit): Look at the real
destination of edges.
Richard Biener [Thu, 9 Mar 2023 08:02:07 +0000 (09:02 +0100)]
tree-optimization/44794 - avoid excessive RTL unrolling on epilogues
The following adjusts tree_[transform_and_]unroll_loop to set an
upper bound on the number of iterations on the epilogue loop it
creates. For the testcase at hand which involves array prefetching
this avoids applying RTL unrolling to them when -funroll-loops is
specified.
Other users of this API include predictive commoning and
unroll-and-jam.
PR tree-optimization/44794
* tree-ssa-loop-manip.cc (tree_transform_and_unroll_loop):
If an epilogue loop is required set its iteration upper bound.
Xi Ruoyao [Wed, 12 Apr 2023 11:45:48 +0000 (11:45 +0000)]
LoongArch: Improve cpymemsi expansion [PR109465]
We'd been generating really bad block move sequences, which kernel
developers who tried __builtin_memcpy recently complained about. To
improve it:
1. Take the advantage of -mno-strict-align. When it is set, set mode
size to UNITS_PER_WORD regardless of the alignment.
2. Halve the mode size when (block size) % (mode size) != 0, instead of
falling back to ld.bu/st.b at once.
3. Limit the length of block move sequence considering the number of
instructions, not the size of block. When -mstrict-align is set and
the block is not aligned, the old size limit for straight-line
implementation (64 bytes) was definitely too large (we don't have 64
registers anyway).
Change since v1: add a comment about the calculation of num_reg.
gcc/ChangeLog:
PR target/109465
* config/loongarch/loongarch-protos.h
(loongarch_expand_block_move): Add a parameter as alignment RTX.
* config/loongarch/loongarch.h:
(LARCH_MAX_MOVE_BYTES_PER_LOOP_ITER): Remove.
(LARCH_MAX_MOVE_BYTES_STRAIGHT): Remove.
(LARCH_MAX_MOVE_OPS_PER_LOOP_ITER): Define.
(LARCH_MAX_MOVE_OPS_STRAIGHT): Define.
(MOVE_RATIO): Use LARCH_MAX_MOVE_OPS_PER_LOOP_ITER instead of
LARCH_MAX_MOVE_BYTES_PER_LOOP_ITER.
* config/loongarch/loongarch.cc (loongarch_expand_block_move):
Take the alignment from the parameter, but set it to
UNITS_PER_WORD if !TARGET_STRICT_ALIGN. Limit the length of
straight-line implementation with LARCH_MAX_MOVE_OPS_STRAIGHT
instead of LARCH_MAX_MOVE_BYTES_STRAIGHT.
(loongarch_block_move_straight): When there are left-over bytes,
halve the mode size instead of falling back to byte mode at once.
(loongarch_block_move_loop): Limit the length of loop body with
LARCH_MAX_MOVE_OPS_PER_LOOP_ITER instead of
LARCH_MAX_MOVE_BYTES_PER_LOOP_ITER.
* config/loongarch/loongarch.md (cpymemsi): Pass the alignment
to loongarch_expand_block_move.
gcc/testsuite/ChangeLog:
PR target/109465
* gcc.target/loongarch/pr109465-1.c: New test.
* gcc.target/loongarch/pr109465-2.c: New test.
* gcc.target/loongarch/pr109465-3.c: New test.
Xi Ruoyao [Tue, 28 Mar 2023 17:36:09 +0000 (01:36 +0800)]
LoongArch: Improve GAR store for va_list
The LoongArch backend used to save all GARs for a function with variable
arguments. But sometimes a function only accepts variable arguments for
a purpose like C++ function overloading. For example, POSIX defines
open() as:
int open(const char *path, int oflag, ...);
But only two forms are actually used:
int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
So it's obviously a waste to save all 8 GARs in open(). We can use the
cfun->va_list_gpr_size field set by the stdarg pass to only save the
GARs necessary to be saved.
If the va_list escapes (for example, in fprintf() we pass it to
vfprintf()), stdarg would set cfun->va_list_gpr_size to 255 so we
don't need a special case.
With this patch, only one GAR ($a2/$r6) is saved in open(). Ideally
even this stack store should be omitted too, but doing so is not trivial
and AFAIK there are no compilers (for any target) performing the "ideal"
optimization here, see https://godbolt.org/z/n1YqWq9c9.
Bootstrapped and regtested on loongarch64-linux-gnu. Ok for trunk
(GCC 14 or now)?
gcc/ChangeLog:
* config/loongarch/loongarch.cc
(loongarch_setup_incoming_varargs): Don't save more GARs than
cfun->va_list_gpr_size / UNITS_PER_WORD.
Richard Biener [Fri, 16 Dec 2022 12:48:58 +0000 (13:48 +0100)]
Simplify gimple_assign_load
The following simplifies and outlines gimple_assign_load. In
particular it is not necessary to get at the base of the possibly
loaded expression but just handle the case of a single handled
component wrapping a non-memory operand.
* gimple.h (gimple_assign_load): Outline...
* gimple.cc (gimple_assign_load): ... here. Avoid
get_base_address and instead just strip the outermost
handled component, treating a remaining handled component
as load.
aarch64: Delete __builtin_aarch64_neg* builtins and their use
I don't think we need to keep the __builtin_aarch64_neg* builtins around.
They are only used once in the vnegh_f16 intrinsic in arm_fp16.h and AFAICT
it was added this way only for the sake of orthogonality in
https://gcc.gnu.org/g:d7f33f07d88984cbe769047e3d07fc21067fbba9
We already use normal "-" negation in the other vneg* intrinsics, so do so here as well.
Bootstrapped and tested on aarch64-none-linux-gnu.
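The new definition is of this shape (modulo the usual extern inline
attributes in arm_fp16.h):

float16_t
vnegh_f16 (float16_t __a)
{
  return -__a;
}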
gcc/ChangeLog:
* config/aarch64/aarch64-simd-builtins.def (neg): Delete builtins
definition.
* config/aarch64/arm_fp16.h (vnegh_f16): Reimplement using normal negation.
For __builtin_popcountll tree-vect-patterns.cc has
vect_recog_popcount_pattern, which improves the vectorized code.
Without that the vectorization is always multi-type vectorization
in the loop (at least int and long long types) where we emit two
.POPCOUNT calls with long long arguments and int return value and then
widen to long long, so effectively after vectorization do the
V?DImode -> V?DImode popcount twice, then pack the result into V?SImode
and immediately unpack.
The following patch extends that handling to the __builtin_{clz,ctz,ffs}ll
builtins as well (as long as there is an optab for them; more to come
later).
x86 can do __builtin_popcountll with -mavx512vpopcntdq, __builtin_clzll
with -mavx512cd, ppc can do __builtin_popcountll and __builtin_clzll
with -mpower8-vector and __builtin_ctzll with -mpower9-vector, s390
can do __builtin_{popcount,clz,ctz}ll with -march=z13 -mzarch (i.e. VX).
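A loop of the shape that can now be vectorized with only vector .CLZ
support (an illustration, not a testsuite file):

void f (int *restrict out, unsigned long long *restrict in, int n)
{
  for (int i = 0; i < n; i++)
    out[i] = __builtin_clzll (in[i]);
}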
2023-04-19 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109011
* tree-vect-patterns.cc (vect_recog_popcount_pattern): Rename to ...
(vect_recog_popcount_clz_ctz_ffs_pattern): ... this. Handle also
CLZ, CTZ and FFS. Remove vargs variable, use
gimple_build_call_internal rather than gimple_build_call_internal_vec.
(vect_vect_recog_func_ptrs): Adjust popcount entry.
Jakub Jelinek [Wed, 19 Apr 2023 09:13:11 +0000 (11:13 +0200)]
dse: Use SUBREG_REG for copy_to_mode_reg in DSE replace_read for WORD_REGISTER_OPERATIONS targets [PR109040]
While we've agreed this is not the right fix for the PR109040 bug,
the patch clearly improves generated code (at least on the testcase from the
PR), so I'd like to propose this as optimization heuristics improvement
for GCC 14.
2023-04-19 Jakub Jelinek <jakub@redhat.com>
PR target/109040
* dse.cc (replace_read): If read_reg is a SUBREG of a word mode
REG, for WORD_REGISTER_OPERATIONS copy SUBREG_REG of it into
a new REG rather than the SUBREG.
In this PR we fail to eliminate explicit &31 operations for variable shifts such as in:
void
bar (int x[3], int y)
{
x[0] <<= (y & 31);
x[1] <<= (y & 31);
x[2] <<= (y & 31);
}
This is rejected by RTX costs that end up giving too high a cost for:
(set (reg:SI 96)
(ashift:SI (reg:SI 98)
(subreg:QI (and:SI (reg:SI 99)
(const_int 31 [0x1f])) 0)))
There is code to handle the AND-31 case in rtx costs, but it gets confused by the subreg.
It's easy enough to fix by looking inside the subreg when costing the expression.
While doing that I noticed that the ASHIFT case and the other shift-like cases are almost identical
and we should just merge them. This code will only be used for valid insns anyway, so the code after this
patch should do the Right Thing (TM) for all such shift cases.
With this patch there are no more "and wn, wn, 31" instructions left in the testcase.
Bootstrapped and tested on aarch64-none-linux-gnu.
PR target/108840
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_rtx_costs): Merge ASHIFT and
ROTATE, ROTATERT, LSHIFTRT, ASHIFTRT cases. Handle subregs in op1.
Richard Biener [Wed, 22 Mar 2023 08:29:49 +0000 (09:29 +0100)]
rtl-optimization/109237 - quadraticness in delete_trivially_dead_insns
The following addresses quadraticness in processing debug insns
in delete_trivially_dead_insns and insn_live_p by using TREE_VISITED
on the INSN_VAR_LOCATION_DECL to indicate a later debug bind
with the same decl and no intervening real insn or debug marker.
That gets rid of the NEXT_INSN walk in insn_live_p in favor of
first clearing TREE_VISITED in the first loop over insn and
the book-keeping of decls we set the bit since we need to clear
them when visiting a real or debug marker insn.
That improves the time spent in delete_trivially_dead_insns from
10.6s to 2.2s for the testcase.
PR rtl-optimization/109237
* cse.cc (insn_live_p): Remove NEXT_INSN walk, instead check
TREE_VISITED on INSN_VAR_LOCATION_DECL.
(delete_trivially_dead_insns): Maintain TREE_VISITED on
active debug bind INSN_VAR_LOCATION_DECL.
For the testcase bb_is_just_return is on top of the profile; changing
it to walk BB insns backwards puts it off the profile. That's because
in the forward walk you have to process possibly many debug insns
but in a backward walk you very likely run into control insns first.
PR rtl-optimization/109237
* cfgcleanup.cc (bb_is_just_return): Walk insns backwards.
Jakub Jelinek [Wed, 19 Apr 2023 08:01:04 +0000 (10:01 +0200)]
testsuite: Fix up pr109524.C for -std=c++23 [PR109524]
This testcase was reduced such that it isn't valid C++23, so with my
usual testing with GXX_TESTSUITE_STDS=98,11,14,17,20,2b it fails:
FAIL: g++.dg/pr109524.C -std=gnu++2b (test for excess errors)
.../gcc/testsuite/g++.dg/pr109524.C: In function 'nn hh(nn)':
.../gcc/testsuite/g++.dg/pr109524.C:35:12: error: cannot bind non-const lvalue reference of type 'nn&' to an rvalue of type 'nn'
.../gcc/testsuite/g++.dg/pr109524.C:17:6: note: initializing argument 1 of 'nn::nn(nn&)'
The following patch fixes that and I've verified it doesn't change
anything on what the test was testing, it still ICEs in r13-7198 and
passes in r13-7203, now in all language modes (except for 98 where
it is intentionally UNSUPPORTED).
2023-04-19 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109524
* g++.dg/pr109524.C (nn::nn): Change argument type from nn & to
const nn &.
Check hard_regno_mode_ok before setting lowest memory move cost for the mode with different reg classes.
There's a potential performance issue when the backend returns some
unreasonable value for the mode which can never be allocated with the
reg class.
gcc/ChangeLog:
PR rtl-optimization/109351
* ira.cc (setup_class_subset_and_memory_move_costs): Check
hard_regno_mode_ok before setting lowest memory move cost for
the mode with different reg classes.
Jonathan Wakely [Tue, 18 Apr 2023 23:07:36 +0000 (00:07 +0100)]
libstdc++: Adjust uses of null pointer constants in docs
libstdc++-v3/ChangeLog:
* doc/xml/manual/extensions.xml: Fix example to declare and
qualify std::free, and use NULL instead of 0.
* doc/html/manual/ext_demangling.html: Regenerate.
* libsupc++/cxxabi.h: Adjust doxygen comments.
ifcvt.cc: Prevent excessive if-conversion for conditional moves
gcc/
* ifcvt.cc (cond_move_process_if_block): Consider the result of
targetm.noce_conversion_profitable_p() when replacing the original
sequence with the converted one.
Andrew Pinski [Fri, 31 Mar 2023 00:00:20 +0000 (00:00 +0000)]
PHIOPT: Move tree_ssa_cs_elim into pass_cselim::execute.
This moves around the code for tree_ssa_cs_elim slightly
improving code readability and removing declarations that
are no longer needed.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker): Remove declaration.
(make_pass_phiopt): Make execute out of line.
(tree_ssa_cs_elim): Move code into ...
(pass_cselim::execute): here.
i386: Improve permutations with INSERTPS instruction [PR94908]
INSERTPS can select any element from src and insert into any place
of the dest. For SSE4.1 targets, compiler can generate e.g.
insertps $64, %xmm0, %xmm1
to insert element 1 from %xmm1 to element 0 of %xmm0.
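Via the intrinsic, the example above corresponds to (illustration):

#include <smmintrin.h>

__m128 f (__m128 a, __m128 b)
{
  /* imm8 0x40 (64): take element 1 of b, insert into element 0 of a.  */
  return _mm_insert_ps (a, b, 0x40);
}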
gcc/ChangeLog:
PR target/94908
* config/i386/i386-builtin.def (__builtin_ia32_insertps128):
Use CODE_FOR_sse4_1_insertps_v4sf.
* config/i386/i386-expand.cc (expand_vec_perm_insertps): New.
(expand_vec_perm_1): Call expand_vec_per_insertps.
* config/i386/i386.md ("unspec"): Declare UNSPEC_INSERTPS here.
* config/i386/mmx.md (mmxscalarmode): New mode attribute.
(@sse4_1_insertps_<mode>): New insn pattern.
* config/i386/sse.md (@sse4_1_insertps_<mode>): Macroize insn
pattern from sse4_1_insertps using VI4F_128 mode iterator.
gcc/testsuite/ChangeLog:
PR target/94908
* gcc.target/i386/pr94908.c: New test.
* gcc.target/i386/sse4_1-insertps-5.c: New test.
* gcc.target/i386/vperm-v4sf-2-sse4.c: New test.
Jonathan Wakely [Tue, 18 Apr 2023 16:22:40 +0000 (17:22 +0100)]
libstdc++: Fix preprocessor condition in linker script [PR108969]
The linker script is preprocessed with $(top_builddir)/config.h not the
include/$target/bits/c++config.h version, which means that configure
macros do not have the _GLIBCXX_ prefix yet.
The _GLIBCXX_SYMVER_GNU and _GLIBCXX_SHARED checks are redundant,
because the gnu.ver file is only used for _GLIBCXX_SYMVER_GNU and the
linker script is only used for the shared library. Remove those.
Aldy Hernandez [Thu, 23 Feb 2023 08:10:16 +0000 (09:10 +0100)]
Add GTY support for vrange.
IPA currently puts *some* irange's in GC memory. When I contribute
support for generic ranges in IPA, we'll need to change this to
vrange. This patch adds GTY support for both vrange and frange.
constraint: fix relaxed memory and repeated constraint handling
The function `constrain_operands' lacked the logic to consider relaxed
memory constraints when "traditional" memory constraints were not
satisfied, creating potential issues as observed during the reload
compilation pass.
In addition, it was observed that while `constrain_operands' chooses
to disregard constraints when more than one alternative is provided,
e.g. "m,r" using CONSTRAINT__UNKNOWN, it has no checks in place to
determine whether the multiple constraints in a given string are in
fact repetitions of the same constraint and should thus in fact be
treated as a single constraint, as ought to be the case for something
like "m,m".
Both of these issues are dealt with here, thus ensuring that we get
appropriate pattern matching.
Jonathan Wakely [Tue, 18 Apr 2023 13:37:38 +0000 (14:37 +0100)]
libstdc++: Export global iostreams with GLIBCXX_3.4.31 symver [PR108969]
Since GCC 13 the global iostream objects are only initialized once in
libstdc++, and not by a std::ios::Init object in every translation unit
that includes <iostream>. To avoid using uninitialized streams defined
in an older libstdc++.so, translation units using the global iostreams
should depend on the GLIBCXX_3.4.31 symver.
Define std::cin as std::__io::cin and then export it as
std::cin@@GLIBCXX_3.4.31 so that references to std::cin bind to the new
symver. Also export it as @GLIBCXX_3.4 for backwards compatibility.
libstdc++-v3/ChangeLog:
PR libstdc++/108969
* src/Makefile.am: Move globals_io.cc to here.
* src/Makefile.in: Regenerate.
* src/c++98/Makefile.am: Remove globals_io.cc from here.
* src/c++98/Makefile.in: Regenerate.
* src/c++98/globals_io.cc [_GLIBCXX_SYMVER_GNU] (cin): Adjust
symbol name and then export with GLIBCXX_3.4.31 symver.
(cout, cerr, clog, wcin, wcout, wcerr, wclog): Likewise.
* config/abi/post/aarch64-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/post/i486-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/post/m68k-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/post/powerpc64-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/post/riscv64-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/post/x86_64-linux-gnu/32/baseline_symbols.txt:
Regenerate.
* config/abi/post/s390x-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/post/x86_64-linux-gnu/baseline_symbols.txt:
Regenerate.
* config/abi/pre/gnu.ver: Add iostream objects to new symver.
Kito Cheng [Tue, 18 Apr 2023 10:07:06 +0000 (18:07 +0800)]
Docs: Add doc for RISC-V vector intrinsics
Document which version of the RISC-V vector intrinsics has been
implemented in GCC.
gcc/ChangeLog:
* doc/extend.texi (Target Builtins): Add RISC-V Vector
Intrinsics.
(RISC-V Vector Intrinsics): Document which version of the
RISC-V vector intrinsics GCC has implemented, and its reference.