Andrew Pinski [Thu, 20 Apr 2023 16:23:25 +0000 (09:23 -0700)]
PHIOPT: Move check on diamond bb to tree_ssa_phiopt_worker from minmax_replacement
This moves the check that the two middle bbs of the diamond-shaped
form are used only for that diamond-shaped form earlier, into the
shared code.
Also remove the redundant check for single_succ_p since that was
already done beforehand.
The next patch will simplify the code even further and remove redundant
checks.
PR tree-optimization/109604
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker): Move the
diamond form check from ...
(minmax_replacement): Here.
gcc/testsuite/ChangeLog:
* gcc.c-torture/compile/pr109604-1.c: New test.
* gcc.c-torture/compile/pr109604-2.c: New test.
Patrick Palka [Mon, 24 Apr 2023 14:33:49 +0000 (10:33 -0400)]
c++, tree: declare some basic functions inline
The functions strip_array_types, is_typedef_decl, typedef_variant_p
and cp_expr_location are used throughout the C++ front end including in
some fairly hot parts (e.g. in the tsubst routines and cp_walk_subtree)
and they're small enough that the overhead of calling them out-of-line
is relatively significant.
So this patch moves their definitions into the appropriate headers to
enable inlining them.
Motivated by a recent LLVM patch I saw, we can use SVE for 64-bit vector integer MUL (plain Advanced SIMD doesn't support it).
Since the Advanced SIMD regs are just the low 128-bit part of the SVE regs it all works transparently.
It's a reasonably straightforward implementation of the mulv2di3 optab that wires it up through the mulvnx2di3 expander and
subregs the results back to the Advanced SIMD modes.
There's more such tricks possible with other operations (and we could do 64-bit multiply-add merged operations too) but for now
this self-contained patch improves the mul case as without it for the testcases in the patch we'd have scalarised the arguments,
moved them to GP regs, performed two GP MULs and moved them back to SIMD regs.
Advertising a mulv2di3 optab from the backend should also allow for more flexible vectorisation opportunities.
Bootstrapped and tested on aarch64-none-linux-gnu.
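For illustration, a minimal sketch (not the testcase shipped with the patch; the function name is made up) of a 64-bit-element vector multiply that can now go through the new expander:
#include <arm_neon.h>
/* Hedged sketch: with SVE enabled this 64-bit-element multiply can be
   expanded through the SVE mulvnx2di3 expander instead of being
   scalarised through the GP registers.  */
int64x2_t
mul_v2di (int64x2_t a, int64x2_t b)
{
  return a * b;   /* GNU vector-extension multiply on int64x2_t.  */
}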
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (mulv2di3): New expander.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve-neon-modes_1.c: New test.
* gcc.target/aarch64/sve-neon-modes_2.c: New test.
install.texi needs some updates for GCC 13 and trunk:
* We used a mixture of Solaris 2 and Solaris references. Since Solaris
1/SunOS 4 is ancient history by now, consistently use Solaris
everywhere. Likewise, explicit references to Solaris 11 can go in
many places since Solaris 11.3 and 11.4 are all GCC supports.
* Some caveats apply to both Solaris/SPARC and x86, like the difference
between as and gas.
* Some specifics are obsolete, like the /usr/ccs/bin path whose contents
were merged into /usr/bin in Solaris 11.0 already. Likewise, /bin/sh
is ksh93 since Solaris 11.0, so there's no need to explicitly use
/bin/ksh.
* I've removed the reference to OpenCSW: there's barely a need for external
sites to get additional packages. OpenCSW is mostly unmaintained these
days and has been found to be more harmful than helpful.
* The section on assembler and linker to use was partially duplicated.
Better keep the info in one place.
* GNAT is bundled in recent Solaris 11.4 updates, so recommend that.
Tested on i386-pc-solaris2.11 with make doc/gccinstall.{info,pdf} and
inspection of the latter.
aarch64: PR target/109406 Add support for SVE2 unpredicated MUL
SVE2 supports an unpredicated vector integer MUL form that we can emit from our SVE expanders
without using up a predicate register. This patch does so.
As the SVE MUL expansion currently is templated away through a code iterator I did not split it
off just for this case but instead special-cased it in the define_expand. It seemed somewhat less
invasive than the alternatives but I could split it off more explicitly if others want to.
The div-by-bitmask_1.c testcase is adjusted to expect this new MUL form.
Bootstrapped and tested on aarch64-none-linux-gnu.
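As a rough, hedged illustration (not the patch's own testcase), a simple loop whose SVE vectorisation involves an integer multiply that can now use the unpredicated SVE2 form:
/* Hedged sketch only: with -march=armv8-a+sve2 the vectorised multiply in
   a loop like this can use the unpredicated SVE2 MUL form.  */
void
vmul (int *__restrict a, int *__restrict b, int *__restrict c, int n)
{
  for (int i = 0; i < n; i++)
    c[i] = a[i] * b[i];
}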
gcc/ChangeLog:
PR target/109406
* config/aarch64/aarch64-sve.md (<optab><mode>3): Handle TARGET_SVE2 MUL
case.
* config/aarch64/aarch64-sve2.md (*aarch64_mul_unpredicated_<mode>): New
pattern.
gcc/testsuite/ChangeLog:
PR target/109406
* gcc.target/aarch64/sve2/div-by-bitmask_1.c: Adjust for unpredicated SVE2
MUL.
* gcc.target/aarch64/sve2/unpred_mul_1.c: New test.
[4/4] aarch64: Convert UABAL2 and SABAL2 patterns to standard RTL codes
The final patch in the series tackles the most complex of this family of patterns, UABAL2 and SABAL2.
These extract the high part of the sources, perform an absdiff on them, widen the result and accumulate.
The motivating testcase for this patch (series) is included, but the required simplification doesn't actually
trigger with just the RTL pattern change because rtx_costs blocks it.
So this patch also extends rtx costs to recognise the (minus (smax (x, y)) (smin (x, y))) expression we use
to describe absdiff in the backend and avoids recursing into its arms.
This allows us to generate the single-instruction sequence expected here.
Bootstrapped and tested on aarch64-none-linux-gnu.
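In intrinsics terms, the operation these patterns describe looks roughly like the following hedged sketch (the function name is made up; the actual included testcase may differ):
#include <arm_neon.h>
/* Hedged sketch: absolute difference of the high halves, widened and
   accumulated, which should now map to a single UABAL2.  */
uint16x8_t
abal2 (uint16x8_t acc, uint8x16_t a, uint8x16_t b)
{
  return vabal_high_u8 (acc, a, b);
}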
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abal2<mode>): Rename to...
(aarch64_<su>abal2<mode>_insn): ... This. Use RTL codes instead of unspec.
(aarch64_<su>abal2<mode>): New define_expand.
* config/aarch64/aarch64.cc (aarch64_abd_rtx_p): New function.
(aarch64_rtx_costs): Handle ABD rtxes.
* config/aarch64/aarch64.md (UNSPEC_SABAL2, UNSPEC_UABAL2): Delete.
* config/aarch64/iterators.md (ABAL2): Delete.
(sur): Remove handling of UNSPEC_UABAL2 and UNSPEC_SABAL2.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/simd/vabal_combine.c: New test.
[3/4] aarch64: Convert UABAL and SABAL patterns to standard RTL codes
With the SABDL and UABDL patterns converted, their accumulating forms UABAL and SABAL are not much more complicated.
There's an accumulator argument that we, err, accumulate into with a PLUS once all the widening is done.
Some necessary renaming of patterns relating to the removal of UNSPEC_SABAL and UNSPEC_UABAL is included.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abal<mode>): Rename to...
(aarch64_<su>abal<mode>): ... This. Use RTL codes instead of unspec.
(<sur>sadv16qi): Rename to...
(<su>sadv16qi): ... This. Adjust for the above.
* config/aarch64/aarch64-sve.md (<sur>sad<vsi2qi>): Rename to...
(<su>sad<vsi2qi>): ... This. Adjust for the above.
* config/aarch64/aarch64.md (UNSPEC_SABAL, UNSPEC_UABAL): Delete.
* config/aarch64/iterators.md (ABAL): Delete.
(sur): Remove handling of UNSPEC_SABAL and UNSPEC_UABAL.
[2/4] aarch64: Convert UABDL2 and SABDL2 patterns to standard RTL codes
Similar to the previous patch for UABDL and SABDL, this patch covers the *2 versions that vec_select the high half
of their input to do the absdiff and extend. A define_expand is added for the intrinsic to create the "select-high-half" RTX the pattern expects.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abdl2<mode>): Rename to...
(aarch64_<su>abdl2<mode>_insn): ... This. Use RTL codes instead of unspec.
(aarch64_<su>abdl2<mode>): New define_expand.
* config/aarch64/aarch64.md (UNSPEC_SABDL2, UNSPEC_UABDL2): Delete.
* config/aarch64/iterators.md (ABDL2): Delete.
(sur): Remove handling of UNSPEC_SABDL2 and UNSPEC_UABDL2.
[1/4] aarch64: Convert UABDL and SABDL patterns to standard RTL codes
This is the first patch in a series to improve the RTL representation of the sum-of-absolute-differences patterns
in the backend. We can use standard RTL codes and remove some unspecs.
For UABDL and SABDL we have a widening of the result so we can represent uabdl (x, y) as (zero_extend (minus (umax (x, y)) (umin (x, y))))
and sabdl (x, y) as (zero_extend (minus (smax (x, y)) (smin (x, y)))).
It is important to use zero_extend rather than sign_extend for the sabdl case, as the result of the absolute difference is still a positive unsigned value
(the signedness of the operation refers to the values being diffed, not the absolute value of the difference) that must be zero-extended.
Bootstrapped and tested on aarch64-none-linux-gnu (these intrinsics are reasonably well-covered by the advsimd-intrinsics.exp tests)
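For reference, a hedged sketch of the widening absolute-difference intrinsic that these patterns implement (the function name is made up):
#include <arm_neon.h>
/* Hedged sketch: widening absolute difference (UABDL), which the new RTL
   describes as (zero_extend (minus (umax (a, b)) (umin (a, b)))).  */
uint16x8_t
abdl (uint8x8_t a, uint8x8_t b)
{
  return vabdl_u8 (a, b);
}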
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abdl<mode>): Rename to...
(aarch64_<su>abdl<mode>): ... This. Use standard RTL ops instead of
unspec.
* config/aarch64/aarch64.md (UNSPEC_SABDL, UNSPEC_UABDL): Delete.
* config/aarch64/iterators.md (ABDL): Delete.
(sur): Remove handling of UNSPEC_SABDL and UNSPEC_UABDL.
aarch64: Add pattern to match zero-extending scalar result of ADDLV
The vaddlv_u8 and vaddlv_u16 intrinsics produce a widened scalar result (uint16_t and uint32_t).
The ADDLV instructions themselves zero the rest of the V register, which gives us a free zero-extension
to 32 and 64 bits, similar to how it works on the GP reg side.
Because we don't model that zero-extension in the machine description this can cause GCC to move the
results of these instructions to the GP regs just to do a (superfluous) zero-extension.
This patch just adds a pattern to catch these cases. For the testcases we can now generate no zero-extends
or GP<->FP reg moves, whereas before we generated stuff like:
foo_8_32:
uaddlv h0, v0.8b
umov w1, v0.h[0] // FP<->GP move with zero-extension!
str w1, [x0]
ret
Bootstrapped and tested on aarch64-none-linux-gnu.
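A hedged reconstruction of the kind of testcase described above (argument order and names are guesses):
#include <arm_neon.h>
#include <cstdint>
/* Hedged sketch: vaddlv_u8 returns a widened uint16_t scalar; the ADDLV
   instruction already zeroes the rest of the V register, so storing the
   result as a 32-bit value should not need the extra UMOV/zero-extend
   shown in the assembly above.  */
void
foo_8_32 (uint32_t *out, uint8x8_t v)
{
  *out = vaddlv_u8 (v);
}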
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md
(*aarch64_<su>addlv<VDQV_L:mode>_ze<GPI:mode>): New pattern.
Richard Biener [Tue, 18 Apr 2023 15:26:57 +0000 (17:26 +0200)]
This replaces uses of last_stmt where we do not require debug skipping
There are quite a few cases which want to access the control stmt
ending a basic block. Since there cannot be debug stmts after
such a stmt there's no point in using last_stmt, which skips debug
stmts and can be a compile-time hog for larger testcases.
Richard Biener [Wed, 19 Apr 2023 09:24:00 +0000 (11:24 +0200)]
Avoid repeated forwarder_block_p calls in CFG cleanup
CFG cleanup maintains BB_FORWARDER_BLOCK and uses FORWARDER_BLOCK_P
to check it, apart from two places in outgoing_edges_match which use
forwarder_block_p alongside many BB_FORWARDER_BLOCK uses.
The following adjusts those two places.
* cfgcleanup.cc (outgoing_edges_match): Use FORWARDER_BLOCK_P.
V2 patch for: https://patchwork.sourceware.org/project/gcc/patch/20230330012804.110539-1-juzhe.zhong@rivai.ai/
which has been reviewed.
This patch addresses Jeff's comment and refines the ChangeLog to give
clearer information.
gcc/ChangeLog:
* config/riscv/vector-iterators.md: New unspec to refine fault first load pattern.
* config/riscv/vector.md: Refine fault first load pattern to erase avl from instructions
with the fault first load property.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/vsetvl/ffload-1.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-2.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-3.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-5.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-6.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-7.c: New test.
liuhongt [Wed, 15 Mar 2023 05:41:06 +0000 (13:41 +0800)]
Add testcases for ffs/ctz vectorization.
gcc/testsuite/ChangeLog:
PR tree-optimization/109011
* gcc.target/i386/pr109011-b1.c: New test.
* gcc.target/i386/pr109011-b2.c: New test.
* gcc.target/i386/pr109011-d1.c: New test.
* gcc.target/i386/pr109011-d2.c: New test.
* gcc.target/i386/pr109011-q1.c: New test.
* gcc.target/i386/pr109011-q2.c: New test.
* gcc.target/i386/pr109011-w1.c: New test.
* gcc.target/i386/pr109011-w2.c: New test.
Gaius Mulley [Sun, 23 Apr 2023 20:09:45 +0000 (21:09 +0100)]
modula2: Add -lnsl -lsocket libraries to gcc/testsuite/lib/gm2.exp
Solaris requires -lnsl -lsocket (present in the driver), which are not
added when running the testsuite. This patch tests whether the target is
*-*-solaris2 and conditionally appends the above libraries.
gcc/testsuite/ChangeLog:
* lib/gm2.exp (gm2_target_compile_default): Conditionally
append -lnsl -lsocket to ldflags.
Roger Sayle [Sun, 23 Apr 2023 09:35:53 +0000 (10:35 +0100)]
[xstormy16] Update xstormy16_rtx_costs.
This patch provides an improved rtx_costs target hook on xstormy16.
The current implementation has the unfortunate property that it claims
that zero_extendhisi2 is very cheap, even though the machine description
doesn't provide that instruction/pattern. Doh! Rewriting the
xstormy16_rtx_costs function has additional benefits, including
making more use of the (short) "mul" instruction when optimizing
for size with -Os.
2023-04-23 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.cc (xstormy16_rtx_costs): Rewrite to
provide reasonable values for common arithmetic operations and
immediate operands (in several machine modes).
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/mulhi.c: New test case.
Roger Sayle [Sun, 23 Apr 2023 09:30:30 +0000 (10:30 +0100)]
[xstormy16] Add extendhisi2 and zero_extendhisi2 patterns to stormy16.md
This patch adds a pair of define_insn patterns to the xstormy16 machine
description that provide extendhisi2 and zero_extendhisi2, i.e. 16-bit
to 32-bit sign- and zero-extension respectively. This functionality is
already synthesized during RTL expansion, but providing patterns allows
the semantics to be exposed to the RTL optimizers. To simplify things,
this patch introduces a new %h0 output format, for emitting the high_part
register name of a double-word (SImode) register pair. The actual
code generated is identical to before.
Whilst there, I also fixed the instruction lengths and formatting of
the zero_extendqihi2 pattern. Then, mostly for documentation purposes
as the 'T' constraint isn't yet implemented, I've added a "and Rx,#255"
alternative to zero_extendqihi2 that takes advantage of its efficient
instruction encoding.
2023-04-23 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.cc (xstormy16_print_operand): Add %h
format specifier to output high_part register name of SImode reg.
* config/stormy16/stormy16.md (extendhisi2): New define_insn.
(zero_extendqihi2): Fix lengths, consistent formatting and add
"and Rx,#255" alternative, for documentation purposes.
(zero_extendhisi2): New define_insn.
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/extendhisi2.c: New test case.
* gcc.target/xstormy16/zextendhisi2.c: Likewise.
Roger Sayle [Sun, 23 Apr 2023 09:25:04 +0000 (10:25 +0100)]
[xstormy16] Improved SImode shifts by two bits.
Currently on xstormy16 SImode shifts by a single bit require two
instructions, and shifts by other non-zero integer immediate constants
require five instructions. This patch implements the obvious optimization
that shifts by two bits can be done in four instructions, by using two
single-bit sequences.
Hence, ashift_2 was previously generated as:
mov r7,r2 | shl r2,#2 | shl r3,#2 | shr r7,#14 | or r3,r7
ret
and with this patch we now generate:
shl r2,#1 | rlc r3,#1 | shl r2,#1 | rlc r3,#1
ret
2023-04-23 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.cc (xstormy16_output_shift): Implement
SImode shifts by two by performing a single bit SImode shift twice.
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/shiftsi.c: New test case.
liuhongt [Wed, 8 Feb 2023 14:27:54 +0000 (22:27 +0800)]
Adjust testcases after better RA decision.
After the optimization for RA, the memory op is not propagated into
instructions (>1), which makes the testcases not generate vxorps since
the memory is loaded into the dest, and the dest is never unused now.
So rewrite the testcases to make the codegen more stable.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx2-dest-false-dep-for-glc.c: Rewrite
testcase to make the codegen more stable.
* gcc.target/i386/avx512dq-dest-false-dep-for-glc.c: Ditto.
* gcc.target/i386/avx512f-dest-false-dep-for-glc.c: Ditto.
* gcc.target/i386/avx512fp16-dest-false-dep-for-glc.c: Ditto.
* gcc.target/i386/avx512vl-dest-false-dep-for-glc.c: Ditto.
Andrew Pinski [Wed, 19 Apr 2023 21:42:45 +0000 (14:42 -0700)]
PHIOPT: Improve readability of tree_ssa_phiopt_worker
This small patch just changes the code around slightly to
make it easier to understand which cases handle the diamond-shaped
BB for both do_store_elim/do_hoist_loads.
There is no effect on code output at all since all of the checks
are the same still.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker):
Change the code around slightly to move diamond
handling for do_store_elim/do_hoist_loads out of
the big if/else.
Andrew Pinski [Wed, 19 Apr 2023 17:31:20 +0000 (10:31 -0700)]
PHIOPT: Improve minmax diamond detection for phiopt1
For diamond bb phi node detection, there is a check
to make sure bb1 is not empty. But in the case where
bb1 is empty except for a predicate, empty_block_p
will still return true but the minmax code handles
that case already so there is no reason to check
if the basic block is empty.
This patch removes that check and removes some
xfails.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker):
Remove check on empty_block_p.
Harald Anlauf [Thu, 20 Apr 2023 19:47:34 +0000 (21:47 +0200)]
Fortran: function results never have the ALLOCATABLE attribute [PR109500]
Fortran 2018 8.5.3 (ALLOCATABLE attribute) explains in Note 1 that the
result of referencing a function whose result variable has the ALLOCATABLE
attribute is a value that does not itself have the ALLOCATABLE attribute.
gcc/fortran/ChangeLog:
PR fortran/109500
* interface.cc (gfc_compare_actual_formal): Reject allocatable
functions being used as actual argument for an allocatable dummy.
gcc/testsuite/ChangeLog:
PR fortran/109500
* gfortran.dg/allocatable_function_11.f90: New test.
Co-authored-by: Steven G. Kargl <kargl@gcc.gnu.org>
Jakub Jelinek [Sat, 22 Apr 2023 18:16:08 +0000 (20:16 +0200)]
testsuite: Fix up pr109011-*.c tests for powerpc [PR109572]
As reported, pr109011-{4,5}.c tests fail on powerpc.
I thought they should have the same counts as the corresponding -{2,3}.c
tests, the only difference is that -{2,3}.c are int while -{4,5}.c are
long long. But there are 2 issues. One is that in the foo
function the vectorization costs comparison triggered in, while in -{2,3}.c
we use vectorization factor 4 and it was found beneficial, when using
long long it was just vf 2 and the scalar cost of doing
p[i] = __builtin_ctzll (q[i]) twice looked smaller than the vectorizated
statements. I could disable the cost model, but instead chose to add
some further arithmetics to those functions to make it beneficial even
with vf 2.
After that change, pr109011-4.c still failed; I was expecting 4 .CTZ calls
there on power9, 3 vectorized and one in scalar code, but for some reason
the scalar one didn't trigger. As I really want to count just the
vectorized calls, I've added the vect prefix on the variables to ensure
I'm only counting vectorized calls and decreased the 4 counts to 3.
2023-04-22 Jakub Jelinek <jakub@redhat.com>
PR testsuite/109572
* gcc.dg/vect/pr109011-1.c: In scan-tree-dump-times regexps match also
vect prefix to make sure we only count vectorized calls.
* gcc.dg/vect/pr109011-2.c: Likewise. On powerpc* expect just count 3
rather than 4.
* gcc.dg/vect/pr109011-3.c: In scan-tree-dump-times regexps match also
vect prefix to make sure we only count vectorized calls.
* gcc.dg/vect/pr109011-4.c: Likewise. On powerpc* expect just count 3
rather than 4.
(foo): Add 2 further arithmetic ops to the loop to make it appear
worthwhile for vectorization heuristics on powerpc.
* gcc.dg/vect/pr109011-5.c: In scan-tree-dump-times regexps match also
vect prefix to make sure we only count vectorized calls.
(foo): Add 2 further arithmetic ops to the loop to make it appear
worthwhile for vectorization heuristics on powerpc.
Jakub Jelinek [Sat, 22 Apr 2023 18:14:06 +0000 (20:14 +0200)]
Fix up bootstrap with GCC 4.[89] after RAII auto_mpfr and auto_mpz [PR109589]
On Tue, Apr 18, 2023 at 03:39:41PM +0200, Richard Biener via Gcc-patches wrote:
> The following adds two RAII classes, one for mpz_t and one for mpfr_t
> making object lifetime management easier. Both formerly require
> explicit initialization with {mpz,mpfr}_init and release with
> {mpz,mpfr}_clear.
This unfortunately broke bootstrap when using GCC 4.8.x or 4.9.x as
it uses deleted friends which weren't supported until PR62101 fixed
them in 2014 for GCC 5.
The following patch adds a workaround, not deleting those friends
for those old versions.
While it means if people add those mp*_{init{,2},clear} calls on auto_mp*
objects they won't notice when doing non-bootstrap builds using
very old system compilers, people should be bootstrapping their changes
and it will be caught during bootstraps even when starting with those
old compilers, plus most people actually use much newer compilers
when developing.
2023-04-22 Jakub Jelinek <jakub@redhat.com>
PR bootstrap/109589
* system.h (class auto_mpz): Workaround PR62101 bug in GCC 4.8 and 4.9.
* realmpfr.h (class auto_mpfr): Likewise.
Jeff Law [Sat, 22 Apr 2023 16:43:35 +0000 (10:43 -0600)]
Adjust rx movsicc tests
The rx port has a target-specific test, movsicc, which is naturally meant to verify
that if-conversion is happening on the expected cases.
Unfortunately the test is poorly written. The core problem is there are 8
distinct tests and each of those tests is expected to generate a specific
sequence. Unfortunately, various generic bits might turn an equality test
into an inequality test or make other similar changes.
The net result is the assembly matching patterns may find a particular sequence,
but it may be for a different function than was originally intended. ie,
test1's output may match the expected assembly for test5. Ugh!
This patch breaks the movsicc test down into 8 distinct tests and adjusts the
patterns they match. The nice thing is all these tests are supposed to have
branches that use a bCC 1f form. So we can make them a bit more robust by
ignoring the actual condition code used. So if we change eq to ne, as long
as we match the movsicc pattern, we're OK. And the 1f style is only used by
the movsicc pattern.
With the tests broken down it's a lot easier to diagnose why one test fails
after the recent changes to if-conversion. movsicc-3 fails because of the
profitability test. It's more expensive than the other cases because of its
use of (const_int 10) rather than (const_int 0). (const_int 0) naturally has
a smaller cost.
It looks to me like in this context (const_int 10) should have the same cost
as (const_int 0). But I'm nowhere near well versed in the cost model for the
rx port. So I'm just leaving the test as xfailed. If someone cares enough,
they can dig into it further.
Jakub Jelinek [Sat, 22 Apr 2023 08:24:29 +0000 (10:24 +0200)]
match.pd: Fix fneg/fadd optimization [PR109583]
The following testcase ICEs on x86, foo function since my r14-22
improvement, but bar already since r13-4122. The problem is the same,
in the if expression related_vector_mode is called and that starts with
gcc_assert (VECTOR_MODE_P (vector_mode));
but nothing in the fneg/fadd match.pd pattern actually checks if the
VEC_PERM type has VECTOR_MODE_P (vec_mode). In this case it has BLKmode
and so it ICEs.
The following patch makes sure we don't ICE on it.
2023-04-22 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109583
* match.pd (fneg/fadd simplify): Don't call related_vector_mode
if vec_mode is not VECTOR_MODE_P.
Jan Hubicka [Sat, 22 Apr 2023 07:20:45 +0000 (09:20 +0200)]
Update loop estimate after header duplication
Loop header copying implements partial loop peeling. If all exits of the loop
are peeled (which is the common case) the number of iterations decreases by 1.
Without noting this, for loops iterating zero times, we end up re-peeling them
later in the loop peeling pass, which is wasteful.
This patch commonizes the code for the estimate update and adds logic to detect
when all (likely) exits were peeled by loop-ch.
We are still wrong about the update of the estimate however: if the exits behave
randomly with a given probability, loop peeling does not decrease the expected
iteration count, it just decreases the probability that the loop will be executed.
In this case we thus incorrectly decrease any_estimate. Doing so however
at least helps us to not peel or optimize the loop too hard later.
If the loop iterates precisely the estimated number of iterations, the estimate
decreases, but we are wrong about decreasing the header frequency. We already
have logic that tries to prove that the loop exit will not be taken in peeled-out
iterations and it may make sense to special-case this.
I also fixed a problem where we had an off-by-one error in the iteration count update.
It makes perfect sense to expect a loop to have 0 iterations. However if the bound
drops to negative, we lose info about the loop behaviour (since we have no
profile data reaching the loop body).
2023-04-22 Jan Hubicka <hubicka@ucw.cz>
Ondrej Kubanek <kubanek0ondrej@gmail.com>
* cfgloopmanip.h (adjust_loop_info_after_peeling): Declare.
* tree-ssa-loop-ch.cc (ch_base::copy_headers): Fix updating of
loop profile and bounds after header duplication.
* tree-ssa-loop-ivcanon.cc (adjust_loop_info_after_peeling):
Break out from try_peel_loop; fix handling of 0 iterations.
(try_peel_loop): Use adjust_loop_info_after_peeling.
gcc/testsuite/ChangeLog:
2023-04-22 Jan Hubicka <hubicka@ucw.cz>
Ondrej Kubanek <kubanek0ondrej@gmail.com>
* gcc.dg/tree-ssa/peel1.c: Decrease number of peels by 1.
* gcc.dg/unroll-8.c: Decrease loop iteration estimate.
* gcc.dg/tree-prof/peel-2.c: New test.
In the comments for PR108099 Jakub provided some testcases that demonstrated
that even before the regression noted in the patch we were getting the
semantics of this extension wrong: in the unsigned case we weren't producing
the corresponding standard unsigned type but another distinct one of the
same size, and in the signed case we were just dropping it on the floor and
not actually returning a signed type at all.
The former issue is fixed by using c_common_signed_or_unsigned_type instead
of unsigned_type_for, and the latter issue by adding a (signed_p &&
typedef_decl) case.
This patch introduces a failure on std/ranges/iota/max_size_type.cc due to
the latter issue, since the testcase expects 'signed rep_t' to do something
sensible, and previously we didn't. Now that we do, it exposes a bug in the
__max_diff_type::operator>>= handling of sign extension: when we evaluate
-1000 >> 2 in __max_diff_type we keep the MSB set, but leave the
second-most-significant bit cleared.
PR c++/108099
gcc/cp/ChangeLog:
* decl.cc (grokdeclarator): Don't clear typedef_decl after 'unsigned
typedef' pedwarn. Use c_common_signed_or_unsigned_type. Also
handle 'signed typedef'.
gcc/testsuite/ChangeLog:
* g++.dg/ext/int128-8.C: Remove xfailed dg-bogus markers.
* g++.dg/ext/unsigned-typedef2.C: New test.
* g++.dg/ext/unsigned-typedef3.C: New test.
$(P) seems to have been a workaround for some old, proprietary make
implementations that we no longer support. It was removed in r0-31149-gb8dad04b688e9c.
gcc/m2/ChangeLog:
* Make-lang.in: Remove references to $(P).
* Make-maintainer.in: Ditto.
aarch64: Emit single-instruction for smin (x, 0) and smax (x, 0)
Motivated by https://reviews.llvm.org/D148249, we can expand to a single instruction
for the SMIN (x, 0) and SMAX (x, 0) cases using the combined AND/BIC and ASR operations.
Given that we already have well-fitting TARGET_CSSC patterns and expanders for the min/max codes
in the backend this patch does some minor refactoring to ensure we emit the right SMAX/SMIN RTL codes
for TARGET_CSSC, fall back to the generic expanders or emit a simple SMIN/SMAX with 0 RTX for !TARGET_CSSC
that is now matched by a separate pattern.
Bootstrapped and tested on aarch64-none-linux-gnu.
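A minimal sketch (not the patch's testcase) of the scalar idiom, together with the bitwise identities the single-instruction expansion relies on:
#include <cstdint>
/* Hedged sketch: smin (x, 0) == x & (x >> 31) (AND with ASR) and
   smax (x, 0) == x & ~(x >> 31) (BIC with ASR), as described above.  */
int32_t smin0 (int32_t x) { return x < 0 ? x : 0; }
int32_t smax0 (int32_t x) { return x > 0 ? x : 0; }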
gcc/ChangeLog:
* config/aarch64/aarch64.md (aarch64_umax<mode>3_insn): Delete.
(umax<mode>3): Emit raw UMAX RTL instead of going through gen_ function
for umax.
(<optab><mode>3): New define_expand for MAXMIN_NOUMAX codes.
(*aarch64_<optab><mode>3_zero): Define.
(*aarch64_<optab><mode>3_cssc): Likewise.
* config/aarch64/iterators.md (maxminand): New code attribute.
A user has requested that we support the -mtp= option in aarch64 GCC for changing
the TPIDR register to read for TLS accesses. I'm not a big fan of the option name,
but we already support it in the arm port and Clang supports it for AArch64 as well,
where it accepts the 'el0', 'el1', 'el2', 'el3' values.
This patch implements the same functionality in GCC.
Bootstrapped and tested on aarch64-none-linux-gnu.
Confirmed with godbolt that the sequences and options are the same as what Clang accepts/generates.
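A hedged usage sketch (not one of the new testcases): with, say, -mtp=el1 the TLS access below should read TPIDR_EL1 instead of the default TPIDR_EL0.
/* Hypothetical file tls.c, built with e.g.:  aarch64 gcc -O2 -mtp=el1 -S tls.c  */
__thread int counter;
int
get_counter (void)
{
  return counter;
}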
PR target/108779
* gcc.target/aarch64/mtp.c: New test.
* gcc.target/aarch64/mtp_1.c: New test.
* gcc.target/aarch64/mtp_2.c: New test.
* gcc.target/aarch64/mtp_3.c: New test.
* gcc.target/aarch64/mtp_4.c: New test.
aarch64: PR target/99195 Add scheme to optimise away vec_concat with zeroes on 64-bit Advanced SIMD ops
I finally got around to trying out the define_subst approach for PR target/99195.
The problem we have is that many Advanced SIMD instructions have 64-bit vector variants that
clear the top half of the 128-bit Q register. This would allow the compiler to avoid generating
explicit zeroing instructions to concat the 64-bit result with zeroes for code like:
vcombine_u16(vadd_u16(a, b), vdup_n_u16(0))
We've been getting user reports of GCC missing this optimisation in real world code, so it's worth
doing something about it.
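Written out as a complete function, that kind of code looks something like this hedged sketch:
#include <arm_neon.h>
/* Hedged sketch of the user-reported pattern: the 64-bit ADD already
   clears the top half of the Q register, so the combine with zeroes
   should not need a separate zeroing instruction.  */
uint16x8_t
add_and_zero_high (uint16x4_t a, uint16x4_t b)
{
  return vcombine_u16 (vadd_u16 (a, b), vdup_n_u16 (0));
}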
The straightforward approach that we've been taking so far is adding extra patterns in aarch64-simd.md
that match the 64-bit result in a vec_concat with zeroes. Unfortunately for big-endian the vec_concat
operands to match have to be the other way around, so we would end up adding two extra define_insns.
This would lead to too much bloat in aarch64-simd.md.
This patch defines a pair of define_subst constructs that allow us to annotate patterns in aarch64-simd.md
with the <vczle> and <vczbe> subst_attrs and the compiler will automatically produce the vec_concat widening patterns,
properly gated for BYTES_BIG_ENDIAN when needed. This seems like the least intrusive way to describe the extra zeroing semantics.
I've had a look at the generated insn-*.cc files in the build directory and it seems that define_subst does what we want it to do
when applied multiple times on a pattern in terms of insn conditions and modes.
This patch adds the define_subst machinery and adds the annotations to some of the straightforward binary and unary integer
operations. Many more such annotations are possible and I aim to add them in future patches if this approach is acceptable.
Bootstrapped and tested on aarch64-none-linux-gnu and on aarch64_be-none-elf.
Patrick Palka [Fri, 21 Apr 2023 16:59:37 +0000 (12:59 -0400)]
c++, tree: optimize walk_tree_1 and cp_walk_subtrees
These functions currently repeatedly dereference tp during the subtree
walks, dereferences which the compiler can't CSE because it can't
guarantee that the subtree walking doesn't modify *tp.
But we already implicitly require that TREE_CODE (*tp) remains the same
throughout the subtree walks, so it doesn't seem to be a huge leap to
strengthen that to requiring *tp remains the same.
So this patch manually CSEs the dereferences of *tp. This means that a
callback function can no longer replace *tp with another tree (of the
same TREE_CODE) when walking one of its subtrees, but that doesn't sound
like a useful capability anyway.
gcc/cp/ChangeLog:
* tree.cc (cp_walk_subtrees): Avoid repeatedly dereferencing tp.
<case DECLTYPE_TYPE>: Use cp_unevaluated and WALK_SUBTREE.
<case ALIGNOF_EXPR etc>: Likewise.
gcc/ChangeLog:
* tree.cc (walk_tree_1): Avoid repeatedly dereferencing tp
and type_p.
Jan Hubicka [Fri, 21 Apr 2023 16:13:35 +0000 (18:13 +0200)]
Fix bootstrap failure in tree-ssa-loop-ch.cc
I managed to mix up the patch and its WIP version in the previous commit.
This patch adds the missing edge iterator and also fixes a side
case where new loop header would have multiple latches.
Vineet Gupta [Wed, 1 Mar 2023 03:27:26 +0000 (19:27 -0800)]
expansion: make layout of x_shift*cost[][][] more efficient
When debugging expmed.[ch] for PR/108987 I saw that some of the cost arrays have
a less than ideal layout as follows:
x_shift*cost[0..63][speed][modes]
We would want speed to be the first index since a typical compile will have
that fixed, followed by mode and then the shift values.
It should be non-functional from a compiler semantics pov, except
executing slightly faster due to better locality of shift values for a
given speed and mode. And also a bit more intuitive when debugging.
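A simplified, hedged sketch of the layout change (array names and dimensions here are illustrative, not the real target_expmed fields):
/* Before: the shift count is the outermost index, so entries for one
   (speed, mode) pair are strided far apart in memory.  */
int x_shift_cost_old[64][2][16];      /* [shift][speed][mode] */
/* After: for a fixed speed and mode (the common case during a compile)
   all 64 shift-count entries are contiguous.  */
int x_shift_cost_new[2][16][64];      /* [speed][mode][shift] */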
gcc/Changelog:
* expmed.h (x_shift*_cost): Convert to int [speed][mode][shift].
(shift*_cost_ptr ()): Access x_shift*_cost array directly.
[aarch64] Use force_reg instead of copy_to_mode_reg.
Use force_reg instead of copy_to_mode_reg in aarch64_simd_dup_constant
and aarch64_expand_vector_init to avoid creating a pseudo if the original value
is already in a register.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_simd_dup_constant): Use
force_reg instead of copy_to_mode_reg.
(aarch64_expand_vector_init): Likewise.
i386: Remove REG_OK_FOR_INDEX/REG_OK_FOR_BASE and their derivatives
x86 was converted to TARGET_LEGITIMATE_ADDRESS_P long ago. Remove
remnants of the conversion. Also, clean up the remaining macros a bit
by introducing the INDEX_REGNO_P macro.
(FIRST_INDEX_REG, LAST_INDEX_REG): New defines.
(LEGACY_INDEX_REG_P, LEGACY_INDEX_REGNO_P): New macros.
(INDEX_REG_P, INDEX_REGNO_P): Ditto.
(REGNO_OK_FOR_INDEX_P): Use INDEX_REGNO_P predicates.
(REGNO_OK_FOR_INDEX_NONSTRICT_P): New macro.
(REGNO_OK_FOR_BASE_NONSTRICT_P): Ditto.
* config/i386/predicates.md (index_register_operand):
Use REGNO_OK_FOR_INDEX_P and REGNO_OK_FOR_INDEX_NONSTRICT_P macros.
* config/i386/i386.cc (ix86_legitimate_address_p): Use
REGNO_OK_FOR_BASE_P, REGNO_OK_FOR_BASE_NONSTRICT_P,
REGNO_OK_FOR_INDEX_P and REGNO_OK_FOR_INDEX_NONSTRICT_P macros.
Jan Hubicka [Fri, 21 Apr 2023 13:46:38 +0000 (15:46 +0200)]
Stabilize inliner
The Fibonacci heap can change its behaviour quite significantly for no good
reason when multiple edges with the same key occur. This is quite common
for small functions.
This patch stabilizes the order by adding edge uids into the info.
Again I think this is a good idea regardless of the incremental WPA project
since we do not want random changes in inline decisions.
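A self-contained, hedged sketch of the idea (illustrative names; the real class wraps an sreal badness inside ipa-inline.cc):
#include <cstdint>
/* Badness key that falls back to the edge uid, so edges with equal badness
   get a deterministic order in the heap.  */
struct inline_badness_sketch
{
  int64_t badness;   /* stand-in for the real sreal badness value */
  uint32_t uid;      /* edge uid used as a stable tie-breaker */
  bool operator< (const inline_badness_sketch &other) const
  {
    if (badness != other.badness)
      return badness < other.badness;
    return uid < other.uid;
  }
};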
gcc/ChangeLog:
2023-04-21 Jan Hubicka <hubicka@ucw.cz>
Michal Jires <michal@jires.eu>
* ipa-inline.cc (class inline_badness): New class.
(edge_heap_t, edge_heap_node_t): Use inline_badness for badness instead
of sreal.
(update_edge_key): Update.
(lookup_recursive_calls): Likewise.
(recursive_inlining): Likewise.
(add_new_edges_to_heap): Likewise.
(inline_small_functions): Likewise.
Gaius Mulley [Fri, 21 Apr 2023 12:19:54 +0000 (13:19 +0100)]
PR modula2/109586 cc1gm2 ICE when compiling large source files.
The function m2block_RememberConstant calls m2tree_IsAConstant.
However IsAConstant does not recognise TREE_CODE(t) ==
CONSTRUCTOR as a constant. Without this patch CONSTRUCTOR
contants are garbage collected (and not preserved) resulting in
a corrupt tree and crash.
Richard Biener [Fri, 21 Apr 2023 10:57:17 +0000 (12:57 +0200)]
tree-optimization/109573 - avoid ICEing on unexpected live def
The following relaxes the assert in vectorizable_live_operation
where we catch currently unhandled cases to also allow an
intermediate copy as happens here, and also relaxes the assert
to checking only.
PR tree-optimization/109573
* tree-vect-loop.cc (vectorizable_live_operation): Allow
unhandled SSA copy as well. Demote assert to checking only.
Richard Biener [Fri, 21 Apr 2023 10:02:28 +0000 (12:02 +0200)]
Use correct CFG orders for DF worklist processing
This adjusts the remaining three RPO computes in DF. The DF_FORWARD
problems should use a RPO on the forward graph, the DF_BACKWARD
problems should use a RPO on the inverted graph.
Conveniently now inverted_rev_post_order_compute computes a RPO.
We still use post_order_compute and reverse its order for its
side-effect of deleting unreachable blocks.
This results in an overall reduction of visited blocks on cc1files by 5.2%.
Because on the reverse CFG most regions are irreducible, there are a
few cases where the number of visited blocks increases. For the set
of cc1files I have this is for et-forest.i, graph.i, hwint.i,
tree-ssa-dom.i, tree-ssa-loop-ch.i and tree-ssa-threadedge.i. For
tree-ssa-dse.i it's off-noise and I've more closely investigated
and figured it is really bad luck due to the irreducibility.
* df-core.cc (df_analyze): Compute RPO on the reverse graph
for DF_BACKWARD problems.
(loop_post_order_compute): Rename to ...
(loop_rev_post_order_compute): ... this, compute a RPO.
(loop_inverted_post_order_compute): Rename to ...
(loop_inverted_rev_post_order_compute): ... this, compute a RPO.
(df_analyze_loop): Use RPO on the forward graph for DF_FORWARD
problems, RPO on the inverted graph for DF_BACKWARD.
Richard Biener [Fri, 21 Apr 2023 07:40:01 +0000 (09:40 +0200)]
change inverted_post_order_compute to inverted_rev_post_order_compute
The following changes the inverted_post_order_compute API back to
a plain C array interface and computes a reverse post order since
that's what's always required. It will make massaging DF to use
the correct iteration orders easier. Elsewhere it requires turning
backward iteration over the computed order into forward iteration.
* cfganal.h (inverted_rev_post_order_compute): Rename
from ...
(inverted_post_order_compute): ... this. Add struct function
argument, change allocation to a C array.
* cfganal.cc (inverted_rev_post_order_compute): Likewise.
* lcm.cc (compute_antinout_edge): Adjust.
* lra-lives.cc (lra_create_live_ranges_1): Likewise.
* tree-ssa-dce.cc (remove_dead_stmt): Likewise.
* tree-ssa-pre.cc (compute_antic): Likewise.
Richard Biener [Fri, 21 Apr 2023 09:40:23 +0000 (11:40 +0200)]
change DF to use the proper CFG order for DF_FORWARD problems
This changes DF to use RPO on the forward graph for DF_FORWARD
problems. While that naturally maps to pre_and_rev_postorder_compute
we use the existing (wrong) CFG order for DF_BACKWARD problems
computed by post_order_compute since that provides the required
side-effect of deleting unreachable blocks.
The change requires turning the inconsistent vec<int> vs int * back
to consistent int *. A followup patch will change the
inverted_post_order_compute API and change the DF_BACKWARD problem
to use the correct RPO on the backward graph together with statistics
I produced last year for the combined effect.
* df.h (df_d::postorder_inverted): Change back to int *,
clarify comments.
* df-core.cc (rest_of_handle_df_finish): Adjust.
(df_analyze_1): Likewise.
(df_analyze): For DF_FORWARD problems use RPO on the forward
graph. Adjust.
(loop_inverted_post_order_compute): Adjust API.
(df_analyze_loop): Adjust.
(df_get_n_blocks): Likewise.
(df_get_postorder): Likewise.
Consider the following testcase:
void f (void * restrict in, void * restrict out, int l, int n, int m)
{
for (int i = 0; i < l; i++){
for (int j = 0; j < m; j++){
for (int k = 0; k < n; k++)
{
vint8mf8_t v = __riscv_vle8_v_i8mf8 (in + i + j, 17);
__riscv_vse8_v_i8mf8 (out + i + j, v, 17);
}
}
}
}
Compile option: -O3
Before this patch:
mv a7,a2
mv a6,a0
mv t1,a1
mv a2,a3
vsetivli zero,17,e8,mf8,ta,ma
ble a7,zero,.L1
ble a4,zero,.L1
ble a3,zero,.L1
...
After this patch:
mv a7,a2
mv a6,a0
mv t1,a1
mv a2,a3
ble a7,zero,.L1
ble a4,zero,.L1
ble a3,zero,.L1
add a1,a0,a4
li a0,0
vsetivli zero,17,e8,mf8,ta,ma
...
This issue is a missed optimization produced by Phase 3 global backward demand
fusion instead of LCM.
This patch fixes the poor placement of the vsetvl.
This point is selected not by LCM but by Phase 3 (VL/VTYPE demand info
backward fusion and propagation), which
I introduced into the VSETVL PASS to enhance LCM and improve vsetvl instruction
performance.
This patch suppresses Phase 3's too aggressive backward fusion and
propagation to the top of the function
when there is no defining instruction of the AVL (the AVL is a 0 ~ 31 immediate since the vsetivli
instruction allows an immediate value instead of a register).
You may want to ask why we need Phase 3 to do the job.
Well, we have so many situations that pure LCM fails to optimize; here I can
show you a simple case to demonstrate it:
void f (void * restrict in, void * restrict out, int n, int m, int cond)
{
size_t vl = 101;
for (size_t j = 0; j < m; j++){
if (cond) {
for (size_t i = 0; i < n; i++)
{
vint8mf8_t v = __riscv_vle8_v_i8mf8 (in + i + j, vl);
__riscv_vse8_v_i8mf8 (out + i, v, vl);
}
} else {
for (size_t i = 0; i < n; i++)
{
vint32mf2_t v = __riscv_vle32_v_i32mf2 (in + i + j, vl);
v = __riscv_vadd_vv_i32mf2 (v,v,vl);
__riscv_vse32_v_i32mf2 (out + i, v, vl);
}
}
}
}
You can see:
The first inner loop needs vsetvli e8 mf8 for vle+vse.
The second inner loop needs vsetvli e32 mf2 for vle+vadd+vse.
If we don't have Phase 3 (only handled by LCM (Phase 4)), we will end up with:
outerloop:
...
vsetvli e8mf8
inner loop 1:
....
vsetvli e32mf2
inner loop 2:
....
However, if we have Phase 3, Phase 3 is going to fuse the vsetvli e32 mf2 of
inner loop 2 into vsetvli e8 mf8, then we will end up with this result after
phase 3:
outerloop:
...
inner loop 1:
vsetvli e32mf2
....
inner loop 2:
vsetvli e32mf2
....
Then, this demand information after Phase 3 will be well optimized by Phase 4
(LCM); the result after Phase 4 is:
vsetvli e32mf2
outerloop:
...
inner loop 1:
....
inner loop 2:
....
You can see this is the optimal codegen after current VSETVL PASS (Phase 3:
Demand backward fusion and propagation + Phase 4: LCM). This was a known issue
when I started to implement the VSETVL PASS.
Robin Dapp [Fri, 21 Apr 2023 07:38:06 +0000 (09:38 +0200)]
riscv: Fix <bitmanip_insn> fallout.
PR109582: Since r14-116 generic.md uses standard names instead of the
types defined in the <bitmanip_insn> iterator (that match instruction
names). Change this.
gcc/ChangeLog:
PR target/109582
* config/riscv/generic.md: Change standard names to insn names.
rs6000: xfail float128 comparison test case that fails on powerpc64.
This patch xfails a float128 comparison test case on powerpc64 that
fails due to a longstanding issue with floating-point compares.
See PR58684 for more information.
When float128 hardware is enabled (-mfloat128-hardware), xscmpuqp is
generated for comparison which is unexpected. When float128 software
emulation is enabled (-mno-float128-hardware), we still have to xfail
the hardware version (__lekf2_hw) which finally generates xscmpuqp.
Richard Biener [Thu, 20 Apr 2023 11:56:21 +0000 (13:56 +0200)]
Fix LCM dataflow CFG order
The following fixes the initial order in which the LCM dataflow routines process
BBs. For a forward problem you want reverse postorder, for a backward
problem you want reverse postorder on the inverted graph.
The LCM iteration has very many other issues but this allows us to
turn inverted_post_order_compute into computing a reverse postorder
more easily.
* lcm.cc (compute_antinout_edge): Use RPO on the inverted graph.
(compute_laterin): Use RPO.
(compute_available): Likewise.
* update_web_docs_git: Add a mechanism to override makeinfo,
texi2dvi and texi2pdf, and default them to
/home/gccadmin/texinfo/install-git/bin/${tool}, if present.
Patrick Palka [Thu, 20 Apr 2023 19:16:59 +0000 (15:16 -0400)]
c++: simplify TEMPLATE_TYPE_PARM level lowering
1. Don't bother recursing when level lowering a cv-qualified type
template parameter.
2. Get rid of the recursive loop breaker when level lowering a
constrained auto, and enable the TEMPLATE_PARM_DESCENDANTS cache in
this case too. This should be safe to do so now that we no longer
substitute constraints on an auto.
gcc/cp/ChangeLog:
* pt.cc (tsubst) <case TEMPLATE_TYPE_PARM>: Don't recurse when
level lowering a cv-qualified type template parameter. Remove
recursive loop breaker in the level lowering case for constrained
autos. Use the TEMPLATE_PARM_DESCENDANTS cache in this case as
well.
Patrick Palka [Thu, 20 Apr 2023 19:00:06 +0000 (15:00 -0400)]
c++: use TREE_VEC for trailing args of variadic built-in traits
This patch makes us use TREE_VEC instead of TREE_LIST to represent the
trailing arguments of a variadic built-in trait. These built-ins are
typically passed a simple pack expansion as the second argument, e.g.
__is_constructible(T, Ts...)
and the main benefit of this representation change is that substituting
into this argument list is now basically free since tsubst_template_args
makes sure we reuse the TREE_VEC of the corresponding ARGUMENT_PACK when
expanding such a pack expansion. In the previous TREE_LIST representation
we would need to convert the expanded pack expansion into a TREE_LIST
(via tsubst_tree_list).
Note that an empty set of trailing arguments is now represented as an
empty TREE_VEC instead of NULL_TREE, so now TRAIT_TYPE/EXPR_TYPE2 will
be empty only for unary traits.
gcc/cp/ChangeLog:
* constraint.cc (diagnose_trait_expr): Convert a TREE_VEC
of arguments into a TREE_LIST for sake of pretty printing.
* cxx-pretty-print.cc (pp_cxx_trait): Handle TREE_VEC
instead of TREE_LIST of trailing variadic trait arguments.
* method.cc (constructible_expr): Likewise.
(is_xible_helper): Likewise.
* parser.cc (cp_parser_trait): Represent trailing variadic trait
arguments as a TREE_VEC instead of TREE_LIST.
* pt.cc (value_dependent_expression_p): Handle TREE_VEC
instead of TREE_LIST of trailing variadic trait arguments.
* semantics.cc (finish_type_pack_element): Likewise.
(check_trait_type): Likewise.
Patrick Palka [Thu, 20 Apr 2023 19:00:04 +0000 (15:00 -0400)]
c++: make strip_typedefs generalize strip_typedefs_expr
Currently if we have a TREE_VEC of types that we want to strip of typedefs,
we unintuitively need to call strip_typedefs_expr instead of strip_typedefs
since only strip_typedefs_expr handles TREE_VEC, and it also dispatches
to strip_typedefs when given a type. But this seems backwards: arguably
strip_typedefs_expr should be the more specialized function, which
strip_typedefs dispatches to (and thus generalizes).
So this patch makes strip_typedefs subsume strip_typedefs_expr rather
than vice versa, which allows for some simplifications.
gcc/cp/ChangeLog:
* tree.cc (strip_typedefs): Move TREE_LIST handling to
strip_typedefs_expr. Dispatch to strip_typedefs_expr for
non-type 't'.
<case TYPENAME_TYPE>: Remove manual dispatching to
strip_typedefs_expr.
<case TRAIT_TYPE>: Likewise.
(strip_typedefs_expr): Replaces calls to strip_typedefs_expr
with strip_typedefs throughout. Don't dispatch to strip_typedefs
for type 't'.
<case TREE_LIST>: Replace this with the better version from
strip_typedefs.
Andrew MacLeod [Thu, 20 Apr 2023 17:10:40 +0000 (13:10 -0400)]
Do not ignore UNDEFINED ranges when determining PHI equivalences.
Do not ignore UNDEFINED name arguments when registering two-way equivalences
from PHIs.
PR tree-optimization/109564
gcc/
* gimple-range-fold.cc (fold_using_range::range_of_phi): Do not ignore
UNDEFINED range names when deciding if all PHI arguments are the same.
Jakub Jelinek [Thu, 20 Apr 2023 17:44:27 +0000 (19:44 +0200)]
tree-vect-patterns: One small vect_recog_ctz_ffs_pattern tweak [PR109011]
I've noticed I made a typo: ifn this late in this function
is always only IFN_CTZ or IFN_FFS, never IFN_CLZ.
Due to this typo, we weren't using the originally intended
.CTZ (X) = .POPCOUNT ((X - 1) & ~X)
but
.CTZ (X) = PREC - .POPCOUNT (X | -X)
instead when we want to emit __builtin_ctz*/.CTZ using .POPCOUNT.
Both compute the same value, both are defined at 0 with the
same value (PREC), both have same number of GIMPLE statements,
but I think the former ought to be preferred, because lots of targets
have andn as a single operation rather than two, and also putting
a -1 constant into a vector register is often cheaper than a vector
broadcast of the power-of-two value PREC.
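A quick scalar sanity check of the two equivalent expansions (a hedged sketch for illustration, not part of the patch; PREC is 32 here):
#include <cassert>
int
main ()
{
  for (unsigned x = 1; x < 1024; x++)
    {
      /* Preferred form: popcount of the mask of bits below the lowest set bit.  */
      int a = __builtin_popcount ((x - 1) & ~x);
      /* Previous form: PREC - popcount (x | -x).  */
      int b = 32 - __builtin_popcount (x | -x);
      assert (a == __builtin_ctz (x) && b == __builtin_ctz (x));
    }
}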
2023-04-20 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109011
* tree-vect-patterns.cc (vect_recog_ctz_ffs_pattern): Use
.CTZ (X) = .POPCOUNT ((X - 1) & ~X) in preference to
.CTZ (X) = PREC - .POPCOUNT (X | -X).
Jakub Jelinek [Thu, 20 Apr 2023 17:26:17 +0000 (19:26 +0200)]
c: Avoid -Wenum-int-mismatch warning for redeclaration of builtin acc_on_device [PR107041]
The new -Wenum-int-mismatch warning triggers with -Wsystem-headers in
<openacc.h>: for obvious reasons the builtin acc_on_device uses an int
type argument rather than the enum, which isn't defined yet when the builtin
is created, while the OpenACC spec requires it to have an acc_device_t
enum argument. The header makes sure it has int as the underlying type by using
negative and __INT_MAX__ enumerators.
I've tried to make the builtin type-generic or just varargs, but that
changes behavior e.g. when one calls it with some C++ class which has
cast operator to acc_device_t, so the following patch instead disables
the warning for this builtin.
[LRA]: Exclude some hard regs for multi-reg inout reload pseudos used in asm in different mode
See gcc.c-torture/execute/20030222-1.c. Consider the code for 32-bit (e.g. BE) target:
int i, v; long x; x = v; asm ("" : "=r" (i) : "0" (x));
We generate the following RTL with reload insns:
1. subreg:si(x:di, 0) = 0;
2. subreg:si(x:di, 4) = v:si;
3. t:di = x:di, dead x;
4. asm ("" : "=r" (subreg:si(t:di,4)) : "0" (t:di))
5. i:si = subreg:si(t:di,4);
If we assign the hard reg of x to t, dead code elimination will remove insn #2
and we will use an uninitialized hard reg. So exclude the hard reg of x for t.
We could ignore this problem for a non-empty asm using all of x's value, but it is hard to
check that the asm is expanded into an insn really using x and setting r.
The old reload pass used the same approach.
gcc/ChangeLog
* lra-constraints.cc (match_reload): Exclude some hard regs for
multi-reg inout reload pseudos used in asm in different mode.
[PR target/108248] [RISC-V] Break down some bitmanip insn types
This is primarily Raphael's work. All I did was adjust it to apply to the
trunk and add the new types to generic.md's scheduling model.
The basic idea here is to make sure we have the ability to schedule the
bitmanip instructions with a finer degree of control. Some of the bitmanip
instructions are likely to have differing scheduler characteristics across
different implementations.
So rather than assign these instructions a generic "bitmanip" type, this
patch assigns them a type based on their RTL code by using the <bitmanip_insn>
iterator for the type. Naturally we have to add a few new types. It affects
clz, ctz, cpop, min, max.
We didn't do this for things like shNadd, single bit manipulation, etc. We
certainly could if the need presents itself.
I threw all the new types into the generic_alu bucket in the generic
scheduling model. Seems as good a place as any. Someone who knows the
sifive uarch should probably add these types (and bitmanip) to the sifive
scheduling model.
We also noticed that the recently added orc.b didn't have a type at all.
So we added it as a generic bitmanip type.
This has been bootstrapped in a gcc-12 base and I've built and run the
testsuite without regressions on the trunk.
Given it was primarily Raphael's work I could probably approve & commit it.
But I'd like to give the other RISC-V folks a chance to chime in.
PR target/108248
gcc/
* config/riscv/bitmanip.md (clz, ctz, pcnt, min, max patterns): Use
<bitmanip_insn> as the type to allow for fine grained control of
scheduling these insns.
* config/riscv/generic.md (generic_alu): Add bitmanip, clz, ctz, pcnt,
min, max.
* config/riscv/riscv.md (type attribute): Add types for clz, ctz,
pcnt, signed and unsigned min/max.
The redundant register spilling is eliminated.
However, there is one more issue that needs to be addressed, which is the redundant
move instruction "vmv8r.v". This is another story, and it will be fixed by another
patch (Fine tune RVV machine description RA constraint).
RISC-V: Fix wrong check of register occurrences [PR109535]
count_occurrences will only count the same RTX (same code and same mode),
but what we want to track is the occurrence of a register; a register
might appear in the insn with a different mode or be contained in a SUBREG.
Testcase coming from Kito.
gcc/ChangeLog:
PR target/109535
* config/riscv/riscv-vsetvl.cc (count_regno_occurrences): New function.
(pass_vsetvl::cleanup_insns): Fix bug.
gcc/testsuite/ChangeLog:
PR target/109535
* g++.target/riscv/rvv/base/pr109535.C: New test.
* gcc.target/riscv/rvv/base/pr109535.c: New test.
Jakub Jelinek [Thu, 20 Apr 2023 11:02:52 +0000 (13:02 +0200)]
tree: Add 3+ argument fndecl_built_in_p
On Wed, Feb 22, 2023 at 09:52:06AM +0000, Richard Biener wrote:
> > The following testcase ICEs because we still have some spots that
> > treat BUILT_IN_UNREACHABLE specially but not BUILT_IN_UNREACHABLE_TRAP
> > the same.
This patch uses (fndecl_built_in_p (node, BUILT_IN_UNREACHABLE)
|| fndecl_built_in_p (node, BUILT_IN_UNREACHABLE_TRAP))
a lot and from grepping around, we do something like that in lots of
other places, or in some spots instead as
(fndecl_built_in_p (node, BUILT_IN_NORMAL)
&& (DECL_FUNCTION_CODE (node) == BUILT_IN_WHATEVER1
|| DECL_FUNCTION_CODE (node) == BUILT_IN_WHATEVER2))
The following patch adds an overload for this case, so we can write
it in a shorter way, using C++11 argument packs so that it supports
as many codes as one needs.
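A self-contained, hedged illustration of the parameter-pack trick (hypothetical enum and free functions; the real overload lives in tree.h and checks a fndecl):
enum built_in_function { BUILT_IN_UNREACHABLE, BUILT_IN_UNREACHABLE_TRAP, BUILT_IN_TRAP };
/* True when name0 equals name1.  */
static bool
built_in_function_equal_p (built_in_function name0, built_in_function name1)
{
  return name0 == name1;
}
/* True when name0 equals any of the following codes (C++11 parameter pack).  */
template <typename... F>
static bool
built_in_function_equal_p (built_in_function name0, built_in_function name1,
                           built_in_function name2, F... names)
{
  return name0 == name1 || built_in_function_equal_p (name0, name2, names...);
}
/* Usage:  built_in_function_equal_p (code, BUILT_IN_UNREACHABLE, BUILT_IN_UNREACHABLE_TRAP)  */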
2023-04-20 Jakub Jelinek <jakub@redhat.com>
Jonathan Wakely <jwakely@redhat.com>
* tree.h (built_in_function_equal_p): New helper function.
(fndecl_built_in_p): Turn into variadic template to support
1 or more built_in_function arguments.
* builtins.cc (fold_builtin_expect): Use 3 argument fndecl_built_in_p.
* gimplify.cc (goa_stabilize_expr): Likewise.
* cgraphclones.cc (cgraph_node::create_clone): Likewise.
* ipa-fnsummary.cc (compute_fn_summary): Likewise.
* omp-low.cc (setjmp_or_longjmp_p): Likewise.
* cgraph.cc (cgraph_edge::redirect_call_stmt_to_callee,
cgraph_update_edges_for_call_stmt_node,
cgraph_edge::verify_corresponds_to_fndecl,
cgraph_node::verify_node): Likewise.
* tree-stdarg.cc (optimize_va_list_gpr_fpr_size): Likewise.
* gimple-ssa-warn-access.cc (matching_alloc_calls_p): Likewise.
* ipa-prop.cc (try_make_edge_direct_virtual_call): Likewise.
Jakub Jelinek [Thu, 20 Apr 2023 09:55:16 +0000 (11:55 +0200)]
tree-vect-patterns: Pattern recognize ctz or ffs using clz, popcount or ctz [PR109011]
The following patch allows vectorizing __builtin_ffs*/.FFS even if
we just have vector .CTZ support, or __builtin_ffs*/.FFS/__builtin_ctz*/.CTZ
if we just have vector .CLZ or .POPCOUNT support.
It uses various expansions from Hacker's Delight book as well as GCC's
expansion, in particular:
.CTZ (X) = PREC - .CLZ ((X - 1) & ~X)
.CTZ (X) = .POPCOUNT ((X - 1) & ~X)
.CTZ (X) = (PREC - 1) - .CLZ (X & -X)
.FFS (X) = PREC - .CLZ (X & -X)
.CTZ (X) = PREC - .POPCOUNT (X | -X)
.FFS (X) = (PREC + 1) - .POPCOUNT (X | -X)
.FFS (X) = .CTZ (X) + 1
where the first one can be only used if both CTZ and CLZ have value
defined at zero (kind 2) and both have value of PREC there.
If the original has value defined at zero and the latter doesn't
for other forms or if it doesn't have matching value for that case,
a COND_EXPR is added for that afterwards.
The patch also modifies vect_recog_popcount_clz_ctz_ffs_pattern
such that the two can work together.
2023-04-20 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109011
* tree-vect-patterns.cc (vect_recog_ctz_ffs_pattern): New function.
(vect_recog_popcount_clz_ctz_ffs_pattern): Move vect_pattern_detected
call later. Don't punt for IFN_CTZ or IFN_FFS if it doesn't have
direct optab support, but has instead IFN_CLZ, IFN_POPCOUNT or
for IFN_FFS IFN_CTZ support, use vect_recog_ctz_ffs_pattern for that
case.
(vect_vect_recog_func_ptrs): Add ctz_ffs entry.
* gcc.dg/vect/pr109011-1.c: Remove -mpower9-vector from
dg-additional-options.
(baz, qux): Remove functions and corresponding dg-final.
* gcc.dg/vect/pr109011-2.c: New test.
* gcc.dg/vect/pr109011-3.c: New test.
* gcc.dg/vect/pr109011-4.c: New test.
* gcc.dg/vect/pr109011-5.c: New test.
Richard Biener [Mon, 20 Feb 2023 14:02:43 +0000 (15:02 +0100)]
Remove duplicate DFS walks from DF init
The following removes unused CFG order computes from
rest_of_handle_df_initialize. The CFG orders are computed from df_analyze ().
This also removes code duplication that would have to be kept in sync.
* df-core.cc (rest_of_handle_df_initialize): Remove
computation of df->postorder, df->postorder_inverted and
df->n_blocks.
Haochen Jiang [Fri, 10 Mar 2023 05:40:09 +0000 (13:40 +0800)]
i386: Share AES xmm intrin with VAES
Currently in GCC, the 128 bit intrins for the vaes{enc,dec}{last,}
instructions are under the AES ISA. Because there is no dependency between the ISA sets AES
and VAES, the 128 bit intrins are not available when we use the compiler flags
-mvaes -mavx512vl, and there is no other way to use those intrins. But there
should be, according to the Intel SDM.
Although VAES aims to be a VEX/EVEX promotion for AES, it is only part
of it. Therefore, we share the AES xmm intrins with VAES.
Also, since -mvaes indicates that we could use VEX encoding for ymm, we
should imply AVX for VAES.
gcc/ChangeLog:
* common/config/i386/i386-common.cc
(OPTION_MASK_ISA2_AVX_UNSET): Add OPTION_MASK_ISA2_VAES_UNSET.
(ix86_handle_option): Set AVX flag for VAES.
* config/i386/i386-builtins.cc (ix86_init_mmx_sse_builtins):
Add OPTION_MASK_ISA2_VAES_UNSET.
(def_builtin): Share builtin between AES and VAES.
* config/i386/i386-expand.cc (ix86_check_builtin_isa_match):
Ditto.
* config/i386/i386.md (aes): New isa attribute.
* config/i386/sse.md (aesenc): Add pattern for VAES with xmm.
(aesenclast): Ditto.
(aesdec): Ditto.
(aesdeclast): Ditto.
* config/i386/vaesintrin.h: Remove redundant avx target push.
* config/i386/wmmintrin.h (_mm_aesdec_si128): Change to macro.
(_mm_aesdeclast_si128): Ditto.
(_mm_aesenc_si128): Ditto.
(_mm_aesenclast_si128): Ditto.
Haochen Jiang [Fri, 10 Mar 2023 02:38:50 +0000 (10:38 +0800)]
i386: Add PCLMUL dependency for VPCLMULQDQ
Currently in GCC, the 128 bit intrin for the vpclmulqdq instruction is
under the PCLMUL ISA. Because there is no dependency between the ISA sets PCLMUL
and VPCLMULQDQ, the 128 bit intrin is not available when we just use the
compiler flag -mvpclmulqdq. But it should be, according to the Intel SDM.
Since VPCLMULQDQ is a VEX/EVEX promotion for PCLMUL, it is natural to
add a dependency between them.
Also, with -mvpclmulqdq, we can use ymm under VEX encoding, so
VPCLMULQDQ should imply AVX.