Roger Sayle [Wed, 26 Apr 2023 08:10:06 +0000 (09:10 +0100)]
[xstormy16] Add support for byte and word swapping instructions.
This patch adds support for xstormy16's swpb (swap bytes) and swpw (swap
words) instructions. The most obvious application of these is to implement
the __builtin_bswap16 and __builtin_bswap32 intrinsics.
Currently, __builtin_bswap16 is implemented as:
foo: mov r7,r2
shl r7,#8
shr r2,#8
or r2,r7
ret
but with this patch becomes:
foo: swpb r2
ret
Likewise, __builtin_bswap32 now becomes:
foo: swpb r2 | swpb r3 | swpw r2,r3
ret
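For reference, the C source behind these sequences is presumably along these
lines (a minimal sketch; the types assume xstormy16's 16-bit int and 32-bit
long):
unsigned short foo (unsigned short x) { return __builtin_bswap16 (x); }
unsigned long bar (unsigned long x) { return __builtin_bswap32 (x); }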
Finally, the swpw instruction on its own can be used to exchange
two word mode registers without a temporary, so a new pattern and
peephole2 have been added to catch this. As described in
PR rtl-optimization/106518, register allocation can (in theory)
be more efficient on targets that provide a swap/exchange instruction.
The slightly unusual swap<mode> naming matches that used in i386.md.
2023-04-26 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.md (bswaphi2): New define_insn.
(bswapsi2): New define_insn.
(swaphi): New define_insn to exchange two registers (swpw).
(define_peephole2): Recognize exchange of registers as swaphi.
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/bswap16.c: New test case.
* gcc.target/xstormy16/bswap32.c: Likewise.
* gcc.target/xstormy16/swpb.c: Likewise.
* gcc.target/xstormy16/swpw-1.c: Likewise.
* gcc.target/xstormy16/swpw-2.c: Likewise.
Richard Biener [Tue, 25 Apr 2023 14:38:44 +0000 (16:38 +0200)]
More last_stmt removal
This adjusts more users of last_stmt where it is clear that debug
stmt skipping is unnecessary. In most cases this also allowed
significant code simplification.
* config/riscv/vector.md: Refine vmadc/vmsbc RA constraint.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/narrow_constraint-13.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-14.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-15.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-16.c: New test.
Kewen Lin [Wed, 26 Apr 2023 05:21:14 +0000 (00:21 -0500)]
rs6000: Guard power9-vector for vsx_scalar_cmp_exp_qp_* [PR108758]
__builtin_vsx_scalar_cmp_exp_qp_{eq,gt,lt,unordered} used
to be guarded with condition TARGET_P9_VECTOR before the new
bif framework was introduced (r12-5752-gd08236359eb229).
Since r12-5752 they are placed under stanza ieee128-hw, which
checks condition TARGET_FLOAT128_HW instead. This caused
test case float128-cmp2-runnable.c to fail at -m32, as the
condition TARGET_FLOAT128_HW isn't satisfied with -m32.
Checking the commit history, I didn't see any notes on
why this condition change was made, so this patch moves
these bifs from stanza ieee128-hw back to stanza
power9-vector as before.
PR target/108758
gcc/ChangeLog:
* config/rs6000/rs6000-builtins.def
(__builtin_vsx_scalar_cmp_exp_qp_eq, __builtin_vsx_scalar_cmp_exp_qp_gt,
__builtin_vsx_scalar_cmp_exp_qp_lt,
__builtin_vsx_scalar_cmp_exp_qp_unordered): Move from stanza ieee128-hw
to power9-vector.
Kewen Lin [Wed, 26 Apr 2023 05:21:05 +0000 (00:21 -0500)]
rs6000: Fix predicate for const vector in sldoi_to_mov [PR109069]
As PR109069 shows, commit r12-6537-g080a06fcb076b3, which
introduces define_insn_and_split sldoi_to_mov, adopts
easy_vector_constant for the const vector of interest, but that's
wrong since predicate easy_vector_constant doesn't guarantee
each byte in the const vector is the same. One counter
example is the const vector in pr109069-1.c. This patch
introduces the new predicate const_vector_each_byte_same to
ensure all bytes in the given const vector are the same,
considering both int and float. Meanwhile, for the constants
which don't meet easy_vector_constant we need to gen a move
instead of just a set. It also uses VECTOR_MEM_ALTIVEC_OR_VSX_P
rather than VECTOR_UNIT_ALTIVEC_OR_VSX_P for V2DImode support
under VSX, since the vector long long type of vec_sld is guarded
under stanza vsx.
PR target/109069
gcc/ChangeLog:
* config/rs6000/altivec.md (sldoi_to_mov<mode>): Replace predicate
easy_vector_constant with const_vector_each_byte_same, add
handlings in preparation for !easy_vector_constant, and update
VECTOR_UNIT_ALTIVEC_OR_VSX_P with VECTOR_MEM_ALTIVEC_OR_VSX_P.
* config/rs6000/predicates.md (const_vector_each_byte_same): New
predicate.
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/pr109069-1.c: New test.
* gcc.target/powerpc/pr109069-2-run.c: New test.
* gcc.target/powerpc/pr109069-2.c: New test.
* gcc.target/powerpc/pr109069-2.h: New test.
RISC-V: Optimize comparison patterns for register allocation
The current RA constraint for RVV comparison instructions does not allow
any overlap between the dest and source operands.
For example:
vmseq.vv vd, vs2, vs1
If LMUL = 8, vs2 = v8, vs1 = v16:
with the current constraint, GCC does not allow vd to be any regno in v8 ~ v23.
However, this is too conservative and not required by the RVV ISA.
Since the dest EEW of a comparison is always 1, it follows the overlap
rules for Dest EEW < Source EEW. So in this case we should give GCC RA the chance
to allocate v8 or v16 for vd, for better vector register usage in RA.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/binop_vv_constraint-4.c: Adapt testcase.
* gcc.target/riscv/rvv/base/narrow_constraint-17.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-18.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-19.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-20.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-21.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-22.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-23.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-24.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-25.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-26.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-27.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-28.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-29.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-30.c: New test.
* gcc.target/riscv/rvv/base/narrow_constraint-31.c: New test.
Pan Li [Tue, 25 Apr 2023 14:29:04 +0000 (22:29 +0800)]
RISC-V: Bugfix for RVV vbool*_t vn_reference_equal
In most architectures the precision_size of the vbool*_t types is calculated
as a multiple of the type size. For example:
precision_size = type_size * 8 (aka, bit count per byte).
Unfortunately, some architectures like RISC-V adjust the precision_size
of the vbool*_t types in order to align with the ISA. For example:
type_size = [1, 1, 1, 1, 2, 4, 8]
precision_size = [1, 2, 4, 8, 16, 32, 64]
Then the precision_size of the RISC-V vbool*_t types will not be a multiple
of the type_size. This patch handles this case when comparing the
vn_reference.
Given we have the below code:
void test_vbool8_then_vbool16 (int8_t *restrict in, int8_t *restrict out) {
vbool8_t v1 = *(vbool8_t*)in;
vbool16_t v2 = *(vbool16_t*)in;
RISC-V: Add auto-vectorization compile option for RVV
This patch adds 2 compile options for RVV auto-vectorization.
1. -param=riscv-autovec-preference=
This option specifies the auto-vectorization approach for RVV.
Currently, we only support scalable and fixed-vlmax.
- scalable means VLA auto-vectorization. The vector length is unknown to the
compiler and is a runtime invariant. This approach lets us compile code that
can run on any vector-length RVV CPU.
- fixed-vlmax means the compiler knows the RVV CPU's vector length and compiles
in fixed-length VLS auto-vectorization mode. That is, if we specify
vector-length=512, the executable can only run on an RVV CPU with
vector-length = 512.
- TODO: we may need to support min-length VLS auto-vectorization, meaning the
executable can also run on RVV CPUs with a larger vector length.
2. -param=riscv-autovec-lmul=
Specify the LMUL to choose for RVV auto-vectorization.
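A hypothetical invocation combining the two (option spellings assumed, e.g.
m8 as an LMUL value):
riscv64-unknown-linux-gnu-gcc -O3 -march=rv64gcv \
  --param=riscv-autovec-preference=scalable \
  --param=riscv-autovec-lmul=m8 test.c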
gcc/ChangeLog:
* config/riscv/riscv-opts.h (enum riscv_autovec_preference_enum): Add enum for
auto-vectorization preference.
(enum riscv_autovec_lmul_enum): Add enum for choosing LMUL of RVV
auto-vectorization.
* config/riscv/riscv.opt: Add compile option for RVV auto-vectorization.
avoid splitting small constants in bclri_nottwobits patterns
I have noticed that when we try to clear two bits through a small constant
and ZBS is enabled, GCC splits it into two "andi" instructions.
For example for the following C code:
int foo(int a) {
return a & ~ 0x101;
}
GCC generates the following:
foo:
andi a0,a0,-2
andi a0,a0,-257
ret
but it should be:
foo:
andi a0,a0,-258
ret
(Note ~0x101 is -258, which fits in the signed 12-bit immediate of a single
andi.) This patch solves the mentioned issue.
gcc/ChangeLog
* config/riscv/bitmanip.md: Updated predicates of bclri<mode>_nottwobits
and bclridisi_nottwobits patterns.
* config/riscv/predicates.md: (not_uimm_extra_bit_or_nottwobits): Adjust
predicate to avoid splitting arith constants.
(const_nottwobits_not_arith_operand): New predicate.
gcc/testsuite
* gcc.target/riscv/zbs-bclri-nottwobits.c: New test.
Gaius Mulley [Wed, 26 Apr 2023 01:55:59 +0000 (02:55 +0100)]
PR modula2/108121 Re-implement overflow detection for constant literals
This patch fixes the overflow detection for constant literals.
The ZTYPE is changed to int128 (or int64 if int128 is unavailable) and
constant literals are built from widest_int. The widest_int is converted
into the tree type and checked for overflow.
m2expr_interpret_integer and append_m2_digit are removed.
gcc/m2/ChangeLog:
PR modula2/108121
* gm2-compiler/M2ALU.mod (Less): Reformatted.
* gm2-compiler/SymbolTable.mod (DetermineSizeOfConstant): Remove
from import.
(ConstantStringExceedsZType): Import.
(GetConstLitType): Re-implement using ConstantStringExceedsZType.
* gm2-gcc/m2decl.cc (m2decl_DetermineSizeOfConstant): Remove.
(m2decl_ConstantStringExceedsZType): New function.
(m2decl_BuildConstLiteralNumber): Re-implement.
* gm2-gcc/m2decl.def (DetermineSizeOfConstant): Remove.
(ConstantStringExceedsZType): New function.
* gm2-gcc/m2decl.h (m2decl_DetermineSizeOfConstant): Remove.
(m2decl_ConstantStringExceedsZType): New function.
* gm2-gcc/m2expr.cc (append_digit): Remove.
(m2expr_interpret_integer): Remove.
(append_m2_digit): Remove.
(m2expr_StrToWideInt): New function.
(m2expr_interpret_m2_integer): Remove.
* gm2-gcc/m2expr.def (CheckConstStrZtypeRange): New function.
* gm2-gcc/m2expr.h (m2expr_StrToWideInt): New function.
* gm2-gcc/m2type.cc (build_m2_word64_type_node): New function.
(build_m2_ztype_node): New function.
(m2type_InitBaseTypes): Call build_m2_ztype_node.
* gm2-lang.cc (gm2_type_for_size): Re-write using early returns.
gcc/testsuite/ChangeLog:
PR modula2/108121
* gm2/pim/fail/largeconst.mod: Increased constant value test
to fail now that cc1gm2 uses widest_int to represent a ZTYPE.
* gm2/pim/fail/largeconst2.mod: New test.
Patrick Palka [Tue, 25 Apr 2023 19:59:22 +0000 (15:59 -0400)]
c++: value dependence of by-ref lambda capture [PR108975]
We are still ICEing on the generic lambda version of the testcase from
this PR, even after r13-6743-g6f90de97634d6f, due to the by-ref capture
of the constant local variable 'dim' being considered value-dependent
when regenerating the lambda (at which point processing_template_decl is
set since the lambda is generic), which prevents us from constant folding
its uses. Later during prune_lambda_captures we end up not thoroughly
walking the body of the lambda and overlook the (non-folded) uses of
'dim' within the array bound and using-decls.
We could fix this by making prune_lambda_captures walk the body of the
lambda more thoroughly so that it finds these uses of 'dim', but ideally
we should be able to constant fold all uses of 'dim' ahead of time and
prune the implicit capture after all.
To that end this patch makes value_dependent_expression_p return false
for such by-ref captures of constant local variables, allowing their
uses to get constant folded ahead of time. It seems we just need to
disable the predicate's conservative early exit for reference variables
(added by r5-5022-g51d72abe5ea04e) when DECL_HAS_VALUE_EXPR_P. This
effectively makes us treat by-value and by-ref captures more consistently
when it comes to value dependence.
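The problematic shape is roughly as follows (a hypothetical sketch; the
actual testcase is lambda-const11a.C):
void f ()
{
  const int dim = 1;                  // constant local variable
  [&] (auto) { int a[dim]; } (0);     // by-ref capture of 'dim' in a generic lambda
}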
PR c++/108975
gcc/cp/ChangeLog:
* pt.cc (value_dependent_expression_p) <case VAR_DECL>:
Suppress conservative early exit for reference variables
when DECL_HAS_VALUE_EXPR_P.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/lambda/lambda-const11a.C: New test.
riscv: relax splitter restrictions for creating pseudos
[partial addressing of PR/109279]
RISC-V splitters have restrictions to not create pseudos due to a combine
limitation. And despite this being a split-during-combine limitation,
all split passes take the hit due to the way define*_split patterns are used in gcc.
With the original combine issue being fixed 61bee6aed2 ("combine: Don't
record for UNDO_MODE pointers into regno_reg_rtx array [PR104985]")
the RV splitters can now be relaxed.
This improves the codegen in general. e.g.
long long f(void) { return 0x0101010101010101ull; }
Before
li a0,0x01010000
addi a0,a0,0x0101
slli a0,a0,16
addi a0,a0,0x0101
slli a0,a0,16
addi a0,a0,0x0101
ret
With patch
li a5,0x01010000
addi a5,a5,0x0101
mv a0,a5
slli a5,a5,32
add a0,a5,a0
ret
This reduces the qemu icounts, even if slightly, across SPEC2017.
This came up as part of IRC chat on PR/109279 and was suggested by
Andrew Pinski.
gcc/ChangeLog:
* config/riscv/riscv.md: riscv_move_integer() drop in_splitter arg.
riscv_split_symbol() drop in_splitter arg.
* config/riscv/riscv.cc: riscv_move_integer() drop in_splitter arg.
riscv_split_symbol() drop in_splitter arg.
riscv_force_temporary() drop in_splitter arg.
* config/riscv/riscv-protos.h: riscv_move_integer() drop in_splitter arg.
riscv_split_symbol() drop in_splitter arg.
Eric Botcazou [Tue, 25 Apr 2023 15:38:31 +0000 (17:38 +0200)]
Avoid creating useless debug temporaries
insert_debug_temp_for_var_def has some strange code whereby it creates
debug temporaries for SINGLE_RHS (RHS for gimple_assign_single_p) but
not for other RHS in the same situation.
gcc/
* tree-ssa.cc (insert_debug_temp_for_var_def): Do not create
superfluous debug temporaries for single GIMPLE assignments.
Richard Biener [Tue, 25 Apr 2023 12:56:44 +0000 (14:56 +0200)]
tree-optimization/109609 - correctly interpret arg size in fnspec
By majority vote, and a hint from the API name
arg_max_access_size_given_by_arg_p, this interprets a memory access
size specified as given by another argument (such as for strncpy
in the testcase, which has "1cO313") as specifying the _maximum_
size read/written rather than the exact size. There were already two
uses interpreting it that way and one differing. The
following adjusts the differing one and clarifies the documentation.
PR tree-optimization/109609
* attr-fnspec.h (arg_max_access_size_given_by_arg_p):
Clarify semantics.
* tree-ssa-alias.cc (check_fnspec): Correctly interpret
the size given by arg_max_access_size_given_by_arg_p as
maximum, not exact, size.
While OpenMP 5.0 required a single structured block before and after the
'omp scan' directive, OpenMP 5.1 changed this to a 'structured block sequence',
denoting 2 or more executable statements in OpenMP 5.1 (whoops!) and zero or
more in OpenMP 5.2. This commit updates C/C++ to accept zero statements (but
still requires the '{' ... '}' for the final-loop-body) and updates Fortran
to accept zero or more than one statement.
If there is no preceding or succeeding executable statement, a warning is
shown.
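For reference, a minimal conforming 'omp scan' loop (a sketch, not taken from
the new tests); under OpenMP 5.2 either group of statements around the
directive may now be empty:
void f (int *a, int *b, int n)
{
  int sum = 0;
  #pragma omp parallel for reduction (inscan, +: sum)
  for (int i = 0; i < n; i++)
    {
      sum += a[i];                  /* statement(s) before 'omp scan' */
      #pragma omp scan inclusive (sum)
      b[i] = sum;                   /* statement(s) after 'omp scan' */
    }
}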
gcc/c/ChangeLog:
* c-parser.cc (c_parser_omp_scan_loop_body): Handle
zero exec statements before/after 'omp scan'.
gcc/cp/ChangeLog:
* parser.cc (cp_parser_omp_scan_loop_body): Handle
zero exec statements before/after 'omp scan'.
gcc/fortran/ChangeLog:
* openmp.cc (gfc_resolve_omp_do_blocks): Handle zero
or more than one exec statements before/after 'omp scan'.
* trans-openmp.cc (gfc_trans_omp_do): Likewise.
libgomp/ChangeLog:
* testsuite/libgomp.c-c++-common/scan-1.c: New test.
* testsuite/libgomp.c/scan-23.c: New test.
* testsuite/libgomp.fortran/scan-2.f90: New test.
Jakub Jelinek [Tue, 25 Apr 2023 14:00:48 +0000 (16:00 +0200)]
testsuite: Fix up ext-floating2.C on powerpc64-linux
Another testcase that is failing on powerpc64-linux. The test expects
a diagnostics when float64 && float128 or in another spot when
float32 && float128. Now, float128 effective target is satisfied on
powerpc64-linux, despite __CPP_FLOAT128_T__ not being defined, because
one needs to add some extra options for it. I think 32-bit arm has
similar case for float16.
2023-04-25 Jakub Jelinek <jakub@redhat.com>
* g++.dg/cpp23/ext-floating2.C: Add dg-add-options for
float16, float32, float64 and float128.
aarch64: Implement V2DI,V4SI division optabs for TARGET_SVE
Similar to the mulv2di case, we can use SVE instructions to implement the V4SI and V2DI optabs
for signed and unsigned integer division.
This allows us to generate much cleaner code for the testcase than the current:
food:
fmov x1, d1
fmov x0, d0
umov x2, v0.d[1]
sdiv x0, x0, x1
umov x1, v1.d[1]
sdiv x1, x2, x1
fmov d0, x0
ins v0.d[1], x1
ret
which now becomes:
food:
ptrue p0.b, all
sdiv z0.d, p0/m, z0.d, z1.d
ret
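The testcase is presumably along these lines (a GNU C vector extension
sketch, names assumed):
typedef long long v2di __attribute__ ((vector_size (16)));
v2di food (v2di x, v2di y) { return x / y; }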
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (<su_optab>div<mode>3): New define_expand.
* config/aarch64/iterators.md (VQDIV): New mode iterator.
(vnx2di): New mode attribute.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve-neon-modes_3.c: New test.
Jakub Jelinek [Tue, 25 Apr 2023 12:38:01 +0000 (14:38 +0200)]
testsuite: Fix up ext-floating15.C tests on powerpc64-linux [PR109278]
I've noticed this test FAILs on powerpc64-linux, with
FAIL: g++.dg/cpp23/ext-floating15.C -std=gnu++98 (test for excess errors)
Excess errors:
/home/jakub/gcc/gcc/testsuite/g++.dg/cpp23/ext-floating15.C:8:5: error: '_Float128' is not supported on this target
/home/jakub/gcc/gcc/testsuite/g++.dg/cpp23/ext-floating15.C:8:5: error: '_Float128' is not supported on this target
/home/jakub/gcc/gcc/testsuite/g++.dg/cpp23/ext-floating15.C:8:1: error: variable or field 'bar' declared void
/home/jakub/gcc/gcc/testsuite/g++.dg/cpp23/ext-floating15.C:8:5: error: '_Float128' is not supported on this target
/home/jakub/gcc/gcc/testsuite/g++.dg/cpp23/ext-floating15.C:8:6: error: expected primary-expression before '_Float128'
and similarly other std versions.
powerpc64-linux is a float128 target, but one needs to add some options for it.
Richard Biener [Mon, 24 Apr 2023 11:31:07 +0000 (13:31 +0200)]
rtl-optimization/109585 - alias analysis typo
When r10-514-gc6b84edb6110dd2b4fb improved access path analysis,
it introduced a typo that triggers when there's an access to a
trailing array in the first access path, leading to false
disambiguation.
Jakub Jelinek [Tue, 25 Apr 2023 12:20:51 +0000 (14:20 +0200)]
powerpc: Fix up *branch_anddi3_dot for -m32 -mpowerpc64 [PR109566]
The following testcase, reduced from newlib, ICEs on powerpc-linux
with -O2 -m32 -mpowerpc64 since the r12-6433 PR102239 optimization was
added and on the original testcase since some ranger improvements in
GCC 13 made it no longer latent on newlib.
The problem is that the *branch_anddi3_dot define_insn_and_split
relies on the *rotldi3_mask_dot define_insn_and_split being recognized
during splitting. The rs6000_is_valid_rotate_dot_mask function checks whether
the mask is a CONST_INT which is a valid mask, but *rotl<mode>3_mask_dot in
addition to checking that it is a valid mask also has
(<MODE>mode == Pmode || UINTVAL (operands[3]) <= 0x7fffffff)
test in the condition. For TARGET_64BIT that doesn't add any further
requirements, but for !TARGET_64BIT && TARGET_POWERPC64 if the AND
second operand is larger than INT_MAX it will not be recognized.
The rs6000_is_valid_rotate_dot_mask function is used solely in one spot,
condition of *branch_anddi3_dot, so the following patch adjusts it
to check for that as well.
2023-04-25 Jakub Jelinek <jakub@redhat.com>
PR target/109566
* config/rs6000/rs6000.cc (rs6000_is_valid_rotate_dot_mask): For
!TARGET_64BIT, don't return true if UINTVAL (mask) << (63 - nb)
is larger than signed int maximum.
Martin Liska [Thu, 6 Apr 2023 09:54:51 +0000 (11:54 +0200)]
gcov: add info about "calls" to JSON output format
gcc/ChangeLog:
* doc/gcov.texi: Document the new "calls" field and document
the API bump. Mention also "block_ids" for lines.
* gcov.cc (output_intermediate_json_line): Output info about
calls and extend branches as well.
(generate_results): Bump version to 2.
(output_line_details): Use block ID instead of a nonsensical
index.
gcc/testsuite/ChangeLog:
* g++.dg/gcov/gcov-17.C: Add call to a noreturn function.
* g++.dg/gcov/test-gcov-17.py: Cover new format.
* lib/gcov.exp: Add options for gcov that emit the extra info.
Roger Sayle [Tue, 25 Apr 2023 11:04:52 +0000 (12:04 +0100)]
[Committed] Correct zeroextendqihi2 insn length regression on xstormy16.
My recent tweak to the zeroextendqihi2 pattern on xstormy16 incorrectly
handled the case where the operand was a MEM. MEM operands use a longer
encoding than REG operands, and the incorrect instruction length resulted
in assembler errors (as reported by Jeff Law). This patch restores the
original length resolving this regression. Sorry for the inconvenience.
Committed as obvious, after testing that a cross-compiler to xstormy16-elf
builds from x86_64-pc-linux-gnu, and that gcc.c-torture/execute/memset-2.c
no longer causes "operand out of range" issues in gas.
2023-04-25 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.md (zero_extendqihi2): Restore/fix
length attribute for the first (memory operand) alternative.
aarch64: Leveraging the use of STP instruction for vec_duplicate
The backend pattern for storing a pair of identical values in 32 and
64-bit modes with the machine instruction STP was missing, and
multiple instructions were needed to reproduce this behavior as a
result of a failed RTL pattern match in the combine pass.
For the test case:
typedef long long v2di __attribute__((vector_size (16)));
typedef int v2si __attribute__((vector_size (8)));
void
foo (v2di *x, long long a)
{
v2di tmp = {a, a};
*x = tmp;
}
void
foo2 (v2si *x, int a)
{
v2si tmp = {a, a};
*x = tmp;
}
at -O2 on aarch64 gives:
foo:
stp x1, x1, [x0]
ret
foo2:
stp w1, w1, [x0]
ret
instead of:
foo:
dup v0.2d, x1
str q0, [x0]
ret
foo2:
dup v0.2s, w1
str d0, [x0]
ret
Bootstrapped and regtested on aarch64-none-linux-gnu.
I think it's best to specify the default behavior of nan_state, since
it's not obvious that nan_state() defaults to TRUE. Also, this avoids
the ugly nan_state(false, false) idiom.
gcc/ChangeLog:
* value-range.cc (frange::set): Adjust constructor.
* value-range.h (nan_state::nan_state): Replace default
constructor with one taking an argument.
Eric Botcazou [Tue, 25 Apr 2023 08:46:16 +0000 (10:46 +0200)]
Remove obsolete configure code in gnattools
It was recently pointed out that we generate symbolic links to ghost files
when building the GNAT tools, as the mlib-tgt-specific-*.adb files are gone.
Aldy Hernandez [Mon, 21 Nov 2022 22:18:43 +0000 (23:18 +0100)]
Pass correct type to irange::contains_p() in ipa-cp.cc.
There is a call to contains_p() in ipa-cp.cc which passes incompatible
types. This currently works because deep in the call chain, the legacy
code uses tree_int_cst_lt which performs the operation with
widest_int. With the upcoming removal of legacy, contains_p() will be
stricter.
gcc/ChangeLog:
* ipa-cp.cc (ipa_range_contains_p): New.
(decide_whether_version_node): Use it.
Andrew Pinski [Tue, 25 Apr 2023 00:17:27 +0000 (17:17 -0700)]
Add alternative testcase of phi-opt-25.c that tests phiopt
Right now phi-opt-25.c has tests like `a ? func(a) : CST`
but if we add the simplifications to match.pd, then phi-opt-25.c
will no longer be testing phiopt to make sure these get optimized.
So this adds an alternative version which is designed to test
phiopt.
Committed as obvious after testing the testcase to make sure it does not
fail on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-forwprop.cc (is_combined_permutation_identity): Try to
simplify two successive VEC_PERM_EXPRs with same VLA mask,
where mask chooses elements in reverse order.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/acle/general/rev-1.c: New test.
Patrick Palka [Mon, 24 Apr 2023 17:39:54 +0000 (13:39 -0400)]
libstdc++: Fix __max_diff_type::operator>>= for negative values
This patch fixes sign bit propagation when right-shifting a negative
__max_diff_type value by more than one, a bug that our existing test
coverage didn't expose until r14-159-g03cebd304955a6 fixed the front
end's 'signed typedef-name' handling that the test relies on (which is
a non-standard extension to the language grammar).
libstdc++-v3/ChangeLog:
* include/bits/max_size_type.h (__max_diff_type::operator>>=):
Fix propagation of sign bit.
* testsuite/std/ranges/iota/max_size_type.cc: Avoid using the
non-standard 'signed typedef-name'. Add some compile-time tests
for right-shifting a negative __max_diff_type value by more than
one.
Andrew Pinski [Fri, 21 Apr 2023 21:45:56 +0000 (14:45 -0700)]
PHIOPT: Add support for diamond shaped bb to match_simplify_replacement
This adds diamond shaped form of basic blocks to match_simplify_replacement.
This patch is the start of removing/moving all
of what minmax_replacement does to match.pd, to reduce the code duplication.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
Note the phi-opt-{23,24}.c testcases had an incorrect xfail, as there should
still have been 2 ifs because f4/f5 would not be transformed, since -ABS is
not allowable during early phi-opt.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (match_simplify_replacement): Add new arguments
and support diamond shaped basic block form.
(tree_ssa_phiopt_worker): Update call to match_simplify_replacement.
Andrew Pinski [Sun, 9 Apr 2023 22:47:50 +0000 (22:47 +0000)]
PHIOPT: Ignore predicates for match-and-simplify phi-opt
This fixes a missed optimization where early phi-opt would
not work when there were predicates. The easiest fix is
to change empty_bb_or_one_feeding_into_p to ignore those
statements while checking for the only feeding statement.
Note phi-opt-23.c and phi-opt-24.c still fail as we don't handle
diamond form in match_and_simplify phiopt yet.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (empty_bb_or_one_feeding_into_p):
Instead of calling last_and_only_stmt, look for the last statement
manually.
Andrew Pinski [Fri, 31 Mar 2023 17:29:26 +0000 (17:29 +0000)]
PHIOPT: Factor out some code from match_simplify_replacement
This factors out the code checking if we have an empty bb
or one statement that feeds into the phi so it can be used
when adding diamond shaped bb form to match_simplify_replacement
in the next patch. It also allows for some improvements
in later patches.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (empty_bb_or_one_feeding_into_p):
New function.
(match_simplify_replacement): Call
empty_bb_or_one_feeding_into_p instead of doing it inline.
Andrew Pinski [Thu, 20 Apr 2023 17:56:17 +0000 (10:56 -0700)]
PHIOPT: Allow other diamond uses when do_hoist_loads is true
While working on adding the diamond shaped form to match-and-simplify
phiopt, I noticed that we would not reach there if do_hoist_loads
was true. In the original code before the cleanups it was not
obvious why, but after I finished the cleanups it was just a matter
of removing a continue, and that is what this patch does.
This also happens to fix a bug report that I noticed.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
PR tree-optimization/68894
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker): Remove the
continue for the do_hoist_loads diamond case.
Andrew Pinski [Thu, 20 Apr 2023 17:26:43 +0000 (10:26 -0700)]
PHIOPT: Cleanup tree_ssa_phiopt_worker code
This patch cleans up tree_ssa_phiopt_worker by merging
common code and handling do_store_elim earlier.
Note this does not change any overall logic of the code,
just moves code around enough to be able to do this.
This will make it easier to move code around even more
and a few other fixes I have.
Plus I think all of the do_store_elim code really
should move to its own function, as it is now obvious that
not much code is shared.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker): Rearrange
code for better code readability.
Andrew Pinski [Thu, 20 Apr 2023 16:23:25 +0000 (09:23 -0700)]
PHIOPT: Move check on diamond bb to tree_ssa_phiopt_worker from minmax_replacement
This moves the check that, on the diamond shaped form bbs,
the two middle bbs are only for that diamond shaped form, earlier
into the shared code.
Also remove the redundant check for single_succ_p since that was already
done beforehand.
The next patch will simplify the code even further and remove redundant
checks.
PR tree-optimization/109604
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker): Move the
diamond form check from ...
(minmax_replacement): Here.
gcc/testsuite/ChangeLog:
* gcc.c-torture/compile/pr109604-1.c: New test.
* gcc.c-torture/compile/pr109604-2.c: New test.
Patrick Palka [Mon, 24 Apr 2023 14:33:49 +0000 (10:33 -0400)]
c++, tree: declare some basic functions inline
The functions strip_array_types, is_typedef_decl, typedef_variant_p
and cp_expr_location are used throughout the C++ front end including in
some fairly hot parts (e.g. in the tsubst routines and cp_walk_subtree)
and they're small enough that the overhead of calling them out-of-line
is relatively significant.
So this patch moves their definitions into the appropriate headers to
enable inlining them.
Motivated by a recent LLVM patch I saw, we can use SVE for 64-bit vector integer MUL (plain Advanced SIMD doesn't support it).
Since the Advanced SIMD regs are just the low 128-bit part of the SVE regs it all works transparently.
It's a reasonably straightforward implementation of the mulv2di3 optab that wires it up through the mulvnx2di3 expander and
subregs the results back to the Advanced SIMD modes.
There's more such tricks possible with other operations (and we could do 64-bit multiply-add merged operations too) but for now
this self-contained patch improves the mul case as without it for the testcases in the patch we'd have scalarised the arguments,
moved them to GP regs, performed two GP MULs and moved them back to SIMD regs.
Advertising a mulv2di3 optab from the backend should also allow for more flexible vectorisation opportunities.
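A sketch of the kind of code that benefits (assumed, using the GNU C vector
extension):
typedef long long v2di __attribute__ ((vector_size (16)));
v2di mul (v2di a, v2di b) { return a * b; }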
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (mulv2di3): New expander.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve-neon-modes_1.c: New test.
* gcc.target/aarch64/sve-neon-modes_2.c: New test.
install.texi needs some updates for GCC 13 and trunk:
* We used a mixture of Solaris 2 and Solaris references. Since Solaris
1/SunOS 4 is ancient history by now, consistently use Solaris
everywhere. Likewise, explicit references to Solaris 11 can go in
many places since Solaris 11.3 and 11.4 is all GCC supports.
* Some caveats apply to both Solaris/SPARC and x86, like the difference
between as and gas.
* Some specifics are obsolete, like the /usr/ccs/bin path whose contents
were merged into /usr/bin in Solaris 11.0 already. Likewise, /bin/sh
is ksh93 since Solaris 11.0, so there's no need to explicitly use
/bin/ksh.
* I've removed the reference to OpenCSW: there's barely a need for external
sites to get additional packages. OpenCSW is mostly unmaintained these
days and has been found to be more harmful than helpful.
* The section on assembler and linker to use was partially duplicated.
Better keep the info in one place.
* GNAT is bundled in recent Solaris 11.4 updates, so recommend that.
Tested on i386-pc-solaris2.11 with make doc/gccinstall.{info,pdf} and
inspection of the latter.
aarch64: PR target/109406 Add support for SVE2 unpredicated MUL
SVE2 supports an unpredicated vector integer MUL form that we can emit from our SVE expanders
without using up a predicate register. This patch does so.
As the SVE MUL expansion currently is templated away through a code iterator I did not split it
off just for this case but instead special-cased it in the define_expand. It seemed somewhat less
invasive than the alternatives but I could split it off more explicitly if others want to.
The div-by-bitmask_1.c testcase is adjusted to expect this new MUL form.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
PR target/109406
* config/aarch64/aarch64-sve.md (<optab><mode>3): Handle TARGET_SVE2 MUL
case.
* config/aarch64/aarch64-sve2.md (*aarch64_mul_unpredicated_<mode>): New
pattern.
gcc/testsuite/ChangeLog:
PR target/109406
* gcc.target/aarch64/sve2/div-by-bitmask_1.c: Adjust for unpredicated SVE2
MUL.
* gcc.target/aarch64/sve2/unpred_mul_1.c: New test.
[4/4] aarch64: Convert UABAL2 and SABAL2 patterns to standard RTL codes
The final patch in the series tackles the most complex of this family of patterns, UABAL2 and SABAL2.
These extract the high part of the sources, perform an absdiff on them, widen the result and accumulate.
The motivating testcase for this patch (series) is included and the simplification required doesn't actually
trigger with just the RTL pattern change because rtx_costs block it.
So this patch also extends rtx costs to recognise the (minus (smax (x, y)) (smin (x, y))) expression we use
to describe absdiff in the backend and avoid recursing into its arms.
This allows us to generate the single-instruction sequence expected here.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abal2<mode>): Rename to...
(aarch64_<su>abal2<mode>_insn): ... This. Use RTL codes instead of unspec.
(aarch64_<su>abal2<mode>): New define_expand.
* config/aarch64/aarch64.cc (aarch64_abd_rtx_p): New function.
(aarch64_rtx_costs): Handle ABD rtxes.
* config/aarch64/aarch64.md (UNSPEC_SABAL2, UNSPEC_UABAL2): Delete.
* config/aarch64/iterators.md (ABAL2): Delete.
(sur): Remove handling of UNSPEC_UABAL2 and UNSPEC_SABAL2.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/simd/vabal_combine.c: New test.
[3/4] aarch64: Convert UABAL and SABAL patterns to standard RTL codes
With the SABDL and UABDL patterns converted, their accumulating forms UABAL and SABAL are not much more complicated.
There's an accumulator argument that we, err, accumulate into with a PLUS once all the widening is done.
Some necessary renaming of patterns relating to the removal of UNSPEC_SABAL and UNSPEC_UABAL is included.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abal<mode>): Rename to...
(aarch64_<su>abal<mode>): ... This. Use RTL codes instead of unspec.
(<sur>sadv16qi): Rename to...
(<su>sadv16qi): ... This. Adjust for the above.
* config/aarch64/aarch64-sve.md (<sur>sad<vsi2qi>): Rename to...
(<su>sad<vsi2qi>): ... This. Adjust for the above.
* config/aarch64/aarch64.md (UNSPEC_SABAL, UNSPEC_UABAL): Delete.
* config/aarch64/iterators.md (ABAL): Delete.
(sur): Remove handling of UNSPEC_SABAL and UNSPEC_UABAL.
[2/4] aarch64: Convert UABDL2 and SABDL2 patterns to standard RTL codes
Similar to the previous patch for UABDL and SABDL, this patch covers the *2 versions that vec_select the high half
of their input to do the absdiff and extend. A define_expand is added for the intrinsic to create the "select-high-half" RTX the pattern expects.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abdl2<mode>): Rename to...
(aarch64_<su>abdl2<mode>_insn): ... This. Use RTL codes instead of unspec.
(aarch64_<su>abdl2<mode>): New define_expand.
* config/aarch64/aarch64.md (UNSPEC_SABDL2, UNSPEC_UABDL2): Delete.
* config/aarch64/iterators.md (ABDL2): Delete.
(sur): Remove handling of UNSPEC_SABDL2 and UNSPEC_UABDL2.
[1/4] aarch64: Convert UABDL and SABDL patterns to standard RTL codes
This is the first patch in a series to improve the RTL representation of the sum-of-absolute-differences patterns
in the backend. We can use standard RTL codes and remove some unspecs.
For UABDL and SABDL we have a widening of the result so we can represent uabdl (x, y) as (zero_extend (minus (umax (x, y)) (umin (x, y))))
and sabdl (x, y) as (zero_extend (minus (smax (x, y)) (smin (x, y)))).
It is important to use zero_extend rather than sign_extend for the sabdl case, as the result of the absolute difference is still a positive unsigned value
(the signedness of the operation refers to the values being diffed, not the absolute value of the difference) that must be zero-extended.
Bootstrapped and tested on aarch64-none-linux-gnu (these intrinsics are reasonably well-covered by the advsimd-intrinsics.exp tests)
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<sur>abdl<mode>): Rename to...
(aarch64_<su>abdl<mode>): ... This. Use standard RTL ops instead of
unspec.
* config/aarch64/aarch64.md (UNSPEC_SABDL, UNSPEC_UABDL): Delete.
* config/aarch64/iterators.md (ABDL): Delete.
(sur): Remove handling of UNSPEC_SABDL and UNSPEC_UABDL.
aarch64: Add pattern to match zero-extending scalar result of ADDLV
The vaddlv_u8 and vaddlv_u16 intrinsics produce a widened scalar result (uint16_t and uint32_t).
The ADDLV instructions themselves zero the rest of the V register, which gives us a free zero-extension
to 32 and 64 bits, similar to how it works on the GP reg side.
Because we don't model that zero-extension in the machine description this can cause GCC to move the
results of these instructions to the GP regs just to do a (superfluous) zero-extension.
This patch just adds a pattern to catch these cases. For the testcases we can now generate no zero-extends
or GP<->FP reg moves, whereas before we generated stuff like:
foo_8_32:
uaddlv h0, v0.8b
umov w1, v0.h[0] // FP<->GP move with zero-extension!
str w1, [x0]
ret
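foo_8_32 is presumably along these lines (a sketch using ACLE intrinsics):
#include <arm_neon.h>
void foo_8_32 (unsigned int *out, uint8x8_t a)
{
  *out = vaddlv_u8 (a);   /* 16-bit result, zero-extended to 32 bits for free */
}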
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md
(*aarch64_<su>addlv<VDQV_L:mode>_ze<GPI:mode>): New pattern.
Richard Biener [Tue, 18 Apr 2023 15:26:57 +0000 (17:26 +0200)]
This replaces uses of last_stmt where we do not require debug skipping
There are quite some cases which want to access the control stmt
ending a basic-block. Since there cannot be debug stmts after
such stmt there's no point in using last_stmt which skips debug
stmts and can be a compile-time hog for larger testcases.
Richard Biener [Wed, 19 Apr 2023 09:24:00 +0000 (11:24 +0200)]
Avoid repeated forwarder_block_p calls in CFG cleanup
CFG cleanup maintains BB_FORWARDER_BLOCK and uses FORWARDER_BLOCK_P
to check it, apart from two places in outgoing_edges_match which use
forwarder_block_p alongside many BB_FORWARDER_BLOCK uses.
The following adjusts those two places.
* cfgcleanup.cc (outgoing_edges_match): Use FORWARDER_BLOCK_P.
V2 patch for: https://patchwork.sourceware.org/project/gcc/patch/20230330012804.110539-1-juzhe.zhong@rivai.ai/
which has been reviewed.
This patch addresses Jeff's comment and refines the ChangeLog to give
clearer information.
gcc/ChangeLog:
* config/riscv/vector-iterators.md: New unspec to refine fault first load pattern.
* config/riscv/vector.md: Refine fault first load pattern to erase avl from instructions
with the fault first load property.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/vsetvl/ffload-1.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-2.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-3.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-5.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-6.c: New test.
* gcc.target/riscv/rvv/vsetvl/ffload-7.c: New test.
liuhongt [Wed, 15 Mar 2023 05:41:06 +0000 (13:41 +0800)]
Add testcases for ffs/ctz vectorization.
gcc/testsuite/ChangeLog:
PR tree-optimization/109011
* gcc.target/i386/pr109011-b1.c: New test.
* gcc.target/i386/pr109011-b2.c: New test.
* gcc.target/i386/pr109011-d1.c: New test.
* gcc.target/i386/pr109011-d2.c: New test.
* gcc.target/i386/pr109011-q1.c: New test.
* gcc.target/i386/pr109011-q2.c: New test.
* gcc.target/i386/pr109011-w1.c: New test.
* gcc.target/i386/pr109011-w2.c: New test.
Gaius Mulley [Sun, 23 Apr 2023 20:09:45 +0000 (21:09 +0100)]
modula2: Add -lnsl -lsocket libraries to gcc/testsuite/lib/gm2.exp
Solaris requires -lnsl -lsocket (present in the driver), but these are not
added when running the testsuite. This patch tests whether the target is
*-*-solaris2 and conditionally appends the above libraries.
gcc/testsuite/ChangeLog:
* lib/gm2.exp (gm2_target_compile_default): Conditionally
append -lnsl -lsocket to ldflags.
Roger Sayle [Sun, 23 Apr 2023 09:35:53 +0000 (10:35 +0100)]
[xstormy16] Update xstormy16_rtx_costs.
This patch provides an improved rtx_costs target hook on xstormy16.
The current implementation has the unfortunate property that it claims
that zero_extendhisi2 is very cheap, even though the machine description
doesn't provide that instruction/pattern. Doh! Rewriting the
xstormy16_rtx_costs function has additional benefits, including
making more use of the (short) "mul" instruction when optimizing
for size with -Os.
2023-04-23 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.cc (xstormy16_rtx_costs): Rewrite to
provide reasonable values for common arithmetic operations and
immediate operands (in several machine modes).
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/mulhi.c: New test case.
Roger Sayle [Sun, 23 Apr 2023 09:30:30 +0000 (10:30 +0100)]
[xstormy16] Add extendhisi2 and zero_extendhisi2 patterns to stormy16.md
This patch adds a pair of define_insn patterns to the xstormy16 machine
description that provide extendhisi2 and zero_extendhisi2, i.e. 16-bit
to 32-bit sign- and zero-extension respectively. This functionality is
already synthesized during RTL expansion, but providing patterns allow
the semantics to be exposed to the RTL optimizers. To simplify things,
this patch introduces a new %h0 output format, for emitting the high_part
register name of a double-word (SImode) register pair. The actual
code generated is identical to before.
Whilst there, I also fixed the instruction lengths and formatting of
the zero_extendqihi2 pattern. Then, mostly for documentation purposes
as the 'T' constraint isn't yet implemented, I've added a "and Rx,#255"
alternative to zero_extendqihi2 that takes advantage of its efficient
instruction encoding.
2023-04-23 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.cc (xstormy16_print_operand): Add %h
format specifier to output high_part register name of SImode reg.
* config/stormy16/stormy16.md (extendhisi2): New define_insn.
(zero_extendqihi2): Fix lengths, consistent formatting and add
"and Rx,#255" alternative, for documentation purposes.
(zero_extendhisi2): New define_insn.
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/extendhisi2.c: New test case.
* gcc.target/xstormy16/zextendhisi2.c: Likewise.
Roger Sayle [Sun, 23 Apr 2023 09:25:04 +0000 (10:25 +0100)]
[xstormy16] Improved SImode shifts by two bits.
Currently on xstormy16 SImode shifts by a single bit require two
instructions, and shifts by other non-zero integer immediate constants
require five instructions. This patch implements the obvious optimization
that shifts by two bits can be done in four instructions, by using two
single-bit sequences.
Hence, ashift_2 was previously generated as:
mov r7,r2 | shl r2,#2 | shl r3,#2 | shr r7,#14 | or r3,r7
ret
and with this patch we now generate:
shl r2,#1 | rlc r3,#1 | shl r2,#1 | rlc r3,#1
ret
2023-04-23 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/stormy16/stormy16.cc (xstormy16_output_shift): Implement
SImode shifts by two by performing a single bit SImode shift twice.
gcc/testsuite/ChangeLog
* gcc.target/xstormy16/shiftsi.c: New test case.
liuhongt [Wed, 8 Feb 2023 14:27:54 +0000 (22:27 +0800)]
Adjust testcases after better RA decision.
After the optimization for RA, a memory op is not propagated into
more than one instruction, which makes the testcases not generate vxorps since
the memory is loaded into the dest, and the dest is never unused now.
So rewrite testcases to make the codegen more stable.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx2-dest-false-dep-for-glc.c: Rewrite
testcase to make the codegen more stable.
* gcc.target/i386/avx512dq-dest-false-dep-for-glc.c: Ditto
* gcc.target/i386/avx512f-dest-false-dep-for-glc.c: Ditto.
* gcc.target/i386/avx512fp16-dest-false-dep-for-glc.c: Ditto.
* gcc.target/i386/avx512vl-dest-false-dep-for-glc.c: Ditto.
Andrew Pinski [Wed, 19 Apr 2023 21:42:45 +0000 (14:42 -0700)]
PHIOPT: Improve readability of tree_ssa_phiopt_worker
This small patch just changes around the code slightly to
make it easier to understand which cases handle the diamond
shaped BB for both do_store_elim/do_hoist_loads.
There is no effect on code output at all since all of the checks
are the same still.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker):
Change the code around slightly to move diamond
handling for do_store_elim/do_hoist_loads out of
the big if/else.
Andrew Pinski [Wed, 19 Apr 2023 17:31:20 +0000 (10:31 -0700)]
PHIOPT: Improve minmax diamond detection for phiopt1
For diamond bb phi node detection, there is a check
to make sure bb1 is not empty. But in the case where
bb1 is empty except for a predicate, empty_block_p
will still return true, but the minmax code handles
that case already, so there is no reason to check
if the basic block is empty.
This patch removes that check and removes some
xfails.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker):
Remove check on empty_block_p.
Harald Anlauf [Thu, 20 Apr 2023 19:47:34 +0000 (21:47 +0200)]
Fortran: function results never have the ALLOCATABLE attribute [PR109500]
Fortran 2018 8.5.3 (ALLOCATABLE attribute) explains in Note 1 that the
result of referencing a function whose result variable has the ALLOCATABLE
attribute is a value that does not itself have the ALLOCATABLE attribute.
gcc/fortran/ChangeLog:
PR fortran/109500
* interface.cc (gfc_compare_actual_formal): Reject allocatable
functions being used as actual argument for an allocatable dummy.
gcc/testsuite/ChangeLog:
PR fortran/109500
* gfortran.dg/allocatable_function_11.f90: New test.
Co-authored-by: Steven G. Kargl <kargl@gcc.gnu.org>
Jakub Jelinek [Sat, 22 Apr 2023 18:16:08 +0000 (20:16 +0200)]
testsuite: Fix up pr109011-*.c tests for powerpc [PR109572]
As reported, pr109011-{4,5}.c tests fail on powerpc.
I thought they should have the same counts as the corresponding -{2,3}.c
tests, the only difference is that -{2,3}.c are int while -{4,5}.c are
long long. But there are 2 issues. One is that in the foo
function the vectorization cost comparison kicked in: while in -{2,3}.c
we use vectorization factor 4 and it was found beneficial, when using
long long it was just vf 2 and the scalar cost of doing
p[i] = __builtin_ctzll (q[i]) twice looked smaller than the vectorized
statements. I could disable the cost model, but instead chose to add
some further arithmetics to those functions to make it beneficial even
with vf 2.
After that change, pr109011-4.c still failed; I was expecting 4 .CTZ calls
there on power9, 3 vectorized and one in scalar code, but for some reason
the scalar one didn't trigger. As I really want to count just the
vectorized calls, I've added the vect prefix on the variables to ensure
I'm only counting vectorized calls and decreased the 4 counts to 3.
2023-04-22 Jakub Jelinek <jakub@redhat.com>
PR testsuite/109572
* gcc.dg/vect/pr109011-1.c: In scan-tree-dump-times regexps match also
vect prefix to make sure we only count vectorized calls.
* gcc.dg/vect/pr109011-2.c: Likewise. On powerpc* expect just count 3
rather than 4.
* gcc.dg/vect/pr109011-3.c: In scan-tree-dump-times regexps match also
vect prefix to make sure we only count vectorized calls.
* gcc.dg/vect/pr109011-4.c: Likewise. On powerpc* expect just count 3
rather than 4.
(foo): Add 2 further arithmetic ops to the loop to make it appear
worthwhile for vectorization heuristics on powerpc.
* gcc.dg/vect/pr109011-5.c: In scan-tree-dump-times regexps match also
vect prefix to make sure we only count vectorized calls.
(foo): Add 2 further arithmetic ops to the loop to make it appear
worthwhile for vectorization heuristics on powerpc.
Jakub Jelinek [Sat, 22 Apr 2023 18:14:06 +0000 (20:14 +0200)]
Fix up bootstrap with GCC 4.[89] after RAII auto_mpfr and auto_mpz [PR109589]
On Tue, Apr 18, 2023 at 03:39:41PM +0200, Richard Biener via Gcc-patches wrote:
> The following adds two RAII classes, one for mpz_t and one for mpfr_t
> making object lifetime management easier. Both formerly require
> explicit initialization with {mpz,mpfr}_init and release with
> {mpz,mpfr}_clear.
This unfortunately broke bootstrap when using GCC 4.8.x or 4.9.x as
it uses deleted friends which weren't supported until PR62101 fixed
them in 2014 for GCC 5.
The following patch adds a workaround, not deleting those friends
for those old versions.
While it means if people add those mp*_{init{,2},clear} calls on auto_mp*
objects they won't notice when doing non-bootstrap builds using
very old system compilers, people should be bootstrapping their changes
and it will be caught during bootstraps even when starting with those
old compilers, plus most people actually use much newer compilers
when developing.
2023-04-22 Jakub Jelinek <jakub@redhat.com>
PR bootstrap/109589
* system.h (class auto_mpz): Workaround PR62101 bug in GCC 4.8 and 4.9.
* realmpfr.h (class auto_mpfr): Likewise.
Jeff Law [Sat, 22 Apr 2023 16:43:35 +0000 (10:43 -0600)]
Adjust rx movsicc tests
The rx port has a target specific test, movsicc, which is naturally meant to verify
that if-conversion is happening on the expected cases.
Unfortunately the test is poorly written. The core problem is there are 8
distinct tests and each of those tests is expected to generate a specific
sequence. Unfortunately, various generic bits might turn an equality test
into an inequality test or make other similar changes.
The net result is the assembly matching patterns may find a particular sequence,
but it may be for a different function than was originally intended. ie,
test1's output may match the expected assembly for test5. Ugh!
This patch breaks the movsicc test down into 8 distinct tests and adjusts the
patterns they match. The nice thing is all these tests are supposed to have
branches that use a bCC 1f form. So we can make them a bit more robust by
ignoring the actual condition code used. So if we change eq to ne, as long
as we match the movsicc pattern, we're OK. And the 1f style is only used by
the movsicc pattern.
With the tests broken down it's a lot easier to diagnose why one test fails
after the recent changes to if-conversion. movsicc-3 fails because of the
profitability test. It's more expensive than the other cases because of its
use of (const_int 10) rather than (const_int 0). (const_int 0) naturally has
a smaller cost.
It looks to me like in this context (const_int 10) should have the same cost
as (const_int 0). But I'm nowhere near well versed in the cost model for the
rx port. So I'm just leaving the test as xfailed. If someone cares enough,
they can dig into it further.
Jakub Jelinek [Sat, 22 Apr 2023 08:24:29 +0000 (10:24 +0200)]
match.pd: Fix fneg/fadd optimization [PR109583]
The following testcase ICEs on x86: the foo function since my r14-22
improvement, but bar already since r13-4122. The problem is the same:
in the if expression related_vector_mode is called and that starts with
gcc_assert (VECTOR_MODE_P (vector_mode));
but nothing in the fneg/fadd match.pd pattern actually checks if the
VEC_PERM type has VECTOR_MODE_P (vec_mode). In this case it has BLKmode
and so it ICEs.
The following patch makes sure we don't ICE on it.
2023-04-22 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109583
* match.pd (fneg/fadd simplify): Don't call related_vector_mode
if vec_mode is not VECTOR_MODE_P.
Jan Hubicka [Sat, 22 Apr 2023 07:20:45 +0000 (09:20 +0200)]
Update loop estimate after header duplication
Loop header copying implements partial loop peeling. If all exits of the loop
are peeled (which is the common case) the number of iterations decreases by 1.
Without noting this, for loops iterating zero times, we end up re-peeling them
later in the loop peeling pass which is wasteful.
This patch commonizes the code for estimate update and adds logic to detect
when all (likely) exits were peeled by loop-ch.
We are still wrong about update of estimate however: if the exits behave
randomly with given probability, loop peeling does not decrease expected
iteration counts, just decreases probability that loop will be executed.
In this case we thus incorrectly decrease any_estimate. Doing so however
at least helps us to not peel or optimize the loop hard later.
If the loop iterates precisely the estimated number of iterations, the estimate
decreases, but we are wrong about decreasing the header frequency. We already
have logic that tries to prove that loop exit will not be taken in peeled out
iterations and it may make sense to special case this.
I also fixed a problem where we had an off-by-one error in iteration count updating.
It makes perfect sense to expect a loop to have 0 iterations. However if the bound
drops to negative, we lose info about the loop behaviour (since we have no
profile data reaching the loop body).
2023-04-22 Jan Hubicka <hubicka@ucw.cz>
Ondrej Kubanek <kubanek0ondrej@gmail.com>
* cfgloopmanip.h (adjust_loop_info_after_peeling): Declare.
* tree-ssa-loop-ch.cc (ch_base::copy_headers): Fix updating of
loop profile and bounds after header duplication.
* tree-ssa-loop-ivcanon.cc (adjust_loop_info_after_peeling):
Break out from try_peel_loop; fix handling of 0 iterations.
(try_peel_loop): Use adjust_loop_info_after_peeling.
gcc/testsuite/ChangeLog:
2023-04-22 Jan Hubicka <hubicka@ucw.cz>
Ondrej Kubanek <kubanek0ondrej@gmail.com>
* gcc.dg/tree-ssa/peel1.c: Decrease number of peels by 1.
* gcc.dg/unroll-8.c: Decrease loop iteration estimate.
* gcc.dg/tree-prof/peel-2.c: New test.
In the comments for PR108099 Jakub provided some testcases that demonstrated
that even before the regression noted in the patch we were getting the
semantics of this extension wrong: in the unsigned case we weren't producing
the corresponding standard unsigned type but another distinct one of the
same size, and in the signed case we were just dropping it on the floor and
not actually returning a signed type at all.
The former issue is fixed by using c_common_signed_or_unsigned_type instead
of unsigned_type_for, and the latter issue by adding a (signed_p &&
typedef_decl) case.
This patch introduces a failure on std/ranges/iota/max_size_type.cc due to
the latter issue, since the testcase expects 'signed rep_t' to do something
sensible, and previously we didn't. Now that we do, it exposes a bug in the
__max_diff_type::operator>>= handling of sign extension: when we evaluate
-1000 >> 2 in __max_diff_type we keep the MSB set, but leave the
second-most-significant bit cleared.
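A minimal illustration of the required behaviour (not from the patch; GCC
defines right shift of negative values as arithmetic):
#include <cassert>
int main ()
{
  signed char x = -8;       // 0b11111000
  signed char y = x >> 2;   // every vacated high bit must copy the sign bit
  assert (y == -2);         // 0b11111110, not 0b10111110
  return 0;
}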
PR c++/108099
gcc/cp/ChangeLog:
* decl.cc (grokdeclarator): Don't clear typedef_decl after 'unsigned
typedef' pedwarn. Use c_common_signed_or_unsigned_type. Also
handle 'signed typedef'.
gcc/testsuite/ChangeLog:
* g++.dg/ext/int128-8.C: Remove xfailed dg-bogus markers.
* g++.dg/ext/unsigned-typedef2.C: New test.
* g++.dg/ext/unsigned-typedef3.C: New test.
$(P) seems to have been a workaround for some old, proprietary make
implementations that we no longer support. It was removed in r0-31149-gb8dad04b688e9c.
gcc/m2/ChangeLog:
* Make-lang.in: Remove references to $(P).
* Make-maintainer.in: Ditto.
aarch64: Emit single-instruction for smin (x, 0) and smax (x, 0)
Motivated by https://reviews.llvm.org/D148249, we can expand to a single instruction
for the SMIN (x, 0) and SMAX (x, 0) cases using the combined AND/BIC and ASR operations.
Given that we already have well-fitting TARGET_CSSC patterns and expanders for the min/max codes
in the backend this patch does some minor refactoring to ensure we emit the right SMAX/SMIN RTL codes
for TARGET_CSSC, fall back to the generic expanders or emit a simple SMIN/SMAX with 0 RTX for !TARGET_CSSC
that is now matched by a separate pattern.
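For reference, the underlying identities (a sketch assuming 64-bit long and
arithmetic right shift):
long smax0 (long x) { return x & ~(x >> 63); }   // smax (x, 0): BIC with ASR
long smin0 (long x) { return x & (x >> 63); }    // smin (x, 0): AND with ASR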
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/ChangeLog:
* config/aarch64/aarch64.md (aarch64_umax<mode>3_insn): Delete.
(umax<mode>3): Emit raw UMAX RTL instead of going through gen_ function
for umax.
(<optab><mode>3): New define_expand for MAXMIN_NOUMAX codes.
(*aarch64_<optab><mode>3_zero): Define.
(*aarch64_<optab><mode>3_cssc): Likewise.
* config/aarch64/iterators.md (maxminand): New code attribute.
A user has requested that we support the -mtp= option in aarch64 GCC for changing
the TPIDR register to read for TLS accesses. I'm not a big fan of the option name,
but we already support it in the arm port and Clang supports it for AArch64 already,
where it accepts the 'el0', 'el1', 'el2', 'el3' values.
This patch implements the same functionality in GCC.
Bootstrapped and tested on aarch64-none-linux-gnu.
Confirmed with godbolt that the sequences and options are the same as what Clang accepts/generates.
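For example, a hypothetical invocation such as gcc -O2 -mtp=el1 foo.c would
make TLS accesses read TPIDR_EL1 rather than the usual TPIDR_EL0.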
gcc/testsuite/ChangeLog:
PR target/108779
* gcc.target/aarch64/mtp.c: New test.
* gcc.target/aarch64/mtp_1.c: New test.
* gcc.target/aarch64/mtp_2.c: New test.
* gcc.target/aarch64/mtp_3.c: New test.
* gcc.target/aarch64/mtp_4.c: New test.
aarch64: PR target/99195 Add scheme to optimise away vec_concat with zeroes on 64-bit Advanced SIMD ops
I finally got around to trying out the define_subst approach for PR target/99195.
The problem we have is that many Advanced SIMD instructions have 64-bit vector variants that
clear the top half of the 128-bit Q register. This would allow the compiler to avoid generating
explicit zeroing instructions to concat the 64-bit result with zeroes for code like:
vcombine_u16(vadd_u16(a, b), vdup_n_u16(0))
We've been getting user reports of GCC missing this optimisation in real world code, so it's worth
doing something about it.
The straightforward approach that we've been taking so far is adding extra patterns in aarch64-simd.md
that match the 64-bit result in a vec_concat with zeroes. Unfortunately for big-endian the vec_concat
operands to match have to be the other way around, so we would end up adding two extra define_insns.
This would lead to too much bloat in aarch64-simd.md.
This patch defines a pair of define_subst constructs that allow us to annotate patterns in aarch64-simd.md
with the <vczle> and <vczbe> subst_attrs and the compiler will automatically produce the vec_concat widening patterns,
properly gated for BYTES_BIG_ENDIAN when needed. This seems like the least intrusive way to describe the extra zeroing semantics.
I've had a look at the generated insn-*.cc files in the build directory and it seems that define_subst does what we want it to do
when applied multiple times on a pattern in terms of insn conditions and modes.
This patch adds the define_subst machinery and adds the annotations to some of the straightforward binary and unary integer
operations. Many more such annotations are possible and I aim to add them in future patches if this approach is acceptable.
Bootstrapped and tested on aarch64-none-linux-gnu and on aarch64_be-none-elf.
Patrick Palka [Fri, 21 Apr 2023 16:59:37 +0000 (12:59 -0400)]
c++, tree: optimize walk_tree_1 and cp_walk_subtrees
These functions currently repeatedly dereference tp during the subtree
walks, dereferences which the compiler can't CSE because it can't
guarantee that the subtree walking doesn't modify *tp.
But we already implicitly require that TREE_CODE (*tp) remains the same
throughout the subtree walks, so it doesn't seem to be a huge leap to
strengthen that to requiring *tp remains the same.
So this patch manually CSEs the dereferences of *tp. This means that a
callback function can no longer replace *tp with another tree (of the
same TREE_CODE) when walking one of its subtrees, but that doesn't sound
like a useful capability anyway.
gcc/cp/ChangeLog:
* tree.cc (cp_walk_subtrees): Avoid repeatedly dereferencing tp.
<case DECLTYPE_TYPE>: Use cp_unevaluated and WALK_SUBTREE.
<case ALIGNOF_EXPR etc>: Likewise.
gcc/ChangeLog:
* tree.cc (walk_tree_1): Avoid repeatedly dereferencing tp
and type_p.
Jan Hubicka [Fri, 21 Apr 2023 16:13:35 +0000 (18:13 +0200)]
Fix bootstrap failure in tree-ssa-loop-ch.cc
I managed to mix up the patch and its WIP version in the previous commit.
This patch adds the missing edge iterator and also fixes a side
case where new loop header would have multiple latches.
Vineet Gupta [Wed, 1 Mar 2023 03:27:26 +0000 (19:27 -0800)]
expansion: make layout of x_shift*cost[][][] more efficient
While debugging expmed.[ch] for PR/108987 I saw that some of the cost arrays have
a less than ideal layout, as follows:
x_shift*cost[0..63][speed][modes]
We would want speed to be the first index since a typical compile will have
that fixed, followed by mode and then the shift values.
This should be non-functional from a compiler semantics pov, except for
executing slightly faster due to better locality of shift values for a
given speed and mode. It is also a bit more intuitive when debugging.
gcc/ChangeLog:
* expmed.h (x_shift*_cost): Convert to int [speed][mode][shift].
(shift*_cost_ptr ()): Access x_shift*_cost array directly.
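Schematically (a sketch with assumed names and bounds, not the exact GCC
declarations):
#define NUM_MODES 130                      /* stand-in for NUM_MACHINE_MODES */
int x_shift_cost_old[64][2][NUM_MODES];    /* [shift][speed][mode]: poor locality */
int x_shift_cost_new[2][NUM_MODES][64];    /* [speed][mode][shift]: all shifts for
                                              one (speed, mode) pair contiguous */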