Lewis Hyatt [Fri, 12 Jan 2024 18:26:06 +0000 (13:26 -0500)]
libcpp: Support extended characters for #pragma {push,pop}_macro [PR109704]
The implementation of #pragma push_macro and #pragma pop_macro has to date
made use of an ad-hoc function, _cpp_lex_identifier(), which lexes an
identifier out of a string. When support was added for extended characters
in identifiers ($, UCNs, or UTF-8), that support was added only for the
"normal" way of lexing identifiers out of a cpp_buffer (_cpp_lex_direct) and
not for the ad-hoc way. Consequently, extended identifiers are not usable
with these pragmas.
The logic for lexing identifiers has become more complicated than it was
when _cpp_lex_identifier() was written -- it now handles things like \N{}
escapes in C++, for instance -- and it no longer seems practical to maintain
a redundant code path for lexing identifiers. Address the issue by changing
the implementation of #pragma {push,pop}_macro to lex identifiers in the
expected way, i.e. by pushing a cpp_buffer and lexing the identifier from
there.
The existing implementation has some quirks because of the ad-hoc parsing
logic. For example, a string containing extra characters after the macro
name, when passed identically to both push_macro and pop_macro, actually
does successfully restore "X". This is because the key for looking
up the saved macro on the push stack is the original string passed, so the
string passed to pop_macro needs to match it exactly. It is not that easy to
reproduce this logic in the world of extended characters, given that for
example it should be valid to pass a UCN to push_macro, and the
corresponding UTF-8 to pop_macro. Given that this aspect of the existing
behavior seems unintentional and has no tests (and does not match other
implementations), I opted to make the new logic more straightforward. The
string passed needs to lex to one token, which must be a valid identifier,
or else no action is taken and no error is generated. Any diagnostics
encountered during lexing (e.g., due to a UTF-8 character not permitted to
appear in an identifier) are also suppressed.
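As an illustration of the new behavior (my sketch, in the spirit of the new
pragma-push-pop-utf8.c test rather than copied from it), the same extended
identifier can now be spelled as a UCN in one pragma and as UTF-8 in the
other:

  #define \u00e9 1
  #pragma push_macro("\u00e9")
  #undef \u00e9
  #pragma pop_macro("é")  /* same identifier, spelled in UTF-8 */
  #if \u00e9 != 1
  #error "pop_macro did not restore the macro"
  #endif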
It would be nice (for GCC 15) to also add a warning if a pop_macro does not
match a previous push_macro.
libcpp/ChangeLog:
PR preprocessor/109704
* include/cpplib.h (class cpp_auto_suppress_diagnostics): New class.
* errors.cc
(cpp_auto_suppress_diagnostics::cpp_auto_suppress_diagnostics): New
function.
(cpp_auto_suppress_diagnostics::~cpp_auto_suppress_diagnostics): New
function.
* charset.cc (noop_diagnostic_cb): Remove.
(cpp_interpret_string_ranges): Refactor diagnostic suppression logic
into new class cpp_auto_suppress_diagnostics.
(count_source_chars): Likewise.
* directives.cc (cpp_pop_definition): Add cpp_hashnode argument.
(lex_identifier_from_string): New static helper function.
(push_pop_macro_common): Refactor common logic from
do_pragma_push_macro and do_pragma_pop_macro; use
lex_identifier_from_string instead of _cpp_lex_identifier.
(do_pragma_push_macro): Reimplement using push_pop_macro_common.
(do_pragma_pop_macro): Likewise.
* internal.h (_cpp_lex_identifier): Remove.
* lex.cc (lex_identifier_intern): Remove.
(_cpp_lex_identifier): Remove.
gcc/testsuite/ChangeLog:
PR preprocessor/109704
* c-c++-common/cpp/pragma-push-pop-utf8.c: New test.
* g++.dg/pch/pushpop-2.C: New test.
* g++.dg/pch/pushpop-2.Hs: New test.
* gcc.dg/pch/pushpop-2.c: New test.
* gcc.dg/pch/pushpop-2.hs: New test.
Allow for class type coarray parameters. [PR77871]
gcc/fortran/ChangeLog:
PR fortran/77871
* trans-expr.cc (gfc_conv_derived_to_class): Assign token when
converting a coarray to class.
(gfc_get_tree_for_caf_expr): For classes get the caf decl from
the saved descriptor.
(gfc_get_caf_token_offset): Assert that coarray=lib is set and
cover more cases of where the tree holding the coarray token can be found.
* trans-intrinsic.cc (gfc_conv_intrinsic_caf_get): Use unified
test for pointers.
Tamar Christina [Mon, 14 Oct 2024 13:01:24 +0000 (14:01 +0100)]
middle-end: copy STMT_VINFO_STRIDED_P when DR is replaced [PR116956]
When move_dr copies a DR from one statement to another, it seems we've
forgotten to copy the STMT_VINFO_STRIDED_P flag. This leaves the new DR in a
broken state where it has a non constant stride but isn't marked as strided.
This causes the ICE in the PR: dataref analysis fails during epilogue
vectorization, violating the assumption that, while costing may
fail for epilogue vectorization, DR analysis cannot fail if it succeeded for
the main loop.
Tamar Christina [Mon, 14 Oct 2024 13:00:25 +0000 (14:00 +0100)]
simplify-rtx: Fix incorrect folding of shift and AND [PR117012]
The optimization added in r15-1047-g7876cde25cbd2f is using the wrong
operation to check for uniform constant vectors.
The author intended to check that all the lanes in the vector are the same and
so used CONST_VECTOR_DUPLICATE_P. However, this only checks that the vector
is created from a pattern duplication, but doesn't say how many pattern
alternatives make up the duplication. Normally one would need to check this
separately or use const_vec_duplicate_p.
Without this the optimization incorrectly triggers.
gcc/ChangeLog:
PR rtl-optimization/117012
* simplify-rtx.cc (simplify_context::simplify_binary_operation_1): Use
const_vec_duplicate_p instead of CONST_VECTOR_DUPLICATE_P.
gcc/testsuite/ChangeLog:
PR rtl-optimization/117012
* gcc.target/aarch64/pr117012.c: New test.
Tamar Christina [Mon, 14 Oct 2024 10:58:59 +0000 (11:58 +0100)]
middle-end: support SLP early break
This patch introduces feature parity for early break in the SLP-only
vectorizer.
The approach taken here is to treat the early exits as root statements for an
SLP tree. This means that we don't need any changes to build_slp to support
gconds.
Codegen for the gcond itself now has to be done out of line, but the body of
the SLP blocks is simply driven by SLP scheduling. There is a slight
awkwardness in having re-used vectorizable_early_exit for both SLP and
non-SLP, but I've documented the differences; when I did try to refactor it,
it wasn't really worth it given that this is a temporary state anyway.
This version is restricted to lane = 1; as such, we can re-use the existing
move_early_break function instead of having to do the safety update through
scheduling. I have a branch where I'm working on that, but lane > 1 is out of
scope for GCC 15 anyway. The only reason I will try, as a stretch goal, to get
moving-through-scheduling done is so we get epilogue vectorization back for
early break.
The example:
unsigned test4(unsigned x)
{
  unsigned ret = 0;
  for (int i = 0; i < N; i++)
    {
      vect_b[i] = x + i;
      if (vect_a[i]*2 != x)
        break;
      vect_a[i] = x;
    }
  return ret;
}
builds the following SLP instance for early break:
note: Analyzing vectorizable control flow: if (patt_6 != 0)
note: Starting SLP discovery for
note: patt_6 = _4 != x_9(D);
note: starting SLP discovery for node 0x63abc80
note: Build SLP for patt_6 = _4 != x_9(D);
note: precomputed vectype: vector(4) <signed-boolean:32>
note: nunits = 4
note: vect_is_simple_use: operand x_9(D), type of def: external
note: vect_is_simple_use: operand # RANGE [irange] unsigned int [0, 0][2, +INF] MASK 0xffff
_3 * 2, type of def: internal
note: starting SLP discovery for node 0x63abdc0
note: Build SLP for _4 = _3 * 2;
note: precomputed vectype: vector(4) unsigned int
note: nunits = 4
note: vect_is_simple_use: operand #
vect_aD.4416[i_15], type of def: internal
note: vect_is_simple_use: operand 2, type of def: constant
note: starting SLP discovery for node 0x63abe60
note: Build SLP for _3 = vect_a[i_15];
note: precomputed vectype: vector(4) unsigned int
note: nunits = 4
note: SLP discovery for node 0x63abe60 succeeded
note: SLP discovery for node 0x63abdc0 succeeded
note: SLP discovery for node 0x63abc80 succeeded
note: SLP size 3 vs. limit 10.
note: Final SLP tree for instance 0x6474190:
note: node 0x63abc80 (max_nunits=4, refcnt=2) vector(4) <signed-boolean:32>
note: op template: patt_6 = _4 != x_9(D);
note: stmt 0 patt_6 = _4 != x_9(D);
note: children 0x63abd20 0x63abdc0
note: node (external) 0x63abd20 (max_nunits=1, refcnt=1)
note: { x_9(D) }
note: node 0x63abdc0 (max_nunits=4, refcnt=2) vector(4) unsigned int
note: op template: _4 = _3 * 2;
note: stmt 0 _4 = _3 * 2;
note: children 0x63abe60 0x63abf00
note: node 0x63abe60 (max_nunits=4, refcnt=2) vector(4) unsigned int
note: op template: _3 = vect_a[i_15];
note: stmt 0 _3 = vect_a[i_15];
note: load permutation { 0 }
note: node (constant) 0x63abf00 (max_nunits=1, refcnt=1)
note: { 2 }
and during codegen:
note: ------>vectorizing SLP node starting from: patt_6 = _4 != x_9(D);
note: vect_is_simple_use: operand # RANGE [irange] unsigned int [0, 0][2, +INF] MASK 0xffff
_3 * 2, type of def: internal
note: add new stmt: mask_patt_6.18_58 = _53 != vect__4.17_57;
note: === vectorizable_early_exit ===
note: transform early-exit.
note: vectorizing stmts using SLP.
note: Vectorizing SLP tree:
note: node 0x63abfa0 (max_nunits=4, refcnt=1) vector(4) int
note: op template: i_12 = i_15 + 1;
note: stmt 0 i_12 = i_15 + 1;
note: children 0x63aba00 0x63ac040
note: node 0x63aba00 (max_nunits=4, refcnt=2) vector(4) int
note: op template: i_15 = PHI <i_12(6), 0(14)>
note: [l] stmt 0 i_15 = PHI <i_12(6), 0(14)>
note: children (nil) (nil)
note: node (constant) 0x63ac040 (max_nunits=1, refcnt=1) vector(4) int
note: { 1 }
gcc/ChangeLog:
* tree-vect-loop.cc (vect_analyze_loop_2): Handle SLP trees with no
children.
* tree-vectorizer.h (enum slp_instance_kind): Add slp_inst_kind_gcond.
(LOOP_VINFO_EARLY_BREAKS_LIVE_IVS): New.
(vectorizable_early_exit): Expose.
(class _loop_vec_info): Add early_break_live_stmts.
* tree-vect-slp.cc (vect_build_slp_instance, vect_analyze_slp_instance):
Support gcond instances.
(vect_analyze_slp): Analyze gcond roots and early break live statements.
(maybe_push_to_hybrid_worklist): Don't sink gconds.
(vect_slp_analyze_operations): Support gconds.
(vect_slp_check_for_roots): Update comments.
(vectorize_slp_instance_root_stmt): Support gconds.
(vect_schedule_slp): Pass vinfo to vectorize_slp_instance_root_stmt.
* tree-vect-stmts.cc (vect_stmt_relevant_p): Record early break live
statements.
(vectorizable_early_exit): Support SLP.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-early-break_126.c: New test.
* gcc.dg/vect/vect-early-break_127.c: New test.
* gcc.dg/vect/vect-early-break_128.c: New test.
Jonathan Wakely [Sun, 13 Oct 2024 21:48:43 +0000 (22:48 +0100)]
libstdc++: Use std::move for iterator in ranges::fill [PR117094]
Input iterators aren't required to be copyable.
libstdc++-v3/ChangeLog:
PR libstdc++/117094
* include/bits/ranges_algobase.h (__fill_fn): Use std::move for
iterator that might not be copyable.
* testsuite/25_algorithms/fill/constrained.cc: Check
non-copyable iterator with sized sentinel.
Jonathan Wakely [Thu, 10 Oct 2024 12:36:33 +0000 (13:36 +0100)]
libstdc++: Enable memset optimizations for distinct character types [PR93059]
Currently we only optimize std::fill to memset when the source and
destination types are the same byte-sized type. This means that we fail
to optimize cases like std::fill(buf, buf+n, 0) because the literal 0 is
not the same type as the character buffer.
Such cases can safely be optimized to use memset, because assigning an
int (or other integer) to a narrow character type has the same effects
as converting the integer to unsigned char then copying it with memset.
This patch enables the optimized code path when the fill character is a
memcpy-able integer (using the new __memcpyable_integer trait). We still
need to check is_same<U, T> to enable the memset optimization for
filling a range of std::byte with a std::byte value, because that isn't
a memcpyable integer.
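As a standalone illustration of the equivalence being relied on (my sketch,
not code from the patch):

  #include <algorithm>
  #include <cassert>
  #include <cstring>

  int main()
  {
    char a[8], b[8];
    // The literal 0 is an int, not a char, so this previously missed the
    // memset path:
    std::fill(a, a + 8, 0);
    // Assigning an int to a narrow character type has the same effect as
    // converting it to unsigned char and copying with memset:
    std::memset(b, 0, sizeof b);
    assert(std::memcmp(a, b, sizeof a) == 0);
    return 0;
  }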
libstdc++-v3/ChangeLog:
PR libstdc++/93059
* include/bits/stl_algobase.h (__fill_a1(T*, T*, const T&)):
Change template parameters and enable_if condition to allow the
fill value to be an integer.
Jonathan Wakely [Thu, 10 Oct 2024 12:36:33 +0000 (13:36 +0100)]
libstdc++: Enable memcpy optimizations for distinct integral types [PR93059]
Currently we only optimize std::copy, std::copy_n etc. to memmove when
the source and destination types are the same. This means that we fail
to optimize copying between distinct 1-byte types, e.g. copying from a
buffer of unsigned char to a buffer of char8_t or vice versa.
This patch adds more partial specializations of the __memcpyable trait
so that we allow memcpy between integers of equal widths. This will
enable memmove for copies between narrow character types and also
between same-width types like int and unsigned.
Enabling the optimization needs to be based on the width of the integer
type, not just the size in bytes. This is because some targets define
non-standard integral types such as __int20 in msp430, which has padding
bits. It would not be safe to memcpy between e.g. __int20 and int32_t,
even though sizeof(__int20) == sizeof(int32_t). A new trait is
introduced to define the width, __memcpyable_integer, and then the
__memcpyable trait compares the widths.
It's safe to copy between signed and unsigned integers of the same
width, because GCC only supports two's complement integers.
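A rough standalone analogue of that width comparison (my sketch; the real
trait in <bits/cpp_type_traits.h> differs in detail):

  #include <limits>
  #include <type_traits>

  // Width = value bits plus the sign bit.  For a padded type like msp430's
  // __int20 this would be 20 even if sizeof matches a 32-bit type.
  template<typename T>
  constexpr int width_v
    = std::numeric_limits<T>::digits + std::numeric_limits<T>::is_signed;

  template<typename From, typename To>
  constexpr bool memcpyable_int_v
    = std::is_integral_v<From> && std::is_integral_v<To>
      && sizeof(From) == sizeof(To) && width_v<From> == width_v<To>;

  static_assert(memcpyable_int_v<int, unsigned>);        // same width
  static_assert(memcpyable_int_v<char, unsigned char>);  // same width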
I initially thought it would be useful to define the specialization
__memcpyable_integer<byte> to enable copying between narrow character
types and std::byte. But that isn't possible with std::copy, because
is_assignable<char&, std::byte> is false. Optimized copies using memmove
will already happen for copying std::byte to std::byte, because
__memcpyable<T*, T*> is true.
libstdc++-v3/ChangeLog:
PR libstdc++/93059
* include/bits/cpp_type_traits.h (__memcpyable): Add partial
specialization for pointers to distinct types.
(__memcpyable_integer): New trait to control which types can use
cross-type memcpy optimizations.
Kito Cheng [Mon, 14 Oct 2024 08:07:16 +0000 (16:07 +0800)]
RISC-V: Implement __init_riscv_feature_bits, __riscv_feature_bits, and __riscv_vendor_feature_bits
This provides a common abstraction layer to probe the available extensions at
run-time. These functions can be used to implement function multi-versioning or
to detect available extensions.
The advantages of providing this abstraction layer are:
- Easy to port to other new platforms.
- Easier to maintain in GCC for function multi-versioning.
- For example, maintaining platform-dependent code in C code/libgcc is much
easier than maintaining it in GCC by creating GIMPLEs...
This API is intended to provide the capability to query the minimal common set of available extensions on the system.
The API is defined in the riscv-c-api-doc:
https://github.com/riscv-non-isa/riscv-c-api-doc/blob/main/src/c-api.adoc
Proposal to use unsigned long long for marchid and mimpid:
https://github.com/riscv-non-isa/riscv-c-api-doc/pull/91
Full function multi-versioning implementation will come later. We are posting
this first because we intend to backport it to the GCC 14 branch to unblock
LLVM 19 to use this with GCC 14.2, rather than waiting for GCC 15.
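A rough usage sketch (mine, heavily hedged: the struct layout paraphrases
the riscv-c-api-doc linked above, the tag name and array length in the
declaration are invented for illustration, and the group/bit numbering
comes from that document):

  extern "C" {
    struct riscv_feature_bits_t {  // invented tag; layout per the c-api doc
      unsigned length;
      unsigned long long features[2];  // RISCV_FEATURE_BITS_LENGTH, assumed
    };
    extern riscv_feature_bits_t __riscv_feature_bits;
    void __init_riscv_feature_bits (void);
  }

  bool has_feature (unsigned group, unsigned bit)
  {
    __init_riscv_feature_bits ();  // idempotent; also runs as a constructor
    return group < __riscv_feature_bits.length
           && ((__riscv_feature_bits.features[group] >> bit) & 1);
  }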
Changes since v7:
- Remove vendorID field in __riscv_vendor_feature_bits.
- Fix C implies Zcf only for RV32.
- Add more comments to kernel versions.
Changes since v6:
- Implement __riscv_cpu_model.
- Set new sub-extension bits which are implied by previous extensions.
Changes since v5:
- Minor fixes on indentation.
Changes since v4:
- Bump to newest riscv-c-api-doc with some new extensions like Zve*, Zc*,
Zimop, Zcmop, Zawrs.
- Rename the return variable name of hwprobe syscall.
- Minor fixes on indentation.
Changes since v3:
- Fix non-linux build.
- Let __init_riscv_feature_bits become a constructor.
Changes since v2:
- Prevent it from initializing more than once.
Changes since v1:
- Fix the format.
- Prevented race conditions by introducing a local variable to avoid load/store
operations during the computation of the feature bit.
middle-end: [PR middle-end/116926] Allow widening optabs for vec-mode -> scalar-mode
The recent refactoring of the dot_prod optab to convert-type exposed a
limitation in how `find_widening_optab_handler_and_mode' is currently
implemented, owing to the fact that, while the function expects the
aarch64: Fix folding of degenerate svwhilele case [PR117045]
The svwhilele folder mishandled the degenerate case in which
the second argument is the maximum integer. In that case,
the result is all-true regardless of the first parameter:
  If the second scalar operand is equal to the maximum signed integer
  value then a condition which includes an equality test can never fail
  and the result will be an all-true predicate.
This is because the conceptual "increment the first operand
by 1 after each element" is done modulo the range of the operand.
The GCC code was instead treating it as infinite precision.
whilele_5.c even had a test for the incorrect behaviour.
The easiest fix seemed to be to handle that case specially before
doing constant folding. This also copes with variable first operands.
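For example (a hedged sketch using the ACLE intrinsic; compile with SVE
enabled):

  #include <arm_sve.h>
  #include <stdint.h>

  svbool_t f (int32_t x)
  {
    // A condition including equality with the maximum value can never
    // fail for any element, so after the fix this folds to an all-true
    // predicate regardless of x.
    return svwhilele_b8_s32 (x, INT32_MAX);
  }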
gcc/
PR target/116999
PR target/117045
* config/aarch64/aarch64-sve-builtins-base.cc
(svwhilelx_impl::fold): Check for WHILELTs of the minimum value
and WHILELEs of the maximum value. Fold them to all-false and
all-true respectively.
Thomas Schwinge [Mon, 14 Oct 2024 08:34:34 +0000 (10:34 +0200)]
Fortran: Use OpenACC's acc_on_device builtin, fix OpenMP' __builtin_is_initial_device: Harmonize 'libgomp.oacc-fortran/acc_on_device-1-*'
The test case 'libgomp.oacc-fortran/acc_on_device-1-1.f90' added in
commit 3269a722b7a03613e9c4e2862bc5088c4a17cc11
"Fortran: Use OpenACC's acc_on_device builtin, fix OpenMP' __builtin_is_initial_device"
was missing '-fno-builtin-acc_on_device', and all
'libgomp.oacc-fortran/acc_on_device-1-*' need comments explaining why that
option is specified.
Thomas Schwinge [Mon, 14 Oct 2024 08:26:13 +0000 (10:26 +0200)]
Fortran: Use OpenACC's acc_on_device builtin, fix OpenMP' __builtin_is_initial_device: Fix effective-target keyword in 'libgomp.oacc-fortran/acc_on_device-2.f90'
The test case 'libgomp.oacc-fortran/acc_on_device-2.f90' added in
commit 3269a722b7a03613e9c4e2862bc5088c4a17cc11
"Fortran: Use OpenACC's acc_on_device builtin, fix OpenMP' __builtin_is_initial_device"
had a mismatch between dump file production and its scanning; the former needs
to use 'offload_target_nvptx' (like 'offload_target_amdgcn'), not
'offload_device_nvptx'.
Pan Li [Sat, 12 Oct 2024 03:08:21 +0000 (11:08 +0800)]
RISC-V: Add testcases for form 4 of vector signed SAT_SUB
Form 4:
#define DEF_VEC_SAT_S_SUB_FMT_4(T, UT, MIN, MAX)                       \
  void __attribute__((noinline))                                       \
  vec_sat_s_sub_##T##_fmt_4 (T *out, T *op_1, T *op_2, unsigned limit) \
  {                                                                    \
    unsigned i;                                                        \
    for (i = 0; i < limit; i++)                                        \
      {                                                                \
        T x = op_1[i];                                                 \
        T y = op_2[i];                                                 \
        T minus;                                                       \
        bool overflow = __builtin_sub_overflow (x, y, &minus);         \
        out[i] = !overflow ? minus : x < 0 ? MIN : MAX;                \
      }                                                                \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vec_sat_arith.h: Add test helper macros.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-4-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-4-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-4-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-4-i8.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-4-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-4-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-4-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-4-i8.c: New test.
Pan Li [Sat, 12 Oct 2024 02:40:30 +0000 (10:40 +0800)]
RISC-V: Add testcases for form 3 of vector signed SAT_SUB
Form 3:
#define DEF_VEC_SAT_S_SUB_FMT_3(T, UT, MIN, MAX)                       \
  void __attribute__((noinline))                                       \
  vec_sat_s_sub_##T##_fmt_3 (T *out, T *op_1, T *op_2, unsigned limit) \
  {                                                                    \
    unsigned i;                                                        \
    for (i = 0; i < limit; i++)                                        \
      {                                                                \
        T x = op_1[i];                                                 \
        T y = op_2[i];                                                 \
        T minus;                                                       \
        bool overflow = __builtin_sub_overflow (x, y, &minus);         \
        out[i] = overflow ? x < 0 ? MIN : MAX : minus;                 \
      }                                                                \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vec_sat_arith.h: Add test helper macros.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-3-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-3-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-3-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-3-i8.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-3-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-3-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-3-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-3-i8.c: New test.
Pan Li [Sat, 12 Oct 2024 02:34:55 +0000 (10:34 +0800)]
Match: Support form 3 for vector signed integer SAT_SUB
This patch would like to support form 3 of the vector signed
integer SAT_SUB, aka the below example:
Form 3:
#define DEF_VEC_SAT_S_SUB_FMT_3(T, UT, MIN, MAX)                       \
  void __attribute__((noinline))                                       \
  vec_sat_s_sub_##T##_fmt_3 (T *out, T *op_1, T *op_2, unsigned limit) \
  {                                                                    \
    unsigned i;                                                        \
    for (i = 0; i < limit; i++)                                        \
      {                                                                \
        T x = op_1[i];                                                 \
        T y = op_2[i];                                                 \
        T minus;                                                       \
        bool overflow = __builtin_sub_overflow (x, y, &minus);         \
        out[i] = overflow ? x < 0 ? MIN : MAX : minus;                 \
      }                                                                \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vec_sat_arith.h: Add test helper macros.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-2-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-2-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-2-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-2-i8.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-2-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-2-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-2-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-2-i8.c: New test.
Richard Biener [Sun, 13 Oct 2024 13:12:44 +0000 (15:12 +0200)]
tree-optimization/116290 - fix compare-debug issue in ldist
Loop distribution does different analysis with -g0/-g due to counting
a debug stmt starting a BB against a limit, which will eventually
lead to different IVOPTs choices. I've fixed a possible IVOPTs
issue on the way, even though it doesn't make a difference here.
PR tree-optimization/116290
* tree-loop-distribution.cc (determine_reduction_stmt_1): PHIs
have no debug variants. Start with first non-debug real stmt.
* tree-ssa-loop-ivopts.cc (find_givs_in_bb): Do not analyze
debug stmts.
Oleg Endo [Sun, 13 Oct 2024 02:36:38 +0000 (11:36 +0900)]
SH: Fix cost estimation of mem load/store
For memory loads/stores (i.e. insns that contain a MEM rtx) sh_rtx_costs
would wrongly report a cost lower than 1 insn, which is not accurate as it
makes loads/stores appear cheaper than simple arithmetic insns. The cost of
a load/store insn is at least 1 insn plus the cost of the address expression
(some addressing modes can be considered more expensive than others due to
additional constraints).
gcc/ChangeLog:
PR target/113533
* config/sh/sh.cc (sh_rtx_costs): Adjust cost estimation of MEM rtx
to be always at least COST_N_INSNS (1). Forward speed argument to
sh_address_cost.
Co-authored-by: Roger Sayle <roger@nextmovesoftware.com>
Jonathan Wakely [Sun, 13 Oct 2024 18:14:04 +0000 (19:14 +0100)]
libstdc++: Fix ranges::copy_backward for a single memcpyable element [PR117121]
The result iterator needs to be decremented before writing to it.
Improve the PR 108846 tests for all of std::copy, std::copy_n,
std::copy_backward, and the std::ranges versions.
libstdc++-v3/ChangeLog:
PR libstdc++/117121
* include/bits/ranges_algobase.h (copy_backward): Decrement
output iterator before assigning one element through it.
* testsuite/25_algorithms/copy/108846.cc: Ensure the algorithm's
effects are correct for a single memcpyable element.
* testsuite/25_algorithms/copy_backward/108846.cc: Likewise.
* testsuite/25_algorithms/copy_n/108846.cc: Likewise.
Richard Biener [Sun, 13 Oct 2024 09:42:27 +0000 (11:42 +0200)]
tree-optimization/116481 - avoid building function_type[]
The following avoids building an array type with function or method
element type when diagnosing an array bound violation, as doing so
results in an error, rejecting the program with a not too useful
error message. Instead, build such an array type manually.
PR tree-optimization/116481
* pointer-query.cc (build_printable_array_type): Build array
types with function or method element type manually to avoid a
bogus diagnostic.
Tobias Burnus [Sun, 13 Oct 2024 08:18:31 +0000 (10:18 +0200)]
Fortran: Use OpenACC's acc_on_device builtin, fix OpenMP' __builtin_is_initial_device
It turned out that 'if (omp_is_initial_device() .eqv. .true.)' gave an ICE
due to comparing 'int' with 'logical(4)'. When digging deeper, it also
turned out that when the procedure pointer is needed, the builtin cannot
be used, either. (Follow-up to r15-2799-gf1bfba3a9b3f31.)
Extend the code to also use the builtin acc_on_device with OpenACC,
which was previously only used in C/C++. Additionally, fix folding
when offloading is not enabled.
Additionally fixes the BT_BOOL data type, which was 'char'/integer(1)
instead of bool, breaking the booleanness; use bool_type_node as in the
rest of GCC.
gcc/fortran/ChangeLog:
* gfortran.h (gfc_option_t): Add disable_acc_on_device.
* options.cc (gfc_handle_option): Handle -fno-builtin-acc_on_device.
* trans-decl.cc (gfc_get_extern_function_decl): Move
__builtin_omp_is_initial_device handling to ...
* trans-expr.cc (get_builtin_fn): ... this new function.
(conv_function_val): Call it.
(update_builtin_function): New.
(gfc_conv_procedure_call): Call it.
* types.def (BT_BOOL): Fix type by using bool_type_node.
gcc/ChangeLog:
* gimple-fold.cc (gimple_fold_builtin_acc_on_device): Also fold
when offloading is not configured.
libgomp/ChangeLog:
* libgomp.texi (TR13): Fix minor typos.
(omp_is_initial_device): Improve wording.
(acc_on_device): Note how to disable the builtin.
* testsuite/libgomp.oacc-fortran/acc_on_device-1-1.f90: Remove TODO.
* testsuite/libgomp.oacc-fortran/acc_on_device-1-2.f: Likewise.
Add -fno-builtin-acc_on_device.
* testsuite/libgomp.oacc-fortran/acc_on_device-1-3.f: Likewise.
* testsuite/libgomp.oacc-c-c++-common/routine-nohost-1.c: Update
dg- as !offloading_enabled now compile-time expands acc_on_device.
* testsuite/libgomp.fortran/target-is-initial-device-3.f90: New test.
* testsuite/libgomp.oacc-fortran/acc_on_device-2.f90: New test.
Jivan Hakobyan [Sun, 13 Oct 2024 01:10:50 +0000 (19:10 -0600)]
[RISC-V] Avoid unnecessary extensions when value is already extended
This is a minor patch from Jivan from roughly a year ago. The basic
idea here is similar to what we do when extending values for the sake of
comparisons. Specifically if the value is already known to be properly
extended, then an extension is just a copy.
The original idea was to use a similar patch, but one which aborted, to
identify cases where these unnecessary promotions were emitted. All
that showed up when doing a testsuite run with that abort was the
promotions created by the arithmetic-with-overflow patterns such as addv.
Things like addv aren't *that* common, so this never got high on my todo
list, even after a minor issue in this space was raised in bugzilla.
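A hedged illustration of the kind of case involved (my example, not from the
patch): the overflow-checked add already leaves its result properly
extended, so the extension for the return value should become a simple copy:

  long f (int a, int b)
  {
    int sum;
    if (__builtin_add_overflow (a, b, &sum))  // expands via addv-style patterns
      return 0;
    return sum;  // sign extension of an already-extended value is a move
  }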
But with stage1 closing soon and no good reason not to go forward, I'm
submitting this into the pre-commit tester now. My tester has been
using it since roughly Feb :-) Plan would be to commit after the
pre-commit tester renders its verdict.
* config/riscv/riscv.md (zero_extendsidi2): If RHS is already
zero extended, then this is just a copy.
(extendsidi2): Similarly, but for sign extension.
Feng Xue [Fri, 11 Oct 2024 06:55:05 +0000 (14:55 +0800)]
vect: Fix inconsistency in fully-masked lane-reducing op generation [PR116985]
To align vectorized def/use when lane-reducing op is present in loop reduction,
we may need to insert extra trivial pass-through copies, which would cause
mismatch between lane-reducing vector copy and loop mask index. This could be
fixed by computing the right index around a new counter on effective lane-
reducing vector copies.
2024-10-11 Feng Xue <fxue@os.amperecomputing.com>
gcc/
PR tree-optimization/116985
* tree-vect-loop.cc (vect_transform_reduction): Compute loop mask
index based on effective vector copies for reduction op.
gcc/testsuite/
PR tree-optimization/116985
* gcc.dg/vect/pr116985.c: New testcase.
Richard Biener [Sat, 12 Oct 2024 12:51:37 +0000 (14:51 +0200)]
tree-optimization/117104 - add missed guards to max(a,b) != a simplification
For vector types we have to make sure the comparison result is a vector
type and the resulting compare operation is supported. As the resulting
compare is never an equality compare I didn't bother to check for the
cbranch case.
And no other uses of a4. We could have used x0 trivially.
First we adjust the expander so that it doesn't force the constant into a
register. In the matching pattern we change the appropriate source constraints
from "r" to "rJ" and change the output template to use %z for the operand.
The net is we drop the li completely and emit vmv.s.x v3,x0.
But wait, there's more. If we're broadcasting a constant in the range
[-16..15] into a vector, we currently load the constant into a register and
use vmv.v.x. We can instead use vmv.v.i, which avoids loading the constant
into a GPR. For that case we again avoid forcing the constant into a register
in the expander and adjust the output template to emit vmv.v.x or vmv.v.i
based on whether the appropriate operand is a constant or a general purpose
register. So again, we'll drop a load immediate into a scalar for this case.
Whether or not we should use vmv.v.i vs vmv.s.x for loading [-16..15] into the
0th element is probably uarch dependent. The tradeoff is loading the GPR vs
the broadcast in the vector unit. I didn't bother with this case.
Tested in my tester (which tests rv64gcv as a default codegen option). Will
wait for the pre-commit tester to render a verdict.
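A hedged example of the broadcast case (mine, not from the patch):

  void splat (int *p, int n)
  {
    for (int i = 0; i < n; i++)
      p[i] = 5;  // 5 is in [-16..15], so vmv.v.i can splat it directly
  }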
gcc/
* config/riscv/constraints.md (P): New constraint.
* config/riscv/vector.md (pred_broadcast<mode> expander): Do
not force small integers into GPRs so aggressively.
(pred_broadcast<mode> insn & splitter): Allow splatting small
constants across the vector register directly. Allow splatting
(const_int 0) into element 0 directly.
Tobias Burnus [Sat, 12 Oct 2024 12:55:22 +0000 (14:55 +0200)]
Fortran/OpenMP: Warn when mapping polymorphic variables
OpenMP (TR13) states for Fortran:
* For map: "If a list item has polymorphic type, the behavior is unspecified."
* "If the firstprivate clause is on a target construct and a variable is of
polymorphic type, the behavior is unspecified."
which this commit now warns for.
gcc/fortran/ChangeLog:
* openmp.cc (resolve_omp_clauses): Diagnose polymorphic mapping.
* trans-openmp.cc (gfc_omp_finish_clause): Warn when
polymorphic variable is implicitly mapped.
gcc/testsuite/ChangeLog:
* gfortran.dg/gomp/polymorphic-mapping.f90: New test.
* gfortran.dg/gomp/polymorphic-mapping-2.f90: New test.
Jakub Jelinek [Sat, 12 Oct 2024 11:47:45 +0000 (13:47 +0200)]
bootstrap: Fix genmatch build where system gcc defaults to -fPIE -pie
Seems our buildbot is unhappy about my latest commit to link genmatch with
libcommon.a in order to support gcc_diag diagnostics in libcpp.
We have in gcc/configure.ac:
if test x$enable_host_shared = xyes; then
  PICFLAG=-fPIC
elif test x$enable_host_pie = xyes; then
  PICFLAG=-fPIE
elif test x$gcc_cv_c_no_fpie = xyes; then
  PICFLAG=-fno-PIE
else
  PICFLAG=
fi

if test x$enable_host_pie = xyes; then
  LD_PICFLAG=-pie
elif test x$gcc_cv_no_pie = xyes; then
  LD_PICFLAG=-no-pie
else
  LD_PICFLAG=
fi

if test x$enable_host_bind_now = xyes; then
  LD_PICFLAG="$LD_PICFLAG -Wl,-z,now"
fi
Now, for object files linked into cc1, cc1plus, xgcc etc. we carefully
arrange for them to be compiled with $(PICFLAG) and do the link with
$(LD_PICFLAG).
For the generator programs, we don't do anything like that; we simply
compile their objects without $(PICFLAG) and link without $(LD_PICFLAG).
It isn't that big a deal: the generator programs run once or a couple of
times during the build and that is it; we don't ship them and don't
care much whether they are PIE or not.
Except that after my changes linking libcommon.a into build/genmatch,
we now link -fno-PIE compiled objects into a binary which is linked with
default flags. Our distro compiler just links a normal executable and
everything works fine (-fPIE/-pie is added through a spec file snippet and
just added in the rpm default flags), but it seems the buildbot's system gcc
defaults to -fPIE -pie instead, and so building build/genmatch fails.
The following patch is a minimal fix for that: just add -no-pie when
linking build/genmatch, but don't add -pie.
If we wanted to start building even the build/gen* tools with $(PICFLAG)
and $(LD_PICFLAG), that would be much larger change.
2024-10-12 Jakub Jelinek <jakub@redhat.com>
* Makefile.in (LINKER_FOR_BUILD): Append -no-pie if it is in
$(LD_PICFLAG) when building build/genmatch.
Jakub Jelinek [Sat, 12 Oct 2024 08:44:17 +0000 (10:44 +0200)]
libcpp, genmatch: Use gcc_diag instead of printf for libcpp diagnostics
When working on #embed support, or -Wheader-guard or other recent libcpp
changes, I've been annoyed by the libcpp diagnostics being visually
different from normal gcc diagnostics, especially in the area of quoting
stuff in the diagnostic messages.
Normal GCC diagnostics are gcc_diag/gcc_tdiag; one can use
%</%>, %qs etc. in there, while libcpp diagnostics were marked as printf,
and in libcpp we've been very creative with quoting stuff: either
no quotes at all, or "something" quoting, or 'something' quoting, or
`something' quoting (but in none of the cases did it use colors consistently
with the rest of the compiler).
Now, libcpp diagnostics are always emitted using a callback,
pfile->cb.diagnostic. On the gcc/ side, this callback is initialized with
genmatch.cc: cb->diagnostic = diagnostic_cb;
c-family/c-opts.cc: cb->diagnostic = c_cpp_diagnostic;
fortran/cpp.cc: cb->diagnostic = cb_cpp_diagnostic;
where the latter two just use diagnostic_report_diagnostic, so actually
support all the gcc_diag stuff, only the genmatch.cc case didn't.
So, the following patch changes genmatch.cc to use pp_format* instead
of vfprintf so that it supports the gcc_diag formatting (pretty-print.o
unfortunately has various dependencies, so I had to link genmatch with
libcommon.a libbacktrace.a and tweak Makefile.in so that there are no
circular dependencies) and marks the libcpp diagnostic routines as
gcc_diag rather than printf. That change resulted in hundreds of new
-Wformat-diag warnings (most of them useful and resulting IMHO in
better diagnostics), so the rest of the patch changes the format
strings to make -Wformat-diag happy and adjusts the testsuite for
the differences in how the diagnostics are formatted.
Dunno if some out-of-GCC-tree projects use libcpp; that case would
make it harder because one couldn't use vfprintf in the diagnostic
callback anymore, but there is always David's libdiagnostic, which could
be used for that purpose IMHO.
This commit reduces code duplication by moving gfc_get_location
from trans.cc to error.cc. The gcc_assert is now used more often
and revealed a bug in gfc_match_array_constructor, where the union
expr->ts.u.derived of a derived type is partially overwritten by the
assignment to expr->ts.u.cl->..., as a ts.type == BT_CHARACTER check
was missing.
gcc/fortran/ChangeLog:
* array.cc (gfc_match_array_constructor): Only update the
character length if the expression is of character type.
* error.cc (gfc_get_location_with_offset): New; split off
from ...
(gfc_format_decoder): ... here; call it.
* gfortran.h (gfc_get_location_with_offset): New prototype.
(gfc_get_location): New inline function.
* trans.cc (gfc_get_location): Remove function definition.
* trans.h (gfc_get_location): Remove declaration.
Simon Martin [Fri, 11 Oct 2024 08:16:26 +0000 (10:16 +0200)]
c++: Fix overeager Woverloaded-virtual with conversion operators [PR109918]
We currently emit an incorrect -Woverloaded-virtual warning for the
following test case
=== cut here ===
struct A {
  virtual operator int() { return 42; }
  virtual operator char() = 0;
};
struct B : public A {
  operator char() { return 'A'; }
};
=== cut here ===
The problem is that when iterating over ovl_range (fns), warn_hidden
gets confused by the conversion operator marker, concludes that
seen_non_override is true and therefore emits a warning for all
conversion operators in A that do not convert to char, even if
-Woverloaded-virtual is 1 (e.g. with -Wall, the case reported).
A second set of problems is highlighted when -Woverloaded-virtual is 2.
First, with the same test case, since base_fndecls contains all
conversion operators in A (except the one to char, that's been removed
when iterating over ovl_range (fns)), we emit a spurious warning for
the conversion operator to int, even though it's unrelated.
Second, in case there are several conversion operators with different
cv-qualifiers to the same type in A, we rightfully emit a warning;
however, the note uses the location of the conversion operator marker
instead of the right one; location_of should skip over conv_op_marker.
This patch fixes all these by explicitly keeping track of (1) base
methods that are overridden, as well as (2) base methods that are hidden
but not overridden (and by what), and warning about methods that are in
(2) but not (1). It also ignores non-virtual base methods, per the
"definition" of -Woverloaded-virtual.
PR c++/109918
gcc/cp/ChangeLog:
* class.cc (warn_hidden): Keep track of overloaded and of hidden
base methods. Mention the actual hiding function in the warning,
not the first overload.
* error.cc (location_of): Skip over conv_op_marker.
gcc/testsuite/ChangeLog:
* g++.dg/warn/Woverloaded-virt1.C: Check that no warning is
emitted for non virtual base methods.
* g++.dg/warn/Woverloaded-virt5.C: New test.
* g++.dg/warn/Woverloaded-virt6.C: New test.
* g++.dg/warn/Woverloaded-virt7.C: New test.
* g++.dg/warn/Woverloaded-virt8.C: New test.
* g++.dg/warn/Woverloaded-virt9.C: New test.
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/binop/vec_sat_data.h: Add test
data for run test.
* gcc.target/riscv/rvv/autovec/vec_sat_arith.h: Add test helper
macros.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-1-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-1-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-1-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-1-i8.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-1-i16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-1-i32.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-1-i64.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_s_sub-run-1-i8.c: New test.
The below test suites are passed for this patch.
* The rv64gcv full regression test.
gcc/ChangeLog:
* config/riscv/autovec.md (sssub<mode>3): Add new pattern for
signed SAT_SUB.
* config/riscv/riscv-protos.h (expand_vec_sssub): Add new func
decl to expand sssub to vssub.
* config/riscv/riscv-v.cc (expand_vec_sssub): Add new func
impl to expand sssub to vssub.
Pan Li [Fri, 11 Oct 2024 03:58:30 +0000 (11:58 +0800)]
Vect: Try the pattern of vector signed integer SAT_SUB
Almost the same as vector unsigned integer SAT_SUB, try to match
the signed version during the vector pattern matching.
The below test suites are passed for this patch.
* The rv64gcv full regression test.
* The x86 bootstrap test.
* The x86 full regression test.
gcc/ChangeLog:
* tree-vect-patterns.cc (gimple_signed_integer_sat_sub): Add new
func decl for signed SAT_SUB.
(vect_recog_sat_sub_pattern_transform): Update comments.
(vect_recog_sat_sub_pattern): Try the vector signed SAT_SUB
pattern.
Thomas Koenig [Fri, 11 Oct 2024 20:58:51 +0000 (22:58 +0200)]
Introduce GFC_STD_UNSIGNED.
This patch creates an unsigned "standard" for the
gfc_option.allow_std field.
One of the main reasons why people want UNSIGNED for Fortran is
interfacing with C.
This is a preparation for further work on the ISO_C_BINDING constants.
That we do via iso-c-binding.def, whose last field is the standard
for which the constant in question is to be defined, which is
then checked. I could try and invent a different method for this,
but I'd rather not.
gcc/fortran/ChangeLog:
* intrinsic.cc (add_functions): Convert uint and
selected_unsigned_kind to GFC_STD_UNSIGNED.
(gfc_check_intrinsic_standard): Handle GFC_STD_UNSIGNED.
* libgfortran.h (GFC_STD_UNSIGNED): Add.
* options.cc (gfc_post_options): Set GFC_STD_UNSIGNED
if -funsigned is set.
Jonathan Wakely [Thu, 10 Oct 2024 21:47:46 +0000 (22:47 +0100)]
libstdc++: Rearrange std::move_iterator helpers in stl_iterator.h
The __niter_base(move_iterator<I>) overload and __is_move_iterator trait
were originally immediately after the definition of move_iterator. The
addition of C++20 features after move_iterator meant that those helpers
were no longer anywhere near move_iterator.
This change puts them back where they used to be, before all the new
C++20 additions.
libstdc++-v3/ChangeLog:
* include/bits/stl_iterator.h (__niter_base(move_iterator<I>))
(__is_move_iterator, __miter_base, _GLIBCXX_MAKE_MOVE_ITERATOR)
(_GLIBCXX_MAKE_MOVE_IF_NOEXCEPT_ITERATOR): Move earlier in the
file.
Kyrylo Tkachov [Wed, 9 Oct 2024 16:40:33 +0000 (09:40 -0700)]
PR target/117048 aarch64: Use more canonical and optimization-friendly representation for XAR instruction
The pattern for the Advanced SIMD XAR instruction isn't very
optimization-friendly at the moment.
In the testcase from the PR, once simplify-rtx has done its work, it
generates the RTL:
(set (reg:V2DI 119 [ _14 ])
     (rotate:V2DI (xor:V2DI (reg:V2DI 114 [ vect__1.12_16 ])
                            (reg:V2DI 116 [ *m1_01_8(D) ]))
                  (const_vector:V2DI [
                      (const_int 32 [0x20]) repeated x2
                  ])))
which fails to match our XAR pattern because the pattern expects:
1) A ROTATERT instead of the ROTATE. However, according to the RTL ops
documentation the preferred form of rotate-by-immediate is ROTATE, which
I take to mean it's the canonical form.
ROTATE (x, C) <-> ROTATERT (x, MODE_WIDTH - C) so it's better to match just
one canonical representation.
2) A CONST_INT shift amount whereas the midend asks for a repeated vector
constant.
These issues are fixed by introducing a dedicated expander for the
aarch64_xarqv2di name, needed by the arm_neon.h intrinsic, that translates
the intrinsic-level CONST_INT immediate (the right-rotate amount) into
a repeated vector constant subtracted from 64 to give the corresponding
left-rotate amount, which is fed to the new representation for the XAR
define_insn that uses the ROTATE RTL code. This is similar to the approach
with which we handle the discrepancy between intrinsic-level and RTL-level
vector lane numbers for big-endian.
With this patch and [1/2] the arithmetic parts of the testcase now simplify
to just one XAR instruction.
Bootstrapped and tested on aarch64-none-linux-gnu.
Signed-off-by: Kyrylo Tkachov <ktkachov@nvidia.com>
gcc/
PR target/117048
* config/aarch64/aarch64-simd.md (aarch64_xarqv2di): Redefine into a
define_expand.
(*aarch64_xarqv2di_insn): Define.
gcc/testsuite/
PR target/117048
* g++.target/aarch64/pr117048.C: New test.
In the testcase from patch [2/2] we want to match a vector rotate operation
from an IOR of left and right shifts by immediate. simplify-rtx has code for
just that, but it looks like it's prepared to handle only scalar operands.
In practice most of the code works for vector modes as well, except that the
shift amounts are checked to be CONST_INT rather than the vector constants
that we have here. This is easily extended by using
unwrap_const_vec_duplicate to extract the repeating constant shift amount.
With this change combine now tries matching the simpler and expected:
(set (reg:V2DI 119 [ _14 ])
     (rotate:V2DI (xor:V2DI (reg:V2DI 114 [ vect__1.12_16 ])
                            (reg:V2DI 116 [ *m1_01_8(D) ]))
                  (const_vector:V2DI [
                      (const_int 32 [0x20]) repeated x2
                  ])))
instead of the previous:
(set (reg:V2DI 119 [ _14 ])
     (ior:V2DI (ashift:V2DI (xor:V2DI (reg:V2DI 114 [ vect__1.12_16 ])
                                      (reg:V2DI 116 [ *m1_01_8(D) ]))
                            (const_vector:V2DI [
                                (const_int 32 [0x20]) repeated x2
                            ]))
               (lshiftrt:V2DI (xor:V2DI (reg:V2DI 114 [ vect__1.12_16 ])
                                        (reg:V2DI 116 [ *m1_01_8(D) ]))
                              (const_vector:V2DI [
                                  (const_int 32 [0x20]) repeated x2
                              ]))))
To actually fix the PR, the aarch64 backend needs some adjustment as well;
that is done in patch [2/2], which also adds the testcase.
Bootstrapped and tested on aarch64-none-linux-gnu.
Tobias Burnus [Fri, 11 Oct 2024 15:05:37 +0000 (17:05 +0200)]
Fortran: Dead-function removal in error.cc (shrinking by 40%)
This patch removes a large number of unused static functions from error.cc,
which previously were used for diagnostics but have been replaced by the
common diagnostic code.
Jennifer Schmitz [Wed, 25 Sep 2024 10:21:22 +0000 (03:21 -0700)]
match.pd: Fold logarithmic identities.
This patch implements 4 rules for logarithmic identities in match.pd
under -funsafe-math-optimizations:
1) logN(1.0/a) -> -logN(a). This avoids the division instruction.
2) logN(C/a) -> logN(C) - logN(a), where C is a real constant. Same as 1).
3) logN(a) + logN(b) -> logN(a*b). This reduces the number of calls to the
log function.
4) logN(a) - logN(b) -> logN(a/b). Same as 3).
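For instance, rule 3 permits a fold like the following sketch (only under
-funsafe-math-optimizations):

  #include <cmath>

  double f (double a, double b)
  {
    return std::log (a) + std::log (b);  // may become std::log (a * b)
  }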
Tests were added for float, double, and long double.
The patch was bootstrapped and regtested on aarch64-linux-gnu and
x86_64-linux-gnu, no regression.
Additionally, SPEC 2017 fprate was run. While the transform does not seem
to be triggered, we also see no non-noise impact on performance.
OK for mainline?
Jonathan Wakely [Fri, 11 Oct 2024 08:40:38 +0000 (09:40 +0100)]
libstdc++: Fix localized %c formatting for <chrono> [PR117085]
When formatting a time point with %c we call std::vformat_to using the
formatting locale's D_T_FMT string, but we weren't adding the L option
to the format string. This meant we always interpreted D_T_FMT in the C
locale, instead of using the formatting locale as obviously intended
when %c is used.
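A small usage sketch (mine; assumes the named locale is installed on the
system):

  #include <chrono>
  #include <format>
  #include <iostream>
  #include <locale>

  int main()
  {
    auto now = std::chrono::system_clock::now();
    // With the fix, the D_T_FMT pattern behind %c is interpreted in the
    // de_DE locale rather than in the C locale.
    std::cout << std::format(std::locale("de_DE.UTF-8"), "{:L%c}", now) << '\n';
  }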
libstdc++-v3/ChangeLog:
PR libstdc++/117085
* include/bits/chrono_io.h (__formatter_chrono::_M_c): Add L
option to format string.
* testsuite/std/time/format.cc: Move to...
* testsuite/std/time/format/format.cc: ...here.
* testsuite/std/time/format_localized.cc: Move to...
* testsuite/std/time/format/localized.cc: ...here.
* testsuite/std/time/format/pr117085.cc: New test.
It turns out target costing code looks at STMT_VINFO_MEMORY_ACCESS_TYPE
to identify operations from (emulated) gathers, for example. This
doesn't work for SLP loads since we do not set STMT_VINFO_MEMORY_ACCESS_TYPE
there, as the vectorization strategy might differ between different
stmt uses. It seems we got away with setting it for stores though.
The following adds a memory_access_type field to slp_tree and sets it
from load and store vectorization code. The costing generally doesn't record
the SLP node (that was only done selectively for some corner case). The
costing is really in need of a big overhaul; the following just massages
the two relevant ops to fix gcc.dg/target/pr88531-2[bc].c FAILs when
switching on SLP for non-grouped stores. In particular, currently
we either have an SLP node or a stmt_info in the cost hook, but not both.
So the following mitigates this, postponing a rewrite of costing to
the next stage1. Other targets look possibly affected as well but are
left to the respective maintainers to update.
PR tree-optimization/117080
* tree-vectorizer.h (_slp_tree::memory_access_type): Add.
(SLP_TREE_MEMORY_ACCESS_TYPE): New.
(record_stmt_cost): Add another overload.
* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize
memory_access_type.
* tree-vect-stmts.cc (vectorizable_store): Set
SLP_TREE_MEMORY_ACCESS_TYPE.
(vectorizable_load): Likewise. Also record the SLP node
when costing emulated gather offset decompose and vector
composition.
* config/i386/i386.cc (ix86_vector_costs::add_stmt_cost): Also
recognize SLP emulated gather/scatter.
The AArch64 FEAT_FAMINMAX extension introduces instructions for
computing the floating point absolute maximum and minimum of the
two vectors element-wise.
This patch adds code generation for famax and famin in terms of existing
unspecs. With this patch:
1. famax can be expressed as taking UNSPEC_COND_SMAX of the two operands
and then taking absolute value of their result.
2. famin can be expressed as taking UNSPEC_COND_SMIN of the two operands
and then taking absolute value of their result.
This fusion of operators is only possible when the
-march=armv9-a+faminmax+sve flags are passed. We also need to pass the
-ffast-math flag; this is what enables the compiler to use UNSPEC_COND_SMAX
and UNSPEC_COND_SMIN.
This code generation is only available at -O2 or -O3, as that is when
auto-vectorization is enabled.
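A hedged illustration (mine, assuming famax computes the element-wise
maximum of absolute values per FEAT_FAMINMAX; the precise shape the pattern
matches may differ): with -O3 -march=armv9-a+faminmax+sve -ffast-math, a
loop like this is the kind of code that can now use famax:

  #include <cmath>

  void f (float *r, const float *a, const float *b, int n)
  {
    for (int i = 0; i < n; i++)
      r[i] = std::fmax (std::fabs (a[i]), std::fabs (b[i]));
  }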
gcc/ChangeLog:
* config/aarch64/aarch64-sve2.md
(*aarch64_pred_faminmax_fused): Instruction pattern for faminmax
codegen.
* config/aarch64/iterators.md: Iterator and attribute for
faminmax codegen.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/faminmax_1.c: New test.
* gcc.target/aarch64/sve/faminmax_2.c: New test.
The AArch64 FEAT_FAMINMAX extension introduces instructions for
computing the floating point absolute maximum and minimum of the
two vectors element-wise.
This patch introduces SVE2 faminmax intrinsics. The intrinsics of this
extension are implemented as the following builtin functions:
* sva[max|min]_[m|x|z]
* sva[max|min]_[f16|f32|f64]_[m|x|z]
* sva[max|min]_n_[f16|f32|f64]_[m|x|z]
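A brief usage sketch of the new intrinsics (hedged; the names follow the
list above, and the -march spelling in the comment is an assumption):

  #include <arm_sve.h>

  // e.g. compile with -march=armv9-a+sve2+faminmax (assumed spelling)
  svfloat32_t amax (svbool_t pg, svfloat32_t a, svfloat32_t b)
  {
    return svamax_f32_m (pg, a, b);  // predicated absolute maximum
  }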
gcc/ChangeLog:
* config/aarch64/aarch64-sve-builtins-base.cc
(svamax): Absolute maximum declaration.
(svamin): Absolute minimum declaration.
* config/aarch64/aarch64-sve-builtins-base.def
(REQUIRED_EXTENSIONS): Add faminmax intrinsics behind a flag.
(svamax): Absolute maximum declaration.
(svamin): Absolute minimum declaration.
* config/aarch64/aarch64-sve-builtins-base.h: Declaring function
bases for the new intrinsics.
* config/aarch64/aarch64.h
(TARGET_SVE_FAMINMAX): New flag for SVE2 faminmax.
* config/aarch64/iterators.md: New unspecs, iterators, and attrs
for the new intrinsics.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve2/acle/asm/amax_f16.c: New test.
* gcc.target/aarch64/sve2/acle/asm/amax_f32.c: New test.
* gcc.target/aarch64/sve2/acle/asm/amax_f64.c: New test.
* gcc.target/aarch64/sve2/acle/asm/amin_f16.c: New test.
* gcc.target/aarch64/sve2/acle/asm/amin_f32.c: New test.
* gcc.target/aarch64/sve2/acle/asm/amin_f64.c: New test.
Pan Li [Thu, 10 Oct 2024 08:24:08 +0000 (16:24 +0800)]
RISC-V: Add testcases for form 8 of scalar signed SAT_TRUNC
Form 8:
#define DEF_SAT_S_TRUNC_FMT_8(NT, WT, NT_MIN, NT_MAX) \
  NT __attribute__((noinline))                        \
  sat_s_trunc_##WT##_to_##NT##_fmt_8 (WT x)           \
  {                                                   \
    NT trunc = (NT)x;                                 \
    return (WT)NT_MIN > x || x >= (WT)NT_MAX          \
      ? x < 0 ? NT_MIN : NT_MAX                       \
      : trunc;                                        \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-8-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-8-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-8-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-8-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-8-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-8-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-8-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-8-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-8-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-8-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-8-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-8-i64-to-i8.c: New test.
Pan Li [Thu, 10 Oct 2024 08:08:40 +0000 (16:08 +0800)]
RISC-V: Add testcases for form 7 of scalar signed SAT_TRUNC
Form 7:
#define DEF_SAT_S_TRUNC_FMT_7(NT, WT, NT_MIN, NT_MAX) \
  NT __attribute__((noinline))                        \
  sat_s_trunc_##WT##_to_##NT##_fmt_7 (WT x)           \
  {                                                   \
    NT trunc = (NT)x;                                 \
    return (WT)NT_MIN >= x || x >= (WT)NT_MAX         \
      ? x < 0 ? NT_MIN : NT_MAX                       \
      : trunc;                                        \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-7-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-7-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-7-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-7-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-7-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-7-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-7-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-7-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-7-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-7-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-7-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-7-i64-to-i8.c: New test.
Pan Li [Thu, 10 Oct 2024 07:53:45 +0000 (15:53 +0800)]
RISC-V: Add testcases for form 6 of scalar signed SAT_TRUNC
Form 6:
#define DEF_SAT_S_TRUNC_FMT_6(NT, WT, NT_MIN, NT_MAX) \
  NT __attribute__((noinline))                        \
  sat_s_trunc_##WT##_to_##NT##_fmt_6 (WT x)           \
  {                                                   \
    NT trunc = (NT)x;                                 \
    return (WT)NT_MIN >= x || x > (WT)NT_MAX          \
      ? x < 0 ? NT_MIN : NT_MAX                       \
      : trunc;                                        \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-6-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-6-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-6-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-6-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-6-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-6-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-6-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-6-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-6-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-6-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-6-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-6-i64-to-i8.c: New test.
Pan Li [Thu, 10 Oct 2024 07:35:33 +0000 (15:35 +0800)]
RISC-V: Add testcases for form 5 of scalar signed SAT_TRUNC
Form 5:
#define DEF_SAT_S_TRUNC_FMT_5(NT, WT, NT_MIN, NT_MAX) \
  NT __attribute__((noinline))                        \
  sat_s_trunc_##WT##_to_##NT##_fmt_5 (WT x)           \
  {                                                   \
    NT trunc = (NT)x;                                 \
    return (WT)NT_MIN > x || x > (WT)NT_MAX           \
      ? x < 0 ? NT_MIN : NT_MAX                       \
      : trunc;                                        \
  }
The below tests are passed for this patch.
* The rv64gcv full regression test.
It is a test-only patch and obvious up to a point; I will commit it
directly if there are no comments in the next 48H.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-5-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-5-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-5-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-5-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-5-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-5-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-5-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-5-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-5-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-5-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-5-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-5-i64-to-i8.c: New test.
Pan Li [Thu, 10 Oct 2024 06:52:04 +0000 (14:52 +0800)]
RISC-V: Add testcases for form 4 of scalar signed SAT_TRUNC
Form 4:
#define DEF_SAT_S_TRUNC_FMT_4(NT, WT, NT_MIN, NT_MAX) \
  NT __attribute__((noinline))                        \
  sat_s_trunc_##WT##_to_##NT##_fmt_4 (WT x)           \
  {                                                   \
    NT trunc = (NT)x;                                 \
    return (WT)NT_MIN <= x && x < (WT)NT_MAX          \
      ? trunc                                         \
      : x < 0 ? NT_MIN : NT_MAX;                      \
  }
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-4-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-4-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-4-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-4-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-4-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-4-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-4-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-4-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-4-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-4-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-4-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-4-i64-to-i8.c: New test.
Pan Li [Wed, 9 Oct 2024 14:37:00 +0000 (22:37 +0800)]
RISC-V: Add testcases for form 3 of scalar signed SAT_TRUNC
Form 3:
#define DEF_SAT_S_TRUNC_FMT_3(NT, WT, NT_MIN, NT_MAX) \
NT __attribute__((noinline)) \
sat_s_trunc_##WT##_to_##NT##_fmt_3 (WT x) \
{ \
NT trunc = (NT)x; \
return (WT)NT_MIN < x && x <= (WT)NT_MAX \
? trunc \
: x < 0 ? NT_MIN : NT_MAX; \
}
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-3-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-3-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-3-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-3-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-3-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-3-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-3-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-3-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-3-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-3-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-3-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-3-i64-to-i8.c: New test.
Pan Li [Wed, 9 Oct 2024 02:33:31 +0000 (10:33 +0800)]
RISC-V: Add testcases for form 2 of scalar signed SAT_TRUNC
Form 2:
#define DEF_SAT_S_TRUNC_FMT_2(NT, WT, NT_MIN, NT_MAX) \
NT __attribute__((noinline)) \
sat_s_trunc_##WT##_to_##NT##_fmt_2 (WT x) \
{ \
NT trunc = (NT)x; \
return (WT)NT_MIN < x && x < (WT)NT_MAX \
? trunc \
: x < 0 ? NT_MIN : NT_MAX; \
}
The tests below pass for this patch.
* The rv64gcv full regression test.
This is a test-only patch and obvious up to a point; it will be committed
directly if there are no comments in the next 48 hours.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_s_trunc-2-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-2-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-2-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-2-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-2-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-2-i64-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-2-i16-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-2-i32-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-2-i32-to-i8.c: New test.
* gcc.target/riscv/sat_s_trunc-run-2-i64-to-i16.c: New test.
* gcc.target/riscv/sat_s_trunc-run-2-i64-to-i32.c: New test.
* gcc.target/riscv/sat_s_trunc-run-2-i64-to-i8.c: New test.
Jakub Jelinek [Fri, 11 Oct 2024 09:41:53 +0000 (11:41 +0200)]
i386: Fix up spaceship expanders for -mtune=i[45]86 [PR117053]
The adjusted and new spaceship expanders ICE with -mtune=i486 or
-mtune=i586.
The problem is that in those cases TARGET_ZERO_EXTEND_WITH_AND is true,
so zero_extendqisi2 isn't allowed, and we can't use the replacement AND,
because that clobbers the flags, which we want to use again.
The following patch fixes that by using in those cases roughly what
we want to expand it to after peephole2 optimizations, i.e. xor
before the comparison, *setcc_qi_slp and sbbl $0 (or, for the signed
int case, xoring of 2 regs, two *setcc_qi_slp, subl).
For *setcc_qi_slp, it uses the setcc_si_slp hacks with UNSPEC that
were in use for the floating point jp case (so such code is IMHO
undesirable for the !TARGET_ZERO_EXTEND_WITH_AND case as we want to
give combiner more liberty in that case).
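For reference, a minimal C-level sketch of the unsigned case the expander
aims for (illustrative only; the function name and exact asm spelling are
not from the patch):
  /* Roughly: xor eax,eax; cmp a,b; seta al; sbb eax,0
     yielding -1 / 0 / 1 with no zero_extendqisi2 needed.  */
  int spaceship_u (unsigned a, unsigned b)
  {
    return (a > b) - (a < b);
  }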
Jonathan Wakely [Wed, 9 Oct 2024 13:24:19 +0000 (14:24 +0100)]
libstdc++: Fix some test failures with -fno-char8_t
libstdc++-v3/ChangeLog:
* testsuite/20_util/duration/io.cc [!__cpp_lib_char8_t]: Define
char8_t as a typedef for unsigned char.
* testsuite/std/format/parse_ctx_neg.cc: Skip for -fno-char8_t.
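A minimal sketch of the fallback described above, assuming the test's
[!__cpp_lib_char8_t] guard (the exact placement in io.cc may differ):
  #ifndef __cpp_lib_char8_t
  /* With -fno-char8_t the keyword is unavailable, so a typedef is legal.  */
  typedef unsigned char char8_t;
  #endif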
Richard Biener [Thu, 10 Oct 2024 12:00:11 +0000 (14:00 +0200)]
Fix possible wrong-code with masked store-lanes
When we're doing masked store-lanes one mask element applies to all
loads of one struct element. This requires uniform masks for all
of the SLP lanes, something we already compute into STMT_VINFO_SLP_VECT_ONLY
but fail to check when doing SLP store-lanes. The following corrects
this. The following also adjusts the store-lane heuristic to properly
check for masked or non-masked optab support.
* tree-vect-slp.cc (vect_slp_prefer_store_lanes_p): Allow
passing in of vectype, pass in whether the stores are masked
and query the correct optab.
(vect_build_slp_instance): Guard store-lanes query with
! STMT_VINFO_SLP_VECT_ONLY, guaranteeing a uniform mask.
Hu, Lin1 [Wed, 9 Oct 2024 02:20:05 +0000 (10:20 +0800)]
i386: Fix some patterns' mem attribute.
This is another patch to modify some patterns' type attr from ssemov to
ssemov2.
A ssemov pattern's mem attr should be load when its second operand is a
memory operand.
Bootstrapped and regtested on x86-64-linux-pc.
gcc/ChangeLog:
* config/i386/sse.md
(sse_movhlps): Change type attr from ssemov to ssemov2.
(sse_loadhps): Ditto.
(*vec_concat<mode>): Ditto.
(vec_setv2df_0): Ditto.
(sse_loadlps): Change attr from ssemov to ssemov2 except for 2, 3.
(sse2_loadhps): Change attr from ssemov to ssemov2 except for 0, 1.
(sse2_loadlpd): Change attr from ssemov to ssemov2 except for 0, 1,
2.
(sse2_movsd_<mode>): Change attr from ssemov to ssemov2 except for 5.
(vec_concatv2df): Change attr from ssemov to ssemov2 except for 0, 1,
2.
(*vec_concat<mode>): Change attr from ssemov to ssemov2 for 3, 4.
(vec_concatv2di): Change attr from ssemov to ssemov2 except for 0, 1,
2, 3, 4, 5.
Richard Ball [Thu, 10 Oct 2024 18:16:39 +0000 (19:16 +0100)]
aarch64: Alter pr116258.c test to correct for big endian.
The test at pr116258.c fails on big endian targets because the test
checks that the index of a floating point multiply is 0, which is
correct only for little endian.
gcc/testsuite/ChangeLog:
PR tree-optimization/116258
* gcc.target/aarch64/pr116258.c:
Alter test to add big-endian support.
Michael Matz [Thu, 10 Oct 2024 14:36:51 +0000 (16:36 +0200)]
Fix PR116650: check all regs in regrename targets
(this came up for m68k vs. LRA, but is a generic problem)
Regrename wants to use new registers for certain def-use chains.
For validity of replacements it needs to check that the selected
candidates are unused up to then. That's done in check_new_reg_p.
But if it so happens that the new register needs more hardregs
than the old register (which happens if the target allows inter-bank
moves and the mode is something like a DFmode that needs to be placed
into a SImode reg-pair), then check_new_reg_p only checks the
first of those registers for free-ness.
This is caused by that function looking up the number of necessary
hardregs only in terms of the old hardreg number. It of course needs
to do that in terms of the new candidate regnumber. The symptom is that
regrename sometimes clobbers the higher numbered registers of such a
regrename target pair. This patch fixes that problem.
(In the particular case of the bug report it was LRA that left over an
inter-bank move instruction that triggers regrename, ultimately causing
the mis-compile. Reload didn't do that, but in general we of course
can't rely on such moves not happening if the target allows them.)
This also shows a general confusion in that function and the target hook
interface here:
for (i = nregs - 1; i >= 0; --i)
  ...
  || ! HARD_REGNO_RENAME_OK (reg + i, new_reg + i))
it uses nregs in a way that requires it to be the same between old and
new register. The problem is that the target hook only gets register
numbers, when it instead should get a mode and register numbers and
would be called only for the first but not for subsequent registers.
I've looked at a number of definitions of that target hook and I think
that this is currently harmless in the sense that it would merely rule
out some potential reg-renames that would in fact be okay to do. So I'm
not changing the target hook interface here and hence that problem
remains unfixed.
PR rtl-optimization/116650
* regrename.cc (check_new_reg_p): Calculate nregs in terms of
the new candidate register.
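A minimal sketch of the corrected check (the identifier `unavailable' is an
assumed name for the conflict set; the real code in check_new_reg_p differs
in detail):
  /* nregs must be computed from the NEW register: the new bank may
     need more hard regs to hold the same mode.  */
  int nregs = hard_regno_nregs (new_reg, mode);
  for (int i = nregs - 1; i >= 0; --i)
    if (TEST_HARD_REG_BIT (unavailable, new_reg + i)
        || ! HARD_REGNO_RENAME_OK (reg + i, new_reg + i))
      return false;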
Li Xu [Thu, 10 Oct 2024 14:51:19 +0000 (08:51 -0600)]
RISC-V: Bugfix for C++ code compilation failure with rv32imafc_zve32f [PR116883]
From: xuli <xuli1@eswincomputing.com>
Example as follows:
int main()
{
unsigned long arraya[128], arrayb[128], arrayc[128];
for (int i = 0; i < 128; i++)
{
arraya[i] = arrayb[i] + arrayc[i];
}
return 0;
}
Compiled with -march=rv32imafc_zve32f -mabi=ilp32f, it causes a compilation error:
riscv_vector.h:40:25: error: ambiguating new declaration of 'vint64m4_t __riscv_vle64(vbool16_t, const long long int*, unsigned int)'
40 | #pragma riscv intrinsic "vector"
| ^~~~~~~~
riscv_vector.h:40:25: note: old declaration 'vint64m1_t __riscv_vle64(vbool64_t, const long long int*, unsigned int)'
With zvl=32b, vbool16_t is registered in init_builtins() with
type_common.precision=0x101 (nunits=2), mode_nunits[E_RVVMF16BI]=[2,2].
Normally, vbool64_t is only valid when TARGET_MIN_VLEN > 32, so vbool64_t
is not registered in init_builtins(), meaning vbool64_t=null.
In order to implement __attribute__((target("arch=+v"))), we must register
all vector types and all RVV intrinsics. Therefore, vbool64_t will be registered
by default with zvl=128b in reinit_builtins(), resulting in
type_common.precision=0x101 (nunits=2) and mode_nunits[E_RVVMF64BI]=[2,2].
We then get TYPE_VECTOR_SUBPARTS(vbool16_t) == TYPE_VECTOR_SUBPARTS(vbool64_t),
calculated using type_common.precision, resulting in 2. Since vbool16_t and
vbool64_t have the same element type (boolean_type), the compiler treats them
as the same type, leading to a re-declaration conflict.
After all types and intrinsics have been registered, processing
__attribute__((target("arch=+v"))) will update the option parameters and
call init_adjust_machine_modes. Therefore, to avoid conflicts, we can
choose zvl=4096b for the null types in reinit_builtins().
vect: Avoid divide by zero for permutes of extern VLA vectors
My recent VLA SLP patches caused a regression with cross compilers
in gcc.dg/torture/neon-sve-bridge.c. There we have a VEC_PERM_EXPR
created from two BIT_FIELD_REFs, with the child node being an
external VLA vector:
For this kind of external node, the SLP_TREE_LANES is normally
the total number of lanes in the vector, but it is zero if the
vector has variable length:
auto nunits = TYPE_VECTOR_SUBPARTS (SLP_TREE_VECTYPE (vnode));
unsigned HOST_WIDE_INT const_nunits;
if (nunits.is_constant (&const_nunits))
  SLP_TREE_LANES (vnode) = const_nunits;
This led to division by zero in:
/* Check whether the output has N times as many lanes per vector. */
else if (constant_multiple_p (SLP_TREE_LANES (node) * op_nunits,
                              SLP_TREE_LANES (child) * nunits,
                              &this_unpack_factor)
         && (i == 0 || unpack_factor == this_unpack_factor))
  unpack_factor = this_unpack_factor;
No repetition takes place for this kind of external node, so this
patch goes with Richard's suggestion to check for external nodes
that have no scalar statements.
This didn't show up for my native testing since division by zero
doesn't trap on AArch64.
gcc/
* tree-vect-slp.cc (vectorizable_slp_permutation_1): Set repeating_p
to false if we have an external node for a pre-existing vector.
match.pd: Check trunc_mod vector optab before folding.
This patch guards the simplification x / y * y == x -> x % y == 0 in
match.pd by a check for:
1) Non-vector mode of x OR
2) Lack of support for vector division OR
3) Support of vector modulo
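At the scalar level the guarded fold is the familiar one; a minimal
illustration (not the match.pd pattern itself):
  /* For y != 0, x / y * y == x is equivalent to x % y == 0,
     so the former may be rewritten into the latter.  */
  int divides_evenly (int x, int y)
  {
    return x / y * y == x;   /* may fold to: x % y == 0 */
  }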
The patch was bootstrapped and tested with no regression on
aarch64-linux-gnu and x86_64-linux-gnu.
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
gcc/
PR tree-optimization/116831
* match.pd: Guard simplification to trunc_mod with check for
mod optab support.
gcc/testsuite/
PR tree-optimization/116831
* gcc.dg/torture/pr116831.c: New test.
Richard Biener [Wed, 9 Oct 2024 13:31:59 +0000 (15:31 +0200)]
Allow SLP store of mixed external and constant
vect_build_slp_tree_1 rejected this during SLP discovery because it
ran into the rhs code comparison code for stores. The following
skips that completely for loads and stores as those are handled
later anyway.
This needs a heuristic adjustment in vect_get_and_check_slp_defs
to avoid fallout with regard to BB vectorization and splitting
of a store group vs. demoting one operand to external.
gcc.dg/Wstringop-overflow-47.c needs adjustment given code generation is
now vastly improved. gcc.dg/strlenopt-32.c needs adjustment because the
strlen pass doesn't handle
* tree-vect-slp.cc (vect_build_slp_tree_1): Do not compare
RHS codes for loads or stores.
(vect_get_and_check_slp_defs): Only demote operand to external
in case there is more than one operand.
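A minimal C-level illustration (hypothetical, not from the testsuite) of a
store group mixing an external def with a constant, which SLP discovery now
accepts:
  void
  store_mixed (int *p, int x)
  {
    p[0] = x;    /* external (parameter) */
    p[1] = 42;   /* constant */
  }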
According to the Intel SOM [1], for Crestmont, most 256-bit Intel AVX2
instructions can be decomposed into two independent 128-bit
micro-operations, except for a subset of Intel AVX2 instructions known
as cross-lane operations, which can only compute the result for an
element by utilizing one or more sources belonging to other elements.
The 256-bit instructions listed below use more operand sources than
can be natively supported by a single reservation station within these
microarchitectures. They are decomposed into two μops, where the first
μop resolves a subset of operand dependencies across two cycles. The
dependent second μop executes the 256-bit operation by using a single
128-bit execution port for two consecutive cycles with a five-cycle
latency for a total latency of seven cycles.
Instead of setting tune avx128_optimal for SRF, the patch adds a new
tune avx256_avoid_vec_perm for it, so by default the vectorizer still
uses a 256-bit VF if the cost is profitable, but lowers to 128-bit whenever
a 256-bit vec_perm is needed for auto-vectorization. Without vec_perm,
performance of 256-bit vectorization should be similar to the 128-bit
one (some benchmark results show it's even better than 128-bit
vectorization since it enables more parallelism for convert cases).
* config/i386/i386.cc (ix86_vector_costs::ix86_vector_costs):
Add new member m_num_avx256_vec_perm.
(ix86_vector_costs::add_stmt_cost): Record 256-bit vec_perm.
(ix86_vector_costs::finish_cost): Prevent vectorization for
TARGET_AVX256_AVOID_VEC_PERM when there's a 256-bit vec_perm
instruction.
* config/i386/i386.h (TARGET_AVX256_AVOID_VEC_PERM): New
Macro.
* config/i386/x86-tune.def (X86_TUNE_AVX256_SPLIT_REGS): Add
m_CORE_ATOM.
(X86_TUNE_AVX256_AVOID_VEC_PERM): New tune.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx256_avoid_vec_perm.c: New test.
For Crestmont, 4-operand vex blendv instructions come from MSROM and
are slower than the 3-instruction sequence (op1 & mask) | (op2 & ~mask).
The legacy blendv instruction can still be handled by the decoder.
The patch adds a new tune which is enabled for all processors except
for SRF/CWF. It will use vpand + vpandn + vpor instead of
vpblendvb (similar for vblendvps/vblendvpd) for SRF/CWF.
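An intrinsics-level sketch of the two equivalent strategies (illustrative
only; the patch itself works on the expander's RTL, and the equivalence
assumes mask bytes are 0x00 or 0xFF, as produced by vector compares):
  #include <immintrin.h>
  /* 4-operand vpblendvb: per byte, result = MSB(mask) ? b : a.  */
  __m256i select_blendv (__m256i a, __m256i b, __m256i mask)
  {
    return _mm256_blendv_epi8 (a, b, mask);
  }
  /* (b & mask) | (a & ~mask): vpand + vpandn + vpor.  */
  __m256i select_logic (__m256i a, __m256i b, __m256i mask)
  {
    return _mm256_or_si256 (_mm256_and_si256 (b, mask),
                            _mm256_andnot_si256 (mask, a));
  }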
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_expand_sse_movcc): Guard
instruction blendv generation under new tune.
* config/i386/i386.h (TARGET_SSE_MOVCC_USE_BLENDV): New Macro.
* config/i386/x86-tune.def (X86_TUNE_SSE_MOVCC_USE_BLENDV):
New tune.
David Malcolm [Thu, 10 Oct 2024 01:26:09 +0000 (21:26 -0400)]
diagnostics: move text output member functions to correct file
No functional change intended.
gcc/ChangeLog:
* diagnostic-format-text.cc
(diagnostic_text_output_format::after_diagnostic): Replace call to
show_any_path with body, taken from diagnostic.cc.
(diagnostic_text_output_format::build_prefix): Move here from
diagnostic.cc, updating to use get_diagnostic_kind_text and
diagnostic_get_color_for_kind.
(diagnostic_text_output_format::file_name_as_prefix): Move here
from diagnostic.cc.
(diagnostic_text_output_format::append_note): Likewise.
* diagnostic-format-text.h
(diagnostic_text_output_format::show_any_path): Drop decl.
* diagnostic.cc
(diagnostic_text_output_format::file_name_as_prefix): Move to
diagnostic-format-text.cc.
(diagnostic_text_output_format::build_prefix): Likewise.
(diagnostic_text_output_format::show_any_path): Move to body of
diagnostic_text_output_format::after_diagnostic.
(diagnostic_text_output_format::append_note): Move to
diagnostic-format-text.cc.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>