Roger Sayle [Tue, 9 Jan 2024 10:21:39 +0000 (10:21 +0000)]
ARC: Table-driven ashlsi implementation for better code/rtx_costs.
One of the cool features of the H8 backend is its use of tables to select
optimal shift implementations for different CPU variants. This patch
borrows (plagiarizes) that idiom for SImode left shifts in the ARC backend
(for CPUs without a barrel-shifter). This provides a convenient mechanism
for both selecting the best implementation strategy (for speed vs. size),
and providing accurate rtx_costs [without duplicating a lot of logic].
Left shift RTX costs are especially important for use in synth_mult.
An example improvement is:
int foo(int x) { return 32768*x; }
which with -O2 -mcpu=em -mswap is now generated as a short straight-line sequence,
where previously the ARC backend would generate a loop:
foo: mov lp_count,15
lp 2f
add r0,r0,r0
nop
2: # end single insn loop
j_s [blink]
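The shape of the table-driven scheme, as a rough sketch in plain C (the enum
values, costs and helper below are invented for illustration; see
arc_shift_alg/arc_ashl_alg in arc.cc for the real thing):

#include <stdio.h>

enum shift_alg { ALG_LOOP, ALG_ADD_CHAIN, ALG_SWAP_MASK };
struct shift_info { enum shift_alg alg; int cost; };

/* One row per code-generation context (say, speed vs. size), one column per
   shift count 0..31.  Only a few placeholder entries are shown.  */
static const struct shift_info ashl_table[2][32] = {
  { { ALG_LOOP, 4 }, { ALG_ADD_CHAIN, 1 }, { ALG_ADD_CHAIN, 2 } /* ... */ },
  { { ALG_LOOP, 2 }, { ALG_ADD_CHAIN, 1 }, { ALG_ADD_CHAIN, 2 } /* ... */ },
};

/* The splitter uses .alg to pick the expansion; rtx_costs returns .cost,
   so code generation and the costs seen by synth_mult can never disagree.  */
static struct shift_info lookup_ashl (int ctx, int count)
{
  return ashl_table[ctx][count & 31];
}

int main (void)
{
  struct shift_info si = lookup_ashl (0, 2);
  printf ("alg=%d cost=%d\n", (int) si.alg, si.cost);
  return 0;
}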
2024-01-09 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/arc/arc.cc (arc_shift_alg): New enumerated type for
left shift implementation strategies.
(arc_shift_info): Type for each entry of the shift strategy table.
(arc_shift_context_idx): Return an integer value for each code
generation context, used as an index into the shift strategy table.
(arc_ashl_alg): Table indexed by context and shifted bit count.
(arc_split_ashl): Use the arc_ashl_alg table to select SImode
left shift implementation.
(arc_rtx_costs) <case ASHIFT>: Use the arc_ashl_alg table to
provide accurate costs, when optimizing for speed or size.
Julian Brown [Mon, 12 Sep 2022 17:11:29 +0000 (17:11 +0000)]
OpenMP: lvalue parsing for map/to/from clauses (C++)
This patch supports "lvalue" parsing (or "locator list item type" parsing)
for several OpenMP clause types for C++, as required for OpenMP 5.0
and above.
This version has been rebased -- some things have changed around
template handling recently, e.g. removal of build_non_dependent_expr and
tsubst_copy. A new potential corner-case issue has shown up regarding
implicit mapping of references to pointer to pointers -- an interaction
with the post-review fixes/rework for the patch referenced here, which fixed
the (new) tests baseptrs-[6789].C.  I've noted that for now in
the patch, and adjusted the baseptrs-[46].C tests slightly to accommodate.
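For example (an illustrative C++ fragment, not one of the new tests), clauses
like the following are now parsed as generalised lvalues and array sections:

struct S { int *ptr; int len; };

void f (S &s, int &r)
{
  // Array section whose base is a pointer member reached through a
  // reference, plus a plain lvalue reference as a map item.
  #pragma omp target map(tofrom: s.ptr[0:s.len], r)
  {
    s.ptr[0] += r;
  }
}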
2024-01-08 Julian Brown <julian@codesourcery.com>
gcc/c-family/
* c-common.h (c_omp_address_inspector): Remove static from get_origin
and maybe_unconvert_ref methods.
* c-omp.cc (c_omp_split_clauses): Support OMP_ARRAY_SECTION.
(c_omp_address_inspector::map_supported_p): Handle OMP_ARRAY_SECTION.
(c_omp_address_inspector::get_origin): Avoid dereferencing possibly
NULL type when processing template decls.
(c_omp_address_inspector::maybe_unconvert_ref): Likewise.
gcc/cp/
* constexpr.cc (potential_constant_expression_1): Handle
OMP_ARRAY_SECTION.
* cp-tree.h (grok_omp_array_section, build_omp_array_section): Add
prototypes.
* decl2.cc (grok_omp_array_section): New function.
* error.cc (dump_expr): Handle OMP_ARRAY_SECTION.
* parser.cc (cp_parser_new): Initialize parser->omp_array_section_p.
(cp_parser_statement_expr): Disallow array sections.
(cp_parser_postfix_open_square_expression): Support OMP_ARRAY_SECTION
parsing.
(cp_parser_parenthesized_expression_list, cp_parser_lambda_expression,
cp_parser_braced_list): Disallow array sections.
(cp_parser_omp_var_list_no_open): Remove ALLOW_DEREF parameter, add
MAP_LVALUE in its place. Support generalised lvalue parsing for
OpenMP map, to and from clauses. Use OMP_ARRAY_SECTION
code instead of TREE_LIST to represent OpenMP array sections.
(cp_parser_omp_var_list): Remove ALLOW_DEREF parameter, add MAP_LVALUE.
Pass to cp_parser_omp_var_list_no_open.
(cp_parser_oacc_data_clause): Update call to cp_parser_omp_var_list.
(cp_parser_omp_clause_map): Add sk_omp scope around
cp_parser_omp_var_list_no_open call.
* parser.h (cp_parser): Add omp_array_section_p field.
* pt.cc (tsubst, tsubst_copy, tsubst_omp_clause_decl,
tsubst_copy_and_build): Add OMP_ARRAY_SECTION support.
* semantics.cc (handle_omp_array_sections_1, handle_omp_array_sections,
cp_oacc_check_attachments, finish_omp_clauses): Use OMP_ARRAY_SECTION
instead of TREE_LIST where appropriate. Handle more types of map
expression.
* typeck.cc (build_omp_array_section): New function.
gcc/
* gimplify.cc (gimplify_expr): Ensure OMP_ARRAY_SECTION has been
processed out before gimplification.
* tree-pretty-print.cc (dump_generic_node): Support OMP_ARRAY_SECTION.
* tree.def (OMP_ARRAY_SECTION): New tree code.
gcc/testsuite/
* c-c++-common/gomp/map-6.c: Update expected output.
* c-c++-common/gomp/target-enter-data-1.c: Update scan test.
* g++.dg/gomp/array-section-1.C: New test.
* g++.dg/gomp/array-section-2.C: New test.
* g++.dg/gomp/bad-array-section-1.C: New test.
* g++.dg/gomp/bad-array-section-2.C: New test.
* g++.dg/gomp/bad-array-section-3.C: New test.
* g++.dg/gomp/bad-array-section-4.C: New test.
* g++.dg/gomp/bad-array-section-5.C: New test.
* g++.dg/gomp/bad-array-section-6.C: New test.
* g++.dg/gomp/bad-array-section-7.C: New test.
* g++.dg/gomp/bad-array-section-8.C: New test.
* g++.dg/gomp/bad-array-section-9.C: New test.
* g++.dg/gomp/bad-array-section-10.C: New test.
* g++.dg/gomp/bad-array-section-11.C: New test.
* g++.dg/gomp/has_device_addr-non-lvalue-1.C: New test.
* g++.dg/gomp/pr67522.C: Update expected output.
* g++.dg/gomp/ind-base-3.C: New test.
* g++.dg/gomp/map-assignment-1.C: New test.
* g++.dg/gomp/map-inc-1.C: New test.
* g++.dg/gomp/map-lvalue-ref-1.C: New test.
* g++.dg/gomp/map-ptrmem-1.C: New test.
* g++.dg/gomp/map-ptrmem-2.C: New test.
* g++.dg/gomp/map-static-cast-lvalue-1.C: New test.
* g++.dg/gomp/map-ternary-1.C: New test.
* g++.dg/gomp/member-array-2.C: New test.
libgomp/
* testsuite/libgomp.c++/baseptrs-4.C: Remove commented-out cases that
now work.
* testsuite/libgomp.c++/baseptrs-6.C: New test.
* testsuite/libgomp.c++/ind-base-1.C: New test.
* testsuite/libgomp.c++/ind-base-2.C: New test.
* testsuite/libgomp.c++/lvalue-tofrom-1.C: New test.
* testsuite/libgomp.c++/lvalue-tofrom-2.C: New test.
* testsuite/libgomp.c++/map-comma-1.C: New test.
* testsuite/libgomp.c++/map-rvalue-ref-1.C: New test.
* testsuite/libgomp.c++/struct-ref-1.C: New test.
* testsuite/libgomp.c-c++-common/array-field-1.c: New test.
* testsuite/libgomp.c-c++-common/array-of-struct-1.c: New test.
* testsuite/libgomp.c-c++-common/array-of-struct-2.c: New test.
Eric Botcazou [Tue, 9 Jan 2024 10:06:23 +0000 (11:06 +0100)]
Fix internal error on function call returning extension of limited interface
The problem occurs when this function call is the expression of a return in
a function returning the limited interface; in this peculiar case, there is
a mismatch between the callee, which has BIP formals but is not a BIP call,
and the caller, which is a BIP function, that is spotted by an assertion.
This is fixed by restoring the semantics of Is_Build_In_Place_Function_Call,
which again returns true only for calls to BIP functions, by introducing the
Is_Function_Call_With_BIP_Formals predicate, which also returns true for
calls to functions with BIP formals that are not BIP functions, and by moving
the assertion further down in Expand_Simple_Function_Return.
gcc/ada/
PR ada/112781
* exp_ch6.ads (Is_Build_In_Place_Function): Adjust description.
* exp_ch6.adb (Is_True_Build_In_Place_Function_Call): Delete.
(Is_Function_Call_With_BIP_Formals): New predicate.
(Is_Build_In_Place_Function_Call): Restore original semantics.
(Expand_Call_Helper): Adjust conditions guarding the calls to
Add_Dummy_Build_In_Place_Actuals to above renaming.
(Expand_N_Extended_Return_Statement): Adjust to above renaming.
(Expand_Simple_Function_Return): Likewise. Move the assertion
to after the transformation into an extended return statement.
(Make_Build_In_Place_Call_In_Allocator): Remove unreachable code.
(Make_Build_In_Place_Call_In_Assignment): Likewise.
gcc/testsuite/
* gnat.dg/bip_prim_func2.adb: New test.
* gnat.dg/bip_prim_func2_pkg.ads, gnat.dg/bip_prim_func2_pkg.adb:
New helper package.
Eric Botcazou [Tue, 9 Jan 2024 09:46:23 +0000 (10:46 +0100)]
Fix internal error on function call returning extension of limited interface
This is a regression present on the mainline and 13 branch, in the form of a
series of internal errors (3) on a function call returning the extension of
a limited interface.
This is only a partial fix for the first two assertion failures; the third
one is the most problematic and will be dealt with separately.
The first issue is in Instantiate_Type, where we use Base_Type in a specific
case to compute the ancestor of a derived type, which will later trigger the
assertion on line 16960 of sem_ch3.adb since Parent_Base and Generic_Actual
are the same node. This is changed to use Etype like in other cases around.
The second issue is an unprotected use of Designated_Type on type T in
Analyze_Explicit_Dereference, while another use in an equivalent context
is guarded by Is_Access_Type a few lines above.
gcc/ada
PR ada/112781
* sem_ch12.adb (Instantiate_Type): Use Etype instead of Base_Type
consistently to retrieve the ancestor for a derived type.
* sem_ch4.adb (Analyze_Explicit_Dereference): Test Is_Access_Type
consistently before accessing Designated_Type.
Jakub Jelinek [Tue, 9 Jan 2024 09:31:51 +0000 (10:31 +0100)]
vect: Ensure both NITERSM1 and NITERS are INTEGER_CSTs or neither of them [PR113210]
On the following testcase e.g. on riscv64 or aarch64 (the latter with
-O3 -march=armv8-a+sve) we ICE, because while NITERS is INTEGER_CST,
NITERSM1 is a complex expression like
(short unsigned int) (a.0_1 + 255) + 1 > 256 ? ~(short unsigned int) (a.0_1 + 255) : 0
where a.0_1 is unsigned char. The condition is never true, so the above
is equivalent to just 0, but only when trying to fold the above with
PLUS_EXPR 1 we manage to simplify it (first
~(short unsigned int) (a.0_1 + 255)
to
-(short unsigned int) (a.0_1 + 255)
and then
(short unsigned int) (a.0_1 + 255) + 1 > 256 ? -(short unsigned int) (a.0_1 + 255) : 1
to
(short unsigned int) (a.0_1 + 255) >= 256 ? -(short unsigned int) (a.0_1 + 255) : 1
and only at this point we fold the condition to be false.
But the vectorizer seems to assume that if NITERS is known (i.e. suitable
INTEGER_CST) then NITERSM1 also is, so the following hack ensures that if
NITERS folds into INTEGER_CST NITERSM1 will be one as well.
2024-01-09 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113210
* tree-vect-loop.cc (vect_get_loop_niters): If non-INTEGER_CST
value in *number_of_iterationsm1 PLUS_EXPR 1 is folded into
INTEGER_CST, recompute *number_of_iterationsm1 as the INTEGER_CST
minus 1.
Eric Botcazou [Tue, 9 Jan 2024 09:21:51 +0000 (10:21 +0100)]
Fix internal error on anonymous access type equality
This is a small regression present on the mainline and 13 branch, in the
form of an internal error in gigi on anonymous access type equality. We
now need to also accept them for anonymous access types that point to
compatible object subtypes in the language sense.
Eric Botcazou [Tue, 9 Jan 2024 09:14:29 +0000 (10:14 +0100)]
Fix segfault during delay slot scheduling pass
This is a small regression present on the mainline and 13 branch, although
the underlying problem has probably been there for ages, in the form of a
segfault during the delay slot scheduling pass, for a function that falls
through to exit without any instruction generated for the end of function.
gcc/
PR rtl-optimization/113140
* reorg.cc (fill_slots_from_thread): If we are to branch after the
last instruction of the function, create an end label.
gcc/testsuite/
* g++.dg/opt/delay-slot-2.C: New test.
Jakub Jelinek [Tue, 9 Jan 2024 08:54:06 +0000 (09:54 +0100)]
libgomp: Use absolute pathname to testsuite/flock [PR113192]
When the flock program doesn't exist, libgomp configure attempts to
offer a fallback version using a perl script, but we weren't using
an absolute filename for it, so it apparently failed to work correctly.
The following patch arranges for it to use the absolute filename.
Tested by John David in the PR.
2024-01-09 Jakub Jelinek <jakub@redhat.com>
PR libgomp/113192
* configure.ac (FLOCK): Use \$(abs_top_srcdir)/testsuite/flock
rather than $srcdir/testsuite/flock.
* configure: Regenerated.
Roger Sayle [Tue, 9 Jan 2024 08:28:42 +0000 (08:28 +0000)]
i386: PR target/112992: Optimize mode for broadcast of constants.
The issue addressed by this patch is that when initializing vectors by
broadcasting integer constants, the compiler has the flexibility to
select the most appropriate vector mode to perform the broadcast, as
long as the resulting vector has an identical bit pattern.
For example, the following constants are all equivalent:
V4SImode {0x01010101, 0x01010101, 0x01010101, 0x01010101 }
V8HImode {0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101, 0x0101 }
V16QImode {0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, ... 0x01 }
So instruction sequences that construct any of these can be used to
construct the others (with a suitable cast/SUBREG).
On x86_64, it turns out that broadcasts of SImode constants are preferred,
as DImode constants often require a longer movabs instruction, and
HImode and QImode broadcasts require multiple uops on some architectures.
Hence, SImode is always at least as short and as fast as the alternatives.
Examples of this improvement can be seen in the testsuite, where according
to Agner Fog's instruction tables broadcastd is slightly faster on some
microarchitectures, for example Knight's Landing.
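For instance (an illustrative snippet rather than one of the testsuite
changes), all three of these build the same 128-bit constant, so the backend
may now materialise each of them via an SImode broadcast:

#include <immintrin.h>

__m128i a (void) { return _mm_set1_epi8 (0x01); }        /* V16QI of 0x01 */
__m128i b (void) { return _mm_set1_epi16 (0x0101); }     /* V8HI of 0x0101 */
__m128i c (void) { return _mm_set1_epi32 (0x01010101); } /* V4SI of 0x01010101 */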
2024-01-09 Roger Sayle <roger@nextmovesoftware.com>
Hongtao Liu <hongtao.liu@intel.com>
gcc/ChangeLog
PR target/112992
* config/i386/i386-expand.cc
(ix86_convert_const_wide_int_to_broadcast): Allow call to
ix86_expand_vector_init_duplicate to fail, and return NULL_RTX.
(ix86_broadcast_from_constant): Revert recent change; Return a
suitable MEMREF independently of mode/target combinations.
(ix86_expand_vector_move): Allow ix86_expand_vector_init_duplicate
to decide whether expansion is possible/preferable.  Only try
forcing DImode constants to memory (and trying again) if calling
ix86_expand_vector_init_duplicate fails with a DImode immediate
constant.
(ix86_expand_vector_init_duplicate) <case E_V2DImode>: Try using
V4SImode for suitable immediate constants.
<case E_V4DImode>: Try using V8SImode for suitable constants.
<case E_V4HImode>: Fail for CONST_INT_P, i.e. use constant pool.
<case E_V2HImode>: Likewise.
<case E_V8HImode>: For CONST_INT_P try using V4SImode via widen.
<case E_V16QImode>: For CONST_INT_P try using V8HImode via widen.
<label widen>: Handle CONST_INTs via simplify_binary_operation.
Allow recursive calls to ix86_expand_vector_init_duplicate to fail.
<case E_V16HImode>: For CONST_INT_P try V8SImode via widen.
<case E_V32QImode>: For CONST_INT_P try V16HImode via widen.
(ix86_expand_vector_init): Move try using a broadcast for all_same
with ix86_expand_vector_init_duplicate before using constant pool.
Jiahao Xu [Fri, 5 Jan 2024 07:38:25 +0000 (15:38 +0800)]
LoongArch: Implement vec_init<M><N> where N is a LSX vector mode
This patch implements more vec_init optabs that can handle two LSX vectors producing a LASX
vector by concatenating them. When an LSX vector is concatenated with an LSX const_vector of
zeroes, the vec_concatz pattern can be used effectively. For example, as below:
typedef short v8hi __attribute__ ((vector_size (16)));
typedef short v16hi __attribute__ ((vector_size (32)));
v8hi a, b;
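A minimal illustration of the concatenations these patterns expand (an
invented example using GCC's generic vector builtins, not necessarily the
original test case):

v16hi
concat_ab (void)
{
  /* Builds a 256-bit LASX vector from two 128-bit LSX halves.  */
  return __builtin_shufflevector (a, b, 0, 1, 2, 3, 4, 5, 6, 7,
                                  8, 9, 10, 11, 12, 13, 14, 15);
}

v16hi
concat_a_zero (void)
{
  /* Concatenation with a zero vector; the vec_concatz pattern targets
     this shape.  */
  return __builtin_shufflevector (a, (v8hi) {0}, 0, 1, 2, 3, 4, 5, 6, 7,
                                  8, 9, 10, 11, 12, 13, 14, 15);
}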
* config/loongarch/lasx.md (vec_initv32qiv16qi): Rename to ..
(vec_init<mode><lasxhalf>): .. this, and extend to mode.
(@vec_concatz<mode>): New insn pattern.
* config/loongarch/loongarch.cc (loongarch_expand_vector_group_init):
Handle VALS containing two vectors.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lasx/lasx-vec-init-2.c: New test.
Feng Wang [Mon, 8 Jan 2024 09:12:00 +0000 (09:12 +0000)]
RISC-V: Add crypto vector api-testing cases.
Patch v8: Resubmit after fix the rtl-checking issue. Passed all the riscv regression test.
Patch v7: Add newline at the end of file.
Patch v6: Move intrinsic tests into rvv/base.
Patch v5: Rebase
Patch v4: Add some RV32 vx constraint testcase.
Patch v3: Refine crypto vector api-testing cases.
Patch v2: Update march info according to the change of riscv-common.c
This patch adds crypto vector api-testing cases based on
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/zvbb-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvbb_vandn_vx_constraint.c: New test.
* gcc.target/riscv/rvv/base/zvbc-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvbc_vx_constraint-1.c: New test.
* gcc.target/riscv/rvv/base/zvbc_vx_constraint-2.c: New test.
* gcc.target/riscv/rvv/base/zvkg-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvkned-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvknha-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvknhb-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvksed-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvksh-intrinsic.c: New test.
* gcc.target/riscv/zvkb.c: New test.
Feng Wang [Mon, 8 Jan 2024 09:12:01 +0000 (09:12 +0000)]
RISC-V: Add crypto vector builtin function.
This patch adds the intrinsic functions of the crypto vector extension based
on the intrinsic doc (https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md).
hppa: Fix bind_c_coms.f90 and bind_c_vars.f90 tests on hppa
Commit 6271dd98 changed the default from -fcommon to -fno-common.
This silently changed the alignment of uninitialized BSS data on
hppa where the alignment of common data must be greater or equal
to the alignment of the largest type that will fit in the block.
For example, the alignment of `double d[2];' changed from 16 to 8
on hppa64.
The hppa architecture requires strict alignment and the linker
warns about inconsistent alignment of variables. This change broke
the gfortran.dg/bind_c_coms.f90 and gfortran.dg/bind_c_vars.f90
tests. These tests check whether bind_c works between fortran
and C.
Adding the -fcommon option fixes the tests. Probably, gcc and HP
C are now by default inconsistent but that's water under the bridge.
2024-01-08 John David Anglin <danglin@gcc.gnu.org>
Thomas Schwinge [Mon, 8 Jan 2024 19:35:27 +0000 (20:35 +0100)]
GCN: Add pre-initial support for gfx1100: 'EF_AMDGPU_MACH_AMDGCN_GFX1100'
../../../source-gcc/libgomp/plugin/plugin-gcn.c: In function ‘isa_hsa_name’:
../../../source-gcc/libgomp/plugin/plugin-gcn.c:1666:10: error: ‘EF_AMDGPU_MACH_AMDGCN_GFX1100’ undeclared (first use in this function); did you mean ‘EF_AMDGPU_MACH_AMDGCN_GFX1030’?
1666 | case EF_AMDGPU_MACH_AMDGCN_GFX1100:
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| EF_AMDGPU_MACH_AMDGCN_GFX1030
../../../source-gcc/libgomp/plugin/plugin-gcn.c:1666:10: note: each undeclared identifier is reported only once for each function it appears in
../../../source-gcc/libgomp/plugin/plugin-gcn.c: In function ‘isa_code’:
../../../source-gcc/libgomp/plugin/plugin-gcn.c:1711:12: error: ‘EF_AMDGPU_MACH_AMDGCN_GFX1100’ undeclared (first use in this function); did you mean ‘EF_AMDGPU_MACH_AMDGCN_GFX1030’?
1711 | return EF_AMDGPU_MACH_AMDGCN_GFX1100;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| EF_AMDGPU_MACH_AMDGCN_GFX1030
../../../source-gcc/libgomp/plugin/plugin-gcn.c: In function ‘max_isa_vgprs’:
../../../source-gcc/libgomp/plugin/plugin-gcn.c:1728:10: error: ‘EF_AMDGPU_MACH_AMDGCN_GFX1100’ undeclared (first use in this function); did you mean ‘EF_AMDGPU_MACH_AMDGCN_GFX1030’?
1728 | case EF_AMDGPU_MACH_AMDGCN_GFX1100:
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| EF_AMDGPU_MACH_AMDGCN_GFX1030
make[4]: *** [Makefile:813: libgomp_plugin_gcn_la-plugin-gcn.lo] Error 1
asan: Do not call asan_function_start () without the current function [PR113251]
Using ASAN on i686-linux with -fPIC causes an ICE, because when
pc_thunks are generated, there is no current function anymore, but
asan_function_start () expects one.
Fix by not calling asan_function_start () without one.
A narrower fix would be to temporarily disable ASAN around pc_thunk
generation. However, the issue looks generic enough, and may affect
less often tested configurations, so go for a broader fix.
Fixes: e66dc37b299c ("asan: Align .LASANPC on function boundary")
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
gcc/ChangeLog:
PR sanitizer/113251
* varasm.cc (assemble_function_label_raw): Do not call
asan_function_start () without the current function.
bpf: Correct BTF for kernel_helper attributed decls
This patch fixes a problem with kernel_helper attribute BTF information,
which incorrectly generates a BTF_KIND_FUNC entry.
Although this BTF entry is accurate for traditional extern function
declarations, once the function is attributed with kernel_helper it is
semantically incompatible with the kernel helpers in the BPF infrastructure.
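For reference, a sketch of how such a declaration typically looks (the helper
name and number are illustrative assumptions taken from the Linux BPF UAPI,
not part of this patch):

/* The function body lives in the kernel; calls are emitted as BPF helper
   calls, and after this patch no BTF_KIND_FUNC entry is generated for it.  */
extern int bpf_probe_read (void *dst, int size, const void *unsafe_ptr)
  __attribute__((kernel_helper (4)));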
gcc/ChangeLog:
PR target/113225
* btfout.cc (btf_collect_datasec): Skip creating BTF info for
extern and kernel_helper attributed function decls.
gcc/testsuite/ChangeLog:
* gcc.target/bpf/attr-kernel-helper.c: New test.
When using -dA, this function was only printing btf_string or
btf_aux_string as a comment.
This patch changes the comment to also include the position of the
string within the section in hexadecimal format.
Julian Brown [Thu, 4 Jan 2024 16:44:18 +0000 (16:44 +0000)]
OpenMP: Support accelerated 2D/3D memory copies for AMD GCN
This patch adds support for 2D/3D memory copies for omp_target_memcpy_rect
using AMD extensions to the HSA API. This is just the AMD GCN-specific
part of the following patch:
Jonathan Wakely [Mon, 8 Jan 2024 11:46:56 +0000 (11:46 +0000)]
libstdc++: Remove std::__unicode::__null_sentinel
The name __null_sentinel is defined as a macro by newlib, so we can't
use it as an identifier. That variable is not actually used by
libstdc++, it was added because P2728R6 proposes std::uc::null_sentinel.
Since we don't need it and it breaks bootstrap for newlib targets, just
remove it. A null sentinel can still be used by constructing a
_Null_sentinel_t object as needed, rather than having a named object of
that type predefined.
Tobias Burnus [Mon, 8 Jan 2024 14:18:10 +0000 (15:18 +0100)]
amdgcn: Add gfx1100 to new XNACK defaults in mkoffload
Commit r14-6997-g78dff4c25c1b95 added an arch-dependent
SET_XNACK_OFF vs. SET_XNACK_ANY check; that was added
between writing and committing the add-gfx1100
commit r14-7005-g52a2c659ae6c21 - and I missed adding
it there.
gcc/ChangeLog:
* config/gcn/mkoffload.cc (main): Handle gfx1100
when setting the default XNACK.
Tobias Burnus [Mon, 8 Jan 2024 14:12:44 +0000 (15:12 +0100)]
GCN: Add pre-initial support for gfx1100
ROCm since 5.7.1 supports gfx1100 (RDNA3) cards. This commit adds support
for it, mostly by assuming gfx1100 behaves identical to gfx1030. Like gfx1030,
gfx1100 support is neither documented nor the build of the multilib enabled by
default.
But contrary to gfx1030, gfx1100 has a known issue causing some libraries not
to build, including newlib: the sdwa variant of v_mov_b32_sdwa is not supported
by the hardware but GCC currently does generate this instruction.
This will be addressed in a later commit.
Richard Biener [Mon, 8 Jan 2024 09:48:19 +0000 (10:48 +0100)]
Clarify -mmovbe documentation
It was noticed that -mmovbe doesn't use movbe for __builtin_bswap{32,64}
when not optimizing.  The following adjusts the documentation to
say it will be used when optimizing, and that it applies to all byte swaps,
not just those carried out via builtin function calls.
Richard Biener [Fri, 15 Dec 2023 09:32:29 +0000 (10:32 +0100)]
tree-optimization/113026 - avoid vector epilog in more cases
The following avoids creating a niter peeling epilog more consistently,
matching what peeling later uses for the skip_vector condition, in
particular when versioning is required which then also ensures the
vector loop is entered unless the epilog is vectorized.  This should
ideally match LOOP_VINFO_VERSIONING_THRESHOLD, which is only computed
later; some refactoring could make that matching better.
The patch also makes sure to adjust the upper bound of the epilogues
when we do not have a skip edge around the vector loop.
PR tree-optimization/113026
* tree-vect-loop.cc (vect_need_peeling_or_partial_vectors_p):
Avoid an epilog in more cases.
* tree-vect-loop-manip.cc (vect_do_peeling): Adjust the
epilogues niter upper bounds and estimates.
* gcc.dg/torture/pr113026-1.c: New testcase.
* gcc.dg/torture/pr113026-2.c: Likewise.
Jakub Jelinek [Mon, 8 Jan 2024 12:59:15 +0000 (13:59 +0100)]
gimplify: Fix ICE in recalculate_side_effects [PR113228]
The following testcase ICEs during regimplification since the addition of
(convert (eqne zero_one_valued_p@0 INTEGER_CST@1))
simplification. That simplification is novel in the sense that in
gimplify_expr it can turn an expression (comparison in particular) into
a SSA_NAME. Normally when gimplify_expr sees originally a SSA_NAME, it does
case SSA_NAME:
/* Allow callbacks into the gimplifier during optimization. */
ret = GS_ALL_DONE;
break;
and doesn't try to recalculate side effects because of that, but in this
case gimplify_expr normally enters the:
default:
switch (TREE_CODE_CLASS (TREE_CODE (*expr_p)))
{
case tcc_comparison:
then does
*expr_p = gimple_boolify (*expr_p);
and then
*expr_p = fold_convert_loc (input_location,
org_type, *expr_p);
with this new match.pd simplification turns that tcc_comparison class
into SSA_NAME. Unlike the outer SSA_NAME handling though, this falls
through into
recalculate_side_effects (*expr_p);
dont_recalculate:
break;
but unfortunately recalculate_side_effects doesn't handle SSA_NAME and ICEs
on it.
SSA_NAMEs don't ever have TREE_SIDE_EFFECTS set on those, so the following
patch fixes it by handling it similarly to the tcc_constant case.
2024-01-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113228
* gimplify.cc (recalculate_side_effects): Do nothing for SSA_NAMEs.
Jakub Jelinek [Mon, 8 Jan 2024 12:58:28 +0000 (13:58 +0100)]
lower-bitint: Fix up lowering of huge _BitInt 0 PHI args [PR113120]
The PHI argument expansion of INTEGER_CSTs where bitint_min_cst_precision
returns significantly smaller precision than the PHI result precision is
optimized by loading the much smaller constant (if any) from memory and
then either setting the remaining limbs to {} or calling memset with -1.
The case where no constant is loaded (i.e. c == NULL) is when the
INTEGER_CST is 0 or all_ones - in that case we can just set all the limbs
to {} or call memset with -1 on everything.
While for the all ones extension case that is what the code was already
doing, I missed one spot in the zero extension case where, when constructing
the offset of the MEM_REF lhs of the = {} store, it was unconditionally using
the byte size of c, which obviously doesn't work if c is NULL.  In that case
we want to use a zero offset.
2024-01-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113120
* gimple-lower-bitint.cc (gimple_lower_bitint): Fix handling of very
large _BitInt zero INTEGER_CST PHI argument.
Jakub Jelinek [Mon, 8 Jan 2024 12:57:26 +0000 (13:57 +0100)]
lower-bitint: Punt .*_OVERFLOW optimization if cast from IMAGPART_EXPR appears before REALPART_EXPR [PR113119]
_BitInt lowering for .{ADD,SUB,MUL}_OVERFLOW calls which have both
REALPART_EXPR and IMAGPART_EXPR used and have a cast from the IMAGPART_EXPR
to a boolean or normal integral type lowers them at the point of
the REALPART_EXPR statement (which is especially needed if the lhs of
the call is complex with large/huge _BitInt element type); we emit the
stmt to set the lhs of the cast at the same spot as well.
Normally, the lowering of __builtin_{add,sub,mul}_overflow arranges
the REALPART_EXPR to come before IMAGPART_EXPR, followed by cast from that,
but as the testcase shows, a redundant __builtin_*_overflow call and VN
can reorder those and we then ICE because the def-stmt of the former cast
from IMAGPART_EXPR may appear after its uses.
We already check that all of REALPART_EXPR, IMAGPART_EXPR and the cast
from the latter appear in the same bb as the .{ADD,SUB,MUL}_OVERFLOW call
in the optimization, the following patch just extends it to make sure
cast appears after REALPART_EXPR; if not, we punt on the optimization and
expand it as a store of a complex _BitInt on the location of the ifn call.
Only the testcase in the testsuite is changed by the patch, all other
__builtin_*_overflow* calls in the bitint* tests (and there are quite a few)
have REALPART_EXPR first.
2024-01-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113119
* gimple-lower-bitint.cc (optimizable_arith_overflow): Punt if
both REALPART_EXPR and cast from IMAGPART_EXPR appear, but cast
is before REALPART_EXPR.
AVR: PR target/112952: Fix attribute "address", "io" and "io_low"
so they work with all combinations of -f[no-]data-sections -f[no-]common.
The patch also improves some diagnostics and adds additional checks, for
example these attributes must only be applied to variables in static storage.
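For reference, these attributes are applied to variables in static storage
roughly as follows (the addresses below are invented and must be replaced by
addresses valid for the target device):

/* Reside at fixed locations; "io"/"io_low" addresses are in I/O space,
   "io_low" additionally within reach of the sbi/cbi/sbis/sbic instructions.  */
volatile char port_b __attribute__((io (0x25)));
volatile char gpior0 __attribute__((io_low (0x1e)));
int tuning_table[4] __attribute__((address (0x800100)));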
gcc/
PR target/112952
* config/avr/avr.cc (avr_handle_addr_attribute): Also print valid
range when diagnosing attribute "io" and "io_low" are out of range.
(avr_eval_addr_attrib): Don't ICE on empty address at that place.
(avr_insert_attributes): Reject attributes "address", "io" and "io_low"
in contexts other than static storage.
(avr_asm_output_aligned_decl_common): Move output of decls with
attribute "address", "io", and "io_low" to...
(avr_output_addr_attrib): ...this new function.
(avr_asm_asm_output_aligned_bss): Remove output for decls with
attribute "address", "io", and "io_low".
(avr_encode_section_info): Rectify handling of decls with attribute
"address", "io", and "io_low".
gcc/testsuite/
PR target/112952
* gcc.target/avr/attribute-io.h: New file.
* gcc.target/avr/pr112952-0.c: New test.
* gcc.target/avr/pr112952-1.c: New test.
* gcc.target/avr/pr112952-2.c: New test.
* gcc.target/avr/pr112952-3.c: New test.
Andrew Stubbs [Wed, 3 Jan 2024 16:53:52 +0000 (16:53 +0000)]
amdgcn: Match new XNACK defaults in mkoffload
The patch that disabled XNACK by default for ISA other than gfx90a was missing
the matching mkoffload changes. This patch should fix offload.
gcc/ChangeLog:
* config/gcn/mkoffload.cc (TEST_XNACK_UNSET): New.
(elf_flags): Remove XNACK from the default value.
(main): Set a default XNACK according to the arch.
Andrew Stubbs [Wed, 3 Jan 2024 16:18:43 +0000 (16:18 +0000)]
amdgcn: Don't double-count AVGPRs
CDNA2 devices have VGPRs and AVGPRs combined into a single hardware register
file (they're separate in CDNA1).  I originally thought they were counted
separately in the vgpr_count and agpr_count metadata fields, and therefore
mkoffload had to account for this when passing the values to libgomp. However,
that wasn't the case, and this code should have been removed when I corrected
the calculations in gcn.cc. Fixing the error now.
Jonathan Wakely [Sun, 7 Jan 2024 23:14:31 +0000 (23:14 +0000)]
libstdc++: Implement P2918R0 "Runtime format strings II" for C++26
This adds std::runtime_format for C++26. These new overloaded functions
enhance the std::format API so that it isn't necessary to use the less
ergonomic std::vformat and std::make_format_args (which are meant to be
implementation details). This was approved in Kona 2023 for C++26.
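A brief usage sketch (assuming the interface proposed in P2918R0 and added
here; not taken from the new test):

#include <format>
#include <string>
#include <string_view>

std::string greet (std::string_view fmt_from_config, std::string_view name)
{
  // Without runtime_format a non-constant format string needs std::vformat
  // and std::make_format_args; with it, std::format can be used directly.
  return std::format (std::runtime_format (fmt_from_config), name);
}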
libstdc++-v3/ChangeLog:
* include/std/format (__format::_Runtime_format_string): Define
new class template.
(basic_format_string): Add non-consteval constructor for runtime
format strings.
(runtime_format): Define new function for C++26.
* testsuite/std/format/runtime_format.cc: New test.
Jonathan Wakely [Sun, 7 Jan 2024 22:21:08 +0000 (22:21 +0000)]
libstdc++: Implement P2905R2 "Runtime format strings" for C++20
This change makes std::make_format_args refuse to create dangling
references to temporaries. This makes the std::vformat API safer. This
was approved in Kona 2023 as a DR for C++20 so the change is implemented
unconditionally.
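In practice this means temporaries can no longer be passed to
make_format_args; a before/after sketch (illustrative, not the new test):

#include <format>
#include <string>

std::string demo ()
{
  int value = 42;

  // OK: lvalue argument.
  auto args = std::make_format_args (value);

  // Ill-formed after this change (it used to compile and could dangle):
  // auto bad = std::make_format_args (value + 1);

  return std::vformat ("{}", args);
}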
libstdc++-v3/ChangeLog:
* include/bits/chrono_io.h (__formatter_chrono): Always use
lvalue arguments to make_format_args.
* include/std/format (make_format_args): Change parameter pack
from forwarding references to lvalue references. Remove use of
remove_reference_t which is now unnecessary.
(format_to, formatted_size): Remove incorrect forwarding of
arguments.
* include/std/ostream (print): Remove forwarding of arguments.
* include/std/print (print): Likewise.
* testsuite/20_util/duration/io.cc: Use lvalues as arguments to
make_format_args.
* testsuite/std/format/arguments/args.cc: Likewise.
* testsuite/std/format/arguments/lwg3810.cc: Likewise.
* testsuite/std/format/functions/format.cc: Likewise.
* testsuite/std/format/functions/vformat_to.cc: Likewise.
* testsuite/std/format/string.cc: Likewise.
* testsuite/std/time/day/io.cc: Likewise.
* testsuite/std/time/month/io.cc: Likewise.
* testsuite/std/time/weekday/io.cc: Likewise.
* testsuite/std/time/year/io.cc: Likewise.
* testsuite/std/time/year_month_day/io.cc: Likewise.
* testsuite/std/format/arguments/args_neg.cc: New test.
Jonathan Wakely [Sat, 16 Dec 2023 23:30:20 +0000 (23:30 +0000)]
libstdc++: Add Unicode-aware width estimation for std::format
This implements the requirements in the following proposals, which
dictate how std::format deals with non-ASCII strings:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1868r1.html
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2572r1.html
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2675r1.pdf
There are two parts to this. The width estimation for strings must only
count the width of the first character in an extended grapheme cluster.
That requires implementing the algorithm for detecting cluster breaks,
which requires a number of lookup tables of the grapheme cluster break
properties (and Indic_Conjunct_Break and Extended_Pictographic
properties) of every code point. Additionally, some characters have a
field width of 2, which requires another lookup table of field widths
for every code point. The tables added in this commit do not contain
entries for every code point from 0 to 0x10FFFF as that would be very
inefficient and use too much memory. Instead the tables only contain the
code points that form an "edge" for a property, omitting all the code
points that have the same property as the preceding one. We can use a
binary search to find the closest code point in the table that is not
greater than the one we're looking for.
The tables are generated by a new Python script added to the
contrib/unicode directory, and a new data file downloaded from the
Unicode Consortium website.
The rules for extended grapheme cluster breaking are implemented for the
latest Unicode standard, version 15.1.0.
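A minimal sketch of the lookup scheme described above (invented table
contents and names; the real tables live in <bits/unicode-data.h>): only the
code points where a property value changes are stored, and a binary search
finds the last entry not greater than the query.

#include <algorithm>
#include <iterator>
#include <utility>

// A toy "edge" table: each entry is the first code point of a run and the
// field width of that run.  The entries are placeholders, not Unicode data.
constexpr std::pair<char32_t, int> width_edges[] = {
  { 0x0000, 1 },
  { 0x1100, 2 },   // a hypothetical wide range starts here
  { 0x1160, 1 },   // and ends here
};

constexpr int field_width (char32_t c)
{
  // Binary search for the last edge whose code point is <= c.
  auto it = std::upper_bound (std::begin (width_edges), std::end (width_edges),
                              c, [] (char32_t v, const std::pair<char32_t, int>& e)
                                 { return v < e.first; });
  return it == std::begin (width_edges) ? 1 : (it - 1)->second;
}

static_assert (field_width (0x0041) == 1);
static_assert (field_width (0x1105) == 2);
static_assert (field_width (0x2000) == 1);

int main () { }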
libstdc++-v3/ChangeLog:
* include/Makefile.am: Add new headers.
* include/Makefile.in: Regenerate.
* include/bits/unicode.h: New file.
* include/bits/unicode-data.h: New file.
* include/std/format: Include <bits/unicode.h>.
(__literal_encoding_is_utf8): Move to <bits/unicode.h>.
(_Spec::_M_fill): Change type to char32_t.
(_Spec::_M_parse_fill_and_align): Read a Unicode scalar value
instead of a single character.
(__write_padded): Change __fill_char parameter to char32_t and
encode it into the output.
(__formatter_str::format): Use new __unicode::__field_width and
__unicode::__truncate functions.
* include/std/ostream: Adjust namespace qualification for
__literal_encoding_is_utf8.
* include/std/print: Likewise.
* src/c++23/print.cc: Add [[unlikely]] attribute to error path.
* testsuite/ext/unicode/view.cc: New test.
* testsuite/std/format/functions/format.cc: Add missing examples
from the standard demonstrating alignment with non-ASCII
characters. Add examples checking correct handling of extended
grapheme clusters.
contrib/ChangeLog:
* unicode/README: Add notes about generating libstdc++ tables.
* unicode/GraphemeBreakProperty.txt: New file.
* unicode/emoji-data.txt: New file.
* unicode/gen_libstdcxx_unicode_data.py: New file.
Jonathan Wakely [Wed, 3 Jan 2024 15:35:50 +0000 (15:35 +0000)]
libstdc++: Implement P2909R4 ("Dude, where's my char?") for C++20
This change ensures that char and wchar_t arguments are formatted
consistently when using integer presentation types. This avoids
non-portable std::format output that depends on whether char and wchar_t
happen to be signed or unsigned on the target. Formatting '\xff' as an
integer will now always format 255 and not sometimes -1. This was
approved in Kona 2023 as a DR for C++20 so the change is implemented
unconditionally.
Also make character formatters check for _Pres_c explicitly and call
_M_format_character directly. This avoids the overhead of calling format
and _S_to_character and then calling _M_format_character anyway.
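The user-visible effect, illustrated:

#include <cassert>
#include <format>
#include <string>

int main ()
{
  // Before this change the result depended on whether char is signed:
  // it could be "-1" on some targets.  Now it is "255" everywhere.
  assert (std::format ("{:d}", '\xff') == "255");
  assert (std::format ("{:#x}", '\xff') == "0xff");
}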
libstdc++-v3/ChangeLog:
* include/bits/version.def (format_uchar): Define.
* include/bits/version.h: Regenerate.
* include/std/format (formatter<C, C>::format): Check for
_Pres_c and call _M_format_character directly. Cast C to its
unsigned equivalent for formatting as an integer.
(formatter<char, wchar_t>::format): Likewise.
(basic_format_arg(T&)): Store char arguments as unsigned char
for formatting to a wide string.
* testsuite/std/format/functions/format.cc: Adjust test. Check
formatting of
Feng Wang [Fri, 5 Jan 2024 09:23:44 +0000 (09:23 +0000)]
RISC-V: Fix avl-type operand index error for ZVBC
This patch fixes the rtl-checking error for the crypto vector. The root
cause is that the avl-type operand index of the zvbc instructions is wrong;
it should be operand[8], not operand[5].
gcc/ChangeLog:
* config/riscv/vector.md: Modify avl_type operand index of zvbc ins.
AVR: Fix some test options. Skip tests with address-space on Reduced Tiny.
gcc/testsuite/
* gcc.target/avr/lra-cpymem_qi.c: Remove duplicate -mmcu=.
* gcc.target/avr/lra-elim.c: Same.
* gcc.target/avr/pr112830.c: Skip for Reduced Tiny.
* gcc.target/avr/pr46779-1.c: Same.
* gcc.target/avr/pr46779-2.c: Same.
* gcc.target/avr/pr86869.c: Skip for Reduced Tiny and add -std=gnu99
for GNU-C due to address spaces.
* gcc.target/avr/pr89270.c: Same.
* gcc.target/avr/torture/builtins-2-flash.c: Only test address
space __flash1 if we have it.
* gcc.target/avr/torture/addr-space-1-1.c: Same.
* gcc.target/avr/torture/addr-space-2-1.c: Same.
Roger Sayle [Sun, 7 Jan 2024 17:42:00 +0000 (17:42 +0000)]
i386: PR target/113231: Improved costs in Scalar-To-Vector (STV) pass.
This patch improves the cost/gain calculation used during the i386 backend's
SImode/DImode scalar-to-vector (STV) conversion pass. The current code
handles loads and stores, but doesn't consider that converting other
scalar operations with a memory destination requires an explicit load
before and an explicit store after the vector equivalent.
To ease the review, the significant change looks like:
/* For operations on memory operands, include the overhead
   of explicit load and store instructions.  */
if (MEM_P (dst))
  igain += optimize_insn_for_size_p ()
           ? -COSTS_N_BYTES (8)
           : (m * (ix86_cost->int_load[2]
                   + ix86_cost->int_store[2])
              - (ix86_cost->sse_load[sse_cost_idx] +
                 ix86_cost->sse_store[sse_cost_idx]));
however the patch itself is complicated by a change in indentation
which leads to a number of lines with only whitespace changes.
For architectures where integer load/store costs are the same as
vector load/store costs, there should be no change without -Os/-Oz.
2024-01-07 Roger Sayle <roger@nextmovesoftware.com>
Uros Bizjak <ubizjak@gmail.com>
gcc/ChangeLog
PR target/113231
* config/i386/i386-features.cc (compute_convert_gain): Include
the overhead of explicit load and store (movd) instructions when
converting non-store scalar operations with memory destinations.
Various indentation whitespace fixes.
gcc/testsuite/ChangeLog
PR target/113231
* gcc.target/i386/pr113231.c: New test case.
gcc/testsuite/
PR testsuite/52641
* gcc.dg/torture/pr110838.c: Use proper shift offset to get MSB of int.
* gcc.dg/torture/pr112282.c: Use at least 32 bits for :20 bit-fields.
* gcc.dg/tree-ssa/bitcmp-5.c: Use integral type with 32 bits or more.
* gcc.dg/tree-ssa/bitcmp-6.c: Same.
* gcc.dg/tree-ssa/cltz-complement-max.c: Same.
* gcc.dg/tree-ssa/cltz-max.c: Same.
* gcc.dg/tree-ssa/if-to-switch-8.c: Use literals that fit int.
* gcc.dg/tree-ssa/if-to-switch-9.c [avr]: Set case-values-threshold=3.
* gcc.dg/tree-ssa/negneg-3.c: Discriminate [not] large_double.
* gcc.dg/tree-ssa/phi-opt-25b.c: Use types of correct widths for
__builtin_bswapN.
* gcc.dg/tree-ssa/pr55177-1.c: Same.
* gcc.dg/tree-ssa/popcount-max.c: Use int32_t where required.
* gcc.dg/tree-ssa/pr111583-1.c: Use intptr_t as needed.
* gcc.dg/tree-ssa/pr111583-2.c: Same.
Nathaniel Shead [Tue, 2 Jan 2024 22:28:43 +0000 (09:28 +1100)]
c++: Fix ICE when writing nontrivial variable initializers
The attached testcase Patrick found in PR c++/112899 ICEs because it is
attempting to write a variable initializer that is no longer in the
static_aggregates map.
The issue is that, for non-header modules, the loop in
c_parse_final_cleanups prunes the static_aggregates list, which means
that by the time we get to emitting module information those
initialisers have been lost.
However, we don't actually need to write non-trivial initialisers for
non-header modules, because they've already been emitted as part of the
module TU itself. Instead let's just only write the initializers from
header modules (which skipped writing them in c_parse_final_cleanups).
gcc/cp/ChangeLog:
* module.cc (trees_out::write_var_def): Only write initializers
in header modules.
gcc/testsuite/ChangeLog:
* g++.dg/modules/init-5_a.C: New test.
* g++.dg/modules/init-5_b.C: New test.
Nathaniel Shead [Wed, 3 Jan 2024 04:29:51 +0000 (15:29 +1100)]
c++: Export usings referring to global module fragment [PR109679]
This patch stops 'add_binding_entity' from ignoring all names in the
global module fragment, since they should still be exported if named
in an exported using-declaration.
PR c++/109679
gcc/cp/ChangeLog:
* module.cc (depset::hash::add_binding_entity): Don't skip names
in the GMF if they've been exported with a using declaration.
gcc/testsuite/ChangeLog:
* g++.dg/modules/using-11.h: New test.
* g++.dg/modules/using-11_a.C: New test.
* g++.dg/modules/using-11_b.C: New test.
Nathaniel Shead [Fri, 24 Nov 2023 05:26:43 +0000 (16:26 +1100)]
c++: Follow module grammar more closely [PR110808]
This patch cleans up the parsing of module-declarations and
import-declarations to more closely follow the grammar defined by the
standard.
For instance, currently we allow declarations like 'import A:B', even
from an unrelated source file (not part of module A), which causes
errors in merging declarations. However, the syntax in [module.import]
doesn't even allow this form of import, so this patch prevents this from
parsing at all and avoids the error that way.
Additionally, we sometimes allow statements like 'import :X' or
'module :X' even when not in a named module, and this causes segfaults,
so we disallow this too.
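For instance (illustrative fragments in the spirit of the new tests, not
copies of them):

// In a translation unit that is not part of module A:
import A:B;    // now rejected: a partition can only be imported from within module A

// In a translation unit that is not a named module at all:
import :X;     // now rejected instead of ICEing: there is no module for the partition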
PR c++/110808
gcc/cp/ChangeLog:
* parser.cc (cp_parser_module_name): Rewrite to handle
module-names and module-partitions independently.
(cp_parser_module_partition): New function.
(cp_parser_module_declaration): Parse module partitions
explicitly. Don't change state if parsing module decl failed.
(cp_parser_import_declaration): Handle different kinds of
import-declarations locally.
gcc/testsuite/ChangeLog:
* g++.dg/modules/part-hdr-1_c.C: Fix syntax.
* g++.dg/modules/part-mac-1_c.C: Likewise.
* g++.dg/modules/mod-invalid-1.C: New test.
* g++.dg/modules/part-8_a.C: New test.
* g++.dg/modules/part-8_b.C: New test.
* g++.dg/modules/part-8_c.C: New test.
Jonathan Wakely [Wed, 13 Dec 2023 09:45:44 +0000 (09:45 +0000)]
libstdc++: Avoid conflicting declaration in eh_call.cc [PR112997]
r14-1527-g2415024e0f81f8 changed the parameter of the
__cxa_call_terminate definition, but there's also a declaration in
unwind-cxx.h which should have been changed too.
libstdc++-v3/ChangeLog:
PR libstdc++/112997
* libsupc++/unwind-cxx.h (__cxa_call_terminate): Change first
parameter to void*.
This reduces the overhead of using std::is_trivially_destructible_v and
as a result fixes some recent regressions seen with a non-default
GLIBCXX_TESTSUITE_STDS env var:
FAIL: 20_util/variant/87619.cc -std=gnu++20 (test for excess errors)
FAIL: 20_util/variant/87619.cc -std=gnu++23 (test for excess errors)
FAIL: 20_util/variant/87619.cc -std=gnu++26 (test for excess errors)
libstdc++-v3/ChangeLog:
* include/std/type_traits (is_trivially_destructible_v): Use
built-in directly when concepts are supported.
* testsuite/20_util/is_trivially_destructible/value_v.cc: New
test.
1). We not only have vashl_optab, vashr_optab, vlshr_optab, which vectorize shifts with a vector shift amount,
that is, vectorization of 'a[i] >> x[i]', where the shift amount is loop variant.
2). But we also have ashl_optab, ashr_optab, lshr_optab, which can vectorize shifts with a scalar shift amount,
that is, vectorization of 'a[i] >> x', where the shift amount is loop invariant.
For the 2) case, we don't need to allocate a vector register group for the shift amount.
So consider this following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int x,
int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] >> x;
int tmp2 = tmp * b[i];
c[i] = tmp2 * b[i];
d[i] = tmp * tmp2 * b[i] >> x;
}
}
Before this patch, we chose LMUL = 4; now, after this patch, we can choose LMUL = 8.
Tested on both RV32/RV64 with no regressions. Ok for trunk?
Note that we will apply the same heuristic for vadd.vx, etc. when the late-combine pass from
Richard Sandiford is committed (since we need the late-combine pass to do the vv->vx transformation for vadd).
Mark Wielaard [Sat, 6 Jan 2024 00:25:01 +0000 (01:25 +0100)]
Regenerate libgomp/configure for copyright year update
commit a945c346f57ba40fc80c14ac59be0d43624e559d updated
libgomp/plugin/configfrag.ac but didn't regenerate/update
libgomp/configure which includes that configfrag.
aarch64: Extend VECT_COMPARE_COSTS to !SVE [PR113104]
When SVE is enabled, we try vectorising with multiple different SVE and
Advanced SIMD approaches and use the cost model to pick the best one.
Until now, we've not done that for Advanced SIMD, since "the first mode
that works should always be the best".
The testcase is a counterexample. Each iteration of the scalar loop
vectorises naturally with 64-bit input vectors and 128-bit output
vectors. We do try that for SVE, and choose it as the best approach.
But the first approach we try is instead to use:
- a vectorisation factor of 2
- 1 128-bit vector for the inputs
- 2 128-bit vectors for the outputs
But since the stride is variable, the cost of marshalling the input
vector from two iterations outweighs the benefit of doing two iterations
at once.
This patch therefore generalises aarch64-sve-compare-costs to
aarch64-vect-compare-costs and applies it to non-SVE compilations.
gcc/
PR target/113104
* doc/invoke.texi (aarch64-sve-compare-costs): Replace with...
(aarch64-vect-compare-costs): ...this.
* config/aarch64/aarch64.opt (-param=aarch64-sve-compare-costs=):
Replace with...
(-param=aarch64-vect-compare-costs=): ...this new param.
* config/aarch64/aarch64.cc (aarch64_override_options_internal):
Don't disable it when vectorizing for Advanced SIMD only.
(aarch64_autovectorize_vector_modes): Apply VECT_COMPARE_COSTS
whenever aarch64_vect_compare_costs is true.
Jonathan Wakely [Fri, 5 Jan 2024 13:40:06 +0000 (13:40 +0000)]
libstdc++: Avoid overflow when appending to std::filesystem::path
This prevents a std::filesystem::path from exceeding INT_MAX/4
components (which is unlikely to ever be a problem except on 16-bit
targets). That limit ensures that the capacity*1.5 calculation doesn't
overflow. We should also check that we don't exceed SIZE_MAX when
calculating how many bytes to allocate. That only needs to be checked
when int is at least as large as size_t, because otherwise we know that
the product INT_MAX/4 * sizeof(value_type) will fit in SIZE_MAX. For
targets where size_t is twice as wide as int this obviously holds. For
msp430-elf we have 16-bit int and 20-bit size_t, so the condition holds
as long as sizeof(value_type) fits in 7 bits, which it does.
We can also remove some floating-point arithmetic in operator/= which
ensures exponential growth of the buffer. That's redundant because
path::_List::reserve does that anyway (and does so more efficiently
since the commit immediately before this one).
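A sketch of the kind of check added (simplified, with invented names and
under the assumptions stated in the comments; the real code is in
path::_List::reserve in fs_path.cc):

#include <climits>
#include <cstddef>
#include <stdexcept>

// Grow a capacity geometrically without overflowing, capping the number of
// components at INT_MAX/4 so that cap + cap/2 (the 1.5x growth) cannot wrap.
std::size_t grow_capacity (std::size_t cur, std::size_t wanted,
                           std::size_t elt_size /* assumed non-zero */)
{
  constexpr std::size_t max_components = INT_MAX / 4;
  if (wanted > max_components)
    throw std::length_error ("filesystem path too long");
  std::size_t cap = cur + cur / 2;       // exponential growth, no overflow
  if (cap < wanted)
    cap = wanted;
  if (cap > max_components)
    cap = max_components;
  // When size_t is not wider than int, also guard the byte count.
  if constexpr (sizeof (std::size_t) <= sizeof (int))
    if (cap > std::size_t (-1) / elt_size)
      throw std::length_error ("filesystem path too long");
  return cap;
}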
libstdc++-v3/ChangeLog:
* src/c++17/fs_path.cc (path::_List::reserve): Limit maximum
size and check for overflows in arithmetic.
(path::operator/=(const path&)): Remove redundant exponential
growth calculation.
Lulu Cheng [Thu, 4 Jan 2024 02:37:53 +0000 (10:37 +0800)]
LoongArch: Fixed the problem of incorrect judgment of the immediate field of the [x]vld/[x]vst instruction.
The [x]vld/[x]vst directive is defined as follows:
[x]vld/[x]vst {x/v}d, rj, si12
When not modified, the immediate field of [x]vld/[x]vst is between 10 and
14 bits depending on the type. However, in loongarch_valid_offset_p, the
immediate field is restricted first, so there is no error; in some cases,
though, redundant instructions will be generated (see the test cases).
Now modify it according to the description in the instruction manual.
gcc/ChangeLog:
* config/loongarch/lasx.md (lasx_mxld_<lasxfmt_f>):
Modify the method of determining the memory offset of [x]vld/[x]vst.
(lasx_mxst_<lasxfmt_f>): Likewise.
* config/loongarch/loongarch.cc (loongarch_valid_offset_p): Delete.
(loongarch_address_insns): Likewise.
* config/loongarch/lsx.md (lsx_ld_<lsxfmt_f>): Likewise.
(lsx_st_<lsxfmt_f>): Likewise.
* config/loongarch/predicates.md (aq10b_operand): Likewise.
(aq10h_operand): Likewise.
(aq10w_operand): Likewise.
(aq10d_operand): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vect-ld-st-imm12.c: New test.
chenxiaolong [Fri, 5 Jan 2024 03:43:29 +0000 (11:43 +0800)]
LoongArch: testsuite:Give up the detection of the gcc.dg/fma-{3, 4, 6, 7}.c file.
On the LoongArch architecture, the above four test cases need to be waived
during testing. There are two situations:
1. The function of fma-{3,6}.c test is to find the value of c-a*b, but on
the LoongArch architecture, the function of the existing fnmsub instruction
is to find the value of -(a*b - c);
2. The function of fma-{4,7}.c test is to find the value of -(a*b)-c, but on
the LoongArch architecture, the function of the existing fnmadd instruction
is to find the value of -(a*b + c);
Analysis of the above two cases shows that positive and negative zero
results can differ.
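A worked instance of the sign-of-zero difference (not taken from the
testsuite):

#include <math.h>
#include <stdio.h>

int main (void)
{
  double a = 0.0, b = 1.0, c = 0.0;
  double sub   = c - a * b;       /* what fma-3.c/fma-6.c compute: +0.0 */
  double nmsub = -(a * b - c);    /* what LoongArch fnmsub computes: -0.0 */
  printf ("%d %d\n", signbit (sub) != 0, signbit (nmsub) != 0);  /* prints "0 1" */
  return 0;
}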
gcc/testsuite/ChangeLog
* gcc.dg/fma-3.c: The intermediate file corresponding to the
function does not produce the corresponding FNMA symbol, so the test
rules should be skipped when testing.
* gcc.dg/fma-4.c: The intermediate file corresponding to the
function does not produce the corresponding FNMS symbol, so skip the
test rules when testing.
* gcc.dg/fma-6.c: The cause is the same as fma-3.c.
* gcc.dg/fma-7.c: The cause is the same as fma-4.c
In the LoongArch architecture, the reason for not adding the 128-bit
vector-width-*hi* instruction template in the GCC back end is that it causes
program performance loss, so we can only add the "-mlasx" compilation option
to use 256-bit vectorization functions in test files.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/bb-slp-pattern-1.c: If you are testing on the
LoongArch architecture, you need to add the "-mlasx" compilation
option to generate vectorized code.
* gcc.dg/vect/slp-widen-mult-half.c: Dito.
* gcc.dg/vect/vect-widen-mult-const-s16.c: Dito.
* gcc.dg/vect/vect-widen-mult-const-u16.c: Dito.
* gcc.dg/vect/vect-widen-mult-half-u8.c: Dito.
* gcc.dg/vect/vect-widen-mult-half.c: Dito.
* gcc.dg/vect/vect-widen-mult-u16.c: Dito.
* gcc.dg/vect/vect-widen-mult-u8-s16-s32.c: Dito.
* gcc.dg/vect/vect-widen-mult-u8-u32.c: Dito.
* gcc.dg/vect/vect-widen-mult-u8.c: Dito.
chenxiaolong [Fri, 5 Jan 2024 03:43:27 +0000 (11:43 +0800)]
LoongArch: testsuite:Delete the default run behavior in pr60510.f.
When binutils does not support vector instruction sets, the test program fails
because it does not recognize vectorization at the assembly stage. Therefore,
the default run behavior of the program is deleted, so that the behavior of
the program depends on whether the software supports vectorization.
gcc/testsuite/ChangeLog:
* gfortran.dg/vect/pr60510.f: Delete the default behavior of the
program.
chenxiaolong [Fri, 5 Jan 2024 03:43:26 +0000 (11:43 +0800)]
LoongArch: testsuite:Fix FAIL in file bind_c_array_params_2.f90.
On the LoongArch architecture, the bind_c_array_params_2.f90 test failed
because the scan pattern did not match the assembly code generated for the
function call, such as bl %plt(myBindC).
gcc/testsuite/ChangeLog:
* gfortran.dg/bind_c_array_params_2.f90: Add code test rules to
support testing of the loongArch architecture.
chenxiaolong [Fri, 5 Jan 2024 03:43:24 +0000 (11:43 +0800)]
LoongArch: testsuite:Modify the test behavior of the vect-bic-bitmask-{12, 23}.c file.
Before this modification, dg-do was set to assemble in
vect-bic-bitmask-{12,23}.c. However, when the binutils library does not
support the vector instruction set, it fails to recognize the vector
instructions and a FAIL appears at the assembly stage. So set the tests'
dg-do to compile.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-bic-bitmask-12.c: Change the default
setting of assembly to compile.
* gcc.dg/vect/vect-bic-bitmask-23.c: Dito.
Alex Coplan [Fri, 5 Jan 2024 12:25:00 +0000 (12:25 +0000)]
aarch64: Further fix for throwing insns in ldp/stp pass [PR113217]
As the PR shows, the fix in r14-6916-g057dc349021660c40699fb5c98fd9cac8e168653 was not complete.
That fix was enough to stop us trying to move throwing accesses above
nondebug insns, but due to this code in try_fuse_pair:
// Placement strategy: push loads down and pull stores up, this should
// help register pressure by reducing live ranges.
if (load_p)
range.first = range.last;
else
range.last = range.first;
we would still try to move stores up above any debug insns that occurred
immediately after the previous nondebug insn. This patch fixes that by
narrowing the move range in the case that the second access is throwing
to exactly the range of that insn.
Note that we still need the fix to latest_hazard_before mentioned above
so as to ensure we select a suitable base and reject pairs if it isn't
viable to form the pair at the end of the BB.
gcc/ChangeLog:
PR target/113217
* config/aarch64/aarch64-ldp-fusion.cc
(ldp_bb_info::try_fuse_pair): If the second access can throw,
narrow the move range to exactly that insn.
GCC can emit code between the function label and the .LASANPC label,
making the latter unaligned. Some architectures cannot load unaligned
labels directly and require literal pool entries, which is inefficient.
Move the invocation of asan_function_start to
ASM_OUTPUT_FUNCTION_LABEL, which guarantees that no additional code is
emitted. This allows setting the .LASANPC label alignment to the
respective function alignment.
Implement ASM_DECLARE_FUNCTION_NAME using ASM_OUTPUT_FUNCTION_LABEL
gccint recommends using ASM_OUTPUT_FUNCTION_LABEL in
ASM_DECLARE_FUNCTION_NAME, but many implementations use
ASM_OUTPUT_LABEL instead. It's inconsistent and prevents changes to
ASM_OUTPUT_FUNCTION_LABEL from affecting the respective targets.
The current constexpr implementation of std::char_traits<C>::move relies
on being able to compare the pointer parameters, which is not allowed
for unrelated pointers. We can use __builtin_constant_p to determine
whether it's safe to compare the pointers directly. If not, then we know
the ranges must be disjoint and so we can use char_traits<C>::copy to
copy forwards from the first character to the last. If the pointers can
be compared directly, then we can simplify the condition for copying
backwards to just two pointer comparisons.
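A hedged sketch of the technique (names are illustrative; the real code is
__gnu_cxx::char_traits<C>::move in <bits/char_traits.h>):

#include <cstddef>

constexpr char*
traits_move (char* dst, const char* src, std::size_t n)
{
  if (n == 0)
    return dst;
  if (__builtin_is_constant_evaluated ())
    {
      // Comparing unrelated pointers is not a constant expression, so only
      // compare them if __builtin_constant_p says the comparison folds.
      if (__builtin_constant_p (src < dst)
          && src < dst && dst < src + n)
        {
          // dst lies inside [src, src + n): copy backwards.
          for (std::size_t i = n; i > 0; --i)
            dst[i - 1] = src[i - 1];
        }
      else
        {
          // Pointers unrelated (hence disjoint ranges) or no overlap:
          // copy forwards from the first character.
          for (std::size_t i = 0; i < n; ++i)
            dst[i] = src[i];
        }
      return dst;
    }
  return static_cast<char*> (__builtin_memmove (dst, src, n));
}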
libstdc++-v3/ChangeLog:
PR libstdc++/113200
* include/bits/char_traits.h (__gnu_cxx::char_traits::move): Use
__builtin_constant_p to check for unrelated pointers that cannot
be compared during constant evaluation.
* testsuite/21_strings/char_traits/requirements/113200.cc: New
test.
Cassio Neri [Sun, 10 Dec 2023 11:31:31 +0000 (11:31 +0000)]
libstdc++: Remove UB from month and weekday additions and subtractions.
The following invoke signed integer overflow (UB) [1]:
month + months{MAX} // where MAX is the maximum value of months::rep
month + months{MIN} // where MIN is the minimum value of months::rep
month - months{MIN} // where MIN is the minimum value of months::rep
weekday + days {MAX} // where MAX is the maximum value of days::rep
weekday - days {MIN} // where MIN is the minimum value of days::rep
For the additions to MAX, the crux of the problem is that, in libstdc++,
months::rep and days::rep are int64_t. Other implementations use int32_t, cast
operands to int64_t and perform arithmetic operations without risk of
overflowing.
For month + months{MIN}, the implementation follows the Standard's "returns
clause", and overflow occurs when MIN - 1 is evaluated as part of it.
Casting to a larger type could help
but, unfortunately again, this is not possible for libstdc++.
For the subtraction of MIN, the problem is that -MIN is not representable.
It's fair to say that the intention is for these additions/subtractions to
be performed in modulus (12 or 7) arithmetic so that no overflow is expected.
which respectively, returns the remainder of Euclidean division of, __x + __y
and __x - __y by __d without overflowing. These functions replace
constexpr unsigned __modulo(long long __n, unsigned __d);
which also calculates the reminder of __n, where __n is the result of the
addition or subtraction. Hence, these operations might invoke UB before __modulo
is called and thus, __modulo can't do anything to remediate the issue.
In addition to solving the UB issues, __add_modulo and __sub_modulo allow better
codegen (shorter and branchless) on x86-64 and ARM [2].
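The cases listed above can be exercised directly (an illustrative program,
not one of the new tests); with the new helpers all of these are well
defined and simply wrap modulo 12 or 7:

#include <chrono>
#include <limits>

int main ()
{
  using namespace std::chrono;
  constexpr auto mmax = std::numeric_limits<months::rep>::max ();
  constexpr auto mmin = std::numeric_limits<months::rep>::min ();
  constexpr auto dmax = std::numeric_limits<days::rep>::max ();
  constexpr auto dmin = std::numeric_limits<days::rep>::min ();

  // Previously these overflowed inside libstdc++ (UB, and a hard error in
  // constant evaluation); now they are well defined.
  constexpr month   m1 = January + months{mmax};
  constexpr month   m2 = January + months{mmin};
  constexpr month   m3 = January - months{mmin};
  constexpr weekday w1 = Monday  + days{dmax};
  constexpr weekday w2 = Monday  - days{dmin};
  (void) m1; (void) m2; (void) m3; (void) w1; (void) w2;
}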
* include/std/chrono: Fix + and - for months and weekdays.
* testsuite/std/time/month/1.cc: Add constexpr tests against overflow.
* testsuite/std/time/month/2.cc: New test for extreme values.
* testsuite/std/time/weekday/1.cc: Add constexpr tests against overflow.
* testsuite/std/time/weekday/2.cc: New test for extreme values.
Jonathan Wakely [Wed, 3 Jan 2024 12:23:32 +0000 (12:23 +0000)]
libstdc++: Use if-constexpr in std::__try_use_facet [PR113099]
As noted in the PR, we can use if-constexpr for the explicit
instantiation definitions that are compiled with -std=gnu++11. We
just need to disable the -Wc++17-extensions diagnostics.
libstdc++-v3/ChangeLog:
PR libstdc++/113099
* include/bits/locale_classes.tcc (__try_use_facet): Use
if-constexpr for C++11 and up.
Jakub Jelinek [Fri, 5 Jan 2024 10:18:17 +0000 (11:18 +0100)]
scev: Avoid ICE on results used in abnormal PHI args [PR113201]
The following testcase ICEs when rslt is SSA_NAME_OCCURS_IN_ABNORMAL_PHI
and we call replace_uses_by with a INTEGER_CST def, where it ICEs on:
if (e->flags & EDGE_ABNORMAL
&& !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (val))
because val is not an SSA_NAME. One way would be to add
&& TREE_CODE (val) == SSA_NAME
check in between the above 2 lines in replace_uses_by.
And/or the following patch just punts propagating constants to
SSA_NAME_OCCURS_IN_ABNORMAL_PHI rslt uses.
Or we could punt somewhere earlier in final value replacement (but dunno
where).
Jakub Jelinek [Fri, 5 Jan 2024 10:16:58 +0000 (11:16 +0100)]
Improve __builtin_popcount* (x) == 1 generation if x is known != 0 [PR90693]
We expand __builtin_popcount* (x) == 1 as
x ^ (x - 1) > x - 1, either unconditionally in tree-ssa-math-opts.cc
if we don't have direct optab support for popcount, or during
expansion where we compare the costs of comparison of the popcount
against one vs. the above expression.
As mentioned in the PR, if we know from ranger that the argument is
not zero, we can emit x & (x - 1) == 0 test which is same number of
GIMPLE statements, but on many targets cheaper (e.g. whenever an AND
instruction can also set flags on whether result was zero or not).
The following patch does that.
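A small self-contained illustration of the two expansions (not the GCC
internals themselves):

#include <assert.h>

/* Generic expansion of __builtin_popcount (x) == 1: also rejects x == 0.  */
static int pop_eq_1_generic (unsigned x)
{
  return (x ^ (x - 1)) > (x - 1);
}

/* Cheaper form usable when ranger proves x != 0: on many targets the AND
   also sets the flags needed for the comparison against zero.  */
static int pop_eq_1_nonzero (unsigned x)
{
  return (x & (x - 1)) == 0;
}

int main (void)
{
  assert (pop_eq_1_generic (8) && !pop_eq_1_generic (12)
          && !pop_eq_1_generic (0));
  assert (pop_eq_1_nonzero (8) && !pop_eq_1_nonzero (12));
  return 0;
}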
2024-01-05 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/90693
* tree-ssa-math-opts.cc (match_single_bit_test): If
tree_expr_nonzero_p (arg), remember it in the second argument to
IFN_POPCOUNT or lower it as arg & (arg - 1) == 0 rather than
arg ^ (arg - 1) > arg - 1.
* internal-fn.cc (expand_POPCOUNT): If second argument to
IFN_POPCOUNT suggests arg is non-zero, try to expand it as
arg & (arg - 1) == 0 rather than arg ^ (arg - 1) > arg - 1.