Andrew Pinski [Mon, 15 Jan 2024 09:31:36 +0000 (10:31 +0100)]
AVR: target/113156 - Fix ICE due to missing "Save" on -m[long-]double= options.
Multilib options -mdouble= and -mlong-double= are not orthogonal:
TARGET_HANDLE_OPTION = avr-common.cc::avr_handle_option() sets them
such that sizeof(double) <= sizeof(long double) is always true.
Jakub Jelinek [Mon, 15 Jan 2024 08:58:37 +0000 (09:58 +0100)]
lower-bitint: Fix up handling of INTEGER_CSTs in handle_operand in right shifts or comparisons [PR113370]
The INTEGER_CST code uses the remainder of the precision to decide whether to
use the whole constant or just part of it and extend it at runtime, and
furthermore uses it to avoid using all bits even when using the (almost)
whole constant.
The problem is that the prec % (2 * limb_prec) computation it uses is
appropriate only for the normal lowering of mergeable operations (where
we process 2 limbs at a time in a loop starting with the least significant
limbs and process the remaining 0-2 limbs after the loop, there with
constant indexes).  For that case it is ok not to emit the upper
prec % (2 * limb_prec) bits into the constant, because those bits will be
extracted using INTEGER_CST idx and so will be used directly in the
statements as INTEGER_CSTs.
For other cases, where we either process just a single limb in a loop,
process it downwards (e.g. non-equality comparisons) or with some runtime
addends (some shifts), there is either just at most one limb lowered with
INTEGER_CST idx after the loop (e.g. for right shift) or before the loop
(e.g. non-equality comparisons), or all limbs are processed with
non-INTEGER_CST indexes (e.g. for left shift, when m_var_msb is set).
Now, the m_var_msb case is already handled through
      if (m_var_msb)
        type = TREE_TYPE (op);
      else
        /* If we have a guarantee the most significant partial limb
           (if any) will be only accessed through handle_operand
           with INTEGER_CST idx, we don't need to include the partial
           limb in .rodata.  */
        type = build_bitint_type (prec - rem, 1);
but for the right shifts or comparisons, using prec - rem with rem equal to
prec % (2 * limb_prec) was incorrect, so the following patch fixes it
to use the remainder for 2 limbs only if m_upwards_2limb, and the remainder
for 1 limb otherwise.
2024-01-15 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113370
* gimple-lower-bitint.cc (bitint_large_huge::handle_operand): Only
set rem to prec % (2 * limb_prec) if m_upwards_2limb, otherwise
set it to just prec % limb_prec.
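A reduced sketch of the kind of code affected (hypothetical, not the actual
PR113370 testcase; needs a _BitInt-capable GCC with -std=c23):

    /* Non-equality comparison on a huge _BitInt against a wide constant:
       lowered limb by limb, downwards, with at most one partial limb
       accessed via an INTEGER_CST idx, so the 1-limb remainder applies.  */
    _Bool
    cmp (unsigned _BitInt(575) x)
    {
      return x > 0xabcdef0123456789abcdef0123456789uwb;
    }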
Juzhe-Zhong [Mon, 15 Jan 2024 06:57:38 +0000 (14:57 +0800)]
RISC-V: Fix incorrect attribute configuration of ternary instructions
This patch fixes the following FAILs:
Running target riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
FAIL: gcc.c-torture/execute/pr68532.c -O0 execution test
FAIL: gcc.c-torture/execute/pr68532.c -O1 execution test
FAIL: gcc.c-torture/execute/pr68532.c -O2 execution test
FAIL: gcc.c-torture/execute/pr68532.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
FAIL: gcc.c-torture/execute/pr68532.c -O3 -g execution test
FAIL: gcc.c-torture/execute/pr68532.c -Os execution test
FAIL: gcc.c-torture/execute/pr68532.c -O2 -flto -fno-use-linker-plugin -flto-partition=none execution test
Running target riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
FAIL: gcc.dg/vect/pr60196-1.c execution test
FAIL: gcc.dg/vect/pr60196-1.c -flto -ffat-lto-objects execution test
Running target riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
FAIL: gcc.dg/vect/pr60196-1.c execution test
FAIL: gcc.dg/vect/pr60196-1.c -flto -ffat-lto-objects execution test
Running target riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
FAIL: gcc.dg/vect/pr60196-1.c execution test
FAIL: gcc.dg/vect/pr60196-1.c -flto -ffat-lto-objects execution test
The root cause is that the attributes of the ternary instructions are
incorrect, which causes the AVL propagation pass and the VSETVL pass to
behave incorrectly.
Tested with no regressions and committed.
PR target/113393
gcc/ChangeLog:
* config/riscv/vector.md: Fix ternary attributes.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr113393-1.c: New test.
* gcc.target/riscv/rvv/autovec/pr113393-2.c: New test.
* gcc.target/riscv/rvv/autovec/pr113393-3.c: New test.
Georg-Johann Lay [Thu, 11 Jan 2024 21:11:25 +0000 (22:11 +0100)]
AVR: Support .rodata in Flash for AVR64* and AVR128* Devices.
These devices see a 32 KiB block of their program memory (flash) in
the RAM address space. This can be used to support .rodata in flash
provided Binutils supports PR31124 (Add new emulations which locate
.rodata in flash). This patch does the following:
* configure checks availability of Binutils PR31124.
* Add new command line options -mrodata-in-ram and -mflmap.
While -mflmap is for internal usage (communicating hardware properties
from device-specs to the compiler proper), -mrodata-in-ram is a user-facing
option that allows returning to the current rodata-in-RAM layout.
* Adjust gen-avr-mmcu-specs.cc so that device-specs are generated
that sanity check options, and that translate -m[no-]rodata-in-ram
to its emulation.
* Objects in .rodata don't drag in __do_copy_data.
* Document new options and built-in macros.
PR target/112944
gcc/
* configure.ac [target=avr]: Check availability of emulations
avrxmega2_flmap and avrxmega4_flmap, resulting in new config vars
HAVE_LD_AVR_AVRXMEGA2_FLMAP and HAVE_LD_AVR_AVRXMEGA4_FLMAP.
* configure: Regenerate.
* config.in: Regenerate.
* doc/invoke.texi (AVR Options): Document -mflmap, -mrodata-in-ram,
__AVR_HAVE_FLMAP__, __AVR_RODATA_IN_RAM__.
* config/avr/avr.opt (-mflmap, -mrodata-in-ram): New options.
* config/avr/avr-arch.h (enum avr_device_specific_features):
Add AVR_ISA_FLMAP.
* config/avr/avr-mcus.def (AVR_MCU) [avr64*, avr128*]: Set isa flag
AVR_ISA_FLMAP.
* config/avr/avr.cc (avr_arch_index, avr_has_rodata_p): New vars.
(avr_set_core_architecture): Set avr_arch_index.
(have_avrxmega2_flmap, have_avrxmega4_flmap)
(have_avrxmega3_rodata_in_flash): Set new static const bool according
to configure results.
(avr_rodata_in_flash_p): New function using them.
(avr_asm_init_sections): Let readonly_data_section->unnamed.callback
track avr_need_copy_data_p only if not avr_rodata_in_flash_p().
(avr_asm_named_section): Track avr_has_rodata_p.
(avr_file_end): Emit __do_copy_data also when avr_has_rodata_p
and not avr_rodata_in_flash_p ().
* config/avr/specs.h (CC1_SPEC): Add %(cc1_rodata_in_ram).
(LINK_SPEC): Add %(link_rodata_in_ram).
(LINK_ARCH_SPEC): Remove.
* config/avr/gen-avr-mmcu-specs.cc (have_avrxmega3_rodata_in_flash)
(have_avrxmega2_flmap, have_avrxmega4_flmap): Set new static
const bool according to configure results.
(diagnose_mrodata_in_ram): New function.
(print_mcu): Generate specs with the following changes:
<*cc1_misc, *asm_misc, *link_misc>: New specs so that we don't
need to extend avr/specs.h each time we add a new bell or whistle.
<*cc1_rodata_in_ram, *link_rodata_in_ram>: New specs to diagnose
-m[no-]rodata-in-ram.
<*cpp_rodata_in_ram>: New. Does -D__AVR_RODATA_IN_RAM__=0/1.
<*cpp_mcu>: Add -D__AVR_AVR_FLMAP__ if it applies.
<*cpp>: Add %(cpp_rodata_in_ram).
<*link_arch>: Use emulation avrxmega2_flmap, avrxmega4_flmap as
requested.
<*self_spec>: Add -mflmap or %<mflmap as needed.
gcc/testsuite/
* gcc.target/avr/torture/pr112944-flmap-0.c: New test.
* gcc.target/avr/torture/pr112944-flmap-1.c: New test.
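A minimal sketch of what the new layout means for user code (hedged example;
the macro is documented above and defined to 0/1 by the generated specs):

    /* With the Binutils PR31124 emulations and without -mrodata-in-ram,
       this table stays in flash and no longer drags in __do_copy_data.  */
    const char lookup[4] = { 1, 2, 3, 4 };

    #if __AVR_RODATA_IN_RAM__
    /* Traditional layout: .rodata is copied to RAM at startup.  */
    #endif

    char
    get (unsigned i)
    {
      return lookup[i & 3];
    }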
Jeff Law [Sun, 14 Jan 2024 14:53:49 +0000 (07:53 -0700)]
[committed] Fix MIPS bootstrap
mips bootstraps have been broken for a while. They've been triggering an error
about mutually exclusive equal-tests always being false when building
gencondmd.
This was ultimately tracked down to the ior<mode>3_mips16_asmacro pattern. The
pattern uses the GPR mode iterator, which looks like this:

    /* The MIPS16e V2 instructions are available.  */
    && !TARGET_64BIT)

The way the mode iterator is handled is by adding its condition to the
pattern's condition when we expand copies of the pattern, resulting in this
condition for one of the two generated patterns:
Harald Anlauf [Sat, 13 Jan 2024 21:00:21 +0000 (22:00 +0100)]
Fortran: intrinsic ISHFTC and missing optional argument SIZE [PR67277]
gcc/fortran/ChangeLog:
PR fortran/67277
* trans-intrinsic.cc (gfc_conv_intrinsic_ishftc): Handle optional
dummy argument for SIZE passed to ISHFTC. Set default value to
BIT_SIZE(I) when missing.
gcc/testsuite/ChangeLog:
PR fortran/67277
* gfortran.dg/ishftc_optional_size_1.f90: New test.
hppa64: Fix fmt_f_default_field_width_3.f90 and fmt_g_default_field_width_3.f90
The hppa*64*-*-hpux* target is not included in the set of fortran_real_16
targets because it doesn't have cosl. However, these tests don't need
cosl, etc.
2024-01-13 John David Anglin <danglin@gcc.gnu.org>
Harald Anlauf [Fri, 12 Jan 2024 18:51:11 +0000 (19:51 +0100)]
Fortran: annotations for DO CONCURRENT loops [PR113305]
gcc/fortran/ChangeLog:
PR fortran/113305
* gfortran.h (gfc_loop_annot): New.
(gfc_iterator, gfc_forall_iterator): Use for annotation control.
* array.cc (gfc_copy_iterator): Adjust.
* gfortran.texi: Document annotations IVDEP, UNROLL n, VECTOR,
NOVECTOR as applied to DO CONCURRENT.
* parse.cc (parse_do_block): Parse annotations IVDEP, UNROLL n,
VECTOR, NOVECTOR as applied to DO CONCURRENT. Apply UNROLL only to
first loop control variable.
* trans-stmt.cc (iter_info): Use gfc_loop_annot.
(gfc_trans_simple_do): Adjust.
(gfc_trans_forall_loop): Annotate loops with IVDEP, UNROLL n,
VECTOR, NOVECTOR as needed for DO CONCURRENT.
(gfc_trans_forall_1): Handle loop annotations.
gcc/testsuite/ChangeLog:
PR fortran/113305
* gfortran.dg/do_concurrent_7.f90: New test.
Jonathan Wakely [Wed, 10 Jan 2024 17:29:22 +0000 (17:29 +0000)]
libstdc++: Implement P2255R2 dangling checks for std::tuple [PR108822]
This is the last part of PR libstdc++/108822 implementing P2255R2, which
makes it ill-formed to create a std::tuple that would bind a reference
to a temporary.
The dangling checks are implemented as deleted constructors for C++20
and higher, and as Debug Mode static assertions in the constructor body
for older standards. This is similar to the r13-6084-g916ce577ad109b
changes for std::pair.
As part of this change, I've reimplemented most of std::tuple for C++20,
making use of concepts to replace the enable_if constraints, and using
conditional explicit to avoid duplicating most constructors. We could
use conditional explicit for the C++11 implementation too (with pragmas
to disable the -Wc++17-extensions warnings), but that should be done as
a stage 1 change for GCC 15 rather than now.
The partial specialization for std::tuple<T1, T2> is no longer used for
C++20 (or more precisely, for a C++20 compiler that supports concepts
and conditional explicit). The additional constructors and assignment
operators that take std::pair arguments have been added to the C++20
implementation of the primary template, with sizeof...(_Elements)==2
constraints. This avoids reimplementing all the other constructors in
the std::tuple<T1, T2> partial specialization to use concepts. This way
we avoid four implementations of every constructor and only have three!
(The primary template has an implementation of each constructor for
C++11 and another for C++20, and the tuple<T1,T2> specialization has an
implementation of each for C++11, so that's three for each constructor.)
In order to make the constraints more efficient on the C++20 version of
the default constructor I've also added a variable template for the
__is_implicitly_default_constructible trait, implemented using concepts.
libstdc++-v3/ChangeLog:
PR libstdc++/108822
* include/std/tuple (tuple): Add checks for dangling references.
Reimplement constraints and constant expressions using C++20
features.
* include/std/type_traits [C++20]
(__is_implicitly_default_constructible_v): Define.
(__is_implicitly_default_constructible): Use variable template.
* testsuite/20_util/tuple/dangling_ref.cc: New test.
Jakub Jelinek [Sat, 13 Jan 2024 10:29:15 +0000 (11:29 +0100)]
lower-bitint: Fix up handle_operand_addr INTEGER_CST handling [PR113361]
As the testcase shows, the INTEGER_CST handling in handle_operand_addr
(i.e. what is used when passing address of an integer to a bitint library
routine) wasn't correct. If the minimum precision to represent an
INTEGER_CST is smaller or equal to limb_prec, the code correctly uses
m_limb_type; if the minimum precision of a _BitInt INTEGER_CST is large
enough such that the bitint is middle, large or huge, everything is fine
too. But the code wasn't handling correctly e.g. __int128 constants which
need more than limb_prec bits or _BitInt constants which on the architecture
are considered small (say have DImode limb_mode, TImode abi_limb_mode and
for [65, 128] bits use TImode scalar like the proposed aarch64 patch).
Best would be to use an array of 2/3/4 limbs in that case, but we'd need to
convert the INTEGER_CST to a CONSTRUCTOR in the right endianity etc.,
so the code was using mid_min_prec to enforce a middle _BitInt precision.
Except that mid_min_prec can be 0 and not computed yet, or it doesn't have
to be the smallest middle _BitInt precision, just the smallest so far
encountered. So, on the testcase one possibility was that it used precision
65 from mid_min_prec, even when the INTEGER_CST actually needed larger
minimum precision (96 bits at least), or crashed when mid_min_prec was 0.
The patch fixes it in two hunks: the first makes sure we actually try to
create a BITINT_TYPE for the > limb_prec cases like __int128, and the second,
instead of using mid_min_prec, attempts to increase the mp precision until it
isn't small anymore.
2024-01-13 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113361
* gimple-lower-bitint.cc (bitint_large_huge::handle_operand_addr):
Fix up determination of the type for > limb_prec constants.
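A reduced sketch of the operand shape involved (hypothetical, not the actual
PR113361 testcase; needs a _BitInt-capable GCC with -std=c23):

    /* The 96-bit divisor needs more than one 64-bit limb; division on a
       huge _BitInt becomes a __divmodbitint4 libcall whose operands are
       passed by address, so handle_operand_addr must pick a sane type
       for the constant.  */
    unsigned _BitInt(256)
    div96 (unsigned _BitInt(256) x)
    {
      return x / 0xffffffffffffffffffffffffuwb;
    }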
Jakub Jelinek [Sat, 13 Jan 2024 09:46:51 +0000 (10:46 +0100)]
testsuite: Fix up vect-early-break_100-pr113287.c testcase [PR113287]
When the testcase was being adjusted for unsigned long -> unsigned long long,
two spots using long weren't changed to long long, so the testcase still warns
about UB in shifts.
2024-01-13 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113287
* gcc.dg/vect/vect-early-break_100-pr113287.c: Use long long instead
of long.
The following patch attempts to implement what apparently clang++
implemented for explicit object member function mangling, but nobody
actually proposed in patch form in
https://github.com/itanium-cxx-abi/cxx-abi/issues/148
2024-01-13 Jakub Jelinek <jakub@redhat.com>
gcc/cp/
* mangle.cc (write_nested_name): Mangle explicit object
member functions with H as per
https://github.com/itanium-cxx-abi/cxx-abi/issues/148 non-proposal.
gcc/testsuite/
* g++.dg/abi/mangle79.C: New test.
include/
* demangle.h (enum demangle_component_type): Add
DEMANGLE_COMPONENT_XOBJ_MEMBER_FUNCTION.
libiberty/
* cp-demangle.c (FNQUAL_COMPONENT_CASE): Add case for
DEMANGLE_COMPONENT_XOBJ_MEMBER_FUNCTION.
(d_dump): Handle DEMANGLE_COMPONENT_XOBJ_MEMBER_FUNCTION.
(d_nested_name): Parse H after N in nested name.
(d_count_templates_scopes): Handle
DEMANGLE_COMPONENT_XOBJ_MEMBER_FUNCTION.
(d_print_mod): Likewise.
(d_print_function_type): Likewise.
* testsuite/demangle-expected: Add tests for explicit object
member functions.
Andrew Pinski [Sat, 13 Jan 2024 04:24:34 +0000 (20:24 -0800)]
Add a few testcases for fix missed optimization regressions
Adds a few new testcases for some missed optimization regressions.
The analysis on how each should be optimized is in the testcases
themselves (and in the bug report).
Committed as obvious after running the testsuite to make sure they pass.
* gcc.dg/tree-ssa/ssa-thread-22.c: New test.
* gcc.dg/tree-ssa/vrp-loop-1.c: New test.
* gcc.dg/tree-ssa/vrp-loop-2.c: New test.
* gcc.dg/tree-ssa/vrp-unreachable-1.c: New test.
* gcc.dg/tree-ssa/vrp-unreachable-2.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Patrick Palka [Sat, 13 Jan 2024 03:55:43 +0000 (22:55 -0500)]
libstdc++: Use C++23 deducing this in std::bind_front
This simplifies the operator() of _Bind_front using C++23 deducing
this, allowing us to condense multiple operator() overloads into one.
In passing I think we can remove _Bind_front's defaulted special member
declarations and just let the compiler implicitly generate them for us.
libstdc++-v3/ChangeLog:
* include/std/functional (_Bind_front): Remove =default special
member function declarations.
(_Bind_front::operator()): Implement using C++23 deducing this
when available.
* testsuite/20_util/function_objects/bind_front/111327.cc:
Adjust testcase to expect better errors in C++23 mode.
Patrick Palka [Sat, 13 Jan 2024 03:54:59 +0000 (22:54 -0500)]
libstdc++/ranges: Use perfect forwarding in _Pipe and _Partial ctors
This avoids redundant moves when composing and partially applying range
adaptor objects.
libstdc++-v3/ChangeLog:
* include/std/ranges (views::__adaptor::operator|): Perform
perfect forwarding of arguments.
(views::__adaptor::_RangeAdaptor::operator()): Pass dummy
first argument to _Partial.
(views::__adaptor::_Partial::_Partial): Likewise. Add dummy
first parameter.
(views::__adaptor::_Pipe::_Pipe): Perform perfect forwarding
of arguments.
(to): Pass dummy first argument to _Partial.
Jonathan Wakely [Thu, 11 Jan 2024 15:09:12 +0000 (15:09 +0000)]
libstdc++: Fix non-portable results from 64-bit std::subtract_with_carry_engine [PR107466]
I implemented the resolution of LWG 3809 in r13-4364-ga64775a0edd469 but
it was recently noted in the MSVC STL github repo that the change causes
possible truncation for 64-bit seeds. Whether the truncation occurs (and
to what value) depends on the width of uint_least32_t which is not
portable, so the output of the PRNG for 64-bit seed values is no longer
the same as in C++20, and no longer portable across platforms.
That new issue was filed as LWG 4014. I proposed a new change which
reduces the seed by the LCG's modulus before the conversion to
uint_least32_t. This ensures that 64-bit seed values are consistently
reduced by the modulus before any truncation. This removes the
platform-dependent behaviour and restores the old behaviour for
std::subtract_with_carry_engine specializations using a 64-bit result
type (such as std::ranlux48_base).
libstdc++-v3/ChangeLog:
PR libstdc++/107466
* include/bits/random.tcc (subtract_with_carry_engine::seed):
Implement proposed resolution of LWG 4014.
* testsuite/26_numerics/random/pr60037-neg.cc: Adjust dg-error
line number.
* testsuite/26_numerics/random/subtract_with_carry_engine/cons/lwg3809.cc:
Check for expected result of 64-bit engine with seed that
doesn't fit in 32-bits.
Jason Merrill [Mon, 18 Dec 2023 20:47:10 +0000 (15:47 -0500)]
c++: __class_type_info and modules [PR113038]
Doing a dynamic_cast in both TUs broke because we were declaring a new
__class_type_info in _b that conflicted with the one imported in the global
module from _a. It seems clear to me that any new class declaration in
the global module should merge with an imported definition, but for GCC 14
let's just fix this for the specific case of __class_type_info.
PR c++/113038
gcc/cp/ChangeLog:
* name-lookup.cc (lookup_elaborated_type): Look for bindings
in the global namespace in the ABI namespace.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:45 +0000 (09:23 +0000)]
arm: vld1_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1 intrinsic for the arm port. This patch adds the
_x4 variants of the vld1 intrinsic.
The previous vld1_x4 has been updated to vld1q_x4 to take into
account that it works with 4-word-length types. vld1_x4 is now
only for 2-word-length types.
ISA documents:
https://developer.arm.com/documentation/ddi0487/latest/
gcc/ChangeLog:
* config/arm/arm_neon.h
(vld1_u8_x4, vld1_u16_x4, vld1_u32_x4, vld1_u64_x4): New.
(vld1_s8_x4, vld1_s16_x4, vld1_s32_x4, vld1_s64_x4): New.
(vld1_f16_x4, vld1_f32_x4): New.
(vld1_p8_x4, vld1_p16_x4, vld1_p64_x4): New.
(vld1_bf16_x4): New.
(vld1q_types_x4): Updated to use vld1q_x4
from arm_neon_builtins.def
* config/arm/arm_neon_builtins.def
(vld1_x4): Updated entries.
(vld1q_x4): New entries, but they come from the old vld1_x4
* config/arm/neon.md
(neon_vld1q_x4<mode>): Updated from neon_vld1_x4<mode>.
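For reference, an illustrative use of the new _x4 variant (types and
intrinsic name per ACLE):

    #include <arm_neon.h>

    /* vld1_u8_x4 loads four 64-bit D registers (4 x 8 bytes) from p.  */
    uint8x8x4_t
    load_x4 (const uint8_t *p)
    {
      return vld1_u8_x4 (p);
    }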
Ezra Sitorus [Tue, 2 Jan 2024 09:23:44 +0000 (09:23 +0000)]
arm: vld1_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1 intrinsic for the arm port. This patch adds the
_x3 variants of the vld1 intrinsic.
The previous vld1_x3 has been updated to vld1q_x3 to take into
account that it works with 4-word-length types. vld1_x3 is now
only for 2-word-length types.
ISA documents:
https://developer.arm.com/documentation/ddi0487/latest/
gcc/ChangeLog:
* config/arm/arm_neon.h
(vld1_u8_x3, vld1_u16_x3, vld1_u32_x3, vld1_u64_x3): New.
(vld1_s8_x3, vld1_s16_x3, vld1_s32_x3, vld1_s64_x3): New.
(vld1_f16_x3, vld1_f32_x3): New.
(vld1_p8_x3, vld1_p16_x3, vld1_p64_x3): New.
(vld1_bf16_x3): New.
(vld1q_types_x3): Updated to use vld1q_x3 from
arm_neon_builtins.def
* config/arm/arm_neon_builtins.def
(vld1_x3): Updated entries.
(vld1q_x3): New entries, but they come from the old vld1_x3
* config/arm/neon.md
(neon_vld1q_x3<mode>): Updated from neon_vld1_x3<mode>.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:43 +0000 (09:23 +0000)]
arm: vld1_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1 intrinsic for the arm port. This patch adds the
_x2 variants of the vld1 intrinsic.
The previous vld1_x2 has been updated to vld1q_x2 to take into
account that it works with 4-word-length types. vld1_x2 is now
only for 2-word-length types.
ISA documents:
https://developer.arm.com/documentation/ddi0487/latest/
gcc/ChangeLog:
* config/arm/arm_neon.h
(vld1_u8_x2, vld1_u16_x2, vld1_u32_x2, vld1_u64_x2): New.
(vld1_s8_x2, vld1_s16_x2, vld1_s32_x2, vld1_s64_x2): New.
(vld1_f16_x2, vld1_f32_x2): New.
(vld1_p8_x2, vld1_p16_x2, vld1_p64_x2): New.
(vld1_bf16_x2): New.
(vld1q_types_x2): Updated to use vld1q_x2 from
arm_neon_builtins.def
* config/arm/arm_neon_builtins.def
(vld1_x2): Updated entries.
(vld1q_x2): New entries, but they come from the old vld1_x2
* config/arm/neon.md
(neon_vld1<VMEMX2_q>_x2<VDQX:mode>): Updated from
neon_vld1_x2<mode>.
gcc/testsuite/ChangeLog:
* gcc.target/arm/simd/vld1_base_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_bf16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_fp16_xN_1.c: Add new tests.
* gcc.target/arm/simd/vld1_p64_xN_1.c: Add new tests.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:42 +0000 (09:23 +0000)]
arm: vst1q_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1q intrinsic for the arm port. This patch adds the
_x4 variants of the vst1q intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:41 +0000 (09:23 +0000)]
arm: vst1q_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1q intrinsic for the arm port. This patch adds the
_x3 variants of the vst1q intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:40 +0000 (09:23 +0000)]
arm: vst1q_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1q intrinsic for the arm port. This patch adds the
_x2 variants of the vst1q intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:39 +0000 (09:23 +0000)]
arm: vst1_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1 intrinsic for the arm port. This patch adds the
_x4 variants of the vst1 intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:38 +0000 (09:23 +0000)]
arm: vst1_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1 intrinsic for the arm port. This patch adds the
_x3 variants of the vst1 intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:37 +0000 (09:23 +0000)]
arm: vst1_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vst1 intrinsic for the arm port. This patch adds the
_x2 variants of the vst1 intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:36 +0000 (09:23 +0000)]
arm: vld1q_types_x4 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1q intrinsic for the arm port. This patch adds the
_x4 variants of the vld1q intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:35 +0000 (09:23 +0000)]
arm: vld1q_types_x3 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1q intrinsic for the arm port. This patch adds the
_x3 variants of the vld1q intrinsic.
Ezra Sitorus [Tue, 2 Jan 2024 09:23:34 +0000 (09:23 +0000)]
arm: vld1q_types_x2 ACLE intrinsics
This patch is part of a series of patches implementing the _xN
variants of the vld1q intrinsic for the arm port. This patch adds the
_x2 variants of the vld1q intrinsic.
Jakub Jelinek [Fri, 12 Jan 2024 16:11:49 +0000 (17:11 +0100)]
c: Avoid _BitInt indexes > sizetype in ARRAY_REFs [PR113315]
When build_array_ref doesn't use ARRAY_REF, it casts the index to sizetype
already, performs POINTER_PLUS_EXPR and then dereferences.
While when emitting ARRAY_REF, we try to keep index expression as is in
whatever type it had, which is reasonable e.g. for signed or unsigned types
narrower than sizetype for loop optimizations etc.
But if the index is wider than sizetype, we are unnecessarily computing
bits beyond what is needed. For {,unsigned }__int128 on 64-bit arches
or {,unsigned }long long on 32-bit arches we've been doing that for decades,
so the following patch doesn't propose to change that (might be stage1
material), but for _BitInt at least the _BitInt lowering code doesn't expect
to see large/huge _BitInt in the ARRAY_REF indexes; I was expecting one
would see just casts of those to sizetype.
So, the following patch makes sure that large/huge _BitInt indexes don't
appear in ARRAY_REFs.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
PR c/113315
* c-typeck.cc (build_array_ref): If index has BITINT_TYPE type with
precision larger than sizetype precision, convert it to sizetype.
* gcc.dg/bitint-65.c: New test.
* gcc.dg/bitint-66.c: New test.
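A reduced sketch of the affected shape (hypothetical, not the actual
bitint-65.c/bitint-66.c tests; needs a _BitInt-capable GCC with -std=c23):

    int a[16];

    /* The 256-bit index is now converted to sizetype before the
       ARRAY_REF is formed, so bitint lowering never sees it there.  */
    int
    get (unsigned _BitInt(256) i)
    {
      return a[i];
    }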
Tamar Christina [Fri, 12 Jan 2024 15:26:29 +0000 (15:26 +0000)]
middle-end: fill in reduction PHI for all alt exits [PR113178]
When we have a loop with more than 2 exits and a reduction I forgot to fill in
the PHI value for all alternate exits.
All alternate exits use the same PHI value so we should loop over the new
PHI elements and copy the value across since we call the reduction calculation
code only once for all exits. This was normally covered up by earlier parts of
the compiler rejecting loops incorrectly (which has been fixed now).
Note that while I can use the loop in all cases, the reason I separated out the
main and alt exit is so that if you pass the wrong edge the macro will assert.
gcc/ChangeLog:
PR tree-optimization/113178
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Fill in all
alternate exits.
gcc/testsuite/ChangeLog:
PR tree-optimization/113178
* gcc.dg/vect/vect-early-break_101-pr113178.c: New test.
* gcc.dg/vect/vect-early-break_102-pr113178.c: New test.
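A sketch of the loop shape involved (illustrative only, not the committed
vect-early-break_101/102 tests):

    /* A reduction (s) combined with more than two exits: two early
       breaks plus the normal latch exit.  */
    int
    sum_until (int *a, int n)
    {
      int s = 0;
      for (int i = 0; i < n; i++)
        {
          if (a[i] == 42)
            break;
          if (a[i] < 0)
            break;
          s += a[i];
        }
      return s;
    }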
Tamar Christina [Fri, 12 Jan 2024 15:25:58 +0000 (15:25 +0000)]
middle-end: thread through existing LCSSA variable for alternative exits too [PR113237]
Building on top of the previous patch, and similar to when we have a single
exit: if we have a case where all exits are considered early exits and there
are existing non-virtual PHIs, then in order to maintain LCSSA we have to use
the existing PHI variables. We can't simply clear them and just rebuild them,
because the order of the PHIs in the main exit must match the original exit
for when we add the skip_epilog guard.
But the infrastructure is already in place to maintain them, we just have to use
the right value.
gcc/ChangeLog:
PR tree-optimization/113237
* tree-vect-loop-manip.cc (slpeel_tree_duplicate_loop_to_edge_cfg): Use
existing LCSSA variable for exit when all exits are early break.
gcc/testsuite/ChangeLog:
PR tree-optimization/113237
* gcc.dg/vect/vect-early-break_98-pr113237.c: New test.
Tamar Christina [Fri, 12 Jan 2024 15:25:34 +0000 (15:25 +0000)]
middle-end: maintain LCSSA form when peeled vector iterations have virtual operands
This patch fixes several interconnected issues.
1. When picking an exit we wanted to check that niter_desc.may_be_zero is not
   true, i.e. we want to pick an exit which we know will iterate at least once.
   However niter_desc.may_be_zero is not a boolean; it is a tree that encodes
   a boolean value. !niter_desc.may_be_zero just checks whether we have some
   information, not what the information is. This led us to pick a more
   difficult-to-vectorize exit more often than we should.
2. Because we had this bug, we used to pick an alternative exit much more
   often, which exposed one issue: when the loop accesses memory and we
   "invert it", we would corrupt the VUSE chain. This is because on a peeled
   vector iteration every exit restarts the loop (i.e. they're all early),
   BUT since we may have performed a store, the VUSE would need to be updated.
   This version maintains virtual PHIs correctly in these cases. Note that we
   can't simply remove all of them and recreate them, because we need the PHI
   nodes still in the right order for if skip_vector.
3. Since we're moving the stores to a safe location I don't think we actually
need to analyze whether the store is in range of the memref, because if we
ever get there, we know that the loads must be in range, and if the loads are
in range and we get to the store we know the early breaks were not taken and
so the scalar loop would have done the VF stores too.
4. Instead of searching for where to move stores to, they should always be in
   the exit belonging to the latch. We can only ever delay stores, and even if
   we pick a different exit than the latch one as the main one, effects still
   happen in program order when vectorized. If we don't move the stores to the
   latch exit but instead to wherever we pick as the "main" exit, then we can
   perform incorrect memory accesses (luckily these are trapped by verify_ssa).
5. We only used to analyze loads inside the same BB as an early break, and also
we'd never analyze the ones inside the block where we'd be moving memory
references to. This is obviously bogus and to fix it this patch splits apart
the two constraints. We first validate that all load memory references are
in bounds and only after that do we perform the alias checks for the writes.
This makes the code simpler to understand and more trivially correct.
gcc/ChangeLog:
PR tree-optimization/113137
PR tree-optimization/113136
PR tree-optimization/113172
PR tree-optimization/113178
* tree-vect-loop-manip.cc (slpeel_tree_duplicate_loop_to_edge_cfg):
Maintain PHIs on inverted loops.
(vect_do_peeling): Maintain virtual PHIs on inverted loops.
* tree-vect-loop.cc (vec_init_loop_exit_info): Pick the exit closest
to the latch.
(vect_create_loop_vinfo): Record all conds instead of only alt ones.
gcc/testsuite/ChangeLog:
PR tree-optimization/113137
PR tree-optimization/113136
PR tree-optimization/113172
PR tree-optimization/113178
* g++.dg/vect/vect-early-break_4-pr113137.cc: New test.
* g++.dg/vect/vect-early-break_5-pr113137.cc: New test.
* gcc.dg/vect/vect-early-break_95-pr113137.c: New test.
* gcc.dg/vect/vect-early-break_96-pr113136.c: New test.
* gcc.dg/vect/vect-early-break_97-pr113172.c: New test.
Tamar Christina [Fri, 12 Jan 2024 15:24:49 +0000 (15:24 +0000)]
middle-end: make memory analysis for early break more deterministic [PR113135]
Instead of searching for where to move stores to, they should always be in
the exit belonging to the latch. We can only ever delay stores, and even if we
pick a different exit than the latch one as the main one, effects still
happen in program order when vectorized. If we don't move the stores to the
latch exit but instead to wherever we pick as the "main" exit, then we can
perform incorrect memory accesses (luckily these are trapped by verify_ssa).
We used to iterate over the conds and check the loads and stores inside them.
However this relies on the conds being ordered in program order. Additionally
if there is a basic block between two conds we would not have analyzed it.
Instead this now walks from the preds of the destination basic block up to the
loop header and analyzes every block along the way. As a later optimization we
could stop as soon as we've seen all the BBs we have conds for. For now the
header will always contain the first cond, but this can change when we support
arbitrary control flow.
Jason Merrill [Thu, 11 Jan 2024 04:18:23 +0000 (23:18 -0500)]
c++: cand_parms_match and reversed candidates
When considering whether the candidate parameters match, according to the
language we're considering the synthesized reversed candidate, so we should
compare the parameters in swapped order. In this situation it doesn't make
sense to consider whether object parameters correspond, since we're
comparing an object parameter to a non-object parameter, so I generalized
xobj_iobj_parameters_correspond accordingly.
As I refine cand_parms_match, more behaviors need to differ between its
original use to compare the original templates for two candidates, and the
later use to decide whether to compare constraints. So now there's a
parameter to select between the semantics.
gcc/cp/ChangeLog:
* call.cc (reversed_match): New.
(enum class pmatch): New enum.
(cand_parms_match): Add match_kind parm.
(object_parms_correspond): Add fn parms.
(joust): Adjust.
* class.cc (xobj_iobj_parameters_correspond): Rename to...
(iobj_parm_corresponds_to): ...this. Take the other
type instead of a second function.
(object_parms_correspond): Adjust.
* cp-tree.h (iobj_parm_corresponds_to): Declare.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/concepts-memfun4.C: Change expected
reversed handling.
Iain Sandoe [Sat, 6 Jan 2024 19:21:40 +0000 (19:21 +0000)]
Objective-C, Darwin: Fix a regression in handling bad receivers.
This is seen on 32b hosts with a 64b multilib, and is an ICE when
the build has checking enabled. The fix is to exit the routine
early if the sender or receiver are already error_mark_node.
gcc/objc/ChangeLog:
* objc-next-runtime-abi-02.cc
(build_v2_objc_method_fixup_call): Early exit for cases
where the sender or receiver are known to be in error.
Jakub Jelinek [Fri, 12 Jan 2024 12:58:07 +0000 (13:58 +0100)]
varasm: Fix up process_pending_assemble_externals [PR113182]
John reported that on HP-UX we no longer emit needed external libcalls.
The problem is that we didn't strip name encoding when looking up
the identifiers in assemble_external_libcall and
process_pending_assemble_externals, while
assemble_name_resolve does that:
  const char *real_name = targetm.strip_name_encoding (name);
  tree id = maybe_get_identifier (real_name);
  if (id)
    {
      ...
      mark_referenced (id);
The intention is that assemble_external_libcall ensures the IDENTIFIER
exists for the external libcall, then for actually emitted calls
assemble_name_resolve sees those IDENTIFIERS and sets TREE_SYMBOL_REFERENCED
on them and finally process_pending_assemble_externals looks the
IDENTIFIER up again and checks its TREE_SYMBOL_REFERENCED.
But without the strip_name_encoding call, they can look up different
identifiers and those are likely never used.
In the PR, John was discussing whether get_identifier or
maybe_get_identifier should be used, I believe in assemble_external_libcall
we definitely want to use get_identifier, we need an IDENTIFIER allocated
so that it can be actually tracked, in process_pending_assemble_externals
it doesn't matter, the IDENTIFIER should be already created.
2024-01-12 John David Anglin <danglin@gcc.gnu.org>
Jakub Jelinek <jakub@redhat.com>
PR middle-end/113182
* varasm.cc (process_pending_assemble_externals,
assemble_external_libcall): Use targetm.strip_name_encoding
before calling get_identifier.
g:f26f92b534f9 implemented unsigned extensions using ZIPs rather than
UXTL{,2}, since the former has a higher throughput than the latter on
many cores. The optimisation worked by lowering directly to ZIP during
expand, so that the zero input could be hoisted and shared.
However, changing to ZIP means that zero extensions no longer benefit
from some existing combine patterns. The patch included new patterns
for UADDW and USUBW, but the PR shows that other patterns were affected
as well.
This patch instead introduces the ZIPs during a pre-reload split
and forcibly hoists the zero move to the outermost scope. This has
the disadvantage of executing the move even for a shrink-wrapped
function, which I suppose could be a problem if it causes a kernel
to trap and enable Advanced SIMD unnecessarily. In other circumstances,
an unused move shouldn't affect things much.
Also, the RA should be able to rematerialise the move at an
appropriate point if necessary, such as if there is an intervening
call.
In https://gcc.gnu.org/pipermail/gcc-patches/2024-January/641948.html
I'd then tried to allow a zero to be recombined back into a solitary
ZIP. However, that relied on late-combine, which didn't make it into
GCC 14. This version instead restricts the split to cases where the
UXTL executes more frequently than the entry block (which is where we
plan to put the zero).
Also, the original optimisation contained a big-endian correction
that I don't think is needed/correct. Even on big-endian targets,
we want the ZIP to take the low half of an element from the input
vector and the high half from the zero vector. And the patterns
map directly to the underlying Advanced SIMD instructions: the use
of unspecs means that there's no need to adjust for the difference
between GCC and Arm lane numbering.
gcc/
PR target/113196
* config/aarch64/aarch64.h (machine_function::advsimd_zero_insn):
New member variable.
* config/aarch64/aarch64-protos.h (aarch64_split_simd_shift_p):
Declare.
* config/aarch64/iterators.md (Vnarrowq2): New mode attribute.
* config/aarch64/aarch64-simd.md
(vec_unpacku_hi_<mode>, vec_unpacks_hi_<mode>): Recombine into...
(vec_unpack<su>_hi_<mode>): ...this. Move the generation of
zip2 for zero-extends to...
(aarch64_simd_vec_unpack<su>_hi_<mode>): ...a split of this
instruction. Fix big-endian handling.
(vec_unpacku_lo_<mode>, vec_unpacks_lo_<mode>): Recombine into...
(vec_unpack<su>_lo_<mode>): ...this. Move the generation of
zip1 for zero-extends to...
(<optab><Vnarrowq><mode>2): ...a split of this instruction.
Fix big-endian handling.
(*aarch64_zip1_uxtl): New pattern.
(aarch64_usubw<mode>_lo_zip, aarch64_uaddw<mode>_lo_zip): Delete
(aarch64_usubw<mode>_hi_zip, aarch64_uaddw<mode>_hi_zip): Likewise.
* config/aarch64/aarch64.cc (aarch64_get_shareable_reg): New function.
(aarch64_gen_shareable_zero): Use it.
(aarch64_split_simd_shift_p): New function.
gcc/testsuite/
PR target/113196
* gcc.target/aarch64/pr113196.c: New test.
* gcc.target/aarch64/simd/vmovl_high_1.c: Remove double include.
Expect uxtl2 rather than zip2.
* gcc.target/aarch64/vect_mixed_sizes_8.c: Expect zip1 rather
than uxtl.
* gcc.target/aarch64/vect_mixed_sizes_9.c: Likewise.
* gcc.target/aarch64/vect_mixed_sizes_10.c: Likewise.
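A source-level view of the kind of code affected (illustrative; whether ZIP1
or UXTL is emitted depends on the frequency check described above):

    #include <arm_neon.h>

    /* Zero extension of the low half of a vector: previously UXTL,
       now expected to become ZIP1 with a shared zero register.  */
    uint16x8_t
    widen_lo (uint8x8_t x)
    {
      return vmovl_u8 (x);
    }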
function.cc emits a NOTE_FUNCTION_BEG after all arguments have
been copied to pseudos. It then records this note in parm_birth_insn.
Various other pieces of code use this insn as a convenient place to
insert things at the start of the function.
However, cfgexpand later changes parm_birth_insn as follows:
  /* If we emitted any instructions for setting up the variables,
     emit them before the FUNCTION_START note.  */
  if (var_seq)
    {
      emit_insn_before (var_seq, parm_birth_insn);
      /* In expand_function_end we'll insert the alloca save/restore
         before parm_birth_insn.  We've just inserted an alloca call.
         Adjust the pointer to match.  */
      parm_birth_insn = var_seq;
    }
But the FUNCTION_BEG note is still useful for things that aren't
sensitive to stack allocation, and it has the advantage that
(unlike the var_seq above) it is never deleted or combined.
This patch adds a separate variable to track it.
gcc/
* emit-rtl.h (rtl_data::x_function_beg_note): New member variable.
(function_beg_insn): New macro.
* function.cc (expand_function_start): Initialize function_beg_insn.
aarch64: Use a global map to detect duplicated overloads [PR112989]
As explained in the covering note to the previous patch,
the fact that aarch64-sve-* is now used for multiple header
files means that function_builder::add_overloaded_function
now needs to use a global map to detect duplicated overload
functions, instead of the member variable that it used previously.
gcc/
PR target/112989
* config/aarch64/aarch64-sve-builtins.h
(function_builder::m_overload_names): Replace with...
* config/aarch64/aarch64-sve-builtins.cc (overload_names): ...this
new global.
(add_overloaded_function): Update accordingly, using get_identifier
to get a GGC-friendly record of the name.
aarch64: Use a separate group for SME builtins [PR112989]
The PR shows that we were registering the same overloaded SVE
builtins twice. This was supposed to be prevented by
function_builder::add_overloaded_function, which uses a map
to detect whether a function of the same name has already been
registered. add_overloaded_function then had some asserts to
check for consistency.
However, the map that add_overloaded_function uses was a member of
function_builder itself. That made sense when there was just one
header file, arm_sve.h, since it meant that the memory could be
reclaimed once arm_sve.h had been processed. But now we have three
header files, and in principle, it's possible for arm_sme.h to include
overloads of things that arm_sve.h also defines. We therefore need
to use a global map instead.
However, doing that meant that the consistency checks in
add_overloaded_function fired as expected, which showed some
latent issues. This preliminary patch deals with those by adding
AARCH64_FL_SME to things that require AARCH64_FL_SME2.
This inconsistency led to another problem: functions were selected
for arm_sme.h over arm_sve.h based on whether they had AARCH64_FL_SME.
So some SME2-only things were actually defined in arm_sve.h, whereas
similar SME things were defined in arm_sme.h.
Choosing based on flags was an early get-started crutch that I forgot
to clean up later :( This patch goes for the more direct approach of
having a separate table of SME builtins, as for arm_neon_sve_bridge.h.
aarch64-sve-builtins-sve2.def contains several intrinsics that are
currently SME-only but that operate entirely on vector registers.
Many of these will be extended to SVE2.1 once SVE2.1 support is added,
so the patch front-loads that by keeping the current division between
aarch64-sve-builtins-sve2.def (whose functions now go in arm_sve.h)
and aarch64-sve-builtins-sme.def (whose functions now go in arm_sme.h).
gcc/
PR target/112989
* config/aarch64/aarch64-sve-builtins.def: Don't include
aarch64-sve-builtins-sme.def.
(DEF_SME_ZA_FUNCTION_GS, DEF_SME_ZA_FUNCTION): Move to...
* config/aarch64/aarch64-sve-builtins-sme.def: ...here.
(DEF_SME_FUNCTION): New macro. Use it and DEF_SME_FUNCTION_GS
instead of DEF_SVE_*. Add AARCH64_FL_SME to anything that
requires AARCH64_FL_SME2.
* config/aarch64/aarch64-sve-builtins-sve2.def: Make same
AARCH64_FL_SME adjustment here.
* config/aarch64/aarch64-sve-builtins.cc (function_groups): Don't
include SME intrinsics.
(sme_function_groups): New array.
(handle_arm_sve_h): Remove check for AARCH64_FL_SME.
(handle_arm_sme_h): Use sme_function_groups instead of function_groups.
Jakub Jelinek [Fri, 12 Jan 2024 10:23:27 +0000 (11:23 +0100)]
lower-bitint: Fix up handling of unsigned INTEGER_CSTs operands with lots of 1s in the upper bits [PR113334]
For INTEGER_CST operands, the code decides if it should emit the whole
INTEGER_CST into memory, or if there are enough upper bits either all 0s
or all 1s to warrant an optimization, where we use memory for lower limbs
or even just an INTEGER_CST for least significant limb and fill in the
rest of limbs with 0s or 1s. Unfortunately when not using
bitint_min_cst_precision, the code was using tree_int_cst_sgn (op) < 0
to determine whether to fill in the upper bits with 1s or 0s. That is
incorrect for TYPE_UNSIGNED INTEGER_CSTs which have higher limbs full of
ones, we really want to check here whether the most significant bit is
set or clear.
Fixed thusly.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113334
* gimple-lower-bitint.cc (bitint_large_huge::handle_operand): Use
wi::neg_p (wi::to_wide (op)) instead of tree_int_cst_sgn (op) < 0
to determine if number should be extended by all ones rather than zero
extended.
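A reduced sketch of the problematic constant shape (hypothetical, not the
actual PR113334 testcase; needs a _BitInt-capable GCC with -std=c23):

    /* An unsigned constant whose upper limbs are all ones: the sign of
       the constant (never negative for unsigned) must not be what
       decides between extending the limbs with ones or zeros.  */
    unsigned _BitInt(256)
    add_wide (unsigned _BitInt(256) x)
    {
      return x + 0xffffffffffffffffffffffffffffffff00000000000000000000000000000001uwb;
    }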
Jakub Jelinek [Fri, 12 Jan 2024 10:22:04 +0000 (11:22 +0100)]
sra: Punt for too large _BitInt accesses [PR113330]
This is the case I was talking about in
https://gcc.gnu.org/pipermail/gcc-patches/2024-January/642423.html
and Zdenek kindly found a testcase for it.
We can only create BITINT_TYPE with precision at most 65535, not 65536,
so need to punt if we'd want to create it.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113330
* tree-sra.cc (create_access): Punt for BITINT_TYPE accesses with
too large size.
Jakub Jelinek [Fri, 12 Jan 2024 10:20:40 +0000 (11:20 +0100)]
lower-bitint: Fix a typo in a condition [PR113323]
The following testcase revealed a typo in condition, as the comment
says the intent is
  /* If lhs of stmt is large/huge _BitInt SSA_NAME not in m_names
     it means it will be handled in a loop or straight line code
     at the location of its (ultimate) immediate use, so for
     vop checking purposes check these only at the ultimate
     immediate use.  */
but the condition was using != BITINT_TYPE rather than == BITINT_TYPE,
so e.g. it used bitint_precision_kind on non-BITINT_TYPEs (e.g. on vector
types it will crash because TYPE_PRECISION means something different there,
or on say INTEGER_TYPEs the precision will never be large enough to be >=
bitint_prec_large).
The following patch fixes that.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113323
* gimple-lower-bitint.cc (bitint_dom_walker::before_dom_children): Fix
check for lhs being large/huge _BitInt not in m_names.
Jakub Jelinek [Fri, 12 Jan 2024 10:19:08 +0000 (11:19 +0100)]
lower-bitint: Fix up handling of uninitialized large/huge _BitInt call arguments [PR113316]
The code to assign large/huge _BitInt SSA_NAMEs to partitions intentionally
ignores uninitialized SSA_NAMEs:
  /* Also ignore uninitialized uses.  */
  if (SSA_NAME_IS_DEFAULT_DEF (s)
      && (!SSA_NAME_VAR (s) || VAR_P (SSA_NAME_VAR (s))))
    continue;
because there is no need to store them into memory; all we need is, when
trying to extract some limb from them, to use an uninitialized SSA_NAME for
the limb.
The following testcase shows this is a problem for call arguments though:
for those we need to create replacement SSA_NAMEs which are loaded from
the underlying variable. For uninitialized SSA_NAMEs, because we didn't
create an underlying variable for them, var_to_partition doesn't work; the
following patch handles it by just creating an uninitialized replacement
SSA_NAME.
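A reduced sketch (hypothetical, not the actual PR113316 testcase; needs a
_BitInt-capable GCC with -std=c23):

    void bar (_BitInt(512));

    /* x has no underlying variable, so passing it as a call argument
       needs a freshly created uninitialized replacement SSA_NAME.  */
    void
    foo (void)
    {
      _BitInt(512) x;  /* intentionally uninitialized */
      bar (x);
    }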
Jakub Jelinek [Fri, 12 Jan 2024 10:16:41 +0000 (11:16 +0100)]
lower-bitint: Fix handling of casts on arches with abi_limb_mode != limb_mode
> > This patch is still work in progress, but posting to show failure with
> > bitint-7 test where handle_stmt called from lower_mergeable_stmt ICE's
> > because the idx (3) is out of range for the __BitInt(135) with a limb_prec
> > of 64.
>
> I can reproduce it, will debug it momentarily.
So, the problem was that in 2 spots I was comparing TYPE_SIZE of large/huge
BITINT_TYPEs to determine if it can be handled cheaply.
On x86_64 with limb_mode == abi_limb_mode (both DImode) that works fine,
if TYPE_SIZE is equal, it means it has the same number of limbs.
But on aarch64 TYPE_SIZE of say _BitInt(135) and _BitInt(193) is the same,
both are 256-bit storage, but because DImode is used as limb_mode, the
former actually needs just 3 limbs, while the latter needs 4 limbs.
And limb_access_type was asserting that we don't try to access 4th limb
on types which actually have a precision which needs just 3 limbs.
The following patch should fix that.
Note, for the info.extended targets (currently none, but I think arm 32-bit
in the ABI is meant like that), we'll need to do something different,
because the upper bits aren't just padding and should be zero/sign extended,
so if we say have limb_mode SImode, abi_limb_mode DImode, we'll need to
treat _BitInt(135) not as 5 SImode limbs, but 6. For !info.extended targets
I think treating _BitInt(135) as 3 DImode limbs rather than 4 is fine.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
* gimple-lower-bitint.cc (mergeable_op): Instead of comparing
TYPE_SIZE (t) of large/huge BITINT_TYPEs, compare
CEIL (TYPE_PRECISION (t), limb_prec).
(bitint_large_huge::handle_cast): Likewise.
Guillaume Gomez [Wed, 10 Jan 2024 14:23:37 +0000 (15:23 +0100)]
libgccjit: Add support for function attributes and variable attributes.
gcc/jit/ChangeLog:
* dummy-frontend.cc (handle_alias_attribute): New function.
(handle_always_inline_attribute): New function.
(handle_cold_attribute): New function.
(handle_fnspec_attribute): New function.
(handle_format_arg_attribute): New function.
(handle_format_attribute): New function.
(handle_noinline_attribute): New function.
(handle_target_attribute): New function.
(handle_used_attribute): New function.
(handle_visibility_attribute): New function.
(handle_weak_attribute): New function.
(handle_alias_ifunc_attribute): New function.
* jit-playback.cc (fn_attribute_to_string): New function.
(variable_attribute_to_string): New function.
(global_new_decl): Add attributes support.
(set_variable_attribute): New function.
(new_global): Add attributes support.
(new_global_initialized): Add attributes support.
(new_local): Add attributes support.
* jit-playback.h (fn_attribute_to_string): New function.
(set_variable_attribute): New function.
* jit-recording.cc (recording::lvalue::add_attribute): New function.
(recording::function::function): New function.
(recording::function::write_to_dump): Add attributes support.
(recording::function::add_attribute): New function.
(recording::function::add_string_attribute): New function.
(recording::function::add_integer_array_attribute): New function.
(recording::global::replay_into): Add attributes support.
(recording::local::replay_into): Add attributes support.
* jit-recording.h: Add attributes support.
* libgccjit.cc (gcc_jit_function_add_attribute): New function.
(gcc_jit_function_add_string_attribute): New function.
(gcc_jit_function_add_integer_array_attribute): New function.
(gcc_jit_lvalue_add_attribute): New function.
* libgccjit.h (enum gcc_jit_fn_attribute): New enum.
(gcc_jit_function_add_attribute): New function.
(gcc_jit_function_add_string_attribute): New function.
(gcc_jit_function_add_integer_array_attribute): New function.
(enum gcc_jit_variable_attribute): New enum.
(gcc_jit_lvalue_add_string_attribute): New function.
* libgccjit.map: Declare new functions.
gcc/testsuite/ChangeLog:
* jit.dg/all-non-failing-tests.h: Add new attributes tests.
* jit.dg/jit.exp: Add `jit-verify-assembler-output-not` test command.
* jit.dg/test-restrict-attribute.c: New test.
* jit.dg/test-alias-attribute.c: New test.
* jit.dg/test-always_inline-attribute.c: New test.
* jit.dg/test-cold-attribute.c: New test.
* jit.dg/test-const-attribute.c: New test.
* jit.dg/test-noinline-attribute.c: New test.
* jit.dg/test-nonnull-attribute.c: New test.
* jit.dg/test-pure-attribute.c: New test.
* jit.dg/test-used-attribute.c: New test.
* jit.dg/test-variable-attribute.c: New test.
* jit.dg/test-weak-attribute.c: New test.
gcc/jit/ChangeLog:
* docs/topics/compatibility.rst: Add documentation for LIBGCCJIT_ABI_26.
* docs/topics/functions.rst: Add documentation for new functions.
* docs/topics/expressions.rst: Add documentation for new functions.
Co-authored-by: Antoni Boucher <bouanto@zoho.com>
Signed-off-by: Guillaume Gomez <guillaume1.gomez@gmail.com>
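A short usage sketch (hedged: the entry points come from the ChangeLog above,
but the exact enum constant spelling is an assumption inferred from the new
test names such as test-cold-attribute.c):

    #include <libgccjit.h>

    /* Mark a JIT-compiled function cold; GCC_JIT_FN_ATTRIBUTE_COLD is an
       assumed spelling of an enum gcc_jit_fn_attribute value.  */
    void
    mark_cold (gcc_jit_function *fn)
    {
      gcc_jit_function_add_attribute (fn, GCC_JIT_FN_ATTRIBUTE_COLD);
    }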
rs6000: Fix ASAN linker errors for Power ELF V1 ABI [PR113284]
rs6000_elf_declare_function_name () outputs Power ELF V1 ABI function
entry labels without using ASM_OUTPUT_FUNCTION_LABEL (). As a result,
.LASANPC labels are not emitted, causing linker errors.
In theory, it is possible to reuse ASM_OUTPUT_FUNCTION_LABEL () by
changing rs6000_output_function_entry () to generate label names
without outputting them, but this would be quite a large change.
Instead, factor out the .LASANPC emitting code from
ASM_OUTPUT_FUNCTION_LABEL () and call it manually.
Fixes: c659dd8bfb55 ("Implement ASM_DECLARE_FUNCTION_NAME using ASM_OUTPUT_FUNCTION_LABEL")
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
gcc/ChangeLog:
PR sanitizer/113284
* config/rs6000/rs6000.cc (rs6000_elf_declare_function_name):
Use assemble_function_label_final () for Power ELF V1 ABI.
* output.h (assemble_function_label_final): New function.
* varasm.cc (assemble_function_label_raw): Use
assemble_function_label_final ().
(assemble_function_label_final): New function.
Jakub Jelinek [Fri, 12 Jan 2024 09:47:20 +0000 (10:47 +0100)]
testsuite: Fix up preprocessor conditions in bitint-31.c test
Andre reported on IRC that this test has weird preprocessor conditions;
obviously the intent was to test whether the corresponding __*_MANT_DIG__
is equal to the expected value, like earlier in the function definitions,
but somehow I've ended up with a comma expression instead, which was
always true.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
* gcc.dg/bitint-31.c: Fix up #if conditions checking whether
__*_MANT_DIG__ is equal to a particular precision.
Jonathan Wakely [Wed, 10 Jan 2024 20:54:11 +0000 (20:54 +0000)]
libstdc++: Fix std::runtime_format deviations from the spec [PR113320]
I seem to have implemented this feature based on the P2918R0 revision,
not the final P2918R2 one that was approved for C++26. This commit fixes
it.
The runtime-format-string type should not have a publicly accessible
data member, so add a constructor and make it a friend of
basic_format_string. It should also be non-copyable, so that it can only
be constructed from a prvalue via temporary materialization. Change the
basic_format_string constructor parameter to pass by value. Also add
noexcept to the constructors and runtime_format generator functions.
libstdc++-v3/ChangeLog:
PR libstdc++/113320
* include/std/format (__format::_Runtime_format_string): Add
constructor and disable copy operations.
(basic_format_string(_Runtime_format_string)): Add noexcept and
take parameter by value not rvalue reference.
(runtime_format): Add noexcept.
* testsuite/std/format/runtime_format.cc: Check noexcept. Check
that construction is only possible from prvalues, not xvalues.
Reviewed-by: Daniel Krügler <daniel.kruegler@gmail.com>
This was approved for C++23 at the June 2021 virtual meeting.
This allows more efficient construction of std::pair members when {} is
used as a constructor argument. The perfect forwarding constructor can
be used, so that the member variables are constructed from forwarded
arguments instead of being copy constructed from temporaries.
libstdc++-v3/ChangeLog:
PR libstdc++/105505
* include/bits/stl_pair.h (pair::pair(U1&&, U2&&)) [C++23]: Add
default template arguments, as per P1951R1.
* testsuite/20_util/pair/cons/default_tmpl_args.cc: New test.
Jakub Jelinek [Fri, 12 Jan 2024 09:10:20 +0000 (10:10 +0100)]
libgcc: Use may_alias attribute in bitint handlers
As discussed on IRC, the following patch uses the may_alias attribute, so that
on targets like aarch64, where abi_limb_mode != limb_mode, the library
accesses the limbs (half limbs of the ABI) in the arrays with a conservative
alias set.
2024-01-12 Jakub Jelinek <jakub@redhat.com>
* libgcc2.h (UBILtype): New typedef with may_alias attribute.
(__mulbitint3, __divmodbitint4): Use UBILtype * instead of
UWtype * and const UBILtype * instead of const UWtype *.
* libgcc2.c (bitint_reduce_prec, bitint_mul_1, bitint_addmul_1,
__mulbitint3, bitint_negate, bitint_submul_1, __divmodbitint4):
Likewise.
* soft-fp/bitint.h (UBILtype): Change define into a typedef with
may_alias attribute.
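The core idea, as a minimal sketch (the real UBILtype is defined in terms of
libgcc's UWtype configury, not a fixed C type):

    /* Accesses through this typedef get a conservative alias set, so
       limb loads/stores may alias the caller's storage.  */
    typedef unsigned long long ubil_t __attribute__ ((__may_alias__));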
Sandra Loosemore [Thu, 11 Jan 2024 21:12:56 +0000 (21:12 +0000)]
libgcc, nios2: Fix exception handling on nios2 with -fpic
Exception handling on nios2-linux-gnu with -fpic has been broken since
revision 790854ea7670f11c14d431c102a49181d2915965, "Use _dl_find_object
in _Unwind_Find_FDE". For whatever reason, this doesn't work on nios2.
Nios2 uses the GOT address as the base for DW_EH_PE_datarel
relocations in PIC; see my previous fix to make this work, revision 2d33dcfe9f0494c9b56a8d704c3d27c5a4329ebc, "Support for GOT-relative
DW_EH_PE_datarel encoding". So this may be a horrible bug in the ABI,
or in my interpretation of it, or just in glibc's implementation of
_dl_find_object for this target, but there's existing code out there
that does things this way; and realistically, nobody is going to
re-engineer this now that the vendor has EOL'ed the nios2
architecture. So, just skip over the code trying to use
_dl_find_object on this target and fall back to the way that works.
I plan to backport this patch to the GCC 12 and GCC 13 branches as well.
libgcc/ChangeLog
* unwind-dw2-fde-dip.c (_Unwind_Find_FDE): Do not try to use
_dl_find_object on nios2; it doesn't work.
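A hedged sketch of the shape of the fix (the guard macros and the helper
name below are illustrative, not the exact unwind-dw2-fde-dip.c code):
#if defined (DLFO_STRUCT_HAS_EH_FRAME) && !defined (__nios2__)
  struct dl_find_object dlfo;
  /* Fast path via the dynamic linker; skipped on nios2, where
     DW_EH_PE_datarel is GOT-relative and this lookup misbehaves.  */
  if (_dl_find_object ((void *) pc, &dlfo) == 0 && dlfo.dlfo_eh_frame != NULL)
    return find_fde_tail (pc, &dlfo, bases);   /* helper name illustrative */
#endif
  /* Otherwise fall back to scanning the registered unwind tables.  */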
liuhongt [Mon, 8 Jan 2024 06:04:38 +0000 (14:04 +0800)]
Update documents for fcf-protection=
After r14-2692-g1c6231c05bdcca, the option is defined as EnumSet and
-fcf-protection=branch won't unset any other bits, since they're in
different groups. So to override -fcf-protection, an explicit
-fcf-protection=none needs to be added first, followed by the desired
-fcf-protection=XXX.
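For example (foo.c is a placeholder):
gcc -c foo.c -fcf-protection=full -fcf-protection=branch
    (return protection stays enabled: =branch no longer unsets other groups)
gcc -c foo.c -fcf-protection=full -fcf-protection=none -fcf-protection=branch
    (=none clears everything first, so only branch protection remains)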
Because reg:SI 143 is neither dead nor set in insn 78, no replacement merge
will be performed for the insn sequence. We adjusted the add template to
eliminate redundant sign extensions during the expand pass.
Adjusted based on upstream comments:
https://gcc.gnu.org/pipermail/gcc-patches/2024-January/641988.html
Julian Brown [Mon, 15 Nov 2021 10:23:49 +0000 (02:23 -0800)]
OpenMP: lvalue parsing for map/to/from clauses (C)
This patch adds support for parsing general lvalues ("locator list item
types") for OpenMP "map", "to" and "from" clauses to the C front-end,
similar to the previously-posted patch for C++. Such syntax is permitted
for OpenMP 5.0 and above. It was previously posted for mainline here:
gcc/c/
* c-parser.cc (c_parser_braced_init, c_parser_conditional_expression):
Don't allow OpenMP array section.
(c_parser_postfix_expression): Don't allow array section in statement
expression.
(c_parser_postfix_expression_after_primary): Add support for OpenMP
array section parsing.
(c_parser_expr_list): Don't allow OpenMP array section here.
(c_parser_omp_variable_list): Change ALLOW_DEREF parameter to
MAP_LVALUE. Support parsing of general lvalues in "map", "to" and
"from" clauses.
(c_parser_omp_var_list_parens): Change ALLOW_DEREF parameter to
MAP_LVALUE. Update call to c_parser_omp_variable_list.
(c_parser_oacc_data_clause): Update calls to
c_parser_omp_var_list_parens.
(c_parser_omp_clause_reduction): Use OMP_ARRAY_SECTION tree node
instead of TREE_LIST for array sections.
(c_parser_omp_target): Allow GOMP_MAP_ATTACH.
* c-tree.h (c_omp_array_section_p): Add extern declaration.
(build_omp_array_section): Add prototype.
* c-typeck.cc (c_omp_array_section_p): Add flag.
(mark_exp_read): Support OMP_ARRAY_SECTION.
(build_omp_array_section): Add function.
(build_external_ref): Tweak error path for OpenMP array sections.
(handle_omp_array_sections_1): Use OMP_ARRAY_SECTION tree code instead
of TREE_LIST. Handle more kinds of expressions.
(c_oacc_check_attachments): Use OMP_ARRAY_SECTION instead of TREE_LIST
for array sections.
(c_finish_omp_clauses): Use OMP_ARRAY_SECTION instead of TREE_LIST.
Check for supported expression types.
gcc/testsuite/
* gcc.dg/gomp/bad-array-section-c-1.c: New test.
* gcc.dg/gomp/bad-array-section-c-2.c: New test.
* gcc.dg/gomp/bad-array-section-c-3.c: New test.
* gcc.dg/gomp/bad-array-section-c-4.c: New test.
* gcc.dg/gomp/bad-array-section-c-5.c: New test.
* gcc.dg/gomp/bad-array-section-c-6.c: New test.
* gcc.dg/gomp/bad-array-section-c-7.c: New test.
* gcc.dg/gomp/bad-array-section-c-8.c: New test.
libgomp/
* libgomp.texi: C/C++ lvalues are now supported for map/to/from.
* testsuite/libgomp.c-c++-common/ind-base-4.c: New test.
* testsuite/libgomp.c-c++-common/unary-ptr-1.c: New test.
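For illustration, the kind of syntax this enables (the declarations are
invented for the example; valid as C):
struct S { int *ptr; };

void
f (struct S *s, int n)
{
  /* OpenMP 5.0 "locator list item": a general lvalue, here a component
     access through a pointer, combined with an array section.  */
#pragma omp target map(tofrom: s->ptr[0:n])
  for (int i = 0; i < n; i++)
    s->ptr[i] += 1;
}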
Jason Merrill [Tue, 9 Jan 2024 10:15:01 +0000 (05:15 -0500)]
c++: corresponding object parms [PR113191]
As discussed, our handling of corresponding object parameters needed to
handle the using-declaration case better. And I took the opportunity to
share code between the add_method and cand_parms_match uses.
This patch specifically doesn't compare reversed parameters, but a follow-up
patch will.
PR c++/113191
gcc/cp/ChangeLog:
* class.cc (xobj_iobj_parameters_correspond): Add context parm.
(object_parms_correspond): Factor out of...
(add_method): ...here.
* method.cc (defaulted_late_check): Use it.
* call.cc (class_of_implicit_object): New.
(object_parms_correspond): Overload taking two candidates.
(cand_parms_match): Use it.
(joust): Check reversed before comparing constraints.
* cp-tree.h (object_parms_correspond): Declare.
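A small example of the rule in play, as we read it (C++23):
struct S
{
  void f(this S&);   // explicit object parameter of type S&
  void f() &;        // implicit object parameter is also S&: the object
                     // parameters correspond, so this is an invalid
                     // redeclaration that add_method must diagnose
};
For a base-class member brought in by a using-declaration, correspondence
has to be judged in the context of the derived class, which as we read the
ChangeLog is what the new context parameter to
xobj_iobj_parameters_correspond supports.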
Marcus Haehnel [Thu, 11 Jan 2024 16:05:54 +0000 (16:05 +0000)]
libstdc++: use updated type for __unexpected_handler
Commit f4130a3eb545ab1aaf3ecb44f3d06b43e3751e04 changed the type of
__unexpected_handler in libsupc++/unwind-cxx.h to be a
std::terminate_handler to avoid a deprecation warning. However, the
definition in eh_unex_handler.cc still used the old type
(std::unexpected_handler) and thus caused a warning when compiling
libstdc++ with -Wdeprecated-declarations (which is the default, for
example, for clang).
Adapt the definition to match the declaration.
libstdc++-v3/ChangeLog:
* libsupc++/eh_unex_handler.cc: Adjust definition type to
declaration.
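A schematic of the mismatch (initializers elided; simplified):
// declaration (unwind-cxx.h, post f4130a3e):
extern std::terminate_handler __unexpected_handler;
// old definition (eh_unex_handler.cc) used the deprecated alias:
//   std::unexpected_handler __unexpected_handler = /* handler */;
// fixed definition matches the declared type:
//   std::terminate_handler __unexpected_handler = /* handler */;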
We currently do not support adjustments to th_m_mir/th_m_miu, which will
trigger an ICE. It is therefore recommended to place the split
optimizations after reload to ensure FPRs are used when registers are
allocated.
gcc/ChangeLog:
* config/riscv/thead.md: Add limits for splits.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/xtheadfmemidx-medany.c: New test.
François Dumont [Wed, 10 Jan 2024 18:06:48 +0000 (19:06 +0100)]
libstdc++: [_GLIBCXX_DEBUG] Fix assignment of value-initialized iterator [PR112477]
Now that _M_Detach does not reset the iterator's _M_version value, we need
to reset it when the iterator is attached to a new sequence, even if this
sequence is null, as when assigning a value-initialized iterator. In that
case _M_version shall be reset to 0.
libstdc++-v3/ChangeLog:
PR libstdc++/112477
* src/c++11/debug.cc
(_Safe_iterator_base::_M_attach): Reset _M_version to 0 if attaching to null
sequence.
(_Safe_iterator_base::_M_attach_single): Likewise.
(_Safe_local_iterator_base::_M_attach): Likewise.
(_Safe_local_iterator_base::_M_attach_single): Likewise.
* testsuite/23_containers/map/debug/112477.cc: New test case.
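A minimal sketch of the scenario the fix covers (assuming a build with
-D_GLIBCXX_DEBUG; the behavior notes are our reading of the change):
#include <map>

int main()
{
  std::map<int, int> m{{1, 2}};
  std::map<int, int>::iterator it{};   // value-initialized: no sequence
  it = m.begin();                      // attach to m's sequence
  it = std::map<int, int>::iterator{}; // assign a value-initialized iterator:
                                       // _M_version must go back to 0 here
}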
Patrick Palka [Thu, 11 Jan 2024 18:14:46 +0000 (13:14 -0500)]
libstdc++/ranges: Use C++23 deducing this in _Pipe and _Partial
This simplifies the operator() of the _Pipe and _Partial range adaptor
closure objects using C++23 deducing this, allowing us to condense
multiple operator() overloads into one.
The new __like_t alias template is similar to the expositional one from
P0847R6 except it's implemented in terms of forward_like instead of vice
versa, and thus ours always yields a reference, so e.g. __like_t<A, char>
is char&& instead of char. For our purposes (forwarding) this shouldn't
make a difference, I think.
libstdc++-v3/ChangeLog:
* include/bits/move.h (__like_t): Define in C++23 mode.
* include/std/ranges (views::__adaptor::Partial::operator()):
Implement using C++23 deducing this when available.
(views::__adaptor::_Pipe::operator()): Likewise.
* testsuite/std/ranges/adaptors/100577.cc: Adjust testcase to
accept new "no match for call" errors issued in C++23 mode.
* testsuite/std/ranges/adaptors/lazy_split_neg.cc: Likewise.
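A condensed sketch of the technique (not the exact libstdc++ code; member
and class names are invented):
#include <utility>

template<typename _Tp, typename _Up>
  using __like_t = decltype(std::forward_like<_Tp>(std::declval<_Up>()));
  // always a reference: __like_t<A, char> is char&&, not char

template<typename _Lhs, typename _Rhs>
  struct _PipeSketch
  {
    _Lhs _M_lhs;
    _Rhs _M_rhs;

    // One deduced-this overload replaces the &/const&/&&/const&& quartet:
    // _Self carries the closure's constness and value category, and
    // forward_like propagates them onto the stored adaptors.
    template<typename _Self, typename _Range>
      constexpr auto
      operator()(this _Self&& __self, _Range&& __r)
      {
        return std::forward_like<_Self>(__self._M_rhs)(
                 std::forward_like<_Self>(__self._M_lhs)(
                   std::forward<_Range>(__r)));
      }
  };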
Andrew Pinski [Thu, 11 Jan 2024 06:13:03 +0000 (22:13 -0800)]
expr: Limit the store flag optimization for single bit to non-vectors [PR113322]
The problem here is that after the recent vectorizer improvements, we end
up with a comparison against a vector bool 0, which then reaches
expand_single_bit_test, which does not expect vector comparisons at all.
The IR was:
vector(4) <signed-boolean:1> mask_patt_5.13;
_Bool _12;
Andrew Pinski [Wed, 10 Jan 2024 22:25:37 +0000 (14:25 -0800)]
match: Delay folding of 1/x into `(x+1u)<2u?x:0` until late [PR113301]
Since ranger currently does not cope with the complexity of COND_EXPR in
some cases, delaying the simplification of `1/x` for signed types helps
code generation.
tree-ssa/divide-8.c is a new testcase where this can help.
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/113301
gcc/ChangeLog:
* match.pd (`1/x`): Delay signed case until late.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/divide-8.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Jonathan Wakely [Wed, 10 Jan 2024 10:53:43 +0000 (10:53 +0000)]
libstdc++: Add GDB printer for std::integral_constant
libstdc++-v3/ChangeLog:
* python/libstdcxx/v6/printers.py (StdIntegralConstantPrinter):
Add printer for std::integral_constant.
* testsuite/libstdc++-prettyprinters/cxx11.cc: Test it.
Jonathan Wakely [Tue, 9 Jan 2024 15:22:46 +0000 (15:22 +0000)]
libstdc++: Prefer posix_memalign for aligned-new [PR113258]
As described in PR libstdc++/113258, there are old versions of tcmalloc
which replace malloc and related APIs, but do not replace aligned_alloc
because it didn't exist at the time they were released. This means that
when operator new(size_t, align_val_t) uses aligned_alloc to obtain
memory, it comes from libc's aligned_alloc not from tcmalloc. But when
operator delete(void*, size_t, align_val_t) uses free to deallocate the
memory, that goes to tcmalloc's replacement version of free, which
doesn't know how to free it.
If we give preference to the older posix_memalign instead of
aligned_alloc then we're more likely to use a function that will be
compatible with the replacement version of free. Because posix_memalign
has been around for longer, it's more likely that old third-party malloc
replacements will also replace posix_memalign alongside malloc and free.
libstdc++-v3/ChangeLog:
PR libstdc++/113258
* libsupc++/new_opa.cc: Prefer to use posix_memalign if
available.
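A hedged sketch of the selection logic (the real new_opa.cc uses
configure-time macros; the guard spelling and function name below are
illustrative):
#include <cstddef>
#include <cstdlib>

static inline void*
aligned_alloc_sketch (std::size_t al, std::size_t sz)
{
#ifdef _GLIBCXX_HAVE_POSIX_MEMALIGN      // illustrative guard name
  // posix_memalign predates aligned_alloc, so old malloc replacements
  // (e.g. the tcmalloc builds in PR113258) are more likely to intercept
  // it, keeping the allocation and the later free() in the same heap.
  void* p = nullptr;
  if (al < sizeof(void*))                // posix_memalign requires a
    al = sizeof(void*);                  // multiple of sizeof(void*)
  return posix_memalign(&p, al, sz) == 0 ? p : nullptr;
#else
  return aligned_alloc(al, sz);
#endif
}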