This patch adds the +sme ISA feature and requires it to be present
when compiling arm_streaming code. (arm_streaming_compatible code
does not necessarily assume the presence of SME. It just has to
work when SME is present and streaming mode is enabled.)
gcc/
* doc/invoke.texi: Document SME.
* doc/sourcebuild.texi: Document aarch64_sme.
* config/aarch64/aarch64-option-extensions.def (sme): Define.
* config/aarch64/aarch64.h (AARCH64_ISA_SME): New macro.
(TARGET_SME): Likewise.
* config/aarch64/aarch64.cc (aarch64_override_options_internal):
Ensure that SME is present when compiling streaming code.
gcc/testsuite/
* lib/target-supports.exp (check_effective_target_aarch64_sme): New
target test.
* gcc.target/aarch64/sme/aarch64-sme.exp: Force SME to be enabled
if it isn't by default.
* g++.target/aarch64/sme/aarch64-sme.exp: Likewise.
* gcc.target/aarch64/sme/streaming_mode_3.c: New test.
This patch adds support for recognising the SME arm::streaming
and arm::streaming_compatible attributes. These attributes
describe three possible states: the processor is definitely in
"streaming mode" (PSTATE.SM==1, arm::streaming), the processor is
definitely not in streaming mode (PSTATE.SM==0, neither attribute),
or we don't know at compile time either way (arm::streaming_compatible).
As far as the compiler is concerned, this effectively creates three
ISA submodes: streaming mode enables things that are not available
in non-streaming mode, non-streaming mode enables things that are not
available in streaming mode, and streaming-compatible mode has to stick
to the common subset. This means that some instructions are conditional
on PSTATE.SM==1 and some are conditional on PSTATE.SM==0.
I wondered about recording the streaming state in a new variable.
However, the set of available instructions is also influenced by
PSTATE.ZA (added later), so I think it makes sense to view this
as an instance of a more general mechanism. Also, keeping the
PSTATE.SM state in the same flag variable as the other ISA
features makes it possible to sum up the requirements of an
ACLE function in a single value.
The patch therefore adds a new set of feature flags called "ISA modes".
Unlike the other two sets of flags (optional features and architecture-
level features), these ISA modes are not controlled directly by
command-line parameters or "target" attributes.
arm::streaming and arm::streaming_compatible are function type attributes
rather than function declaration attributes. This means that we need
to find somewhere to copy the type information across to a function's
target options. The patch does this in aarch64_set_current_function.
We also need to record which ISA mode a callee expects/requires
to be active on entry. (The same mode is then active on return.)
The patch extends the current UNSPEC_CALLEE_ABI cookie to include
this information, as well as the PCS variant that it recorded
previously.
The attributes can also be written __arm_streaming and
__arm_streaming_compatible. This has two advantages: it triggers
an error on compilers that don't understand the attributes, and it
eases use in C, where [[...]] attributes were only added in C23.
gcc/
* config/aarch64/aarch64-isa-modes.def: New file.
* config/aarch64/aarch64.h: Include it in the feature enumerations.
(AARCH64_FL_SM_STATE, AARCH64_FL_ISA_MODES): New constants.
(AARCH64_FL_DEFAULT_ISA_MODE): Likewise.
(AARCH64_ISA_MODE): New macro.
(CUMULATIVE_ARGS): Add an isa_mode field.
* config/aarch64/aarch64-protos.h (aarch64_gen_callee_cookie): Declare.
(aarch64_tlsdesc_abi_id): Return an arm_pcs.
* config/aarch64/aarch64.cc (attr_streaming_exclusions)
(aarch64_gnu_attributes, aarch64_gnu_attribute_table)
(aarch64_arm_attributes, aarch64_arm_attribute_table): New tables.
(aarch64_attribute_table): Redefine to include the gnu and arm
attributes.
(aarch64_fntype_pstate_sm, aarch64_fntype_isa_mode): New functions.
(aarch64_fndecl_pstate_sm, aarch64_fndecl_isa_mode): Likewise.
(aarch64_gen_callee_cookie, aarch64_callee_abi): Likewise.
(aarch64_insn_callee_cookie, aarch64_insn_callee_abi): Use them.
(aarch64_function_arg, aarch64_output_mi_thunk): Likewise.
(aarch64_init_cumulative_args): Initialize the isa_mode field.
(aarch64_output_mi_thunk): Use aarch64_gen_callee_cookie to get
the ABI cookie.
(aarch64_override_options): Add the ISA mode to the feature set.
(aarch64_temporary_target::copy_from_fndecl): Likewise.
(aarch64_fndecl_options, aarch64_handle_attr_arch): Likewise.
(aarch64_set_current_function): Maintain the correct ISA mode.
(aarch64_tlsdesc_abi_id): Return an arm_pcs.
(aarch64_comp_type_attributes): Handle arm::streaming and
arm::streaming_compatible.
* config/aarch64/aarch64-c.cc (aarch64_define_unconditional_macros):
Define __arm_streaming and __arm_streaming_compatible.
* config/aarch64/aarch64.md (tlsdesc_small_<mode>): Use
aarch64_gen_callee_cookie to get the ABI cookie.
* config/aarch64/t-aarch64 (TM_H): Add all feature-related .def files.
gcc/testsuite/
* gcc.target/aarch64/sme/aarch64-sme.exp: New harness.
* gcc.target/aarch64/sme/streaming_mode_1.c: New test.
* gcc.target/aarch64/sme/streaming_mode_2.c: Likewise.
* gcc.target/aarch64/sme/keyword_macros_1.c: Likewise.
* g++.target/aarch64/sme/aarch64-sme.exp: New harness.
* g++.target/aarch64/sme/streaming_mode_1.C: New test.
* g++.target/aarch64/sme/streaming_mode_2.C: Likewise.
* g++.target/aarch64/sme/keyword_macros_1.C: Likewise.
* gcc.target/aarch64/auto-init-1.c: Only expect the call insn
to contain 1 (const_int 0), not 2.
SME2 adds a number of intrinsics that operate on tuples of 2 and 4
vectors. The ACLE therefore extends the existing svreinterpret
intrinsics to handle tuples as well.
gcc/
* config/aarch64/aarch64-sve-builtins-base.cc
(svreinterpret_impl::fold): Punt on tuple forms.
(svreinterpret_impl::expand): Use tuple_mode instead of vector_mode.
* config/aarch64/aarch64-sve-builtins-base.def (svreinterpret):
Extend to x1234 groups.
* config/aarch64/aarch64-sve-builtins-functions.h
(multi_vector_function::vectors_per_tuple): If the function has
a group suffix, get the number of vectors from there.
* config/aarch64/aarch64-sve-builtins-shapes.h (reinterpret): Declare.
* config/aarch64/aarch64-sve-builtins-shapes.cc (reinterpret_def)
(reinterpret): New function shape.
* config/aarch64/aarch64-sve-builtins.cc (function_groups): Handle
DEF_SVE_FUNCTION_GS.
* config/aarch64/aarch64-sve-builtins.def (DEF_SVE_FUNCTION_GS): New
macro.
(DEF_SVE_FUNCTION): Forward to DEF_SVE_FUNCTION_GS by default.
* config/aarch64/aarch64-sve-builtins.h
(function_instance::tuple_mode): New member function.
(function_base::vectors_per_tuple): Take the function instance
as argument and get the number from the group suffix.
(function_instance::vectors_per_tuple): Update accordingly.
* config/aarch64/iterators.md (SVE_FULLx2, SVE_FULLx3, SVE_FULLx4)
(SVE_ALL_STRUCT): New mode iterators.
(SVE_STRUCT): Redefine in terms of SVE_FULL*.
* config/aarch64/aarch64-sve.md (@aarch64_sve_reinterpret<mode>)
(*aarch64_sve_reinterpret<mode>): Extend to SVE structure modes.
aarch64: Tweak error message for (tuple,vector) pairs
SME2 adds more intrinsics that take a tuple of vectors followed
by a single vector, with the two arguments expected to have the
same element type. Unlike with the existing svset* intrinsics,
the size of the tuple is not fixed by the overloaded function name.
This patch adds an error message that (hopefully) copes better
with that combination.
gcc/
* config/aarch64/aarch64-sve-builtins.cc
(function_resolver::require_derived_vector_type): Add a specific
error message for the case in which the caller wants a single
vector whose element type matches a previous tuple argument.
This patch makes some functions operate on sve_type, rather than just
on type suffixes. It also allows an overload to be resolved based on
a mode and sve_type. In this case the sve_type is used to derive the
group size as well as a type suffix.
This is needed for the SME2 intrinsics and the new tuple forms of
svreinterpret. No functional change intended on its own.
gcc/
* config/aarch64/aarch64-sve-builtins.h
(function_resolver::lookup_form): Add an overload that takes
an sve_type rather than type and group suffixes.
(function_resolver::resolve_to): Likewise.
(function_resolver::infer_vector_or_tuple_type): Return an sve_type.
(function_resolver::infer_tuple_type): Likewise.
(function_resolver::require_matching_vector_type): Take an sve_type
rather than a type_suffix_index.
(function_resolver::require_derived_vector_type): Likewise.
* config/aarch64/aarch64-sve-builtins.cc (num_vectors_to_group):
New function.
(function_resolver::lookup_form): Add an overload that takes
an sve_type rather than type and group suffixes.
(function_resolver::resolve_to): Likewise.
(function_resolver::infer_vector_or_tuple_type): Return an sve_type.
(function_resolver::infer_tuple_type): Likewise.
(function_resolver::infer_vector_type): Update accordingly.
(function_resolver::require_matching_vector_type): Take an sve_type
rather than a type_suffix_index.
(function_resolver::require_derived_vector_type): Likewise.
* config/aarch64/aarch64-sve-builtins-shapes.cc (get_def::resolve)
(set_def::resolve, store_def::resolve, tbl_tuple_def::resolve): Update
calls accordingly.
If an SVE ACLE intrinsic requires two arguments to have the
same type, the C resolver would report mismatches as "argument N
has type T2, but previous arguments had type T1". This patch makes
the message say which argument had type T1.
This is needed to give decent error messages for some SME cases.
gcc/
* config/aarch64/aarch64-sve-builtins.h
(function_resolver::require_matching_vector_type): Add a parameter
that specifies the number of the earlier argument that is being
matched against.
* config/aarch64/aarch64-sve-builtins.cc
(function_resolver::require_matching_vector_type): Likewise.
(require_derived_vector_type): Update calls accordingly.
(function_resolver::resolve_unary): Likewise.
(function_resolver::resolve_uniform): Likewise.
(function_resolver::resolve_uniform_opt_n): Likewise.
* config/aarch64/aarch64-sve-builtins-shapes.cc
(binary_long_lane_def::resolve): Likewise.
(clast_def::resolve, ternary_uint_def::resolve): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general-c/*: Replace "but previous
arguments had" with "but argument N had".
The current SVE ACLE function-resolution diagnostics assume
that a function has a fixed choice between vectors or tuples
of vectors. If an argument was not an SVE type at all, the
error message said the function "expects an SVE vector type"
or "expects an SVE tuple type".
This patch generalises the error to cope with cases where
an argument can be either a vector or a tuple. It also splits
out the diagnostics for mismatched tuple sizes, so that they
can be reused by later patches.
gcc/
* config/aarch64/aarch64-sve-builtins.h
(function_resolver::infer_sve_type): New member function.
(function_resolver::report_incorrect_num_vectors): Likewise.
* config/aarch64/aarch64-sve-builtins.cc
(function_resolver::infer_sve_type): New function.
(function_resolver::report_incorrect_num_vectors): New function,
split out from...
(function_resolver::infer_vector_or_tuple_type): ...here. Use
infer_sve_type.
Until now, the SVE ACLE code had mostly been able to represent
individual SVE arguments with just an element type suffix (s32, u32,
etc.). However, the SME2 ACLE provides many overloaded intrinsics
that operate on tuples rather than single vectors. This patch
therefore adds a new type (sve_type) that combines an element
type suffix with a vector count. This is enough to uniquely
represent all SVE ACLE types.
gcc/
* config/aarch64/aarch64-sve-builtins.h (sve_type): New struct.
(sve_type::operator==): New function.
(function_resolver::get_vector_type): Delete.
(function_resolver::report_no_such_form): Take an sve_type rather
than a type_suffix_index.
* config/aarch64/aarch64-sve-builtins.cc (get_vector_type): New
function.
(function_resolver::get_vector_type): Delete.
(function_resolver::report_no_such_form): Take an sve_type rather
than a type_suffix_index.
(find_sve_type): New function, split out from...
(function_resolver::infer_vector_or_tuple_type): ...here.
The SME2 ACLE adds a new "group" suffix component to the naming
convention for SVE intrinsics. This is also used in the new tuple
forms of the svreinterpret intrinsics.
This patch adds support for group suffixes and defines the
x2, x3 and x4 suffixes that are needed for the svreinterprets.
gcc/
* config/aarch64/aarch64-sve-builtins-shapes.cc (build_one): Take
a group suffix index parameter.
(build_32_64, build_all): Update accordingly. Iterate over all
group suffixes.
* config/aarch64/aarch64-sve-builtins-sve2.cc (svqrshl_impl::fold)
(svqshl_impl::fold, svrshl_impl::fold): Update function_instance
constructors.
* config/aarch64/aarch64-sve-builtins.cc (group_suffixes): New array.
(groups_none): New constant.
(function_groups): Initialize the groups field.
(function_instance::hash): Hash the group index.
(function_builder::get_name): Add the group suffix.
(function_builder::add_overloaded_functions): Iterate over all
group suffixes.
(function_resolver::lookup_form): Take a group suffix parameter.
(function_resolver::resolve_to): Likewise.
* config/aarch64/aarch64-sve-builtins.def (DEF_SVE_GROUP_SUFFIX): New
macro.
(x2, x3, x4): New group suffixes.
* config/aarch64/aarch64-sve-builtins.h (group_suffix_index): New enum.
(group_suffix_info): New structure.
(function_group_info::groups): New member variable.
(function_instance::group_suffix_id): Likewise.
(group_suffixes): New array.
(function_instance::operator==): Compare the group suffixes.
(function_instance::group_suffix): New function.
aarch64: Make AARCH64_FL_SVE requirements explicit
So far, all intrinsics covered by the aarch64-sve-builtins*
framework have (naturally enough) required at least SVE.
However, arm_sme.h defines a couple of intrinsics that can
be called by any code. It's therefore necessary to make
the implicit SVE requirement explicit.
We didn't previously use SVE's RDVL instruction, since the CNT*
forms are preferred and provide most of the range. However,
there are some cases that RDVL can handle and CNT* can't,
and using RDVL-like instructions becomes important for SME.
gcc/
* config/aarch64/aarch64-protos.h (aarch64_sve_rdvl_immediate_p)
(aarch64_output_sve_rdvl): Declare.
* config/aarch64/aarch64.cc (aarch64_sve_cnt_factor_p): New
function, split out from...
(aarch64_sve_cnt_immediate_p): ...here.
(aarch64_sve_rdvl_factor_p): New function.
(aarch64_sve_rdvl_immediate_p): Likewise.
(aarch64_output_sve_rdvl): Likewise.
(aarch64_offset_temporaries): Rewrite the SVE handling to use RDVL
for some cases.
(aarch64_expand_mov_immediate): Handle RDVL immediates.
(aarch64_mov_operand_p): Likewise.
* config/aarch64/constraints.md (Usr): New constraint.
* config/aarch64/aarch64.md (*mov<SHORT:mode>_aarch64): Add an RDVL
alternative.
(*movsi_aarch64, *movdi_aarch64): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/asm/cntb.c: Tweak expected output.
* gcc.target/aarch64/sve/acle/asm/cnth.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/cntw.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/cntd.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/prfb.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/prfh.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/prfw.c: Likewise.
* gcc.target/aarch64/sve/acle/asm/prfd.c: Likewise.
* gcc.target/aarch64/sve/loop_add_4.c: Expect RDVL to be used
to calculate the -17 and 17 factors.
* gcc.target/aarch64/sve/pcs/stack_clash_1.c: Likewise for the 18 factor.
require_immediate_lane_index previously hard-coded the assumption
that the group size is determined by the argument immediately before
the index. However, for SME, there are cases where it should be
determined by an earlier argument instead.
gcc/
* config/aarch64/aarch64-sve-builtins.h
(function_checker::require_immediate_lane_index): Add an argument
for the index of the indexed vector argument.
* config/aarch64/aarch64-sve-builtins.cc
(function_checker::require_immediate_lane_index): Likewise.
* config/aarch64/aarch64-sve-builtins-shapes.cc
(ternary_bfloat_lane_base::check): Update accordingly.
(ternary_qq_lane_base::check): Likewise.
(binary_lane_def::check): Likewise.
(binary_long_lane_def::check): Likewise.
(ternary_lane_def::check): Likewise.
(ternary_lane_rotate_def::check): Likewise.
(ternary_long_lane_def::check): Likewise.
(ternary_qq_lane_rotate_def::check): Likewise.
Rainer Orth [Tue, 5 Dec 2023 10:08:05 +0000 (11:08 +0100)]
ada: Fix Ada bootstrap on Solaris
The recent warning patches broke Ada bootstrap on Solaris:
adaint.c: In function '__gnat_kill':
adaint.c:3597:3: error: implicit declaration of function 'kill'
[-Wimplicit-function-declaration]
3597 | kill (pid, sig);
| ^~~~
expect.c: In function '__gnat_expect_poll':
expect.c:409:5: error: implicit declaration of function 'memset'
[-Wimplicit-function-declaration]
409 | FD_ZERO (&rset);
| ^~~~~~~
expect.c:55:1: note: include '<string.h>' or provide a declaration of 'memset'
54 | #include <sys/wait.h>
+++ |+#include <string.h>
55 | #endif
I'm now including the necessary headers: <signal.h> for kill and
<string.h> for memset.
Bootstrapped without regressions on i386-pc-solaris2.11,
sparc-sun-solaris2.11, x86_64-pc-linux-gnu, and
x86_64-apple-darwin23.1.0.
Rainer Orth [Tue, 5 Dec 2023 10:06:04 +0000 (11:06 +0100)]
gm2: Fix mc/mc.flex compilation on Solaris
The recent warning changes broke gm2 bootstrap on Solaris:
/vol/gcc/src/hg/master/local/gcc/m2/mc/mc.flex: In function 'handleFile':
/vol/gcc/src/hg/master/local/gcc/m2/mc/mc.flex:297:21: error: implicit
declaration of function 'alloca' [-Wimplicit-function-declaration]
297 | char *s = (char *)alloca (strlen (filename) + 2 + 1);
| ^~~~~~
alloca needs <alloca.h> on Solaris, which isn't universally available.
Since mc.flex doesn't include any config header, I chose to switch to
__builtin_alloca instead.
/vol/gcc/src/hg/master/local/gcc/m2/mc/mc.flex:332:19: error: implicit
declaration of function 'index' [-Wimplicit-function-declaration]
332 | char *p = index(sdate, '\n');
| ^~~~~
index is declared in <strings.h> on Solaris, again not a standard
header. I simply switched to using strchr to avoid that issue.
Bootstrapped without regressions on i386-pc-solaris2.11,
sparc-sun-solaris2.11, x86_64-pc-linux-gnu, and
x86_64-apple-darwin23.1.0.
Rainer Orth [Tue, 5 Dec 2023 10:04:06 +0000 (11:04 +0100)]
libiberty: Fix pex_unix_wait return type
The recent warning patches broke Solaris bootstrap:
/vol/gcc/src/hg/master/local/libiberty/pex-unix.c:326:3: error: initialization of 'pid_t (*)(struct pex_obj *, pid_t, int *, struct pex_time *, int, const char **, int *)' {aka 'long int (*)(struct pex_obj *, long int, int *, struct pex_time *, int, const char **, int *)'} from incompatible pointer type 'int (*)(struct pex_obj *, pid_t, int *, struct pex_time *, int, const char **, int *)' {aka 'int (*)(struct pex_obj *, long int, int *, struct pex_time *, int, const char **, int *)'} [-Wincompatible-pointer-types]
326 | pex_unix_wait,
| ^~~~~~~~~~~~~
/vol/gcc/src/hg/master/local/libiberty/pex-unix.c:326:3: note: (near initialization for 'funcs.wait')
While pex_funcs.wait expects a function returning pid_t, pex_unix_wait
currently returns int. However, on Solaris pid_t is long for 32-bit,
but int for 64-bit.
This patch fixes this by having pex_unix_wait return pid_t as
expected, like every other variant already does.
Bootstrapped without regressions on i386-pc-solaris2.11,
sparc-sun-solaris2.11, x86_64-pc-linux-gnu, and
x86_64-apple-darwin23.1.0.
Arm's SME has an array called ZA that for inline asm purposes
is effectively a form of special-purpose memory. It doesn't
have an associated storage type and so can't be passed and
returned in normal C/C++ objects.
We'd therefore like "za" in a clobber list to mean that an inline
asm can read from and write to ZA. (Just reading or writing
individually is unlikely to be useful, but we could add syntax
for that too if necessary.)
There is currently a TARGET_MD_ASM_ADJUST target hook that allows
targets to add clobbers to an asm instruction. This patch
extends that to allow targets to add USEs as well.
We have the following two hooks into the call expansion code:
- TARGET_CALL_ARGS is called for each argument before arguments
are moved into hard registers.
- TARGET_END_CALL_ARGS is called after the end of the call
sequence (specifically, after any return value has been
moved to a pseudo).
This patch adds a TARGET_START_CALL_ARGS hook that is called before
the TARGET_CALL_ARGS sequence. This means that TARGET_START_CALL_ARGS
and TARGET_END_CALL_ARGS bracket the region in which argument registers
might be live. They also bracket a region in which the only call
emitted by target-independent code is the call to the target function
itself. (For example, TARGET_START_CALL_ARGS happens after any use of
memcpy to copy arguments, and TARGET_END_CALL_ARGS happens before any
use of memcpy to copy the result.)
Also, the patch adds the cumulative argument structure as an argument
to the hooks, so that the target can use it to record and retrieve
information about the call as a whole.
The TARGET_CALL_ARGS docs said:
While generating RTL for a function call, this target hook is invoked once
for each argument passed to the function, either a register returned by
``TARGET_FUNCTION_ARG`` or a memory location. It is called just
before the point where argument registers are stored.
The last bit was true for normal calls, but for libcalls the hook was
invoked earlier, before stack arguments have been copied. I don't think
this caused a practical difference for nvptx (the only port to use the
hooks) since I wouldn't expect any libcalls to take stack parameters.
gcc/
* doc/tm.texi.in: Add TARGET_START_CALL_ARGS.
* doc/tm.texi: Regenerate.
* target.def (start_call_args): New hook.
(call_args, end_call_args): Add a parameter for the cumulative
argument information.
* hooks.h (hook_void_rtx_tree): Delete.
* hooks.cc (hook_void_rtx_tree): Likewise.
* targhooks.h (hook_void_CUMULATIVE_ARGS): Declare.
(hook_void_CUMULATIVE_ARGS_rtx_tree): Likewise.
* targhooks.cc (hook_void_CUMULATIVE_ARGS): New function.
(hook_void_CUMULATIVE_ARGS_rtx_tree): Likewise.
* calls.cc (expand_call): Call start_call_args before computing
and storing stack parameters. Pass the cumulative argument
information to call_args and end_call_args.
(emit_library_call_value_1): Likewise.
* config/nvptx/nvptx.cc (nvptx_call_args): Add a cumulative
argument parameter.
(nvptx_end_call_args): Likewise.
Epilogues for sibling calls are generated using the
sibcall_epilogue pattern. One disadvantage of this approach
is that the target doesn't know which call the epilogue is for,
even though the code that generates the pattern has the call
to hand.
Although call instructions are currently rtxes, and so could be
passed as an operand to the pattern, the main point of introducing
rtx_insn was to move towards separating the rtx and insn types
(a good thing IMO). There also isn't an existing practice of
passing genuine instructions (as opposed to labels) to
instruction patterns.
This patch therefore adds a hook that can be defined as an
alternative to sibcall_epilogue. The advantage is that it
can be passed the call; the disadvantage is that it can't
use .md conveniences like generating instructions from
textual patterns (although most epilogues are too complex
to benefit much from that anyway).
gcc/
* doc/tm.texi.in: Add TARGET_EMIT_EPILOGUE_FOR_SIBCALL.
* doc/tm.texi: Regenerate.
* target.def (emit_epilogue_for_sibcall): New hook.
* calls.cc (can_implement_as_sibling_call_p): Use it.
* function.cc (thread_prologue_and_epilogue_insns): Likewise.
(reposition_prologue_and_epilogue_notes): Likewise.
* config/aarch64/aarch64-protos.h (aarch64_expand_epilogue): Take
an rtx_call_insn * rather than a bool.
* config/aarch64/aarch64.cc (aarch64_expand_epilogue): Likewise.
(TARGET_EMIT_EPILOGUE_FOR_SIBCALL): Define.
* config/aarch64/aarch64.md (epilogue): Update call.
(sibcall_epilogue): Delete.
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for warnings, line 13)
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for warnings, line 14)
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for warnings, line 15)
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for warnings, line 16)
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for warnings, line 17)
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for warnings, line 18)
[-PASS:-]{+FAIL:+} gcc.dg/gnu23-builtins-no-dfp-1.c (test for excess errors)
This is due to:
[...]/gcc.dg/gnu23-builtins-no-dfp-1.c:13:13: error: implicit declaration of function '__builtin_fabsd32'; did you mean '__builtin_fabsf32'? [-Wimplicit-function-declaration]
[...]
By specifying '-fpermissive', commit f37744662cbc74efcceb790b99dcd6521c51a578
"[committed] Fix gnu23-builtins-no-dfp" subsequently resolved the FAILs, but
patch review concluded that for this test case it's secondary *how*
"implicit declaration of function" is diagnosed, so we test it the standard
way, which now produces an "error" instead of a "warning".
Allow prologues and epilogues to be inserted later
Arm's SME adds a new processor mode called streaming mode.
This mode enables some new (matrix-oriented) instructions and
disables several existing groups of instructions, such as most
Advanced SIMD vector instructions and a much smaller set of SVE
instructions. It can also change the current vector length.
There are instructions to switch in and out of streaming mode.
However, their effect on the ISA and vector length can't be represented
directly in RTL, so they need to be emitted late in the pass pipeline,
close to md_reorg.
It's sometimes the responsibility of the prologue and epilogue to
switch modes, which means we need to emit the prologue and epilogue
sequences late as well. (This loses shrink-wrapping and scheduling
opportunities, but that's a price worth paying.)
This patch therefore adds a target hook for forcing prologue
and epilogue insertion to happen later in the pipeline.
gcc/
* target.def (use_late_prologue_epilogue): New hook.
* doc/tm.texi.in: Add TARGET_USE_LATE_PROLOGUE_EPILOGUE.
* doc/tm.texi: Regenerate.
* passes.def (pass_late_thread_prologue_and_epilogue): New pass.
* tree-pass.h (make_pass_late_thread_prologue_and_epilogue): Declare.
* function.cc (pass_thread_prologue_and_epilogue::gate): New function.
(pass_data_late_thread_prologue_and_epilogue): New pass variable.
(pass_late_thread_prologue_and_epilogue): New pass class.
(make_pass_late_thread_prologue_and_epilogue): New function.
lra: Updates of biggest mode for hard regs [PR112278]
LRA keeps track of the biggest mode for both hard registers and
pseudos. The updates assume that the modes are ordered, i.e. that
we can tell whether one is no bigger than the other at compile time.
That is (or at least seemed to be) a reasonable restriction for pseudos.
But it isn't necessarily so for hard registers, since the uses of hard
registers can be logically distinct. The testcase is an example of this.
The biggest mode of hard registers is also special for other reasons.
As the existing comment says:
/* A reg can have a biggest_mode of VOIDmode if it was only ever seen as
part of a multi-word register. In that case, just use the reg_rtx
mode. Do the same also if the biggest mode was larger than a register
or we can not compare the modes. Otherwise, limit the size to that of
the biggest access in the function or to the natural mode at least. */
This patch applies the same approach to the updates.
gcc/
PR rtl-optimization/112278
* lra-int.h (lra_update_biggest_mode): New function.
* lra-coalesce.cc (merge_pseudos): Use it.
* lra-lives.cc (process_bb_lives): Likewise.
* lra.cc (new_insn_reg): Likewise.
gcc/testsuite/
PR rtl-optimization/112278
* gcc.target/aarch64/sve/pr112278.c: New test.
Jakub Jelinek [Tue, 5 Dec 2023 08:45:40 +0000 (09:45 +0100)]
lower-bitint: Make temporarily wrong IL less wrong [PR112843]
As discussed in the PR, for the middle (on x86-64 65..128 bit) _BitInt
types like
_1 = x_4(D) * 5;
where _1 and x_4(D) have _BitInt(128) type and x is PARM_DECL, the bitint
lowering pass wants to replace this with
_13 = (int128_t) x_4(D);
_12 = _13 * 5;
_1 = (_BitInt(128)) _12;
where _13 and _12 have int128_t type and the ranger ICEs when the IL is
temporarily invalid:
during GIMPLE pass: bitintlower
pr112843.c: In function ‘foo’:
pr112843.c:7:1: internal compiler error: Segmentation fault
7 | foo (_BitInt (128) x, _BitInt (256) y)
| ^~~
0x152943f crash_signal
../../gcc/toplev.cc:316
0x25c21c8 ranger_cache::range_of_expr(vrange&, tree_node*, gimple*)
../../gcc/gimple-range-cache.cc:1204
0x25cdcf9 fold_using_range::range_of_range_op(vrange&, gimple_range_op_handler&, fur_source&)
../../gcc/gimple-range-fold.cc:671
0x25cf9a0 fold_using_range::fold_stmt(vrange&, gimple*, fur_source&, tree_node*)
../../gcc/gimple-range-fold.cc:602
0x25b5520 gimple_ranger::update_stmt(gimple*)
../../gcc/gimple-range.cc:564
0x16f1234 update_stmt_operands(function*, gimple*)
../../gcc/tree-ssa-operands.cc:1150
0x117a5b6 update_stmt_if_modified(gimple*)
../../gcc/gimple-ssa.h:187
0x117a5b6 update_stmt_if_modified(gimple*)
../../gcc/gimple-ssa.h:184
0x117a5b6 update_modified_stmt
../../gcc/gimple-iterator.cc:44
0x117a5b6 gsi_insert_after(gimple_stmt_iterator*, gimple*, gsi_iterator_update)
../../gcc/gimple-iterator.cc:544
0x25abc2f gimple_lower_bitint
../../gcc/gimple-lower-bitint.cc:6348
What the code does right now is, it first creates a new SSA_NAME (_12
above), adds the
_1 = (_BitInt(128)) _12;
stmt after it (where it crashes, because _12 has no SSA_NAME_DEF_STMT yet),
then sets lhs of the previous stmt to _12 (this is also temporarily
incorrect, there are incompatible types involved in the stmt), later on
changes also operands and finally update_stmt it.
The following patch instead changes the lhs of the stmt before adding the
cast after it. The question is if this is less or more wrong temporarily
(but the ICE is gone). In addition to that the patch moves the operand
adjustments before the lhs adjustment.
The reason I tweaked the lhs first is that it then just uses gimple_op and
iterates over all ops, if that is done before lhs it would need to special
case which op to skip because it is lhs (I'm using gimple_get_lhs for the
lhs, but this isn't done for GIMPLE_CALL nor GIMPLE_PHI, so GIMPLE_ASSIGN
or say GIMPLE_GOTO etc. are the only options).
2023-12-05 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/112843
* gimple-lower-bitint.cc (gimple_lower_bitint): Change lhs of stmt
to lhs2 before building and inserting lhs = (cast) lhs2; assignment.
Adjust stmt operands before adjusting lhs.
On the testcase I've recently fixed I've noticed bad code generation,
we emit
pxor %xmm1, %xmm1
psrld $31, %xmm0
pcmpeqd %xmm1, %xmm0
pcmpeqd %xmm1, %xmm0
or
vpxor %xmm1, %xmm1, %xmm1
vpsrld $31, %xmm0, %xmm0
vpcmpeqd %xmm1, %xmm0, %xmm0
vpcmpeqd %xmm1, %xmm0, %xmm2
rather than
psrad $31, %xmm2
or
vpsrad $31, %xmm1, %xmm2
The following patch fixes that using a combiner splitter.
2023-12-05 Jakub Jelinek <jakub@redhat.com>
PR target/112816
* config/i386/sse.md ((eq (eq (lshiftrt x elt_bits-1) 0) 0)): New
splitter to turn psrld $31; pcmpeq; pcmpeq into psrad $31.
Richard Biener [Mon, 4 Dec 2023 13:03:37 +0000 (14:03 +0100)]
c/89270 - honor registered_builtin_types in type_for_size
The following fixes the intermediate conversions inserted by
convert_to_integer when facing address spaces, converting to their
effective [u]intptr_t when they are registered_builtin_types, by
considering those also from c_common_type_for_size and not
only from c_common_type_for_mode.
Richard Biener [Mon, 4 Dec 2023 14:46:38 +0000 (15:46 +0100)]
tree-optimization/112827 - more SCEV cprop fixes
The insert iteration can be corrupted by foldings of replace_uses_by,
within this particular PHI replacement but also with subsequent ones.
Recompute the insert location before insertion instead.
This fixes an observed ICE of gcc.dg/tree-ssa/ssa-sink-16.c.
PR tree-optimization/112827
PR tree-optimization/112848
* tree-scalar-evolution.cc (final_value_replacement_loop):
Compute the insert location for each insert.
liuhongt [Mon, 27 Nov 2023 05:35:41 +0000 (13:35 +0800)]
Take register pressure into account for vec_construct/scalar_to_vec when the components are not loaded from memory.
For vec_construct, the components must be live at the same time if
they're not loaded from memory, when the number of those components
exceeds available registers, spill happens. Try to account that with a
rough estimation.
??? Ideally, we should have an overall estimation of register pressure
if we know the live range of all variables.
gcc/ChangeLog:
* config/i386/i386.cc (ix86_vector_costs::add_stmt_cost):
Count sse_reg/gpr_regs for components not loaded from memory.
(ix86_vector_costs::ix86_vector_costs): New constructor.
(ix86_vector_costs::m_num_gpr_needed[3]): New private member.
(ix86_vector_costs::m_num_sse_needed[3]): Ditto.
(ix86_vector_costs::finish_cost): Estimate overall register
pressure cost.
(ix86_vector_costs::ix86_vect_estimate_reg_pressure): New
function.
Marek Polacek [Tue, 19 Sep 2023 20:31:17 +0000 (16:31 -0400)]
c++: implement P2564, consteval needs to propagate up [PR107687]
This patch implements P2564, described at <wg21.link/p2564>, whereby
certain functions are promoted to consteval. For example:
consteval int id(int i) { return i; }
template <typename T>
constexpr int f(T t)
{
return t + id(t); // id causes f<int> to be promoted to consteval
}
void g(int i)
{
f (3);
}
now compiles. Previously the code was ill-formed: we would complain
that 't' in 'f' is not a constant expression. Since 'f' is now
consteval, it means that the call to id(t) is in an immediate context,
so doesn't have to produce a constant -- this is how we allow consteval
functions composition. But making 'f<int>' consteval also means that
the call to 'f' in 'g' must yield a constant; failure to do so results
in an error. I made the effort to have cc1plus explain to us what's
going on. For example, calling f(i) produces this neat diagnostic:
w.C:11:11: error: call to consteval function 'f<int>(i)' is not a constant expression
11 | f (i);
| ~~^~~
w.C:11:11: error: 'i' is not a constant expression
w.C:6:22: note: 'constexpr int f(T) [with T = int]' was promoted to an immediate function because its body contains an immediate-escalating expression 'id(t)'
6 | return t + id(t); // id causes f<int> to be promoted to consteval
| ~~^~~
which hopefully makes it clear what's going on.
Implementing this proposal has been tricky. One problem was delayed
instantiation: instantiating a function can set off a domino effect
where one call promotes a function to consteval but that then means
that another function should also be promoted, etc.
In v1, I addressed the delayed instantiation problem by instantiating
trees early, so that we can escalate functions right away. That caused
a number of problems, and in certain cases, like consteval-prop3.C, it
can't work, because we need to wait till EOF to see the definition of
the function anyway. Overeager instantiation tends to cause diagnostic
problems too.
In v2, I attempted to move the escalation to the gimplifier, at which
point all templates have been instantiated. That attempt flopped,
however, because once we've gimplified a function, its body is discarded
and as a consequence, you can no longer evaluate a call to that function
which is required for escalating, which needs to decide if a call is
a constant expression or not.
Therefore, we have to perform the escalation before gimplifying, but
after instantiate_pending_templates. That's not easy because we have
no way to walk all the trees. In the v2 patch, I use two vectors: one
to store function decls that may become consteval, and another to
remember references to immediate-escalating functions. Unfortunately
the latter must also stash functions that call immediate-escalating
functions. Consider:
int g(int i)
{
f<int>(i); // f is immediate-escalating
}
where g itself is not immediate-escalating, but we have to make sure
that if f gets promoted to consteval, we give an error.
A new option, -fno-immediate-escalation, is provided to suppress
escalating functions.
v2 also adds a new flag, DECL_ESCALATION_CHECKED_P, so that we don't
escalate a function multiple times, and so that we can distinguish between
explicitly consteval functions and functions that have been promoted
to consteval.
In v3, I removed one of the new vectors and changed the other one
to a hash set. This version also contains numerous cleanups.
v4 merges find_escalating_expr_r into cp_fold_immediate_r. It also
adds a new optimization in cp_fold_function.
v5 greatly simplifies the code.
v6 simplifies the code further and removes an ff_ flag.
v7 removes maybe_promote_function_to_consteval and further simplifies
cp_fold_immediate_r logic.
* call.cc (in_immediate_context): No longer static.
* constexpr.cc (cxx_eval_call_expression): Adjust assert.
* cp-gimplify.cc (deferred_escalating_exprs): New vec.
(remember_escalating_expr): New.
(enum fold_flags): Remove ff_fold_immediate.
(immediate_escalating_function_p): New.
(unchecked_immediate_escalating_function_p): New.
(promote_function_to_consteval): New.
(cp_fold_immediate): Move above. Return non-null if any errors were
emitted.
(maybe_explain_promoted_consteval): New.
(cp_gimplify_expr) <case CALL_EXPR>: Assert we've handled all
immediate invocations.
(taking_address_of_imm_fn_error): New.
(cp_fold_immediate_r): Merge ADDR_EXPR and PTRMEM_CST cases. Implement
P2564 - promoting functions to consteval.
<case CALL_EXPR>: Implement P2564 - promoting functions to consteval.
(cp_fold_r): If an expression turns into a CALL_EXPR after cp_fold,
call cp_fold_immediate_r on the CALL_EXPR.
(cp_fold_function): Set DECL_ESCALATION_CHECKED_P if
deferred_escalating_exprs does not contain current_function_decl.
(process_and_check_pending_immediate_escalating_fns): New.
* cp-tree.h (struct lang_decl_fn): Add escalated_p bit-field.
(DECL_ESCALATION_CHECKED_P): New.
(immediate_invocation_p): Declare.
(process_pending_immediate_escalating_fns): Likewise.
* decl2.cc (c_parse_final_cleanups): Set at_eof to 2 after all
templates have been instantiated; and to 3 at the end of the function.
Call process_pending_immediate_escalating_fns.
* error.cc (dump_template_bindings): Check at_eof against an updated
value.
* module.cc (trees_out::lang_decl_bools): Stream escalated_p.
(trees_in::lang_decl_bools): Likewise.
* pt.cc (push_tinst_level_loc): Set at_eof to 3, not 2.
* typeck.cc (cp_build_addr_expr_1): Don't check
DECL_IMMEDIATE_FUNCTION_P.
Andrew Pinski [Sun, 12 Nov 2023 04:33:28 +0000 (20:33 -0800)]
MATCH: Fix zero_one_valued_p's convert pattern
While working on PR 111972, I was getting a regression
due to zero_one_valued_p matching a signed 1 bit integer
when it came to convert. This patch fixes that by checking
the outer type too.
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* match.pd (zero_one_valued_p): For convert
make sure type is not a signed 1-bit integer.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Jeff Law [Mon, 4 Dec 2023 17:06:49 +0000 (10:06 -0700)]
[committed] Fix HImode load mnemonic on microblaze port
The tester recently started failing va-arg-22.c on microblaze-linux:
gcc.c-torture/execute/va-arg-22.c -O0 (test for excess errors)
It was failing with an undefined reference to "r7" at link time. This was
ultimately tracked down to a HImode load using (reg+reg) addressing mode, but
which used the lhui instruction instead of lhu. The "i" means it's supposed to
be (reg+disp) so the assembler tried to interpret "r7" as an immediate/symbol.
The port uses %i<opnum> as an output modifier to select between sh/shi and
various other mnemonics for loads/stores. The movhi pattern simply failed to
use it for the two cases where it's loading from memory (interestingly enough
it was used for stores).
Clearly we aren't using reg+reg much for HImode loads as this didn't fix
anything else in the testsuite.
gcc/
* config/microblaze/microblaze.md (movhi): Use %i for half-word
loads to properly select between lhu/lhui.
Robin Dapp [Fri, 1 Dec 2023 08:45:29 +0000 (09:45 +0100)]
RISC-V: Fix rawmemchr implementation.
This fixes a bug in the rawmemchr implementation by incrementing the
source address by vl * element_size instead of just vl.
This is normally harmless as we will just scan the same region more than
once but, in combination with an older qemu version, will lead to
an execution failure in SPEC2017.
gcc/ChangeLog:
* config/riscv/riscv-string.cc (expand_rawmemchr): Increment
source address by vl * element_size.
Robin Dapp [Fri, 1 Dec 2023 08:30:17 +0000 (09:30 +0100)]
RISC-V: Rename and unify stringop strategy handling.
In preparation for the vectorized strlen and strcmp support this NFC
patch unifies the stringop strategy handling a bit. The "auto"
strategy now is a combination of scalar and vector and an expander
should try the strategies in their preferred order.
For the block_move expander this patch does just that.
gcc/ChangeLog:
* config/riscv/riscv-opts.h (enum riscv_stringop_strategy_enum):
Rename...
(enum stringop_strategy_enum): ... to this.
* config/riscv/riscv-string.cc (riscv_expand_block_move): New
wrapper expander handling the strategies and delegation.
(riscv_expand_block_move_scalar): Rename function and make
static.
(expand_block_move): Remove strategy handling.
* config/riscv/riscv.md: Call expander wrapper.
* config/riscv/riscv.opt: Rename.
Richard Biener [Mon, 4 Dec 2023 13:50:59 +0000 (14:50 +0100)]
middle-end/112785 - guard against last_clique overflow
The PR shows that we'll ICE eventually when last_clique wraps. The
following avoids this by refusing to hand out new cliques after
exhausting them. We then use zero (no clique) as conservative
fallback.
PR middle-end/112785
* function.h (get_new_clique): New inline function handling
last_clique overflow.
* cfgrtl.cc (duplicate_insn_chain): Use it.
* tree-cfg.cc (gimple_duplicate_bb): Likewise.
* tree-inline.cc (remap_dependence_clique): Likewise.
This patch documents the optimization parameter
riscv-strcmp-inline-limit, which can be used to tweak the behaviour
of -minline-strcmp and -minline-strncmp.
Juzhe-Zhong [Mon, 4 Dec 2023 13:44:56 +0000 (21:44 +0800)]
RISC-V: Fix overlap group incorrect overlap on v0
In serious high register pressure case (appended in this patch):
We see vluxei8.v v0,(s1),v1,v0.t which is not allowed.
Since according to RVV ISA:
+;; The destination vector register group for a masked vector instruction cannot overlap the source mask register (v0),
+;; unless the destination vector register is being written with a mask value (e.g., compares) or the scalar result of a reduction.
Such a case has no spills, however we expect it to be spilled and the data reloaded instead.
The root cause is that I made a mistake in a previous patch when matching the dest operand and mask operand constraints:
dest: "=vr"
mask: "vmWc1"
After this patch:
dest: "vd,vr"
mask: "vm,Wc1"
make EEW widening pattern are same as other instruction patterns.
PR target/112431
gcc/ChangeLog:
* config/riscv/vector.md: Fix incorrect overlap in v0.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/pr112431-34.c: New test.
Richard Biener [Mon, 4 Dec 2023 09:46:11 +0000 (10:46 +0100)]
tree-optimization/112827 - corrupt SCEV cache during SCCP
The following avoids corrupting the SCEV cache by my last change
to propagate constant final values immediately. The easiest fix
is to keep a dead initialization around.
PR tree-optimization/112827
* tree-scalar-evolution.cc (final_value_replacement_loop):
Do not release SSA name but keep a dead initialization around.
* gcc.dg/torture/pr112827-1.c: New testcase.
* gcc.dg/torture/pr112827-2.c: Likewise.
Juzhe-Zhong [Mon, 4 Dec 2023 08:51:06 +0000 (16:51 +0800)]
RISC-V: Remove earlyclobber from widen reduction
Since the destination of a reduction is not a vector register group, there
is no need to apply the overlap constraint.
This was also confirmed against Clang/LLVM:
The mir in LLVM has early clobber:
early-clobber %49:vrm2 = PseudoVWADD_VX_M1 $noreg(tied-def 0), killed %17:vr, %48:gpr, %0:gprnox0, 3, 0; example.c:59:24
The mir in LLVM doesn't have early clobber:
%48:vr = PseudoVWREDSUM_VS_M2_E8 $noreg(tied-def 0), %17:vrm2, killed %33:vr, %0:gprnox0, 3, 1; example.c:60:26
Both vwredsum.vs v24, v8, v24 and vwredsum.vs v8, v8, v24 were also
confirmed to be legal in LLVM.
To align with LLVM and honor the RISC-V V spec, remove the earlyclobber.
Indu Bhagat [Mon, 4 Dec 2023 09:57:34 +0000 (01:57 -0800)]
BTF: fix PR debug/112656
PR debug/112656 - btf: function prototypes generated with name
With this patch, all BTF_KIND_FUNC_PROTO will appear anonymous in the
generated BTF section.
As noted in the discussion in the bugzilla, the number of
BTF_KIND_FUNC_PROTO types output varies across targets (BPF with -mco-re
vs non-BPF targets). Hence the check in the test case merely checks
that all BTF_KIND_FUNC_PROTO appear anonymous.
gcc/ChangeLog:
PR debug/112656
* btfout.cc (btf_asm_type): Fixup ctti_name for all
BTF types of kind BTF_KIND_FUNC_PROTO.
gcc/testsuite/ChangeLog:
PR debug/112656
* gcc.dg/debug/btf/btf-function-7.c: New test.
The patch adds a small function to abstract out the detail and return
the name of the type. The patch also fixes the issue of BTF_KIND_FUNC
appearing in the comments with a 'null' string.
PR debug/112768
* btfout.cc (get_btf_type_name): New definition.
(btf_collect_datasec): Update dtd_name to the original type name
string.
(btf_asm_type_ref): Use the new get_btf_type_name function
instead.
(btf_asm_type): Likewise.
(btf_asm_func_type): Likewise.
gcc/testsuite/ChangeLog:
PR debug/112768
* gcc.dg/debug/btf/btf-function-6.c: Empty string expected with
BTF_KIND_FUNC_PROTO.
Jakub Jelinek [Mon, 4 Dec 2023 08:01:09 +0000 (09:01 +0100)]
i386: Fix rtl checking ICE in ix86_elim_entry_set_got [PR112837]
The following testcase ICEs with RTL checking, because the code checks whether
XINT (SET_SRC (set), 1) is UNSPEC_SET_GOT without checking if SET_SRC (set)
is actually an UNSPEC, so any time we see any other insn with PARALLEL
and a SET in it which is not an UNSPEC we ICE during RTL checking or
access there some other union member as if it was an rt_int.
The rest is just small cleanup.
2023-12-04 Jakub Jelinek <jakub@redhat.com>
PR target/112837
* config/i386/i386.cc (ix86_elim_entry_set_got): Before checking
for UNSPEC_SET_GOT check that SET_SRC is UNSPEC. Use SET_SRC and
SET_DEST macros instead of XEXP, rename vec variable to set.
Jakub Jelinek [Mon, 4 Dec 2023 08:00:18 +0000 (09:00 +0100)]
i386: Fix up signbit<mode>2 expander [PR112816]
The following testcase ICEs, because the signbit<mode>2 expander uses an
explicit SUBREG in the pattern around match_operand with register_operand
predicate. If we are unlucky enough that expansion tries to expand it
with some SUBREG as operands[1], we have two nested SUBREGs in the IL,
which is not valid and causes ICE later.
2023-12-04 Jakub Jelinek <jakub@redhat.com>
PR target/112816
* config/i386/sse.md (signbit<mode>2): Force operands[1] into a REG.
Jakub Jelinek [Mon, 4 Dec 2023 07:59:15 +0000 (08:59 +0100)]
c++: #pragma GCC unroll C++ fixes [PR112795]
foo in the unroll-5.C testcase ICEs because cp_parser_pragma_unroll
during parsing calls maybe_constant_value unconditionally, which is
fine if !processing_template_decl, but can ICE otherwise.
While just calling fold_non_dependent_expr there instead could be enough
to fix the ICE (and I guess the right thing to do for backports if any),
I don't see a reason why we couldn't handle a dependent #pragma GCC unroll
argument as well, the unrolling isn't done in the FE and all the middle-end
cares about is that ANNOTATE_EXPR has a 1..65534 last operand when it is
annot_expr_unroll_kind.
So, the following patch changes all the unsigned short unroll arguments
to tree unroll (and thus avoids the tree -> unsigned short -> tree
conversions), does the type and value checking during parsing only if
the argument isn't dependent and repeats it during instantiation.
2023-12-04 Jakub Jelinek <jakub@redhat.com>
PR c++/112795
gcc/cp/
* cp-tree.h (cp_convert_range_for): Change UNROLL type from
unsigned short to tree.
(finish_while_stmt_cond, finish_do_stmt, finish_for_cond): Likewise.
* parser.cc (cp_parser_statement): Pass NULL_TREE rather than 0 to
cp_parser_iteration_statement UNROLL argument.
(cp_parser_for, cp_parser_c_for): Change UNROLL type from
unsigned short to tree.
(cp_parser_range_for): Likewise. Set RANGE_FOR_UNROLL to just UNROLL
rather than build_int_cst from it.
(cp_convert_range_for, cp_parser_iteration_statement): Change UNROLL
type from unsigned short to tree.
(cp_parser_omp_loop_nest): Pass NULL_TREE rather than 0 to
cp_parser_range_for UNROLL argument.
(cp_parser_pragma_unroll): Return tree rather than unsigned short.
If parsed expression is type dependent, just return it, don't diagnose
issues with value if it is value dependent.
(cp_parser_pragma): Change UNROLL type from unsigned short to tree.
* semantics.cc (finish_while_stmt_cond): Change UNROLL type from
unsigned short to tree. Build ANNOTATE_EXPR with UNROLL as its last
operand rather than build_int_cst from it.
(finish_do_stmt, finish_for_cond): Likewise.
* pt.cc (tsubst_stmt) <case RANGE_FOR_STMT>: Change UNROLL type from
unsigned short to tree and set it to RECUR on RANGE_FOR_UNROLL (t).
(tsubst_expr) <case ANNOTATE_EXPR>: For annot_expr_unroll_kind repeat
checks on UNROLL value from cp_parser_pragma_unroll.
gcc/testsuite/
* g++.dg/ext/unroll-5.C: New test.
* g++.dg/ext/unroll-6.C: New test.
Kito Cheng [Mon, 27 Nov 2023 14:01:44 +0000 (22:01 +0800)]
RISC-V: Refactor riscv_implied_info_t to make it able to handle conditional implication [NFC]
RISC-V ISA implication rules become little bit complicated than before,
it may come with condition, so this commit extend the capability of
riscv_implied_info_t, also make it more...C++ize.
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc (riscv_implied_predicator_t): New.
(riscv_implied_info_t::riscv_implied_info_t): New.
(riscv_implied_info_t::match): New.
(riscv_implied_info): New entry for zcf.
(riscv_subset_list::handle_implied_ext): Use
riscv_implied_info_t::match.
(riscv_subset_list::check_implied_ext): Ditto.
(riscv_subset_list::handle_combine_ext): Ditto.
(riscv_subset_list::parse): Move zcf implication handling to
riscv_implied_infos.
Gaius Mulley [Mon, 4 Dec 2023 01:35:46 +0000 (01:35 +0000)]
PR modula2/112825: modula2 builds target objects as part of all-gcc
This patch fixes PR modula2/112825, which fails if the target
assembler is not present on the host. This can be seen if the
build invokes make all-gcc. m2 should not attempt to generate
target libraries when performing make all-gcc.
Prior to this patch it generated build/gcc/m2/gm2-libs/SYSTEM.def
using the script gcc/m2/tools-src/makeSystem (and gm2 -c).
makeSystem should exec gm2 -S instead (and other flags)
to generate the list of target data types without requiring any
target tools. The target types emitted are textually converted
into SYSTEM.def.
gcc/m2/ChangeLog:
PR modula2/112825
* tools-src/makeSystem: Change all occurrences of -c to -S.
Jakub Jelinek [Sun, 3 Dec 2023 19:03:27 +0000 (20:03 +0100)]
testsuite: Fix up gcc.target/aarch64/pr112406.c for modern C [PR112406]
On Fri, Nov 17, 2023 at 02:04:01PM +0100, Robin Dapp wrote:
> > Yes, your version is also OK.
>
> The attached was bootstrapped and regtested on aarch64, x86 and
> regtested on riscv. Going to commit it later unless somebody objects.
Unfortunately the aarch64/pr112406.c was reduced too much and is rejected
since the switch to modern C patchset.
The following patch fixes that, I've verified the testcase
before/after the changes still ICEs in r14-5563 and doesn't with
r14-5564 and after the changes compiles fine with even latest trunk.
Everything admittedly with a cross-compiler, but that shouldn't change
anything.
Note, one of the modern C changes is that at least when people use
cvise/creduce/delta scripts which ensure no further errors are introduced
during the reduction than were expected originally, such reductions will not
appear anymore.
2023-12-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/112406
* gcc.target/aarch64/pr112406.c (MagickPixelPacket): Add missing
semicolon.
(GetImageChannelMoments_image): Avoid using implicit int.
(SetMagickPixelPacket): Use void return type instead of implicit int.
(GetImageChannelMoments): Likewise. Use __builtin_atan instead of
atan.
Jakub Jelinek [Sun, 3 Dec 2023 16:54:03 +0000 (17:54 +0100)]
lower-bitint: Fix up lower_addsub_overflow [PR112807]
lower_addsub_overflow uses handle_cast or handle_operand to extract current
limb from the operands. Both of those functions heavily assume that they
return a large or huge BITINT_TYPE. The problem in the testcase is that
this is violated. Normally, lower_addsub_overflow isn't even called if
neither the return type's element type nor any of the operands is a large/huge
BITINT_TYPE (on x86_64 129+ bits), for middle BITINT_TYPE (on x86_64 65-128
bits) some other code casts such operands to {,unsigned }__int128.
In the testcase the result is complex unsigned, so small, but one of the
arguments is _BitInt(256), so lower_addsub_overflow is called. But
range_for_prec asks the ranger for ranges of the operands and in this
case the first argument has [0, 0xffffffff] range and second [-2, 1], so
unsigned 32-bit and signed 2-bit, and in such case the code for
handle_operand/handle_cast purposes would use the _BitInt(256) type for the
first operand (ok), but because prec3 (i.e. the maximum of the result
precision and the precisions VRP computed for the arguments) is 32, it
would use a cast to a 32-bit BITINT_TYPE, which is why it didn't work correctly.
The following patch ensures that in such cases we use handle_cast to the
type of the other argument.
Perhaps incrementally, we could try to optimize this in an earlier phase,
see that while the .{ADD,SUB}_OVERFLOW has large/huge _BitInt argument, as
ranger says it fits into a smaller type, add a cast of the larger argument
to the smaller precision type in which it fits. Either in
gimple_lower_bitint, or match.pd. An argument for the latter is that e.g.
complex unsigned .ADD_OVERFLOW (unsigned_long_long_arg, unsigned_arg)
where ranger says unsigned_long_long_arg fits into unsigned 32-bit could
be also more efficient as
.ADD_OVERFLOW ((unsigned) unsigned_long_long_arg, unsigned_arg)
2023-12-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/112807
* gimple-lower-bitint.cc (bitint_large_huge::lower_addsub_overflow):
When choosing type0 and type1 types, if prec3 has small/middle bitint
kind, use maximum of type0 and type1's precision instead of prec3.
Jeff Law [Sun, 3 Dec 2023 05:54:46 +0000 (22:54 -0700)]
[committed] Fix gnu23-builtins-no-dfp
Last patch for the night. There's still a bit of minor fallout left in GCC
(loongarch testsuite for example). But things are looking good on the targets
I test. The plan is to start submitting the various newlib/libgloss fixes
tomorrow.
Anyway, this test was the one I was most concerned about. Basically we're
testing that on a !dfp target that the builtins are not available. It expects
a warning, but gets an error by default now. I just changed the test to use
-fpermissive, so that the test behaves as it did previously.
Jeff Law [Sun, 3 Dec 2023 05:45:48 +0000 (22:45 -0700)]
[committed] Fix build of libgcc on ports using FDPIC
read_encoded_value_with_base has an ifdef'd code path conditional on __FDPIC__
which was calling _Unwind_gnu_Find_got without a prototype. This naturally
caused various build failures.
Jeff Law [Sun, 3 Dec 2023 05:16:33 +0000 (22:16 -0700)]
[committed] Fix a few arc tests
Similar to others. Where it's easy to fix the implicit types or add prototypes
I did. One was just ugly and I didn't want to think too hard, so I just added
-fpermissive.
Jeff Law [Sun, 3 Dec 2023 05:12:55 +0000 (22:12 -0700)]
[committed] Fix nios2 tests
The nios2 port has two tests that are affected by the recent changes. In
cdx-ldstwm-1.c it was easiest to just add -fpermissive. for cdx-ldstwm-2.c
adding an prototype for exit and abort is all that's needed.
Jeff Law [Sun, 3 Dec 2023 05:07:59 +0000 (22:07 -0700)]
[committed] Fix rx build failure in libgcc
The rx port has a bunch of what I presume are ABI compatibility functions in
libgcc. Those compatibility functions call routines such as __eqdf2 from libgcc,
but without a prototype. This patch adds the missing prototypes.
Jeff Law [Sun, 3 Dec 2023 05:03:28 +0000 (22:03 -0700)]
[committed] Fix minor testsuite problems on H8 after C99 changes
Two minor regressions on the H8 were triggered by the C99 changes.  First,
pr58400.c has several functions without prototypes; I just added -fpermissive
to that test.  Second, pr17306-2.c has a single call to an unprototyped
function, for which I added the prototype.
Jeff Law [Sun, 3 Dec 2023 04:54:36 +0000 (21:54 -0700)]
[committed] Fix frv build after C99 changes
Two issues prevent the frv-elf port from building after the C99 changes.  First,
the trampoline code emitted into libgcc has calls to exit, but no prototype.
Adding a trivial prototype for exit() into the macro fixes that little goof.
Second, frvbegin.c has a call to atexit, so a quick prototype is added into
frvbegin.c to fix that problem.
That's enough to get the compiler building again.
gcc/
* config/frv/frv.h (TRANSFER_FROM_TRAMPOLINE): Add prototype for exit.
Alexandre Oliva [Sat, 2 Dec 2023 17:14:02 +0000 (14:14 -0300)]
libsupc++: try cxa_thread_atexit_impl at runtime
g++.dg/tls/thread_local-order2.C fails when the toolchain is built for
a platform that lacks __cxa_thread_atexit_impl, even if the program is
built and run using that toolchain on a (later) platform that offers
__cxa_thread_atexit_impl.
This patch adds runtime testing for __cxa_thread_atexit_impl on
platforms that support weak symbols.
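A rough sketch of the weak-symbol technique being used here (illustrative
only; the real atexit_thread.cc code sits behind __GXX_WEAK__ and falls
back to libsupc++'s own per-thread destructor list, and register_thread_dtor
is a made-up name for this sketch):

// Weak declaration: the reference resolves to a null address if the
// C library does not provide __cxa_thread_atexit_impl.
extern "C" int __cxa_thread_atexit_impl (void (*) (void *), void *, void *)
  __attribute__ ((__weak__));

static int
register_thread_dtor (void (*dtor) (void *), void *obj, void *dso)
{
  // Decided at run time on the machine the program runs on,
  // not at the time the toolchain was configured.
  if (__cxa_thread_atexit_impl)
    return __cxa_thread_atexit_impl (dtor, obj, dso);
  return -1;  // placeholder for the generic fallback path
}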
for libstdc++-v3/ChangeLog
* libsupc++/atexit_thread.cc [__GXX_WEAK__]: Add dynamic
detection of __cxa_thread_atexit_impl.
Harald Anlauf [Fri, 1 Dec 2023 21:44:30 +0000 (22:44 +0100)]
Fortran: deferred-length character optional dummy arguments [PR93762,PR100651]
gcc/fortran/ChangeLog:
PR fortran/93762
PR fortran/100651
* trans-array.cc (gfc_trans_deferred_array): Add presence check
for optional deferred-length character dummy arguments.
* trans-expr.cc (gfc_conv_missing_dummy): The character length for
deferred-length dummy arguments is passed by reference, so that
its value can be returned. Adjust handling for optional dummies.
gcc/testsuite/ChangeLog:
PR fortran/93762
PR fortran/100651
* gfortran.dg/optional_deferred_char_1.f90: New test.
This patch does the same for other callers in the file.
gcc/
* attribs.cc (comp_type_attributes): Pass the full TREE_PURPOSE
to lookup_attribute_spec, rather than just the name.
(remove_attributes_matching): Likewise.
attribs: Consider namespaces when comparing attributes
decl_attributes and comp_type_attributes both had code that
iterated over one list of attributes and looked for corresponding
attributes in another list. This patch makes those lookups
namespace-aware.
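The idea is roughly the following (a hypothetical sketch, not the actual
attribs.cc code; the real comparison also has to accept the "__foo__"
spelling of GNU attribute names):

/* Find an attribute in LIST with the same name and namespace as ATTR,
   or return NULL_TREE if there is none.  */
static tree
find_same_attribute (const_tree attr, tree list)
{
  tree name = get_attribute_name (attr);
  tree ns = get_attribute_namespace (attr);
  for (tree a = list; a; a = TREE_CHAIN (a))
    if (get_attribute_name (a) == name
        && get_attribute_namespace (a) == ns)
      return a;
  return NULL_TREE;
}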
gcc/
* attribs.cc (find_same_attribute): New function.
(decl_attributes, comp_type_attributes): Use it when looking
up one list's attributes in another list.
Later patches add more calls to get_attribute_namespace.
For scoped attributes, this is a simple operation on tree pointers.
But for normal GNU attributes (the vast majority), it involves a
call to get_identifier ("gnu"). This patch caches the identifier
for speed.
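The mechanism is presumably along these lines (a sketch inferred from the
ChangeLog below; the exact code may differ):

/* Cache the interned "gnu" identifier; GTY(()) keeps it alive across
   garbage collections, hence the addition of attribs.cc to GTFILES.  */
static GTY(()) tree gnu_namespace_cache;

static tree
get_gnu_namespace ()
{
  if (!gnu_namespace_cache)
    gnu_namespace_cache = get_identifier ("gnu");
  return gnu_namespace_cache;
}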
gcc/
* Makefile.in (GTFILES): Add attribs.cc.
* attribs.cc (gnu_namespace_cache): New variable.
(get_gnu_namespace): New function.
(lookup_attribute_spec): Use it instead of get_identifier ("gnu").
(get_attribute_namespace, attribs_cc_tests): Likewise.
When I tried to use config-list.mk, the build for every triple except
the build machine's failed for m2. This is because, unlike other
languages, m2 builds target objects during all-gcc. The build will
therefore fail unless you have access to an appropriate binutils
(or an equivalent). That's quite a big ask for over 100 targets. :)
This patch therefore makes m2 an optional inclusion.
Doing that wasn't entirely straightforward though. The current
configure line includes "--enable-languages=all,...", which means
that the "..." can only force languages to be added that otherwise
wouldn't have been. (I.e. the only effect of the "..." is to
override configure autodetection.)
The choice of all,ada and:
# Make sure you have a recent enough gcc (with ada support) in your path so
# that --enable-werror-always will work.
make it clear that lack of GNAT should be a build failure rather than
silently ignored. This predates the D frontend, which requires GDC
in the same way that Ada requires GNAT. I don't know of a reason
why D should be treated differently.
The patch therefore expands the "all" into a specific list of
languages.
That in turn meant that Fortran had to be handled specially,
since bpf and mmix don't support Fortran.
Perhaps there's an argument that m2 shouldn't build target objects
during all-gcc, but (a) it works for practical usage and (b) the
patch is an easy workaround. I'd be happy for the patch to be
reverted if the build system changes.
contrib/
* config-list.mk (OPT_IN_LANGUAGES): New variable.
($(LIST)): Replace --enable-languages=all with a specific list.
Disable fortran on bpf and mmix. Enable the languages in
OPT_IN_LANGUAGES.
All of the attributes in these tables go in the "gnu" namespace.
This means that they can use the traditional GNU __attribute__((...))
syntax and the standard [[gnu::...]] syntax.
Standard attributes are registered dynamically with a null namespace.
There are no supported attributes in other namespaces (clang, vendor
namespaces, etc.).
This patch tries to generalise things by making the namespace
part of the attribute specification.
It's usual for multiple attributes to be defined in the same namespace,
so rather than adding the namespace to each individual definition,
it seemed better to group attributes in the same namespace together.
This would also allow us to reuse the same table for clang attributes
that are written with the GNU syntax, or other similar situations
where the attribute can be accessed via multiple "spellings".
The patch therefore adds a scoped_attribute_specs that contains
a namespace and a list of attributes in that namespace.
It's still possible to have multiple scoped_attribute_specs
for the same namespace. E.g. it makes sense to keep the
C++-specific, C/C++-common, and format-related attributes in
separate tables, even though they're all GNU attributes.
Current lists of attributes are terminated by a null name.
Rather than keep that for the new structure, it seemed neater
to use an array_slice.  This also makes the tables slightly more
compact.
In general, a target might want to support attributes in multiple
namespaces. Rather than have a separate hook for each possibility
(like the three langhooks above), it seemed better to make
TARGET_ATTRIBUTE_TABLE a table of tables. Specifically, it's
an array_slice of scoped_attribute_specs.
We can do the same thing for langhooks, which allows the three hooks
above to be merged into a single LANG_HOOKS_ATTRIBUTE_TABLE.
It also allows the standard attributes to be registered statically
and checked by the usual attribs.cc checks.
The patch adds a TARGET_GNU_ATTRIBUTES helper for the common case
in which a target wants a single table of gnu attributes. It can
only be used if the table is free of preprocessor directives.
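Putting that together, a target's attribute tables end up looking roughly
like this (an illustrative sketch for a made-up target "xyz"; the exact
field names and the TARGET_GNU_ATTRIBUTES macro are whatever the patch
itself defines):

/* A group of attribute specs that share one namespace.  */
struct scoped_attribute_specs
{
  const char *ns;                                /* "gnu", a vendor name, or null */
  array_slice<const attribute_spec> attributes;  /* the specs in that namespace */
};

static const attribute_spec xyz_gnu_attributes[] =
{
  /* { name, min len, max len, decl req., type req., fn type req.,
       affects type identity, handler, exclusions }  */
  { "interrupt", 0, 0, true, false, false, false, NULL, NULL },
};

static const scoped_attribute_specs xyz_gnu_attribute_table =
  { "gnu", { xyz_gnu_attributes } };

/* The target hook now provides an array of such tables, one per namespace.  */
static const scoped_attribute_specs *const xyz_attribute_table[] =
  { &xyz_gnu_attribute_table };

For the common case of a single "gnu" table, TARGET_GNU_ATTRIBUTES presumably
wraps up the last two definitions in one macro.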
There are probably other things we need to do to make vendor namespaces
work smoothly. E.g. in principle it would be good to make exclusion
sets namespace-aware. But to some extent we have that with standard
vs. gnu attributes too. This patch is just supposed to be a first step.
gcc/
* attribs.h (scoped_attribute_specs): New structure.
(register_scoped_attributes): Take a reference to a
scoped_attribute_specs instead of separate namespace and array
parameters.
* plugin.h (register_scoped_attributes): Likewise.
* attribs.cc (register_scoped_attributes): Likewise.
(attribute_tables): Change into an array of scoped_attribute_specs
pointers. Reduce to 1 element for frontends and 1 element for targets.
(empty_attribute_table): Delete.
(check_attribute_tables): Update for changes to attribute_tables.
Use a hash_set to identify duplicates.
(handle_ignored_attributes_option): Update for above changes.
(init_attributes): Likewise.
(excl_pair): Delete.
(test_attribute_exclusions): Update for above changes. Don't
enforce symmetry for standard attributes in the top-level namespace.
* langhooks-def.h (LANG_HOOKS_COMMON_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_FORMAT_ATTRIBUTE_TABLE): Likewise.
(LANG_HOOKS_INITIALIZER): Update accordingly.
(LANG_HOOKS_ATTRIBUTE_TABLE): Define to an empty constructor.
* langhooks.h (lang_hooks::common_attribute_table): Delete.
(lang_hooks::format_attribute_table): Likewise.
(lang_hooks::attribute_table): Redefine to an array of
scoped_attribute_specs pointers.
* target-def.h (TARGET_GNU_ATTRIBUTES): New macro.
* target.def (attribute_spec): Redefine to return an array of
scoped_attribute_specs pointers.
* tree-inline.cc (function_attribute_inlinable_p): Update accordingly.
* doc/tm.texi: Regenerate.
* config/aarch64/aarch64.cc (aarch64_attribute_table): Define using
TARGET_GNU_ATTRIBUTES.
* config/alpha/alpha.cc (vms_attribute_table): Likewise.
* config/avr/avr.cc (avr_attribute_table): Likewise.
* config/bfin/bfin.cc (bfin_attribute_table): Likewise.
* config/bpf/bpf.cc (bpf_attribute_table): Likewise.
* config/csky/csky.cc (csky_attribute_table): Likewise.
* config/epiphany/epiphany.cc (epiphany_attribute_table): Likewise.
* config/gcn/gcn.cc (gcn_attribute_table): Likewise.
* config/h8300/h8300.cc (h8300_attribute_table): Likewise.
* config/loongarch/loongarch.cc (loongarch_attribute_table): Likewise.
* config/m32c/m32c.cc (m32c_attribute_table): Likewise.
* config/m32r/m32r.cc (m32r_attribute_table): Likewise.
* config/m68k/m68k.cc (m68k_attribute_table): Likewise.
* config/mcore/mcore.cc (mcore_attribute_table): Likewise.
* config/microblaze/microblaze.cc (microblaze_attribute_table):
Likewise.
* config/mips/mips.cc (mips_attribute_table): Likewise.
* config/msp430/msp430.cc (msp430_attribute_table): Likewise.
* config/nds32/nds32.cc (nds32_attribute_table): Likewise.
* config/nvptx/nvptx.cc (nvptx_attribute_table): Likewise.
* config/riscv/riscv.cc (riscv_attribute_table): Likewise.
* config/rl78/rl78.cc (rl78_attribute_table): Likewise.
* config/rx/rx.cc (rx_attribute_table): Likewise.
* config/s390/s390.cc (s390_attribute_table): Likewise.
* config/sh/sh.cc (sh_attribute_table): Likewise.
* config/sparc/sparc.cc (sparc_attribute_table): Likewise.
* config/stormy16/stormy16.cc (xstormy16_attribute_table): Likewise.
* config/v850/v850.cc (v850_attribute_table): Likewise.
* config/visium/visium.cc (visium_attribute_table): Likewise.
* config/arc/arc.cc (arc_attribute_table): Likewise. Move further
down file.
* config/arm/arm.cc (arm_attribute_table): Update for above changes,
using...
(arm_gnu_attributes, arm_gnu_attribute_table): ...these new globals.
* config/i386/i386-options.h (ix86_attribute_table): Delete.
(ix86_gnu_attribute_table): Declare.
* config/i386/i386-options.cc (ix86_attribute_table): Replace with...
(ix86_gnu_attributes, ix86_gnu_attribute_table): ...these two globals.
* config/i386/i386.cc (ix86_attribute_table): Define as an array of
scoped_attribute_specs pointers.
* config/ia64/ia64.cc (ia64_attribute_table): Update for above changes,
using...
(ia64_gnu_attributes, ia64_gnu_attribute_table): ...these new globals.
* config/rs6000/rs6000.cc (rs6000_attribute_table): Update for above
changes, using...
(rs6000_gnu_attributes, rs6000_gnu_attribute_table): ...these new
globals.
gcc/ada/
* gcc-interface/gigi.h (gnat_internal_attribute_table): Change
type to scoped_attribute_specs.
* gcc-interface/utils.cc (gnat_internal_attribute_table): Likewise,
using...
(gnat_internal_attributes): ...this as the underlying array.
* gcc-interface/misc.cc (gnat_attribute_table): New global.
(LANG_HOOKS_ATTRIBUTE_TABLE): Use it.
gcc/c-family/
* c-common.h (c_common_attribute_table): Replace with...
(c_common_gnu_attribute_table): ...this.
(c_common_format_attribute_table): Change type to
scoped_attribute_specs.
* c-attribs.cc (c_common_attribute_table): Replace with...
(c_common_gnu_attributes, c_common_gnu_attribute_table): ...these
new globals.
(c_common_format_attribute_table): Change type to
scoped_attribute_specs, using...
(c_common_format_attributes): ...this as the underlying array.
gcc/c/
* c-tree.h (std_attribute_table): Declare.
* c-decl.cc (std_attribute_table): Change type to
scoped_attribute_specs, using...
(std_attributes): ...this as the underlying array.
(c_init_decl_processing): Remove call to register_scoped_attributes.
* c-objc-common.h (c_objc_attribute_table): New global.
(LANG_HOOKS_ATTRIBUTE_TABLE): Use it.
(LANG_HOOKS_COMMON_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_FORMAT_ATTRIBUTE_TABLE): Delete.
gcc/cp/
* cp-tree.h (cxx_attribute_table): Delete.
(cxx_gnu_attribute_table, std_attribute_table): Declare.
* cp-objcp-common.h (LANG_HOOKS_COMMON_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_FORMAT_ATTRIBUTE_TABLE): Delete.
(cp_objcp_attribute_table): New table.
(LANG_HOOKS_ATTRIBUTE_TABLE): Redefine.
* tree.cc (cxx_attribute_table): Replace with...
(cxx_gnu_attributes, cxx_gnu_attribute_table): ...these globals.
(std_attribute_table): Change type to scoped_attribute_specs, using...
(std_attributes): ...this as the underlying array.
(init_tree): Remove call to register_scoped_attributes.
gcc/d/
* d-tree.h (d_langhook_attribute_table): Replace with...
(d_langhook_gnu_attribute_table): ...this.
(d_langhook_common_attribute_table): Change type to
scoped_attribute_specs.
* d-attribs.cc (d_langhook_common_attribute_table): Change type to
scoped_attribute_specs, using...
(d_langhook_common_attributes): ...this as the underlying array.
(d_langhook_attribute_table): Replace with...
(d_langhook_gnu_attributes, d_langhook_gnu_attribute_table): ...these
new globals.
(uda_attribute_p): Update accordingly, and update for new
targetm.attribute_table type.
* d-lang.cc (d_langhook_attribute_table): New global.
(LANG_HOOKS_COMMON_ATTRIBUTE_TABLE): Delete.
gcc/fortran/
* f95-lang.cc: Include attribs.h.
(gfc_attribute_table): Change to an array of scoped_attribute_specs
pointers, using...
(gfc_gnu_attributes, gfc_gnu_attribute_table): ...these new globals.
gcc/jit/
* dummy-frontend.cc (jit_format_attribute_table): Change type to
scoped_attribute_specs, using...
(jit_format_attributes): ...this as the underlying array.
(jit_attribute_table): Change to an array of scoped_attribute_specs
pointers, using...
(jit_gnu_attributes, jit_gnu_attribute_table): ...these new globals
for the original array. Include the format attributes.
(LANG_HOOKS_COMMON_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_FORMAT_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_ATTRIBUTE_TABLE): Define.
gcc/lto/
* lto-lang.cc (lto_format_attribute_table): Change type to
scoped_attribute_specs, using...
(lto_format_attributes): ...this as the underlying array.
(lto_attribute_table): Change to an array of scoped_attribute_specs
pointers, using...
(lto_gnu_attributes, lto_gnu_attribute_table): ...these new globals
for the original array. Include the format attributes.
(LANG_HOOKS_COMMON_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_FORMAT_ATTRIBUTE_TABLE): Delete.
(LANG_HOOKS_ATTRIBUTE_TABLE): Define.
Roger Sayle [Sat, 2 Dec 2023 11:15:14 +0000 (11:15 +0000)]
RISC-V: Improve style to work around PR 60994 in host compiler.
This simple patch allows me to build a cross-compiler to riscv using
older versions of RedHat's system compiler. The issue is PR c++/60994
where g++ doesn't like the same name (demand_flags) to be used by both
a variable and an (enumeration) type, which is also undesirable from a
(GNU) coding style perspective. One solution is to rename the type
to demand_flags_t, but a less invasive change is to simply use another
identifier for the problematic local variable, renaming demand_flags
to dflags.
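A reduced illustration of the clash (not the actual riscv-vsetvl.cc code,
and not the exact PR testcase):

enum demand_flags { DEMAND_NONE, DEMAND_AVL };

void
parse_insn_sketch ()
{
  /* Reusing the enumeration's name for a local variable is valid C++,
     but older system compilers affected by PR c++/60994 can choke on it.  */
  demand_flags demand_flags = DEMAND_NONE;
  (void) demand_flags;
}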
2023-12-02 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/riscv/riscv-vsetvl.cc (vsetvl_info::parse_insn): Rename
local variable from demand_flags to dflags, to avoid conflicting
with (enumeration) type of the same name.
For the vector constant extract-{even/odd} permutation, replace the default
[x]vshuf instruction combination with the [x]vilv{l/h} instruction, which
reduces the instruction count and improves performance.
Li Wei [Tue, 28 Nov 2023 07:38:37 +0000 (15:38 +0800)]
LoongArch: Accelerate optimization of scalar signed/unsigned popcount.
On LoongArch, vector popcount has corresponding instructions, while scalar
popcount does not.  Currently, scalar popcount is calculated through a loop,
and a value that is not a power of two needs several iterations, so using
the vector popcount instruction is considered as an optimization.
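Illustrative example of what this affects (ordinary scalar source; whether
a vector register is used is up to the backend expansion):

int
count_bits (unsigned long x)
{
  /* Previously expanded as a bit-twiddling loop on LoongArch; with this
     patch it can be computed via the LSX vector popcount instruction.  */
  return __builtin_popcountl (x);
}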
gcc/ChangeLog:
* config/loongarch/loongarch.md (v2di): Used to simplify the
following templates.
(popcount<mode>2): New.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/popcnt.c: New test.
* gcc.target/loongarch/popcount.c: New test.
chenxiaolong [Tue, 28 Nov 2023 08:23:53 +0000 (16:23 +0800)]
LoongArch: Added vectorized hardware inspection for testsuite.
When the GCC regression tests are executed on a CPU that does not support
vectorization, the loongarch/vector directory has FAIL entries for all test
cases that are vectorization run tests.  To solve this kind of problem, a
vectorized-hardware detection function was added, so that on such CPUs these
tests can only be compiled, not run.
Li Wei [Tue, 28 Nov 2023 07:56:35 +0000 (15:56 +0800)]
LoongArch: Remove duplicate definition of CLZ_DEFINED_VALUE_AT_ZERO.
In the r14-5547 commit, C[LT]Z_DEFINED_VALUE_AT_ZERO were defined at
the same time, but CLZ_DEFINED_VALUE_AT_ZERO had in fact already been
defined, so remove the duplicate definition.
We first force_reg such a CONST_INT (one that fits in a 32-bit value) into an
SImode reg and then use such special patterns.
Patterns with this operand match should only be valid on !TARGET_64BIT.
In PR112801, combine matched such patterns on RV64 incorrectly (those patterns
should only be valid on RV32).
This is the bug:
andi a2,a2,2
vsetivli zero,2,e64,m1,ta,ma
sext.w a3,a4
vmv.v.x v1,a2
vslide1down.vx v1,v1,a4 -> it should be a3 instead of a4.
Such incorrect codegen is caused by
...
(sign_extend:DI (subreg:SI (reg:DI 135 [ f.0_3 ]) 0))
] UNSPEC_VSLIDE1DOWN)) 16935 {*pred_slide1downv2di_extended}
...
It is incorrect to combine into patterns that should not be valid on an RV64 system.
So add !TARGET_64BIT to all patterns of the same type, which fixes this issue as
well as robustifies vector.md.
PR target/112801
gcc/ChangeLog:
* config/riscv/vector.md: Add !TARGET_64BIT.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr112801.c: New test.
Pan Li [Thu, 30 Nov 2023 07:08:50 +0000 (15:08 +0800)]
RISC-V: Bugfix for legitimize move when get vec mode in zve32f
If we want to extract a 64-bit value but ELEN < 64, we use an RVV
vector mode with EEW = 32 to extract the highpart and lowpart.
However, this approach didn't handle DFmode in the movdf pattern
for ZVE32F, and of course resulted in an ICE for zve32f.
This patch reuses that approach with some additional handling:
since extracting the low and high parts directly from an FP-mode register
is not meaningful, we need one integer register as a bridge here.  For example:
rtx tmp = gen_reg_rtx (DImode);
reg:DI = reg:DF (fmv.x.d) // Move the DF reg to a DI reg
...
perform the extract for the high and low parts
...
reg:DF = reg:DI (fmv.d.x) // Move the DI reg back to DF after all is done
PR target/112743
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_legitimize_move): Take the
exist (U *mode) and handle DFmode like DImode when EEW is
32bits for ZVE32F.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/pr112743-2.c: New test.
Harald Anlauf [Thu, 30 Nov 2023 20:53:21 +0000 (21:53 +0100)]
Fortran: copy-out for possibly missing OPTIONAL CLASS arguments [PR112772]
gcc/fortran/ChangeLog:
PR fortran/112772
* trans-expr.cc (gfc_conv_class_to_class): Make copy-out conditional
on the presence of an OPTIONAL CLASS argument passed to an OPTIONAL
CLASS dummy.
gcc/testsuite/ChangeLog:
PR fortran/112772
* gfortran.dg/missing_optional_dummy_7.f90: New test.
Jason Merrill [Mon, 25 Sep 2023 09:15:02 +0000 (10:15 +0100)]
c++: mangle function template constraints
Per https://github.com/itanium-cxx-abi/cxx-abi/issues/24 and
https://github.com/itanium-cxx-abi/cxx-abi/pull/166
We need to mangle constraints to be able to distinguish between function
templates that only differ in constraints. From the latter link, we want to
use the template parameter mangling previously specified for lambdas to also
make explicit the form of a template parameter where the argument is not a
"natural" fit for it, such as when the parameter is constrained or deduced.
I'm concerned about how the latter link changes the mangling for some C++98
and C++11 patterns, so I've limited template_parm_natural_p to avoid two
cases found by running the testsuite with -Wabi forced on:
template <class T, T V> T f() { return V; }
int main() { return f<int,42>(); }
template <int i> int max() { return i; }
template <int i, int j, int... rest> int max()
{
int sub = max<j, rest...>();
return i > sub ? i : sub;
}
int main() { return max<1,2,3>(); }
A third C++11 pattern is changed by this patch:
template <template <typename...> class TT, typename... Ts> TT<Ts...> f();
template <typename> struct A { };
int main() { f<A,int>(); }
I aim to resolve these with the ABI committee before GCC 14.1.
We also need to resolve https://github.com/itanium-cxx-abi/cxx-abi/issues/38
(mangling references to dependent template-ids where the name is fully
resolved) as references to concepts in std:: will consistently run into this
area. This is why mangle-concepts1.C only refers to concepts in the global
namespace so far.
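As an illustration of the kind of code that hits that area (a made-up
example, not one of the new tests):

#include <concepts>

/* The constraint refers to a concept in std::, i.e. a dependent
   template-id whose mangling is the subject of ABI issue 38.  */
template <typename T> requires std::integral<T>
T twice (T t) { return t + t; }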
The library changes are to avoid trying to mangle builtins, which fails.
Demangler support and test coverage are not complete yet.
Alexandre Oliva [Fri, 1 Dec 2023 17:31:22 +0000 (14:31 -0300)]
hardcfr: make builtin_return tests more portable [PR112334]
Rework __builtin_return tests to explicitly call __builtin_apply and
use its return value rather than anything else. Also require
untyped_assembly.  Avoid the noise from exceptions escaping the
builtin-applied function, but add a test to cover their effects as
well.
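The reworked tests presumably follow the classic untyped forwarding
pattern, something like this (a sketch of the technique only; callee and
the 64-byte argument-block size are made up, not taken from the tests):

extern "C" int callee (int, int);

int
wrapper (int a, int b)
{
  /* Capture the incoming arguments, forward them untyped to callee,
     and return whatever value callee produced.  */
  void *args = __builtin_apply_args ();
  void *result = __builtin_apply ((void (*) ()) callee, args, 64);
  __builtin_return (result);
}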