Pan Li [Wed, 3 Jul 2024 14:06:48 +0000 (22:06 +0800)]
RISC-V: Bugfix vfmv insn honor zvfhmin for FP16 SEW [PR115763]
According to the ISA, the zvfhmin sub-extension should only contain
conversion insns. Thus, the vfmv insn acting on FP16 should not be
available when only the zvfhmin option is given.
This patch fixes that by splitting the pred_broadcast define_insn
into zvfhmin and zvfh parts. Consider the example below:
Before this patch:
test:
vsetivli zero,2,e16,mf4,ta,ma
vfmv.v.f v1,fa0 // should not leverage vfmv for zvfhmin
vse16.v v1,0(a0)
ret
After this patch:
test:
addi sp,sp,-16
fsh fa0,14(sp)
addi a5,sp,14
vsetivli zero,2,e16,mf4,ta,ma
vlse16.v v1,0(a5),zero
vse16.v v1,0(a0)
addi sp,sp,16
jr ra
PR target/115763
gcc/ChangeLog:
* config/riscv/vector.md (*pred_broadcast<mode>): Split into
zvfh and zvfhmin part.
(*pred_broadcast<mode>_zvfh): New define_insn for zvfh part.
(*pred_broadcast<mode>_zvfhmin): Ditto but for zvfhmin.
Pan Li [Tue, 2 Jul 2024 00:57:50 +0000 (08:57 +0800)]
Match: Allow more types truncation for .SAT_TRUNC
The .SAT_TRUNC has an input and an output type, i.e. it converts from
itype to otype where sizeof (otype) < sizeof (itype). The
previous patch only allowed sizeof (otype) == sizeof (itype) / 2,
but we actually have 1/4 and 1/8 truncations as well.
This patch supports more truncations whenever
sizeof (otype) < sizeof (itype). The below truncations will be
covered.
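For illustration, a hedged sketch of one source shape such a 1/8
truncation could take (function name hypothetical, not from the patch):
#include <stdint.h>
/* Saturating 1/8 truncation: uint64_t -> uint8_t.  Values above
   UINT8_MAX saturate to UINT8_MAX instead of wrapping.  */
uint8_t
sat_trunc_u64_to_u8 (uint64_t x)
{
  return x > UINT8_MAX ? UINT8_MAX : (uint8_t) x;
}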
The below test suites are passed for this patch:
1. The rv64gcv fully regression tests.
2. The rv64gcv build with glibc.
3. The x86 bootstrap tests.
4. The x86 fully regression tests.
gcc/ChangeLog:
* match.pd: Allow truncation for any otype smaller than itype.
The below test suites are passed for this patch
* The x86 bootstrap test.
* The x86 fully regression test.
* The rv64gcv fully regression tests.
gcc/ChangeLog:
* tree-vect-patterns.cc (gimple_unsigned_integer_sat_trunc): Add
new decl generated by match.
(vect_recog_sat_trunc_pattern): Add new func impl to recog the
.SAT_TRUNC pattern.
This patch adds a pattern in match.pd folding x/sqrt(x) to sqrt(x) for -funsafe-math-optimizations. Test cases were added for double, float, and long double.
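For instance (a minimal sketch, not one of the added test cases):
#include <math.h>
/* With -funsafe-math-optimizations, x / sqrt (x) can be folded
   to sqrt (x), since x / sqrt (x) == sqrt (x) for x > 0.  */
double
f (double x)
{
  return x / sqrt (x);
}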
The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
Ok for mainline?
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
gcc/
When make_type_from_size is called with a biased type, for an entity
that isn't explicitly biased, we may refrain from reusing the given
type because it doesn't seem to match, and then proceed to create an
exact copy of that type.
Compute earlier the biased status of the expected type, early enough
for the suitability check of the given type. Modify for_biased
instead of biased_p, so that biased_p remains with the given type's
status for the comparison.
Avoid creating unnecessary copies of types in make_type_from_size, by
caching and reusing previously-created identical types, similarly to
the caching of packable types.
While at that, fix two vaguely related issues:
- TYPE_DEBUG_TYPE's storage is shared with other sorts of references
to types, so it shouldn't be accessed unless
TYPE_CAN_HAVE_DEBUG_TYPE_P holds.
- When we choose the narrower/packed variant of a type as the main
debug info type, we fail to output its name if we fail to follow debug
type for the TYPE_NAME decl type in modified_type_die.
for gcc/ada/ChangeLog
* gcc-interface/misc.cc (gnat_get_array_descr_info): Only follow
TYPE_DEBUG_TYPE if TYPE_CAN_HAVE_DEBUG_TYPE_P.
* gcc-interface/utils.cc (sized_type_hash): New struct.
(sized_type_hasher): New struct.
(sized_type_hash_table): New variable.
(init_gnat_utils): Allocate it.
(destroy_gnat_utils): Release it.
(sized_type_hasher::equal): New.
(hash_sized_type): New.
(canonicalize_sized_type): New.
(make_type_from_size): Use it to cache packed variants. Fix
type reuse by combining biased_p and for_biased earlier. Hold
the combination in for_biased, adjusting later uses.
[debug] Avoid dropping bits from num/den in fixed-point types
We used to use an unsigned 128-bit type to hold the numerator and
denominator used to represent the delta of a fixed-point type in debug
information, but there are cases in which that was not enough, and
more significant bits silently overflowed and got omitted from debug
information.
Introduce a mode in which UI_To_gnu selects a wide-enough unsigned
type, and use that to convert numerator and denominator. While at
that, avoid exceeding the maximum precision for wide ints, and for
available int modes, when selecting a type to represent very wide
constants, falling back to 0/0 for unrepresentable fractions.
for gcc/ada/ChangeLog
* gcc-interface/cuintp.cc (UI_To_gnu): Add mode that selects a
wide enough unsigned type. Fail if the constant exceeds the
representable numbers.
* gcc-interface/decl.cc (gnat_to_gnu_entity): Use it for
numerator and denominator of fixed-point types. In case of
failure, fall back to an indeterminate fraction.
Alexandre Oliva [Thu, 13 Jun 2024 03:12:47 +0000 (00:12 -0300)]
[i386] restore recompute to override opts after change [PR113719]
The first patch for PR113719 regressed gcc.dg/ipa/iinline-attr.c on
toolchains configured to --enable-frame-pointer, because the
optimization node created within handle_optimize_attribute had
flag_omit_frame_pointer incorrectly set, whereas
default_optimization_node didn't. With this difference,
can_inline_edge_by_limits_p flagged an optimization mismatch and we
refused to inline the function that had a redundant optimization flag
into one that didn't, which is exactly what is tested for there.
This patch restores the calls to ix86_default_align and
ix86_recompute_optlev_based_flags that used to be, and ought to be,
issued during TARGET_OVERRIDE_OPTIONS_AFTER_CHANGE, but preserves the
intent of the original change, of having those functions called at
different spots within ix86_option_override_internal. To that end,
the remaining bits were refactored into a separate function, that was
in turn adjusted to operate on explicitly-passed opts and opts_set,
rather than going for their global counterparts.
for gcc/ChangeLog
PR target/113719
* config/i386/i386-options.cc
(ix86_override_options_after_change_1): Add opts and opts_set
parms, operate on them, after factoring out of...
(ix86_override_options_after_change): ... this. Restore calls
of ix86_default_align and ix86_recompute_optlev_based_flags.
(ix86_option_override_internal): Call the factored-out bits.
The ACLE requires __ARM_FEATURE_SVE_BF16 to be defined when the SVE and
BF16 features and the associated intrinsics are available.
GCC does support the required intrinsics for TARGET_SVE_BF16 so define
this macro too.
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/
PR target/115475
* config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins):
Define __ARM_FEATURE_SVE_BF16 for TARGET_SVE_BF16.
gcc/testsuite/
PR target/115475
* gcc.target/aarch64/acle/bf16_sve_feature.c: New test.
The ACLE asks the user to test for __ARM_FEATURE_BF16 before using the
<arm_bf16.h> header but GCC doesn't set this up.
LLVM does, so this is an inconsistency between the compilers.
This patch enables that macro for TARGET_BF16_FP.
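A minimal sketch of the usage pattern the ACLE describes (assumed, not
taken from the patch):
/* Only include the header when the compiler advertises BF16 support.  */
#ifdef __ARM_FEATURE_BF16
#include <arm_bf16.h>
bfloat16_t copy_bf16 (bfloat16_t x) { return x; }
#endif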
Bootstrapped and tested on aarch64-none-linux-gnu.
gcc/
PR target/115457
* config/aarch64/aarch64-c.cc (aarch64_update_cpp_builtins):
Define __ARM_FEATURE_BF16 for TARGET_BF16_FP.
gcc/testsuite/
PR target/115457
* gcc.target/aarch64/acle/bf16_feature.c: New test.
Richard Biener [Fri, 28 Jun 2024 14:04:13 +0000 (16:04 +0200)]
Handle NULL stmt in SLP_TREE_SCALAR_STMTS
The following starts to handle NULL elements in SLP_TREE_SCALAR_STMTS
with the first candidate being the two-operator nodes where some
lanes are do-not-care and also do not have a scalar stmt computing
the result. I originally added SLP_TREE_SCALAR_STMTS to two-operator
nodes but this exposes PR115764, so I've split that out.
I have a patch that uses NULL elements for loads from groups with gaps,
where we currently get around not doing that by having a load permutation.
AVR: target/98762 - Handle partial clobber in movqi output.
PR target/98762
gcc/
* config/avr/avr.cc (avr_out_movqi_r_mr_reg_disp_tiny): Properly
restore the base register when it is partially clobbered.
gcc/testsuite/
* gcc.target/avr/torture/pr98762.c: New test.
Tamar Christina [Wed, 3 Jul 2024 08:31:09 +0000 (09:31 +0100)]
ivopts: replace constant_multiple_of with aff_combination_constant_multiple_p [PR114932]
The current implementation of constant_multiple_of is doing a more limited
version of aff_combination_constant_multiple_p.
The only non-debug usage of constant_multiple_of will proceed with the values
as affine trees. There is scope for further optimization here, namely I believe
that if constant_multiple_of returns the aff_tree after the conversion then
get_computation_aff_1 can use it instead of manually creating the aff_tree.
However I think it makes sense to first commit this smaller change and then
incrementally change things.
gcc/ChangeLog:
PR tree-optimization/114932
* tree-ssa-loop-ivopts.cc (constant_multiple_of): Use
aff_combination_constant_multiple_p instead.
Thomas pointed out that we sometimes failed to eliminate some dead code
(specifically clobbers of otherwise unused registers) on nvptx when
late-combine is enabled. This happens because:
- combine is able to optimise the function in a way that exposes dead code.
This leaves the df information in a "dirty" state.
- late_combine calls df_analyze without DF_LR_RUN_DCE set.
This updates the df information and clears the "dirty" state.
- late_combine doesn't find any extra optimisations, and so leaves
the df information up-to-date.
- if_after_combine (ce2) calls df_analyze with DF_LR_RUN_DCE set.
Because the df information is already up-to-date, fast DCE is
not run.
The upshot is that running late-combine has the effect of suppressing
a DCE opportunity that would have been noticed without late_combine.
I think this shows that we should track the state of the DCE separately
from the LR problem. Every pass updates the latter, but not all passes
update the former.
gcc/
* df.h (DF_LR_DCE): New df_problem_id.
(df_lr_dce): New macro.
* df-core.cc (rest_of_handle_df_finish): Check for a null free_fun.
* df-problems.cc (df_lr_finalize): Split out fast DCE handling to...
(df_lr_dce_finalize): ...this new function.
(problem_LR_DCE): New df_problem.
(df_lr_add_problem): Register LR_DCE rather than LR itself.
* dce.cc (fast_dce): Clear df_lr_dce->solutions_dirty.
Richard Biener [Wed, 3 Jul 2024 07:05:06 +0000 (09:05 +0200)]
tree-optimization/115764 - testcase for BB SLP issue
The following adds a testcase for a CSE issue with BB SLP two operator
handling when we make those CSE aware by providing SLP_TREE_SCALAR_STMTS
for them. This was reduced from 526.blender_r.
PR tree-optimization/115764
* gcc.dg/vect/bb-slp-76.c: New testcase.
Lewis Hyatt [Sun, 16 Jun 2024 01:09:01 +0000 (21:09 -0400)]
preprocessor: Create the parser before handling command-line includes [PR115312]
Since r14-2893, we create a parser object in preprocess-only mode for the
purpose of parsing #pragma while preprocessing. The parser object was
formerly created after calling c_finish_options(), which leads to problems
on platforms that don't use stdc-predef.h (such as MinGW, as reported in
the PR). On such platforms, the call to c_finish_options() will process
the first command-line-specified include file. If that include involves a PCH, then
c-ppoutput.cc will encounter a state it did not anticipate. Fix it by
creating the parser prior to calling c_finish_options().
gcc/c-family/ChangeLog:
PR pch/115312
* c-opts.cc (c_common_init): Call c_init_preprocess() before
c_finish_options() so that a parser is available to process any
includes specified on the command line.
gcc/testsuite/ChangeLog:
PR pch/115312
* g++.dg/pch/pr115312.C: New test.
* g++.dg/pch/pr115312.Hs: New test.
This patch improves GCC’s vectorization of __builtin_popcount for the aarch64 target
by adding popcount patterns for vector modes besides QImode, i.e., HImode,
SImode and DImode.
With this patch, we now generate the following for V8HI:
cnt v1.16b, v0.16b
uaddlp v2.8h, v1.16b
For V4HI, we generate:
cnt v1.8b, v0.8b
uaddlp v2.4h, v1.8b
For V4SI, we generate:
cnt v1.16b, v0.16b
uaddlp v2.8h, v1.16b
uaddlp v3.4s, v2.8h
For V4SI with TARGET_DOTPROD, we generate the following instead:
movi v0.4s, #0
movi v1.16b, #1
cnt v3.16b, v2.16b
udot v0.4s, v3.16b, v1.16b
For V2SI, we generate:
cnt v1.8b, v0.8b
uaddlp v2.4h, v1.8b
uaddlp v3.2s, v2.4h
For V2SI with TARGET_DOTPROD, we generate the following instead:
movi v0.8b, #0
movi v1.8b, #1
cnt v3.8b, v2.8b
udot v0.2s, v3.8b, v1.8b
For V2DI, we generate:
cnt v1.16b, v0.16b
uaddlp v2.8h, v1.16b
uaddlp v3.4s, v2.8h
uaddlp v4.2d, v3.4s
For V2DI with TARGET_DOTPROD, we generate the following instead:
movi v0.4s, #0
movi v1.16b, #1
cnt v3.16b, v2.16b
udot v0.4s, v3.16b, v1.16b
uaddlp v0.2d, v0.4s
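For reference, a sketch of the kind of source that exercises these
patterns (assumed, not part of the patch):
#include <stdint.h>
/* With -O2 -ftree-vectorize on aarch64, this loop can now be
   vectorized using the V4SI popcount sequence shown above.  */
void
popcount_u32 (uint32_t *restrict dst, uint32_t *restrict src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = __builtin_popcount (src[i]);
}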
Andrew Pinski [Tue, 2 Jul 2024 22:02:17 +0000 (15:02 -0700)]
aarch64: Add testcase for vector convert lowering [PR110473]
Vector convert lowering was changed to use the convert optab directly
starting in r15-1677-gc320a7efcd35ba. I had filed an aarch64-specific
issue for this, so it makes sense to add an aarch64-specific testcase
instead of just having x86_64-specific ones for this.
Pushed as obvious after testing for aarch64-linux-gnu.
Andrew Pinski [Mon, 1 Jul 2024 01:21:15 +0000 (18:21 -0700)]
Add some optimizations to gimple_expand_builtin_cabs
While looking into the original folding code for cabs
(moved to match in r6-4111-gabcc43f5323869), I noticed that
`cabs(x+0i)` was optimized even without the need for sqrt.
I also noticed that the code generation in this case is now
worse if the target has a sqrt. So let's implement
these small optimizations in gimple_expand_builtin_cabs.
Note `cabs(x+0i)` is done without unsafe math optimizations.
This is because the definition of `cabs(x+0i)` is `hypot(x, 0)`,
which the standard says just returns `abs(x)`.
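A hedged sketch of that folding (not one of the new tests):
#include <complex.h>
#include <math.h>
/* cabs (x + 0i) is hypot (x, 0), which is defined to return
   fabs (x), so this folds without -funsafe-math-optimizations.  */
double
f (double x)
{
  return cabs (CMPLX (x, 0.0));  /* folds to fabs (x) */
}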
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* tree-complex.cc (gimple_expand_builtin_cabs): Add
`cabs(a+ai)`, `cabs(x+0i)` and `cabs(0+xi)` optimizations.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/cabs-3.c: New test.
* gcc.dg/tree-ssa/cabs-4.c: New test.
* gcc.dg/tree-ssa/cabs-5.c: New test.
* gcc.dg/tree-ssa/cabs-6.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Sun, 30 Jun 2024 19:57:14 +0000 (12:57 -0700)]
Move cabs expansion from powcabs to complex lowering [PR115710]
Expanding cabs in powcabs might be too late, as forwprop might
recombine the load from memory with the complex expr. Moving it
instead to complex lowering allows us to use the real/imag
components from the loads directly. This also allows vectorization.
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/115710
gcc/ChangeLog:
* tree-complex.cc (init_dont_simulate_again): Handle CABS.
(gimple_expand_builtin_cabs): New function, moved mostly
from tree-ssa-math-opts.cc.
(expand_complex_operations_1): Call gimple_expand_builtin_cabs.
* tree-ssa-math-opts.cc (gimple_expand_builtin_cabs): Remove.
(build_and_insert_binop): Remove.
(pass_data_expand_powcabs): Update comment.
(pass_expand_powcabs::execute): Don't handle CABS.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/cabs-1.c: New test.
* gcc.dg/tree-ssa/cabs-2.c: New test.
* gfortran.dg/vect/pr115710.f90: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Mon, 1 Jul 2024 01:39:07 +0000 (18:39 -0700)]
Small optimization for complex addition, real/imag parts the same
This is just a small optimization for the case where the real and imag
parts are the same when lowering complex addition/subtraction. We only
need to do the addition once when the real and imag parts are the same (on
both sides of the operator). This gets done later on by FRE/PRE/DOM but
having it done soon allows the cabs lowering to remove the sqrt and
just change it to a multiply by a constant.
Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-complex.cc (expand_complex_addition): If both
operands have the same real and imag parts, only
add the addition once.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/complex-8.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Jakub Jelinek [Tue, 2 Jul 2024 20:09:58 +0000 (22:09 +0200)]
c++: Fix ICE on constexpr placement new [PR115754]
C++26, via paper P2747R2, makes placement new constexpr.
While working on a patch for that, I've noticed we ICE starting with
GCC 14 on the following testcase.
The problem is that e.g. for the void * to sometype * cast checks,
we really assume the cast's operand has been constant evaluated
as a prvalue, but on the testcase the cast itself is evaluated with
vc_discard, which means op can end up being e.g. a VAR_DECL which the
later code doesn't like and asserts on.
If the result type is void, we don't really need the cast operand
for anything, so can use vc_discard for the recursive call,
VIEW_CONVERT_EXPR can appear on the lhs, so we need to honor the
lval but otherwise the patch uses vc_prvalue.
I'd like to get this patch in before the rest of P2747R2 implementation,
so that it can be backported to 14.2 later on.
2024-07-02 Jakub Jelinek <jakub@redhat.com>
Jason Merrill <jason@redhat.com>
PR c++/115754
* constexpr.cc (cxx_eval_constant_expression) <case CONVERT_EXPR>:
For conversions to void, pass vc_discard to the recursive call
and otherwise for tcode other than VIEW_CONVERT_EXPR pass vc_prvalue.
Jakub Jelinek [Tue, 2 Jul 2024 20:08:45 +0000 (22:08 +0200)]
c++: Implement C++26 P3144R2 - Deleting a Pointer to an Incomplete Type Should be Ill-formed [PR115747]
The following patch implements the C++26 paper which makes delete
and delete[] on incomplete class types invalid; previously this has
been UB unless the class had a trivial destructor and no custom
deallocator.
The patch uses permerror_opt, so -Wno-delete-incomplete makes it
still compile without warnings like before, and -fpermissive makes
it warn but not error; in SFINAE contexts it is considered an error
in C++26 and later.
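A minimal sketch of code that is now rejected (hypothetical, not one of
the new tests):
struct S;                      // incomplete class type
void
f (S *p)
{
  delete p;                    // ill-formed in C++26; warning with -fpermissive
}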
2024-07-02 Jakub Jelinek <jakub@redhat.com>
Jason Merrill <jason@redhat.com>
PR c++/115747
gcc/cp/
* init.cc: Implement C++26 P3144R2 - Deleting a Pointer to an
Incomplete Type Should be Ill-formed.
(build_vec_delete_1): Emit permerror_at and return error_mark_node
for delete [] on incomplete type.
(build_delete): Similarly for delete.
gcc/testsuite/
* g++.dg/init/delete1.C: Adjust expected diagnostics for C++26.
* g++.dg/warn/Wdelete-incomplete-1.C: Likewise.
* g++.dg/warn/incomplete1.C: Likewise.
* g++.dg/ipa/pr85607.C: Likewise.
* g++.dg/cpp26/delete1.C: New test.
* g++.dg/cpp26/delete2.C: New test.
* g++.dg/cpp26/delete3.C: New test.
Jakub Jelinek [Tue, 2 Jul 2024 20:07:30 +0000 (22:07 +0200)]
c++: Implement C++26 P0963R3 - Structured binding declaration as a condition [PR115745]
This C++26 paper allows a structured binding declaration in
if/while/for/switch conditions, where the structured binding shouldn't
be initialized by an array (so in the standard only non-union class types;
as extension _Complex will also work and vectors will be diagnosed because
of conversion issues) and the decision variable is the artificial variable
(e in the standard) itself contextually converted to bool or converted to
some integer/enumeration type.
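As an illustration, a hedged sketch of the new syntax (type and
function names hypothetical):
struct Result
{
  bool ok;
  int value;
  explicit operator bool () const { return ok; }
};
Result try_get ();
void use (int);

void
g ()
{
  // C++26: the structured binding declaration itself is the condition;
  // the artificial variable e is contextually converted to bool.
  if (auto [ok, value] = try_get ())
    use (value);
}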
The standard requires that the conversion is evaluated before the get calls
in the case of classes using the std::tuple protocol, so the largest part of the patch is making
sure this can be done during instantiation without duplicating too much
code.
In cp_parser_condition, creating a TARGET_EXPR to hold temporarily the
bool or int/enum result of the conversion across the get calls is easy, it
could be just added in between cp_finish_decl and cp_finish_decomp, but for
pt.cc there was no easy spot to add that.
In the end, the patch uses DECL_DECOMP_BASE for this. That tree is used
primarily for the user vars or var proxies to point back at the
DECL_ARTIFICIAL e variable, before this patch it has been NULL_TREE on
the base. In some places code was checking if DECL_DECOMP_BASE is NULL_TREE
to find out if it is the base or user var/var proxy.
The patch introduces DECL_DECOMP_IS_BASE macro for what used to be
!DECL_DECOMP_BASE and can stick something else in the base's
DECL_DECOMP_BASE as long as it is not a VAR_DECL.
The patch uses integer_zero_node to mark if/while/for condition structured
binding, integer_one_node to mark switch condition structured binding and
finally cp_finish_decomp sets it to TARGET_EXPR if some get method calls are
emitted and from there the callers can pick that up. This way I also
avoided code duplication between !processing_template_decl parsing and
pt.cc.
2024-07-02 Jakub Jelinek <jakub@redhat.com>
PR c++/115745
gcc/cp/
* cp-tree.h: Implement C++26 P0963R3 - Structured binding declaration
as a condition.
(DECL_DECOMP_BASE): Adjust comment.
(DECL_DECOMP_IS_BASE): Define.
* parser.cc (cp_parser_selection_statement): Adjust
cp_parser_condition caller.
(cp_parser_condition): Add KEYWORD argument. Parse
C++26 structured bindings in conditions.
(cp_parser_c_for, cp_parser_iteration_statement): Adjust
cp_parser_condition callers.
(cp_parser_simple_declaration): Adjust
cp_parser_decomposition_declaration caller.
(cp_parser_decomposition_declaration): Add KEYWORD argument.
If it is not RID_MAX, diagnose for C++23 and older rather than C++14
and older. Set DECL_DECOMP_BASE to integer_zero_node for structured
bindings used in if/while/for conditions or integer_one_node for
those used in switch conditions.
* decl.cc (poplevel, check_array_initializer): Use DECL_DECOMP_IS_BASE
instead of !DECL_DECOMP_BASE.
(cp_finish_decomp): Diagnose array initializer for structured bindings
used in conditions. If using std::tuple_{size,element}, emit
conversion to bool or integer/enumeration of e into a TARGET_EXPR
before emitting get method calls.
* decl2.cc (mark_used): Use DECL_DECOMP_IS_BASE instead of
!DECL_DECOMP_BASE.
* module.cc (trees_in::tree_node): Likewise.
* typeck.cc (maybe_warn_about_returning_address_of_local): Likewise.
* semantics.cc (maybe_convert_cond): For structured bindings with
TARGET_EXPR DECL_DECOMP_BASE use that as condition.
(finish_switch_cond): Likewise.
gcc/testsuite/
* g++.dg/cpp1z/decomp16.C: Adjust expected diagnostics.
* g++.dg/cpp26/decomp3.C: New test.
* g++.dg/cpp26/decomp4.C: New test.
* g++.dg/cpp26/decomp5.C: New test.
* g++.dg/cpp26/decomp6.C: New test.
* g++.dg/cpp26/decomp7.C: New test.
* g++.dg/cpp26/decomp8.C: New test.
* g++.dg/cpp26/decomp9.C: New test.
* g++.dg/cpp26/decomp10.C: New test.
David Faust [Mon, 10 Jun 2024 17:59:05 +0000 (10:59 -0700)]
bpf,btf: enable BTF pruning by default for BPF
This patch enables -gprune-btf by default in the BPF backend when
generating BTF information, and fixes BPF CO-RE generation when using
-gprune-btf.
When generating BPF CO-RE information, we must ensure that types used
in CO-RE relocations always have sufficient BTF information emited so
that the CO-RE relocations can be processed by a BPF loader. The BTF
pruning algorithm on its own does not have sufficient information to
determine which types are used in a BPF CO-RE relocation, so this
information must be supplied by the BPF backend, using a new
btf_mark_type_used function.
Co-authored-by: Cupertino Miranda <cupertino.miranda@oracle.com>
gcc/
* btfout.cc (btf_mark_type_used): New.
* ctfc.h (btf_mark_type_used): Declare it here.
* config/bpf/bpf.cc (bpf_option_override): Enable -gprune-btf
by default if -gbtf is enabled.
* config/bpf/core-builtins.cc (extra_fn): New typedef.
(compute_field_expr): Add callback parameter, and call it if supplied.
Fix computation for MEM_REF.
(mark_component_type_as_used): New.
(bpf_mark_types_as_used): Likewise.
(bpf_expand_core_builtin): Call here.
* doc/invoke.texi (Debugging Options): Note that -gprune-btf is
enabled by default for BPF target when generating BTF.
gcc/testsuite/
* gcc.dg/debug/btf/btf-variables-5.c: Adjust one test for bpf-*-*
target.
David Faust [Mon, 10 Jun 2024 17:54:53 +0000 (10:54 -0700)]
btf: add -gprune-btf option
This patch adds a new option, -gprune-btf, to control BTF debug info
generation.
As the name implies, this option enables a kind of "pruning" of the BTF
information before it is emitted. When enabled, rather than emitting
all type information translated from DWARF, only information for types
directly used in the source program is emitted.
The primary purpose of this pruning is to reduce the amount of
unnecessary BTF information emitted, especially for BPF programs. It is
very common for BPF programs to include Linux kernel internal headers in
order to have access to kernel data structures. However, doing so often
has the side effect of also adding type definitions for a large number
of types which are not actually used by nor relevant to the program.
In these cases, -gprune-btf commonly reduces the size of the resulting
BTF information by 10x or more, as seen on average when compiling Linux
kernel BPF selftests. This both slims down the size of the resulting
object and reduces the time required by the BPF loader to verify the
program and its BTF information.
Note that the pruning implemented in this patch follows the same rules
as the BTF pruning performed unconditionally by LLVM's BPF backend when
generating BTF. In particular, the main sources of pruning are:
1) Only generate BTF for types used by variables and functions at the
file scope.
Note that which variables are known to be "used" may differ
slightly between LTO and non-LTO builds due to optimizations. For
non-LTO builds (and always for the BPF target), variables which are
optimized away during compilation are considered to be unused, and
they (along with their types) are pruned. For LTO builds, such
variables are not known to be optimized away by the time pruning
occurs, so VAR records for them and information for their types may
be present in the emitted BTF information. This is a missed
optimization that may be fixed in the future.
2) Avoid emitting full BTF for struct and union types which are only
pointed-to by members of other struct/union types. In these cases,
the full BTF_KIND_STRUCT or BTF_KIND_UNION which would normally
be emitted is replaced with a BTF_KIND_FWD, as though the
underlying type was a forward-declared struct or union type.
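As an illustration of rule 2), a hedged example (types hypothetical):
struct inner { long a; long b; };
struct outer
{
  struct inner *link;   /* only pointed-to: pruned to a BTF_KIND_FWD */
  int val;
};
struct outer global_outer;  /* used at file scope: full BTF emitted */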
gcc/
* btfout.cc (btf_used_types): New hash set.
(struct btf_fixup): New.
(fixups, forwards): New vecs.
(btf_output): Calculate num_types depending on debug_prune_btf.
(btf_early_finish): New initialization for debug_prune_btf.
(btf_add_used_type): New function.
(btf_used_type_list_cb): Likewise.
(btf_collect_pruned_types): Likewise.
(btf_add_vars): Handle special case for variables in ".maps" section
when generating BTF for BPF CO-RE target.
(btf_late_finish): Use btf_collect_pruned_types when debug_prune_btf
is in effect. Move some initialization to btf_early_finish.
(btf_finalize): Additional deallocation for debug_prune_btf.
* common.opt (gprune-btf): New flag.
* ctfc.cc (init_ctf_strtable): Make non-static.
* ctfc.h (init_ctf_strtable, ctfc_delete_strtab): Make extern.
* doc/invoke.texi (Debugging Options): Document -gprune-btf.
David Faust [Thu, 30 May 2024 21:06:27 +0000 (14:06 -0700)]
btf: refactor and simplify implementation
This patch heavily refactors btfout.cc to take advantage of the
structural changes in the prior commits.
Now that inter-type references are internally stored as simply pointers,
all the painful, brittle, confusing infrastructure that was used in the
process of converting CTF type IDs to BTF type IDs can be thrown out.
This greatly simplifies the entire process of converting from CTF to
BTF, making the code cleaner, easier to read, and easier to maintain.
In addition, we no longer need to worry about destructive changes in
internal data structures used commonly by CTF and BTF, which allows
deleting several ancillary data structures previously used in btfout.cc.
This is nearly transparent, but a few improvements have also been made:
1) BTF_KIND_FUNC records are now _always_ constructed at early_finish,
allowing us to construct records even for functions which are later
inlined by optimizations. DATASEC entries for functions are only
constructed at late_finish, to avoid incorrectly generating entries
for functions which get inlined.
2) BTF_KIND_VAR records and DATASEC entries for them are now always
constructed at (late) finish, which avoids cases where we could
incorrectly create records for variables which were completely
optimized away. This fixes PR debug/113566 for non-LTO builds.
In LTO builds, BTF must be emitted at early_finish, so some VAR
records may be emitted for variables which are later optimized away.
3) Some additional assembler comments have been added with more
information for debugging.
gcc/
* btfout.cc (struct btf_datasec_entry): New.
(struct btf_datasec): Add `id' member. Change `entries' to use
new struct btf_datasec_entry.
(func_map): New hash_map.
(max_translated_id): New.
(btf_var_ids, btf_id_map, holes, voids, num_vars_added)
(num_types_added, num_types_created): Delete.
(btf_absolute_var_id, btf_relative_var_id, btf_absolute_func_id)
(btf_relative_func_id, btf_absolute_datasec_id, init_btf_id_map)
(get_btf_id, set_btf_id, btf_emit_id_p): Delete.
(btf_removed_type_p): Delete.
(btf_dtd_kind, btf_emit_type_p): New helpers.
(btf_fwd_to_enum_p, btf_calc_num_vbytes): Use them.
(btf_collect_datasec): Delete.
(btf_dtd_postprocess_cb, btf_dvd_emit_preprocess_cb)
(btf_dtd_emit_preprocess_cb, btf_emit_preprocess): Delete.
(btf_dmd_representable_bitfield_p): Adapt to type reference changes
and delete now-unused ctfc argument.
(btf_asm_datasec_type_ref): Delete.
(btf_asm_type_ref): Adapt to type reference changes, simplify.
(btf_asm_type): Likewise. Mark struct/union types with bitfield
members.
(btf_asm_array): Adapt to data structure changes.
(btf_asm_varent): Likewise.
(btf_asm_sou_member): Likewise. Ensure non-bitfield members are
correctly re-encoded if struct or union contains any bitfield.
(btf_asm_func_arg, btf_asm_func_type, btf_asm_datasec_entry)
(btf_asm_datasec_type): Adapt to data structure changes.
(output_btf_header): Adapt to other changes, simplify type
length calculation, add info to assembler comments.
(output_btf_vars): Adapt to other changes.
(output_btf_strs): Fix overlong lines.
(output_asm_btf_sou_fields, output_asm_btf_enum_list)
(output_asm_btf_func_args_list, output_asm_btf_vlen_bytes)
(output_asm_btf_type, output_btf_types, output_btf_func_types)
(output_btf_datasec_types): Adapt to other changes.
(btf_init_postprocess): Delete.
(btf_output): Change to only perform output.
(btf_add_const_void, btf_add_func_records): New.
(btf_early_finish): Use them here. New.
(btf_datasec_push_entry): Adapt to data structure changes.
(btf_datasec_add_func, btf_datasec_add_var): New.
(btf_add_func_datasec_entries): New.
(btf_emit_variable_p): New helper.
(btf_add_vars): Use it here. New.
(btf_type_list_cb, btf_collect_translated_types): New.
(btf_assign_func_ids, btf_late_assign_var_ids)
(btf_assign_datasec_ids): New.
(btf_finish): Remove unused argument. Call new btf_late*
functions and btf_output.
(btf_finalize): Adapt to data structure changes.
* ctfc.h (struct ctf_dtdef): Convert existing boolean flags to
BOOL_BITFIELD and reorder.
(struct ctf_dvdef): Add dvd_id member.
(btf_finish): Remove argument from prototype.
(get_btf_id): Delete prototype.
(funcs_traverse_callback, traverse_btf_func_types): Add an
explanatory comment.
* dwarf2ctf.cc (ctf_debug_finish): Remove unused argument.
* dwarf2ctf.h: Analogous change.
* dwarf2out.cc: Likewise.
David Faust [Thu, 30 May 2024 21:06:27 +0000 (14:06 -0700)]
ctf: use pointers instead of IDs internally
This patch replaces all inter-type references in the ctfc internal data
structures with pointers, rather than the references-by-ID which were
used previously.
A couple of small updates in the BPF backend are included to make it
compatible with the change.
This change is only to the in-memory representation of various CTF
structures to make them easier to work with in various cases. It is
outwardly transparent; there is no change in emitted CTF.
gcc/
* btfout.cc (BTF_VOID_TYPEID, BTF_INIT_TYPEID): Move defines to
include/btf.h.
(btf_dvd_emit_preprocess_cb, btf_emit_preprocess)
(btf_dmd_representable_bitfield_p, btf_asm_array, btf_asm_varent)
(btf_asm_sou_member, btf_asm_func_arg, btf_init_postprocess):
Adapt to structural changes in ctf_* structs.
* ctfc.h (struct ctf_dtdef): Add forward declaration.
(ctf_dtdef_t, ctf_dtdef_ref): Move typedefs earlier.
(struct ctf_arinfo, struct ctf_funcinfo, struct ctf_sliceinfo)
(struct ctf_itype, struct ctf_dmdef, struct ctf_func_arg)
(struct ctf_dvdef): Use pointers instead of type IDs for
references to other types and use typedefs where appropriate.
(struct ctf_dtdef): Add ref_type member.
(ctf_type_exists): Use pointer instead of type ID.
(ctf_add_reftype, ctf_add_enum, ctf_add_slice, ctf_add_float)
(ctf_add_integer, ctf_add_unknown, ctf_add_pointer)
(ctf_add_array, ctf_add_forward, ctf_add_typedef)
(ctf_add_function, ctf_add_sou, ctf_add_enumerator)
(ctf_add_variable): Likewise. Return pointer instead of ID.
(ctf_lookup_tree_type): Return pointer to type instead of ID.
* ctfc.cc: Analogous changes.
* ctfout.cc (ctf_asm_type, ctf_asm_slice, ctf_asm_varent)
(ctf_asm_sou_lmember, ctf_asm_sou_member, ctf_asm_func_arg)
(output_ctf_objt_info): Adapt to changes.
* dwarf2ctf.cc (gen_ctf_type, gen_ctf_void_type)
(gen_ctf_unknown_type, gen_ctf_base_type, gen_ctf_pointer_type)
(gen_ctf_subrange_type, gen_ctf_array_type, gen_ctf_typedef)
(gen_ctf_modifier_type, gen_ctf_sou_type, gen_ctf_function_type)
(gen_ctf_enumeration_type, gen_ctf_variable, gen_ctf_function)
(gen_ctf_type, ctf_do_die): Likewise.
* config/bpf/btfext-out.cc (struct btf_ext_core_reloc): Use
pointer instead of type ID.
(bpf_core_reloc_add, bpf_core_get_sou_member_index)
(output_btfext_core_sections): Adapt to above changes.
* config/bpf/core-builtins.cc (process_type): Likewise.
include/
* btf.h (BTF_VOID_TYPEID, BTF_INIT_TYPEID): Move defines here,
from gcc/btfout.cc.
David Faust [Thu, 30 May 2024 21:06:27 +0000 (14:06 -0700)]
ctf, btf: restructure CTF/BTF emission
This commit makes some structural changes to the CTF/BTF debug info
emission. In particular:
a) CTF is now always fully generated and emitted before any
BTF-related procedures are run. This means that BTF-related
functions can change, even irreversibly, the shared in-memory
representation used by the two formats without issue.
b) BTF generation has fewer entry points, and is cleanly divided
into early_finish and finish.
c) BTF is now always emitted at finish (called from dwarf2out_finish),
for all targets in non-LTO builds, rather than being emitted at
early_finish for targets other than BPF CO-RE. In LTO builds,
BTF is emitted at early_finish as before.
Note that this change alone does not alter the contents of BTF at
all, regardless of whether it would have previously been emitted at
early_finish or finish, because the calculation of the BTF to be
emitted is not moved by this patch, only the write-out.
The changes are transparent to both CTF and BTF emission.
gcc/
* btfout.cc (btf_init_postprocess): Rename to...
(btf_early_finish): ...this.
(btf_output): Rename to...
(btf_finish): ...this.
* ctfc.h: Analogous changes.
* dwarf2ctf.cc (ctf_debug_early_finish): Conditionally call
btf_early_finish, or ctf_finalize as appropriate. Emit BTF
here for LTO builds.
(ctf_debug_finish): Always call btf_finish here if generating
BTF info in non-LTO builds.
(ctf_debug_finalize, ctf_debug_init_postprocess): Delete.
* dwarf2out.cc (dwarf2out_early_finish): Remove call to
ctf_debug_init_postprocess.
Arm: Fix disassembly error in Thumb-1 relaxed load/store [PR115188]
A Thumb-1 memory operand allows single-register LDMIA/STMIA. This doesn't get
printed as LDR/STR with writeback in unified syntax, resulting in strange
assembler errors if writeback is selected. To work around this, use the 'Uw'
constraint that blocks writeback. Also use a new 'mem_and_no_t1_wback_op'
which is a general memory operand that disallows writeback in Thumb-1.
A few other patterns were using 'm' for Thumb-1 in a similar way, update these
to also use 'mem_and_no_t1_wback_op' and 'Uw'.
gcc:
PR target/115188
* config/arm/arm.md (unaligned_loadsi): Use 'Uw' constraint and
'mem_and_no_t1_wback_op'.
(unaligned_loadhiu): Likewise.
(unaligned_storesi): Likewise.
(unaligned_storehi): Likewise.
* config/arm/predicates.md (mem_and_no_t1_wback_op): Add new predicate.
* config/arm/sync.md (arm_atomic_load<mode>): Use 'Uw' constraint.
(arm_atomic_store<mode>): Likewise.
gcc/testsuite:
PR target/115188
* gcc.target/arm/pr115188.c: Add new test.
Lewis Hyatt [Thu, 27 Jun 2024 20:11:27 +0000 (16:11 -0400)]
build: Fix "make install" for MinGW
Since r8-4925, the "make install" recipe generates a path which can start
with "//", causing problems for some Windows environments. Fix by removing
the redundant slash.
The `function_attribute_inlinable_p` hook documentation described it as
returning whether it is OK to inline the provided fndecl into "the
current function". AFAICS this hook is only called when
`current_function_decl` is the same as the `fndecl` argument that the
hook is given, hence asking whether `fndecl` can be inlined into "the
current function" doesn't seem relevant. Moreover, from what I can see,
no existing implementation of `function_attribute_inlinable_p` uses "the
current function" in any way.
Update the documentation to match this understanding.
The `unspec_may_trap_p` documentation mentioned applying to either
`unspec` or `unspec_volatile`. AFAICS this hook is only used for
`unspec` codes since c84a808e493a, so I removed the mention of
`unspec_volatile`.
Eric Botcazou [Wed, 19 Jun 2024 20:45:29 +0000 (22:45 +0200)]
ada: Use static allocation for small dynamic string concatenations in more cases
This lifts the limitation of the original implementation whereby the first
operand of the concatenation needs to have a length known at compile time
in order for the static allocation to be used.
gcc/ada/
* exp_ch4.adb (Expand_Concatenate): In the case where an operand
does not have both bounds known at compile time, use nevertheless
the low bound directly if it is known at compile time.
Fold the conditional expression giving the low bound of the result
in the general case if the low bound of all the operands are equal.
Steve Baird [Thu, 13 Jun 2024 22:28:29 +0000 (15:28 -0700)]
ada: Use clause (or use type clause) in a protected operation sometimes ignored.
In some cases, a use clause (or a use type clause) occurring within a
protected operation is incorrectly ignored.
gcc/ada/
* exp_ch9.adb
(Expand_N_Protected_Body): Declare new procedure
Unanalyze_Use_Clauses and call it before analyzing the newly
constructed subprogram body.
Steve Baird [Thu, 13 Jun 2024 22:39:37 +0000 (15:39 -0700)]
ada: Put_Image aspect spec ignored for null extension.
If type T1 is a tagged null record with a Put_Image aspect specification
and type T2 is a null extension of T1 (with no aspect specifications), then
evaluation of a T2'Image call should include a call to the specified procedure
(as opposed to yielding "(NULL RECORD)").
gcc/ada/
* exp_put_image.adb
(Build_Record_Put_Image_Procedure): Declare new Boolean-valued
function Null_Record_Default_Implementation_OK; call it as part of
deciding whether to generate "(NULL RECORD)" text.
Justin Squirek [Tue, 18 Jun 2024 08:38:18 +0000 (08:38 +0000)]
ada: Allow mutably tagged types to work with qualified expressions
This patch modifies the experimental 'Size'Class feature such that objects of
mutably tagged types can be assigned qualified expressions featuring a
definite type (e.g. Mutable_Obj := Root_Child_T'(Root_T with others => <>)).
gcc/ada/
* sem_ch5.adb:
(Analyze_Assignment): Add special expansion for qualified expressions
in certain cases dealing with mutably tagged types.
Bob Duff [Tue, 18 Jun 2024 16:53:46 +0000 (12:53 -0400)]
ada: Bug box for expression function with list comprehension
GNAT crashes on an iterator with a filter inside an expression function
that is the completion of an earlier spec.
gcc/ada/
* freeze.adb (Freeze_Type_Refs): If Node is in N_Has_Etype,
check that it has had its Etype set, because this can be
called early for expression functions that are completions.
Eric Botcazou [Mon, 17 Jun 2024 07:54:47 +0000 (09:54 +0200)]
ada: Call memcmp instead of Compare_Array_Unsigned_8 and...
... implement support for ordering comparisons of discrete array types.
This extends the Support_Composite_Compare_On_Target feature to ordering
comparisons of discrete array types as specified by RM 4.5.2(26/3), when
the component type is a byte (unsigned).
Implement support for ordering comparisons of discrete array types
with a two-pronged approach: for types with a size known at compile time,
this lets the gimplifier generate the call to memcmp (or else an optimized
version of it); otherwise, this directly generates the call to memcmp.
gcc/ada/
* exp_ch4.adb (Expand_Array_Comparison): Remove the obsolete byte
addressability test. If Support_Composite_Compare_On_Target is true,
immediately return for a component size of 8, an unsigned component
type and aligned operands. Disable when Unnest_Subprogram_Mode is
true (for LLVM).
(Expand_N_Op_Eq): Adjust comment.
* targparm.ads (Support_Composite_Compare_On_Target): Replace bit by
byte in description and document support for ordering comparisons.
* gcc-interface/utils2.cc (compare_arrays): Rename into...
(compare_arrays_for_equality): ...this. Remove redundant lines.
(compare_arrays_for_ordering): New function.
(build_binary_op) <comparisons>: Call compare_arrays_for_ordering
to implement ordering comparisons for arrays.
Yannick Moy [Mon, 17 Jun 2024 09:57:55 +0000 (11:57 +0200)]
ada: Fix analysis of Extensions_Visible
Pragma/aspect Extensions_Visible should be analyzed before any
pre/post contracts on a subprogram, as the legality of conversions
of formal parameters to classwide type depends on the value of
Extensions_Visible. Now fixed.
gcc/ada/
* contracts.adb (Analyze_Pragmas_In_Declarations): Analyze
pragmas in two iterations over the list of declarations in
order to analyze some pragmas before others.
* einfo-utils.ads (Get_Pragma): Fix comment.
* sem_prag.ads (Pragma_Significant_To_Subprograms): Fix.
(Pragma_Significant_To_Subprograms_Analyzed_First): Add new
global array to identify these pragmas which should be analyzed
first, which concerns only Extensions_Visible for now.
Eric Botcazou [Mon, 17 Jun 2024 19:22:06 +0000 (21:22 +0200)]
ada: Fix bogus error on allocator in instantiation with private derived types
The problem is that the call to Convert_View made from Make_Init_Call does
nothing because the Etype is not set on the second argument.
gcc/ada/
* exp_ch7.adb (Convert_View): Add third parameter Typ and use it if
the second parameter does not have an Etype.
(Make_Adjust_Call): Remove obsolete setting of Etype and pass Typ in
call to Convert_View.
(Make_Final_Call): Likewise.
(Make_Init_Call): Pass Typ in call to Convert_View.
Javier Miranda [Sun, 16 Jun 2024 18:41:57 +0000 (18:41 +0000)]
ada: Miscomputed bounds for inner null array aggregates
When an array has several dimensions, and inner dimensions are
initialized using Ada 2022 null array aggregates, the compiler
crashes or reports spurious errors computing the bounds of the
null array aggregates. This patch fixes the problem and adds
new warnings reported when the index of null array aggregates is
an enumeration type or a modular type and it is known at compile
time that the program will raise Constraint_Error computing the
bounds of the aggregate.
gcc/ada/
* sem_aggr.adb (Cannot_Compute_High_Bound): New subprogram.
(Report_Null_Array_Constraint_Error): New subprogram.
(Collect_Aggr_Bounds): For null aggregates, build the bounds
of the inner dimensions.
(Has_Null_Aggregate_Raising_Constraint_Error): New subprogram.
(Subtract): New subprogram.
(Resolve_Array_Aggregate): Report a warning when the index of
null array aggregates is an enumeration type or a modular type
and we can statically determine that the program will raise CE
at runtime computing its high bound.
(Resolve_Null_Array_Aggregate): Ditto.
Eric Botcazou [Tue, 11 Jun 2024 21:06:22 +0000 (23:06 +0200)]
ada: Fix crash on box-initialized component with No_Default_Initialization
The problem is that the implementation of the No_Default_Initialization
restriction assumes that no type initialization routines are needed and,
therefore, builds a dummy version of them, which goes against their use
for box-initialized components in aggregates.
Therefore this use needs to be flagged as violating the restriction too.
gcc/ada/
* doc/gnat_rm/standard_and_implementation_defined_restrictions.rst
(No_Default_Initialization): Mention components alongside variables.
* exp_aggr.adb (Build_Array_Aggr_Code.Gen_Assign): Check that the
restriction No_Default_Initialization is not in effect for default
initialized component.
(Build_Record_Aggr_Code): Likewise.
* gnat_rm.texi: Regenerate.
Andrew Stubbs [Fri, 28 Jun 2024 15:13:59 +0000 (15:13 +0000)]
amdgcn: invent target feature flags
This is a first step towards having a device table so we can add new devices
more easily. It'll also make it easier to remove the deprecated GCN3 bits.
The patch should not change the behaviour of anything.
Kewen Lin [Tue, 2 Jul 2024 08:58:06 +0000 (03:58 -0500)]
sparc: define SPARC_LONG_DOUBLE_TYPE_SIZE for vxworks [PR115739]
Commit r15-1594 removed define of LONG_DOUBLE_TYPE_SIZE in
sparc.cc, it's based on the assumption that each OS has its
own define (see the comments in sparc.h), but it exposes an
issue on vxworks which lacks of the define.
We can bring back the default SPARC_LONG_DOUBLE_TYPE_SIZE to
sparc.cc, but according to the comments in sparc.h, I think
it's better to define this in vxworks.h. By the way, I also went
through all the sparc supported triples, and vxworks is the only
one that lacks this define.
PR target/115739
gcc/ChangeLog:
* config/sparc/vxworks.h (SPARC_LONG_DOUBLE_TYPE_SIZE): New define.
After r15-1579, ADD and LD/ST pairs will be merged into LDX/STX,
causing these two tests to fail. To guarantee that these two tests pass,
add the compilation option '-fno-late-combine-instructions'.
Kewen Lin [Tue, 2 Jul 2024 07:13:35 +0000 (02:13 -0500)]
isel: Fold more in gimple_expand_vec_cond_expr [PR115659]
As PR115659 shows, assuming c = x CMP y, there are some
folding chances for the patterns r = c ? -1 : z and r = c ? z : 0.
For r = c ? -1 : z, it can be folded into:
- r = c | z (with ior_optab supported)
- or r = c ? c : z
while for r = c ? z : 0, it can be folded into:
- r = c & z (with and_optab supported)
- or r = c ? z : c
This patch is to teach ISEL to take care of them and also
remove the redundant gsi_replace as the caller of function
gimple_expand_vec_cond_expr will handle it.
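A sketch of the pattern in GNU C vector extensions (assumed, for
illustration only):
typedef int v4si __attribute__ ((vector_size (16)));
/* c = x < y yields -1 (all ones) or 0 per lane, so
   c ? -1 : z is equivalent to c | z, and
   c ? z : 0 is equivalent to c & z.  */
v4si
f (v4si x, v4si y, v4si z)
{
  v4si m1 = {-1, -1, -1, -1};
  return x < y ? m1 : z;
}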
PR tree-optimization/115659
gcc/ChangeLog:
* gimple-isel.cc (gimple_expand_vec_cond_expr): Add more foldings for
patterns x CMP y ? -1 : z and x CMP y ? z : 0.
Marek Polacek [Wed, 26 Jun 2024 21:55:21 +0000 (17:55 -0400)]
c++: ICE with computed gotos [PR115469]
This is a low-prio crash on invalid code where we ICE on a VAR_DECL
with erroneous type. I thought I'd try to avoid putting such decls
into ->names and ->names_in_scope but that sounds riskier than the
following cleanup.
PR c++/115469
gcc/cp/ChangeLog:
* decl.cc (automatic_var_with_nontrivial_dtor_p): New.
(poplevel_named_label_1): Use it.
(check_goto_1): Likewise.
Marek Polacek [Tue, 25 Jun 2024 21:42:01 +0000 (17:42 -0400)]
c++: unresolved overload with comma op [PR115430]
This works:
template<typename T>
int Func(T);
typedef int (*funcptrtype)(int);
funcptrtype fp0 = &Func<int>;
but this doesn't:
funcptrtype fp2 = (0, &Func<int>);
because we only call resolve_nondeduced_context on the LHS (via
convert_to_void) but not on the RHS, so cp_build_compound_expr's
type_unknown_p check issues an error.
PR c++/115430
gcc/cp/ChangeLog:
* typeck.cc (cp_build_compound_expr): Call resolve_nondeduced_context
on RHS.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/noexcept41.C: Remove dg-error.
* g++.dg/overload/addr3.C: New test.
Marek Polacek [Fri, 28 Jun 2024 21:51:19 +0000 (17:51 -0400)]
c++: DR2627, Bit-fields and narrowing conversions [PR94058]
This DR (https://cplusplus.github.io/CWG/issues/2627.html) says that
even if we are converting from an integer type or unscoped enumeration type
to an integer type that cannot represent all the values of the original
type, it's not narrowing if "the source is a bit-field whose width w is
less than that of its type (or, for an enumeration type, its underlying
type) and the target type can represent all the values of a hypothetical
extended integer type with width w and with the same signedness as the
original type".
DR 2627
PR c++/94058
PR c++/104392
gcc/cp/ChangeLog:
* typeck2.cc (check_narrowing): Don't warn if the conversion isn't
narrowing as per DR 2627.
gcc/testsuite/ChangeLog:
* g++.dg/DRs/dr2627.C: New test.
* g++.dg/cpp0x/Wnarrowing22.C: New test.
* g++.dg/cpp2a/spaceship-narrowing1.C: New test.
* g++.dg/cpp2a/spaceship-narrowing2.C: New test.
Richard Biener [Sun, 30 Jun 2024 09:37:12 +0000 (11:37 +0200)]
Preserve SSA info for more propagated copy
Besides VN and copy-prop also CCP and VRP as well as forwprop
propagate out copies and thus it's worthwhile to try to preserve
range and points-to info there when possible.
Note that this also fixes the testcase from PR115701 but that's
because we do not actually intersect info but only copy info when
there was no info present.
Pan Li [Sun, 30 Jun 2024 08:48:19 +0000 (16:48 +0800)]
RISC-V: Add testcases for unsigned scalar .SAT_ADD IMM form 4
This patch would like to add test cases for the unsigned scalar
.SAT_ADD IMM form 4. Aka:
Form 4:
#define DEF_SAT_U_ADD_IMM_FMT_4(T) \
T __attribute__((noinline)) \
sat_u_add_imm_##T##_fmt_4 (T x) \
{ \
T ret; \
return __builtin_add_overflow (x, 9, &ret) == 0 ? ret : -1; \
}
DEF_SAT_U_ADD_IMM_FMT_4(uint64_t)
The below test is passed for this patch.
* The rv64gcv regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add helper test macro.
* gcc.target/riscv/sat_u_add_imm-13.c: New test.
* gcc.target/riscv/sat_u_add_imm-14.c: New test.
* gcc.target/riscv/sat_u_add_imm-15.c: New test.
* gcc.target/riscv/sat_u_add_imm-16.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-13.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-14.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-15.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-16.c: New test.
Pan Li [Sun, 30 Jun 2024 08:41:16 +0000 (16:41 +0800)]
RISC-V: Add testcases for unsigned scalar .SAT_ADD IMM form 3
This patch would like to add test cases for the unsigned scalar
.SAT_ADD IMM form 3. Aka:
Form 3:
#define DEF_SAT_U_ADD_IMM_FMT_3(T) \
T __attribute__((noinline)) \
sat_u_add_imm_##T##_fmt_3 (T x) \
{ \
T ret; \
return __builtin_add_overflow (x, 8, &ret) ? -1 : ret; \
}
DEF_SAT_U_ADD_IMM_FMT_3(uint64_t)
The below test is passed for this patch.
* The rv64gcv regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add helper test macro.
* gcc.target/riscv/sat_u_add_imm-10.c: New test.
* gcc.target/riscv/sat_u_add_imm-11.c: New test.
* gcc.target/riscv/sat_u_add_imm-12.c: New test.
* gcc.target/riscv/sat_u_add_imm-9.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-10.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-11.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-12.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-9.c: New test.
Pan Li [Sun, 30 Jun 2024 08:14:38 +0000 (16:14 +0800)]
RISC-V: Add testcases for unsigned scalar .SAT_ADD IMM form 2
This patch would like to add test cases for the unsigned scalar
.SAT_ADD IMM form 2. Aka:
Form 2:
#define DEF_SAT_U_ADD_IMM_FMT_2(T) \
T __attribute__((noinline)) \
sat_u_add_imm_##T##_fmt_2 (T x) \
{ \
return (T)(x + 9) < x ? -1 : (x + 9); \
}
DEF_SAT_U_ADD_IMM_FMT_2(uint64_t)
The below test is passed for this patch.
* The rv64gcv regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add helper test macro.
* gcc.target/riscv/sat_u_add_imm-5.c: New test.
* gcc.target/riscv/sat_u_add_imm-6.c: New test.
* gcc.target/riscv/sat_u_add_imm-7.c: New test.
* gcc.target/riscv/sat_u_add_imm-8.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-5.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-6.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-7.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-8.c: New test.
Pan Li [Sun, 30 Jun 2024 08:03:41 +0000 (16:03 +0800)]
RISC-V: Add testcases for unsigned scalar .SAT_ADD IMM form 1
This patch would like to add test cases for the unsigned scalar
.SAT_ADD IMM form 1. Aka:
Form 1:
#define DEF_SAT_U_ADD_IMM_FMT_1(T) \
T __attribute__((noinline)) \
sat_u_add_imm_##T##_fmt_1 (T x) \
{ \
return (T)(x + 9) >= x ? (x + 9) : -1; \
}
DEF_SAT_U_ADD_IMM_FMT_1(uint64_t)
The below test is passed for this patch.
* The rv64gcv regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add helper test macro.
* gcc.target/riscv/sat_u_add_imm-1.c: New test.
* gcc.target/riscv/sat_u_add_imm-2.c: New test.
* gcc.target/riscv/sat_u_add_imm-3.c: New test.
* gcc.target/riscv/sat_u_add_imm-4.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-1.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-2.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-3.c: New test.
* gcc.target/riscv/sat_u_add_imm-run-4.c: New test.
Roger Sayle [Mon, 1 Jul 2024 11:21:20 +0000 (12:21 +0100)]
testsuite: Fix -m32 gcc.target/i386/pr102464-vrndscaleph.c on RedHat.
This patch fixes the 4 FAILs of gcc.target/i386/pr102464-vrndscaleph.c
with --target_board='unix{-m32}' on RedHat 7.x. The issue is that this
AVX512 test includes the system math.h, and on older systems this provides
inline versions of floor, ceil and rint (for the 387). The work around
is to define __NO_MATH_INLINES before #include <math.h> (or alternatively
use __builtin_floor, __builtin_ceil, etc.).
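A sketch of the workaround (the exact test contents are assumed):
/* Prevent older glibc <math.h> from providing 387 inline versions
   of floor/ceil/rint, which would defeat the AVX512 code scan.  */
#define __NO_MATH_INLINES
#include <math.h>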
2024-07-01 Roger Sayle <roger@nextmovesoftware.com>
gcc/testsuite/ChangeLog
PR middle-end/102464
* gcc.target/i386/pr102464-vrndscaleph.c: Define __NO_MATH_INLINES
to resolve FAILs with -m32 on older RedHat systems.
Roger Sayle [Mon, 1 Jul 2024 11:18:26 +0000 (12:18 +0100)]
i386: Additional peephole2 to use lea in round-up integer division.
A common idiom for implementing an integer division that rounds upwards is
to write (x + y - 1) / y. Conveniently on x86, the two additions to form
the numerator can be performed by a single lea instruction, and indeed gcc
currently generates a lea when x and y are both registers.
This discrepancy is caused by the late decision (in peephole2) to split
an addition with a memory operand, into a load followed by a reg-reg
addition. This patch improves this situation by adding a peephole2
to recognize consecutive additions and transform them into lea if
profitable.
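For example (a minimal sketch of the idiom in question):
/* Round-up division: with this patch the two additions forming the
   numerator can be emitted as a single 3-component lea.  */
unsigned
div_round_up (unsigned x, unsigned y)
{
  return (x + y - 1) / y;
}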
My first attempt at fixing this was to use a define_insn_and_split,
relying on combine to combine the instructions. Unfortunately, this approach
interferes with (reload's) subtle balance of deciding when to use/avoid lea,
which can be observed as a code size regression in CSiBE. The peephole2
approach (proposed here) uniformly improves CSiBE results.
2024-07-01 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386.md (peephole2): Transform two consecutive
additions into a 3-component lea if !TARGET_AVOID_LEA_FOR_ADDR.
gcc/testsuite/ChangeLog
* gcc.target/i386/lea-3.c: New test case.
PR target/88236
PR target/115726
gcc/
* config/avr/avr.md (mov<mode>) [avr_mem_memx_p]: Expand in such a
way that the destination does not overlap with any hard register
clobbered / used by xload8qi_A resp. xload<mode>_A.
* config/avr/avr.cc (avr_out_xload): Avoid early-clobber
situation for Z by executing just one load when the output register
overlaps with Z.
gcc/testsuite/
* gcc.target/avr/torture/pr88236-pr115726.c: New test.
Andrew Stubbs [Wed, 12 Jun 2024 11:09:33 +0000 (11:09 +0000)]
libgomp, openmp: Add ompx_gnu_pinned_mem_alloc
This creates a new predefined allocator as a shortcut for using pinned
memory with OpenMP. This is not in the OpenMP standard so it uses the "ompx"
namespace and an independent enum baseline of 200 (selected to not clash with
other known implementations).
The allocator is equivalent to using a custom allocator with the pinned
trait and the null fallback trait. One motivation for having this feature is
for use by the (planned) -foffload-memory=pinned feature.
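A hedged usage sketch (assuming a libgomp with this allocator):
#include <omp.h>
#include <stdlib.h>
/* Allocate pinned (page-locked) host memory; returns NULL on
   failure because the allocator uses the null fallback trait.  */
void
f (size_t n)
{
  double *buf = (double *) omp_alloc (n * sizeof (double),
                                      ompx_gnu_pinned_mem_alloc);
  if (buf)
    omp_free (buf, ompx_gnu_pinned_mem_alloc);
}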
gcc/fortran/ChangeLog:
* openmp.cc (is_predefined_allocator): Update valid ranges to
incorporate ompx_gnu_pinned_mem_alloc.
libgomp/ChangeLog:
* allocator.c (ompx_gnu_min_predefined_alloc): New.
(ompx_gnu_max_predefined_alloc): New.
(predefined_alloc_mapping): Rename to ...
(predefined_omp_alloc_mapping): ... this.
(predefined_ompx_gnu_alloc_mapping): New.
(_Static_assert): Adjust for the new name, and add a new assert for the
new table.
(predefined_allocator_p): New.
(predefined_alloc_mapping): New.
(omp_aligned_alloc): Support ompx_gnu_pinned_mem_alloc.
Use predefined_allocator_p and predefined_alloc_mapping.
(omp_free): Likewise.
(omp_aligned_calloc): Likewise.
(omp_realloc): Likewise.
* env.c (parse_allocator): Add ompx_gnu_pinned_mem_alloc.
* libgomp.texi: Document ompx_gnu_pinned_mem_alloc.
* omp.h.in (omp_allocator_handle_t): Add ompx_gnu_pinned_mem_alloc.
* omp_lib.f90.in: Add ompx_gnu_pinned_mem_alloc.
* omp_lib.h.in: Add ompx_gnu_pinned_mem_alloc.
* testsuite/libgomp.c/alloc-pinned-5.c: New test.
* testsuite/libgomp.c/alloc-pinned-6.c: New test.
* testsuite/libgomp.fortran/alloc-pinned-1.f90: New test.
gcc/testsuite/ChangeLog:
* gfortran.dg/gomp/allocate-pinned-1.f90: New test.
Co-Authored-By: Thomas Schwinge <thomas@codesourcery.com>
Andrew Stubbs [Wed, 12 Jun 2024 08:43:53 +0000 (08:43 +0000)]
libgomp: change alloc-pinned tests failure mode
At present the feature doesn't work on non-Linux hosts, so skip the tests
entirely there.
On Linux systems with insufficient lockable memory configured we still
need to fail, or else the feature won't be tested when we think it is,
but now there's a message to explain why.
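The non-Linux guard presumably takes a shape along these lines (a hedged
sketch; the actual test may differ):
#ifndef __linux__
#error "pinned memory tests are only supported on Linux hosts"
#endif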
libgomp/ChangeLog:
* testsuite/libgomp.c/alloc-pinned-1.c: Change dg-xfail-run-if to
dg-skip-if.
Correct spelling mistake.
Abort on insufficient lockable memory.
Use #error on non-Linux hosts.
* testsuite/libgomp.c/alloc-pinned-2.c: Likewise.
Richard Biener [Mon, 1 Jul 2024 08:06:55 +0000 (10:06 +0200)]
tree-optimization/115723 - ICE with .COND_ADD reduction
The following fixes an ICE with a .COND_ADD discovered as a reduction
even though its else value isn't the reduction chain link but a
constant. This would likely be wrong-code with --disable-checking.
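For illustration, a conditional reduction of the shape that is discovered
as a .COND_ADD reduction (a minimal sketch, not the original reproducer):
double f (double *a, int *c, int n)
{
  double s = 0.0;
  for (int i = 0; i < n; i++)
    if (c[i])
      s += a[i];   /* s = .COND_ADD (mask, s, a[i], s) when vectorized */
  return s;
}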
PR tree-optimization/115723
* tree-vect-loop.cc (check_reduction_path): For a .COND_ADD
verify the else value also refers to the reduction chain op.
liuhongt [Thu, 20 Jun 2024 04:41:13 +0000 (12:41 +0800)]
Optimize a < 0 ? -1 : 0 to (signed)a >> 31.
Try to optimize x < 0 ? -1 : 0 into (signed) x >> 31
and x < 0 ? 1 : 0 into (unsigned) x >> 31.
Add define_insn_and_split for the optimization previously done in
ix86_expand_int_vcond.
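For illustration, the scalar form of the transform (assuming 32-bit int;
the new patterns apply the same idea per vector element):
int      f (int x) { return x < 0 ? -1 : 0; }   /* (signed) x >> 31 */
unsigned g (int x) { return x < 0 ?  1 : 0; }   /* (unsigned) x >> 31 */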
gcc/ChangeLog:
PR target/115517
* config/i386/sse.md ("*ashr<mode>3_1"): New
define_insn_and_split.
(*avx512_ashr<mode>3_1): Ditto.
(*avx2_lshr<mode>3_1): Ditto.
(*avx2_lshr<mode>3_2): Ditto and add 2 combine splitter after
it.
* config/i386/mmx.md (mmxscalarsize): New mode attribute.
(*mmw_ashr<mode>3_1): New define_insn_and_split.
("mmx_<insn><mode>3): Add a combine spiltter after it.
(*mmx_ashrv2hi3_1): New define_insn_and_plit, also add a
combine splitter after it.
liuhongt [Wed, 19 Jun 2024 08:05:58 +0000 (16:05 +0800)]
Adjust testcases that regressed after obsoleting vcond{,u,eq}.
> Richard suggests that we implement the "obvious" transforms like
> inversion in the middle-end but if for example unsigned compares
> are not supported the us_minus + eq + negative trick isn't on
> that list.
>
> The main reason to restrict vec_cmp would be to avoid
> a <= b ? c : d going with an unsupported vec_cmp but instead
> do a > b ? d : c - the alternative is trying to fix this
> on the RTL side via combine. I understand the non-native
Yes, I have a patch which can fix most regressions via pattern matching
in combine.
Still, there is one situation that is difficult to deal with, namely the
optimization without SSE4.1. Because pblendvb/blendvps/blendvpd only
exist under SSE4.1, without it a vcond_mask takes 3
instructions (pand, pandn, por) to simulate, and combine matches at
most 4 instructions, which makes it currently impossible to use
combine to recover those optimizations previously done in
vcond{,u,eq}, i.e. min/max.
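For illustration, the SSE2 simulation of a blend computes (a & m) | (b & ~m);
a hedged sketch with intrinsics (names are illustrative):
#include <emmintrin.h>
static __m128i blend_sse2 (__m128i m, __m128i a, __m128i b)
{
  /* pand, pandn, por: three instructions where SSE4.1 needs one pblendvb.  */
  return _mm_or_si128 (_mm_and_si128 (m, a), _mm_andnot_si128 (m, b));
}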
With SSE4.1 and above, there is basically no regression anymore.
liuhongt [Wed, 26 Jun 2024 05:52:24 +0000 (13:52 +0800)]
Enable late_combine.
Move pass_stv2 and pass_rpad after the pre_reload pass_late_combine, and
define TARGET_INSN_COST to prevent the post_reload pass_late_combine from
reverting the optimization done in pass_rpad.
Adjust testcases since pass_late_combine generates better code but
breaks scan-assembler checks.
For example,
under a 32-bit target, gcc used to generate a broadcast from the stack and
then do the real operation.
After late_combine, they're combined into embedded broadcast
operations.
gcc/ChangeLog:
* config/i386/i386-features.cc (ix86_rpad_gate): New function.
* config/i386/i386-options.cc (ix86_override_options_after_change):
Don't disable late_combine.
* config/i386/i386-passes.def: Move pass_stv2 and pass_rpad
after pre_reload pass_late_combine.
* config/i386/i386-protos.h (ix86_rpad_gate): New declare.
* config/i386/i386.cc (ix86_insn_cost): New function.
(TARGET_INSN_COST): Define.
liuhongt [Wed, 26 Jun 2024 05:07:31 +0000 (13:07 +0800)]
Extend lshifrtsi3_1_zext to a ?k alternative.
late_combine will combine lshift + zero_extend into *lshifrtsi3_1_zext,
which causes an extra mov between gpr and kmask; add ?k to the pattern.
gcc/ChangeLog:
PR target/115610
* config/i386/i386.md (<*insnsi3_zext): Add alternative ?k,
enable it only for lshiftrt and under avx512bw.
* config/i386/sse.md (*klshrsi3_1_zext): New define_insn, and
add corresponding define_split after it.
liuhongt [Wed, 26 Jun 2024 03:17:46 +0000 (11:17 +0800)]
Define mask as extern instead of an uninitialized local variable.
The testcases are supposed to scan for vpopcnt{b,w,d,q} operations
with a k mask, but mask is defined as an uninitialized local variable, which
will be set to 0 at the RTL expand phase.
It is then simplified away by late_combine, which caused a scan-assembler failure.
Move the definition of mask outside the functions to make the testcases more stable.
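For illustration, a hedged sketch of the change (identifiers are
illustrative; _mm512_mask_popcnt_epi8 is the AVX512BITALG masked byte
popcount intrinsic):
#include <immintrin.h>
extern __mmask64 msk;   /* extern: the compiler can no longer fold it to 0 */
__m512i foo (__m512i src, __m512i a)
{
  /* vpopcntb with a k mask now survives to the scan-assembler check.  */
  return _mm512_mask_popcnt_epi8 (src, msk, a);
}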
gcc/testsuite/ChangeLog:
PR target/115610
* gcc.target/i386/avx512bitalg-vpopcntb.c: Define mask as
extern instead of uninitialized local variables.
* gcc.target/i386/avx512bitalg-vpopcntbvl.c: Ditto.
* gcc.target/i386/avx512bitalg-vpopcntw.c: Ditto.
* gcc.target/i386/avx512bitalg-vpopcntwvl.c: Ditto.
* gcc.target/i386/avx512vpopcntdq-vpopcntd.c: Ditto.
* gcc.target/i386/avx512vpopcntdq-vpopcntq.c: Ditto.
Richard Biener [Thu, 27 Jun 2024 09:36:07 +0000 (11:36 +0200)]
Harden SLP reduction support wrt STMT_VINFO_REDUC_IDX
The following makes sure that for SLP reductions all lanes have
the same STMT_VINFO_REDUC_IDX. Once we move that info and can adjust
it we can implement swapping. It also makes the existing protection
against operand swapping trigger for all stmts participating in a
reduction, not just the final one marked as reduction-def.
* tree-vect-slp.cc (vect_build_slp_tree_1): Compare
STMT_VINFO_REDUC_IDX.
(vect_build_slp_tree_2): Prevent operand swapping for
all stmts participating in a reduction.
Feng Xue [Sun, 16 Jun 2024 05:00:32 +0000 (13:00 +0800)]
vect: Determine input vectype for multiple lane-reducing operations
The input vectype of a reduction PHI statement must be determined before
vect cost computation for the reduction. Since a lane-reducing operation has
a different input vectype from a normal one, we need to traverse all reduction
statements to find out the input vectype with the fewest lanes, and set that on
the PHI statement.
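For illustration, a classic lane-reducing reduction (a minimal sketch):
a dot-product reduces four QImode input lanes into one SImode lane, so
the input vectype has more lanes than the reduction PHI's vectype:
int dotprod (signed char *a, signed char *b, int n)
{
  int s = 0;
  for (int i = 0; i < n; i++)
    s += a[i] * b[i];   /* recognized as a DOT_PROD_EXPR reduction */
  return s;
}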
2024-06-16 Feng Xue <fxue@os.amperecomputing.com>
gcc/
* tree-vect-loop.cc (vectorizable_reduction): Determine input vectype
during traversal of reduction statements.
[PR115565] cse: Don't use a valid regno for non-register in comparison_qty
Use INT_MIN rather than -1 in `comparison_qty' where a comparison is not
with a register, because the value of -1 is actually a valid reference
to register 0 in the case where it has not been assigned a quantity.
Using -1 makes the `REG_QTY (REGNO (folded_arg1)) == ent->comparison_qty'
comparison in `fold_rtx' incorrectly trigger in rare circumstances
and return true for a memory reference, making CSE consider a comparison
operation to evaluate to a constant expression, consequently making the
resulting code incorrectly execute or fail to execute conditional
blocks.
This has caused a miscompilation of rwlock.c from LinuxThreads for the
`alpha-linux-gnu' target, where `rwlock->__rw_writer != thread_self ()'
expression (where `thread_self' returns the thread pointer via a PALcode
call) has been decided to be always true (with `ent->comparison_qty'
using -1 for a reference to `rwlock->__rw_writer', while register 0
holding the thread pointer retrieved by `thread_self') and code for the
false case has been optimized away where it mustn't have, causing
program lockups.
The issue has been observed as a regression from commit 08a692679fb8
("Undefined cse.c behaviour causes 3.4 regression on HPUX"),
<https://gcc.gnu.org/ml/gcc-patches/2004-10/msg02027.html>, and up to
commit 932ad4d9b550 ("Make CSE path following use the CFG"),
<https://gcc.gnu.org/ml/gcc-patches/2006-12/msg00431.html>, where CSE
has been restructured sufficiently for the issue not to trigger with the
original reproducer anymore. However the original bug remains and can
trigger, because `comparison_qty' will still be assigned -1 for a memory
reference and the `reg_qty' member of a `cse_reg_info_table' entry will
still be assigned -1 for register 0 where the entry has not been
assigned a quantity, e.g. at initialization.
Use INT_MIN then as noted above, so that the value remains negative, for
consistency with the REGNO_QTY_VALID_P macro (even though not used on
`comparison_qty'), and then so that it should not ever match a valid
negated register number, fixing the regression with commit 08a692679fb8.
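A minimal self-contained illustration of the collision (hypothetical
values, not actual cse.cc code; it assumes the reg_qty encoding of
-regno - 1 for unassigned registers described above):
#include <limits.h>
#include <assert.h>
int main (void)
{
  int reg0_qty = -0 - 1;         /* unassigned register 0 -> -1 */
  int old_no_reg = -1;           /* old "not a register" marker */
  int new_no_reg = INT_MIN;      /* new marker */
  assert (reg0_qty == old_no_reg);   /* the spurious match */
  assert (reg0_qty != new_no_reg);   /* INT_MIN never equals -regno - 1 */
  return 0;
}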
gcc/
PR rtl-optimization/115565
* cse.cc (record_jump_cond): Use INT_MIN rather than -1 for
`comparison_qty' if !REG_P.