Eric Botcazou [Wed, 3 Feb 2021 10:38:04 +0000 (11:38 +0100)]
Fix regression with partial rep clause on variant record type
The compiler can produce an incorrect layout when there is a partial
representation clause on a discriminated record type with a variant part.
gcc/ada/
* gcc-interface/decl.c (components_to_record): If the first component
with rep clause is the _Parent field with variable size, temporarily
set it aside when computing the internal layout of the REP part again.
* gcc-interface/utils.c (finish_record_type): Revert to taking the
maximum when merging sizes for all record types with rep clause.
(merge_sizes): Put SPECIAL parameter last and adjust recursive calls.
Eric Botcazou [Wed, 3 Feb 2021 10:11:26 +0000 (11:11 +0100)]
Assorted LTO fixes for Ada
This polishes a few rough edges visible in LTO mode.
gcc/ada/
* gcc-interface/decl.c (gnat_to_gnu_entity) <E_Array_Type>: Make the
two fields of the fat pointer type addressable, and do not make the
template type read-only.
<E_Record_Type>: If the type has discriminants, mark it as may_alias.
* gcc-interface/utils.c (make_dummy_type): Likewise.
(build_dummy_unc_pointer_types): Likewise.
Jakub Jelinek [Wed, 3 Feb 2021 08:09:26 +0000 (09:09 +0100)]
ifcvt: Avoid ICEs trying to force_operand random RTL [PR97487]
As the testcase shows, RTL ifcvt can throw more or less random RTL (whatever
it found in some insns) at expand_binop or expand_unop and expects them to do
something (and then checks whether they created valid insns and punts if not).
If the operands don't match, these functions eventually try to
copy_to_mode_reg the operands, which does
  if (!general_operand (x, VOIDmode))
    x = force_operand (x, temp);
but force_operand is far from handling all possible RTL; it will ICE for
the more unusual RTL codes.  It basically handles just simple arithmetic and
unary RTL operations that have an optab, and
expand_simple_binop/expand_simple_unop ICE on the others.
The following patch fixes it by adding some operand verification (checking
whether there is any hope that copy_to_mode_reg will succeed on those
operands).  The check is added both to noce_emit_move_insn (not needed for
this exact testcase: that function simply tries to recog the insn as is and,
if that fails, handles some simple binop/unop cases; the patch verifies
their operands) and to noce_try_sign_mask.
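A rough sketch of the shape of such a check, written against GCC's internal
RTL predicates (an illustration of the idea only, not necessarily the exact
function that was committed):

/* Sketch: return true if there is a reasonable hope that
   copy_to_mode_reg/force_operand can handle X -- i.e. X is already a
   general_operand, or a simple unary/binary operation (possibly wrapped
   in a subreg) whose own operands are themselves acceptable.  */
static bool
noce_can_force_operand (rtx x)
{
  if (general_operand (x, VOIDmode))
    return true;
  if (SUBREG_P (x))
    return noce_can_force_operand (SUBREG_REG (x));
  if (ARITHMETIC_P (x))
    return (noce_can_force_operand (XEXP (x, 0))
            && noce_can_force_operand (XEXP (x, 1)));
  if (UNARY_P (x))
    return noce_can_force_operand (XEXP (x, 0));
  return false;
}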
2021-02-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/97487
* ifcvt.c (noce_can_force_operand): New function.
(noce_emit_move_insn): Use it.
(noce_try_sign_mask): Likewise. Formatting fix.
* gcc.dg/pr97487-1.c: New test.
* gcc.dg/pr97487-2.c: New test.
Jakub Jelinek [Wed, 3 Feb 2021 08:07:36 +0000 (09:07 +0100)]
lra-constraints: Fix error-recovery for bad inline-asms [PR97971]
The following testcase is an ICE-on-invalid: the insn can't be reloaded, but
we shouldn't ICE the compiler just because the user typed nonsense.
In current_insn_transform we have:
if (process_alt_operands (reused_alternative_num))
  alt_p = true;
if (check_only_p)
  return ! alt_p || best_losers != 0;
/* If insn is commutative (it's safe to exchange a certain pair of
   operands) then we need to try each alternative twice, the second
   time matching those two operands as if we had exchanged them.  To
   do this, really exchange them in operands.
   If we have just tried the alternatives the second time, return
   operands to normal and drop through.  */
if (! alt_p && ! sec_mem_p)
  {
    /* No alternative works with reloads?? */
    if (INSN_CODE (curr_insn) >= 0)
      fatal_insn ("unable to generate reloads for:", curr_insn);
    error_for_asm (curr_insn,
                   "inconsistent operand constraints in an %<asm%>");
    lra_asm_error_p = true;
    ...
and so inline asms are handled differently there (and deleted/nullified after
this); fatal_insn is only called for non-inline-asm insns.
But in process_alt_operands we do:
/* Both the earlyclobber operand and conflicting operand
   cannot both be user defined hard registers.  */
if (HARD_REGISTER_P (operand_reg[i])
    && REG_USERVAR_P (operand_reg[i])
    && operand_reg[j] != NULL_RTX
    && HARD_REGISTER_P (operand_reg[j])
    && REG_USERVAR_P (operand_reg[j]))
  fatal_insn ("unable to generate reloads for "
              "impossible constraints:", curr_insn);
and thus ICE even for inline-asms.
I think it is inappropriate to delete/nullify the insn in
process_alt_operands, as that could happen e.g. in the check_only_p mode,
so this patch just returns false in that case instead.  That leaves the
caller with alt_p false, and as an inline asm isn't a simple move, sec_mem_p
will also be false (and it isn't commutative either); so for check_only_p
the callers are told it isn't ok, and otherwise the caller emits the error
and deletes/nullifies the inline asm insn.
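A hypothetical example of the kind of asm that can reach this path (not the
testcase from the PR; x86 register names chosen only for illustration): both
the earlyclobber output and the input are user-defined hard registers pinned
to the same register, so no reload can satisfy the constraints and the insn
must be rejected with an error rather than an ICE.

void
foo (void)
{
  register int out asm ("eax");  /* user-defined hard register */
  register int in asm ("eax");   /* conflicts with the earlyclobber output */
  asm ("" : "=&r" (out) : "r" (in));
}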
2021-02-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/97971
* lra-constraints.c (process_alt_operands): For inline asm, don't call
fatal_insn, but instead return false.
Jakub Jelinek [Wed, 3 Feb 2021 08:04:26 +0000 (09:04 +0100)]
i386: Remove V1DImode shift expanders [PR98287]
On Tue, Feb 02, 2021 at 02:23:55PM +0100, Richard Biener wrote:
> All I say is that the x86 target
> should either not advertise V1DF shifts or advertise the basic
> ops that reasonable simplification would expect to exist.
The backend has several V1?Imode shifts, but optabs only for the V1DImode
ones:
Tamar Christina [Wed, 3 Feb 2021 08:06:11 +0000 (08:06 +0000)]
slp: Split out patterns away from using SLP_ONLY into their own flag
Previously the SLP pattern matcher used STMT_VINFO_SLP_VECT_ONLY as a way
to dissolve the SLP-only patterns during SLP cancellation.  However, it seems
the semantics of STMT_VINFO_SLP_VECT_ONLY are slightly different from what
I expected.
Namely that the non-SLP path can still use a statement marked
STMT_VINFO_SLP_VECT_ONLY. One such example is masked loads which are used both
in the SLP and non-SLP path.
To fix this I now introduce a new flag STMT_VINFO_SLP_VECT_ONLY_PATTERN which is
used only by the pattern matcher.
Jason Merrill [Tue, 2 Feb 2021 15:08:48 +0000 (10:08 -0500)]
c++: Member fns and deduction guide rewriting [PR98929]
My patch for 96199 had us re-substitute the parameter types of a constructor
in order to rewrite mentions of members into dependent references. We need
to do that for member functions, too.
Kyrylo Tkachov [Tue, 2 Feb 2021 13:28:55 +0000 (13:28 +0000)]
aarch64: Reimplement vqmovun_high* intrinsics using builtins
Another transition from inline asm to builtin.
Only three intrinsics are converted this time, but they use the "+w" constraint in their
inline asm and are therefore more likely to generate redundant moves, so they benefit
more from the reimplementation.
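A small usage sketch of one of the converted intrinsics (vqmovun_high_s16;
the semantics described in the comment follow the ACLE specification):

#include <arm_neon.h>

uint8x16_t
narrow_high (uint8x8_t low, int16x8_t wide)
{
  /* Narrows 'wide' with unsigned saturation into the high half of the
     result, keeping 'low' as the low half; expected to map to SQXTUN2.  */
  return vqmovun_high_s16 (low, wide);
}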
Kyrylo Tkachov [Mon, 1 Feb 2021 23:03:49 +0000 (23:03 +0000)]
aarch64: Update flags for bfloat16 builtins
This patch updates the flags for the bfloat16 builtins.
The bfdot ones aren't affected by the FPCR/FPSR, so they can be AUTO_FP,
whereas the bfmlal ones follow the normal floating-point instructions and get FP.
gcc/ChangeLog:
* config/aarch64/aarch64-simd-builtins.def (bfdot_lane, bfdot_laneq): Use
AUTO_FP flags.
(bfmlalb_lane, bfmlalt_lane, bfmlalb_lane_q, bfmlalt_lane_q): Use FP flags.
Kyrylo Tkachov [Mon, 1 Feb 2021 22:51:11 +0000 (22:51 +0000)]
aarch64: Add and use FLAG_LOAD in builtins
We already have a STORE flag that we use for builtins. This patch introduces a LOAD set
that uses AUTO_FP and FLAG_READ_MEMORY. This allows for more aggressive optimisation of the load
intrinsics.
Turns out we have a great many testcases that do:
float16x4x2_t
f_vld2_lane_f16 (float16_t * p, float16x4x2_t v)
{
  float16x4x2_t res;
  /* { dg-error "lane 4 out of range 0 - 3" "" { target *-*-* } 0 } */
  res = vld2_lane_f16 (p, v, 4);
  /* { dg-error "lane -1 out of range 0 - 3" "" { target *-*-* } 0 } */
  res = vld2_lane_f16 (p, v, -1);
  return res;
}
but since the first res is unused, it now gets eliminated early on, before we get to give
an error message.  Ideally we'd like to warn for both.
This patch takes the conservative approach and doesn't convert the load-lane builtins to
LOAD; that's something we can improve later.
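As a hypothetical illustration of what FLAG_READ_MEMORY (with no write or FP
side effects) enables, two identical loads through an unchanged pointer can
now be treated as redundant:

#include <arm_neon.h>

float32x4_t
sum_twice (const float32_t *p)
{
  float32x4_t a = vld1q_f32 (p);
  float32x4_t b = vld1q_f32 (p);   /* candidate for CSE with the first load */
  return vaddq_f32 (a, b);
}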
Kyrylo Tkachov [Mon, 1 Feb 2021 21:21:38 +0000 (21:21 +0000)]
aarch64: Relax some builtins to AUTO_FP
This patch relaxes the flags for some builtins to AUTO_FP. These
builtins do permutes and similar, so they shouldn't get the FP flags
when operating on floating-point modes as they don't care about
FPCR/FPSR and exceptions.
Kyrylo Tkachov [Mon, 1 Feb 2021 17:40:20 +0000 (17:40 +0000)]
aarch64: Relax builtin flags for integer builtins
This patch relaxes the flags for most integer builtins to NONE as they don't read/write memory
and don't care about the FPCR/FPSR or exceptions so we should be more aggressive with them.
This leads to fallout in a testcase where the result of an intrinsic was unused and it is now
DCE'd. The testcase is adjusted.
Jakub Jelinek [Tue, 2 Feb 2021 09:32:23 +0000 (10:32 +0100)]
tree-vect-patterns: Don't create over widening patterns for stmts used in reductions [PR98848]
As discussed in the PR, the reduction code isn't able to cope with type
promotions/demotions in the reduction computation, so if we recognize an
over-widening pattern that has vect_reduction_def type, we most likely make
it non-vectorizable.
2021-02-02 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/98848
* tree-vect-patterns.c (vect_recog_over_widening_pattern): Punt if
STMT_VINFO_DEF_TYPE (last_stmt_info) is vect_reduction_def.
* gcc.dg/vect/pr98848.c: New test.
* gcc.dg/vect/pr92205.c: Remove xfail.
Alexandre Oliva [Tue, 2 Feb 2021 02:59:06 +0000 (23:59 -0300)]
restore current_function_decl after re-gimplifying nested ADDR_EXPRs
Ada makes extensive use of nested functions, which turn all automatic
variables of the enclosing function that are used in nested ones into
members of an artificial FRAME record type.
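For illustration (a hypothetical C analogue of the Ada case, using GCC's
nested-function extension): the enclosing function's local that the nested
function uses becomes a member of the artificial FRAME record, and asan
marks it via its address.

int
outer (void)
{
  int x = 0;                  /* rewritten into FRAME.x by tree-nested.c */
  void inc (void) { x++; }    /* nested function capturing x */
  inc ();
  return x;
}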
The address of a local variable is usually passed to the asan marking
functions without using a temporary.  asan_expand_mark_ifn will reject an
ADDR_EXPR if it has been split out from the call into an SSA_NAME.
Taking the address of a member of FRAME within a nested function was
not regarded as a gimple val: while introducing FRAME variables,
current_function_decl pointed to the outermost function, even while
processing a nested function, so decl_address_invariant_p, checking
that the context of the variable is current_function_decl, returned
false for such ADDR_EXPRs.
decl_address_invariant_p, called when determining whether an
expression is a legitimate gimple value, compares the context of
automatic variables with current_function_decl. Some of the
tree-nested function processing doesn't set current_function_decl, but
ADDR_EXPR-processing bits temporarily override it. However, they
restore it before re-gimplifying, which causes even ADDR_EXPRs
referencing automatic variables in the FRAME struct of a nested
function to not be regarded as address-invariant.
This patch moves the restores of current_function_decl in the
ADDR_EXPR-handling bits after the re-gimplification, so that the
correct current_function_decl is used when testing for address
invariance.
for gcc/ChangeLog
* tree-nested.c (convert_nonlocal_reference_op): Move
current_function_decl restore after re-gimplification.
(convert_local_reference_op): Likewise.
David Malcolm [Tue, 2 Feb 2021 02:54:11 +0000 (21:54 -0500)]
analyzer: directly explore within static functions [PR93355,PR96374]
PR analyzer/93355 tracks that -fanalyzer fails to report the FILE *
leak in read_alias_file in intl/localealias.c.
One reason for the failure is that read_alias_file is marked as
"static", and the path leading to the single call of
read_alias_file is falsely rejected as infeasible due to
PR analyzer/96374. I have been attempting to fix that bug, but
don't have a good solution yet.
Previously, -fanalyzer only directly explored "static" functions
if they were needed for call summaries, instead forcing them to
be indirectly explored, but if we have a feasibility bug like
above, we will fail to report any issues in a function that's
only called by such a falsely infeasible path.
It now seems wrong to me to reject directly exploring static
functions: even if there is currently no way to call a function,
it seems reasonable to warn about bugs within them, since
otherwise these latent bugs are a timebomb in the code.
Hence this patch reworks toplevel_function_p to directly explore
almost all functions, working around these feasibility issues.
It introduces a naming convention that "__analyzer_"-prefixed
function names don't get directly explored, since this is
useful in the analyzer's DejaGnu-based tests.
This workaround gets PR analyzer/93355 closer to working, but
unfortunately there is a second instance of PR analyzer/96374
within read_alias_file itself which means even with this patch
-fanalyzer falsely rejects the path as infeasible.
Still, this ought to help in other cases, and simplifies the
implementation.
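A hypothetical DejaGnu-style example of the naming convention: a helper whose
name starts with "__analyzer_" is not used as a direct entry point, so it is
only analyzed via its caller.

static void
__analyzer_helper (int *p)
{
  *p = 42;                  /* analyzed only via the call below */
}

void
test (void)
{
  int i;
  __analyzer_helper (&i);
}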
gcc/analyzer/ChangeLog:
PR analyzer/93355
PR analyzer/96374
* engine.cc (toplevel_function_p): Simplify so that
we only reject functions with a "__analyzer_" prefix.
(add_any_callbacks): Delete.
(exploded_graph::build_initial_worklist): Update for
dropped param of toplevel_function_p.
(exploded_graph::build_initial_worklist): Don't bother
looking for callbacks that are reachable from global
initializers.
gcc/testsuite/ChangeLog:
PR analyzer/93355
PR analyzer/96374
* gcc.dg/analyzer/conditionals-3.c: Add "__analyzer_"
prefix to support subroutines where necessary.
* gcc.dg/analyzer/data-model-1.c: Likewise.
* gcc.dg/analyzer/feasibility-1.c (called_by_test_6a): New.
(test_6a): New.
* gcc.dg/analyzer/params.c: Add "__analyzer_" prefix to support
subroutines where necessary.
* gcc.dg/analyzer/pr96651-2.c: Likewise.
* gcc.dg/analyzer/signal-4b.c: Likewise.
* gcc.dg/analyzer/single-field.c: Likewise.
* gcc.dg/analyzer/torture/conditionals-2.c: Likewise.
David Malcolm [Tue, 2 Feb 2021 02:52:41 +0000 (21:52 -0500)]
analyzer: add more feasibility test cases [PR93355,PR96374]
This patch adds a couple more reduced test cases derived from the
integration test for PR analyzer/93355. In both cases, the analyzer
falsely rejects the buggy code paths as being infeasible due to
PR analyzer/96374, and so the tests are marked as XFAIL for now.
gcc/testsuite/ChangeLog:
PR analyzer/93355
PR analyzer/96374
* gcc.dg/analyzer/pr93355-localealias-feasibility-2.c: New test.
* gcc.dg/analyzer/pr93355-localealias-feasibility-3.c: New test.
Kyrylo Tkachov [Mon, 1 Feb 2021 21:10:35 +0000 (21:10 +0000)]
aarch64: Reimplement vrshrn* intrinsics using builtins
This patch moves the vrshrn* intrinsics to builtins away from inline
asm.
It's a bit of code, but it's very similar to the recent vshrn*
reimplementation except that we use an unspec rather than standard RTL
codes for the functionality.
David Malcolm [Mon, 1 Feb 2021 20:13:39 +0000 (15:13 -0500)]
analyzer: fix false positives with *UNKNOWN_PTR [PR98918]
PR analyzer/98918 reports various false positives and state explosions
on correct code that frees nodes and other pointers in a singly-linked
list.
The issue is that state-merger in the loop leads to UNKNOWN_VALUEs,
and these are then erroneously used to form compound symbolic values
and regions, such as:
INIT_VAL((*UNKNOWN(struct marker *)).ref)
and:
(*INIT_VAL((*UNKNOWN(struct marker * *))))
The malloc state machine then treats these symbolic values as
identifying specific pointers, and thus e.g. erroneously reports a
double-free when
INIT_VAL((*UNKNOWN(struct marker *)).ref)
is freed twice (on subsequent iterations of the loop).
Similarly, the increasingly complex compound symbolic values have
sm-state which prevents state merging, and eventually lead to the
analysis hitting safety limits and stopping.
This patch makes various compound values involving UNKNOWN be
themselves UNKNOWN, resolving both the false positives and the state
explosions.
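A hypothetical reduction of the kind of code involved (not the exact testcase
added by the patch): freeing every node of a singly-linked list in a loop,
where state merging across iterations produces the unknown pointers described
above.

#include <stdlib.h>

struct marker
{
  struct marker *next;
  char *ref;
};

void
free_list (struct marker *m)
{
  while (m)
    {
      struct marker *next = m->next;
      free (m->ref);     /* used to be falsely reported as a double-free */
      free (m);
      m = next;
    }
}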
gcc/analyzer/ChangeLog:
PR analyzer/98918
* region-model-manager.cc
(region_model_manager::get_or_create_initial_value):
Fold the initial value of *UNKNOWN_PTR to an UNKNOWN value.
(region_model_manager::get_field_region): Fold the value
of UNKNOWN_PTR->FIELD to *UNKNOWN_PTR_OF_&FIELD_TYPE.
gcc/testsuite/ChangeLog:
PR analyzer/98918
* gcc.dg/analyzer/pr98918.c: New test.
François Dumont [Thu, 28 Jan 2021 18:00:56 +0000 (19:00 +0100)]
libstdc++: Make deque iterator operator- usable with value-init iterators
N3644 implies that operator- can be used on value-initialized iterators.  We
now return 0 if both iterators are value-initialized.  If only one is
value-initialized, the behavior remains undefined: we return the result of the
normal computation, which is a meaningless value.
libstdc++-v3/ChangeLog:
PR libstdc++/70303
* include/bits/stl_deque.h (std::deque<>::operator-(iterator, iterator)):
Return 0 if both iterators are value-initialized.
* testsuite/23_containers/deque/70303.cc: New test.
* testsuite/23_containers/vector/70303.cc: New test.
Kyrylo Tkachov [Mon, 1 Feb 2021 15:29:13 +0000 (15:29 +0000)]
aarch64: Reimplement vmovl_high_* intrinsics using builtins
The vmovl_high_* intrinsics map down to the SXTL2/UXTL2 instructions
that already have appropriately-named patterns and expanders,
so it's straightforward to wire them up.
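A small usage sketch (the s8 variant shown here is one of the wired-up
intrinsics; the comment reflects its ACLE semantics):

#include <arm_neon.h>

int16x8_t
widen_high (int8x16_t v)
{
  /* Expected to map to SXTL2: sign-extend the high eight lanes.  */
  return vmovl_high_s8 (v);
}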
Patrick Palka [Mon, 1 Feb 2021 15:27:45 +0000 (10:27 -0500)]
c++: Fix ICE from verify_ctor_sanity [PR98295]
In this testcase we're crashing during constexpr evaluation of the
ARRAY_REF b[0] as part of evaluation of the lambda's by-copy capture of b
(which is encoded as a VEC_INIT_EXPR<b>). Since A's constexpr default
constructor is not yet defined, b's initialization is not actually
constant, but because A is an empty type, evaluation of b from
cxx_eval_array_ref is successful and yields an empty CONSTRUCTOR.
And since this CONSTRUCTOR is empty, we {}-initialize the desired array
element, and end up crashing from verify_ctor_sanity during evaluation
of this initializer because we updated new_ctx.ctor without updating
new_ctx.object: the former now has type A[3] and the latter is still the
target of a TARGET_EXPR for b[0][0] created from cxx_eval_vec_init
(and so has type A).
This patch fixes this by setting new_ctx.object appropriately at the
same time that we set new_ctx.ctor from cxx_eval_array_reference.
gcc/cp/ChangeLog:
PR c++/98295
* constexpr.c (cxx_eval_array_reference): Also set
new_ctx.object when setting new_ctx.ctor.
gcc/testsuite/ChangeLog:
PR c++/98295
* g++.dg/cpp0x/constexpr-98295.C: New test.
Marek Polacek [Fri, 29 Jan 2021 16:29:25 +0000 (11:29 -0500)]
c++: Improve sorry for __builtin_has_attribute [PR98355]
__builtin_has_attribute doesn't work in templates yet (bug 92104), so
in r11-471 I added a sorry. But that only caught type-dependent
expressions and we also want to sorry on value-dependent expressions.
This patch uses uses_template_parms (u_t_p), but guarded with
processing_template_decl (p_t_d), because u_t_p sets p_t_d and then
value_dependent_expression_p (v_d_e_p) considers variables with reference
types value-dependent, which breaks builtin-has-attribute-6.c.
This is a regression and I also plan to apply this to gcc-10.
gcc/cp/ChangeLog:
PR c++/98355
* parser.c (cp_parser_has_attribute_expression): Use
uses_template_parms instead of type_dependent_expression_p.
gcc/testsuite/ChangeLog:
PR c++/98355
* g++.dg/ext/builtin-has-attribute2.C: New test.
Jason Merrill [Wed, 27 Jan 2021 22:15:39 +0000 (17:15 -0500)]
c++: alias in qualified-id in template arg [PR98570]
template_args_equal has handled dependent alias specializations for a while,
but in this testcase the actual template argument is a SCOPE_REF, so we
called cp_tree_equal, which doesn't handle aliases specially when we get to
them.
This patch generalizes this by setting a flag so structural_comptypes will
check for template alias equivalence (if we aren't doing partial ordering).
The existing flag, comparing_specializations, was too broad; in particular,
when we're doing decls_match, we want to treat corresponding parameters as
equivalent, so we need to separate that from alias comparison. So I
introduce the comparing_dependent_aliases flag.
From looking at other uses of comparing_specializations, it seems to me that
the new flag is what modules wants, as well.
The other use of comparing_specializations in structural_comptypes is a hack
to deal with spec_hasher::equal not calling push_to_top_level, which we
also don't want to tie to the alias comparison semantics.
This patch also changes how we get to structural comparison of aliases from
checking TYPE_CANONICAL in comptypes to marking the aliases as getting
structural comparison when they are built, which is more consistent with how
e.g. typename is handled.
As I mention in the comment for comparing_dependent_aliases, I think the
default should be to treat different dependent aliases for the same type as
distinct, only treating them as equal during deduction (particularly partial
ordering). But that's a matter for the C++ committee, to try in stage 1.
gcc/cp/ChangeLog:
PR c++/98570
* cp-tree.h: Declare it.
* pt.c (comparing_dependent_aliases): New flag.
(template_args_equal, spec_hasher::equal): Set it.
(dependent_alias_template_spec_p): Assert that we don't
get non-types other than error_mark_node.
(instantiate_alias_template): SET_TYPE_STRUCTURAL_EQUALITY
on complex alias specializations. Set TYPE_DEPENDENT_P here.
(tsubst_decl): Not here.
* module.cc (module_state::read_cluster): Set
comparing_dependent_aliases instead of
comparing_specializations.
* tree.c (cp_tree_equal): Remove comparing_specializations
module handling.
* typeck.c (structural_comptypes): Adjust.
(comptypes): Remove comparing_specializations handling.
gcc/testsuite/ChangeLog:
PR c++/98570
* g++.dg/cpp0x/alias-decl-targ1.C: New test.
Jonathan Wright [Sun, 31 Jan 2021 14:47:04 +0000 (14:47 +0000)]
testsuite: aarch64: Add tests for vmlXl_high intrinsics
Add tests for vmlal_high_* and vmlsl_high_* Neon intrinsics. Since
these intrinsics are only supported for AArch64, these tests are
restricted to only run on AArch64 targets.
gcc/testsuite/ChangeLog:
2021-01-31 Jonathan Wright <jonathan.wright@arm.com>
* gcc.target/aarch64/advsimd-intrinsics/vmlXl_high.inc:
New test template.
* gcc.target/aarch64/advsimd-intrinsics/vmlXl_high_lane.inc:
New test template.
* gcc.target/aarch64/advsimd-intrinsics/vmlXl_high_laneq.inc:
New test template.
* gcc.target/aarch64/advsimd-intrinsics/vmlXl_high_n.inc:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlal_high.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlal_high_lane.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlal_high_laneq.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlal_high_n.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlsl_high.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlsl_high_lane.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlsl_high_laneq.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmlsl_high_n.c:
New test.
Jonathan Wright [Fri, 29 Jan 2021 00:45:30 +0000 (00:45 +0000)]
testsuite: aarch64: Add tests for vmull_high intrinsics
Add tests for vmull_high_* Neon intrinsics. Since these intrinsics
are only supported for AArch64, these tests are restricted to only
run on AArch64 targets.
gcc/testsuite/ChangeLog:
2021-01-29 Jonathan Wright <jonathan.wright@arm.com>
* gcc.target/aarch64/advsimd-intrinsics/vmull_high.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmull_high_lane.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmull_high_laneq.c:
New test.
* gcc.target/aarch64/advsimd-intrinsics/vmull_high_n.c:
New test.
The existing RTL that combine should try to match to remove the vec_dup is
aarch64_vec_<su>mlal_lane<Qlane> and aarch64_vec_<su>mult_lane<Qlane>, which
expect the select register to be the second operand of mult.
The pattern introduced has it as the first operand, so combine was unable to
remove the vec_dup.  This flips the order so that the patterns optimize
correctly.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (aarch64_<su>mlal_n<mode>,
aarch64_<su>mlsl<mode>, aarch64_<su>mlsl_n<mode>): Flip mult operands.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/advsimd-intrinsics/smlal-smlsl-mull-optimized.c: New test.
Xionghu Luo [Mon, 1 Feb 2021 02:29:14 +0000 (20:29 -0600)]
testsuite: Update pr79251 ilp32 store regex
BE ilp32 Linux generates extra stack stwu instructions, which shouldn't
be counted; \m … \M is needed around each instruction, not just at the
beginning and end of the entire pattern.
gcc/testsuite/ChangeLog:
2021-02-01 Xionghu Luo <luoxhu@linux.ibm.com>
* gcc.target/powerpc/pr79251.p8.c: Update store count regex.
* gcc.target/powerpc/pr79251.p9.c: Likewise.
Iain Sandoe [Sun, 31 Jan 2021 12:24:44 +0000 (12:24 +0000)]
testsuite, Darwin : Skip ELF-specific tests.
A number of ELF-specific tests were introduced in r11-6140, one
of which fails on all Mach-O/Darwin platforms.
On examination, the tests have no meaningful parallel for Mach-O
which dead strips at the symbol level, and does not make use of
function sections (the fact that a used and an unused symbol are
placed in the same section will not affect dead stripping).
Given that the tests do not demonstrate anything useful on Darwin,
skip them.
Aaron Sawdey [Tue, 8 Dec 2020 18:07:04 +0000 (12:07 -0600)]
Fusion patterns for logical-logical
This patch adds a new function to genfusion.pl to generate patterns for
logical-logical fusion. They are enabled by default for power10 and can
be disabled by -mno-power10-fusion-2logical or -mno-power10-fusion.
gcc/ChangeLog
* config/rs6000/genfusion.pl (gen_2logical): New function to
generate patterns for logical-logical fusion.
* config/rs6000/fusion.md: Regenerated patterns.
* config/rs6000/rs6000-cpus.def: Add
OPTION_MASK_P10_FUSION_2LOGICAL.
* config/rs6000/rs6000.c (rs6000_option_override_internal):
Enable logical-logical fusion for p10.
* config/rs6000/rs6000.opt: Add -mpower10-fusion-2logical.
David Edelsohn [Wed, 27 Jan 2021 21:47:22 +0000 (16:47 -0500)]
aix: Permit use of AIX Vector extended ABI mode
AIX only permits use of Altivec VSRs 20-31 in a Vector Extended ABI mode.
This patch explicitly enables use of the VSRs using the new -mabi=vec-extabi
command line option also implemented in LLVM for AIX.
Bootstrapped on powerpc-ibm-aix7.2.3.0 and powerpc64le-linux-gnu.
gcc/ChangeLog:
* config/rs6000/rs6000.opt (mabi=vec-extabi): New.
(mabi=vec-default): New.
* config/rs6000/rs6000-c.c (rs6000_target_modify_macros): Define
__EXTABI__ for AIX Vector extended ABI.
* config/rs6000/rs6000.c (rs6000_debug_reg_global): Print AIX Vector
extabi info.
(conditional_register_usage): If AIX vec_extabi enabled, vs20-vs31
are non-volatile.
* doc/invoke.texi (PowerPC mabi): Add AIX vec-extabi and vec-default.
Jakub Jelinek [Sat, 30 Jan 2021 13:58:14 +0000 (14:58 +0100)]
i386, df: Fix up gcc.c-torture/compile/20051216-1.c -O1 -march=cascadelake
> rtl-optimization/98863 - tame i386 specific RPAD pass
>
> caused
>
> FAIL: gcc.c-torture/compile/20051216-1.c -O1 (internal compiler error)
> FAIL: gcc.c-torture/compile/20051216-1.c -O1 (test for excess errors)
The problem is that we don't revert the df flags afterwards.
This patch fixes it by clearing DF_DEFER_INSN_RESCAN after
calling df_process_deferred_rescans, so that it doesn't leak into following
unprepared passes that expect non-deferred rescans.
2021-01-30 Jakub Jelinek <jakub@redhat.com>
* config/i386/i386-features.c (remove_partial_avx_dependency): Clear
DF_DEFER_INSN_RESCAN after calling df_process_deferred_rescans.
Jakub Jelinek [Sat, 30 Jan 2021 09:52:57 +0000 (10:52 +0100)]
testsuite: Fix up gomp/simd-{2,3}.c tests [PR98243]
The test is (intentionally) not in gcc.dg/vect/, as it needs -fopenmp and uses
OpenMP directives other than simd, and therefore can't rely on the default
VECTFLAGS and so, I think, can't safely use the vect_int effective target
either.  So I'm just making sure it is vectorized on x86 and on aarch64 (the
latter as an example of a target that doesn't need any extra options to get
the vectorization).
2021-01-30 Jakub Jelinek <jakub@redhat.com>
PR testsuite/98243
* gcc.dg/gomp/simd-2.c: Add -msse2 on x86. Restrict
scan-tree-dump-times to x86 and aarch64 targets.
* gcc.dg/gomp/simd-3.c: Likewise.
Michael Meissner [Fri, 29 Jan 2021 22:44:54 +0000 (17:44 -0500)]
PR testsuite/98870: Fix IEEE 128-bit fortran test
This test started failing when I changed the mapping of IEEE 128-bit long
double built-in functions on 2021-01-28. This patch fixes the test so it
uses the correct name.
gcc/testsuite/
2021-01-29 Michael Meissner <meissner@linux.ibm.com>
PR testsuite/98870
* gcc.target/powerpc/ppc-fortran/ieee128-math.f90: Fix the
expected result.
[PR97701] LRA: Don't narrow class only for REG or MEM.
Reload pseudos of ALL_REGS class did not have their class narrowed from the
constraint in an insn (set (pseudo) (lo_sum ...)) because lo_sum is considered
an object (OBJECT_P), although the insn is not a classic move.  To permit
narrowing we now use MEM_P and REG_P instead of OBJECT_P.
gcc/ChangeLog:
PR target/97701
* lra-constraints.c (in_class_p): Don't narrow class only for REG
or MEM.
Will Schmidt [Fri, 23 Oct 2020 22:28:17 +0000 (17:28 -0500)]
[PATCH, rs6000] improve vec_ctf invalid parameter handling.
Hi,
Per PR91903, GCC ICEs when we attempt to pass a variable
(or out of range value) into the vec_ctf() builtin. Per
investigation, the parameter checking exists for this
builtin with the int types, but was missing for
the long long types. This problem also occurs for the
vec_cts() builtin, which is also fixed by this patch.
This patch adds the missing CODE_FOR_* entries to
rs6000_expand_binop_builtin to cover that scenario.
This patch also updates some existing tests to remove
calls to vec_ctf() and vec_cts() that contain negative
values.
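A hypothetical illustration of the problem (not one of the adjusted
testcases): with a variable or out-of-range scale argument, the long long
forms of vec_ctf used to ICE instead of being diagnosed like the int forms.

#include <altivec.h>

vector double
scale_convert (vector signed long long v, int scale)
{
  /* The second argument must be a constant in the range 0-31; a variable
     here should produce an error message, not an ICE.  */
  return vec_ctf (v, scale);
}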
PR target/91903
2020-01-29 Will Schmidt <will_schmidt@vnet.ibm.com>
gcc/ChangeLog:
* config/rs6000/rs6000-call.c (rs6000_expand_binop_builtin): Add
clauses for CODE_FOR_vsx_xvcvuxddp_scale and
CODE_FOR_vsx_xvcvsxddp_scale to the parameter checking code.
Nathan Sidwell [Fri, 29 Jan 2021 16:30:40 +0000 (08:30 -0800)]
c++: Fix unordered entity array [PR 98843]
A couple of module invariants are that the modules are always
allocated in ascending order and appended to the module array. The
entity array is likewise ordered, with each module having spans in
that array in ascending order. Prior to header-units, this was
provided by the way import declarations were encountered. With
header-units we need to load the preprocessor state of header units
before we parse the C++, and this can lead to incorrect ordering of
the entity array. I had made the initialization of a module's
language state a little too lazy. This moves the allocation of entity
array spans into the initial read of a module, thus ensuring the
ordering of those spans. We won't be looking in them until we've
loaded the language portions of that particular module, and even if we
did, we'd find NULLs there and issue a diagnostic.
PR c++/98843
gcc/cp/
* module.cc (module_state_config): Add num_entities field.
(module_state::read_entities): The entity_ary span is
already allocated.
(module_state::write_config): Write num_entities.
(module_state::read_config): Read num_entities.
(module_state::write): Set config's num_entities.
(module_state::read_initial): Allocate the entity ary
span here.
(module_state::read_language): Do not set entity_lwm
here.
gcc/testsuite/
* g++.dg/modules/pr98843_a.C: New.
* g++.dg/modules/pr98843_b.H: New.
* g++.dg/modules/pr98843_c.C: New.
Andrew MacLeod [Fri, 29 Jan 2021 14:23:48 +0000 (09:23 -0500)]
tree-optimization/98866 - Compile time hog in VRP
Don't track [1, +INF] for pointer types; treat them as invariant for caching
purposes, as they cannot be further refined without evaluating to UNDEFINED.
PR tree-optimization/98866
* gimple-range-gori.h (gori_compute::set_range_invariant): New.
* gimple-range-gori.cc (gori_map::set_range_invariant): New.
(gori_map::m_maybe_invariant): Rename from all_outgoing.
(gori_map::gori_map): Rename all_outgoing to m_maybe_invariant.
(gori_map::is_export_p): Ditto.
(gori_map::calculate_gori): Ditto.
(gori_compute::set_range_invariant): New.
* gimple-range.cc (gimple_ranger::range_of_stmt): Set range
invariant for pointers evaluating to [1, +INF].
Richard Biener [Fri, 29 Jan 2021 15:02:36 +0000 (16:02 +0100)]
rtl-optimization/98863 - tame i386 specific RPAD pass
This stops analyzing DF with expensive problems which we do not use at all
and which somehow cause 5GB of memory to leak.  Instead, just do a deferred
rescan of added insns.
2021-01-29 Richard Biener <rguenther@suse.de>
PR rtl-optimization/98863
* config/i386/i386-features.c (remove_partial_avx_dependency):
Do not perform DF analysis.
(pass_data_remove_partial_avx_dependency): Remove
TODO_df_finish.
Kyrylo Tkachov [Fri, 29 Jan 2021 13:10:46 +0000 (13:10 +0000)]
aarch64: Reimplement vabdl_high* intrinsics using builtins
This patch reimplements the vabdl_high intrinsics using builtins.
It slightly cleans up the RTL pattern (the mode iterators) but nothing
interesting apart from that.
Kyrylo Tkachov [Fri, 29 Jan 2021 10:57:44 +0000 (10:57 +0000)]
aarch64: Reimplement vabal* intrinsics using builtins
This patch reimplements the vabal intrinsics with builtins.
The RTL pattern is cleaned up to emit the right .8b suffixes for the
inputs (though .16b is also accepted)
and iterate over the right modes. The pattern's only other use is
through the sadv16qi expander, which is adjusted.
I've verified that the codegen for sadv16qi is not worse off.
gcc/ChangeLog:
* config/aarch64/aarch64-simd-builtins.def (sabal): Define
builtin.
(uabal): Likewise.
* config/aarch64/aarch64-simd.md (aarch64_<sur>abal<mode>_4):
Rename to...
(aarch64_<sur>abal<mode>): ... This.
(<sur>sadv16qi): Adjust use of the above.
* config/aarch64/arm_neon.h (vabal_s8): Reimplement using
builtin.
(vabal_s16): Likewise.
(vabal_s32): Likewise.
(vabal_u8): Likewise.
(vabal_u16): Likewise.
(vabal_u32): Likewise.
Kyrylo Tkachov [Thu, 28 Jan 2021 13:10:07 +0000 (13:10 +0000)]
aarch64: Reimplement vaddlv* intrinsics using builtins
This patch reimplements the vaddlv* intrinsics using builtins.
The vaddlv_s32 and vaddlv_u32 intrinsics actually perform a pairwise
SADDLP/UADDLP instead of a SADDLV/UADDLV, but because they only operate on
two elements the semantics are the same.
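A small usage sketch showing why the two are equivalent here: with only two
lanes, a pairwise widening add produces the same 64-bit sum as an
across-vector widening add.

#include <arm_neon.h>

int64_t
sum_lanes (int32x2_t v)
{
  /* SADDLP and SADDLV give the same result for a two-lane vector.  */
  return vaddlv_s32 (v);
}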
Richard Biener [Fri, 29 Jan 2021 12:58:28 +0000 (13:58 +0100)]
change unit of --param max-gcse-memory to kB
This changes it from bytes to kB since its value is limited to 2147483648.
2021-01-29 Richard Biener <rguenther@suse.de>
* doc/invoke.texi (--param max-gcse-memory): Document unit
of size.
* gcse.c (gcse_or_cprop_is_too_expensive): Adjust.
* params.opt (--param max-gcse-memory): Adjust default and
document unit of size.
Richard Biener [Fri, 29 Jan 2021 09:23:40 +0000 (10:23 +0100)]
rtl-optimization/98144 - tame REE memory usage
This changes the REE dataflow to change the explicit all-ones
starting solution to be implicit via a visited flag, removing
the need to initially start with fully populated bitmaps for
all basic-blocks. That reduces peak memory use when compiling
the RTL checking enabled insn-extract.c testcase from PR98144
from 6GB to less than 2GB.
2021-01-29 Richard Biener <rguenther@suse.de>
PR rtl-optimization/98144
* df.h (df_mir_bb_info): Add con_visited member.
* df-problems.c (df_mir_alloc): Initialize con_visited,
do not fully populate IN and OUT.
(df_mir_reset): Likewise.
(df_mir_confluence_0): Set con_visited.
(df_mir_confluence_n): Properly handle implicitly
fully populated IN and OUT as designated by con_visited
and update con_visited accordingly.
Jakub Jelinek [Fri, 29 Jan 2021 10:54:22 +0000 (11:54 +0100)]
arm: Fix up -mcpu=iwmmxt ICEs [PR98849]
The
https://gcc.gnu.org/r11-6707-g7432f255b70811dafaf325d94036ac580891de69
https://gcc.gnu.org/r11-6708-gbfab355012ca0f5219da8beb04f2fdaf757d34b7
changes moved the vashl/vashr/vlshr expanders from neon.md to vec-common.md
and changed their condition from TARGET_NEON to ARM_HAVE_<MODE>_ARITH,
so that they apply also for TARGET_HAVE_MVE.  But the ARM_HAVE_<MODE>_ARITH
macros are sometimes true also for TARGET_REALLY_IWMMXT, which, at least
from a quick skim of the former iwmmxt*.md, doesn't have such instructions,
so it seems incorrect to enable them for iwmmxt.  Furthermore, even if it
had them, iwmmxt doesn't provide any way to broadcast values in those
modes (no vec_duplicate or vec_init optabs), and the middle end assumes that
if the vector x vector shift/rotate patterns are supported it can emit a
vector x scalar shift/rotate by broadcasting the shift amount to a vector.
As the TARGET_NEON vs. TARGET_REALLY_IWMMXT vs. TARGET_HAVE_MVE never seem
to be enabled together, I think we can just write it the following way.
Note, it seems iwmmxt actually does support vector x scalar shifts, but
doesn't really enable the optabs that would tell the middle-end code that it
does (and neon and mve don't seem to support those).  I'll defer that to
anybody who cares about iwmmxt (if any).
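For illustration (generic GNU C vector extension code, not a testcase from
the PR), this is the kind of vector-by-scalar shift the middle end lowers by
broadcasting the scalar when only vector x vector shift patterns exist:

typedef int v4si __attribute__ ((vector_size (16)));

v4si
shift_all (v4si v, int n)
{
  /* vector << scalar: n gets broadcast to a vector if the target only
     provides vector x vector shifts.  */
  return v << n;
}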
Jakub Jelinek [Fri, 29 Jan 2021 09:30:09 +0000 (10:30 +0100)]
expand: Fix up find_bb_boundaries [PR98331]
When expansion emits some control flow insns etc. inside of a former GIMPLE
basic block, find_bb_boundaries needs to split it into multiple basic
blocks.
The code needs to ignore debug insns in decisions how many splits to do or
where in between some non-debug insns the split should be done, but it can
decide where to put debug insns if they can be kept and otherwise throws
them away (they can't stay outside of basic blocks).
On the following testcase, we end up in the bb from expander with
control flow insn
debug insns
barrier
some other insn
(the some other insn is effectively dead after __builtin_unreachable and
we'll optimize that out later).
Without debug insns, we'd do the split when encountering some other insn
and split after PREV_INSN (some other insn), i.e. after barrier (and the
splitting code then moves the barrier in between basic blocks).
But if there are debug insns, we actually split before the first debug insn
that appeared after the control flow insn, i.e. right after the control flow
insn, and get a basic block that starts with debug insns and has a barrier
in the middle that nothing ever moves out of the bb.  This leads to ICEs and,
even where it doesn't, to behavior different from -g0.
The reason for treating debug insns that way is a different case, e.g.
control flow insn
debug insns
some other insn
or even
control flow insn
barrier
debug insns
some other insn
where splitting before the first such debug insn allows us to keep them
while otherwise we would have to drop them on the floor, and in those
situations we behave the same with -g0 and -g.
So, the following patch fixes it by resetting debug_insn not just when
splitting the blocks (it is set only after seeing a control flow insn and
before splitting for it if needed), but also when seeing a barrier.
This effectively means we always throw away debug insns between a control
flow insn and a following barrier, if any, but there is no way around that:
the control flow insn must be the last in the bb (BB_END) with the BARRIER
right after it, and debug insns aren't allowed outside of a bb.
2021-01-29 Jakub Jelinek <jakub@redhat.com>
PR debug/98331
* cfgbuild.c (find_bb_boundaries): Reset debug_insn when seeing
a BARRIER.
but that caused endless looping in cp_parser_type_specifier_seq (the
while (true) loop) in this invalid test, because we never set a parser
error, therefore cp_parser_type_specifier returned error_mark_node
instead of NULL_TREE, and we never issued the "expected type-specifier"
error.
At first I thought I'd just add cp_parser_simulate_error right before
the return, but that regresses crash81.C -- we'd emit multiple errors
for "T::X". So the next best thing seemed to revert to pre-r11-86
behavior: return early when parser->scope is bad, otherwise proceed to
get the parser error.
gcc/cp/ChangeLog:
PR c++/96137
* parser.c (cp_parser_class_name): If parser->scope is
error_mark_node, return it, otherwise continue.
Ian Lance Taylor [Thu, 28 Jan 2021 23:46:59 +0000 (15:46 -0800)]
gccgo driver: always act as though -g is passed
The go1 compiler always turns on debugging, to support Go stack traces
and functions like runtime.Callers. With the recent switch to turn on
DWARF 5 by default, this caused failures with some versions of gas,
such as 2.35.1, because the assembly code would assume DWARF 5 but the
driver would not pass --gdwarf-5 to gas. gas would then give an
error: "file number less than one".
This change avoids that problem by having the gccgo driver spec add a
-g option to the command line if no other -g option is present. The
newly added -g option is passed to the assembler as --gdwarf-5.
* gospec.c (lang_specific_driver): Add -g if no debugging options
were passed.
Jakub Jelinek [Thu, 28 Jan 2021 23:39:00 +0000 (00:39 +0100)]
c++: Fix -Weffc++ in templates [PR98841]
We emit a bogus warning on the following testcase, suggesting that the
operator should return *this even when it does that already.
The problem is that normally cp_build_indirect_ref_1 ensures that *this
is folded as current_class_ref, but in templates (if return type is
non-dependent, otherwise check_return_expr doesn't check it) it didn't
go through cp_build_indirect_ref_1, but just built another INDIRECT_REF.
Which means it then doesn't compare pointer-equal to current_class_ref.
The following patch fixes it by doing in build_x_indirect_ref for
*this what cp_build_indirect_ref_1 would do.
2021-01-28 Jakub Jelinek <jakub@redhat.com>
PR c++/98841
* typeck.c (build_x_indirect_ref): For *this, return current_class_ref.
Marek Polacek [Thu, 28 Jan 2021 21:21:50 +0000 (16:21 -0500)]
tree: Don't reuse types if TYPE_USER_ALIGN differ [PR94775]
A year ago I submitted this patch:
~~
Here we trip on the TYPE_USER_ALIGN (t) assert in strip_typedefs: it
gets "const d[0]" with TYPE_USER_ALIGN=0 but the result built by
build_cplus_array_type is "const char[0]" with TYPE_USER_ALIGN=1.
When we strip_typedefs the element of the array "const d", we see it's
a typedef_variant_p, so we look at its DECL_ORIGINAL_TYPE, which is
char, but we need to add the const qualifier, so we call
cp_build_qualified_type -> build_qualified_type
where get_qualified_type checks to see if we already have such a type
by walking the variants list, which in this case is:
char -> c -> const char -> const char -> d -> const d
Because check_base_type only checks TYPE_ALIGN and not TYPE_USER_ALIGN,
we choose the first const char, which has TYPE_USER_ALIGN set. If the
element type of an array has TYPE_USER_ALIGN, the array type gets it too.
So we can make check_base_type stricter. I was afraid that it might make
us reuse types less often, but measuring showed that we build the same
number of types with and without the patch, while bootstrapping.
~~
However, the patch broke a few tests on STRICT_ALIGNMENT platforms and
had to be reverted. This is another try. The original patch is kept
unchanged, but I added the finalize_type_size hunk that ought to fix the
STRICT_ALIGNMENT issues.
The problem is that finalize_type_size can clear TYPE_USER_ALIGN on the
main variant of a type, but doesn't clear it on any of the variants.
Then we end up with types which share the same TYPE_MAIN_VARIANT, but
their TYPE_CANONICAL differs and then the usual "canonical types differ
for identical types" follows.
I've created alignas19.C to exercise this scenario. What happens is:
- when parsing the class S we create a type S in xref_tag,
- we see alignas(8) so common_handle_aligned_attribute sets T_U_A in S,
- we parse the member function fn and build_memfn_type creates a copy
of S to add const; this variant has T_U_A set,
- we finish_struct S which calls layout_class_type -> finish_record_type
-> finalize_size_type where we reset T_U_A in S (but const S keeps it),
- finish_non_static_data_member for arr calls maybe_dummy_object with
type = S,
- maybe_dummy_object calls same_type_ignoring_top_level_qualifiers_p
to check if S and TREE_TYPE (current_class_ref), which is const S,
are the same,
- same_type_ignoring_top_level_qualifiers_p creates cv-unqualified
versions of the passed types. Previously we'd use our main variant
S when stripping "const S" of const, but since the T_U_A flags don't
match (check_base_type), we create a new variant S'. Then we crash in
comptypes because S and S' have the same TYPE_MAIN_VARIANT but
different TYPE_CANONICALs.
With my patch we'll clear T_U_A for S's variants too, and then instead
of S' we'll just use S.
gcc/ChangeLog:
PR c++/94775
* stor-layout.c (finalize_type_size): If we reset TYPE_USER_ALIGN in
the main variant, maybe reset it in its variants too.
* tree.c (check_base_type): Return true only if TYPE_USER_ALIGN match.
(check_aligned_type): Check if TYPE_USER_ALIGN match.
gcc/testsuite/ChangeLog:
PR c++/94775
* g++.dg/cpp0x/alignas19.C: New test.
* g++.dg/warn/Warray-bounds15.C: New test.