Peter Bergner [Thu, 1 Sep 2022 02:14:36 +0000 (21:14 -0500)]
rs6000: Don't ICE when we disassemble an MMA variable [PR101322]
When we expand an MMA disassemble built-in with C++ using a pointer that
is cast to a valid MMA type, the type isn't passed down to the expand
machinery, and we end up using the base type of the pointer, which leads to
an ICE. This patch enforces that we always use the correct MMA type regardless
of the pointer type being used.
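A minimal sketch (not the committed testcase) of the kind of C++ code that previously ICEd; the built-in name and the cast to __vector_quad follow the usual MMA usage:

/* Compile for an MMA-enabled target.  The pointer's base type is char,
   but the cast supplies a valid MMA type at the call site.  */
void
foo (char *resp, char *accp)
{
  __builtin_mma_disassemble_acc (resp, (__vector_quad *) accp);
}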
2022-08-31 Peter Bergner <bergner@linux.ibm.com>
gcc/
PR target/101322
* config/rs6000/rs6000-builtin.cc (rs6000_gimple_fold_mma_builtin):
Enforce the use of a valid MMA pointer type.
gcc/testsuite/
PR target/101322
* g++.target/powerpc/pr101322.C: New test.
Joseph Myers [Wed, 31 Aug 2022 22:22:07 +0000 (22:22 +0000)]
c: C2x attributes fixes and updates
Implement some changes to the currently supported C2x standard
attributes that have been made to the specification since they were
first implemented in GCC, and some consequent changes:
* maybe_unused is now supported on labels. In fact that was already
accidentally supported in GCC as a result of sharing the
implementation with __attribute__ ((unused)), but needed to be
covered in the tests.
* As part of the support for maybe_unused on labels, its
__has_c_attribute value changed.
* The issue of maybe_unused accidentally being already supported on
labels showed up the lack of tests for other standard attributes
being incorrectly applied to labels; add such tests.
* Use of fallthrough or nodiscard attributes on labels already
properly resulted in a pedwarn. For the deprecated attribute,
however, there was only a warning, and the wording "'deprecated'
attribute ignored for 'void'" included an unhelpful "for 'void'".
Arrange for the case of the deprecated attribute on a label to be
checked for separately and result in a pedwarn. As with
inappropriate uses of fallthrough (see commit 6c80b1b56dec2691436f3e2676e3d1b105b01b89), it seems reasonable for
this pedwarn to apply regardless of whether [[]] or __attribute__
was used and regardless of whether C or C++ is being compiled.
* Attributes on case or default labels (the standard syntax supports
attributes on all kinds of labels) were quietly ignored, whether or
not appropriate for use in such a context, because they weren't
passed to decl_attributes at all. (Note where I'm changing the
do_case prototype that such a function is actually only defined in
the C front end, not for C++, despite the declaration being in
c-common.h.)
* A recent change as part of the editorial review in preparation for
the C2x CD ballot has changed the __has_c_attribute value for
fallthrough to 201910 to reflect when that attribute was actually
voted into the working draft.
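To illustrate the label-related items above, a hedged sketch of the constructs involved: maybe_unused on a label is accepted (and now covered by tests), while e.g. [[deprecated]] in the same position draws a pedwarn.

int
f (int x)
{
  if (x)
    goto done;
  x = 1;
 [[maybe_unused]] done:
  return x;
}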
Bootstrapped with no regressions for x86_64-pc-linux-gnu.
gcc/c-family/
* c-attribs.cc (handle_deprecated_attribute): Check and pedwarn
for LABEL_DECL.
* c-common.cc (c_add_case_label): Add argument ATTRS. Call
decl_attributes.
* c-common.h (do_case, c_add_case_label): Update declarations.
* c-lex.cc (c_common_has_attribute): For C, produce a result of
201910 for fallthrough and 202106 for maybe_unused.
gcc/c/
* c-parser.cc (c_parser_label): Pass attributes to do_case.
* c-typeck.cc (do_case): Add argument ATTRS. Pass it to
c_add_case_label.
gcc/testsuite/
* gcc.dg/c2x-attr-deprecated-2.c, gcc.dg/c2x-attr-fallthrough-2.c,
gcc.dg/c2x-attr-maybe_unused-1.c, gcc.dg/c2x-attr-nodiscard-2.c:
Add tests of attributes on labels.
* gcc.dg/c2x-has-c-attribute-2.c: Update expected results for
maybe_unused and fallthrough.
Patrick Palka [Wed, 31 Aug 2022 20:45:30 +0000 (16:45 -0400)]
libstdc++: A few more minor <ranges> cleanups
libstdc++-v3/ChangeLog:
* include/bits/ranges_base.h (__advance_fn::operator()): Add
parentheses in assert condition to avoid -Wparentheses warning.
* include/std/ranges (take_view::take_view): Uglify 'base'.
(take_while_view::take_while_view): Likewise.
(elements_view::elements_view): Likewise.
(views::_Zip::operator()): Adjust position of [[nodiscard]] for
compatibility with -fconcepts-ts.
(zip_transform_view::_Sentinel): Uglify 'OtherConst'.
(views::_ZipTransform::operator()): Adjust position of
[[nodiscard]] for compatibility with -fconcepts-ts.
Martin Liska [Wed, 31 Aug 2022 19:55:45 +0000 (21:55 +0200)]
Support --disable-fixincludes.
Always install the limits.h and syslimits.h header files
into the include directory.
When --disable-fixincludes is used, then no system header files
are fixed by the tools in fixincludes. Moreover, the fixincludes
tools are not built any longer.
gcc/ChangeLog:
* Makefile.in: Always install limits.h and syslimits.h to
the include directory.
* configure.ac: Assign STMP_FIXINC blank if
--disable-fixincludes is used.
* configure: Regenerate.
aarch64: Update sizeless tests for recent GNU C changes
The tests for sizeless SVE types include checks that the types
are handled for initialisation purposes in the same way as scalars.
GNU C and C2x now allow scalars to be initialised using empty braces,
so this patch updates the SVE tests to match.
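A hedged sketch of the initialisation rule the tests now follow; the SVE type requires arm_sve.h and an SVE-enabled -march:

#include <arm_sve.h>

void
f (void)
{
  int scalar = {};      /* empty-brace scalar init, now valid in GNU C/C2x */
  svint32_t vec = {};   /* sizeless SVE type expected to behave the same */
}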
Richard Biener [Wed, 31 Aug 2022 13:25:32 +0000 (15:25 +0200)]
Avoid fatal fails in predicate::init_from_control_deps
When processing USE predicates we can drop from the AND chain,
when processing DEF predicates we can drop from the OR chain. Do
that instead of giving up completely. This also removes cases
that should never trigger.
* gimple-predicate-analysis.cc (predicate::init_from_control_deps):
Assert the guard_bb isn't empty and has more than one successor.
Drop appropriate parts of the predicate when an edge fails to
register a predicate.
(predicate::dump): Dump empty predicate as TRUE.
Jonathan Wakely [Wed, 31 Aug 2022 12:57:34 +0000 (13:57 +0100)]
libstdc++: Add noexcept-specifier to std::reference_wrapper::operator()
This isn't required by the standard, but there's an LWG issue suggesting
to add it.
Also use __invoke_result instead of result_of, to match the spec in
recent standards.
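A hedged sketch of the effect, assuming the new specifier mirrors whether the wrapped invocation can throw:

#include <functional>

void f() noexcept;
void g();

static_assert(noexcept(std::ref(f)()), "noexcept invocation propagates");
static_assert(!noexcept(std::ref(g)()), "potentially-throwing one does not");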
libstdc++-v3/ChangeLog:
* include/bits/refwrap.h (reference_wrapper::operator()): Add
noexcept-specifier and use __invoke_result instead of result_of.
* testsuite/20_util/reference_wrapper/invoke-noexcept.cc: New test.
Richard Biener [Wed, 31 Aug 2022 12:04:46 +0000 (14:04 +0200)]
tree-optimization/90994 - fix uninit diagnostics with EH
r12-3640-g94c12ffac234b2 sneaked in a hack to avoid the diagnostic
for the testcase in PR90994 which sees non-call EH control flow
confusing predicate analysis. The following patch instead adjusts
the existing code handling EH to handle non-calls and do what I
think was intended.
PR tree-optimization/90994
* gimple-predicate-analysis.cc (predicate::init_from_control_deps):
Ignore exceptional control flow and skip the edge for the purpose of
predicate generation also for non-calls.
Richard Biener [Tue, 30 Aug 2022 08:31:26 +0000 (10:31 +0200)]
tree-optimization/65244 - include asserts in predicates for uninit
When uninit computes the actual predicates from the control dependence
edges it currently skips those that are assert-like (where one edge
leads to a block which ends in a noreturn call). That leads to
bogus uninit diagnostics when applied on the USE side.
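An illustrative (hypothetical, not the PR's) example of such assert-like control flow; without the assert edge in the USE predicate, the use of x looked unguarded:

extern void fail (void) __attribute__ ((noreturn));

int
foo (int cond)
{
  int x;
  if (cond)
    x = 1;
  if (!cond)
    fail ();   /* one edge leads to a noreturn call */
  return x;    /* always initialized on the paths that reach here */
}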
PR tree-optimization/65244
* gimple-predicate-analysis.h (predicate::init_from_control_deps):
Add argument to specify whether the predicate is for the USE.
* gimple-predicate-analysis.cc (predicate::init_from_control_deps):
Also include predicates for effective fallthru control edges when
the predicate is for the USE.
Richard Biener [Wed, 31 Aug 2022 06:52:58 +0000 (08:52 +0200)]
tree-optimization/73550 - more switch handling improvements for uninit
The following makes predicate analysis handle case labels with
a non-singleton contiguous range.
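A hedged sketch of the construct this covers, using the GNU case-range extension (which sets CASE_HIGH on the label):

int
classify (int c, int x)
{
  int r;
  switch (c)
    {
    case '0' ... '9':
      r = x;
      break;
    default:
      r = 0;
      break;
    }
  return r;
}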
PR tree-optimization/73550
* gimple-predicate-analysis.cc (predicate::init_from_control_deps):
Sanitize debug dumping. Handle case labels with a CASE_HIGH.
(predicate::dump): Adjust for better readability.
Jakub Jelinek [Wed, 31 Aug 2022 08:22:36 +0000 (10:22 +0200)]
libcpp: Make static checkers happy about makeuname2c [PR106778]
The assertion ensures that we point within the image and at a byte
we haven't touched yet (or at least that it isn't the first byte
of an already stored tree); some static checker was unhappy about
first checking that it is zero and only afterwards checking that it
is within bounds.
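A purely illustrative sketch of the reordering (not the actual libcpp code): check that the pointer is in bounds before dereferencing it, so the checker sees the bound established first.

#include <assert.h>

static void
check (const unsigned char *p, const unsigned char *end)
{
  assert (p < end && *p == 0);   /* was: assert (*p == 0 && p < end); */
}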
2022-08-31 Jakub Jelinek <jakub@redhat.com>
PR preprocessor/106778
* makeuname2c.cc (write_nodes): Reverse order of && operands in
assert.
vect: Fix stray argument in call to dump_printf_loc
One call to dump_printf_loc had a stray left-over argument
from an earlier version of the patch. This went unnoticed
on aarch64-linux-gnu and x86_64-linux-gnu since the parameters
that actually mattered were passed in FPRs rather than GPRs,
but I assume this is the reason for the i686-linux-gnu failures
that Jakub hit.
Aldy Hernandez [Tue, 30 Aug 2022 13:46:43 +0000 (15:46 +0200)]
Improve union of ranges containing NAN.
Previously [5,6] U NAN would just drop to VARYING. With this patch,
the resulting range becomes [5,6] with the NAN bit set to unknown.
[I still have yet to decide what to do with intersections. ISTM, the
intersection of a known NAN with anything else should be a NAN, but it
could also be undefined (the empty set). I'll have to run some tests
and see. Currently, we drop to VARYING because, well... it's always safe
to give up;-).]
gcc/ChangeLog:
* value-range.cc (early_nan_resolve): Change comment.
(frange::union_): Handle union when one side is a NAN.
(range_tests_nan): Add tests for NAN union.
Andrew Stubbs [Fri, 5 Aug 2022 12:28:50 +0000 (13:28 +0100)]
omp-simd-clone: Allow fixed-lane vectors
The vecsize_int/vecsize_float has an assumption that all arguments will use
the same bitsize, and vary the number of lanes according to the element size,
but this is inappropriate on targets where the number of lanes is fixed and
the bitsize varies (i.e. amdgcn).
With this change the vecsize can be left zero and the vectorization factor will
be the same for all types.
gcc/ChangeLog:
* doc/tm.texi: Regenerate.
* omp-simd-clone.cc (simd_clone_adjust_return_type): Allow zero
vecsize.
(simd_clone_adjust_argument_types): Likewise.
* target.def (compute_vecsize_and_simdlen): Document the new
vecsize_int and vecsize_float semantics.
store_bit_field_1 tries to convert a field assignment into a subreg
assignment. Normally it must check that the field occupies a full
word (or more specifically, a full REGMODE_NATURAL_SIZE chunk),
so that writing to the subreg doesn't clobber any other fields.
But it can skip that check if the structure is known to be in
an undefined state.
The idea was that, in the undefined case, we could rely on
simplify_gen_subreg to do the check for a valid subreg, rather
than having to repeat the required endianness logic in the caller.
Before the addition of the undefined case, the code could use
regnum * regsize to get the byte offset, where regnum came from
checking that the start was word-aligned. In the undefined case
we need to calculate the byte offset explicitly.
Currently SLP tries to force permute operations "down" the graph
from loads in the hope of reducing the total number of permutations
needed or (in the best case) removing the need for the permutations
entirely. This patch tries to extend it as follows:
- Allow loads to take a different permutation from the one they
started with, rather than choosing between "original permutation"
and "no permutation".
- Allow changes in both directions, if the target supports the
reverse permutation.
- Treat the placement of permutations as a two-way dataflow problem:
after propagating information from leaves to roots (as now), propagate
information back up the graph.
- Take execution frequency into account when optimising for speed,
so that (for example) permutations inside loops have a higher
cost than permutations outside loops.
- Try to reduce the total number of permutations when optimising for
size, even if that increases the number of permutations on a given
execution path.
See the big block comment above vect_optimize_slp_pass for
a detailed description.
The original motivation for doing this was to add a framework that would
allow other layout differences in future. The two main ones are:
- Make it easier to represent predicated operations, including
predicated operations with gaps. E.g.:
a[0] += 1;
a[1] += 1;
a[3] += 1;
could be a single load/add/store for SVE. We could handle this
by representing a layout such as { 0, 1, _, 2 } or { 0, 1, _, 3 }
(depending on what's being counted). We might need to move
elements between lanes at various points, like with permutes.
(This would first mean adding support for stores with gaps.)
- Make it easier to switch between an even/odd and unpermuted layout
when switching between wide and narrow elements. E.g. if a widening
operation produces an even vector and an odd vector, we should try
to keep operations on the wide elements in that order rather than
force them to be permuted back "in order".
To give some examples of what the patch does:
int f1(int *__restrict a, int *__restrict b, int *__restrict c,
int *__restrict d)
{
a[0] = (b[1] << c[3]) - d[1];
a[1] = (b[0] << c[2]) - d[0];
a[2] = (b[3] << c[1]) - d[3];
a[3] = (b[2] << c[0]) - d[2];
}
continues to produce the same code as before when optimising for
speed: b, c and d are permuted at load time. But when optimising
for size we instead permute c into the same order as b+d and then
permute the result of the arithmetic into the same order as a:
int f2(int *__restrict a, int *__restrict b, int *__restrict c,
int *__restrict d)
{
a[0] = (b[3] << c[3]) - d[3];
a[1] = (b[2] << c[2]) - d[2];
a[2] = (b[1] << c[1]) - d[1];
a[3] = (b[0] << c[0]) - d[0];
}
continues to push the reverse down to just before the store,
like the previous code did.
In:
int f3(int *__restrict a, int *__restrict b, int *__restrict c,
int *__restrict d)
{
for (int i = 0; i < 100; ++i)
{
a[0] = (a[0] + c[3]);
a[1] = (a[1] + c[2]);
a[2] = (a[2] + c[1]);
a[3] = (a[3] + c[0]);
c += 4;
}
}
the loads of a are hoisted and the stores of a are sunk, so that
only the load from c happens in the loop. When optimising for
speed, we prefer to have the loop operate on the reversed layout,
changing on entry and exit from the loop:
int f4(int *__restrict a, int *__restrict b, int *__restrict c,
int *__restrict d)
{
int a0 = a[0];
int a1 = a[1];
int a2 = a[2];
int a3 = a[3];
for (int i = 0; i < 100; ++i)
{
a0 ^= c[0];
a1 ^= c[1];
a2 ^= c[2];
a3 ^= c[3];
c += 4;
for (int j = 0; j < 100; ++j)
{
a0 += d[1];
a1 += d[0];
a2 += d[3];
a3 += d[2];
d += 4;
}
b[0] = a0;
b[1] = a1;
b[2] = a2;
b[3] = a3;
b += 4;
}
a[0] = a0;
a[1] = a1;
a[2] = a2;
a[3] = a3;
}
the a vector in the inner loop maintains the order { 1, 0, 3, 2 },
even though it's part of an SCC that includes the outer loop.
In other words, this is a motivating case for not assigning
permutes at SCC granularity. The code we get is:
bb-slp-layout-17.c is a collection of compile tests for problems
I hit with earlier versions of the patch. The same problems might
show up elsewhere, but it seemed worth having the test anyway.
In slp-11b.c we previously pushed the permutation of the in[i*4]
group down from the load to just before the store. That didn't
reduce the number or frequency of the permutations (or increase
them either). But separating the permute from the load meant
that we could no longer use load/store lanes.
Whether load/store lanes are a good idea here is another question.
If there were two sets of loads, and if we could use a single
permutation instead of one per load, then avoiding load/store
lanes should be a good thing even under the current abstract
cost model. But I think under the current model we should
try to avoid splitting up potential load/store lanes groups
if there is no specific benefit to the split.
Preferring load/store lanes is still a source of missed optimisations
that we should fix one day...
gcc/
* params.opt (-param=vect-max-layout-candidates=): New parameter.
* doc/invoke.texi (vect-max-layout-candidates): Document it.
* tree-vectorizer.h (auto_lane_permutation_t): New typedef.
(auto_load_permutation_t): Likewise.
* tree-vect-slp.cc (vect_slp_node_weight): New function.
(slpg_layout_cost): New class.
(slpg_vertex): Replace perm_in and perm_out with partition,
out_degree, weight and out_weight.
(slpg_partition_info, slpg_partition_layout_costs): New classes.
(vect_optimize_slp_pass): Likewise, cannibalizing some part of
the previous vect_optimize_slp.
(vect_optimize_slp): Use it.
gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_var_shift):
Return true for aarch64.
* gcc.dg/vect/bb-slp-layout-1.c: New test.
* gcc.dg/vect/bb-slp-layout-2.c: New test.
* gcc.dg/vect/bb-slp-layout-3.c: New test.
* gcc.dg/vect/bb-slp-layout-4.c: New test.
* gcc.dg/vect/bb-slp-layout-5.c: New test.
* gcc.dg/vect/bb-slp-layout-6.c: New test.
* gcc.dg/vect/bb-slp-layout-7.c: New test.
* gcc.dg/vect/bb-slp-layout-8.c: New test.
* gcc.dg/vect/bb-slp-layout-9.c: New test.
* gcc.dg/vect/bb-slp-layout-10.c: New test.
* gcc.dg/vect/bb-slp-layout-11.c: New test.
* gcc.dg/vect/bb-slp-layout-13.c: New test.
* gcc.dg/vect/bb-slp-layout-14.c: New test.
* gcc.dg/vect/bb-slp-layout-15.c: New test.
* gcc.dg/vect/bb-slp-layout-16.c: New test.
* gcc.dg/vect/bb-slp-layout-17.c: New test.
* gcc.dg/vect/slp-11b.c: XFAIL SLP test for load-lanes targets.
(1) hashing and equality of integers
(2) using spare integer encodings to represent empty and deleted slots
(1) is really independent of (2), and could be useful in cases where
no spare integer encodings are available. This patch adds a base class
(int_hash_base) for (1) and makes int_hash inherit from it.
If we follow a similar style for future hashes, we can make
unbounded_hashmap_traits take the "base" hash for the key
as a template parameter, rather than requiring every type of
key to have a separate derivative of unbounded_hashmap_traits.
A later patch applies this to vector keys.
No functional change intended.
gcc/
* hash-traits.h (int_hash_base): New struct, split out from...
(int_hash): ...this class, which now inherits from int_hash_base.
* hash-map-traits.h (unbounded_hashmap_traits): Take a template
parameter for the key that provides hash and equality functions.
(unbounded_int_hashmap_traits): Turn into a type alias of
unbounded_hashmap_traits.
Make graphds_scc pass the node order back to callers
As a side-effect, graphds_scc constructs a vector in which all
nodes in an SCC are listed consecutively. This can be useful
information, so that the patch adds an optional pass-back parameter
for it. The interface is similar to the one for graphds_dfs.
gcc/
* graphds.cc (graphds_scc): Add a pass-back parameter for the
final node order.
* graphds.h (graphds_scc): Update prototype accordingly.
Similarly to the previous vectorizable_slp_permutation patch,
this one splits out the main part of vect_transform_slp_perm_load
so that a later patch can test a permutation without constructing
a node for it.
Also fixes a lingering use of STMT_VINFO_VECTYPE.
gcc/
* tree-vect-slp.cc (vect_transform_slp_perm_load_1): Split out from...
(vect_transform_slp_perm_load): ...here. Use SLP_TREE_VECTYPE instead
of STMT_VINFO_VECTYPE.
A later patch needs to test whether the target supports a
lane_permutation_t without having to construct a full SLP
node to test that. This patch splits out most of the work
of vectorizable_slp_permutation into a subroutine, so that
properties of the permutation can be passed explicitly without
disturbing the main interface.
The new subroutine still uses an slp_tree argument to get things
like the number of lanes and the vector type. That's a bit clunky,
but it seemed like the least worst option.
gcc/
* tree-vect-slp.cc (vectorizable_slp_permutation_1): Split out from...
(vectorizable_slp_permutation): ...here.
Builds of glibc with SVE enabled have been failing since V1DI was added
to the aarch64 port. The problem is that BB SLP starts the (hopeless)
attempt to use variable-length modes to vectorise a single-element
vector, and that now gets further than it did before.
Initially we tried getting a vector mode with 1 + 1X DI elements
(i.e. 1 DI per 128-bit vector chunk). We don't provide such a mode --
it would be VNx1DI -- because it isn't a native SVE format. We then
try just 1 DI, which previously failed but now succeeds.
There are numerous ways we could fix this. Perhaps the most obvious
would be to skip variable-length modes for BB SLP. However, I think
that'd just be kicking the can down the road, since eventually we want
to support BB SLP and VLA vectors using predication.
However, if we do use VLA vectors for BB SLP, the vector modes
we use should actually be variable length. We don't want to use
variable-length vectors for some element types/group sizes and
fixed-length vectors for others, since it would be difficult
to handle the seams.
The same principle applies during loop vectorisation. We can't
use a mixture of variable-length and fixed-length vectors for
the same loop because the relative unroll/vectorisation factors
would not be constant (compile-time) multiples of each other.
This patch therefore makes get_related_vectype_for_scalar_type
check that the provided number of units is interoperable with
the provided prevailing mode. The function is generally quite
forgiving -- it does basic things like checking for scalarness
itself rather than expecting callers to do them -- so the new
check feels in keeping with that.
This seems to subsume the fix for PR96974. I'm not sure it's
worth reverting that code to an assert though, so the patch just
drops the scan for the associated message.
gcc/
* tree-vect-stmts.cc (get_related_vectype_for_scalar_type): Check
that the requested number of units is interoperable with the requested
prevailing mode.
gcc/testsuite/
* gcc.target/aarch64/sve/slp_15.c: New test.
* g++.target/aarch64/sve/pr96974.C: Remove scan test.
Ulrich Drepper [Tue, 30 Aug 2022 14:33:51 +0000 (16:33 +0200)]
Change get_std_name_hint to use generated hash table
The get_std_name_hint function so far uses linear search to locate
matching entries. After adding more hint entries this might not be
appropriate anymore. Therefore this patch also replaces the linear
array with a gperf-generated hash table.
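For context, a hedged sketch of the kind of diagnostic these hints drive (wording approximate):

int
main ()
{
  std::string s;   // error mentioning 'string' in namespace 'std'
                   // note: 'std::string' is defined in header '<string>';
                   //       did you forget to '#include <string>'?
}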
contrib/ChangeLog
* gcc_update (files_and_dependencies): Add rule for
gcc/cp/std-name-hint.h.
gcc/cp/ChangeLog
* Make-lang.in: Add rule to rebuild std-name-hint.h from
std-name-hint.gperf.
* name-lookup.cc (get_std_name_hint): Remove hints array.
Use gperf-generated class std_name_hint_lookup.
Include "std-name-hint.h".
* std-name-hint.gperf: New file.
* std-name-hint.h: New file. Generated from the .gperf file.
The MAX_NUM_CHAINS limit is applied once with <= and once with <, which
results in the chains not being limited but the analysis being dropped completely.
That's one issue in the PR.
PR tree-optimization/73550
* gimple-predicate-analysis.cc (predicate::init_from_control_deps):
Do not apply MAX_NUM_CHAINS again.
Marek Polacek [Mon, 29 Aug 2022 20:54:05 +0000 (16:54 -0400)]
c++: __has_builtin gives the wrong answer [PR106759]
We've supported __is_nothrow_constructible since r11-4386, but
names_builtin_p didn't know about it, so it gave the wrong answer for
#if __has_builtin(__is_nothrow_constructible)
...
#endif
I've tested all C++-only built-ins and only two were missing.
PR c++/106759
gcc/cp/ChangeLog:
* cp-objcp-common.cc (names_builtin_p): Handle RID_IS_NOTHROW_ASSIGNABLE
and RID_IS_NOTHROW_CONSTRUCTIBLE.
Aldy Hernandez [Tue, 30 Aug 2022 10:13:31 +0000 (12:13 +0200)]
Force a [NAN, NAN] range when the definite NAN property is set.
Setting the definite NAN property should also force a [NAN, NAN]
range, otherwise we'd have two ways of representing a NAN: with the
endpoints or with the property. In the ranger world we avoid at all
costs having more than one representation for a range.
In doing this, I removed the FRANGE_PROP_ACCESSOR macro, since it
looks like setting a property may have repercussions in the range
itself, so it's best for the client to define its own setter.
gcc/ChangeLog:
* value-range-storage.cc (frange_storage_slot::get_frange): Use
frange_nan.
* value-range.cc (frange::set_nan): New.
(frange_nan): Move to header file.
(range_tests_nan): Adjust frange_nan callers to pass type.
New test.
* value-range.h (FRANGE_PROP_ACCESSOR): Remove.
(frange_nan): New.
Richard Biener [Tue, 30 Aug 2022 09:47:49 +0000 (11:47 +0200)]
tree-optimization/67196 - normalize use predicates earlier
The following makes sure to have use predicates simplified and
normalized before doing uninit_analysis::overlap because that
otherwise cannot pick up all flag setting cases. This fixes
half of the issue in PR67196 and conveniently resolves the
XFAIL in gcc.dg/uninit-pred-7_a.c.
PR tree-optimization/67196
* gimple-predicate-analysis.cc (uninit_analysis::is_use_guarded):
Simplify and normalize use predicates before first use.
Richard Biener [Tue, 30 Aug 2022 09:41:02 +0000 (11:41 +0200)]
Remove GENERIC expr building from predicate analysis, improve dumps
The following removes duplicate dumping and makes the predicate
dumping more readable. That makes the GENERIC predicate build
routines unused which is also nice.
* gimple-predicate-analysis.cc (dump_pred_chain): Fix
parenthesizing and AND prepending.
(predicate::dump): Do not dump the GENERIC expanded
predicate, properly parenthesize and prepend ORs to the
piecewise predicate dump.
(build_pred_expr): Remove.
Aldy Hernandez [Tue, 30 Aug 2022 06:23:33 +0000 (08:23 +0200)]
Add support for floating point endpoints to frange.
The current implementation of frange is just a type with some bits to
represent NAN and INF. We can do better and represent endpoints to
ultimately solve longstanding PRs such as PR24021. This patch adds
these endpoints. In follow-up patches I will add support for a bare
bones PLUS_EXPR range-op-float entry to solve the PR.
I have chosen to use REAL_VALUE_TYPEs for the endpoints, since that's
what we use underneath the trees. This will be somewhat analogous to
our eventual use of wide-ints in the irange. No sense going through
added levels of indirection if we can avoid it. That, plus real.*
already has a nice API for dealing with floats.
With this patch, ranges will be closed float point intervals, which
make the implementation simpler, since we don't have to keep track of
open/closed intervals. This is conservative enough for use in the
ranger world, as we'd rather err on the side of more elements in a
range, than less.
For example, even though we cannot precisely represent the open
interval (3.0, 5.0) with this approach, it is perfectly reasonable
to represent it as [3.0, 5.0] since the closed interval is a superset
of the open one. In the VRP/ranger world, it is always better to
err on the side of more information in a range, than not. After all,
when we don't know anything about a range, we just use VARYING which
is a fancy term for a range spanning the entire domain.
Since REAL_VALUE_TYPEs have properly defined infinity and NAN
semantics, all the math can be made to work:
Also, since REAL_VALUE_TYPEs can represent the minimum and maximum
representable values of a TYPE_MODE, we can disambiguate between them
and negative and positive infinity (see get_max_float in real.cc).
This also makes the math all work. For example, suppose we know
nothing about x and y (VARYING). On the TRUE side of x > y, we can
deduce that:
(a) x cannot be NAN
(b) y cannot be NAN
(c) y cannot be +INF.
(c) means that we can drop the upper bound of "y" from +INF to the
maximum representable value for its type.
Having endpoints with different representation for infinity and the
maximum representable values, means we can drop the +-INF properties
we currently have in the frange.
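A hedged illustration of that deduction in source terms:

double
clamp (double x, double y)
{
  if (x > y)
    /* On this edge neither x nor y can be a NaN and y cannot be +Inf,
       so y's range can shrink to [-INF, DBL_MAX] with the NaN bit clear.  */
    return y;
  return 0.0;
}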
Aldy Hernandez [Mon, 29 Aug 2022 15:52:20 +0000 (17:52 +0200)]
A == 0 ? A : -A same as -A (when A is 0.0)
The upcoming work for frange triggers a regression in
gcc.dg/tree-ssa/phi-opt-24.c.
For -O2 -fno-signed-zeros, we fail to transform the following into -A:
float f0(float A)
{
// A == 0? A : -A same as -A
if (A == 0) return A;
return -A;
}
This is because the abs/negative match.pd pattern here:
/* abs/negative simplifications moved from fold_cond_expr_with_comparison,
Need to handle (A - B) case as fold_cond_expr_with_comparison does.
Need to handle UN* comparisons.
...
...
Martin Liska [Tue, 30 Aug 2022 08:46:26 +0000 (10:46 +0200)]
s390: fix build on 32-bit hosts
Fixes build on i686:
gcc/config/s390/s390.cc: In function 'bool s390_rtx_costs(rtx, machine_mode, int, int, int*, bool)':
gcc/config/s390/s390.cc:3728:63: error: cannot convert 'long int*' to 'long long int*'
gcc/ChangeLog:
* config/s390/s390.cc (s390_rtx_costs): Use proper type as
argument.
Richard Biener [Fri, 19 Aug 2022 13:11:14 +0000 (15:11 +0200)]
Use reachability analysis to improve uninit diagnostic
This patch does what the comment in uninit diagnostic suggests.
When the value-numbering run done without optimizing figures there's
a fallthru path, consider blocks on it as always executed.
* tree-ssa-uninit.cc (warn_uninitialized_vars): Pre-compute
the set of fallthru reachable blocks from function entry
and use that to determine wlims.always_executed.
Richard Biener [Mon, 29 Aug 2022 14:16:44 +0000 (16:16 +0200)]
tree-optimization/56654 - sort uninit candidates after RPO
The following sorts the immediate uses of a possibly uninitialized
SSA variable after their RPO order so we prefer warning for an
earlier occurring use rather than issuing the diagnostic for the
first uninitialized immediate use.
The sorting will inevitably be imperfect but it also allows us to
optimize the expensive predicate check for the case where there
are multiple uses in the same basic-block which is a nice side-effect.
PR tree-optimization/56654
* tree-ssa-uninit.cc (cand_cmp): New.
(find_uninit_use): First process all PHIs and collect candidate
stmts, then sort those after RPO.
(warn_uninitialized_phi): Pass on bb_to_rpo.
(execute_late_warn_uninitialized): Compute and pass on
reverse lookup of RPO number from basic block index.
Richard Biener [Mon, 29 Aug 2022 10:20:10 +0000 (12:20 +0200)]
Make uninit PHI processing more consistent
Currently the main working of the maybe-uninit pass is to scan over
all PHIs with possibly undefined arguments, diagnosing whether there's
a direct use that is not guarded. PHI uses that are not guarded are
queued for later processing and, to make the uninit analysis PHI def
handling work, the PHI def is marked as possibly uninitialized. But
this happens only for those PHI uses that happen to be seen before a
direct unguarded use, so whether all arguments of a PHI node that are
themselves defined by a PHI are properly marked as maybe uninitialized
depends on the processing order.
The following changes the uninit pass to perform an RPO walk over
the function, ensuring that PHI argument defs are visited before
the PHI node (besides backedge uses which we ignore already),
getting rid of the worklist. It also makes sure to process all
PHI uses, but recording those that are properly guarded so they
are not treated as maybe undefined when processing the PHI use
later.
Overall this should make behavior more consistent, avoid some
false negatives because of the previous early out and order issue,
and avoid some false positives because of the missed recording
of guarded PHI uses.
The patch correctly diagnoses an uninitialized use of 'regnum'
in store_bit_field_1 and also diagnoses an uninitialized use of
best_match::m_best_candidate_len in c-decl.cc which I've chosen to
silence by initializing m_best_candidate_len. The warning is
a false positive but GCC cannot see that m_best_candidate_len is
initialized when m_best_candidate is not NULL so from this
perspective this was a false negative. I've added
g++.dg/uninit-pred-5.C with a reduced testcase that nicely shows
how the previous behavior missed the diagnostic because the
worklist ended up visiting the PHI with the dependent uninit
value before visiting the PHIs producing it.
* gimple-predicate-analysis.h (uninit_analysis::operator()):
Remove.
* gimple-predicate-analysis.cc
(uninit_analysis::collect_phi_def_edges): Use phi_arg_set,
simplify a bit.
* tree-ssa-uninit.cc (defined_args): New global.
(compute_uninit_opnds_pos): Mask with the recorded set
of guarded maybe-uninitialized uses.
(uninit_undef_val_t::operator()): Remove.
(find_uninit_use): Process all PHI uses, recording the
guarded ones and marking the PHI result as uninitialized
consistently.
(warn_uninitialized_phi): Adjust.
(execute_late_warn_uninitialized): Get rid of the PHI worklist
and instead walk the function in RPO order.
* spellcheck.h (best_match::m_best_candidate_len): Initialize.
Marek Polacek [Fri, 26 Aug 2022 22:03:53 +0000 (18:03 -0400)]
c++: Fix C++11 attribute propagation [PR106712]
When we have
[[noreturn]] int fn1 [[nodiscard]](), fn2();
"noreturn" should apply to both fn1 and fn2 but "nodiscard" only to fn1:
[dcl.pre]/3: "The attribute-specifier-seq appertains to each of
the entities declared by the declarators of the init-declarator-list."
[dcl.spec.general]: "The attribute-specifier-seq affects the type
only for the declaration it appears in, not other declarations involving
the same type."
As Ed Catmur correctly analyzed, this is because, for the test above,
we call start_decl with prefix_attributes=noreturn, but this line:
results in attributes == prefix_attributes, because chainon sees
that attributes is null so it just returns prefix_attributes. Then
in grokdeclarator we reach
which modifies prefix_attributes so now it's "noreturn, nodiscard"
and so fn2 is wrongly marked nodiscard as well. Fixed by reversing
the order of arguments to attr_chainon. That way, we tack the prefix
attributes onto ->std_attributes, avoiding modifying prefix_attributes.
PR c++/106712
gcc/cp/ChangeLog:
* decl.cc (grokdeclarator): Reverse the order of arguments to
attr_chainon.
David Faust [Mon, 29 Aug 2022 18:21:52 +0000 (11:21 -0700)]
bpf: handle anonymous members in CO-RE reloc [PR106745]
The old method for computing a member index for a CO-RE relocation
relied on a name comparison, which could SEGV if the member in question
is itself part of an anonymous inner struct or union.
This patch changes the index computation to not rely on a name, while
maintaining the ability to account for other sibling fields which may
not have a representation in BTF.
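A hedged sketch (not the committed test) of the kind of access involved, where the member sits inside an anonymous inner struct so a name-based lookup has nothing to compare against:

struct S
{
  int a;
  struct
  {
    int b;
  };              /* anonymous inner struct */
};

int
get_b (struct S *s)
{
  return __builtin_preserve_access_index (s->b);
}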
gcc/ChangeLog:
PR target/106745
* config/bpf/coreout.cc (bpf_core_get_sou_member_index): Fix
computation of index for anonymous members.
gcc/testsuite/ChangeLog:
PR target/106745
* gcc.target/bpf/core-pr106745.c: New test.
Xi Ruoyao [Wed, 24 Aug 2022 11:34:47 +0000 (19:34 +0800)]
LoongArch: testsuite: refine __tls_get_addr tests with tls_native
If GCC is not built with a working linker for the target (developers
occasionally build such a "minimal" GCC for testing and debugging),
TLS will be emulated and __tls_get_addr won't be used. Guard the
tests that depend on __tls_get_addr with tls_native to avoid test
failures.
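A hedged sketch of the kind of guard used, assuming the usual DejaGnu effective-target name:

/* { dg-require-effective-target tls_native } */
__thread int counter;

int *
addr (void)
{
  return &counter;
}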
Robin Dapp [Thu, 3 Feb 2022 11:50:04 +0000 (12:50 +0100)]
s390: Change SET rtx_cost handling.
The IF_THEN_ELSE detection currently prevents us from properly costing
register-register moves which causes the lower-subreg pass to assume that
a VR-VR move is as expensive as two GPR-GPR moves.
This patch adds handling for SETs containing REGs as well as MEMs and is
inspired by the aarch64 implementation.
gcc/ChangeLog:
* config/s390/s390.cc (s390_address_cost): Declare.
(s390_hard_regno_nregs): Declare.
(s390_rtx_costs): Add handling for REG and MEM in SET.
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/vec-sum-across-no-lower-subreg-1.c: New test.
This adds functions to recognize reverse/element swap permute patterns
for vler, vster as well as vpdi and rotate.
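A hedged sketch of an element reversal the new code can recognize; the actual instruction sequence chosen (vler/vster, vpdi plus rotate, ...) depends on the architecture level:

typedef int v4si __attribute__ ((vector_size (16)));

v4si
reverse (v4si x)
{
  return __builtin_shuffle (x, (v4si) { 3, 2, 1, 0 });
}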
gcc/ChangeLog:
* config/s390/s390.cc (expand_perm_with_vpdi): Recognize swap pattern.
(is_reverse_perm_mask): New function.
(expand_perm_with_rot): Recognize reverse pattern.
(expand_perm_with_vstbrq): New function.
(expand_perm_with_vster): Use vler/vster for element reversal on z15.
(vectorize_vec_perm_const_1): Use.
(s390_vectorize_vec_perm_const): Add expand functions.
* config/s390/vx-builtins.md: Prefer vster over vler.
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/vperm-rev-z14.c: New test.
* gcc.target/s390/vector/vperm-rev-z15.c: New test.
* gcc.target/s390/zvector/vec-reve-store-byte.c: Adjust test
expectation.
Robin Dapp [Mon, 4 Jul 2022 12:19:29 +0000 (14:19 +0200)]
s390: Implement vec_extract via vec_select.
vec_select can handle dynamic/runtime masks nowadays. Therefore we can
get rid of the UNSPEC_VEC_EXTRACT that was preventing further
optimizations like combining instructions with vec_extract patterns.
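A hedged sketch of an extraction with a run-time index, which can now be expressed via vec_select instead of the unspec:

typedef double v2df __attribute__ ((vector_size (16)));

double
get (v2df x, int i)
{
  return x[i];
}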
gcc/ChangeLog:
* config/s390/s390.md: Remove UNSPEC_VEC_EXTRACT.
* config/s390/vector.md: Rewrite patterns to use vec_select.
* config/s390/vx-builtins.md (vec_scatter_element<V_HW_2:mode>_SI):
Likewise.
Robin Dapp [Fri, 24 Jun 2022 13:15:14 +0000 (15:15 +0200)]
s390: Use vpdi and verllg in vec_reve.
Swapping the two elements of a V2DImode or V2DFmode vector can be done
with vpdi instead of using the generic way of loading a permutation mask
from the literal pool and vperm.
Analogously to the V2DI/V2DF case, reversing the elements of a
four-element vector can be done by first swapping the elements of the
first doubleword as well as the ones of the second one and subsequently
rotating the doublewords by 32 bits.
gcc/ChangeLog:
PR target/100869
* config/s390/vector.md (@vpdi4_2<mode>): New pattern.
(rotl<mode>3_di): New pattern.
* config/s390/vx-builtins.md: Use vpdi and verll for reversing
elements.
gcc/testsuite/ChangeLog:
* gcc.target/s390/zvector/vec-reve-int-long.c: New test.
Robin Dapp [Thu, 3 Mar 2022 14:06:21 +0000 (15:06 +0100)]
s390: Add -munroll-only-small-loops.
Inspired by Power we also introduce -munroll-only-small-loops. This
implies activating -funroll-loops and -munroll-only-small-loops at -O2 and
above.
gcc/ChangeLog:
* common/config/s390/s390-common.cc: Enable -funroll-loops and
-munroll-only-small-loops for OPT_LEVELS_2_PLUS_SPEED_ONLY.
* config/s390/s390.cc (s390_loop_unroll_adjust): Do not unroll
loops larger than 12 instructions.
(s390_override_options_after_change): Set unroll options.
(s390_option_override_internal): Likewise.
* config/s390/s390.opt: Document munroll-only-small-loops.
gcc/testsuite/ChangeLog:
* gcc.target/s390/vector/vec-copysign.c: Do not unroll.
* gcc.target/s390/zvector/autovec-double-quiet-uneq.c: Dito.
* gcc.target/s390/zvector/autovec-double-signaling-ltgt.c: Dito.
* gcc.target/s390/zvector/autovec-float-quiet-uneq.c: Dito.
* gcc.target/s390/zvector/autovec-float-signaling-ltgt.c: Dito.
Richard Biener [Fri, 26 Aug 2022 12:25:51 +0000 (14:25 +0200)]
Refactor init_use_preds and find_control_equiv_block
The following inlines find_control_equiv_block and is_loop_exit
into init_use_preds and refactors that for better readability and
similarity with the post-dominator walk in compute_control_dep_chain.
* gimple-predicate-analysis.cc (is_loop_exit,
find_control_equiv_block): Inline into single caller ...
(uninit_analysis::init_use_preds): ... here and refactor.
Richard Biener [Fri, 26 Aug 2022 11:39:29 +0000 (13:39 +0200)]
Improve compute_control_dep_chain documentation
The following refactors compute_control_dep_chain slightly by
inlining is_loop_exit and factoring the check on the loop
invariant condition. It also adds a comment as of how I
understand the code and it's current problem.
* gimple-predicate-analysis.cc (compute_control_dep_chain):
Inline is_loop_exit and refactor, add comment about
loop exits.
Kito Cheng [Mon, 29 Aug 2022 02:28:28 +0000 (10:28 +0800)]
RISC-V: Suppress -Wclass-memaccess warning
poly_int64 is a non-trivial type, so we need to clear the structure
manually instead of using memset, to prevent this warning.
../../gcc/gcc/config/riscv/riscv.cc: In function 'void riscv_compute_frame_info()':
../../gcc/gcc/config/riscv/riscv.cc:4113:10: error: 'void* memset(void*, int, size_t)' clearing an object of non-trivial type 'struct riscv_frame_info'; use assignment or value-initialization instead [-Werror=class-memaccess]
4113 | memset (frame, 0, sizeof (*frame));
| ~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
../../gcc/gcc/config/riscv/riscv.cc:101:17: note: 'struct riscv_frame_info' declared here
101 | struct GTY(()) riscv_frame_info {
| ^~~~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_frame_info): Introduce `reset(void)`;
(riscv_frame_info::reset(void)): New.
(riscv_compute_frame_info): Use riscv_frame_info::reset instead
of memset when clearing the frame.
Peter Bergner [Sun, 28 Aug 2022 00:44:16 +0000 (19:44 -0500)]
rs6000: Allow conversions of MMA pointer types [PR106017]
GCC incorrectly disables conversions between MMA pointer types, which
are allowed with clang. The original intent was to disable conversions
between MMA types and other types, but pointer conversions should
have been allowed. The fix is to just remove the MMA pointer conversion
handling code altogether.