Jakub Jelinek [Thu, 25 Apr 2024 18:09:35 +0000 (20:09 +0200)]
openmp: Copy DECL_LANG_SPECIFIC and DECL_LANG_FLAG_? to tree-nested decl copy [PR114825]
tree-nested.cc creates artificial VAR_DECLs in two spots; one of them is used
both for debug info and for OpenMP/OpenACC lowering purposes, the other solely
for OpenMP/OpenACC lowering purposes.
When the decls are used in OpenMP/OpenACC lowering, the OMP langhooks (mostly
Fortran, C just a little and C++ doesn't have nested functions) then inspect
the flags on the vars and based on that decide how to lower the corresponding
clauses.
Unfortunately we weren't copying DECL_LANG_SPECIFIC and DECL_LANG_FLAG_?, so
the langhooks based their decisions on the default flags of the copies instead.
As the original decl isn't necessarily a VAR_DECL (it could be e.g. a
PARM_DECL), using copy_node wouldn't work properly, so this patch just
copies those flags in addition to the other flags it was copying already.
And I've removed code duplication by introducing a helper function which
does the copying common to both uses.
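A rough sketch of the flag copying the new helper centralizes (the helper
name below is hypothetical; the real one, get_debug_decl per the ChangeLog,
also builds the decl itself):

  /* Hypothetical sketch: propagate language-specific info from the
     original decl to the artificial VAR_DECL copy.  */
  static void
  copy_lang_flags (tree new_decl, tree old_decl)
  {
    DECL_LANG_SPECIFIC (new_decl) = DECL_LANG_SPECIFIC (old_decl);
    DECL_LANG_FLAG_0 (new_decl) = DECL_LANG_FLAG_0 (old_decl);
    /* ... likewise for DECL_LANG_FLAG_1 through DECL_LANG_FLAG_8.  */
  }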
2024-04-25 Jakub Jelinek <jakub@redhat.com>
PR fortran/114825
* tree-nested.cc (get_debug_decl): New function.
(get_nonlocal_debug_decl): Use it.
(get_local_debug_decl): Likewise.
Jakub Jelinek [Fri, 19 Apr 2024 06:47:53 +0000 (08:47 +0200)]
rtlanal: Fix set_noop_p for volatile loads or stores [PR114768]
On the following testcase, combine propagates the mem/v load into mem store
with the same address and then removes it, because noop_move_p says it is a
no-op move. If it were the other way around, i.e. a mem/v store and a mem
load, or if both were mem/v, it would be kept.
The problem is that rtx_equal_p never checks any kind of flags on the rtxes
(and I think it would be quite dangerous to change it at this point), and
set_noop_p checks side_effects_p on just one of the operands, not both.
In a MEM <- MEM set it only checks it on the destination; in a
store to ZERO_EXTRACT it only checks it on the source.
The following patch adds the missing side_effects_p checks.
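A simplified sketch of the MEM <- MEM case with the extra check (condensed
from set_noop_p, not the verbatim hunk):

  /* A MEM <- MEM set is a no-op only if neither side has side
     effects, e.g. neither MEM is volatile.  */
  if (MEM_P (dst) && MEM_P (src))
    return (rtx_equal_p (dst, src)
            && !side_effects_p (dst)
            && !side_effects_p (src));  /* checking src is the fix */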
2024-04-19 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/114768
* rtlanal.cc (set_noop_p): Don't return true for MEM <- MEM
sets if src has side-effects or for stores into ZERO_EXTRACT
if ZERO_EXTRACT operand has side-effects.
Jakub Jelinek [Thu, 18 Apr 2024 07:45:14 +0000 (09:45 +0200)]
internal-fn: Temporarily disable flag_trapv during .{ADD,SUB,MUL}_OVERFLOW etc. expansion [PR114753]
__builtin_{add,sub,mul}_overflow{,_p} builtins are well defined
for all inputs even for -ftrapv, and the -fsanitize=signed-integer-overflow
ifns shouldn't abort in libgcc but emit the desired ubsan diagnostics
or abort depending on -fsanitize* setting regardless of -ftrapv.
The expansion of these internal functions uses expand_expr* in various
places (e.g. MULT_EXPR at least in 2 spots), so temporarily disabling
flag_trapv in all those spots would be hard.
The following patch disables it around the bodies of 3 functions
which can do the expand_expr calls.
If this were in the C++ FE, I'd use some RAII sentinel, but I don't think
we have one in the middle-end.
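The pattern used in each of the three expanders is the usual manual
save/clear/restore, per the ChangeLog (a condensed sketch):

  int save_flag_trapv = flag_trapv;
  flag_trapv = 0;
  /* ... all the expand_expr calls that must not see -ftrapv ...  */
  flag_trapv = save_flag_trapv;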
2024-04-18 Jakub Jelinek <jakub@redhat.com>
PR middle-end/114753
* internal-fn.cc (expand_mul_overflow): Save flag_trapv and
temporarily clear it for the duration of the function, then
restore previous value.
(expand_vector_ubsan_overflow): Likewise.
(expand_arith_overflow): Likewise.
Jakub Jelinek [Mon, 15 Apr 2024 08:25:22 +0000 (10:25 +0200)]
attribs: Don't crash on NULL TREE_TYPE in diag_attr_exclusions [PR114634]
The enumerator still doesn't have TREE_TYPE set, but diag_attr_exclusions
assumes that all decls must have types.
I think it is better for something as unimportant as diag_attr_exclusions
to be more robust: if there is no type, it can just diagnose exclusions
on the DECL_ATTRIBUTES, like for types it only diagnoses them on
TYPE_ATTRIBUTES.
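Roughly, the guard amounts to the following (a sketch; the local variable
name is illustrative):

  attrs[0] = DECL_ATTRIBUTES (node);
  /* An enumerator may have no TREE_TYPE yet; don't dereference it.  */
  attrs[1] = TREE_TYPE (node) ? TYPE_ATTRIBUTES (TREE_TYPE (node))
                              : NULL_TREE;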
2024-04-15 Jakub Jelinek <jakub@redhat.com>
PR c++/114634
* attribs.cc (diag_attr_exclusions): Set attrs[1] to NULL_TREE for
decls with NULL TREE_TYPE.
Jakub Jelinek [Fri, 12 Apr 2024 18:53:10 +0000 (20:53 +0200)]
c++: Fix bogus warnings about ignored annotations [PR114691]
The middle-end warns about the ANNOTATE_EXPR added for while/for loops
if they declare a var inside of the loop condition.
This is because the assumption is that the ANNOTATE_EXPR argument is used
immediately in a COND_EXPR (later GIMPLE_COND), but simplify_loop_decl_cond
wraps the ANNOTATE_EXPR inside a TRUTH_NOT_EXPR, so the assumption no longer
holds.
The following patch fixes that by adding the TRUTH_NOT_EXPR inside of the
ANNOTATE_EXPR argument if any.
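In tree-shape terms (illustrative only):

  /* Before: the annotation no longer feeds the condition directly.
       TRUTH_NOT_EXPR <ANNOTATE_EXPR <cond, kind>>
     After: the annotation stays immediately under the condition.
       ANNOTATE_EXPR <TRUTH_NOT_EXPR <cond>, kind>  */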
2024-04-12 Jakub Jelinek <jakub@redhat.com>
PR c++/114691
* semantics.cc (simplify_loop_decl_cond): Use cp_build_unary_op with
TRUTH_NOT_EXPR on ANNOTATE_EXPR argument (if any) rather than
ANNOTATE_EXPR itself.
Jakub Jelinek [Thu, 11 Apr 2024 09:12:11 +0000 (11:12 +0200)]
asan, v3: Fix up handling of > 32 byte aligned variables with -fsanitize=address -fstack-protector* [PR110027]
On Tue, Mar 26, 2024 at 02:08:02PM +0800, liuhongt wrote:
> > > So, try to add some other variable with larger size and smaller alignment
> > > to the frame (and make sure it isn't optimized away).
> > >
> > > alignb above is the alignment of the first partition's var, if
> > > align_frame_offset really needs to depend on the var alignment, it probably
> > > should be the maximum alignment of all the vars with alignment
> > > alignb * BITS_PER_UNIT <= MAX_SUPPORTED_STACK_ALIGNMENT
> > >
>
> In asan_emit_stack_protection, when it allocated fake stack, it assume
> bottom of stack is also aligned to alignb. And the place violated this
> is the first var partition. which is 32 bytes offsets, it should be
> BIGGEST_ALIGNMENT / BITS_PER_UNIT.
> So I think we need to use MAX (BIGGEST_ALIGNMENT /
> BITS_PER_UNIT, ASAN_RED_ZONE_SIZE) for the first var partition.
Your first patch aligned offsets[0] to maximum of alignb and
ASAN_RED_ZONE_SIZE. But as I wrote in the reply to that mail, alignb there
is the alignment of just a single variable which is the first one to appear
in the sorted list and is placed in the highest spot in the stack frame.
That is not necessarily the largest alignment, the sorting ensures that it
is a variable with the largest size in the frame (and only if several of
them have equal size, largest alignment from the same sized ones). Your
second patch used maximum of BIGGEST_ALIGNMENT / BITS_PER_UNIT and
ASAN_RED_ZONE_SIZE. That doesn't change anything at all when using
-mno-avx512f - offsets[0] is still just 32-byte aligned in that case
relative to top of frame, just changes the -mavx512f case to be 64-byte
aligned offsets[0] (aka offsets[0] is then either 0 or -64 instead of either
0 or -32). That will not help if any variable in the frame needs 128-byte,
256-byte, 512-byte ... 4096-byte alignment. If you want to fix the bug in
the spot you've touched, you'd need to walk all the
stack_vars[stack_vars_sorted[si2]] for si2 in [si + 1, n - 1] and for those
where the loop would do anything, i.e.

  stack_vars[i2].representative == i2
  && (TREE_CODE (decl2) == SSA_NAME
      ? SA.partition_to_pseudo[var_to_partition (SA.map, decl2)] == NULL_RTX
      : DECL_RTL (decl2) == pc_rtx)

and the pred applies (but that means also walking the earlier ones,
because with -fstack-protector* the vars can be processed in several calls)
and

  alignb2 * BITS_PER_UNIT <= MAX_SUPPORTED_STACK_ALIGNMENT

holds, and compute the maximum of those alignments.
That maximum is already computed,
data->asan_alignb = MAX (data->asan_alignb, alignb);
computes that, but you get the final result only after you do all the
expand_stack_vars calls. You'd need to compute it before.
Though, that change would be still in the wrong place.
The thing is, it would be a waste of the precious stack space when it isn't
needed at all (e.g. when asan will not at compile time do the use after
return checking, or if it won't do it at runtime, or even if it will do at
runtime it will waste the space on the stack).
The following patch fixes it solely for the __asan_stack_malloc_N
allocations, doesn't enlarge unnecessarily further the actual stack frame.
Because asan is only supported on FRAME_GROWS_DOWNWARD architectures
(mips, rs6000 and xtensa are conditional FRAME_GROWS_DOWNWARD arches, which
for -fsanitize=address or -fstack-protector* use FRAME_GROWS_DOWNWARD 1,
otherwise 0, others supporting asan always just use 1), the assumption for
the dynamic stack realignment is that the top of the stack frame (aka offset
0) is aligned to alignb passed to the function (which is the maximum of alignb
of all the vars in the frame). As checked by the assertion in the patch,
offsets[0] is 0 most of the time and so that assumption is correct, the only
case when it is not 0 is if -fstack-protector* is on together with
-fsanitize=address and cfgexpand.cc (create_stack_guard) created a stack
guard. That is the only variable which is allocated in the stack frame
right away, for all others with -fsanitize=address defer_stack_allocation
(or -fstack-protector*) returns true and so they aren't allocated
immediately but handled during the frame layout phases. So, the original
frame_offset of 0 is changed because of the stack guard to
-pointer_size_in_bytes and later at the
  if (data->asan_vec.is_empty ())
    {
      align_frame_offset (ASAN_RED_ZONE_SIZE);
      prev_offset = frame_offset.to_constant ();
    }
to -ASAN_RED_ZONE_SIZE. The asan_emit_stack_protection code wasn't
taking this into account though, so essentially assumed in the
__asan_stack_malloc_N allocated memory it needs to align it such that
pointer corresponding to offsets[0] is alignb aligned. But that isn't
correct if alignb > ASAN_RED_ZONE_SIZE, in that case it needs to ensure that
pointer corresponding to frame offset 0 is alignb aligned.
The following patch fixes that. Unlike the previous case where
we knew that asan_frame_size + base_align_bias falls into the same bucket
as asan_frame_size, this isn't in some cases true anymore, so the patch
recomputes which bucket to use and if going to bucket 11 (because there is
no __asan_stack_malloc_11 function in the library) disables the after return
sanitization.
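A hedged sketch of that recomputation (the exact expression is an
assumption modeled on the existing bucket computation in
asan_emit_stack_protection):

  /* Assumed shape: the __asan_stack_malloc_N bucket is derived from
     the biased frame size; bucket 11 has no libasan entry point.  */
  int new_class = floor_log2 (asan_frame_size + base_align_bias - 1) - 5;
  if (new_class > 10)
    use_after_return_class = -1;  /* disable use-after-return checking */
  else
    use_after_return_class = new_class;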
2024-04-11 Jakub Jelinek <jakub@redhat.com>
PR middle-end/110027
* asan.cc (asan_emit_stack_protection): Assert offsets[0] is
zero if there is no stack protect guard, otherwise
-ASAN_RED_ZONE_SIZE. If alignb > ASAN_RED_ZONE_SIZE and there is
stack pointer guard, take the ASAN_RED_ZONE_SIZE bytes allocated at
the top of the stack into account when computing base_align_bias.
Recompute use_after_return_class from asan_frame_size + base_align_bias
and set to -1 if that would overflow to 11.
Jakub Jelinek [Tue, 9 Apr 2024 07:31:42 +0000 (09:31 +0200)]
c++: Fix up maybe_warn_for_constant_evaluated calls [PR114580]
When looking at maybe_warn_for_constant_evaluated for the trivial
infinite loops patch, I've noticed that it can emit weird diagnostics
for if constexpr in templates, first warn that std::is_constant_evaluated()
always evaluates to false (because the function template is not constexpr)
and then during instantiation warn that std::is_constant_evaluated()
always evaluates to true (because it is used in if constexpr condition).
Now, only the latter is actually true, even when the if constexpr
is in a non-constexpr function, it will still always evaluate to true.
So, the following patch fixes it to call maybe_warn_for_constant_evaluated
always with IF_STMT_CONSTEXPR_P (if_stmt) as the second argument rather than
true if it is if constexpr with non-dependent condition etc.
2024-04-09 Jakub Jelinek <jakub@redhat.com>
PR c++/114580
* semantics.cc (finish_if_stmt_cond): Call
maybe_warn_for_constant_evaluated with IF_STMT_CONSTEXPR_P (if_stmt)
as the second argument, rather than true/false depending on if
it is if constexpr with non-dependent constant expression with
bool type.
* g++.dg/cpp2a/is-constant-evaluated15.C: New test.
Jakub Jelinek [Fri, 5 Apr 2024 12:56:14 +0000 (14:56 +0200)]
vect: Don't clear base_misaligned in update_epilogue_loop_vinfo [PR114566]
The following testcase is miscompiled, because in the vectorized
epilogue the vectorizer assumes it can use aligned loads/stores
(if the base decl gets alignment increased), but it actually doesn't
increase that.
This is because r10-4203-g97c1460367 added the hunk the following
patch removes. The explanation feels reasonable, but it actually is
not true, as the testcase proves.
The thing is, we vectorize the main loop with 64-byte vectors
and the corresponding data refs have base_alignment 16 (the
a array has DECL_ALIGN 128) and offset_alignment 32. Now, because
of the offset_alignment 32 rather than 64, we need to use unaligned
loads/stores in the main loop (and ditto in the first load/store
in vectorized epilogue). But the second load/store in the vectorized
epilogue uses only 32-byte vectors and because it is a multiple
of offset_alignment, it checks if we could increase alignment of the
a VAR_DECL, the function returns true, sets base_misaligned = true
and says the access is then aligned.
But when update_epilogue_loop_vinfo clears base_misaligned with the
assumption that the var had to have the alignment increased already,
the update of DECL_ALIGN doesn't happen anymore.
Now, I'd think this base_misaligned = false was needed before the
r10-4030-gd2db7f7901 change was committed, which incorrectly
overwrote DECL_ALIGN even if it was already larger rather than
just always increasing it. But with that change in, it doesn't
make sense to me anymore.
Note, the testcase is latent on the trunk, but reproduces on the 13
branch.
Jakub Jelinek [Fri, 5 Apr 2024 07:31:28 +0000 (09:31 +0200)]
c++: Fix ICE with weird copy assignment operator [PR114572]
While ctors/dtors don't return anything (undeclared void, or the this pointer
on arm) and copy assignment operators normally return a reference to *this,
it isn't invalid to uselessly return some class object which might need
destructing, but the OpenMP clause handling code wasn't expecting that.
The following patch fixes that.
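A condensed sketch of the idea (simplified; argument names are
illustrative):

  /* If the (copy assignment) call returns a class object, wrap it so
     any needed destruction is accounted for.  */
  tree call = build_call_a (fn, nargs, argarray);
  if (CLASS_TYPE_P (TREE_TYPE (call)))
    call = build_cplus_new (TREE_TYPE (call), call, tf_warning_or_error);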
2024-04-05 Jakub Jelinek <jakub@redhat.com>
PR c++/114572
* cp-gimplify.cc (cxx_omp_clause_apply_fn): Call build_cplus_new
on build_call_a result if it has class type.
Jakub Jelinek [Thu, 4 Apr 2024 08:47:52 +0000 (10:47 +0200)]
fold-const: Handle NON_LVALUE_EXPR in native_encode_initializer [PR114537]
The following testcase is incorrectly rejected. The problem is that
for bit-fields native_encode_initializer expects the corresponding
CONSTRUCTOR elt value to be an INTEGER_CST, but that isn't the case
here; it is wrapped in a NON_LVALUE_EXPR by maybe_wrap_with_location.
We could STRIP_ANY_LOCATION_WRAPPER as well, but as all we are looking for
is INTEGER_CST inside, just looking through NON_LVALUE_EXPR seems easier.
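The look-through amounts to something like this (a sketch based on the
ChangeLog entry):

  /* Peel a location wrapper off a wrapped INTEGER_CST.  */
  if (TREE_CODE (val) == NON_LVALUE_EXPR
      && TREE_CODE (TREE_OPERAND (val, 0)) == INTEGER_CST)
    val = TREE_OPERAND (val, 0);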
2024-04-04 Jakub Jelinek <jakub@redhat.com>
PR c++/114537
* fold-const.cc (native_encode_initializer): Look through
NON_LVALUE_EXPR if val is INTEGER_CST.
Jakub Jelinek [Wed, 3 Apr 2024 08:02:35 +0000 (10:02 +0200)]
libquadmath: Don't assume the storage for __float128 arguments is aligned [PR114533]
With the register_printf_type/register_printf_modifier/register_printf_specifier
APIs the C library is just told the size of the argument and is provided with
a callback to fetch the argument from va_list using va_arg into C library provided
memory. The C library isn't told what alignment requirement the argument has,
but we were using a direct load of a __float128 value from that memory, which
assumes __alignof (__float128) alignment.
The following patch fixes that by using memcpy instead.
I haven't been able to reproduce an actual crash, tried
#include <quadmath.h>
#include <stdlib.h>
#include <stdio.h>

int
main ()
{
  __float128 r;
  int prec = 20;
  int width = 46;
  char buf[128];

  r = 2.0q;
  r = sqrtq (r);
  int n = quadmath_snprintf (buf, sizeof buf, "%+-#*.20Qe", width, r);
  if ((size_t) n < sizeof buf)
    printf ("%s\n", buf);
  /* Prints: +1.41421356237309504880e+00 */
  n = quadmath_snprintf (buf, sizeof buf, "%Qa", r);
  if ((size_t) n < sizeof buf)
    printf ("%s\n", buf);
  /* Prints: 0x1.6a09e667f3bcc908b2fb1366ea96p+0 */
  n = quadmath_snprintf (NULL, 0, "%+-#46.*Qe", prec, r);
  if (n > -1)
    {
      char *str = malloc (n + 1);
      if (str)
        {
          quadmath_snprintf (str, n + 1, "%+-#46.*Qe", prec, r);
          printf ("%s\n", str);
          /* Prints: +1.41421356237309504880e+00 */
        }
      free (str);
    }
  printf ("%+-#*.20Qe\n", width, r);
  printf ("%Qa\n", r);
  printf ("%+-#46.*Qe\n", prec, r);
  printf ("%d %Qe %d %Qe %d %Qe\n", 1, r, 2, r, 3, r);
  return 0;
}
In any case, I think memcpy for loading from it is right.
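The fix itself boils down to this shape (a sketch; arg_mem is a
hypothetical name for the C-library-provided argument memory, which the
real code reads out of args):

  /* Load without assuming __alignof (__float128) alignment.  */
  __float128 fpnum;
  memcpy (&fpnum, arg_mem, sizeof fpnum);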
2024-04-03 Simon Chopin <simon.chopin@canonical.com>
Jakub Jelinek <jakub@redhat.com>
PR libquadmath/114533
* printf/printf_fp.c (__quadmath_printf_fp): Use memcpy to copy
__float128 out of args.
* printf/printf_fphex.c (__quadmath_printf_fphex): Likewise.
Jakub Jelinek [Thu, 14 Mar 2024 16:48:30 +0000 (17:48 +0100)]
icf: Reset SSA_NAME_{PTR,RANGE}_INFO in successfully merged functions [PR113907]
AFAIK we have no code in LTO streaming to stream out or in
SSA_NAME_{RANGE,PTR}_INFO, so LTO effectively throws it all away
and lets vrp1 and alias analysis after IPA recompute that. There is
just one spot, for IPA VRP and IPA bit CCP we save/restore ranges
and set SSA_NAME_{PTR,RANGE}_INFO e.g. on parameters depending on what
we saved and propagated, but that is after streaming in bodies for the
post IPA optimizations.
Now, without LTO SSA_NAME_{RANGE,PTR}_INFO is already computed earlier
in many cases (e.g. evrp and early alias analysis, but other spots too),
but IPA ICF is ignoring the ranges and points-to details when
comparing the bodies. I think ignoring that is just fine, that is
effectively what we do for LTO where we throw that information away
before the analysis, and not ignoring it could lead to fewer ICF merging
possibilities.
So, the following patch instead verifies that for LTO SSA_NAME_{PTR,RANGE}_INFO
just isn't there on SSA_NAMEs in functions into which other functions have
been ICFed, and for non-LTO throws that information away (which matches the
LTO behavior).
Another possibility would be to remember the SSA_NAME <-> SSA_NAME mapping
vector (just one of the 2) on successful sem_function::equals on the
sem_function which is not the chosen leader (e.g. how SSA_NAMEs in the
leader map to SSA_NAMEs in the other function) and use that vector
to union the ranges in sem_function::merge. I can implement that for
comparison, but wanted to post this first if there is an agreement on
doing that or if Honza thinks we should take SSA_NAME_{RANGE,PTR}_INFO
into account. I think we can compare SSA_NAME_RANGE_INFO, but have
no idea how to try to compare points to info. And I think it will result
in less effective ICF for non-LTO vs. LTO unnecessarily.
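The reset can be sketched as follows (the function lookup and the decl
name are illustrative, not the verbatim patch):

  /* Drop flow-sensitive range/points-to info from every SSA name in
     the function other bodies were ICF-merged into.  */
  tree name;
  unsigned i;
  FOR_EACH_SSA_NAME (i, name, DECL_STRUCT_FUNCTION (merged_fndecl))
    reset_flow_sensitive_info (name);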
2024-03-12 Jakub Jelinek <jakub@redhat.com>
PR middle-end/113907
* ipa-icf.cc (sem_item_optimizer::merge_classes): Reset
SSA_NAME_RANGE_INFO and SSA_NAME_PTR_INFO on successfully ICF merged
functions.
Jakub Jelinek [Thu, 14 Mar 2024 13:09:20 +0000 (14:09 +0100)]
aarch64: Fix TImode __sync_*_compare_and_exchange expansion with LSE [PR114310]
The following testcase ICEs with LSE atomics.
The problem is that the @atomic_compare_and_swap<mode> expander uses
aarch64_reg_or_zero predicate for the desired operand, which is fine,
given that for most of the modes and even for TImode in some cases
it can handle zero immediate just fine, but the TImode
@aarch64_compare_and_swap<mode>_lse just uses register_operand for
that operand instead, again intentionally so, because the casp,
caspa, caspl and caspal instructions need to use a pair of consecutive
registers for the operand and xzr is just one register and we can't
just store zero into the link register to emulate a pair of zeros.
So, the following patch fixes that by forcing the newval operand into
a register for the TImode LSE case.
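Condensed, the fix has this shape (a sketch, not the verbatim hunk):

  /* casp needs a true register pair; aarch64_reg_or_zero would let
     (const_int 0) through, which the LSE pattern can't handle.  */
  if (mode == TImode && TARGET_LSE)
    newval = force_reg (TImode, newval);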
2024-03-14 Jakub Jelinek <jakub@redhat.com>
PR target/114310
* config/aarch64/aarch64.cc (aarch64_expand_compare_and_swap): For
TImode force newval into a register.
Jakub Jelinek [Thu, 7 Mar 2024 09:02:49 +0000 (10:02 +0100)]
bb-reorder: Fix -freorder-blocks-and-partition ICEs on aarch64 with asm goto [PR110079]
The following testcase ICEs because fix_crossing_unconditional_branches
thinks that an asm goto is an unconditional jump and removes it, replacing it
with an unconditional jump to one of the labels.
This doesn't happen on x86 because the function in question isn't invoked
there at all:
  /* If the architecture does not have unconditional branches that
     can span all of memory, convert crossing unconditional branches
     into indirect jumps.  Since adding an indirect jump also adds
     a new register usage, update the register usage information as
     well.  */
  if (!HAS_LONG_UNCOND_BRANCH)
    fix_crossing_unconditional_branches ();
I think for the asm goto case we should handle the non-fallthru edge, if
any, like any other fallthru (fix_crossing_unconditional_branches doesn't
really deal with those; it only looks at explicit branches at the end of
bbs, and we are in cfglayout mode at that point), and for the labels we
just pass the labels as immediates to the assembly and it is up to the
user to figure out how to store them/branch to them or whatever they want
to do.
So, the following patch fixes this by not treating asm goto as a simple
unconditional jump.
I really think that on the !HAS_LONG_UNCOND_BRANCH targets we have a bug
somewhere else, where outofcfglayout or whatever should actually create
those indirect jumps on the crossing edges instead of adding normal
unconditional jumps; e.g. for

  __attribute__((cold)) int bar (char *);
  __attribute__((hot)) int baz (char *);

  void
  qux (int x)
  {
    if (__builtin_expect (!x, 1))
      goto l1;
    bar ("");
    goto l1;
  l1:
    baz ("");
  }

  void
  corge (int x)
  {
    if (__builtin_expect (!x, 0))
      goto l1;
    baz ("");
  l2:
    return;
  l1:
    bar ("");
    goto l2;
  }

with -O2 -freorder-blocks-and-partition on aarch64 I see before/after this
patch just b .L? jumps, which I believe are +-32MB, so if .text is larger
than 32MB it could fail to link; but this patch doesn't address that.
Jakub Jelinek [Mon, 4 Mar 2024 09:04:19 +0000 (10:04 +0100)]
i386: Fix ICEs with SUBREGs from vector etc. constants to XFmode [PR114184]
The Intel extended format has the various weird number categories,
pseudo denormals, pseudo infinities, pseudo NaNs and unnormals.
Those are not representable in the GCC real_value and so neither
GIMPLE nor RTX VIEW_CONVERT_EXPR/SUBREG folding folds those into
constants.
As can be seen on the following testcase, because it isn't folded
(since GCC 12, before that we were folding it) we can end up with
a SUBREG of a CONST_VECTOR or similar constant, which isn't valid
general_operand, so we ICE during vregs pass trying to recognize
the move instruction.
Initially I thought it is a middle-end bug, the movxf instruction
has general_operand predicate, but the middle-end certainly never
tests that predicate, seems moves are special optabs.
And looking at other mov optabs, e.g. for vector modes the i386
patterns use the nonimmediate_operand predicate on the input, yet
ix86_expand_vector_move deals with CONSTANT_P and SUBREG of CONSTANT_P
arguments which, if the predicate were checked, couldn't ever make it
through.
The following patch handles this case similarly to
ix86_expand_vector_move's SUBREG of CONSTANT_P case, and does it just for
XFmode because I believe that is the only scalar mode that needs it;
others should just be folded.
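A hedged sketch of the handling (modeled on the SUBREG-of-constant case
in ix86_expand_vector_move; treat the exact calls as an assumption):

  /* Materialize the inner constant in memory, then take the XFmode
     lowpart of that instead of a SUBREG of a bare constant.  */
  if (SUBREG_P (op1) && CONSTANT_P (SUBREG_REG (op1)))
    {
      rtx mem = force_const_mem (GET_MODE (SUBREG_REG (op1)),
                                 SUBREG_REG (op1));
      op1 = gen_lowpart (XFmode, validize_mem (mem));
    }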
2024-03-04 Jakub Jelinek <jakub@redhat.com>
PR target/114184
* config/i386/i386-expand.cc (ix86_expand_move): If XFmode op1
is SUBREG of CONSTANT_P, force the SUBREG_REG into memory or
register.
Jakub Jelinek [Thu, 22 Feb 2024 18:32:02 +0000 (19:32 +0100)]
c: Handle scoped attributes in __has*attribute and scoped attribute parsing changes in -std=c11 etc. modes [PR114007]
We aren't able to parse __has_attribute (vendor::attr) (and __has_c_attribute
and __has_cpp_attribute) in strict C < C23 modes. While in -std=gnu* modes
or in -std=c23 there is a CPP_SCOPE token, in -std=c* (except for -std=c23)
there is just a pair of CPP_COLON tokens.
The c-lex.cc hunk adds support for that, but always returns 0 in that case
unlike the GCC 14+ version.
2024-02-22 Jakub Jelinek <jakub@redhat.com>
PR c/114007
gcc/c-family/
* c-lex.cc (c_common_has_attribute): Parse 2 CPP_COLONs with
the first one with COLON_SCOPE flag the same as CPP_SCOPE but
ensure 0 is returned then.
gcc/testsuite/
* gcc.dg/c23-attr-syntax-8.c: New test.
libcpp/
* include/cpplib.h (COLON_SCOPE): Define to PURE_ZERO.
* lex.cc (_cpp_lex_direct): When lexing CPP_COLON with another
colon after it, if !CPP_OPTION (pfile, scope) set COLON_SCOPE
flag on the first CPP_COLON token.
The C and C++ FEs when parsing attributes already canonicalize them
(i.e. if they start with __ and end with __ substrings, we remove those).
lookup_attribute already verifies in gcc_assert that the first character
of name is not an underscore, and even lookup_scoped_attribute_spec doesn't
attempt to canonicalize the namespace it is passed. But for some historic
reason it was canonicalizing the name argument, which misbehaves when
an attribute starts with ____ and ends with ____.
I believe it is just wrong to try to canonicalize the name argument of
lookup_scoped_attribute_spec; it should have been canonicalized already,
as in the other spots where it is called it is already canonicalized
before. For example, canonicalization maps __foo__ to foo, so ____foo____
is mapped to __foo__, and a second canonicalization would wrongly strip
that down to foo.
2024-02-12 Jakub Jelinek <jakub@redhat.com>
PR c++/113674
* attribs.cc (extract_attribute_substring): Remove.
(lookup_scoped_attribute_spec): Don't call it.
Jakub Jelinek [Sat, 3 Feb 2024 13:37:19 +0000 (14:37 +0100)]
ggc-common: Fix save PCH assertion
We are getting a gnuradio PCH ICE
/usr/include/pybind11/stl.h:447:1: internal compiler error: in gt_pch_save, at ggc-common.cc:693
0x1304e7d gt_pch_save(_IO_FILE*)
../../gcc/ggc-common.cc:693
0x12a45fb c_common_write_pch()
../../gcc/c-family/c-pch.cc:175
0x18ad711 c_parse_final_cleanups()
../../gcc/cp/decl2.cc:5062
0x213988b c_common_parse_file()
../../gcc/c-family/c-opts.cc:1319
(unfortunately it isn't always reproducible, often needs up to 100
attempts, and isn't reproducible in a cross compiler etc.).
The bug is in the assertion I've added in gt_pch_save when adding
relocation support for the PCH files in case they happen not to be
mmapped at the selected address.
addr is a relocated address which points to a location in the PCH
blob (starting at mmi.preferred_base, with mmi.size bytes) which contains
a pointer that needs to be relocated. So the assertion is meant to
verify the address is within the PCH blob, obviously it needs to be
equal or above mmi.preferred_base, but I got the other comparison wrong
and when one is very unlucky and the last sizeof (void *) bytes of the
blob happen to be a pointer which needs to be relocated, such as on the
s390x host addr 0x8008a04ff8, mmi.preferred_base 0x8000000000 and
mmi.size 0x8a05000, addr + sizeof (void *) is equal to mmi.preferred_base +
mmi.size and that is still fine, both addresses are end of something.
2024-02-03 Jakub Jelinek <jakub@redhat.com>
* ggc-common.cc (gt_pch_save): Allow addr to be equal to
mmi.preferred_base + mmi.size - sizeof (void *).
Jakub Jelinek [Tue, 30 Jan 2024 08:58:05 +0000 (09:58 +0100)]
tree-ssa-strlen: Fix up handle_store [PR113603]
Since r10-2101-gb631bdb3c16e85f35d3 handle_store uses
count_nonzero_bytes{,_addr} which (more recently limited to statements
with the same vuse) can walk earlier statements feeding the rhs
of the store and call get_stridx on it.
Unlike most of the other functions where get_stridx is called first on
rhs and only later on lhs, handle_store calls get_stridx on the lhs before
the count_nonzero_bytes* call and does some si->nonzero_bytes comparison
on it.
Now, strinfo structures are refcounted and it is important not to screw
it up.
What happens on the following testcase is that we call get_strinfo on the
destination idx's base (g), which returns a strinfo at that moment
with refcount of 2, one copy referenced in bb 2 final strinfos, one in bb 3
(the vector of strinfos was unshared from the dominator there because some
other strinfo was added) and finally we process a store in bb 6.
Now, count_nonzero_bytes is called and that sees &g[1] in a PHI and
calls get_stridx on it, which in turn calls get_stridx_plus_constant
because &g + 1 address doesn't have stridx yet. This creates a new
strinfo for it:
  si = new_strinfo (ptr, idx, build_int_cst (size_type_node, nonzero_chars),
                    basesi->full_string_p);
  set_strinfo (idx, si);
and the latter call, because it is the first one in bb 6 that needs it,
unshares the stridx_to_strinfo vector (so refcount of the g strinfo becomes
3).
Now, get_stridx_plus_constant needs to chain the new strinfo of &g[1] in
between the related strinfos, so after the g record. Because the strinfo
is now shared between the current bb and 2 other bbs, it needs to
unshare_strinfo it (creating a new strinfo which can be modified as a copy
of the old one, decrementing refcount of the old shared one and setting
refcount of the new one to 1):
  if (strinfo *nextsi = get_strinfo (chainsi->next))
    {
      nextsi = unshare_strinfo (nextsi);
      si->next = nextsi->idx;
      nextsi->prev = idx;
    }
  chainsi = unshare_strinfo (chainsi);
  if (chainsi->first == 0)
    chainsi->first = chainsi->idx;
  chainsi->next = idx;
Now, the bug is that the caller of this, a couple of frames above,
handle_store, holds on to a pointer to this g strinfo (but doesn't know
about the unsharing, so the pointer is to the old strinfo with refcount
of 2), and later needs to update it, so it does

  si = unshare_strinfo (si);

and modifies some fields in it.
This creates a new strinfo (with refcount of 1 which is stored into
the vector of the current bb) based on the old strinfo for g and
decrements refcount of the old one to 1. So, now we are in inconsistent
state, because the old strinfo for g is referenced in bb 2 and bb 3
vectors, but has just refcount of 1, and then have one strinfo (the one
created by unshare_strinfo (chainsi) in get_stridx_plus_constant) which
has refcount of 1 but isn't referenced from anywhere anymore.
Later on, when we free one of the bb 2 or bb 3 vectors (forgot which),
that decrements the refcount from 1 to 0 and poisons the strinfo/returns it
to the pool, but then maybe_invalidate ICEs when looking at the other bb's
pointer to it.
The following patch fixes it by calling get_strinfo again; it is guaranteed
to return non-NULL, but could be an unshared copy instead of the originally
fetched shared one.
I believe we only need to do this refetching for the case where get_strinfo
is called on the lhs before get_stridx is called on other operands, because
we should be always modifying (apart from the chaining changes) the strinfo
for the destination of the statements, not other strinfos just consumed in
there.
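The fix is then essentially a refetch (sketch; the surrounding code is
elided):

  /* count_nonzero_bytes may have unshared the strinfo behind our
     back; refetch it before updating any fields.  */
  si = get_strinfo (idx);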
2024-01-30 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113603
* tree-ssa-strlen.cc (strlen_pass::handle_store): After
count_nonzero_bytes call refetch si using get_strinfo in case it
has been unshared in the meantime.
Jakub Jelinek [Thu, 18 Jan 2024 09:21:12 +0000 (10:21 +0100)]
i386: Add -masm=intel profiling support [PR113122]
x86_function_profiler emits assembly directly into the output file and only
emits AT&T syntax. The following patch adjusts it to emit MASM syntax
if -masm=intel.
As it doesn't use asm_fprintf, I can't use {|} syntax for the dialects.
I've tested using
  for i in -mcmodel=large "-mcmodel=large -fpic" "" -fpic "-m32 -fpic" "-m32"; do
    ./xgcc -B ./ -c -O2 -fprofile $i -masm=att pr113122.c -o pr113122.o1
    ./xgcc -B ./ -c -O2 -fprofile $i -masm=intel pr113122.c -o pr113122.o2
    objdump -dr pr113122.o1 > /tmp/1
    objdump -dr pr113122.o2 > /tmp/2
    diff -up /tmp/1 /tmp/2
  done
that the emitted sequences are identical after assembly.
2024-01-18 Jakub Jelinek <jakub@redhat.com>
PR target/113122
* config/i386/i386.cc (x86_function_profiler): Add -masm=intel
support. Add missing space after , in emitted assembly in some
cases. Formatting fixes.
* gcc.target/i386/pr113122-1.c: New test.
* gcc.target/i386/pr113122-2.c: New test.
* gcc.target/i386/pr113122-3.c: New test.
* gcc.target/i386/pr113122-4.c: New test.
Jakub Jelinek [Tue, 16 Jan 2024 10:49:34 +0000 (11:49 +0100)]
cfgexpand: Workaround CSE of ADDR_EXPRs in VAR_DECL partitioning [PR113372]
The following patch adds a quick workaround to bugs in VAR_DECL
partitioning.
The problem is that there is no dependency between ADDR_EXPRs of local
decls and CLOBBERs of those vars, so VN can CSE uses of ADDR_EXPRs
(including ivopts integral variants thereof), which can break
add_scope_conflicts discovery of what variables are actually used
in certain region.
E.g. we can have
ivtmp.40_3 = (unsigned long) &MEM <unsigned long[100]> [(void *)&bitint.6 + 8B];
...
uses of ivtmp.40_3
...
bitint.6 ={v} {CLOBBER(eos)};
...
ivtmp.28_43 = (unsigned long) &MEM <unsigned long[100]> [(void *)&bitint.6 + 8B];
...
uses of ivtmp.28_43
before VN (such as dom3), which the add_scope_conflicts code identifies as 2
independent uses of bitint.6 variable (which is correct), but then VN
determines ivtmp.28_43 is the same as ivtmp.40_3 and just uses ivtmp.40_3
even in the second region; at that point add_scope_conflict thinks the
bitint.6 variable is not used in that region anymore.
The following patch does a simple single def-stmt check for such ADDR_EXPRs
(rather than, say, trying to do a full propagation of which SSA_NAMEs can
contain ADDR_EXPRs of local variables), which seems to work around all 4 PRs.
In addition to this patch I've used the attached one to gather statistics
on the total size of all variable partitions in a function, and it seems
that besides the new testcases nothing is really affected compared to no patch (I've
actually just modified the patch to == OMP_SCAN instead of == ADDR_EXPR, so
it looks the same except that it never triggers). The comparison wasn't
perfect because I've only gathered BITS_PER_WORD, main_input_filename (did
some replacement of build directories and /tmp/ccXXXXXX names of LTO to make
it more similar between the two bootstraps/regtests), current_function_name
and the total size of all variable partitions if any, because I didn't
record e.g. the optimization options and so e.g. torture tests which iterate
over options could have different partition sizes even in one compiler when
BITS_PER_WORD, main_input_filename and current_function_name are all equal.
So I had to write an awk script to check whether the first triple in the
second build appeared in the first one and whether the quadruple in the
second build appeared in the first one too, otherwise print the result,
and that only triggered in the new tests.
Also, the cc1plus binary according to objdump -dr is identical between the
two builds except for the ADDR_EXPR vs. OMP_SCAN constant in the two spots.
2024-01-16 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113372
PR middle-end/90348
PR middle-end/110115
PR middle-end/111422
* cfgexpand.cc (add_scope_conflicts_2): New function.
(add_scope_conflicts_1): Use it.
* gcc.c-torture/execute/pr90348.c: New test.
* gcc.c-torture/execute/pr110115.c: New test.
* gcc.c-torture/execute/pr111422.c: New test.
Jakub Jelinek [Wed, 10 Jan 2024 12:29:47 +0000 (13:29 +0100)]
libgomp: Fix up FLOCK fallback handling [PR113192]
My earlier change broke Solaris testing, because @FLOCK@ isn't substituted
just into libgomp/Makefile, where it worked, but also into the
testsuite/libgomp-site-extra.exp file, where Make variables aren't present
and can't be substituted.
The following patch instead computes the absolute srcdir path and uses it
for FLOCK.
2024-01-10 Jakub Jelinek <jakub@redhat.com>
PR libgomp/113192
* configure.ac (FLOCK): Use $libgomp_abs_srcdir/testsuite/flock
instead of \$(abs_top_srcdir)/testsuite/flock.
* configure: Regenerated.
The copy attribute is allowed on decls as well as types and the handler even
checks whether decl (set to *node) is DECL_P or TYPE_P, but for diagnostics
it unconditionally uses DECL_SOURCE_LOCATION (decl), which obviously only
works if the attribute applies to a decl.
2024-01-09 Jakub Jelinek <jakub@redhat.com>
PR c/113262
* c-attribs.cc (handle_copy_attribute): Don't use
DECL_SOURCE_LOCATION (decl) if decl is not DECL_P, use input_location
instead. Formatting fixes.
Richard Biener [Mon, 26 Jun 2023 10:51:37 +0000 (12:51 +0200)]
tree-optimization/110381 - preserve SLP permutation with in-order reductions
The following fixes a bug that manifests itself during the fold-left
reduction transform by picking not the last scalar def to replace,
and thus double-counting some elements. But the underlying issue
is that we merge a load permutation into the in-order reduction
which is of course wrong.
Now, reduction analysis has not yet been performed when optimizing
permutations, so we have to check that ourselves.
PR tree-optimization/110381
* tree-vect-slp.cc (vect_optimize_slp_pass::start_choosing_layouts):
Materialize permutes before fold-left reductions.
Richard Biener [Wed, 14 Feb 2024 11:33:13 +0000 (12:33 +0100)]
tree-optimization/113910 - huge compile time during PTA
For the testcase in PR113910 we spend a lot of time in PTA comparing
bitmaps for looking up equivalence class members. This points to
the very weak bitmap_hash function which effectively hashes set
and a subset of not set bits.
The major problem with it is that it simply truncates the
BITMAP_WORD sized intermediate hash to hashval_t which is
unsigned int, effectively not hashing half of the bits.
This reduces the compile-time for the testcase from tens of minutes
to 42 seconds and PTA time from 99% to 46%.
PR tree-optimization/113910
* bitmap.cc (bitmap_hash): Mix the full element "hash" to
the hashval_t hash.
This was another PR caused by the way that
vect_determine_precisions_from_range handles shifts. We tried to
narrow 32768 >> x to a 16-bit shift based on range information for
the inputs and outputs, with vect_recog_over_widening_pattern
(after PR110828) adjusting the shift amount. But this doesn't
work for the case where x is in [16, 31], since then 32-bit
32768 >> x is a well-defined zero, whereas no well-defined
16-bit 32768 >> y will produce 0.
We could perhaps generate x < 16 ? 32768 >> x : 0 instead,
but since vect_determine_precisions_from_range was never really
supposed to rely on fix-ups, it seems better to fix that instead.
The patch also makes the code more selective about which codes
can be narrowed based on input and output ranges. This showed
that vect_truncatable_operation_p was missing cases for
BIT_NOT_EXPR (equivalent to BIT_XOR_EXPR of -1) and NEGATE_EXPR
(equivalent to BIT_NOT_EXPR followed by a PLUS_EXPR of 1).
pr113281-1.c is the original testcase. pr113281-[23].c failed
before the patch due to overly optimistic narrowing. pr113281-[45].c
previously passed and are meant to protect against accidental
optimisation regressions.
gcc/
PR target/113281
* tree-vect-patterns.cc (vect_recog_over_widening_pattern): Remove
workaround for right shifts.
(vect_truncatable_operation_p): Handle NEGATE_EXPR and BIT_NOT_EXPR.
(vect_determine_precisions_from_range): Be more selective about
which codes can be narrowed based on their input and output ranges.
For shifts, require at least one more bit of precision than the
maximum shift amount.
create_intersect_range_checks checks whether two access ranges
a and b are alias-free using something equivalent to:
end_a <= start_b || end_b <= start_a
It has two ways of doing this: a "vanilla" way that calculates
the exact exclusive end pointers, and another way that uses the
last inclusive aligned pointers (and changes the comparisons
accordingly). The comment for the latter is:
  /* Calculate the minimum alignment shared by all four pointers,
     then arrange for this alignment to be subtracted from the
     exclusive maximum values to get inclusive maximum values.
     This "- min_align" is cumulative with a "+ access_size"
     in the calculation of the maximum values.  In the best
     (and common) case, the two cancel each other out, leaving
     us with an inclusive bound based only on seg_len.  In the
     worst case we're simply adding a smaller number than before.  */
The problem is that the associated code implicitly assumed that the
access size was a multiple of the pointer alignment, and so the
alignment could be carried over to the exclusive end pointer.
The testcase started failing after g:9fa5b473b5b8e289b6542
because that commit improved the alignment information for
the accesses.
gcc/
PR tree-optimization/115192
* tree-data-ref.cc (create_intersect_range_checks): Take the
alignment of the access sizes into account.
gcc/testsuite/
PR tree-optimization/115192
* gcc.dg/vect/pr115192.c: New test.
where SImode divmod_operator (div,mod,udiv,umod) has DImode operands.
Wrap input operand with truncate:SI to make machine modes consistent.
PR target/115297
gcc/ChangeLog:
* config/alpha/alpha.md (<any_divmod:code>si3): Wrap DImode
operands 3 and 4 with truncate:SI RTX.
(*divmodsi_internal_er): Ditto for operands 1 and 2.
(*divmodsi_internal_er_1): Ditto.
(*divmodsi_internal): Ditto.
* config/alpha/constraints.md ("b"): Correct register
number in the description.
Jonathan Wakely [Wed, 29 May 2024 09:59:48 +0000 (10:59 +0100)]
libstdc++: Replace link to gcc-4.3.2 docs in manual [PR115269]
Link to the docs for GCC trunk instead. For the release branches, the
link should be to the docs for appropriate release branch.
Also replace the incomplete/outdated list of explicit -std options with
a single entry for the -std option.
libstdc++-v3/ChangeLog:
PR libstdc++/115269
* doc/xml/manual/using.xml: Replace link to gcc-4.3.2 docs.
Replace list of -std=... options with a single entry for -std.
* doc/html/manual/using.html: Regenerate.
YunQiang Su [Tue, 28 May 2024 18:28:25 +0000 (02:28 +0800)]
MIPS16: Mark $2/$3 as clobbered if GP is used
PR target/84790.
The gp init sequence

  li    $2,%hi(_gp_disp)
  addiu $3,$pc,%lo(_gp_disp)
  sll   $2,16
  addu  $2,$3
is generated directly in `mips_output_function_prologue`, and does
not appear in the RTL.
So the IRA/IPA passes are not aware that $2/$3 have been clobbered,
and they may be used across a (local) function call.
Let's mark $2/$3 as clobbered in both places:
- Just after the UNSPEC_GP RTL of a function;
- Just after a function call.
Reported-by: Matthias Schiffer <mschiffer@universe-factory.net>
Origin-Patch-by: Felix Fietkau <nbd@nbd.name>
gcc
* config/mips/mips.cc (mips16_gp_pseudo_reg): Mark
MIPS16_PIC_TEMP and MIPS_PROLOGUE_TEMP clobbered.
(mips_emit_call_insn): Mark MIPS16_PIC_TEMP and
MIPS_PROLOGUE_TEMP clobbered if MIPS16 and CALL_CLOBBERED_GP.
Jakub Jelinek [Wed, 22 May 2024 07:12:28 +0000 (09:12 +0200)]
ubsan: Use right address space for MEM_REF created for bool/enum sanitization [PR115172]
The following testcase is miscompiled, because -fsanitize=bool,enum
creates a MEM_REF without propagating the address space qualifiers
there, so what should normally be loaded using, say, a %gs:/%fs: segment
prefix isn't. Together with asan it then causes that load to be sanitized.
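A sketch of the fix (simplified from the patch description):

  /* Give utype the rhs's address space so the MEM_REF keeps the
     segment qualification.  */
  addr_space_t as = TYPE_ADDR_SPACE (TREE_TYPE (rhs));
  if (!ADDR_SPACE_GENERIC_P (as))
    utype = build_qualified_type (utype,
                                  TYPE_QUALS (utype)
                                  | ENCODE_QUAL_ADDR_SPACE (as));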
2024-05-22 Jakub Jelinek <jakub@redhat.com>
PR sanitizer/115172
* ubsan.cc (instrument_bool_enum_load): If rhs is not in generic
address space, use qualified version of utype with the right
address space. Formatting fix.
Martin Jambor [Tue, 28 May 2024 11:33:02 +0000 (13:33 +0200)]
ipa: Compare jump functions in ICF (PR 113907)
This is a manual backport of r14-9840-g1162861439fd3c from master.
Manual because the bits and value range representation in jump
functions changed during the GCC 14 development cycle.
In PR 113907 comment #58, Honza found a case where ICF thinks the bodies
of functions are equivalent but, because of a difference in aliases in a
memory access, different aggregate jump functions are associated with
supposedly equivalent call statements.
compare jump functions and plugs it into ICF to avoid the issue.
gcc/ChangeLog:
2024-05-14 Martin Jambor <mjambor@suse.cz>
PR ipa/113907
* ipa-prop.h (ipa_jump_functions_equivalent_p): Declare.
(values_equal_for_ipcp_p): Likewise.
* ipa-prop.cc (ipa_agg_pass_through_jf_equivalent_p): New function.
(ipa_agg_jump_functions_equivalent_p): Likewise.
(ipa_jump_functions_equivalent_p): Likewise.
* ipa-cp.cc (values_equal_for_ipcp_p): Make function public.
* ipa-icf-gimple.cc: Include alloc-pool.h, symbol-summary.h, sreal.h,
ipa-cp.h and ipa-prop.h.
(func_checker::compare_gimple_call): Compare jump functions.
Jason Merrill [Wed, 27 Mar 2024 20:14:01 +0000 (16:14 -0400)]
c++: __is_constructible ref binding [PR100667]
The requirement that a type argument be complete is excessive in the case of
direct reference binding to the same type, which does not rely on any
properties of the type. This is LWG 2939.
PR c++/100667
gcc/cp/ChangeLog:
* semantics.cc (same_type_ref_bind_p): New.
(finish_trait_expr): Use it.
Jason Merrill [Thu, 25 Jan 2024 17:02:07 +0000 (12:02 -0500)]
c++: array of PMF [PR113598]
Here AGGREGATE_TYPE_P includes pointers to member functions, which is not
what we want. Instead we should use class||array, as elsewhere in the
function.
Jason Merrill [Tue, 2 Apr 2024 14:52:28 +0000 (10:52 -0400)]
c++: binding reference to comma expr [PR114561]
We represent a reference binding where the referent type is more qualified
by a ck_ref_bind around a ck_qual. We performed the ck_qual and then tried
to undo it with STRIP_NOPS, but that doesn't work if the conversion is
buried in COMPOUND_EXPR. So instead let's avoid performing that fake
conversion in the first place.
PR c++/114561
PR c++/114562
gcc/cp/ChangeLog:
* call.cc (convert_like_internal): Avoid adding qualification
conversion in direct reference binding.
gcc/testsuite/ChangeLog:
* g++.dg/conversion/ref10.C: New test.
* g++.dg/conversion/ref11.C: New test.
The following fixes a wrong pattern that didn't match the behavior
of the original fold_widened_comparison in that get_unwidened
always returned a constant in the wider type. But here we're
using (int) 4294967295u without the conversion applied. Fixed
by doing as earlier in the pattern: matching constants only
if the conversion was actually applied.
PR middle-end/110176
* match.pd (zext (bool) <= (int) 4294967295u): Make sure
to match INTEGER_CST only without outstanding conversion.
Richard Biener [Mon, 20 Nov 2023 12:39:52 +0000 (13:39 +0100)]
tree-optimization/112281 - loop distribution and zero dependence distances
The following fixes an omission in dependence testing for loop
distribution. When the overall dependence distance is not zero but
the dependence direction in the innermost common loop is = there is
a conflict between the partitions and we have to merge them.
PR tree-optimization/112281
* tree-loop-distribution.cc
(loop_distribution::pg_add_dependence_edges): For = in the
innermost common loop record a partition conflict.
* gcc.dg/torture/pr112281-1.c: New testcase.
* gcc.dg/torture/pr112281-2.c: Likewise.
Richard Biener [Mon, 22 Jan 2024 14:42:59 +0000 (15:42 +0100)]
debug/112718 - reset all type units with -ffat-lto-objects
When mixing -flto, -ffat-lto-objects and -fdebug-type-section we
fail to reset all type units after early output resulting in an
ICE when attempting to add what are then duplicate sibling attributes.
PR debug/112718
* dwarf2out.cc (dwarf2out_finish): Reset all type units
for the fat part of an LTO compile.
Richard Biener [Wed, 13 Dec 2023 13:23:31 +0000 (14:23 +0100)]
tree-optimization/112793 - SLP of constant/external code-generated twice
The following makes the attempt at code-generating a constant/external
SLP node twice well-formed as that can happen when partitioning BB
vectorization attempts where we keep constants/externals unpartitioned.
PR tree-optimization/112793
* tree-vect-slp.cc (vect_schedule_slp_node): Already
code-generated constant/external nodes are OK.
When we classify a conditional reduction chain as CONST_COND_REDUCTION
we fail to verify all involved conditionals have the same constant.
That's quite an unlikely situation, so the following simply disables
such classification when there's more than one reduction statement.
PR tree-optimization/114027
* tree-vect-loop.cc (vectorizable_reduction): Use optimized
condition reduction classification only for single-element
chains.
Richard Biener [Mon, 18 Mar 2024 11:39:03 +0000 (12:39 +0100)]
tree-optimization/114375 - disallow SLP discovery of permuted mask loads
We cannot currently handle permutations of mask loads in code generation
or permute optimization. But we simply drop any permutation on the
floor, so the following instead rejects the SLP build rather than
producing wrong-code. I've also made sure to reject them in
vectorizable_load for completeness.
PR tree-optimization/114375
* tree-vect-slp.cc (vect_build_slp_tree_2): Compute the
load permutation for masked loads but reject it when any
such is necessary.
* tree-vect-stmts.cc (vectorizable_load): Reject masked
VMAT_ELEMENTWISE and VMAT_STRIDED_SLP as those are not
supported.
Richard Biener [Tue, 5 Mar 2024 09:55:56 +0000 (10:55 +0100)]
tree-optimization/114231 - use patterns for BB SLP discovery root stmts
The following makes sure to use recognized patterns when vectorizing
roots during BB SLP discovery. We need to apply those late since
during root discovery we've not yet done pattern recognition.
All parts of the vectorizer assume patterns get used; for the testcase
we mix this up when doing live lane computation.
PR tree-optimization/114231
* tree-vect-slp.cc (vect_analyze_slp): Lookup patterns when
processing a BB SLP root.
Richard Biener [Fri, 26 Apr 2024 13:47:13 +0000 (15:47 +0200)]
middle-end/114734 - wrong code with expand_call_mem_ref
When expand_call_mem_ref looks at the definition of the address
argument to eventually expand a &TARGET_MEM_REF argument together
with a masked load it fails to honor constraints imposed by SSA
coalescing decisions. The following fixes this.
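The gist (a sketch; variable names are illustrative):

  /* Only look at the def stmt if out-of-SSA will still let us expand
     it at the use site; this honors coalescing decisions.  */
  if (TREE_CODE (addr) == SSA_NAME)
    if (gimple *def = get_gimple_for_ssa_name (addr))
      {
        /* ... expand the &TARGET_MEM_REF from DEF ...  */
      }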
PR middle-end/114734
* internal-fn.cc (expand_call_mem_ref): Use
get_gimple_for_ssa_name to get at the def stmt of the address
argument to honor SSA coalescing constraints.
Richard Biener [Tue, 9 Apr 2024 12:25:57 +0000 (14:25 +0200)]
lto/114655 - -flto=4 at link time doesn't override -flto=auto at compile time
The following adjusts -flto option processing in lto-wrapper to have
link-time -flto override any compile time setting.
PR lto/114655
* lto-wrapper.cc (merge_flto_options): Add force argument.
(merge_and_complain): Do not force here.
(run_gcc): But here to make the link-time -flto option override
any compile-time one.
Richard Biener [Mon, 15 Apr 2024 09:09:17 +0000 (11:09 +0200)]
gcov-profile/114715 - missing coverage for switch
The following avoids missing coverage for the line of a switch statement
which happens when gimplification emits a BIND_EXPR wrapping the switch
as that prevents us from setting locations on the containing statements
via annotate_all_with_location. Instead set the location of the GIMPLE
switch directly.
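Roughly (a sketch of the gimplify_switch_expr change):

  /* Carry the SWITCH_EXPR's location over to the gswitch statement.  */
  gimple_set_location (switch_stmt, EXPR_LOCATION (switch_expr));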
PR gcov-profile/114715
* gimplify.cc (gimplify_switch_expr): Set the location of the
GIMPLE switch.
Martin Jambor [Mon, 8 Apr 2024 15:34:33 +0000 (17:34 +0200)]
ipa: Self-DCE of uses of removed call LHSs (PR 108007)
PR 108007 is another manifestation where we rely on DCE to clean up
after IPA-SRA, and if the user explicitly switches DCE off, IPA-SRA
can leave behind statements which are fed uninitialized values and
trap, even though their results are themselves never used.
I have already fixed this for unused parameters in callees; this bug
shows that almost the same thing can happen for removed returns, on
the side of callers.
elsewhere, in call redirection. This patch adds a function which
looks for (and through, using a work-list) uses of operations fed
specific SSA names and removes them all.
That would have been easy if it wasn't for debug statements during
tree-inline (from which call redirection is also invoked). Debug
statements are decoupled from the rest at this point and iterating
over uses of SSAs does not bring them up. During tree-inline they are
handled especially at the end, I assume in order to make sure that
relative ordering of UIDs are the same with and without debug info.
This means that during tree-inline we need to make a hash of killed
SSAs, that we already have in copy_body_data, available to the
function making the purging. So the patch duly does also that, making
the interface slightly ugly. Moreover, all newly unused SSA names
need to be freed and as PR 112616 showed, it must be done in a defined
order, which is what newly added ipa_release_ssas_in_hash does.
PR ipa/108007
PR ipa/112616
* cgraph.h (cgraph_edge): Add a parameter to
redirect_call_stmt_to_callee.
* ipa-param-manipulation.h (ipa_param_adjustments): Add a
parameter to modify_call.
(ipa_release_ssas_in_hash): Declare.
* cgraph.cc (cgraph_edge::redirect_call_stmt_to_callee): New
parameter killed_ssas, pass it to padjs->modify_call.
* ipa-param-manipulation.cc (purge_all_uses): New function.
(ipa_param_adjustments::modify_call): New parameter killed_ssas.
Instead of substituting uses, invoke purge_all_uses. If
hash of killed SSAs has not been provided, create a temporary one
and release SSAs that have been added to it.
(compare_ssa_versions): New function.
(ipa_release_ssas_in_hash): Likewise.
* tree-inline.cc (redirect_all_calls): Create
id->killed_new_ssa_names earlier, pass it to edge redirection,
adjust a comment.
(copy_body): Release SSAs in id->killed_new_ssa_names.
Martin Jambor [Tue, 14 May 2024 12:13:36 +0000 (14:13 +0200)]
ipa: Force args obtained through pass-through maps to the expected type (PR 114247)
Interactions of IPA-CP and IPA-SRA on the same data is a rather big
source of issues, I'm afraid. PR 113964 is a situation where IPA-CP
propagates an unsigned short in a union parameter into a function
which itself calls a different function which has a same union
parameter and both these union parameters are split with IPA-SRA. The
leaf function however uses a signed short member of the union.
In the calling function, we get the unsigned constant as the
replacement for the union and it is then passed in the call without
any type compatibility checks. Apparently on riscv64 it matters
whether the parameter is signed or unsigned short and so the leaf
function can see different values.
Fixed by using useless_type_conversion_p at the appropriate place and,
if it fails, using force_value_to_type as elsewhere in similar
situations.
gcc/ChangeLog:
2024-04-04 Martin Jambor <mjambor@suse.cz>
PR ipa/114247
* ipa-param-manipulation.cc (ipa_param_adjustments::modify_call):
Force values obtained through pass-through maps to the expected
split type.
gcc/testsuite/ChangeLog:
2024-04-04 Patrick O'Neill <patrick@rivosinc.com>
Martin Jambor <mjambor@suse.cz>
Objective-C, NeXT, v2: Correct a regression in code-gen.
There have been several changes in the ABI of Objective-C which
depend on the OS version targeted. In this case Protocols and
LabelProtocols should be made weak/hidden/extern from macOS 10.7;
however, there was a mistake in the code causing this to occur
from macOS 10.6. Fixed thus.
gcc/objc/ChangeLog:
* objc-next-runtime-abi-02.cc (WEAK_PROTOCOLS_AFTER): New.
(next_runtime_abi_02_protocol_decl): Use WEAK_PROTOCOLS_AFTER
to determine this ABI change.
(build_v2_protocol_list_address_table): Likewise.
Jakub Jelinek [Thu, 9 May 2024 09:18:21 +0000 (11:18 +0200)]
testsuite: Fix up vector-subaccess-1.C test for ia32 [PR89224]
The test FAILs on i686-linux due to
.../gcc/testsuite/g++.dg/torture/vector-subaccess-1.C:16:6: warning: SSE vector argument without SSE enabled changes the ABI [-Wpsabi]
excess warnings.
This fixes it by adding -Wno-psabi, like commonly done in other tests.
2024-05-09 Jakub Jelinek <jakub@redhat.com>
PR c++/89224
* g++.dg/torture/vector-subaccess-1.C: Add -Wno-psabi as additional
options.
Andrew Pinski [Sun, 24 Sep 2023 04:53:09 +0000 (21:53 -0700)]
Fix PR 110386: backprop vs ABSU_EXPR
The issue here is that when backprop tries to
strip sign ops, it skips over ABSU_EXPR, but
ABSU_EXPR not only does an ABS, it also changes the
type to unsigned.
Since strip_sign_op_1 is only supposed to strip off
sign-changing operations and not ones that change types,
removing ABSU_EXPR here is correct. We don't handle
nop conversions, so this doesn't cause any missed optimizations either.
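After the fix, strip_sign_op_1 keeps only the same-type sign ops (a
condensed sketch, not the verbatim function):

  switch (gimple_assign_rhs_code (assign))
    {
    case ABS_EXPR:
    case NEGATE_EXPR:
      return gimple_assign_rhs1 (assign);  /* same type: strip */
    default:
      return NULL_TREE;  /* ABSU_EXPR lands here now: type changes */
    }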
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
Andrew Pinski [Thu, 22 Feb 2024 04:12:21 +0000 (20:12 -0800)]
warn-access: Fix handling of unnamed types [PR109804]
This looks like an oversight in the handling of DEMANGLE_COMPONENT_UNNAMED_TYPE.
DEMANGLE_COMPONENT_UNNAMED_TYPE only has the u.s_number.number set, while
the code expected newc.u.s_binary.left to be valid.
So this treats DEMANGLE_COMPONENT_UNNAMED_TYPE like we treat function parameters
(DEMANGLE_COMPONENT_FUNCTION_PARAM) and template parameters (DEMANGLE_COMPONENT_TEMPLATE_PARAM).
Note the code in the demangler does this when it sets DEMANGLE_COMPONENT_UNNAMED_TYPE:
  ret->type = DEMANGLE_COMPONENT_UNNAMED_TYPE;
  ret->u.s_number.number = num;
Committed as obvious after bootstrap/test on x86_64-linux-gnu
The problem here is that after r6-7425-ga9fee7cdc3c62d0e51730,
the comparison to see if the transformation could be done was using the
wrong value. Instead of seeing if the inner value was LE (for MIN, and GE
for MAX) the outer value, it was comparing the inner to the value used in
the comparison, which was wrong.
Committed to GCC 13 branch after bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
PR tree-optimization/111331
* tree-ssa-phiopt.cc (minmax_replacement):
Fix the LE/GE comparison for the
`(a CMP CST1) ? max<a,CST2> : a` optimization.
gcc/testsuite/ChangeLog:
PR tree-optimization/111331
* gcc.c-torture/execute/pr111331-1.c: New test.
* gcc.c-torture/execute/pr111331-2.c: New test.
* gcc.c-torture/execute/pr111331-3.c: New test.
Andrew Pinski [Sun, 10 Mar 2024 22:17:09 +0000 (22:17 +0000)]
Fold: Fix up merge_truthop_with_opposite_arm for NaNs [PR95351]
The problem here is that merge_truthop_with_opposite_arm would
use the type of the result of the comparison rather than the operands
of the comparison to figure out if we are honoring NaNs.
This fixes that oversight and now we get the correct results in this
case.
Committed as obvious after a bootstrap/test on x86_64-linux-gnu.
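Condensed, the fix is of this shape (operand names are illustrative):

  /* Honor NaNs based on the comparison operands' type, not the
     (boolean) type of the comparison itself.  */
  tree op_type = TREE_TYPE (TREE_OPERAND (cond, 0));
  bool honor_nans_p = HONOR_NANS (op_type);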
PR middle-end/95351
gcc/ChangeLog:
* fold-const.cc (merge_truthop_with_opposite_arm): Use
the type of the operands of the comparison and not the type
of the comparison.
PR libstdc++/114803
* include/experimental/bits/simd_builtin.h
(_SimdBase2::operator __vector_type_t): There is no __builtin()
function in _SimdWrapper, instead use its conversion operator.
* testsuite/experimental/simd/pr114803_vecbuiltin_cvt.cc: New
test.