Robin Dapp [Fri, 13 Dec 2024 10:23:03 +0000 (11:23 +0100)]
RISC-V: Increase cost for vec_construct [PR118019].
For a generic vec_construct from scalar elements we need to load each
scalar element and move it over to a vector register.
Right now we only use a cost of 1 per element.
This patch instead bases the cost on the register-move cost as well as
scalar_to_vec, multiplied by the number of elements in the vector.
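A rough sketch of the shape of the new costing (the names here are
illustrative assumptions, not the actual patch):
    /* Each element needs a scalar load/move plus a move into the
       vector register file, rather than a flat cost of 1.  */
    int elt_cost = register_move_cost + scalar_to_vec_cost; /* assumed names */
    int construct_cost = nunits * elt_cost;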
Jonathan Wakely [Thu, 12 Dec 2024 20:42:19 +0000 (20:42 +0000)]
libstdc++: Initialize all members of hashtable local iterators
Currently the _M_bucket members are left uninitialized for
default-initialized local iterators, and then copy construction copies
indeterminate values. We should just ensure they're initialized on
construction.
Setting them to zero makes default-initialization consistent with
value-initialization and avoids indeterminate values.
For the _Local_iterator_base<..., false> specialization we preserve the
existing behaviour of setting _M_bucket_count to -1 in the default
constructor, as a sentinel value to indicate there's no hash object
present.
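A minimal sketch of the idea (simplified, hypothetical class name):
    struct _Local_iterator_base_sketch
    {
      std::size_t _M_bucket = 0;        // was left indeterminate before
      std::size_t _M_bucket_count = -1; // sentinel: no hash object present
    };
Default-initialization now produces the same values as
value-initialization, so copying a default-initialized local iterator no
longer copies indeterminate values.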
libstdc++-v3/ChangeLog:
* include/bits/hashtable_policy.h (_Local_iterator_base): Use
default member-initializers.
Jonathan Wakely [Thu, 12 Dec 2024 20:40:15 +0000 (20:40 +0000)]
libstdc++: Use alias-declarations in bits/hashtable_policy.h
This file is only for C++11 and later, so replace typedefs with
alias-declarations for clarity. Also remove redundant std::
qualification on size_t, ptrdiff_t etc.
We can also remove the result_type, first_argument_type and
second_argument_type typedefs from the range hashers. We don't need
those types to follow the C++98 adaptable function object protocol.
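For instance, for the __hash_code type in this file:
    typedef std::size_t __hash_code;   // before: C++98-style typedef
    using __hash_code = size_t;        // after: alias-declaration,
                                       // redundant std:: dropped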
libstdc++-v3/ChangeLog:
* include/bits/hashtable_policy.h: Replace typedefs with
alias-declarations. Remove redundant std:: qualification.
(_Mod_range_hashing, _Mask_range_hashing): Remove adaptable
function object typedefs.
Jonathan Wakely [Thu, 5 Dec 2024 15:48:30 +0000 (15:48 +0000)]
libstdc++: Simplify storage of hasher in local iterators
The fix for PR libstdc++/56267 (relating to the lifetime of the hash
object stored in a local iterator) has undefined behaviour, as it relies
on being able to call a member function on an empty object that never
started its lifetime. Although the member function probably doesn't care
about the empty object's state, this is still technically undefined
because there is no object of that type at that address. It's also
possible that the hash object would have a stricter alignment than the
_Hash_code_storage object, so that the reinterpret_cast would produce a
misaligned pointer.
This fix replaces _Local_iterator_base's _Hash_code_storage base-class
with a new class template containing a potentially-overlapping (i.e.
[[no_unique_address]]) union member. This means that we always have
storage of the correct type, and it can be initialized/destroyed when
required. We no longer need a reinterpret_cast that gives us a pointer
that we should not dereference.
It would be nice if we could just use a union containing the _Hash
object as a data member of _Local_iterator_base, but that would be an
ABI change. The _Hash_code_storage that contains the _Hash object is the
first base-class, before the _Node_iterator_base base-class. Making the
union a data member of _Local_iterator_base would make it come after the
_Node_iterator_base base instead of before it, altering the layout.
Since we're changing _Hash_code_storage anyway, we can replace it with a
new class template that stores the _Hash object itself in the union,
rather than a _Hash_code_base that holds the _Hash. This removes an
unnecessary level of indirection in the class hierarchy. This change
requires the effects of _Hash_code_base::_M_bucket_index to be inlined
into the _Local_iterator_base::_M_incr function, but that's easy.
We don't need separate specializations of _Hash_obj_storage for an empty
hash function and a non-empty one. Using [[no_unique_address]] gives us
an empty base-class when possible.
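A minimal sketch of the union-based storage (simplified; the real code
differs in details):
    template<typename _Hash>
      struct _Hash_obj_storage
      {
        union _Uninit
        {
          _Uninit() noexcept { }  // does not start _M_h's lifetime
          ~_Uninit() { }          // lifetime is managed explicitly
          [[__no_unique_address__]] _Hash _M_h;
        };
        [[__no_unique_address__]] _Uninit _M_u;
      };
The union always provides suitably sized and aligned storage of the
right type, so no reinterpret_cast is needed, and the attribute makes
the base-class empty when _Hash is empty.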
libstdc++-v3/ChangeLog:
* include/bits/hashtable_policy.h (_Hash_code_storage): Remove.
(_Hash_obj_storage): New class template. Store the hash
function as a union member instead of using a byte buffer.
(_Local_iterator_base): Use _Hash_obj_storage instead of
_Hash_code_storage, adjust members that construct and destroy
the hash object.
(_Local_iterator_base::_M_incr): Calculate bucket index.
Jonathan Wakely [Wed, 4 Dec 2024 21:52:40 +0000 (21:52 +0000)]
libstdc++: Further simplify _Hashtable inheritance hierarchy
The main change here is using [[no_unique_address]] instead of the Empty
Base-class Optimization. Using the attribute allows us to use data
members instead of base-classes. That simplifies the inheritance
hierarchy, which means less work for the compiler. It also means that
ADL has fewer associated classes and associated namespaces to consider,
further reducing the work the compiler has to do.
Reducing the differences between the _Hashtable_ebo_helper primary
template and the partial specialization means we no longer need to use
member functions to access the stored object, because it's now always a
data member called _M_obj. This means we can also remove a number of
other helper functions that were using those member functions to access
the object, for example we can swap the _Hash and _Equal objects
directly in _Hashtable::swap instead of calling _Hashtable_base::_M_swap
which then calls _Hash_code_base::_M_swap.
Although [[no_unique_address]] would allow us to reduce the size for
empty types that are also 'final', doing so would be an ABI break
because those types were previously excluded from using the EBO. So we
still need the _Hashtable_ebo_helper class template and a partial
specialization, so that we only use the attribute under exactly the same
conditions as we previously used the EBO. This could be avoided with a
non-standard [[no_unique_address(expr)]] attribute that took a boolean
condition, or with reflection and token sequence injection, but we don't
have either of those things.
Because _Hashtable_ebo_helper is no longer used as a base-class we don't
need to disambiguate possible identical bases, so it doesn't need an
integral non-type template parameter.
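A minimal sketch of the resulting helper (simplified):
    // Apply the attribute under exactly the old EBO conditions.
    template<typename _Tp,
             bool __use_attr = !__is_final(_Tp) && __is_empty(_Tp)>
      struct _Hashtable_ebo_helper
      { _Tp _M_obj; };                            // non-empty or final

    template<typename _Tp>
      struct _Hashtable_ebo_helper<_Tp, true>
      { [[__no_unique_address__]] _Tp _M_obj; };  // zero size, like the EBO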
libstdc++-v3/ChangeLog:
* include/bits/hashtable.h (_Hashtable::swap): Swap hash
function and equality predicate here. Inline allocator swap
instead of using __alloc_on_swap.
* include/bits/hashtable_policy.h (_Hashtable_ebo_helper):
Replace EBO with no_unique_address attribute. Remove NTTP.
(_Hash_code_base): Replace base class with data member using
no_unique_address attribute.
(_Hash_code_base::_M_swap): Remove.
(_Hash_code_base::_M_hash): Remove.
(_Hashtable_base): Replace base class with data member using
no_unique_address attribute.
(_Hashtable_base::_M_swap): Remove.
(_Hashtable_alloc): Replace base class with data member using
no_unique_address attribute.
Jonathan Wakely [Sun, 8 Dec 2024 14:34:01 +0000 (14:34 +0000)]
libstdc++: Fix fancy pointer support in linked lists [PR57272]
The union members I used in the new _Node types for fancy pointers only
work for value types that are trivially default constructible. This
change replaces the anonymous union with a named union so it can be
given a default constructor and destructor, to leave the variant member
uninitialized.
This also fixes the incorrect macro names in the alloc_ptr_ignored.cc
tests as pointed out by François, and fixes some std::list pointer
confusions that the fixed alloc_ptr_ignored.cc test revealed.
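A minimal sketch of the named union (simplified, hypothetical name):
    template<typename _Tp>
      union _Node_storage
      {
        _Node_storage() { }   // leaves _M_data's lifetime unstarted
        ~_Node_storage() { }  // destruction is done explicitly
        _Tp _M_data;          // variant member; works even when _Tp is
      };                      // not trivially default constructible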
libstdc++-v3/ChangeLog:
PR libstdc++/57272
* include/bits/forward_list.h (__fwd_list::_Node): Add
user-provided special member functions to union.
* include/bits/stl_list.h (__list::_Node): Likewise.
(_Node_base::_M_hook, _Node_base::swap): Use _M_base() instead
of std::pointer_traits::pointer_to.
(_Node_base::_M_transfer): Likewise. Add noexcept.
(_List_base::_M_put_node): Use 'if constexpr' to avoid using
pointer_traits::pointer_to when not necessary.
(_List_base::_M_destroy_node): Fix parameter to be the pointer
type used internally, not the allocator's pointer.
(list::_M_create_node): Likewise.
* testsuite/23_containers/forward_list/requirements/explicit_instantiation/alloc_ptr.cc:
Check explicit instantiation of non-trivial value type.
* testsuite/23_containers/list/requirements/explicit_instantiation/alloc_ptr.cc:
Likewise.
* testsuite/23_containers/forward_list/requirements/explicit_instantiation/alloc_ptr_ignored.cc:
Fix macro name.
* testsuite/23_containers/list/requirements/explicit_instantiation/alloc_ptr_ignored.cc:
Likewise.
Mark Harmstone [Sat, 30 Nov 2024 22:35:24 +0000 (22:35 +0000)]
Fix non-aligned CodeView symbols
CodeView symbols in PDB files are aligned to four-byte boundaries. It's
not really clear what logic MSVC uses to enforce this; sometimes the
symbols are padded in the object file, sometimes the linker seems to do
the work.
It makes more sense to do this in the compiler, so fix the two instances
where we can write symbols with a non-aligned length. The length of
S_FRAMEPROC is, unusually, not a multiple of 4, so it will always need
2 bytes of padding.
S_INLINESITE is followed by variable-length "binary annotations", so
will also usually have padding.
If a function receives nonlocal gotos, it needs to save the frame
pointer in the argument save area. This ensures that LRA sets
frame_pointer_needed when it saves arguments in the save area.
2024-12-15 John David Anglin <danglin@gcc.gnu.org>
gcc/ChangeLog:
PR target/118018
* config/pa/pa.cc (pa_frame_pointer_required): Declare and
implement.
(TARGET_FRAME_POINTER_REQUIRED): Define.
Iain Sandoe [Thu, 3 Oct 2024 08:02:59 +0000 (09:02 +0100)]
c++, coroutines: Make the resume index consistent for debug.
At present, we only update the resume index when we actually are
at the stage that the coroutine is considered suspended. This is
on the basis that it is UB to resume or destroy a coroutines that
is not suspended (and therefore we never need to access this value
otherwise). However, it is possible that someone could set a debug
breakpoint on the resume which can be reached without suspending
if await_ready() returns true. In that case, the debugger would
read an incorrect resume index. Fixed by moving the update to
just before the test for ready.
gcc/cp/ChangeLog:
* coroutines.cc (expand_one_await_expression): Update the
resume index before checking if the coroutine is ready.
Iain Sandoe [Fri, 1 Nov 2024 23:30:58 +0000 (23:30 +0000)]
c++, coroutines:Ensure bind exprs are visited once [PR98935].
Recent changes in the C++ FE and the coroutines implementation have
exposed a latent issue in which a bind expression containing a var
that we need to capture in the coroutine state gets visited twice.
This causes an ICE (from a checking assert). Fixed by adding a pset
to the relevant tree walk. Exit the callback early when the tree is
not a BIND_EXPR.
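A sketch of the mechanism, using GCC's tree-walk API (call-site details
simplified):
    static tree
    register_local_var_uses (tree *stmt, int *, void *data)
    {
      if (TREE_CODE (*stmt) != BIND_EXPR)
        return NULL_TREE;  /* early exit: only BIND_EXPRs matter here */
      /* ... capture the bind's variables in the coroutine frame ...  */
      return NULL_TREE;
    }

    /* The pset makes the walk skip already-visited trees, so each
       BIND_EXPR is processed exactly once.  */
    hash_set<tree> pset;
    cp_walk_tree (&fnbody, register_local_var_uses, &data, &pset);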
PR c++/98935
gcc/cp/ChangeLog:
* coroutines.cc (register_local_var_uses): Add a pset to the
tree walk to avoid visiting the same BIND_EXPR twice. Make
an early exit for cases that the callback does not apply.
(cp_coroutine_transform::apply_transforms): Add a pset to the
tree walk for register_local_vars.
Tamar Christina [Sun, 15 Dec 2024 13:21:44 +0000 (13:21 +0000)]
arm: fix bootstrap after MVE changes
The recent commits for MVE on Saturday have broken armhf bootstrap due to a
-Werror false positive:
inlined from 'virtual rtx_def* {anonymous}::vstrq_scatter_base_impl::expand(arm_mve::function_expander&) const' at /gcc/config/arm/arm-mve-builtins-base.cc:352:17:
./genrtl.h:38:16: error: 'new_base' may be used uninitialized [-Werror=maybe-uninitialized]
38 | XEXP (rt, 1) = arg1;
/gcc/config/arm/arm-mve-builtins-base.cc: In member function 'virtual rtx_def* {anonymous}::vstrq_scatter_base_impl::expand(arm_mve::function_expander&) const':
/gcc/config/arm/arm-mve-builtins-base.cc:311:26: note: 'new_base' was declared here
311 | rtx insns, base_ptr, new_base;
| ^~~~~~~~
In function 'rtx_def* init_rtx_fmt_ee(rtx, machine_mode, rtx, rtx)',
inlined from 'rtx_def* gen_rtx_fmt_ee_stat(rtx_code, machine_mode, rtx, rtx)' at ./genrtl.h:50:26,
inlined from 'virtual rtx_def* {anonymous}::vldrq_gather_base_impl::expand(arm_mve::function_expander&) const' at /gcc/config/arm/arm-mve-builtins-base.cc:527:17:
./genrtl.h:38:16: error: 'new_base' may be used uninitialized [-Werror=maybe-uninitialized]
38 | XEXP (rt, 1) = arg1;
/gcc/config/arm/arm-mve-builtins-base.cc: In member function 'virtual rtx_def* {anonymous}::vldrq_gather_base_impl::expand(arm_mve::function_expander&) const':
/gcc/config/arm/arm-mve-builtins-base.cc:486:26: note: 'new_base' was declared here
486 | rtx insns, base_ptr, new_base;
Jakub Jelinek [Sun, 15 Dec 2024 12:13:07 +0000 (13:13 +0100)]
Shrink back size of tree_exp from 40 bytes to 32
The following patch implements what I've mentioned in the 64-bit
location_t thread.
struct tree_exp had an unsigned condition_uid member added for something
rarely used (-fcondition-coverage), and even there it is used only on a
very small subset of trees and only for the duration of gimplification.
The following patch uses a hash_map instead, which allows shrinking
tree_exp to its previous size (32 bytes + (number of operands - 1) * sizeof
(tree)).
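A sketch of the side table (simplified; the names follow the ChangeLog
below):
    /* Map condition trees to their UIDs; only allocated when
       -fcondition-coverage is in use, and only during gimplification.  */
    static hash_map<tree, unsigned> *cond_uids;

    static void
    tree_associate_condition_with_expr (tree expr, unsigned uid)
    {
      if (!cond_uids)
        cond_uids = new hash_map<tree, unsigned> ();
      cond_uids->put (expr, uid);
    }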
2024-12-15 Jakub Jelinek <jakub@redhat.com>
* tree-core.h (struct tree_exp): Remove condition_uid member.
* tree.h (SET_EXPR_UID, EXPR_COND_UID): Remove.
* gimplify.cc (nextuid): Rename to ...
(nextconduid): ... this.
(cond_uids): New static variable.
(next_cond_uid, reset_cond_uid): Adjust for the renaming,
formatting fix.
(tree_associate_condition_with_expr): New function.
(shortcut_cond_r, tag_shortcut_cond, shortcut_cond_expr): Use it
instead of SET_EXPR_UID.
(gimplify_cond_expr): Look up cond_uid in cond_uids hash map if
non-NULL instead of using EXPR_COND_UID.
(gimplify_function_tree): Delete cond_uids and set it to NULL.
Jovan Vukic [Sat, 14 Dec 2024 21:47:35 +0000 (14:47 -0700)]
match.pd: Add pattern to simplify `(a - 1) & -a` to `0`
Compared to version 2 of the patch, this version makes the minor changes
requested in review and extracts the repetitive code into a reusable helper
function, match_plus_neg_pattern, making the code much more readable. The
logic, code, and tests remain the same. The simplification is valid for any
a: in two's complement -a == ~(a - 1), so (a - 1) & -a == (a - 1) & ~(a - 1)
== 0, and likewise (a - 1) | -a and (a - 1) ^ -a fold to all-ones.
gcc/ChangeLog:
* match.pd: New pattern.
* simplify-rtx.cc (match_plus_neg_pattern): New helper function.
(simplify_context::simplify_binary_operation_1): New
code to handle (a - 1) & -a, (a - 1) | -a and (a - 1) ^ -a.
Jakub Jelinek [Sat, 14 Dec 2024 10:28:25 +0000 (11:28 +0100)]
gimple-fold: Fix the recent ifcombine optimization for _BitInt [PR118023]
The BIT_FIELD_REF verifier has:
if (INTEGRAL_TYPE_P (TREE_TYPE (op))
&& !type_has_mode_precision_p (TREE_TYPE (op)))
{
error ("%qs of non-mode-precision operand", code_name);
return true;
}
check among other things, so one can't extract something out of say
_BitInt(63) or _BitInt(4096).
The new ifcombine optimization happily creates such BIT_FIELD_REFs
and ICEs during their verification.
The following patch fixes that by rejecting those in decode_field_reference.
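A sketch of the added rejection (simplified):
    /* In decode_field_reference: BIT_FIELD_REF cannot be used on
       non-mode-precision integral types such as _BitInt(63).  */
    if (INTEGRAL_TYPE_P (TREE_TYPE (inner))
        && !type_has_mode_precision_p (TREE_TYPE (inner)))
      return NULL_TREE;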
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/118023
* gimple-fold.cc (decode_field_reference): Return NULL_TREE if
inner has non-type_has_mode_precision_p integral type.
Jakub Jelinek [Sat, 14 Dec 2024 10:27:20 +0000 (11:27 +0100)]
warn-access: Fix up matching_alloc_calls_p [PR118024]
The following testcase ICEs because of a bug in matching_alloc_calls_p.
The loop was apparently meant to be walking the two attribute chains
in lock-step, but doesn't really do that. If the first lookup_attribute
returns non-NULL, the second one is not done, so rmats in that case can
be some random unrelated attribute rather than "malloc" attribute; the
body assumes even rmats if non-NULL is "malloc" attribute and relies
on its argument to be a "malloc" argument and if it is some other
attribute with incompatible attribute, it just crashes.
Now, fixing that in the obvious way, instead of doing
(amats = lookup_attribute ("malloc", amats))
|| (rmats = lookup_attribute ("malloc", rmats))
in the condition do
((amats = lookup_attribute ("malloc", amats)),
(rmats = lookup_attribute ("malloc", rmats)),
(amats || rmats))
fixes the testcase but regresses Wmismatched-dealloc-{2,3}.c tests.
The problem is that walking the attribute lists in a lock-step is obviously
a very bad idea, there is no requirement that the same deallocators are
present in the same order on both decls, e.g. there could be an extra malloc
attribute without argument in just one of the lists, or the order of say
free/realloc could be swapped, etc. We don't generally document nor enforce
any particular ordering of attributes (even when for some attributes we just
handle the first one rather than all).
So, this patch instead simply splits it into two loops, the first one walks
alloc_decl attributes, the second one walks dealloc_decl attributes.
If the malloc attribute argument is a built-in, that doesn't change
anything, and otherwise we have the chance to populate the whole
common_deallocs hash_set in the first loop and then can check it in the
second one (and don't need to use more expensive add method on it, can just
check contains there). Not to mention that it also fixes the case when
the function would incorrectly return true if there wasn't a common
deallocator between the two, but dealloc_decl had 2 malloc attributes with
the same deallocator.
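A sketch of the two separate walks (simplified; the real code also has
to handle built-in deallocators and attributes without arguments):
    hash_set<tree> common_deallocs;
    /* First walk: collect the deallocators associated with alloc_decl.  */
    for (tree amats = DECL_ATTRIBUTES (alloc_decl);
         (amats = lookup_attribute ("malloc", amats));
         amats = TREE_CHAIN (amats))
      if (tree args = TREE_VALUE (amats))
        common_deallocs.add (TREE_VALUE (args));
    /* Second walk: independent of the first, just test membership.  */
    for (tree rmats = DECL_ATTRIBUTES (dealloc_decl);
         (rmats = lookup_attribute ("malloc", rmats));
         rmats = TREE_CHAIN (rmats))
      if (tree args = TREE_VALUE (rmats))
        if (common_deallocs.contains (TREE_VALUE (args)))
          return true;
    return false;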
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR middle-end/118024
* gimple-ssa-warn-access.cc (matching_alloc_calls_p): Walk malloc
attributes of alloc_decl and dealloc_decl in separate loops rather
than in lock-step. Use common_deallocs.contains rather than
common_deallocs.add in the second loop.
Jakub Jelinek [Sat, 14 Dec 2024 10:25:08 +0000 (11:25 +0100)]
opts: Use OPTION_SET_P instead of magic value 2 for -fshort-enums default [PR118011]
The magic values for default (usually -1 or sometimes 2) for some options
are from the times we didn't have global_options_set; I think we should
eventually get rid of all of those.
The PR is about gcc -Q --help=optimizers reporting -fshort-enums as
[enabled] when it is disabled.
For this the following patch is just partial fix; with explicit
gcc -Q --help=optimizers -fshort-enums
or
gcc -Q --help=optimizers -fno-short-enums
it already worked correctly before, with this patch it will report
even with just
gcc -Q --help=optimizers
correct value on most targets, except 32-bit arm with some options or
defaults, so I think it is a step in the right direction.
But, as I wrote in the PR, process_options isn't run before --help=,
and in its current form it even shouldn't be, because it warns on some
option combinations and errors or emits sorry on others. So ideally
process_options should take a bool argument saying whether it is being
run for --help= purposes; if so, it should not emit diagnostics and
should just adjust the options, otherwise do what it currently does.
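A sketch of the toplev.cc change (simplified):
    /* Before: flag_short_enums == 2 meant "not set by the user".  */
    if (!OPTION_SET_P (flag_short_enums))
      flag_short_enums = targetm.default_short_enums ();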
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR c/118011
gcc/
* opts.cc (init_options_struct): Don't set opts->x_flag_short_enums to
2.
* toplev.cc (process_options): Test !OPTION_SET_P (flag_short_enums)
rather than flag_short_enums == 2.
gcc/ada/
* gcc-interface/misc.cc (gnat_post_options): Test
!OPTION_SET_P (flag_short_enums) rather than flag_short_enums == 2.
Nathaniel Shead [Thu, 7 Nov 2024 10:37:28 +0000 (21:37 +1100)]
c++: Disallow decomposition of lambda bases [PR90321]
Decomposition of lambda closure types is not allowed by
[dcl.struct.bind] p6, since members of a closure have no name.
r244909 made this an error, but missed the case where a lambda is used
as a base. This patch moves the check to find_decomp_class_base to
handle this case.
As a drive-by improvement, we also slightly improve the diagnostics to
indicate why a base class was being inspected. Ideally the diagnostic
would point directly at the relevant base, but there doesn't seem to be
an easy way to get this location just from the binfo so I don't worry
about that here.
PR c++/90321
gcc/cp/ChangeLog:
* decl.cc (find_decomp_class_base): Check for decomposing a
lambda closure type. Report base class chains if needed.
(cp_finish_decomp): Remove no-longer-needed check.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1z/decomp62.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Marek Polacek <polacek@redhat.com>
Jakub Jelinek [Fri, 13 Dec 2024 23:41:00 +0000 (00:41 +0100)]
cse: Fix up record_jump_equiv checks [PR117095]
The following testcase is miscompiled on s390x-linux with -O2 -march=z15.
The problem happens during cse2, which sees in an extended basic block
(jump_insn 217 78 216 10 (parallel [
(set (pc)
(if_then_else (ne (reg:SI 165)
(const_int 1 [0x1]))
(label_ref 216)
(pc)))
(set (reg:SI 165)
(plus:SI (reg:SI 165)
(const_int -1 [0xffffffffffffffff])))
(clobber (scratch:SI))
(clobber (reg:CC 33 %cc))
]) "t.c":14:17 discrim 1 2192 {doloop_si64}
(int_list:REG_BR_PROB 955630228 (nil))
-> 216)
...
(insn 99 98 100 12 (set (reg:SI 138)
(const_int 1 [0x1])) "t.c":9:31 1507 {*movsi_zarch}
(nil))
(insn 100 99 103 12 (parallel [
(set (reg:SI 137)
(minus:SI (reg:SI 138)
(subreg:SI (reg:HI 135 [ a ]) 0)))
(clobber (reg:CC 33 %cc))
]) "t.c":9:31 1904 {*subsi3}
(expr_list:REG_DEAD (reg:SI 138)
(expr_list:REG_DEAD (reg:HI 135 [ a ])
(expr_list:REG_UNUSED (reg:CC 33 %cc)
(nil)))))
Note, cse2 has df_note_add_problem () before df_analyze, which adds
(expr_list:REG_UNUSED (reg:SI 165)
(expr_list:REG_UNUSED (reg:CC 33 %cc)
notes to the first insn (correctly so, %cc is clobbered there and pseudo
165 isn't used after the insn).
Now, cse_extended_basic_block has an extra optimization on conditional
jumps, where it records equivalence on the edge which continues in the ebb.
Here it sees that (ne (reg:SI 165) (const_int 1)) is false on the edge and
remembers that pseudo 165 is comparison equivalent to (const_int 1),
so on insn 100 it decides to replace (reg:SI 138) with (reg:SI 165).
This optimization isn't correct here though, because the JUMP_INSN has
multiple sets. Before r0-77890 record_jump_equiv has been called from
cse_insn guarded on n_sets == 1 && any_condjump_p (insn), so it wouldn't
be done on the above JUMP_INSN where n_sets == 2. But since that change
it is guarded with single_set (insn) && any_condjump_p (insn) and that
is true because of the REG_UNUSED note. Looking at that note is
inappropriate in CSE though, because the whole intent of the pass is to
extend the lifetimes of the pseudos if equivalence is found, so the fact
that there is REG_UNUSED note for (reg:SI 165) and that the reg isn't used
later doesn't imply that it won't be used after the optimization.
So, unless we manage to process the other sets on the JUMP_INSN (it wouldn't
be terribly hard in this exact case, the doloop insn decreases the register
by 1 and so we could just record equivalence to (const_int 0) instead, but
generally it might be hard), we should IMHO just punt if there are multiple
sets.
The patch below adds !multiple_sets (insn) check instead of replacing with
it the single_set (insn) check, because apparently any_condjump_p uses
pc_set which supports the case where PATTERN is a SET to PC (that is a
single_set (insn) && !multiple_sets (insn), PATTERN is a PARALLEL with a
single SET to PC (likewise) and some CLOBBERs, PARALLEL with two or more
SETs where the first one is SET to PC (that could be single_set (insn)
with REG_UNUSED notes but is not !multiple_sets (insn)) or PATTERN
is UNSPEC/UNSPEC_VOLATILE with SET inside of it. For the last case
!multiple_sets (insn) will be true, but IMHO we shouldn't try to derive
anything from those because we haven't checked the rest of the UNSPEC*
and we don't really know what it does.
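A sketch of the tightened guard (simplified):
    /* Only record equivalences from a conditional jump that really has
       a single SET; REG_UNUSED notes must not make a doloop-style
       multi-set PARALLEL look like a single_set here.  */
    if (single_set (insn) && !multiple_sets (insn) && any_condjump_p (insn))
      record_jump_equiv (insn, taken);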
Patrick Palka [Fri, 13 Dec 2024 18:17:29 +0000 (13:17 -0500)]
libstdc++: Avoid unnecessary copies in ranges::min/max [PR112349]
Use a local reference for the (now possibly lifetime extended) result of
*__first so that we copy it only when necessary.
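A sketch of the idea (simplified, not the exact libstdc++ code):
    auto __result = *__first;
    while (++__first != __last)
      {
        // Bind a (possibly lifetime-extended) reference instead of
        // copying every candidate into a local object.
        auto&& __tmp = *__first;
        if (std::invoke(__comp, std::invoke(__proj, __tmp),
                        std::invoke(__proj, __result)))
          __result = std::forward<decltype(__tmp)>(__tmp); // copy only the winner
      }
    return __result;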
PR libstdc++/112349
libstdc++-v3/ChangeLog:
* include/bits/ranges_algo.h (__min_fn::operator()): Turn local
object __tmp into a reference.
* include/bits/ranges_util.h (__max_fn::operator()): Likewise.
* testsuite/25_algorithms/max/constrained.cc (test04): New test.
* testsuite/25_algorithms/min/constrained.cc (test04): New test.
Christophe Lyon [Sun, 24 Nov 2024 18:08:48 +0000 (18:08 +0000)]
arm: [MVE intrinsics] Fix support for predicate constants [PR target/114801]
In this PR, we have to handle a case where MVE predicates are supplied
as a const_int, where individual predicates have illegal boolean
values (such as 0xc for a 4-bit boolean predicate). To avoid the ICE,
fix the constant (any non-zero value is converted to all 1s) and emit
a warning.
On MVE, V8BI and V4BI multi-bit masks are interpreted byte-by-byte at
instruction level, but end-users should describe lanes rather than
bytes (so all bytes of a true-predicated lane should be '1'), see the
section on MVE intrinsics in the Arm ACLE specification.
Since force_lowpart_subreg cannot handle const_int (because they have VOID mode),
use gen_lowpart on them, force_lowpart_subreg otherwise.
2024-11-20 Christophe Lyon <christophe.lyon@linaro.org>
Jakub Jelinek <jakub@redhat.com>
Implement vst2q, vst4q, vld2q and vld4q using the new MVE builtins
framework.
Since MVE uses different tuple modes than Neon, we need to use
VALID_MVE_STRUCT_MODE because VALID_NEON_STRUCT_MODE is no longer a
super-set of it, for instance in output_move_neon and
arm_print_operand_address.
In arm_hard_regno_mode_ok, the change is similar but a bit more
intrusive.
Expand the VSTRUCT iterator, so that mov<mode> and neon_mov<mode>
patterns from neon.md still work for MVE.
Besides the small updates to the patterns in mve.md, we have to update
vec_load_lanes and vec_store_lanes in vec-common.md so that the
vectorizer can handle the new modes. These patterns are now different
from Neon's, so maybe we should move them back to neon.md and mve.md.
The patch adds arm_array_mode, which is used by build_array_type_nelts
and makes it possible to support the new assert in
register_builtin_tuple_types.
Christophe Lyon [Wed, 13 Nov 2024 15:30:44 +0000 (15:30 +0000)]
arm: [MVE intrinsics] add support for tuples
This patch is largely a copy/paste from the aarch64 SVE counterpart,
and adds support for tuples to the MVE intrinsics framework.
Introduce function_resolver::infer_tuple_type which will be used to
resolve overloaded vst2q and vst4q function names in a later patch.
Fix access to acle_vector_types in a few places, as well as in
infer_vector_or_tuple_type because we should shift the tuple size to
the right by one bit when computing the array index.
The new wrap_type_in_struct, register_type_decl and infer_tuple_type
are largely copies of the aarch64 versions, and
register_builtin_tuple_types is very similar.
gcc/ChangeLog:
* config/arm/arm-mve-builtins-shapes.cc (parse_type): Fix access
to acle_vector_types.
* config/arm/arm-mve-builtins.cc (wrap_type_in_struct): New.
(register_type_decl): New.
(register_builtin_tuple_types): Fix support for tuples.
(function_resolver::infer_tuple_type): New.
* config/arm/arm-mve-builtins.h
(function_resolver::infer_tuple_type): Declare.
(function_instance::tuple_type): Fix access to acle_vector_types.
Christophe Lyon [Wed, 16 Aug 2023 13:42:53 +0000 (13:42 +0000)]
arm: [MVE intrinsics] Fix condition for vec_extract patterns
Remove floating-point condition from mve_vec_extract_sext_internal and
mve_vec_extract_zext_internal, since the MVE_2 iterator does not
include any FP mode.
Christophe Lyon [Wed, 30 Oct 2024 17:32:48 +0000 (17:32 +0000)]
arm: [MVE intrinsics] rework vldr gather_base
Implement vldr?q_gather_base using the new MVE builtins framework.
The patch updates two testcases rather than using different iterators
for predicated and non-predicated versions. According to ACLE:
vldrdq_gather_base_s64 is expected to generate VLDRD.64
vldrdq_gather_base_z_s64 is expected to generate VLDRDT.U64
Christophe Lyon [Tue, 29 Oct 2024 10:34:23 +0000 (10:34 +0000)]
arm: [MVE intrinsics] rework vldr gather_offset
Implement vldr?q_gather_offset using the new MVE builtins framework.
The patch introduces a new attribute iterator (MVE_u_elem) to
accommodate the fact that ACLE's expected output description uses "uNN"
for all modes, except V8HF where it expects ".f16". Using "V_sz_elem"
would work, but would require updating several testcases.
Implement vstr?q_scatter_shifted_offset intrinsics using the MVE
builtins framework.
We use the same approach as the previous patch, and we now have four
sets of patterns:
- vector scatter stores with shifted offset (non-truncating)
- predicated vector scatter stores with shifted offset (non-truncating)
- truncating vector scatter stores with shifted offset
- predicated truncating vector scatter stores with shifted offset
Note that the truncating patterns do not use an iterator since there
is only one such variant: V4SI to V4HI.
We need to introduce new iterators:
- MVE_VLD_ST_scatter_shifted, same as MVE_VLD_ST_scatter without V16QI
- MVE_scatter_shift to map the mode to the shift amount
This patch implements vstr?q_scatter_offset using the new MVE builtins
framework.
It uses a similar approach to a previous patch which grouped
truncating and non-truncating stores in two sets of patterns, rather
than having groups of patterns depending on the destination size.
We need to add the 'integer_64' types of suffixes in order to support
vstrdq_scatter_offset.
The patch introduces the MVE_VLD_ST_scatter iterator, similar to
MVE_VLD_ST but which also includes V2DI (again, for
vstrdq_scatter_offset).
The new MVE_scatter_offset mode attribute is used to map the
destination type to the offset type (both are usually equal, except
when the destination is floating-point).
We end up with four sets of patterns:
- vector scatter stores with offset (non-truncating)
- predicated vector scatter stores with offset (non-truncating)
- truncating vector scatter stores with offset
- predicated truncating vector scatter stores with offset
Christophe Lyon [Thu, 10 Oct 2024 16:35:23 +0000 (16:35 +0000)]
arm: [MVE intrinsics] add mode_after_pred helper in function_shape
This new helper returns true if the mode suffix goes after the
predicate suffix. This is true in most cases, so the base
implementations in nonoverloaded_base and overloaded_base return true.
For instance: vaddq_m_n_s32.
This will be useful in later patches to implement
vstr?q_scatter_offset_p (_p appears after _offset).
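A sketch of the base implementation (simplified):
    bool
    nonoverloaded_base::mode_after_pred () const
    {
      /* Most shapes put the mode suffix after the predicate suffix,
         e.g. vaddq_m_n_s32; shapes like vstr?q_scatter_offset_p will
         override this, since there _p comes after _offset.  */
      return true;
    }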
Robin Dapp [Tue, 26 Nov 2024 13:44:17 +0000 (14:44 +0100)]
genrecog: Split into separate partitions [PR111600].
This patch makes genrecog split its output into separate files (10 by
default), in the same vein as genemit does. The changes are mostly
mechanical again, changing printfs and puts to fprintf.
As insn-recog.cc relies on being able to call other recog functions, a
header insn-recog.h is introduced that predeclares all of those.
For simplicity the number of files is determined by (re-using)
--with-insnemit-partitions.
Bootstrapped and regtested on x86 and power10, regtested on riscv.
aarch64 bootstrap is currently blocked because of the
"maybe uninitialized" issue discussed on IRC.
PR target/111600
gcc/ChangeLog:
* Makefile.in: Add insn-recog split.
* configure: Regenerate.
* configure.ac: Document that the number of insnemit partitions is
used for insn-recog as well.
* genconditions.cc (write_one_condition): Use fprintf.
* genpreds.cc (write_predicate_expr): Ditto.
(write_init_reg_class_start_regs): Ditto.
* genrecog.cc (write_header): Add header file to includes.
(printf_indent): Use fprintf.
(change_state): Ditto.
(print_code): Ditto.
(print_host_wide_int): Ditto.
(print_parameter_value): Ditto.
(print_test_rtx): Ditto.
(print_nonbool_test): Ditto.
(print_label_value): Ditto.
(print_test): Ditto.
(print_decision): Ditto.
(print_state): Ditto.
(print_subroutine_call): Ditto.
(print_acceptance): Ditto.
(print_subroutine_start): Ditto.
(print_pattern): Ditto.
(print_subroutine): Ditto.
(print_subroutine_group): Ditto.
(handle_arg): Add -O and -H for output and header file handling.
(main): Use callback.
* gentarget-def.cc (def_target_insn): Use fprintf.
* read-md.cc (md_reader::print_c_condition): Ditto.
* read-md.h (class md_reader): Ditto.
Jonathan Wakely [Fri, 13 Dec 2024 10:54:29 +0000 (10:54 +0000)]
libstdc++: Fix uninitialized data in std::basic_spanbuf::seekoff
I noticed a -Wmaybe-uninitialized warning for this function, which turns
out to be correct. If the caller passes a valid std::ios_base::seekdir
value then there's no problem, but if they pass std::seekdir(999) then
we don't initialize the __base variable before adding it to __off.
Rather than initialize it to an arbitrary value, we should return an
error.
Also add [[unlikely]] attributes to the paths that return an error.
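A sketch of the fix (simplified; __cur and __size stand in for the
actual state of the buffer):
    off_type __base;
    switch (__way)
      {
      case ios_base::beg: __base = 0;      break;
      case ios_base::cur: __base = __cur;  break;
      case ios_base::end: __base = __size; break;
      default:
        [[unlikely]] return pos_type(off_type(-1)); // invalid seekdir:
                                                    // fail instead of
                                                    // reading garbage
      }
    __off += __base;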
libstdc++-v3/ChangeLog:
* include/std/spanstream (basic_spanbuf::seekoff): Return an
error for invalid seekdir values.
Jonathan Wakely [Thu, 12 Dec 2024 23:24:39 +0000 (23:24 +0000)]
libstdc++: Swap expressions in noexcept-specifier of ranges::not_equal_to
Although this should never make a difference for sensible code, we
should really make the expression in the noexcept-specifier match the
expression in the function body.
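A sketch of the aligned expressions (simplified):
    template<typename _Tp, typename _Up>
      constexpr bool
      operator()(_Tp&& __t, _Up&& __u) const
        noexcept(noexcept(!(std::forward<_Tp>(__t) == std::forward<_Up>(__u))))
      {
        // noexcept-specifier and body now use the same expression
        return !(std::forward<_Tp>(__t) == std::forward<_Up>(__u));
      }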
libstdc++-v3/ChangeLog:
* include/bits/ranges_cmp.h (not_equal_to): Make order of
expressions in noexcept-specifier match the body.
* testsuite/20_util/function_objects/range.cmp/not_equal_to.cc:
Check noexcept.
Tamar Christina [Fri, 13 Dec 2024 11:20:18 +0000 (11:20 +0000)]
AArch64: Set L1 data cache size according to size on CPUs
This sets the L1 data cache size for some cores based on their size in their
Technical Reference Manuals.
Today the port minimum is 256 bytes, as explained in commit
g:9a99559a478111f7fbeec29bd78344df7651c707; however, like Neoverse V2, most
cores actually define the L1 cache size as 64 bytes. The generic Armv9-A
model was already changed in g:f000cb8cbc58b23a91c84d47d69481904981a1d9 and
this change follows suit for a few other cores based on their TRMs.
This results in less memory pressure when running on large core count machines.
Tamar Christina [Fri, 13 Dec 2024 11:17:55 +0000 (11:17 +0000)]
AArch64: Add CMP+CSEL and CMP+CSET for cores that support it
GCC 15 added two new fusions CMP+CSEL and CMP+CSET.
This patch enables them for cores that support them, based on their Software
Optimization Guides, and generically on Armv9-A. Even if a core does not
support them there's no negative performance impact.
gcc/ChangeLog:
* config/aarch64/aarch64-fusion-pairs.def (AARCH64_FUSE_NEOVERSE_BASE):
New.
* config/aarch64/tuning_models/neoverse512tvb.h: Use it.
* config/aarch64/tuning_models/neoversen2.h: Use it.
* config/aarch64/tuning_models/neoversen3.h: Use it.
* config/aarch64/tuning_models/neoversev1.h: Use it.
* config/aarch64/tuning_models/neoversev2.h: Use it.
* config/aarch64/tuning_models/neoversev3.h: Use it.
* config/aarch64/tuning_models/neoversev3ae.h: Use it.
* config/aarch64/tuning_models/cortexx925.h: Add fusions.
* config/aarch64/tuning_models/generic_armv9_a.h: Add fusions.
As mentioned in the PR, the addition of the vec_addsubv2sf3 expander caused
the testcase to be vectorized and no longer to use fma.
The following patch adds new expanders so that it can be vectorized
again with the alternating add/sub fma instructions.
There is some bug on the SLP cost computation side which causes it
not to count some scalar multiplication costs, but I think the patch
is desirable anyway before that is fixed; for now the testcase just
uses -fvect-cost-model=unlimited.
2024-12-13 Jakub Jelinek <jakub@redhat.com>
PR target/116979
* config/i386/mmx.md (vec_fmaddsubv2sf4, vec_fmsubaddv2sf4): New
define_expand patterns.
Robin Dapp [Sat, 16 Nov 2024 14:13:09 +0000 (15:13 +0100)]
RISC-V: Improve slide1up pattern.
This patch adds a second variant to implement the extract/slide1up
pattern. In order to do a permutation like
<3, 4, 5, 6> from vectors <0, 1, 2, 3> and <4, 5, 6, 7>
we currently extract <3> from the first vector and re-insert it into the
second vector. Unless register-file crossing latency is essentially
zero it should be preferable to first slide the second vector up by
one, then slide down the first vector by (nunits - 1).
gcc/ChangeLog:
* config/riscv/riscv-protos.h (riscv_register_move_cost):
Export.
* config/riscv/riscv-v.cc (shuffle_extract_and_slide1up_patterns):
Rename...
(shuffle_off_by_one_patterns): ... to this and add slideup/slidedown
variant.
(expand_vec_perm_const_1): Call renamed function.
* config/riscv/riscv.cc (riscv_secondary_memory_needed): Remove
static.
(riscv_register_move_cost): Add VR<->GR/FR handling.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr112599-2.c: Adjust test
expectation.
Robin Dapp [Thu, 12 Dec 2024 09:33:28 +0000 (10:33 +0100)]
RISC-V: Emit vector shift pattern for const_vector [PR117353].
In PR117353 and PR117878 we expand a const vector during reload. For
this we use an unpredicated left shift. Normally an insn like this is
split but as we introduce it late and cannot create pseudos anymore
it remains unpredicated and is not recognized by the vsetvl pass (where
we expect all insns to be in predicated RVV format).
This patch directly emits a predicated shift instead. We could
distinguish between !lra_in_progress and lra_in_progress and emit
an unpredicated shift in the former case but we're not very likely
to optimize it anyway so it doesn't seem worth it.
PR target/117353
PR target/117878
gcc/ChangeLog:
* config/riscv/riscv-v.cc (expand_const_vector): Use predicated
instead of simple shift.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/pr117353.c: New test.
Pan Li [Fri, 13 Dec 2024 02:45:38 +0000 (10:45 +0800)]
RISC-V: Make vector strided load alias all other memories
The vector strided load doesn't include a (mem:BLK (scratch)) to make it
alias all other memories. As a result, alias analysis considers only the
base address of the strided load, and stores that the load depends on can
be reordered to after it. For example, as below:
#define STEP 10
char d[225];
int e[STEP];
int main() {
// store 0, 10, 20, 30, 40, 50, 60, 70, 80, 90
for (long h = 0; h < STEP; ++h)
d[h * STEP] = 9;
// load 30, 40, 50, 60, 70, 80, 90
// store 3, 4, 5, 6, 7, 8, 9
for (int h = 3; h < STEP; h += 1)
e[h] = d[h * STEP];
if (e[5] != 9) {
__builtin_abort ();
}
return 0;
}
The asm dump will be:
main:
lui a5,%hi(.LANCHOR0)
addi a5,a5,%lo(.LANCHOR0)
li a4,9
sb a4,30(a5)
addi a3,a5,30
vsetivli zero,7,e32,m1,ta,ma
li a2,10
vlse8.v v2,0(a3),a2 // depends on 30(a5), 40(a5), ... 90(a5) but
// only 30(a5) has been promoted before vlse.
// It is store after load mistake.
addi a3,a5,252
sb a4,0(a5)
sb a4,10(a5)
sb a4,20(a5)
sb a4,40(a5)
vzext.vf4 v1,v2
sb a4,50(a5)
sb a4,60(a5)
vse32.v v1,0(a3)
li a0,0
sb a4,70(a5)
sb a4,80(a5)
sb a4,90(a5)
lw a5,260(a5)
beq a5,a4,.L4
li a0,123
After this patch:
main:
vsetivli zero,4,e32,m1,ta,ma
vmv.v.i v1,9
lui a5,%hi(.LANCHOR0)
addi a5,a5,%lo(.LANCHOR0)
addi a4,a5,244
vse32.v v1,0(a4)
li a4,9
sb a4,0(a5)
sb a4,10(a5)
sb a4,20(a5)
sb a4,30(a5)
sb a4,40(a5)
sb a4,50(a5)
sb a4,60(a5)
sb a4,70(a5)
sb a4,80(a5)
sb a4,90(a5)
vsetivli zero,3,e32,m1,ta,ma
addi a4,a5,70
li a3,10
vlse8.v v2,0(a4),a3
addi a5,a5,260
li a0,0
vzext.vf4 v1,v2
vse32.v v1,0(a5)
ret
The below test suites passed for this patch.
* The rv64gcv full regression test.
PR target/117990
gcc/ChangeLog:
* config/riscv/vector.md: Add the (mem:BLK (scratch)) to the
vector strided load.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/pr117990-run-1.c: New test.
Tom Tromey [Fri, 15 Nov 2024 17:12:28 +0000 (10:12 -0700)]
ada: Pass artificial_p to create_type_decl
The recent "nameless types" change to gcc-interface caused the gdb
pretty-printer for VSS to fail. This happens because one call to
create_type_decl unconditionally passes "true" as the "artificial_p"
parameter. This patch changes this call to instead pass the entity's
local artificial_p value instead. This makes sense, I think, because
the type decl being created for debug purposes (as the comment says)
is there to represent the relevant entity from the source.
gcc/ada/ChangeLog:
* gcc-interface/decl.cc (gnat_to_gnu_entity): Pass artificial_p to
create_type_decl.
Javier Miranda [Sun, 1 Dec 2024 19:02:52 +0000 (19:02 +0000)]
ada: Cleanup preanalysis of static expressions
During preanalysis, the frontend does not generate freeze nodes.
The exception to this rule occurs during the preanalysis of default
and per-object expressions, where static expressions are frozen.
A patch merged six years ago to address an issue in this area introduced
additional complexity and confusion regarding the frontend's behavior in
such cases. The purpose of this patch is to revert that change, simplifying
the support for the preanalysis of static expressions to make it cleaner
and easier to understand.
gcc/ada/ChangeLog:
* sem.ads (Inside_Preanalysis_Without_Freezing): Removed.
* sem.adb (Semantics): Remove Inside_Preanalysis_Without_Freezing.
* sem_ch6.adb (Preanalyze_Formal_Expression): Removed.
* sem_ch3.ads (Preanalyze_Assert_Expression): Add documentation.
(Preanalyze_Spec_Expression): Add documentation.
* sem_ch3.adb (Preanalyze_Assert_Expression): Code cleanup.
(Preanalyze_Default_Expression): Code cleanup.
* sem_res.ads (Preanalyze_With_Freezing_And_Resolve): Removed.
* sem_res.adb (Preanalyze_With_Freezing_And_Resolve): Removed.
(Preanalyze_And_Resolve): Code cleanup.
* freeze.adb (Freeze_Entity): No freeze under strict preanalysis.
(Freeze_Expression): Code cleanup.
(Freeze_Expr_Types): Replace call to Preanalyze_Spec_Expression by
strict preanalysis during preanalysis of a duplicate of the
expression performed to have available the minimum decoration
to locate referenced unfrozen types.
* sem_aggr.adb (Resolve_Array_Aggregate): Minor code cleanup.
* sem_attr.adb (Resolve_Attribute): Add documentation.
* sem_ch13.adb (Resolve_Aspect_Expressions[Aspect_Default_Value]):
Replace call to Preanalyze_Spec_Expression by Preanalyze_And_Resolve.
(Resolve_Aspect_Expressions[Aspect_Default_Component_Value]): Ditto.
* sem_ch8.adb (Set_Entity_Or_Discriminal): Code cleanup.
* sem_prag.adb (Analyze_Initial_Condition_In_Decl_Part): Replace
call to Preanalyze_Assert_Expression by call to Preanalyze_And_Resolve.
(Analyze_Pre_Post_Condition): Replace call to Preanalyze_Spec_Expression
by call to Preanalyze_Assert_Expression.
* sem_util.ads (In_Pragma_Expression): Add a formal to extend the
functionality of this subprogram.
(Within_Static_Expression): New subprogram.
* sem_util.adb (In_Pragma_Expression): Ditto.
(Within_Static_Expression): Ditto.
* checks.adb (Install_Null_Excluding_Check): No check during preanalysis.
(Install_Primitive_Elaboration_Check): Ditto.
Eric Botcazou [Sun, 1 Dec 2024 22:42:36 +0000 (23:42 +0100)]
ada: Improve expansion of nested conditional expressions in return statements
This arranges for nested conditional expressions in simple return statements
to have their expansion delayed until the returns are distributed into their
dependent expressions. This comprises the case of the elsif part of an if
expression present in the source code.
This also distributes qualified expressions into the dependent expressions
of conditional expressions, although this seems to occur rarely in practice.
gcc/ada/ChangeLog:
* exp_aggr.ads (Is_Delayed_Conditional_Expression): Move to...
* exp_aggr.adb (Is_Delayed_Conditional_Expression): Move to...
(Convert_To_Assignments): Use Delay_Conditional_Expressions_Between.
* exp_ch3.adb (Expand_N_Object_Declaration): Reset the Analyzed flag
by means of Unanalyze_Delayed_Conditional_Expression.
* exp_ch4.adb (Expand_N_Case_Expression): Likewise. Delay expanding
the expression if it is in the context of a simple return statement.
(Expand_N_If_Expression): Likewise.
(Expand_N_Qualified_Expression): Fold identical operand. Distribute
the expression into an operand that is a conditional expression with
expansion delayed.
(Process_Transient_In_Expression): Also test the parent node for the
presence of a simple return statement.
* exp_ch6.adb (Expand_Ctrl_Function_Call): Test the unconditional
parent node for the presence of a simple return statement.
* exp_util.ads (Delayed Expansion): New description.
(Delay_Conditional_Expressions_Between): New procedure.
(Is_Delayed_Conditional_Expression): ...here.
(Unanalyze_Delayed_Conditional_Expression): New procedure.
(Unconditional_Parent): New function.
* exp_util.adb (Find_Hook_Context): Take into account conditional
statements coming from conditional expressions.
(Within_Conditional_Expression): Likewise.
(Delay_Conditional_Expressions_Between): New procedure.
(Is_Delayed_Conditional_Expression): ...here.
(Unanalyze_Delayed_Conditional_Expression): New procedure.
(Unconditional_Parent): New function.
* sinfo.ads (Expansion_Delayed): Adjust description.
Marc Poulhiès [Fri, 29 Nov 2024 08:15:42 +0000 (09:15 +0100)]
ada: Fix fixed point text-io when subtype has dynamic range
When the fixed point subtype has dynamic range, for example in the
context of a generic procedure Test where Fixed_Type is a type formal:
procedure Test (Low, High : Fixed_Type) is
type New_Subtype is new Fixed_Type range Low .. High;
package New_Io is new Text_IO.Fixed_IO (New_Subtype);
the compiler would complain with:
non-static universal integer value out of range
Have the check use the Base type for checking what integer type can be
used. If a given integer type can be used for a base type, it can
also be used for any of its subtypes.
Piotr Trojanek [Fri, 22 Nov 2024 13:31:52 +0000 (14:31 +0100)]
ada: Implement new rules about effectively volatile types in SPARK
New rules make record types effectively volatile based on the effective
volatility of their components; same for effectively volatile for
reading. Now volatility composition for records works like volatility
composition for arrays.
gcc/ada/ChangeLog:
* sem_util.adb (Is_Effectively_Volatile,
Is_Effectively_Volatile_For_Reading): Implement new rule for
record types.
* sem_util.ads (Is_Effectively_Volatile,
Is_Effectively_Volatile_For_Reading): Adjust comments.
Piotr Trojanek [Fri, 22 Nov 2024 10:31:38 +0000 (11:31 +0100)]
ada: Remove unused parameter from volatile type queries
Routines Is_Effectively_Volatile and Is_Effectively_Volatile_For_Reading
were always called with Ignore_Protected parameter set to True (or has
been passed unmodified on recursive calls), so this parameter wasn't
actually needed.
Code cleanup; semantics is unaffected.
gcc/ada/ChangeLog:
* sem_util.adb (Is_Effectively_Volatile,
Is_Effectively_Volatile_For_Reading): Remove Ignore_Protected
parameter.
(Is_Effectively_Volatile_Object,
Is_Effectively_Volatile_Object_For_Reading): Remove
single-parameter wrappers that are needed to instantiate
generic subprogram.
* sem_util.ads (Is_Effectively_Volatile,
Is_Effectively_Volatile_For_Reading): Remove parameter; adjust
comment.
Eric Botcazou [Fri, 29 Nov 2024 08:21:09 +0000 (09:21 +0100)]
ada: Elide copy for calls in allocators for nonlimited by-reference types
This prevents a temporary from being created on the primary stack to hold
the result of the function calls before it is copied to the newly allocated
memory in the nonlimited by-reference case.
That's already not done in the nonlimited non-by-reference case and there is
no reason to do it in the former case either. The main issue is the call to
Remove_Side_Effects in Expand_Allocator_Expression, but its only purpose is
to cover the problematic processing done in Build_Allocate_Deallocate_Proc
on (part of) the expression; once this is fixed, the call is unnecessary.
The change also contains another small fix to deal with the corner case of
allocators for access-to-access types.
gcc/ada/ChangeLog:
* exp_ch4.adb (Expand_Allocator_Expression): Do not preventively
call Remove_Side_Effects on the expression in the nonlimited
by-reference case. Always call Build_Allocate_Deallocate_Proc
in the default case.
* exp_ch6.adb (Expand_Ctrl_Function_Call): Bail out if the call
is the qualified expression of an allocator.
* exp_util.adb (Build_Allocate_Deallocate_Proc): Replace all the
calls to Relocate_Node by calls to Duplicate_Subexpr_No_Checks.
Eric Botcazou [Fri, 29 Nov 2024 08:04:09 +0000 (09:04 +0100)]
ada: Remove last call to Preanalyze_And_Resolve from Exp_Aggr
All the expressions are now at least preanalyzed in a non-iterated context,
so we do not need to redo it in Aggr_Assignment_OK_For_Backend, given that
Is_OK_Aggregate explicitly rejects iterated component associations.
gcc/ada/ChangeLog:
* exp_aggr.adb (Aggr_Assignment_OK_For_Backend): Do not call again
Preanalyze_And_Resolve on the expression.
Eric Botcazou [Wed, 27 Nov 2024 12:03:08 +0000 (13:03 +0100)]
ada: Fix dangling reference with user-defined indexing of function call
This happens with a noncontrolled type because the user-defined indexing is
expanded into a function call that binds the lifetime of the original call
to its return value. The temporary must be created explicitly in this case,
so that the front-end can control its lifetime.
gcc/ada/ChangeLog:
* exp_ch6.adb (Expand_Call_Helper): Also create a temporary in the
case of a noncontrolled user-defined indexing.
ada: Exclude library units from gnatcov instrumentation
Before this patch, we instrumented code that's only used during the
build process to generate more code. This patch marks the
code-generating code so it's not instrumented for coverage.
gcc/ada/ChangeLog:
* gnat2.gpr: Add library units to coverage exclusion list.
Eric Botcazou [Thu, 24 Oct 2024 15:09:39 +0000 (17:09 +0200)]
ada: Further work in semantic analysis of iterated component associations
This finishes up the transition to preanalysis of a copy of the expression
for iterated component associations in all contexts, thus voiding the need
to clean things up afterward.
However, this requires a larger cleanup in semantic analysis of aggregates,
in particular for others choices, which are currently skipped in Sem_Aggr,
with Exp_Aggr trying to patch things up afterward but leaving some legality
loopholes in the end. That's why this makes sure that all the expressions
appearing in aggregates are either analyzed or preanalyzed by Sem_Aggr, as
documented in the spec of Sem, modulo the copy in an iteration context.
gcc/ada/ChangeLog:
* exp_aggr.adb (Build_Array_Aggr_Code): Remove obsolete comment.
(Convert_To_Positional): Remove Ctyp local variable.
(Is_Static_Element): Remove Dims parameter and do not preanalyze the
expression there.
(Expand_Array_Aggregate): Make Ctyp a constant.
(Compute_Others_Present): Do not preanalyze the expression there.
* sem_aggr.adb (Resolve_Array_Aggregate): New Ctyp constant. Use it
throughout the procedure to denote the component type.
(Resolve_Aggr_Expr): Always preanalyze a copy of the expression in
an iteration context. Preanalyze it directly when the expander is
active and the choice may cover multiple components. Otherwise,
fully analyze it.
Do not reanalyze an iterated component association with an others
choice either when there are positional components.
(Resolve_Iterated_Component_Association): Do not remove references
from the expression after invoking Resolve_Aggr_Expr on it.
Eric Botcazou [Tue, 26 Nov 2024 20:20:08 +0000 (21:20 +0100)]
ada: Remove implicit assumption in the double case
The assumption is fulfilled in all the instantiations of the package, but
it should not be made in the generic code.
gcc/ada/ChangeLog:
* libgnat/s-imager.adb (Set_Image_Real): In the case where a double
integer is needed, do not implicitly assume that it can contain up to
'Digits of the floating-point type.
Sandra Loosemore [Fri, 13 Dec 2024 00:26:29 +0000 (00:26 +0000)]
Fix -fstrict-flex-arrays documentation, again [PR111659]
My previous attempt to fix this issue ended up garbling the text
instead. Trying again to make the descriptions of the attribute and
command-line option consistent.
gcc/ChangeLog
PR middle-end/111659
* doc/extend.texi (Common Variable Attributes): Copy-edit description
of the strict_flex_array attribute levels.
* doc/invoke.texi (C Dialect Options): Swap documented behavior for
levels 0 and 3. Copy the description for the other levels from the
attribute instead of indirecting to it.
hppa: Remove extra clobber from divsi3, udivsi3, modsi3 and umodsi3 patterns
The $$divI, $$divU, $$remI and $$remU millicode calls clobber r1,
r26, r25 and the return link register (r31 or r2). We don't need
to clobber any other registers.
2024-12-12 John David Anglin <danglin@gcc.gnu.org>
gcc/ChangeLog:
* config/pa/pa.cc (pa_emit_hpdiv_const): Clobber r1, r26,
r25 and return register.
* config/pa/pa.md (divsi3): Revise clobbers and operands.
Remove second clobber from div:SI insns.
(udivsi3, modsi3, umodsi3): Likewise.
Sandra Loosemore [Thu, 12 Dec 2024 20:12:42 +0000 (20:12 +0000)]
Regenerate attr-urls.def.
I noticed there is this new generated file that needs to be updated by
"make regenerate-attr-urls" similarly to "make regenerate-opt-urls", but
nobody had done that recently as the buildbot does not nag about it yet.
Sandra Loosemore [Thu, 12 Dec 2024 19:56:04 +0000 (19:56 +0000)]
Clean up documentation of -Wsuggest-attribute= [PR115532]
The list of -Wsuggest-attribute= variants was out of date in the option
summary (and getting too long to fit on one line), and an index entry was
missing for -Wsuggest-attribute=returns_nonnull.
gcc/ChangeLog
PR c/115532
* common.opt.urls: Regenerated.
* doc/invoke.texi (Option Summary): Don't try to list all the
-Wsuggest-attribute= variants inline here.
(Warning Options): Likewise. Add @opindex for
Wsuggest-attribute=returns_nonnull and its no- form. Remove
@itemx for no- form.
Co-Authored-By: Peter Eisentraut <peter@eisentraut.org>
Jakub Jelinek [Thu, 12 Dec 2024 18:47:46 +0000 (19:47 +0100)]
match.pd: Defer some CTZ/CLZ foldings until after ubsan pass for -fsanitize=builtin [PR115127]
As the following testcase shows, -fsanitize=builtin instruments the
builtins in the ubsan pass which is done shortly after going into
SSA, but if optimizations optimize the builtins away before that,
nothing is instrumented. Now, I think it is just fine if the
result of the builtins isn't used in any way and we just DCE them,
but in the following optimizations the result is used.
So, for -fsanitize=builtin, the following patch defers the optimizations
of comparisons involving single-argument CLZ/CTZ (the variants that are
undefined at zero) until the ubsan pass is done.
Now, we don't have PROP_ubsan and I am not sure it is worth adding it,
there is PROP_ssa set by the ssa pass which is 3 passes before
ubsan, but there are only 2 warning passes in between, so PROP_ssa
looked good enough to me.
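A sketch of the guard added to the affected rules (the `if' condition
in match.pd is a C++ expression; the exact form here is an assumption):
    /* Simplify only when not sanitizing builtins, or once we are in
       SSA form, i.e. once the ubsan pass has had its chance to run.  */
    !(flag_sanitize & SANITIZE_BUILTIN)
      || (cfun && (cfun->curr_properties & PROP_ssa))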
2024-12-12 Jakub Jelinek <jakub@redhat.com>
PR sanitizer/115127
* match.pd (clz (X) == C, ctz (X) == C, ctz (X) >= C): Don't
optimize if -fsanitize=builtin and not yet in SSA form.
Tobias Burnus [Thu, 12 Dec 2024 17:58:59 +0000 (18:58 +0100)]
OpenMP: Enable has_device_addr clause for 'dispatch' in C/C++
The 'has_device_addr' of 'dispatch' has to be seen in conjunction with the
'need_device_addr' modifier to the 'adjust_args' clause of 'declare variant'.
As the latter has not yet been implemented, 'has_device_addr' has no real
effect. However, to prepare for 'need_device_addr' and as a service to the
For C, where 'need_device_addr' is not permitted (contrary to C++ and Fortran),
a note is output when the user tries to use it (alongside the existing
error that either 'nothing' or 'need_device_ptr' was expected).
And, on the ME side, it is lightly handled by diagnosing when - for the
same argument - there is a mismatch between the variant's adjust_args
'need_device_ptr' modifier and dispatch having an 'has_device_addr' clause
(or likewise for need_device_addr with is_device_ptr) as, according to the
spec, those are completely separate.
Thus, 'dispatch' will still do the host to device pointer conversion for
a 'need_device_ptr' argument, even if it appeared in a 'has_device_addr'
clause.
gcc/c/ChangeLog:
* c-parser.cc (OMP_DISPATCH_CLAUSE_MASK): Add has_device_addr clause.
(c_finish_omp_declare_variant): Add an 'inform' telling the user that
'need_device_addr' is invalid for C.
* gimplify.cc (gimplify_call_expr): When handling OpenMP's dispatch,
add diagnostic when there is a ptr vs. addr mismatch between
need_device_{addr,ptr} and {is,has}_device_{ptr,addr}, respectively.
gcc/testsuite/ChangeLog:
* c-c++-common/gomp/adjust-args-3.c: New test.
* gcc.dg/gomp/adjust-args-2.c: New test.