Iain Buclaw [Mon, 20 Jan 2025 19:01:03 +0000 (20:01 +0100)]
d: Fix failing test with 32-bit compiler [PR114434]
The introduction of gdc.test/runnable/test23514.d exposed an incorrect
compilation when adding a 64-bit constant to a link-time address. The
current cast to size_t causes a loss of precision, which can result in
incorrect compilation.
PR d/114434
gcc/d/ChangeLog:
* expr.cc (ExprVisitor::visit (PtrExp *)): Get the offset as a
dinteger_t rather than a size_t.
(ExprVisitor::visit (SymOffExp *)): Likewise.
Uros Bizjak [Fri, 20 Dec 2024 15:16:15 +0000 (16:16 +0100)]
i386: Disable SImode/DImode moves from/to mask regs without avx512bw [PR118067]
SImode and DImode moves from/to mask registers are valid only with AVX512BW,
so mark relevant alternatives in *movsi_internal and *movdi_internal as such.
Even with the patch, the testcase still fails, but now with:
pr118067.c: In function ‘foo’:
pr118067.c:13:1: internal compiler error: maximum number of generated reload insns per insn achieved (90)
13 | }
| ^
0x2c3b581 internal_error(char const*, ...)
../../git/gcc/gcc/diagnostic-global-context.cc:517
0xb68938 lra_constraints(bool)
../../git/gcc/gcc/lra-constraints.cc:5411
0xb51a0d lra(_IO_FILE*, int)
../../git/gcc/gcc/lra.cc:2449
0xaf9f4d do_reload
../../git/gcc/gcc/ira.cc:5977
0xafa462 execute
../../git/gcc/gcc/ira.cc:6165
Simon Martin [Sun, 5 Jan 2025 09:36:47 +0000 (10:36 +0100)]
c++: Friend classes don't shadow enclosing template class parameter [PR118255]
We currently reject the following code
=== code here ===
template <int non_template> struct S { friend class non_template; };
class non_template {};
S<0> s;
=== code here ===
While EDG agrees with the current behaviour, clang and MSVC don't (see
https://godbolt.org/z/69TGaabhd), and I believe that this code is valid,
since the friend clause does not actually declare a type, so it cannot
shadow anything. The fact that we didn't error out if the non_template
class was declared before S backs this up as well.
This patch fixes this by skipping the call to check_template_shadow for
hidden bindings.
PR c++/118255
gcc/cp/ChangeLog:
* name-lookup.cc (pushdecl): Don't call check_template_shadow
for hidden bindings.
gcc/testsuite/ChangeLog:
* g++.dg/lookup/pr99116-1.C: Adjust test expectation.
* g++.dg/template/friend84.C: New test.
Nathaniel Shead [Fri, 20 Dec 2024 11:09:39 +0000 (22:09 +1100)]
c++: Allow pragmas in NSDMIs [PR118147]
This patch removes the (unnecessary) CPP_PRAGMA_EOL case from
cp_parser_cache_defarg, which currently has the result that any pragmas
in the NSDMI cause an error.
PR c++/118147
gcc/cp/ChangeLog:
* parser.cc (cp_parser_cache_defarg): Don't error when
CPP_PRAGMA_EOL.
Simon Martin [Thu, 16 Jan 2025 15:27:06 +0000 (16:27 +0100)]
c++: Make sure fold_sizeof_expr returns the correct type [PR117775]
We currently ICE upon the following code, that is valid under
-Wno-pointer-arith:
=== cut here ===
int main() {
decltype( [](auto) { return sizeof(void); } ) x;
return x.operator()(0);
}
=== cut here ===
The problem is that "fold_sizeof_expr (sizeof(void))" returns
size_one_node, which has a different TREE_TYPE from that of the sizeof
expression and later triggers an assert in cxx_eval_store_expression.
This patch makes sure that fold_sizeof_expr always returns a tree with
the size_type_node type.
PR c++/117775
gcc/cp/ChangeLog:
* decl.cc (fold_sizeof_expr): Make sure the folded result has
type size_type_node.
Eugene Rozenfeld [Sat, 11 Jan 2025 03:48:52 +0000 (19:48 -0800)]
Fix setting of call graph node AutoFDO count
We are initializing both the call graph node count and
the entry block count of the function with the head_count value
from the profile.
The count propagation algorithm may refine the entry block count,
and we may end up with a case where the call graph node count
is set to zero but the entry block count is non-zero. That becomes
a problem because we have this code in execute_fixup_cfg:
profile_count num = node->count;
profile_count den = ENTRY_BLOCK_PTR_FOR_FN (cfun)->count;
bool scale = num.initialized_p () && !(num == den);
Here if num is 0 but den is not 0, scale becomes true and we
lose the counts in
if (scale)
bb->count = bb->count.apply_scale (num, den);
This is what happened in the issue reported in PR116743
(a 10% regression in MySQL HAMMERDB tests). 3d9e6767939e9658260e2506e81ec32b37cba041 made an improvement in
AutoFDO count propagation, which caused a mismatch between
the call graph node count (zero) and the entry block count (non-zero)
and subsequent loss of counts as described above.
The fix is to update the call graph node count once we've done count propagation.
Tested on x86_64-pc-linux-gnu.
gcc/ChangeLog:
PR gcov-profile/116743
* auto-profile.cc (afdo_annotate_cfg): Fix mismatch between the call graph node count
and the entry block count.
Iain Buclaw [Thu, 16 Jan 2025 23:23:45 +0000 (00:23 +0100)]
d: Fix record layout of compiler-generated TypeInfo_Class [PR115249]
In r14-8766, the layout of TypeInfo_Class changed in the runtime
library, but didn't get reflected in the compiler-generated data,
causing a corruption of runtime type introspection on BigEndian targets.
This adjusts the size of the `ClassFlags' field from uint to ushort, and
adds a new ushort `depth' field in the space that ClassFlags used to
occupy.
Jonathan Wakely [Fri, 27 Sep 2024 20:01:46 +0000 (21:01 +0100)]
libstdc++: Fix more pedwarns in headers for C++98
Some tests e.g. 17_intro/headers/c++1998/all_pedantic_errors.cc FAIL
with GLIBCXX_TESTSUITE_STDS=98 due to numerous C++11 extensions still in
use in the library headers. The recent changes to not make them system
headers means we get warnings now.
This change adds more diagnostic pragmas to suppress those warnings.
libstdc++-v3/ChangeLog:
* include/bits/istream.tcc: Add diagnostic pragmas around uses
of long long and extern template.
* include/bits/locale_facets.h: Likewise.
* include/bits/locale_facets.tcc: Likewise.
* include/bits/locale_facets_nonio.tcc: Likewise.
* include/bits/ostream.tcc: Likewise.
* include/bits/stl_algobase.h: Likewise.
* include/c_global/cstdlib: Likewise.
* include/ext/pb_ds/detail/resize_policy/hash_prime_size_policy_imp.hpp:
Likewise.
* include/ext/pointer.h: Likewise.
* include/ext/stdio_sync_filebuf.h: Likewise.
* include/std/istream: Likewise.
* include/std/ostream: Likewise.
* include/tr1/cmath: Likewise.
* include/tr1/type_traits: Likewise.
* include/tr1/functional_hash.h: Likewise. Remove semi-colons
at namespace scope that aren't needed after macro expansion.
* include/tr1/tuple: Remove semi-colon at namespace scope.
* include/bits/vector.tcc: Change LL suffix to just L.
We have several overloads of std::deque::_M_insert_aux, one of which is
variadic and called by std::deque::emplace. With a suitable set of
arguments to emplace, it's possible for one of the non-variadic
_M_insert_aux overloads to be selected by overload resolution, making
emplace ill-formed.
Rename the variadic _M_insert_aux to _M_emplace_aux so that calls to
emplace never select an _M_insert_aux overload. Also add an inline
_M_insert_aux for the const lvalue overload that is called from
insert(const_iterator, const value_type&).
Match-and-simplified .COND_IOR (_41, d_lsm.7_11, _46, d_lsm.7_11) to 1
when _46 == 1. This happens by removing the conditional and applying
a | 1 = 1. Normally we re-introduce the conditional and its else value
if needed but that does not happen here as we're not dealing with a
vector type. For correctness's sake, we must not remove the conditional
even for non-vector types.
This patch re-introduces a COND_EXPR in such cases. For PR118140 this
results in a non-vectorized loop.
PR middle-end/118140
gcc/ChangeLog:
* gimple-match-exports.cc (maybe_resimplify_conditional_op): Add
COND_EXPR when we simplified to a scalar gimple value but still
have an else value.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/pr118140.c: New test.
* gcc.target/riscv/rvv/autovec/pr118140.c: New test.
testsuite: The expect framework might introduce CR in output
When running tests using the "sim" config, the command is launched in
non-readonly mode and the text retrieved from the expect command will
then replace all LF with CRLF. (The problem can be found in sim_load
where it calls remote_spawn without an input file).
libstdc++-v3/ChangeLog:
* testsuite/27_io/print/1.cc: Allow both LF and CRLF in test.
* testsuite/27_io/print/3.cc: Likewise.
Nathaniel Shead [Sat, 11 Jan 2025 17:35:08 +0000 (04:35 +1100)]
testsuite: Fix flag used for modules test
GCC14 doesn't have the new spelling '-fmodules' for enabling modules;
use the old '-fmodules-ts' spelling instead.
gcc/testsuite/ChangeLog:
* g++.dg/modules/pr114630_a.C: Use -fmodules-ts instead of
-fmodules in testcase.
* g++.dg/modules/pr114630_b.C: Likewise.
* g++.dg/modules/pr114630_c.C: Likewise.
Nathaniel Shead [Thu, 9 Jan 2025 14:06:37 +0000 (01:06 +1100)]
c++/modules: Handle chaining already-imported local types [PR114630]
In the linked testcase, an ICE occurs because when reading the
(duplicate) function definition for _M_do_parse from module Y, the local
type definitions have already been streamed from module X and setup as
regular backreferences, rather than being found with find_duplicate,
causing issues with managing DECL_CHAIN.
It is tempting to just skip setting up the DECL_CHAIN for this case.
However, for the future it would be best to ensure that the block vars
for the duplicate definition are accurate, so that we could implement
ODR checking on function definitions at some point.
So to solve this, this patch creates a copy of the streamed-in local
type and chains that; it will be discarded along with the rest of the
duplicate function after we've finished processing.
A couple of suggested implementations from the discussion on the PR that
don't work:
- Replacing the `DECL_CHAIN` assertion with `(*chain && *chain != decl)`
doesn't handle the case where type definitions are followed by regular
local variables, since those won't have been imported as separate
backreferences and so the chains will diverge.
- Correcting the purviewness of GMF template instantiations to force Y
to emit copies of the local types rather than backreferences into X is
insufficient, as it's still possible that the local types got streamed
in a separate cluster to the function definition, and so will be again
referred to via regular backreferences when importing.
- Likewise, preventing the emission of function definitions where an
import has already provided that same definition also is insufficient,
for much the same reason.
PR c++/114630
gcc/cp/ChangeLog:
* module.cc (trees_in::core_vals) <BLOCK>: Chain a new node if
DECL_CHAIN already is set.
gcc/testsuite/ChangeLog:
* g++.dg/modules/pr114630.h: New test.
* g++.dg/modules/pr114630_a.C: New test.
* g++.dg/modules/pr114630_b.C: New test.
* g++.dg/modules/pr114630_c.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
Reviewed-by: Patrick Palka <ppalka@redhat.com>
In GCC 12 there was a ~40% regression in the performance of hashmap->find.
This regression came about accidentally:
Before GCC 12 the find function was small enough that IPA would inline it even
though it wasn't marked inline. In GCC-12 an optimization was added to perform
a linear search when the entries in the hashmap are small.
This increased the size of the function enough that IPA would no longer inline.
Inlining had two benefits:
1. The return value is a reference, so it has to be returned and dereferenced
even though the search loop may have already dereferenced it.
2. The pattern is a hard one for branch predictors to track. This causes
a large number of branch misses if the value is immediately checked and
branched on, i.e. if (a != m.end()), which is a common pattern (sketched below).
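As an illustration only (not code from the patch), the kind of lookup pattern
referred to in point 2 looks roughly like this, sketched here with a
hypothetical helper over std::unordered_map:
=== cut here ===
#include <string>
#include <unordered_map>

// Hypothetical helper: the result of find() is immediately compared against
// end() and branched on, which is hard on the branch predictor when the
// call is not inlined.
int lookup_or_default (const std::unordered_map<std::string, int> &m,
                       const std::string &key)
{
  auto it = m.find (key);
  if (it != m.end ())   // the common "found?" check mentioned above
    return it->second;
  return -1;
}
=== cut here ===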
The patch fixes both these issues by adding the inline keyword to _M_locate
to allow the inliner to consider inlining again.
This and the other patches have been run through several benchmarks where
the size, the number of elements searched for, the type (reference vs value),
etc. were tested.
The change shows no statistical regression, but an average find improvement of
~27% and a range between ~10-60% improvements.
testsuite: arm: Add pattern for armv8-m.base to cmse-15.c test
Since armv8-m.base uses thumb1, which does not support sibcall/tailcall,
a pattern is needed that uses a PUSH/BL/POP sequence instead of a single
B instruction to reuse an already existing function in the compilation unit.
gcc/testsuite/ChangeLog:
* gcc.target/arm/cmse/cmse-15.c: Added pattern for armv8-m.base.
Andrew Carlotti [Tue, 7 Jan 2025 18:32:23 +0000 (18:32 +0000)]
Disable a broken multiversioning optimisation
This patch skips redirect_to_specific clone for aarch64 and riscv,
because the optimisation has two flaws:
1. It checks the value of the "target" attribute, even on targets that
don't use this attribute for multiversioning.
2. The algorithm used is too aggressive, and will eliminate the
indirection in some cases where the runtime choice of callee version
can't be determined statically at compile time. A correct implementation would need to
verify that:
- if the current caller version were selected at runtime, then the
chosen callee version would be eligible for selection.
- if any higher priority callee version were selected at runtime, then
a higher priority caller version would have been eligible for
selection (and hence the current caller version wouldn't have been
selected).
The current checks only verify a more restrictive version of the first
condition, and don't check the second condition at all.
Fixing the optimisation properly would require implementing target hooks
to check for implications between version attributes, which is too
complicated for this stage. However, I would like to see this hook
implemented in the future, since it could also help deduplicate other
multiversioning code.
Since this behavior has existed for x86 and powerpc for a while, I
think it's best to preserve the existing behavior on those targets,
unless any maintainer for those targets disagrees.
gcc/ChangeLog:
* multiple_target.cc
(redirect_to_specific_clone): Assert that "target" attribute is
used for FMV before checking it.
(ipa_target_clone): Skip redirect_to_specific_clone on some
targets.
Richard Biener [Thu, 5 Dec 2024 09:47:13 +0000 (10:47 +0100)]
tree-optimization/117912 - bogus address equivalences for __builtin_object_size
VN again is the culprit for exploiting address equivalences before
__builtin_object_size got the chance to do its job. This time
it isn't about union members but adjacent structure fields where
an address to one after the last element of an array field can
spill over to the next field.
The following protects all out-of-bound accesses on the upper bound
side (singling out TYPE_MAX_VALUE + 1 is more expensive). It
ignores other out-of-bound addresses that would invoke UB.
Zero-sized arrays are a bit awkward because the C++ front end represents them
with a -1U upper bound.
There's a similar issue for zero-sized components whose address can
be the same as the adjacent field in C.
PR tree-optimization/117912
* tree-ssa-sccvn.cc (copy_reference_ops_from_ref): For addresses
of zero-sized components do not set ->off if the object size pass
didn't run.
For OOB ARRAY_REF accesses in address expressions avoid setting
->off if the object size pass didn't run.
(valueize_refs_1): Likewise.
* c-c++-common/torture/pr117912-1.c: New testcase.
* c-c++-common/torture/pr117912-2.c: Likewise.
* c-c++-common/torture/pr117912-3.c: Likewise.
This member function was previously deprecated, but that was reverted by
P2875R4, approved earlier this year in Tokyo. Since it's not going to be
deprecated in C++26, and so presumably not removed, there is no point in
giving deprecated warnings for C++23 mode.
Jonathan Wakely [Thu, 11 Apr 2024 18:12:48 +0000 (19:12 +0100)]
libstdc++: Give std::memory_order a fixed underlying type [PR89624]
Prior to C++20 this enum type doesn't have a fixed underlying type,
which means it can be modified by -fshort-enums, which then means the
HLE bits are outside the range of valid values for the type.
As it has a fixed type of int in C++20 and later, do the same for
earlier standards too. This is technically a change for C++17 down,
because the implicit underlying type (without -fshort-enums) was
unsigned before. I doubt it matters in practice. That incompatibility
already exists between C++17 and C++20 and nobody has noticed or
complained. Now at least the underlying type will be int for all -std
modes.
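As a rough illustration (an assumption about how the guarantee can be observed,
not code from the patch), a check like the following should now hold in every
-std mode:
=== cut here ===
#include <atomic>
#include <type_traits>

// With the fixed underlying type, this holds from -std=c++11 up, not only for
// C++20 where int as the underlying type is mandated by the standard.
static_assert (std::is_same<std::underlying_type<std::memory_order>::type,
                            int>::value,
               "std::memory_order has int as its underlying type");
=== cut here ===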
libstdc++-v3/ChangeLog:
PR libstdc++/89624
* include/bits/atomic_base.h (memory_order): Use int as
underlying type.
* testsuite/29_atomics/atomic/89624.cc: New test.
The standard says that std::exclusive_scan can be used to work in
place, i.e. where the output range is the same as the input range. This
means that the first sum cannot be written to the output until after
reading the first input value, otherwise we'll already have overwritten
the first input value.
While writing a new testcase I also realised that the serial version of
std::exclusive_scan uses copy construction for the accumulator variable,
but the standard only requires Cpp17MoveConstructible. We also require
move assignable, which is missing from the standard's requirements, but
we should at least use move construction not copy construction.
A similar problem exists for some other new C++17 numeric algos, but
I'll fix the others in a subsequent commit.
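For reference, a minimal illustrative example (not the committed testcase) of
the in-place use that the standard permits and that the fix has to keep
working:
=== cut here ===
#include <cassert>
#include <numeric>
#include <vector>

int main ()
{
  std::vector<int> v{1, 2, 3, 4};
  // Output range == input range: each input element must be read before the
  // corresponding output element is written.
  std::exclusive_scan (v.begin (), v.end (), v.begin (), 0);
  assert ((v == std::vector<int>{0, 1, 3, 6}));
}
=== cut here ===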
libstdc++-v3/ChangeLog:
PR libstdc++/108236
* include/pstl/glue_numeric_impl.h (exclusive_scan): Pass __init
as rvalue.
* include/pstl/numeric_impl.h (__brick_transform_scan): Do not
write through __result until after reading through __first. Move
__init into return value.
(__pattern_transform_scan): Pass __init as rvalue.
* include/std/numeric (exclusive_scan): Move construct instead
of copy constructing.
* testsuite/26_numerics/exclusive_scan/2.cc: New test.
* testsuite/26_numerics/pstl/numeric_ops/108236.cc: New test.
Jonathan Wakely [Mon, 9 Dec 2024 10:52:10 +0000 (10:52 +0000)]
libstdc++: Fix debug containers for constant evaluation [PR117962]
Using a stateful allocator with std::vector would fail in Debug Mode,
because the allocator-extended move constructor tries to swap all the
attached safe iterators, but that uses a non-inline function which isn't
constexpr. We don't actually need to swap any iterators in constant
expressions, because we never attach them to the container in the first
place.
This bug went unnoticed because the tests for constexpr std::vector were
using a stateful allocator with a std::allocator base class, but were
failing to override the inherited is_always_equal trait from
std::allocator. That meant that the allocators took the always-equal
code paths, and didn't try to use the buggy constructor. In C++26 the
std::allocator::is_always_equal trait goes away, and so the tests
changed behaviour, revealing the bug.
libstdc++-v3/ChangeLog:
PR libstdc++/117962
* include/debug/safe_container.h: Make allocator-extended move
constructor a no-op during constant evaluation.
Jonathan Wakely [Tue, 10 Dec 2024 10:56:41 +0000 (10:56 +0000)]
libstdc++: Disable __gnu_debug::__is_singular(T*) in constexpr [PR109517]
Because of PR c++/85944 we have several bugs where _GLIBCXX_DEBUG causes
errors for constexpr code. Although Bug 117966 could be fixed by
avoiding redundant debug checks in std::span, and Bug 106212 could be
fixed by avoiding redundant debug checks in std::array, there are many
more cases where similar __glibcxx_requires_valid_range checks fail to
compile and removing the checks everywhere isn't desirable.
This just disables the __gnu_debug::__check_singular(T*) check during
constant evaluation. Attempting to dereference a null pointer will
certainly fail during constant evaluation (if it doesn't fail then it's
a compiler bug and not the library's problem). Disabling this check
during constant evaluation shouldn't do any harm.
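A minimal sketch of the idea, using a hypothetical check_singular_ptr helper
rather than the actual libstdc++ code:
=== cut here ===
#include <type_traits>

// Skip the singular-pointer heuristic during constant evaluation; any actual
// invalid dereference is rejected by the compiler itself in that context.
template <typename T>
constexpr bool
check_singular_ptr (const T *ptr)
{
  if (std::is_constant_evaluated ())
    return false;            // never report "singular" in constexpr
  return ptr == nullptr;     // simplified runtime heuristic
}
=== cut here ===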
libstdc++-v3/ChangeLog:
PR libstdc++/109517
PR libstdc++/109976
* include/debug/helper_functions.h (__valid_range_aux): Treat
all input iterator ranges as valid during constant evaluation.
Jonathan Wakely [Mon, 9 Dec 2024 17:35:24 +0000 (17:35 +0000)]
libstdc++: Skip redundant assertions in std::array equality [PR106212]
As PR c++/106212 shows, the Debug Mode checks cause a compilation error
for equality comparisons involving std::array prvalues in constant
expressions. Those Debug Mode checks are redundant when
comparing two std::array objects, because we already know we have a
valid range. We can also avoid the unnecessary step of using
std::__niter_base to do __normal_iterator unwrapping, which isn't needed
because our std::array iterators are just pointers. Using
std::__equal_aux1 instead of std::equal avoids the redundant checks in
std::equal and std::__equal_aux.
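For illustration only (compiled as C++20, not the committed testcase), this is
the kind of comparison of std::array prvalues in a constant expression that
previously failed with _GLIBCXX_DEBUG:
=== cut here ===
#include <array>

// Both operands are prvalues and the comparison is evaluated at compile time.
static_assert (std::array<int, 3>{1, 2, 3} == std::array<int, 3>{1, 2, 3},
               "equal arrays compare equal in a constant expression");
=== cut here ===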
libstdc++-v3/ChangeLog:
PR libstdc++/106212
* include/std/array (operator==): Use std::__equal_aux1 instead
of std::equal.
* testsuite/23_containers/array/comparison_operators/106212.cc:
New test.
Jonathan Wakely [Mon, 9 Dec 2024 17:35:24 +0000 (17:35 +0000)]
libstdc++: Skip redundant assertions in std::span construction [PR117966]
As PR c++/117966 shows, the Debug Mode checks cause a compilation error
for a global constexpr std::span. Those debug checks are redundant when
constructing from an array or a range, because we already know we have a
valid range and we know its size. Instead of delegating to the
std::span(contiguous_iterator, contiguous_iterator) constructor, just
initialize the data members directly.
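An illustrative example (C++20, not the committed testcase) of the global
constexpr std::span construction that the redundant checks used to break:
=== cut here ===
#include <span>

constexpr int arr[]{1, 2, 3};
constexpr std::span<const int, 3> s{arr};   // constructed from an array
static_assert (s.size () == 3, "span over a 3-element array");
=== cut here ===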
libstdc++-v3/ChangeLog:
PR libstdc++/117966
* include/std/span (span(T (&)[N])): Do not delegate to
constructor that performs redundant checks.
(span(array<T, N>&), span(const array<T, N>&)): Likewise.
(span(Range&&), span(const span<T, N>&)): Likewise.
* testsuite/23_containers/span/117966.cc: New test.
Inserting an empty range into a std::deque results in undefined calls to
either std::copy, std::copy_backward, std::move, or std::move_backward.
We call those algos with invalid arguments where the output range is the
same as the input range, e.g. std::copy(first, last, first) which
violates the preconditions for the algorithms.
This fix simply returns early if there's nothing to insert. Most callers
already ensure that we don't even call _M_range_insert_aux with an empty
range, but some callers don't. Rather than checking for n == 0 in each
of the callers, this just does the check once and uses __builtin_expect
to treat empty insertions as unlikely.
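Purely as an illustration (not necessarily the exact testcase), an empty-range
insertion of the kind covered by the new early return:
=== cut here ===
#include <deque>
#include <vector>

int main ()
{
  std::deque<int> d{1, 2, 3};
  std::vector<int> src;                                 // empty source range
  d.insert (d.begin () + 1, src.begin (), src.end ());  // n == 0 insertion
}
=== cut here ===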
libstdc++-v3/ChangeLog:
PR libstdc++/118035
* include/bits/deque.tcc (_M_range_insert_aux): Return
immediately if inserting an empty range.
* testsuite/23_containers/deque/modifiers/insert/118035.cc: New
test.
Patrick Palka [Thu, 9 Jan 2025 15:50:19 +0000 (10:50 -0500)]
c++: ICE during requires-expr partial subst [PR118060]
Here during partial substitution of the requires-expression (as part of
CTAD constraint rewriting) we segfault from the INDIRECT_REF case of
convert_to_void due to *f(u) being type-dependent. We should just defer
checking convert_to_void until satisfaction.
PR c++/118060
gcc/cp/ChangeLog:
* constraint.cc (tsubst_valid_expression_requirement): Don't
check convert_to_void during partial substitution.
Patrick Palka [Thu, 9 Jan 2025 15:50:12 +0000 (10:50 -0500)]
c++: constexpr potentiality of CAST_EXPR [PR117925]
We're incorrectly treating the templated callee (FnPtr)fnPtr, represented
as CAST_EXPR with TREE_LIST operand, as potentially constant here due to
neglecting to look through the TREE_LIST in the CAST_EXPR case of p_c_e_1.
PR c++/117925
gcc/cp/ChangeLog:
* constexpr.cc (potential_constant_expression_1) <case CAST_EXPR>:
Fix check for class conversion to literal type to properly look
through the TREE_LIST operand of a CAST_EXPR.
Patrick Palka [Thu, 9 Jan 2025 15:50:08 +0000 (10:50 -0500)]
c++: relax ICE for unexpected trees during constexpr [PR117925]
When we encounter an unexpected (likely templated) tree code during
constexpr evaluation we currently ICE even in release mode. But it
seems more user-friendly to just gracefully treat the expression as
non-constant, which will be harmless most of the time (e.g. in the case
of warning-specific or speculative constexpr folding as in the PR), and
at worst will transform an ICE-on-valid bug into a rejects-valid bug.
This is also what e.g. tsubst_expr does when it encounters an unexpected
tree code.
PR c++/117925
gcc/cp/ChangeLog:
* constexpr.cc (cxx_eval_constant_expression) <default>:
Relax ICE when encountering an unexpected tree code into a
checking ICE guarded by flag_checking.
Patrick Palka [Thu, 9 Jan 2025 15:49:45 +0000 (10:49 -0500)]
c++: template-id dependence wrt local static arg [PR117792]
Here we end up ICEing at instantiation time for the call to
f<local_static> ultimately because we wrongly consider the call to be
non-dependent, and so we specialize f ahead of time and then get
confused when fully substituting this specialization.
The call is dependent due to [temp.dep.temp]/3 and we miss that because
function template-id arguments aren't coerced until overload resolution,
and so the local static template argument lacks an implicit cast to
reference type that value_dependent_expression_p looks for before
considering dependence of the address. Other kinds of template-ids aren't
affected since they're coerced ahead of time.
So when considering dependence of a function template-id, we need to
conservatively consider dependence of the address of each argument (if
applicable).
PR c++/117792
gcc/cp/ChangeLog:
* pt.cc (type_dependent_expression_p): Consider the dependence
of the address of each template argument of a function
template-id.
Patrick Palka [Fri, 13 Dec 2024 18:17:29 +0000 (13:17 -0500)]
libstdc++: Avoid unnecessary copies in ranges::min/max [PR112349]
Use a local reference for the (now possibly lifetime extended) result of
*__first so that we copy it only when necessary.
PR libstdc++/112349
libstdc++-v3/ChangeLog:
* include/bits/ranges_algo.h (__min_fn::operator()): Turn local
object __tmp into a reference.
* include/bits/ranges_util.h (__max_fn::operator()): Likewise.
* testsuite/25_algorithms/max/constrained.cc (test04): New test.
* testsuite/25_algorithms/min/constrained.cc (test04): New test.
Patrick Palka [Tue, 29 Oct 2024 13:26:19 +0000 (09:26 -0400)]
libstdc++: Fix complexity of drop_view::begin() const [PR112641]
Views are required to have an amortized O(1) begin(), but our drop_view's
const begin overload is O(n) for non-common ranges with a non-sized
sentinel. This patch reimplements it so that it's O(1) always. See
also LWG 4009.
PR libstdc++/112641
libstdc++-v3/ChangeLog:
* include/std/ranges (drop_view::begin): Reimplement const
overload so that it's O(1) always.
* testsuite/std/ranges/adaptors/drop.cc (test10): New test.
When the test was initially created, -fcommon was the default, but in
commit r10-4867-g6271dd984d7 the default value changed to -fno-common.
This change made the test start failing. To counter the over-alignment
caused by 'a' no longer being common, use -Os.
gcc/testsuite/ChangeLog:
* gcc.target/arm/memset-inline-8.c: Use -Os and prefix assembler
instructions with a tab to improve test stability.
* gcc.target/arm/memset-inline-8-exe.c: Use -Os.
Marek Polacek [Thu, 12 Dec 2024 19:56:07 +0000 (14:56 -0500)]
c++: ICE initializing array of aggrs [PR117985]
This crash started with my r12-7803 but I believe the problem lies
elsewhere.
build_vec_init has cleanup_flags whose purpose is -- if I grok this
correctly -- to avoid destructing an object multiple times. Let's
say we are initializing an array of A. Then we might end up in
a scenario similar to initlist-eh1.C:
try
  {
    call A::A in a loop
    // #0
    try
      {
        call a fn using the array
      }
    finally
      {
        // #1
        call A::~A in a loop
      }
  }
catch
  {
    // #2
    call A::~A in a loop
  }
cleanup_flags makes us emit a statement like
D.3048 = 2;
at #0 to disable performing the cleanup at #2, since #1 will take
care of the destruction of the array.
But if we are not emitting the loop because we can use a constant
initializer (and use a single { a, b, ...}), we shouldn't generate
the statement resetting the iterator to its initial value. Otherwise
we crash in gimplify_var_or_parm_decl because it gets the stray decl
D.3048.
PR c++/117985
gcc/cp/ChangeLog:
* init.cc (build_vec_init): Pop CLEANUP_FLAGS if we're not
generating the loop.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/initlist-array23.C: New test.
* g++.dg/cpp0x/initlist-array24.C: New test.
Marek Polacek [Tue, 25 Jun 2024 21:42:01 +0000 (17:42 -0400)]
c++: unresolved overload with comma op [PR115430]
This works:
template<typename T>
int Func(T);
typedef int (*funcptrtype)(int);
funcptrtype fp0 = &Func<int>;
but this doesn't:
funcptrtype fp2 = (0, &Func<int>);
because we only call resolve_nondeduced_context on the LHS (via
convert_to_void) but not on the RHS, so cp_build_compound_expr's
type_unknown_p check issues an error.
PR c++/115430
gcc/cp/ChangeLog:
* typeck.cc (cp_build_compound_expr): Call resolve_nondeduced_context
on RHS.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/noexcept41.C: Remove dg-error.
* g++.dg/overload/addr3.C: New test.
Marek Polacek [Tue, 3 Sep 2024 17:04:09 +0000 (13:04 -0400)]
c++: noexcept and pointer to member function type [PR113108]
We ICE in nothrow_spec_p because it got a DEFERRED_NOEXCEPT.
This DEFERRED_NOEXCEPT was created in implicitly_declare_fn
when declaring
Foo& operator=(Foo&&) = default;
in the test. The problem is that in resolve_overloaded_unification
we call maybe_instantiate_noexcept before try_one_overload only in
the TEMPLATE_ID_EXPR case.
Marek Polacek [Thu, 5 Sep 2024 20:45:32 +0000 (16:45 -0400)]
c++: ICE with structured bindings and m-d array [PR102594]
We ICE in decay_conversion with this test:
struct S {
S() {}
};
S arr3[1][1];
auto [m](arr3);
But not when the last line is:
auto [n] = arr3;
Therefore the difference is between copy- and direct-init. In
particular, in build_vec_init we have:
if (direct_init)
from = build_tree_list (NULL_TREE, from);
and then we call build_vec_init again with init==from. Then
decay_conversion gets the TREE_LIST and it crashes.
build_aggr_init has:
/* Wrap the initializer in a CONSTRUCTOR so that build_vec_init
recognizes it as direct-initialization. */
init = build_constructor_single (init_list_type_node,
NULL_TREE, init);
CONSTRUCTOR_IS_DIRECT_INIT (init) = true;
so I propose to do the same in build_vec_init.
PR c++/102594
gcc/cp/ChangeLog:
* init.cc (build_vec_init): Build up a CONSTRUCTOR to signal
direct-initialization rather than a TREE_LIST.
Marek Polacek [Thu, 29 Aug 2024 19:13:03 +0000 (15:13 -0400)]
c++: mutable temps in rodata [PR116369]
Here we wrongly mark the reference temporary for g TREE_READONLY,
so it's put in .rodata and so we can't modify its subobject even
when the subobject is marked mutable. This is so since r9-869.
r14-1785 fixed a similar problem, but not in set_up_extended_ref_temp.
PR c++/116369
gcc/cp/ChangeLog:
* call.cc (set_up_extended_ref_temp): Don't mark a temporary
TREE_READONLY if its type is TYPE_HAS_MUTABLE_P.
Marek Polacek [Thu, 15 Aug 2024 22:47:29 +0000 (18:47 -0400)]
c++: ICE with enum and conversion fn in template [PR115657]
Here we initialize an enumerator with a class prvalue with a conversion
function. When we fold it in build_enumerator, we create a TARGET_EXPR
for the object, and subsequently crash in tsubst_expr, which should not
see such a code.
Normally, we fix similar problems by using an IMPLICIT_CONV_EXPR but here
I may get away with not using the result of fold_non_dependent_expr unless
the result is a constant. A TARGET_EXPR is not constant.
PR c++/115657
gcc/cp/ChangeLog:
* decl.cc (build_enumerator): Call maybe_fold_non_dependent_expr
instead of fold_non_dependent_expr.
gcc/testsuite/ChangeLog:
* g++.dg/cpp1y/constexpr-recursion2.C: New test.
* g++.dg/template/conv21.C: New test.
Marek Polacek [Wed, 8 May 2024 19:43:58 +0000 (15:43 -0400)]
c++: ICE with reference NSDMI [PR114854]
Here we crash on a cp_gimplify_expr/TARGET_EXPR assert:
/* A TARGET_EXPR that expresses direct-initialization should have been
elided by cp_gimplify_init_expr. */
gcc_checking_assert (!TARGET_EXPR_DIRECT_INIT_P (*expr_p));
the TARGET_EXPR in question is created for the NSDMI in:
class Vector { int m_size; };
struct S {
const Vector &vec{};
};
where we first need to create a Vector{} temporary, and then bind the
vec reference to it. The temporary is represented by a TARGET_EXPR
and it cannot be elided. When we create an object of type S, we get
Marek Polacek [Wed, 18 Sep 2024 19:44:31 +0000 (15:44 -0400)]
c++: concept in default argument [PR109859]
1) We're hitting the assert in cp_parser_placeholder_type_specifier.
It says that if it turns out to be false, we should do error() instead.
Do so, then.
2) lambda-targ8.C should compile fine, though. The problem was that
local_variables_forbidden_p wasn't cleared when we're about to parse
the optional template-parameter-list for a lambda in a default argument.
PR c++/109859
gcc/cp/ChangeLog:
* parser.cc (cp_parser_lambda_declarator_opt): Temporarily clear
local_variables_forbidden_p.
(cp_parser_placeholder_type_specifier): Turn an assert into an
error.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/concepts-defarg3.C: New test.
* g++.dg/cpp2a/lambda-targ8.C: New test.
Christophe Lyon [Sun, 24 Nov 2024 18:08:48 +0000 (18:08 +0000)]
arm: [MVE intrinsics] Fix support for predicate constants [PR target/114801]
This backport is a cherry pick of commit 2089009210a1774c37e527ead8bbcaaa1a7a9d2d, with a small change needed
because force_lowpart_subreg does not exist in gcc-14: the patch
replaces it with the equivalent:
- x = force_lowpart_subreg (mode, x, GET_MODE (x));
+ {
+ auto byte = subreg_lowpart_offset (mode, GET_MODE (x));
+ x = force_subreg (mode, x, GET_MODE (x), byte);
+ }
In this PR, we have to handle a case where MVE predicates are supplied
as a const_int, where individual predicates have illegal boolean
values (such as 0xc for a 4-bit boolean predicate). To avoid the ICE,
fix the constant (any non-zero value is converted to all 1s) and emit
a warning.
On MVE, V8BI and V4BI multi-bit masks are interpreted byte-by-byte at
instruction level, but end-users should describe lanes rather than
bytes (so all bytes of a true-predicated lane should be '1'), see the
section on MVE intrinsics in the Arm ACLE specification.
Since force_lowpart_subreg cannot handle const_int (because they have VOID mode),
use gen_lowpart on them, force_lowpart_subreg otherwise.
2024-11-20 Christophe Lyon <christophe.lyon@linaro.org>
Jakub Jelinek <jakub@redhat.com>
Jonathan Wakely [Tue, 17 Dec 2024 21:32:19 +0000 (21:32 +0000)]
libstdc++: Fix std::future::wait_until for subsecond negative times [PR118093]
The current check for negative times (i.e. before the epoch) only checks
for a negative number of seconds. For a time 1ms before the epoch the
seconds part will be zero, but the futex syscall will still fail with an
EINVAL error. Extend the check to handle this case.
This change adds a redundant check in the headers too, so that we avoid
even calling into the library for negative times. Both checks can be
marked [[unlikely]]. The check in the headers avoids the cost of
splitting the time into seconds and nanoseconds and then making a PLT
call. The check inside the library matches where we were checking
already, and fixes existing binaries that were compiled against older
headers but use a newer libstdc++.so.6 at runtime.
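A minimal sketch of the extended check, using a hypothetical helper rather
than the actual library code:
=== cut here ===
#include <chrono>

// A time point 1ms before the epoch has a zero seconds part but a negative
// nanoseconds part, so both parts have to be inspected before the syscall.
constexpr bool
before_epoch (std::chrono::seconds s, std::chrono::nanoseconds ns)
{
  return s.count () < 0 || (s.count () == 0 && ns.count () < 0);
}

static_assert (before_epoch (std::chrono::seconds{0},
                             std::chrono::nanoseconds{-1000000}),
               "1ms before the epoch is detected");
=== cut here ===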
libstdc++-v3/ChangeLog:
PR libstdc++/118093
* include/bits/atomic_futex.h (_M_load_and_test_until_impl):
Return false for times before the epoch.
* src/c++11/futex.cc (_M_futex_wait_until): Extend check for
negative times to check for subsecond times. Add unlikely
attribute.
(_M_futex_wait_until_steady): Likewise.
* testsuite/30_threads/future/members/118093.cc: New test.
Jakub Jelinek [Wed, 8 Jan 2025 22:12:02 +0000 (23:12 +0100)]
c++: Honor complain in cp_build_function_call_vec for check_function_arguments warnings [PR117825]
The following testcase ICEs due to re-entering diagnostics.
When diagnosing -Wformat-security warning, we try to print instantiation
context, which calls tsubst with tf_none, but that in the end calls
cp_build_function_call_vec which calls check_function_arguments which
diagnoses another warning (again -Wformat-security).
The other check_function_arguments caller, build_over_call, doesn't call
that function if !(complain & tf_warning), so I think the best fix is
to do it the same in cp_build_function_call_vec as well.
2025-01-08 Jakub Jelinek <jakub@redhat.com>
PR c++/117825
* typeck.cc (cp_build_function_call_vec): Don't call
check_function_arguments if complain doesn't have tf_warning bit set.
Jakub Jelinek [Tue, 17 Dec 2024 09:13:24 +0000 (10:13 +0100)]
c++: Diagnose earlier non-static data members with cv containing class type [PR116108]
In the r10-6457 aka PR92593 fix, a check was added to reject earlier
non-static data members with current_class_type in templates,
as the deduction can then result in endless recursion in reshape_init.
It fixed the
template <class T>
struct S { S s = 1; };
S t{2};
crashes, but as the following testcase shows, didn't catch when there
are cv qualifiers on the non-static data member.
Fixed by using TYPE_MAIN_VARIANT.
2024-12-17 Jakub Jelinek <jakub@redhat.com>
PR c++/116108
gcc/cp/
* decl.cc (grokdeclarator): Pass TYPE_MAIN_VARIANT (type)
rather than type to same_type_p when checking if the non-static
data member doesn't have current class type.
gcc/testsuite/
* g++.dg/cpp1z/class-deduction117.C: New test.
Jakub Jelinek [Sat, 14 Dec 2024 10:27:20 +0000 (11:27 +0100)]
warn-access: Fix up matching_alloc_calls_p [PR118024]
The following testcase ICEs because of a bug in matching_alloc_calls_p.
The loop was apparently meant to be walking the two attribute chains
in lock-step, but doesn't really do that. If the first lookup_attribute
returns non-NULL, the second one is not done, so rmats in that case can
be some random unrelated attribute rather than a "malloc" attribute; the body
assumes that rmats, if non-NULL, is also a "malloc" attribute and relies
on its argument being a "malloc" argument, and if it is some other
attribute with an incompatible argument, it just crashes.
Now, fixing that in the obvious way, instead of doing
(amats = lookup_attribute ("malloc", amats))
|| (rmats = lookup_attribute ("malloc", rmats))
in the condition do
((amats = lookup_attribute ("malloc", amats)),
(rmats = lookup_attribute ("malloc", rmats)),
(amats || rmats))
fixes the testcase but regresses Wmismatched-dealloc-{2,3}.c tests.
The problem is that walking the attribute lists in a lock-step is obviously
a very bad idea, there is no requirement that the same deallocators are
present in the same order on both decls, e.g. there could be an extra malloc
attribute without argument in just one of the lists, or the order of say
free/realloc could be swapped, etc. We don't generally document nor enforce
any particular ordering of attributes (even when for some attributes we just
handle the first one rather than all).
So, this patch instead simply splits it into two loops, the first one walks
alloc_decl attributes, the second one walks dealloc_decl attributes.
If the malloc attribute argument is a built-in, that doesn't change
anything, and otherwise we have the chance to populate the whole
common_deallocs hash_set in the first loop and then can check it in the
second one (and don't need to use more expensive add method on it, can just
check contains there). Not to mention that it also fixes the case when
the function would incorrectly return true if there wasn't a common
deallocator between the two, but dealloc_decl had 2 malloc attributes with
the same deallocator.
2024-12-14 Jakub Jelinek <jakub@redhat.com>
PR middle-end/118024
* gimple-ssa-warn-access.cc (matching_alloc_calls_p): Walk malloc
attributes of alloc_decl and dealloc_decl in separate loops rather
than in lock-step. Use common_deallocs.contains rather than
common_deallocs.add in the second loop.
Jakub Jelinek [Fri, 13 Dec 2024 23:41:00 +0000 (00:41 +0100)]
cse: Fix up record_jump_equiv checks [PR117095]
The following testcase is miscompiled on s390x-linux with -O2 -march=z15.
The problem happens during cse2, which sees in an extended basic block
(jump_insn 217 78 216 10 (parallel [
(set (pc)
(if_then_else (ne (reg:SI 165)
(const_int 1 [0x1]))
(label_ref 216)
(pc)))
(set (reg:SI 165)
(plus:SI (reg:SI 165)
(const_int -1 [0xffffffffffffffff])))
(clobber (scratch:SI))
(clobber (reg:CC 33 %cc))
]) "t.c":14:17 discrim 1 2192 {doloop_si64}
(int_list:REG_BR_PROB 955630228 (nil))
-> 216)
...
(insn 99 98 100 12 (set (reg:SI 138)
(const_int 1 [0x1])) "t.c":9:31 1507 {*movsi_zarch}
(nil))
(insn 100 99 103 12 (parallel [
(set (reg:SI 137)
(minus:SI (reg:SI 138)
(subreg:SI (reg:HI 135 [ a ]) 0)))
(clobber (reg:CC 33 %cc))
]) "t.c":9:31 1904 {*subsi3}
(expr_list:REG_DEAD (reg:SI 138)
(expr_list:REG_DEAD (reg:HI 135 [ a ])
(expr_list:REG_UNUSED (reg:CC 33 %cc)
(nil)))))
Note, cse2 has df_note_add_problem () before df_analyze, which add
(expr_list:REG_UNUSED (reg:SI 165)
(expr_list:REG_UNUSED (reg:CC 33 %cc)
notes to the first insn (correctly so, %cc is clobbered there and pseudo
165 isn't used after the insn).
Now, cse_extended_basic_block has an extra optimization on conditional
jumps, where it records equivalence on the edge which continues in the ebb.
Here it sees (ne reg:SI 165) (const_int 1) is false on the edge and
remembers that pseudo 165 is comparison equivalent to (const_int 1),
so on insn 100 it decides to replace (reg:SI 138) with (reg:SI 165).
This optimization isn't correct here though, because the JUMP_INSN has
multiple sets. Before r0-77890 record_jump_equiv has been called from
cse_insn guarded on n_sets == 1 && any_condjump_p (insn), so it wouldn't
be done on the above JUMP_INSN where n_sets == 2. But since that change
it is guarded with single_set (insn) && any_condjump_p (insn) and that
is true because of the REG_UNUSED note. Looking at that note is
inappropriate in CSE though, because the whole intent of the pass is to
extend the lifetimes of the pseudos if equivalence is found, so the fact
that there is REG_UNUSED note for (reg:SI 165) and that the reg isn't used
later doesn't imply that it won't be used after the optimization.
So, unless we manage to process the other sets on the JUMP_INSN (it wouldn't
be terribly hard in this exact case, the doloop insn decreases the register
by 1 and so we could just record equivalence to (const_int 0) instead, but
generally it might be hard), we should IMHO just punt if there are multiple
sets.
The patch below adds !multiple_sets (insn) check instead of replacing with
it the single_set (insn) check, because apparently any_condjump_p uses
pc_set which supports the case where PATTERN is a SET to PC (that is a
single_set (insn) && !multiple_sets (insn), PATTERN is a PARALLEL with a
single SET to PC (likewise) and some CLOBBERs, PARALLEL with two or more
SETs where the first one is SET to PC (that could be single_set (insn)
with REG_UNUSED notes but is not !multiple_sets (insn)) or PATTERN
is UNSPEC/UNSPEC_VOLATILE with SET inside of it. For the last case
!multiple_sets (insn) will be true, but IMHO we shouldn't try to derive
anything from those because we haven't checked the rest of the UNSPEC*
and we don't really know what it does.
Jakub Jelinek [Wed, 11 Dec 2024 16:28:47 +0000 (17:28 +0100)]
c++: allow stores to anon union vars to change current union member in constexpr [PR117614]
Since r14-4771 the FE tries to differentiate between cases where the lhs
of a store allows changing the current union member and cases where it
doesn't, and cases where it doesn't includes everything that has gone
through the cxx_eval_constant_expression path on the lhs.
As the testcase shows, DECL_ANON_UNION_VAR_P vars were handled like that
too, even when stores to them are the only way how to change the current
union member in the sources.
So, the following patch just handles that case manually without calling
cxx_eval_constant_expression and without setting evaluated to true.
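A hypothetical illustration of the pattern described above (it relies on the
C++20 rules for changing the active union member during constant evaluation
and is not necessarily the committed testcase):
=== cut here ===
// The assignment to `i` is a store to a DECL_ANON_UNION_VAR_P variable and is
// the only way in the source to make `i` the active member.
constexpr int
f ()
{
  union { int i; long l; };
  i = 1;
  return i;
}

static_assert (f () == 1, "store through the anonymous union member");
=== cut here ===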
2024-12-11 Jakub Jelinek <jakub@redhat.com>
PR c++/117614
* constexpr.cc (cxx_eval_store_expression): For stores to
DECL_ANON_UNION_VAR_P vars just continue with DECL_VALUE_EXPR
of it, without setting evaluated to true or full
cxx_eval_constant_expression.
Jakub Jelinek [Thu, 5 Dec 2024 12:01:21 +0000 (13:01 +0100)]
doloop: Fix up doloop df use [PR116799]
The following testcases are miscompiled on s390x-linux, because the
doloop_optimize
/* Ensure that the new sequence doesn't clobber a register that
is live at the end of the block. */
{
bitmap modified = BITMAP_ALLOC (NULL);
for (rtx_insn *i = doloop_seq; i != NULL; i = NEXT_INSN (i))
note_stores (i, record_reg_sets, modified);
basic_block loop_end = desc->out_edge->src;
bool fail = bitmap_intersect_p (df_get_live_out (loop_end), modified);
check doesn't work as intended.
The problem is that it uses df, but the df analysis was only done using
iv_analysis_loop_init (loop);
->
df_analyze_loop (loop);
which computes df inside on the bbs of the loop.
While loop_end bb is inside of the loop, df_get_live_out computed that
way includes registers set in the loop and used at the start of the next
iteration, but doesn't include registers set in the loop (or before the
loop) and used after the loop.
The following patch fixes that by doing whole function df_analyze first,
changes the loop iteration mode from 0 to LI_ONLY_INNERMOST (on many
targets which use the can_use_doloop_if_innermost target hook and so are known
to only handle innermost loops) or LI_FROM_INNERMOST (I think only bfin
actually allows non-innermost loops) and checking not just
df_get_live_out (loop_end) (that is needed for something used by the
next iteration), but also df_get_live_in (desc->out_edge->dest),
i.e. what will be used after the loop. df of such a bb shouldn't
be affected by the df_analyze_loop and so should be from df_analyze
of the whole function.
2024-12-05 Jakub Jelinek <jakub@redhat.com>
PR rtl-optimization/113994
PR rtl-optimization/116799
* loop-doloop.cc: Include targhooks.h.
(doloop_optimize): Also punt on intersection of modified
with df_get_live_in (desc->out_edge->dest).
(doloop_optimize_loops): Call df_analyze. Use
LI_ONLY_INNERMOST or LI_FROM_INNERMOST instead of 0 as
second loops_list argument.
* gcc.c-torture/execute/pr116799.c: New test.
* g++.dg/torture/pr113994.C: New test.
Jakub Jelinek [Tue, 3 Dec 2024 10:16:37 +0000 (11:16 +0100)]
bitintlower: Fix up ?ROTATE_EXPR lowering [PR117847]
In the ?ROTATE_EXPR lowering I forgot to handle rotation by 0 correctly.
INTEGER_CST 0 is very unlikely, it would be probably folded away, but
a non-constant count can't use just p - n because then the shift count
is out of bounds for zero.
In the FE I use n == 0 ? x : (x << n) | (x >> (p - n)) but bitintlower
here isn't prepared at this point to have the bb split and I am not sure if
using COND_EXPR is a good idea either, so the patch uses (p - n) % p.
Perhaps I should just disable lowering the rotate in the FE for the
non-mode precision BITINT_TYPEs too.
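To illustrate the arithmetic with a hypothetical helper (not GCC code): for a
p-bit rotate by n, p - n is out of range as a shift count when n == 0, while
(p - n) % p always stays in [0, p):
=== cut here ===
// Rotate the low p bits of x left by n, assuming 0 <= n < p <= 64.
unsigned long long
rotl_bits (unsigned long long x, unsigned n, unsigned p)
{
  unsigned long long mask = p == 64 ? ~0ULL : (1ULL << p) - 1;
  unsigned m = (p - n) % p;      // plain p - n would equal p when n == 0
  return ((x << n) | ((x & mask) >> m)) & mask;
}
=== cut here ===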
2024-12-03 Jakub Jelinek <jakub@redhat.com>
PR middle-end/117847
* gimple-lower-bitint.cc (gimple_lower_bitint) <case LROTATE_EXPR>:
Use m = (p - n) % p instead of m = p - n for the other shift count.
Jakub Jelinek [Sat, 30 Nov 2024 10:19:12 +0000 (11:19 +0100)]
openmp: Add crtoffloadtableS.o and use it [PR117851]
Unlike crtoffload{begin,end}.o which just define some symbols at the start/end
of the various .gnu.offload* sections, crtoffloadtable.o contains
const void *const __OFFLOAD_TABLE__[]
__attribute__ ((__visibility__ ("hidden"))) =
{
&__offload_func_table, &__offload_funcs_end,
&__offload_var_table, &__offload_vars_end,
&__offload_ind_func_table, &__offload_ind_funcs_end,
};
The problem is that linking this into PIEs or shared libraries doesn't
work when it is compiled without -fpic/-fpie - __OFFLOAD_TABLE__ for non-PIC
code is put into .rodata section, but it really needs relocations, so for
PIC it should go into .data.rel.ro/.data.rel.ro.local.
As I think we don't want .data.rel.ro section in non-PIE binaries, this patch
follows the path of e.g. crtbegin.o vs. crtbeginS.o and adds crtoffloadtableS.o
next to crtoffloadtable.o, where crtoffloadtableS.o is compiled with -fpic.
2024-11-30 Jakub Jelinek <jakub@redhat.com>
PR libgomp/117851
gcc/
* lto-wrapper.cc (find_crtoffloadtable): Add PIE_OR_SHARED argument,
search for crtoffloadtableS.o rather than crtoffloadtable.o if
true.
(run_gcc): Add pie_or_shared variable. If OPT_pie or OPT_shared or
OPT_static_pie is seen, set pie_or_shared to true, if OPT_no_pie is
seen, set pie_or_shared to false. Pass it to find_crtoffloadtable.
libgcc/
* configure.ac (extra_parts): Add crtoffloadtableS.o.
* Makefile.in (crtoffloadtableS$(objext)): New goal.
* configure: Regenerated.
Jakub Jelinek [Thu, 28 Nov 2024 13:31:44 +0000 (14:31 +0100)]
docs: Fix up __sync_* documentation [PR117642]
The PR14311 commit which added support for __sync_* builtins documented that
there is a warning if a particular operation cannot be implemented.
But neither that commit nor anything later on implemented such a warning; it was
always silent generation of the mentioned calls (which can in most cases
result in linker errors of course because those functions aren't implemented
anywhere, in libatomic or elsewhere in code shipped in gcc).
So, the following patch just adjust the documentation to match the
implementation.
2024-11-28 Jakub Jelinek <jakub@redhat.com>
PR target/117642
* doc/extend.texi: Remove documentation of warning for unimplemented
__sync_* operations, such warning has never been implemented.
Jakub Jelinek [Thu, 28 Nov 2024 09:23:47 +0000 (10:23 +0100)]
builtins: Handle BITINT_TYPE in __builtin_iseqsig folding [PR117802]
In check_builtin_function_arguments in the _BitInt patchset I've changed
INTEGER_TYPE tests to INTEGER_TYPE or BITINT_TYPE, but haven't done the
same in fold_builtin_iseqsig, which now ICEs because of that.
The following patch fixes that.
BTW, that TYPE_PRECISION (type0) >= TYPE_PRECISION (type1) test
for REAL_TYPE vs. REAL_TYPE looks pretty random and dangerous, I think
it would be useful to handle this builtin also in the C and C++ FEs,
if both arguments have REAL_TYPE, use the FE specific routine to decide
which types to use and error if a comparison between types would be
erroneous (e.g. complain about _Decimal* vs. float/double/long
double/_Float*, pick up the preferred type, complain about
__ibm128 vs. _Float128 in C++, etc.).
But the FEs can just promote one argument to the other in that case
and keep fold_builtin_iseqsig as is for say Fortran and other FEs.
2024-11-28 Jakub Jelinek <jakub@redhat.com>
PR c/117802
* builtins.cc (fold_builtin_iseqsig): Handle BITINT_TYPE like
INTEGER_TYPE.
* gcc.dg/builtin-iseqsig-1.c: New test.
* gcc.dg/bitint-118.c: New test.
Jakub Jelinek [Wed, 27 Nov 2024 16:29:28 +0000 (17:29 +0100)]
c: Fix sizeof error recovery [PR117745]
Compilation of the following testcase hangs forever after emitting first
error. The problem is that in one place we just return error_mark_node
directly rather than going through c_expr_sizeof_expr or c_expr_sizeof_type.
The parsing of the expression could have called record_maybe_used_decl
though, but nothing calls pop_maybe_used which needs to be called after
parsing of every sizeof/typeof, successful or not.
At the end of the toplevel declaration we free the parser_obstack and in
another function record_maybe_used_decl is called again and due to the
missing pop_maybe_used we end up with a cycle in the chain.
The following patch fixes it by just setting the error on the expression and
using goto to jump to sizeof_expr:
c_inhibit_evaluation_warnings--;
in_sizeof--;
mark_exp_read (expr.value);
if (TREE_CODE (expr.value) == COMPONENT_REF
&& DECL_C_BIT_FIELD (TREE_OPERAND (expr.value, 1)))
error_at (expr_loc, "%<sizeof%> applied to a bit-field");
result = c_expr_sizeof_expr (expr_loc, expr);
where c_expr_sizeof_expr will do:
struct c_expr ret;
if (expr.value == error_mark_node)
{
ret.value = error_mark_node;
ret.original_code = ERROR_MARK;
ret.original_type = NULL;
ret.m_decimal = 0;
pop_maybe_used (false);
}
...
return ret;
which is exactly what the old code did manually except for the missing
pop_maybe_used call. mark_exp_read does nothing on error_mark_node and
error_mark_node doesn't have COMPONENT_REF tree_code.
2024-11-27 Jakub Jelinek <jakub@redhat.com>
PR c/117745
* c-parser.cc (c_parser_sizeof_expression): If type_name is NULL,
just expr.set_error () and goto sizeof_expr instead of doing error
recovery manually.
Jakub Jelinek [Tue, 26 Nov 2024 08:46:51 +0000 (09:46 +0100)]
builtins: Fix up DFP ICEs on __builtin_fpclassify [PR102674]
This patch is similar to the one I've just posted, __builtin_fpclassify also
needs to print decimal float minimum differently and use real_from_string3.
Plus I've done some formatting fixes.
2024-11-26 Jakub Jelinek <jakub@redhat.com>
PR middle-end/102674
* builtins.cc (fold_builtin_fpclassify): Use real_from_string3 rather
than real_from_string. Use "1E%d" format string rather than "0x1p%d"
for decimal float minimum. Formatting fixes.
Jakub Jelinek [Tue, 26 Nov 2024 08:45:21 +0000 (09:45 +0100)]
builtins: Fix up DFP ICEs on __builtin_is{inf,finite,normal} [PR43374]
__builtin_is{inf,finite,normal} builtins ICE on _Decimal{32,64,128,64x}
operands unless those operands are constant.
The problem is that we fold the builtins to comparisons with the largest
finite number, but
a) get_max_float was only handling binary floats
b) real_from_string again assumes binary float
and so we were ICEing in the build_real called after the two calls.
This patch adds decimal handling into get_max_float (well, moves it
from c-cppbuiltin.cc which was printing those for __DEC{32,64,128}_MAX__
macros) and uses real_from_string3 (perhaps it is time to rename it
to just real_from_string now that we can use function overloading)
so that it handles both binary and decimal floats.
2024-11-26 Jakub Jelinek <jakub@redhat.com>
PR middle-end/43374
gcc/
* real.cc (get_max_float): Handle decimal float.
* builtins.cc (fold_builtin_interclass_mathfn): Use
real_from_string3 rather than real_from_string. Use
"1E%d" format string rather than "0x1p%d" for decimal
float minimum.
gcc/c-family/
* c-cppbuiltin.cc (builtin_define_decimal_float_constants): Use
get_max_float.
gcc/testsuite/
* gcc.dg/dfp/pr43374.c: New test.
Jakub Jelinek [Fri, 22 Nov 2024 18:47:52 +0000 (19:47 +0100)]
c-family: Yet another fix for _BitInt & __sync_* builtins [PR117641]
Sorry, the last patch only partially fixed the __sync_* ICEs with
_BitInt(128) on ia32.
Even for !fetch we need to error out and return 0. I was afraid of
APIs like __atomic_exchange/__atomic_compare_exchange, those obviously
need to be supported even on _BitInt(128) on ia32, but they actually never
sync_resolve_size, they are handled by adding the size argument and using
the library version much earlier.
For fetch && !orig_format (i.e. __atomic_fetch_* etc.) we need to return -1
so that we handle it with a manual __atomic_load +
__atomic_compare_exchange loop in the caller, all other cases should
be rejected.
2024-11-22 Jakub Jelinek <jakub@redhat.com>
PR c/117641
* c-common.cc (sync_resolve_size): For size 16 with _BitInt
on targets where TImode isn't supported, use goto incompatible if
!fetch.
Jakub Jelinek [Thu, 21 Nov 2024 08:38:01 +0000 (09:38 +0100)]
phiopt: Fix a pasto in spaceship_replacement [PR117612]
When working on the PR117612 fix, I've noticed a pasto in
tree-ssa-phiopt.cc (spaceship_replacement).
The code is
if (absu_hwi (tree_to_shwi (arg2)) != 1)
  return false;
if (e1->flags & EDGE_TRUE_VALUE)
  {
    if (tree_to_shwi (arg0) != 2
        || absu_hwi (tree_to_shwi (arg1)) != 1
        || wi::to_widest (arg1) == wi::to_widest (arg2))
      return false;
  }
else if (tree_to_shwi (arg1) != 2
         || absu_hwi (tree_to_shwi (arg0)) != 1
         || wi::to_widest (arg0) == wi::to_widest (arg1))
  return false;
where arg{0,1,2,3} are PHI args and wants to ensure that if e1 is a
true edge, then arg0 is 2 and one of arg{1,2} is -1 and one is 1,
otherwise arg1 is 2 and one of arg{0,2} is -1 and one is 1.
But due to the pasto, the latter case doesn't verify that arg0
is different from arg2; they could be both -1 or both 1 and we wouldn't
punt. The wi::to_widest (arg0) == wi::to_widest (arg1) test
is always false when we've made sure in the earlier conditions that
arg1 is 2 and arg0 is -1 or 1, so never 2.
2024-11-21 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/94589
PR tree-optimization/117612
* tree-ssa-phiopt.cc (spaceship_replacement): Fix up
a pasto in check when arg1 is 2.
Jakub Jelinek [Tue, 19 Nov 2024 19:36:00 +0000 (20:36 +0100)]
c-family: Fix ICE with __sync_*_and_* on _BitInt [PR117641]
Only the __atomic_* builtins are meant to work on arbitrary _BitInt types
(if not supported in hw we emit a CAS loop which uses __atomic_load_*
in that case). The compatibility __sync_* builtins work only if there
is a corresponding normal integral type (for _BitInt on 32-bit ARM
we'll need to limit even that to types with no padding, because the
padding bits are well defined there and the hw or libatomic __sync_*
APIs don't guarantee that). IMHO people shouldn't mix very old APIs with
very new ones, and I don't see a replacement for the __atomic_load_*.
For size > 16 that is how it already correctly behaves:
in the hunk shown in the patch it is immediately followed by code
which returns -1 for the __atomic_* builtins (i.e. !orig_format),
which causes the caller to use atomic_bitint_fetch_using_cas_loop,
and otherwise emits a diagnostic and returns 0 (which causes the
caller to punt). But for size == 16, if TImode isn't supported (i.e.
mostly 32-bit arches), we return (correctly) -1 if !orig_format,
so we again force atomic_bitint_fetch_using_cas_loop on those arches
for e.g. _BitInt(115), but for orig_format the function returns
16 as if it could do a 16 byte __sync_*_and_* (which it can't,
because TImode isn't supported; for 16 bytes it can only do
(perhaps using libatomic) a normal compare and swap). So we need
to error and return 0 rather than return 16.
The following patch ensures that.
2024-11-19 Jakub Jelinek <jakub@redhat.com>
PR c/117641
* c-common.cc (sync_resolve_size): For size == 16 fetch of
BITINT_TYPE, if TImode isn't a supported scalar mode, diagnose
and return 0 if orig_format instead of returning 16.
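For illustration, a hedged sketch contrasting the two APIs on a 16-byte
_BitInt, again assuming a 32-bit target without TImode (not the PR
testcase; the names are made up):

  _BitInt(115) v;

  void
  fetch_ops (_BitInt(115) x)
  {
    /* !orig_format: handled by the caller through a manual CAS loop
       (atomic_bitint_fetch_using_cas_loop).  */
    __atomic_fetch_add (&v, x, __ATOMIC_SEQ_CST);
    /* orig_format: used to be accepted as if a 16 byte
       __sync_fetch_and_add existed; now diagnosed on such targets.  */
    __sync_fetch_and_add (&v, x);
  }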
Jakub Jelinek [Tue, 19 Nov 2024 09:26:44 +0000 (10:26 +0100)]
expand: Fix up ICE on VCE from _Complex types to _BitInt [PR117458]
extract_bit_field can't handle extraction of non-mode precision
from complex mode operands which don't live in memory, e.g. gen_lowpart
crashes on those.
The following patch in that case defers the extract_bit_field call
until op0 is forced into memory.
2024-11-19 Jakub Jelinek <jakub@redhat.com>
PR middle-end/117458
* expr.cc (expand_expr_real_1) <case VIEW_CONVERT_EXPR>: Don't
call extract_bit_field if op0 has complex mode and isn't a MEM,
instead first force op0 into memory and then call extract_bit_field.
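A hypothetical sketch (not the PR testcase) of how such a view-convert can
arise: type punning a register-only complex value into a _BitInt of
non-mode precision, which later passes may turn into a VIEW_CONVERT_EXPR:

  _BitInt(127)
  punned (_Complex double c)
  {
    /* Reading a different union member than the one last written is a
       GCC-documented form of type punning.  */
    union { _Complex double c; _BitInt(127) b; } u;
    u.c = c;
    return u.b;
  }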
Jakub Jelinek [Tue, 19 Nov 2024 09:25:57 +0000 (10:25 +0100)]
bitintlower: Handle PAREN_EXPR [PR117459]
The following patch handles PAREN_EXPR in bitint lowering, treating it
as an optimization barrier so that temporary arithmetic from inside the
PAREN_EXPR isn't mixed with temporary arithmetic from outside of it.
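For illustration, a hedged sketch of one way a PAREN_EXPR can reach bitint
lowering, assuming __builtin_assoc_barrier (which GCC represents as
PAREN_EXPR) is accepted on _BitInt operands:

  _BitInt(256)
  keep_grouping (_BitInt(256) a, _BitInt(256) b, _BitInt(256) c)
  {
    /* The (a + b) group acts as a barrier: its temporary arithmetic
       must not be mixed or re-associated with the + c outside it.  */
    return __builtin_assoc_barrier (a + b) + c;
  }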
Jakub Jelinek [Sat, 9 Nov 2024 15:45:44 +0000 (16:45 +0100)]
m2: Fix up dependencies some more
Every now and then my x86_64-linux bootstrap fails due to missing
dependencies somewhere in m2, usually during stage3. I'm using
make -j32 and run 2 bootstraps concurrently (x86_64-linux and i686-linux)
on the same box.
Last night the same happened to me again,
with the first error
In file included from ./tm.h:29,
                 from ../../gcc/backend.h:28,
                 from ../../gcc/m2/gm2-gcc/gcc-consolidation.h:27,
                 from m2/gm2-compiler-boot/M2AsmUtil.c:26:
../../gcc/config/i386/i386.h:2484:10: fatal error: insn-attr-common.h: No such file or directory
 2484 | #include "insn-attr-common.h"
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[3]: *** [../../gcc/m2/Make-lang.in:1576: m2/gm2-compiler-boot/M2AsmUtil.o] Error 1
make[3]: *** Waiting for unfinished jobs....
Now, gcc/Makefile.in has a general rule:
# In order for parallel make to really start compiling the expensive
# objects from $(OBJS) as early as possible, build all their
# prerequisites strictly before all objects.
$(ALL_HOST_OBJS) : | $(generated_files)
which ensures that everything that might depend on the generated files
waits for those to be generated.
The above error clearly shows that such waiting didn't happen for
m2/gm2-compiler-boot/M2AsmUtil.o and some others.
ALL_HOST_OBJS includes $(ALL_HOST_FRONTEND_OBJS),
where the latter is
ALL_HOST_FRONTEND_OBJS = $(foreach v,$(CONFIG_LANGUAGES),$($(v)_OBJS))
m2_OBJS already includes various *.o files, for all those we wait until
the generated files are generated. Though, seems
cc1gm2 depends on m2/stage1/cc1gm2 (which is just copied there),
and that depends on m2/gm2-compiler-boot/m2flex.o,
$(GM2_C_OBJS) and m2/gm2-gcc/rtegraph.o already included in m2_OBJS,
but also on
$(GM2_LIBS_BOOT) $(MC_LIBS)
where
MC_LIBS=m2/mc-boot-ch/Glibc.o m2/mc-boot-ch/Gmcrts.o
GM2_LIBS_BOOT = m2/gm2-compiler-boot/gm2.a \
m2/gm2-libs-boot/libgm2.a \
$(GM2-BOOT-O)
GM2-BOOT-O isn't defined, and the two libraries depend on
$(BUILD-LIBS-BOOT) $(BUILD-COMPILER-BOOT).
So, the following patch adds those to m2_OBJS.
I'm not sure if something further is needed, like some objects
used to build the helper programs, mc and whatever else; I guess it
depends on whether they use or can use say tm.h or similar headers
which depend on the generated headers.
2024-11-09 Jakub Jelinek <jakub@redhat.com>
gcc/m2/
* Make-lang.in (m2_OBJS): Add $(BUILD-LIBS-BOOT),
$(BUILD-COMPILER-BOOT) and $(MC_LIBS).
Jakub Jelinek [Fri, 8 Nov 2024 12:36:05 +0000 (13:36 +0100)]
c++: Fix ICE on constexpr virtual function [PR117317]
Since C++20 virtual methods can be constexpr, and if they are
constexpr evaluated, we choose tentative_decl_linkage for those,
defer their output and decide again at at_eof time.
On the following testcases we ICE though, because if
expand_or_defer_fn_1 decides to use tentative_decl_linkage, it
returns true and the caller in that case calls emit_associated_thunks,
where use_thunk, which it calls, asserts DECL_INTERFACE_KNOWN on the
thunk destination, which isn't the case for tentative_decl_linkage.
The following patch fixes the ICE by not emitting the thunks
for the DECL_DEFER_OUTPUT fns just yet but waiting until at_eof
time when we return to those.
Note, the second testcase has ICEd since r0-110035 with -std=c++0x
already, before it gets a chance to diagnose the constexpr virtual method.
2024-11-08 Jakub Jelinek <jakub@redhat.com>
PR c++/117317
gcc/cp/
* semantics.cc (emit_associated_thunks): Do nothing for
!DECL_INTERFACE_KNOWN && DECL_DEFER_OUTPUT fns.
gcc/testsuite/
* g++.dg/cpp2a/pr117317-1.C: New test.
* g++.dg/cpp2a/pr117317-2.C: New test.
Jakub Jelinek [Wed, 6 Nov 2024 09:22:13 +0000 (10:22 +0100)]
store-merging: Apply --param=store-merging-max-size= in more spots [PR117439]
Store merging assumes a merged region won't be too large. The assumption
shows up e.g. in using inappropriate types in various spots (e.g. int for
bit sizes and bit positions in a few spots, or unsigned for the total size
in bytes of the merged region), in doing XNEWVEC for the whole total size
of the merged region and preparing everything in there, and even in using
XALLOCAVEC in two spots. The last case is what was breaking the test below
in the patch; a 64MB XALLOCAVEC is just too large, but even with that
fixed I think we just shouldn't be merging gigabyte-sized merge groups.
We already have the --param=store-merging-max-size= parameter, right now
with a 65536 byte maximum (if needed, we could raise that limit a little
bit). That parameter is currently used when merging two adjacent stores:
if the size of the already merged bitregion together with the new store's
bitregion is above that limit, we don't merge them.
I guess initially that was sufficient; at that time a store was always
limited to MAX_BITSIZE_MODE_ANY_INT bits.
But later on we added support for empty ctors ({} and even later
{CLOBBER}) and also added another spot where we merge further stores into
the merge group: if there is some overlap, we can merge various other
stores in one coalesce_immediate_stores iteration.
And we weren't applying the --param=store-merging-max-size= parameter
in either of those cases. So a single store can be gigabytes long, and
if there is some overlap, we can extend the region to gigabytes in size
again.
The following patch attempts to apply that parameter even in those cases.
So, when testing whether it should merge the merged group with info (we've
already punted if those together are above the parameter) and some other
stores, the first two hunks just punt if that would make the merge group
too large. And the third hunk doesn't even add stores which are over the
limit.
2024-11-06 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/117439
* gimple-ssa-store-merging.cc
(imm_store_chain_info::coalesce_immediate_stores): Punt if merging of
any of the additional overlapping stores would result in growing the
bitregion size over param_store_merging_max_size.
(pass_store_merging::process_store): Terminate all aliasing chains
for stores with bitregion larger than param_store_merging_max_size.
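For illustration, a hypothetical sketch (not the PR testcase) of the kind
of code that can create a gigantic merge group: a huge empty-constructor
store (using the C23 {} empty initializer) with an adjacent small store:

  struct huge { char buf[64 * 1024 * 1024]; int tail; };

  void
  init (struct huge *p)
  {
    /* The {} store alone covers 64MB; merging the adjacent store of
       p->tail into the same group meant preparing the whole region in
       one buffer, including a 64MB XALLOCAVEC.  */
    *p = (struct huge) {};
    p->tail = 1;
  }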
Jakub Jelinek [Wed, 6 Nov 2024 09:21:09 +0000 (10:21 +0100)]
store-merging: Don't use sub_byte_op_p mode for empty_ctor_p unless necessary [PR117439]
encode_tree_to_bitpos uses the more expensive sub_byte_op_p mode in which
it has to allocate a buffer and do various extra work like shifting the bits
etc. if bitlen or bitpos aren't multiples of BITS_PER_UNIT, or if bitlen
doesn't have a corresponding integer mode.
The last case is explained later in the comments:
/* The native_encode_expr machinery uses TYPE_MODE to determine how many
bytes to write. This means it can write more than
ROUND_UP (bitlen, BITS_PER_UNIT) / BITS_PER_UNIT bytes (for example
write 8 bytes for a bitlen of 40). Skip the bytes that are not within
bitlen and zero out the bits that are not relevant as well (that may
contain a sign bit due to sign-extension). */
Now, we later added empty_ctor_p support, either a {} CONSTRUCTOR
or {CLOBBER}, which doesn't use native_encode_expr at all, just memset,
so that case doesn't need those fancy games unless bitlen or bitpos
aren't multiples of BITS_PER_UNIT (unlikely, but let's pretend it is
possible).
The following patch makes us use the fast path even for empty_ctor_p
stores which occupy full bytes: we can just memset them in the provided
buffer and don't need to XALLOCAVEC another buffer.
This patch in itself fixes the testcase from the PR (which was about using
a huge XALLOCAVEC), but I want to do some other changes, to be posted in a
followup patch.
2024-11-06 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/117439
* gimple-ssa-store-merging.cc (encode_tree_to_bitpos): For
empty_ctor_p use !sub_byte_op_p even if bitlen doesn't have an
integral mode.
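A hypothetical sketch (not the PR testcase) of empty-constructor stores
whose bit length is byte-aligned but has no corresponding integer mode,
which now take the cheap memset path:

  struct five { char b[5]; };          /* 40 bits, no integer mode.  */
  struct pair { struct five a, b; };

  void
  clear_pair (struct pair *p)
  {
    /* Two adjacent 40-bit empty-constructor stores; encoding them into
       the merged buffer no longer needs the sub_byte_op_p machinery or
       a separate scratch buffer.  */
    p->a = (struct five) {};
    p->b = (struct five) {};
  }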
Stafford Horne [Mon, 6 Jan 2025 12:12:40 +0000 (12:12 +0000)]
or1k: add .note.GNU-stack section on linux
In the OpenRISC build we get the following warning:
ld: warning: __modsi3_s.o: missing .note.GNU-stack section implies executable stack
ld: NOTE: This behaviour is deprecated and will be removed in a future version of the linker
Fix this by adding a .note.GNU-stack section to indicate that the stack
does not need to be executable for the lib1funcs.
Note, this is also needed for the upcoming glibc 2.41.
libgcc/
* config/or1k/lib1funcs.S: Add .note.GNU-stack section on linux.