d: Fix PR 108842: Cannot use enum array with -fno-druntime
Restrict generating CONST_DECLs for D manifest constants to scalars
without pointers. It shouldn't happen that a reference to a
manifest constant has not been expanded within a function body during
codegen, but it has been found to occur in older versions of the D
front-end (PR98277), so if the decl of a non-scalar constant is
requested, just return its initializer as an expression.
PR d/108842
gcc/d/ChangeLog:
* decl.cc (DeclVisitor::visit (VarDeclaration *)): Only emit scalar
manifest constants.
(get_symbol_decl): Don't generate CONST_DECL for non-scalar manifest
constants.
* imports.cc (ImportVisitor::visit (VarDeclaration *)): New method.
gcc/testsuite/ChangeLog:
* gdc.dg/pr98277.d: Add more tests.
* gdc.dg/pr108842.d: New test.
Fix power10 fusion bug with prefixed loads, PR target/105325
This change fixes PR target/105325, a bug where an invalid lwa instruction
is generated due to power10 fusion of a load instruction to a GPR with a
compare immediate instruction whose immediate is -1, 0, or 1.
In some cases, GCC would generate a load whose offset is too large to fit
into the normal (non-prefixed) load instruction.
In particular, loads from the stack might originally have a small offset, so
that the load is not a prefixed load. However, after the stack has been set up
and register allocation has been done, the offset can become large enough that
a prefixed load instruction is required.
The support for prefixed loads did not consider that patterns with a fused load
and compare might have a prefixed address. Without this support, the proper
prefixed load won't be generated.
In the original code, when the split2 pass is run after reload has finished,
the ds_form_mem_operand predicate that was used for lwa and ld no longer
returns true. When the pattern was created, ds_form_mem_operand recognized the
insn as valid since the offset was small, but after register allocation it no
longer returned true, so the insn could not be split. Since the insn was not
split and the prefix support did not indicate that a prefixed instruction was
used, the wrong load was generated.
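For illustration, a hypothetical reduction of the scenario (not the committed
testcase; the frame size and exact shape are assumptions):

    /* The large frame pushes the slot for 'x' beyond the 16-bit DS-form
       displacement range, so the fused sign-extending load needs plwa.  */
    extern void use (int *);

    long
    foo (void)
    {
      int big[32768];
      int x;
      use (big);
      use (&x);
      return x == -1 ? 10 : 20;   /* lwa/plwa fused with cmpdi -1.  */
    }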
The solution involves:
1) Don't use ds_form_mem_operand for ld and lwa, always use
non_update_memory_operand.
2) Delete ds_form_mem_operand since it is no longer used.
3) Use the "YZ" constraints for ld/lwa instead of "m".
4) If we don't need to sign extend the lwa, convert it to lwz, and use
cmpwi instead of cmpdi. Adjust the insn name to reflect the code
generated.
5) Ensure that the insn using lwa will be recognized as having a prefixed
operand (and hence the insn length will be 16 bytes instead of 8
bytes).
5a) Set the prefixed and maybe_prefix attributes to know that
fused_load_cmpi are also load insns;
5b) In the case where we are just setting CC and not using the memory
afterward, set the clobber to use a DI register, and put an
explicit sign_extend operation in the split;
5c) Set the sign_extend attribute to "yes" for lwa.
5d) 5a-5c are the things that prefixed_load_p in rs6000.cc checks to
ensure that lwa is treated as a ds-form instruction and not as
a d-form instruction (i.e. lwz).
6) Add a new test case for this case.
7) Adjust the insn counts in fusion-p10-ldcmpi.c. Because we are no
longer using ds_form_mem_operand, the ld and lwa instructions will fuse
x-form (reg+reg) addresses in addition to ds-form (reg+offset or reg).
2023-06-23 Michael Meissner <meissner@linux.ibm.com>
gcc/
PR target/105325
* config/rs6000/genfusion.pl (gen_ld_cmpi_p10_one): Fix problems that
allowed prefixed lwa to be generated.
* config/rs6000/fusion.md: Regenerate.
* config/rs6000/predicates.md (ds_form_mem_operand): Delete.
* config/rs6000/rs6000.md (prefixed attribute): Add support for load
plus compare immediate fused insns.
(maybe_prefixed): Likewise.
This makes the code more readable, more digestible, more maintainable,
more extensible. That kind of thing. It does that by pulling things
apart a bit, but also by making what stays together into more cohesive lumps.
The original function was a bunch of loops and early-outs, and then
quite a bit of stuff done per iteration, with the iterations essentially
independent of each other. This patch moves the stuff done for one
iteration to a new _one function.
The second big thing is the stuff printed to the .md file is done in
"here documents" now, which is a lot more readable than having to quote
and escape and double-escape pieces of text. Whitespace inside the
here-document is significant (will be printed as-is), which is a bit
awkward sometimes, or might take some getting used to, but it is also
one of the benefits of using them.
Local variables are declared at first use (or close to first use).
There also shouldn't be many at all; you can often write code that is easier
to read and maintain by not naming something that is hard to name in the
first place.
Finally some things are done in more typical, more modern, and tighter
Perl style, for example REs in "if"s or "qw" for lists of constants.
d: Fix core.volatile.volatileLoad discarded if result is unused
The first pass of code generation in the D front-end splits up all
compound expressions and discards expressions that have no side effects.
This included calls to the `volatileLoad' intrinsic if its result was
not used, causing such calls to be eliminated from the program.
We already set TREE_THIS_VOLATILE on the expression; however, the tree
documentation says that if this bit is set in an expression, TREE_SIDE_EFFECTS
should be set as well. So set TREE_SIDE_EFFECTS on the expression too. This
prevents any early discarding from occurring.
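For reference, a C++ analogue of the semantics the intrinsic must preserve (a
sketch, not the front-end change itself):

    /* A volatile load has a side effect even when its value is discarded,
       so the read must not be optimized away.  */
    void
    poll_status (volatile unsigned *status_reg)
    {
      *status_reg;            /* the access itself is the point */
      (void) *status_reg;     /* likewise */
    }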
PR d/110516
gcc/d/ChangeLog:
* intrinsics.cc (expand_volatile_load): Set TREE_SIDE_EFFECTS on the
expanded expression.
(expand_volatile_store): Likewise.
gcc/testsuite/ChangeLog:
* gdc.dg/torture/pr110516a.d: New test.
* gdc.dg/torture/pr110516b.d: New test.
d: Fix ICE in setValue, at d/dmd/dinterpret.c:7013
Backports ICE fix from upstream. When casting null to integer or real,
instead of painting the type on the NullExp, we emplace an
IntegerExp/RealExp with the value zero. Same as when casting from
NullExp to bool.
Paul E. Murphy [Thu, 22 Jun 2023 22:53:46 +0000 (17:53 -0500)]
go: Update usage of TARGET_AIX to TARGET_AIX_OS
TARGET_AIX is defined to a non-zero value on linux and maybe other
powerpc64le targets. This leads to unexpected behavior such as
dropping the .go_export section when linking a shared library
on linux/powerpc64le.
Instead, use TARGET_AIX_OS to toggle AIX specific behavior.
Fixes golang/go#60798.
2023-06-22 Paul E. Murphy <murphyp@linux.ibm.com>
gcc/go/
* go-backend.c [TARGET_AIX]: Rename and update usage to TARGET_AIX_OS.
* go-lang.c: Likewise.
If mem_addr points to a memory region with less than whole-vector-size bytes
of accessible memory and k is a mask that would prevent reading the
inaccessible bytes from mem_addr, add UNSPEC_MASKMOV to prevent it from being
transformed into any other whole-vector memory access instruction.
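A sketch of the user-level situation (names here are assumptions):

    #include <immintrin.h>

    /* Only the bytes selected by 'live' are written; widening this to a
       full 64-byte store could touch inaccessible memory and fault.  */
    void
    store_prefix (int *dst, __m512i v, __mmask16 live)
    {
      _mm512_mask_storeu_epi32 (dst, live, v);
    }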
gcc/ChangeLog:
PR rtl-optimization/110237
* config/i386/sse.md (<avx512>_store<mode>_mask): Refine with
UNSPEC_MASKMOV.
(maskstore<mode><avx512fmaskmodelower>): Ditto.
(*<avx512>_store<mode>_mask): New define_insn, it's renamed
from original <avx512>_store<mode>_mask.
liuhongt [Tue, 20 Jun 2023 07:41:00 +0000 (15:41 +0800)]
Refine maskloadmn pattern with UNSPEC_MASKLOAD.
If mem_addr points to a memory region with less than whole-vector-size bytes
of accessible memory and k is a mask that would prevent reading the
inaccessible bytes from mem_addr, add UNSPEC_MASKLOAD to prevent it from being
transformed into vpblendd.
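The load side looks similar (again a sketch with assumed names):

    #include <immintrin.h>

    /* A masked load must not be rewritten as a full-vector load blended
       with zeros (vpblendd), because the masked-off elements may lie in
       inaccessible memory.  */
    __m256i
    load_prefix (const int *src, __m256i elem_mask)
    {
      return _mm256_maskload_epi32 (src, elem_mask);
    }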
Thomas Schwinge [Mon, 15 May 2023 18:00:07 +0000 (20:00 +0200)]
Support parallel testing in libgomp: fallback Perl 'flock' [PR66005]
Follow-up to commit 6c3b30ef9e0578509bdaf59c13da4a212fe6c2ba
"Support parallel testing in libgomp, part II [PR66005]"
("..., and enable if 'flock' is available for serializing execution testing"),
where we saw:
> On my Dell Precision 7530 laptop:
>
> $ uname -srvi
> Linux 5.15.0-71-generic #78-Ubuntu SMP Tue Apr 18 09:00:29 UTC 2023 x86_64
> $ grep '^model name' < /proc/cpuinfo | uniq -c
> 12 model name : Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
> $ nvidia-smi -L
> GPU 0: Quadro P1000 (UUID: GPU-e043973b-b52a-d02b-c066-a8fdbf64e8ea)
>
> ... [...]: case (c) standard configuration, no offloading
> configured, [...]
PR testsuite/66005
gcc/
* doc/install.texi: Document (optional) Perl usage for parallel
testing of libgomp.
libgomp/
* testsuite/lib/libgomp.exp: 'flock' through stdout.
* testsuite/flock: New.
* configure.ac (FLOCK): Point to that if no 'flock' available, but
'perl' is.
* configure: Regenerate.
Thomas Schwinge [Tue, 25 Apr 2023 21:53:12 +0000 (23:53 +0200)]
Support parallel testing in libgomp, part II [PR66005]
..., and enable if 'flock' is available for serializing execution testing.
Regarding the default of 19 parallel slots, this turned out to be a local
minimum for wall time when testing this on:
$ uname -srvi
Linux 4.2.0-42-generic #49~14.04.1-Ubuntu SMP Wed Jun 29 20:22:11 UTC 2016 x86_64
$ grep '^model name' < /proc/cpuinfo | uniq -c
32 model name : Intel(R) Xeon(R) CPU E5-2640 v3 @ 2.60GHz
... in two configurations: case (a) standard configuration, no offloading
configured, case (b) offloading for GCN and nvptx configured but no devices
available. For both cases, default plus '-m32' variant.
$ \time make check-target-libgomp RUNTESTFLAGS="--target_board=unix\{,-m32\}"
$ uname -srvi
Linux 5.15.0-71-generic #78-Ubuntu SMP Tue Apr 18 09:00:29 UTC 2023 x86_64
$ grep '^model name' < /proc/cpuinfo | uniq -c
12 model name : Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz
$ nvidia-smi -L
GPU 0: Quadro P1000 (UUID: GPU-e043973b-b52a-d02b-c066-a8fdbf64e8ea)
... in two configurations: case (c) standard configuration, no offloading
configured, case (d) offloading for nvptx configured and device available.
For both cases, only default variant, no '-m32'.
$ \time make check-target-libgomp
Case (c), baseline; roughly half of case (a) (just one variant):
Worth noting is that with nvptx offloading, there is one execution test case
that times out ('libgomp.fortran/reverse-offload-5.f90'). This effectively
stalls progress for almost 5 min: other execution test cases quickly queue up
on the lock for all parallel slots. That's working as expected; just noting
this as it accordingly does skew the wall time numbers.
But the documentation of -mvzeroupper doesn't mention that the insertion
requires -O2 and above, which may confuse users who explicitly use
-Os -mvzeroupper.
------------
mvzeroupper
Target Mask(VZEROUPPER) Save
Generate vzeroupper instruction before a transfer of control flow out of
the function.
------------
The patch moves flag_expensive_optimizations && !optimize_size to
ix86_option_override_internal. It makes -mvzeroupper independent of
optimization level, but still keeps the behavior of architecture
tuning(emit_vzeroupper) unchanged.
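A sketch of the behaviour this enables (the actual testcase contents are an
assumption):

    /* compile with: -Os -mavx -mvzeroupper */
    #include <immintrin.h>

    extern __m256d x, y;
    extern void bar (void);

    void
    foo (void)
    {
      x = _mm256_add_pd (x, y);   /* 256-bit AVX computation */
      bar ();                     /* expect vzeroupper before this call */
    }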
gcc/ChangeLog:
* config/i386/i386-features.c (pass_insert_vzeroupper:gate):
Move flag_expensive_optimizations && !optimize_size to ..
* config/i386/i386-options.c (ix86_option_override_internal):
.. this, it makes -mvzeroupper independent of optimization
level, but still keeps the behavior of architecture
tuning(emit_vzeroupper) unchanged.
(rest_of_handle_insert_vzeroupper): Remove
flag_expensive_optimizations && !optimize_size.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx-vzeroupper-29.c: New testcase.
Iain Buclaw [Mon, 26 Jun 2023 01:24:27 +0000 (03:24 +0200)]
d: Suboptimal codegen for __builtin_expect(cond, false)
Since PR96435, both boolean objects and expressions have been evaluated
in the following way.
(*(ubyte*)&obj_or_expr) & 1
It has been noted that sometimes this can cause the back-end to optimize
in non-obvious ways - in particular with __builtin_expect.
This @safe conversion is now restricted to reads of bool fields that come
from a union.
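In C++ terms the masked read looks roughly like this (a sketch of the
pattern, not the front-end code):

    extern bool cond;
    extern void slow_path ();

    void
    f ()
    {
      /* Reading the bool through an unsigned char and masking the low bit
         hides the plain boolean condition from __builtin_expect.  */
      if (__builtin_expect (static_cast<long> (*reinterpret_cast<unsigned char *> (&cond) & 1), 0))
        slow_path ();
    }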
PR d/110359
gcc/d/ChangeLog:
* d-convert.cc (convert_for_rvalue): Only apply the @safe boolean
conversion to boolean fields of a union.
(convert_for_condition): Call convert_for_rvalue in the default case.
Jonathan Wakely [Mon, 15 May 2023 20:41:56 +0000 (21:41 +0100)]
libstdc++: Document removal of implicit allocator rebinding extensions
Traditionally libstdc++ allowed containers to be
instantiated with allocators that have the wrong value type, implicitly
rebinding the allocator to the container's value type. Since C++20 that
has been explicitly ill-formed, so the extension is no longer supported
in strict modes (e.g. -std=c++17) and in C++20 and later.
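For example:

    #include <vector>

    // Previously accepted as an extension (the allocator was implicitly
    // rebound to int); ill-formed since C++20, and now also rejected in
    // strict -std=c++17 mode.
    std::vector<int, std::allocator<long>> v;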
libstdc++-v3/ChangeLog:
* doc/xml/manual/evolution.xml: Document removal of implicit
allocator rebinding extensions in strict mode and for C++20.
* doc/html/*: Regenerate.
Jonathan Wakely [Fri, 18 Mar 2022 13:10:01 +0000 (13:10 +0000)]
libstdc++: Simplify constraints for std::any construction [PR104242]
Partially revert r12-4190-g6da36b7d0e43b6f9281c65c19a025d4888a25b2d
because using __and_<..., is_copy_constructible<T>> when T is incomplete
results in an error about deriving from is_copy_constructible<T> when
that is incomplete. I don't know how to fix that, so this simply
restores the previous constraint which worked in this case (even though
I think it's technically undefined to use is_copy_constructible<T> with
incomplete T). This doesn't restore exactly what we had before, but uses
the is_copy_constructible_v and __is_in_place_type_v variable templates
instead of the ::value member.
libstdc++-v3/ChangeLog:
PR libstdc++/104242
* include/std/any (any(T&&)): Revert change to constraints.
* testsuite/20_util/any/cons/104242.cc: New test.
Kewen Lin [Tue, 20 Jun 2023 06:40:52 +0000 (01:40 -0500)]
rs6000: Guard __builtin_{un,}pack_vector_int128 with vsx [PR109932]
As PR109932 shows, builtins __builtin_{un,}pack_vector_int128
should be guarded under vsx rather than power7, as their
corresponding bif patterns have the conditions TARGET_VSX
and VECTOR_MEM_ALTIVEC_OR_VSX_P (V1TImode). This patch ensures that
__builtin_{un,}pack_vector_int128 are only available under
vsx.
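A rough sketch of the user-visible effect (the builtin signature and options
here are assumptions based on the documented prototypes):

    /* compile with: -mcpu=power7 -mno-vsx */
    __vector __int128
    pack (unsigned long long hi, unsigned long long lo)
    {
      /* Expected to be rejected now that the builtin requires VSX.  */
      return __builtin_pack_vector_int128 (hi, lo);
    }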
PR target/109932
gcc/ChangeLog:
* config/rs6000/rs6000-builtin.def (BU_VSX_MISC_2): New macro.
({un,}pack_vector_int128): Use BU_VSX_MISC_2 instead of
BU_P7_MISC_2.
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/pr109932-1.c: New test.
* gcc.target/powerpc/pr109932-2.c: New test.
Kewen Lin [Mon, 12 Jun 2023 06:07:52 +0000 (01:07 -0500)]
rs6000: Don't use TFmode for 128 bits fp constant in toc [PR110011]
As PR110011 shows, when encoding a 128-bit floating-point constant into the
toc, we adopt REAL_VALUE_TO_TARGET_LONG_DOUBLE, which finds the first float
mode with LONG_DOUBLE_TYPE_SIZE bits of precision; that would be TFmode here.
But the 128-bit fp constant can have mode IFmode or KFmode, which doesn't
necessarily have the same underlying float format as TFmode. As this PR
exposes, with option -mabi=ibmlongdouble TFmode has ibm_extended_format while
KFmode has ieee_quad_format, and mixing up the formats (the encoding/decoding
ways) causes unexpected results.
This patch makes it use the constant's own mode instead of TFmode for the
real_to_target call.
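A sketch of the kind of constant involved (assuming __float128 support; this
is not the PR's testcase):

    /* compile with: -mabi=ibmlongdouble -mfloat128 */
    __float128
    pi (void)
    {
      /* A KFmode constant that may end up in the TOC; it must be encoded
         with ieee_quad_format, not ibm_extended_format.  */
      return 3.14159265358979323846264338327950288q;
    }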
PR target/110011
gcc/ChangeLog:
* config/rs6000/rs6000.c (output_toc): Use the mode of the 128-bit
floating constant itself for real_to_target call.
where M1 and M2 are of equal mode size. That is problematic for the splitter
vfp.md:no_literal_pool_df_immediate in the arm backend, which tries to pun an
lvalue DFmode pseudo into DImode and assign a constant to it with
emit_move_insn, as the new transformation simply undoes this, and we end up
splitting indefinitely.
This patch changes things around in the arm backend so that we use a
DImode temporary (instead of DFmode) and first load the DImode constant
into the pseudo, and then pun the pseudo into DFmode as an rvalue in a
reg -> reg move. I believe this should be semantically equivalent but
avoids the pathological behaviour seen in the PR.
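A sketch of the kind of function affected (the actual testcase contents are
an assumption):

    /* compile with: -mpure-code */
    double
    half (void)
    {
      /* With no literal pool allowed, the DFmode constant has to be built
         in core registers (DImode) and then moved over.  */
      return 0.5;
    }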
gcc/ChangeLog:
PR target/109800
* config/arm/arm.md (movdf): Generate temporary pseudo in DImode
instead of DFmode.
* config/arm/vfp.md (no_literal_pool_df_immediate): Rather than punning an
lvalue DFmode pseudo into DImode, use a DImode pseudo and pun it into
DFmode as an rvalue.
gcc/testsuite/ChangeLog:
PR target/109800
* gcc.target/arm/pure-code/pr109800.c: New test.
* config/rs6000/rs6000.c (darwin_rs6000_special_round_type_align):
Make sure that we do not have a cap on field alignment before altering
the struct layout based on the type alignment of the first entry.
gcc/testsuite/ChangeLog:
* gcc.target/powerpc/darwin-abi-13-0.c: New test.
* gcc.target/powerpc/darwin-abi-13-1.c: New test.
* gcc.target/powerpc/darwin-abi-13-2.c: New test.
* gcc.target/powerpc/darwin-structs-0.h: New test.
Jakub Jelinek [Fri, 9 Jun 2023 07:10:29 +0000 (09:10 +0200)]
fortran: Fix ICE on pr96024.f90 on big-endian hosts [PR96024]
The pr96024.f90 testcase ICEs on big-endian hosts. The problem is that
length->val.integer is accessed after checking
length->expr_type == EXPR_CONSTANT, but it is a CHARACTER constant, which uses
the length->val.character union member instead. On big-endian hosts we end up
reading the constant 0x100000000 rather than the small number seen on
little-endian, and if the target doesn't have enough memory for 4 times that
(i.e. a 16GB allocation), it ICEs.
2023-06-09 Jakub Jelinek <jakub@redhat.com>
PR fortran/96024
* primary.c (gfc_convert_to_structure_constructor): Only do
constant string ctor length verification and truncation/padding
if constant length has INTEGER type.
Jakub Jelinek [Sun, 21 May 2023 11:36:56 +0000 (13:36 +0200)]
match.pd: Ensure (op CONSTANT_CLASS_P CONSTANT_CLASS_P) is simplified [PR109505]
On the following testcase we hang, because POLY_INT_CST is CONSTANT_CLASS_P,
but BIT_AND_EXPR of it and an INTEGER_CST doesn't simplify, and the
(x | CST1) & CST2 -> (x & CST2) | (CST1 & CST2)
simplification actually relies on the (CST1 & CST2) simplification;
otherwise it is a deoptimization, trading 2 ops for 3 and furthermore
running into
/* Given a bit-wise operation CODE applied to ARG0 and ARG1, see if both
operands are another bit-wise operation with a common input. If so,
distribute the bit operations to save an operation and possibly two if
constants are involved. For example, convert
(A | B) & (A | C) into A | (B & C)
Further simplification will occur if B and C are constants. */
simplification which simplifies that
(x & CST2) | (CST1 & CST2) back to
CST2 & (x | CST1).
I went through all other places I could find where we have a simplification
with 2 CONSTANT_CLASS_P operands and perform some operation on those two.
While the other spots aren't that severe (they just trade 2 operations for
another 2 if the two constants don't simplify, rather than trading 2 ops for
3 as in the above case), I still think all those spots really intend to
optimize only if the 2 constants simplify.
So, the following patch adds a ! modifier to those to ensure that; even at
GENERIC that modifier means !EXPR_P, which is exactly what we want IMHO.
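A worked example of the guarded transform (illustrative constants only):

    /* With ordinary integer constants the inner AND folds, so distributing
       is a win: (x | 3) & 5 becomes (x & 5) | 1.  */
    unsigned f (unsigned x) { return (x | 3u) & 5u; }

    /* With a POLY_INT_CST mask (e.g. a scalable vector length), CST1 & CST2
       does not fold, and the reverse (A | B) & (A | C) -> A | (B & C)
       simplification keeps undoing the distribution, so match.pd loops.  */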
2023-05-21 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/109505
* match.pd ((x | CST1) & CST2 -> (x & CST2) | (CST1 & CST2),
Combine successive equal operations with constants,
(A +- CST1) +- CST2 -> A + CST3, (CST1 - A) +- CST2 -> CST3 - A,
CST1 - (CST2 - A) -> CST3 + A): Use ! on ops with 2 CONSTANT_CLASS_P
operands.
PR libstdc++/109822
* include/experimental/bits/simd.h (to_native): Use int NTTP
as specified in PTS2.
(to_compatible): Likewise. Add missing tag to call mask
generator ctor.
* testsuite/experimental/simd/pr109822_cast_functions.cc: New
test.
* testsuite/experimental/simd/tests/operator_cvt.cc: Make long
double <-> (u)long conversion tests conditional on sizeof(long
double) and sizeof(long).
Matthias Kretz [Thu, 23 Mar 2023 08:32:58 +0000 (09:32 +0100)]
libstdc++: Add missing constexpr to simd
The constexpr API is only available with -std=gnu++XX (and proposed for
C++26). The proposal is to have the complete simd API usable in constant
expressions.
This patch resolves several issues with using simd in constant
expressions.
Issues that make constant_evaluated branches necessary:
* subscripting vector builtins is not allowed in constant expressions
* if the implementation needs/uses memcpy
* if the implementation would otherwise call SIMD intrinsics/builtins
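A sketch of the usage this aims to allow (assuming -std=gnu++23 and a
libstdc++ that contains this work; the exact constexpr coverage is what is
being proposed for C++26):

    #include <experimental/simd>
    namespace stdx = std::experimental;

    constexpr stdx::native_simd<int> broadcast = 3;   // construct in a constant expression
    static_assert(broadcast[0] == 3);                  // subscript in a constant expression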
PR libstdc++/109949
* include/experimental/bits/simd.h (__intrinsic_type): If
__ALTIVEC__ is defined, map gnu::vector_size types to their
corresponding __vector T types without losing unsignedness of
integer types. Also prefer long long over long.
* include/experimental/bits/simd_ppc.h (_S_popcount): Cast mask
object to the expected unsigned vector type.
PR libstdc++/109261
* include/experimental/bits/simd.h (__intrinsic_type):
Specialize __intrinsic_type<double, 8> and
__intrinsic_type<double, 16> in any case, but provide the member
type only with __aarch64__.
PR libstdc++/109261
* include/experimental/bits/simd_neon.h (_S_reduce): Add
constexpr and make NEON implementation conditional on
not __builtin_is_constant_evaluated.
Matthias Kretz [Thu, 23 Feb 2023 13:55:08 +0000 (14:55 +0100)]
libstdc++: Fix simd compilation with Clang
Clang fails to compile some constant expressions involving simd.
Therefore, just disable this non-conforming extension for clang.
Fix AVX512 blend implementation for Clang. It was converting the bitmask
to bool before, which is obviously wrong. Instead use a Clang builtin to
convert the bitmask to vector-mask before using a vector blend ?:. A
similar change is required for the masked unary implementation, because
the GCC builtins do not exist on Clang.
* include/experimental/bits/simd_detail.h: Don't declare the
simd API as constexpr with Clang.
* include/experimental/bits/simd_x86.h (__movm): New.
(_S_blend_avx512): Resolve FIXME. Implement blend using __movm
and ?:.
(_SimdImplX86::_S_masked_unary): Clang does not implement the
same builtins. Implement the function using __movm, ?:, and -
operators on vector_size types instead.
Matthias Kretz [Mon, 20 Feb 2023 16:49:37 +0000 (17:49 +0100)]
libstdc++: Always-inline most of non-cmath fixed_size implementation
For simd, the inlining behavior should be similar to builtin types. (No
operator on builtin types is ever translated into a function call.)
Therefore, always_inline is the right choice (i.e. inline on -O0 as
well).
PR libstdc++/108856
* include/experimental/bits/simd_builtin.h
(_SimdImplBuiltin::_S_masked_unary): More efficient
implementation of masked inc-/decrement for integers and floats
without AVX2.
* include/experimental/bits/simd_x86.h
(_SimdImplX86::_S_masked_unary): New. Use AVX512 masked subtract
builtins for masked inc-/decrement.
Matthias Kretz [Sat, 14 Jan 2023 16:07:59 +0000 (17:07 +0100)]
libstdc++: Annotate most lambdas with always_inline
All of the annotated lambdas are simply a necessary means for
implementing these functions and should never result in an actual
function call. Many of these lambdas would go away if C++ had better
language support for packs.
Michael Meissner [Mon, 22 May 2023 15:17:01 +0000 (11:17 -0400)]
Do not generate vmaddfp and vnmsubfp
This is version 3 of the patch. This is essentially version 1 with the removal
of changes to altivec.md, and cleanup of the comments.
Version 2 generated the vmaddfp and vnmsubfp instructions if -Ofast was used,
and those changes are deleted in this patch.
The Altivec instructions vmaddfp and vnmsubfp have different rounding behaviors
than the VSX xvmaddsp and xvnmsubsp instructions. In particular, generating
these instructions seems to break Eigen on big endian systems.
I have done bootstrap builds on power9 little endian (with both IEEE long
double and IBM long double). I have also done the builds and test on a power8
big endian system (testing both 32-bit and 64-bit code generation). Chip has
verified that it fixes the problem that Eigen encountered. Can I check this
into the master GCC branch? After a burn-in period, can I check this patch
into the active GCC branches?
Thanks in advance.
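A sketch of the kind of code affected (assuming VSX; not the committed
testcase):

    /* compile with: -mvsx -O2 */
    __vector float
    fma4 (__vector float a, __vector float b, __vector float c)
    {
      /* Should now come out as xvmaddasp/xvmaddmsp rather than the Altivec
         vmaddfp, whose rounding behaviour differs.  */
      return (a * b) + c;
    }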
2023-04-07 Michael Meissner <meissner@linux.ibm.com>
gcc/
PR target/70243
* config/rs6000/vsx.md (vsx_fmav4sf4): Do not generate vmaddfp. Back
port from master 04/10/2023.
(vsx_nfmsv4sf4): Do not generate vnmsubfp.
gcc/testsuite/
PR target/70243
* gcc.target/powerpc/pr70243.c: New test. Back port from master
04/10/2023.
Patrick Palka [Tue, 14 Mar 2023 20:44:32 +0000 (16:44 -0400)]
libstdc++: Implement P2520R0 changes to move_iterator's iterator_concept
libstdc++-v3/ChangeLog:
* include/bits/stl_iterator.h (move_iterator::_S_iter_concept):
Define.
(__cpp_lib_move_iterator_concept): Define for C++20.
(move_iterator::iterator_concept): Strengthen as per P2520R0.
* include/std/version (__cpp_lib_move_iterator_concept): Define
for C++20.
* testsuite/24_iterators/move_iterator/p2520r0.cc: New test.
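For illustration, the strengthened iterator_concept can be observed like this
(a sketch, requires C++20):

    #include <iterator>
    #include <type_traits>

    // After P2520R0, move_iterator over a random-access iterator advertises
    // random_access_iterator_tag instead of input_iterator_tag.
    static_assert(std::is_same_v<
        std::move_iterator<int *>::iterator_concept,
        std::random_access_iterator_tag>);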
Patrick Palka [Fri, 3 Mar 2023 16:37:02 +0000 (11:37 -0500)]
c++: thinko in extract_local_specs [PR108998]
In order to fix PR100295, r13-4730-g18499b9f848707 attempted to make
extract_local_specs walk the given pattern twice, ignoring unevaluated
operands the first time around so that we prefer to process a local
specialization in an evaluated context if it appears in one (we process
each local specialization once even if it appears multiple times in the
pattern).
But there's a thinko in the patch, namely that we don't actually walk
the pattern twice since we don't clear the visited set for the second
walk (to avoid processing a local specialization twice) and so the root
node (and any node leading up to an unevaluated operand) is considered
visited already. So the patch effectively made extract_local_specs
ignore unevaluated operands altogether, which this testcase demonstrates
isn't quite safe (extract_local_specs never sees 'aa' and we don't record
its local specialization, so later we try to specialize 'aa' on the spot
with the args {{int},{17}} which causes us to nonsensically substitute
its auto with 17.)
This patch fixes this by refining the second walk to start from the
trees we skipped over during the first walk.
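A hypothetical reduction in the spirit of the description above (not the
committed testcase):

    template<class T>
    void f(T t)
    {
      auto aa = t;
      [&](auto i) {
        if constexpr (sizeof(i) > 0)
          // 'aa' only ever appears inside an unevaluated operand (sizeof),
          // so the buggy walk never recorded its local specialization.
          return sizeof(aa) + i;
        else
          return i;
      }(17);
    }

    int main() { f(0); }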
PR c++/108998
gcc/cp/ChangeLog:
* pt.c (el_data::skipped_trees): New data member.
(extract_locals_r): Push to skipped_trees any unevaluated
contexts that we skipped over.
(extract_local_specs): For the second walk, start from each
tree in skipped_trees.
Patrick Palka [Thu, 15 Dec 2022 21:02:05 +0000 (16:02 -0500)]
c++: extract_local_specs and unevaluated contexts [PR100295]
Here during partial instantiation of the constexpr if, extract_local_specs
walks the statement looking for local specializations within to capture.
However, we're thwarted by the fact that 'ts' first appears inside an
unevaluated context, and so the calls to process_outer_var_ref for its
local specializations are a no-op. And since we walk each tree exactly
once, we end up not capturing the local specializations despite 'ts'
later occurring in an evaluated context.
This patch fixes this by making extract_local_specs walk evaluated
contexts first before walking unevaluated contexts. We could probably
get away with not walking unevaluated contexts at all, but this approach
seems more clearly safe.
PR c++/100295
PR c++/107579
gcc/cp/ChangeLog:
* pt.c (el_data::skip_unevaluated_operands): New data member.
(extract_locals_r): If skip_unevaluated_operands is true,
don't walk into unevaluated contexts.
(extract_local_specs): Walk the pattern twice, first with
skip_unevaluated_operands true followed by it set to false.
Patrick Palka [Tue, 29 Nov 2022 14:55:21 +0000 (09:55 -0500)]
c++: explicit specialization and trailing requirements [PR107864]
Here we're crashing when using the explicit specialization of the
function template g with trailing requirements ultimately because
earlier decls_match (called indirectly from register_specialization) for
the explicit specialization returned false since the template has
trailing requirements whereas the specialization doesn't.
In r12-2230-gddd25bd1a7c8f4, we fixed a similar issue concerning template
requirements instead of trailing requirements. We could extend that fix
to ignore trailing requirement mismatches for explicit specializations
as well, but it seems cleaner to just propagate constraints from the
specialized template to the specialization when declaring an explicit
specialization so that decls_match will naturally return true in this
case. And it looks like determine_specialization already does this,
albeit inconsistently (only when specializing a non-template member
function of a class template as in cpp2a/concepts-explicit-spec4.C).
So this patch makes determine_specialization consistently propagate
constraints from the specialized template to the specialization, which
in turn lets us get rid of the function_requirements_equivalent_p special
case added by r12-2230.
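A sketch of the pattern (requires C++20; not the committed testcase):

    #include <concepts>

    template<class T>
    void g(T) requires std::integral<T>;

    // The explicit specialization writes no requires-clause of its own; its
    // constraints are now propagated from the specialized template so that
    // decls_match accepts it.
    template<>
    void g<int>(int) { }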
PR c++/107864
gcc/cp/ChangeLog:
* decl.c (function_requirements_equivalent_p): Don't check
DECL_TEMPLATE_SPECIALIZATION.
* pt.c (determine_specialization): Propagate constraints when
specializing a function template too. Simplify by using
add_outermost_template_args.
Patrick Palka [Thu, 3 Nov 2022 19:35:18 +0000 (15:35 -0400)]
c++: requires-expr and access checking [PR107179]
Like during satisfaction, we also need to avoid deferring access checks
during substitution of a requires-expr because the outcome of an access
check can determine the value of the requires-expr. Otherwise (in
deferred access checking contexts such as within a base-clause), the
requires-expr may evaluate to the wrong result, and along the way a
failed access check may leak out from it into a non-SFINAE context and
cause a hard error (as in the below testcase).
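A sketch of the failure mode (illustrative only):

    #include <type_traits>

    class A
    {
      void priv ();   // private
    public:
      void pub ();
    };

    // The requires-expression appears in a base-clause, where access checks
    // would normally be deferred; its value must still reflect the failed
    // access check instead of letting a hard error escape.
    template<class T>
    struct has_priv : std::bool_constant<requires (T t) { t.priv (); }> { };

    static_assert(!has_priv<A>::value);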
PR c++/107179
gcc/cp/ChangeLog:
* constraint.cc (tsubst_requires_expr): Make sure we're not
deferring access checks.