Kewen Lin [Wed, 12 Aug 2020 09:19:16 +0000 (04:19 -0500)]
testsuite: Add -fno-common to pr82374.c [PR94077]
As the PR comments show, the test gcc.dg/gomp/pr82374.c has failed on
Power7 since GCC 8, but passes from GCC 10 onward.  Looking into the
difference, GCC 10 enables -fno-common by default, which lets the
vectorizer force the alignment and use aligned vector loads/stores on
targets that do not support unaligned vector loads/stores (here, Power7).
As Jakub suggested in the PR, this patch appends -fno-common to the
test's dg-options.
Verified with the GCC 8 and GCC 9 releases on ppc64-redhat-linux (Power7).
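A minimal sketch of the kind of dg-options change; the directives and the
flags before -fno-common are placeholders for illustration, only the
appended -fno-common comes from this patch:
  /* { dg-do compile } */
  /* Hypothetical pre-existing options; the patch merely appends -fno-common.  */
  /* { dg-options "-O2 -fopenmp -fno-common" } */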
Christophe Lyon [Wed, 12 Aug 2020 09:22:38 +0000 (09:22 +0000)]
testsuite: Fix gcc.target/arm/stack-protector-1.c for Cortex-M
The stack-protector-1.c test fails when compiled for Cortex-M:
- for Cortex-M0/M1, str r0, [sp, #-8]! is not supported
- for Cortex-M3/M4..., the assembler complains that "use of r13 is
deprecated"
This patch replaces the str instruction with
sub sp, sp, #8
str r0, [sp]
and removes the check for r13, which is unlikely to leak the canary
value.
Jakub Jelinek [Mon, 3 Aug 2020 20:55:28 +0000 (22:55 +0200)]
aarch64: Fix up __aarch64_cas16_acq_rel fallback
As mentioned in the PR, the fallback path used when LSE is unavailable
writes the wrong registers to memory if the previous contents compare
equal to x0, x1: it stores the copy of x0, x1 taken at the start of the
function, but it should store x2, x3.
2020-08-03 Jakub Jelinek <jakub@redhat.com>
PR target/96402
* config/aarch64/lse.S (__aarch64_cas16_acq_rel): Use x2, x3 instead
of x(tmp0), x(tmp1) in STXP arguments.
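A rough C++ model of the semantics the helper must provide (a sketch only,
not the libgcc assembly; the x0-x3 register notes come from the description
above, while placing the pointer in x4 is an assumption about the helper's
argument order):
  // 16-byte acquire-release compare-and-swap: if *ptr equals `expected`,
  // store `desired`; always return the previous contents.  On success the
  // memory must hold the desired value (x2, x3), not the incoming x0, x1.
  // (May need -latomic to link, depending on target and flags.)
  unsigned __int128
  cas16_acq_rel_model (unsigned __int128 expected,   /* x0, x1 */
                       unsigned __int128 desired,    /* x2, x3 */
                       unsigned __int128 *ptr)       /* assumed x4 */
  {
    __atomic_compare_exchange_n (ptr, &expected, desired, /*weak=*/false,
                                 __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);
    return expected;   /* previous contents, returned in x0, x1 */
  }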
arm: Clear canary value after stack_protect_test [PR96191]
The stack_protect_test patterns were leaving the canary value in the
temporary register, meaning that it was often still in registers on
return from the function. An attacker might therefore have been
able to use it to defeat stack-smash protection for a later function.
gcc/
PR target/96191
* config/arm/arm.md (arm_stack_protect_test_insn): Zero out
operand 2 after use.
* config/arm/thumb1.md (thumb1_stack_protect_test_insn): Likewise.
gcc/testsuite/
* gcc.target/arm/stack-protector-1.c: New test.
* gcc.target/arm/stack-protector-2.c: Likewise.
aarch64: Clear canary value after stack_protect_test [PR96191]
The stack_protect_test patterns were leaving the canary value in the
temporary register, meaning that it was often still in registers on
return from the function. An attacker might therefore have been
able to use it to defeat stack-smash protection for a later function.
gcc/
PR target/96191
* config/aarch64/aarch64.md (stack_protect_test_<mode>): Set the
CC register directly, instead of a GPR. Replace the original GPR
destination with an extra scratch register. Zero out operand 3
after use.
(stack_protect_test): Update accordingly.
gcc/testsuite/
PR target/96191
* gcc.target/aarch64/stack-protector-1.c: New test.
* gcc.target/aarch64/stack-protector-2.c: Likewise.
This function was unimplemented, simply returning the native format
string instead.
PR libstdc++/93245
* include/experimental/bits/fs_path.h (path::generic_string<C,T,A>()):
Return the generic format path, not the native one.
* testsuite/experimental/filesystem/path/generic/generic_string.cc:
Improve test coverage.
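A small usage sketch of the fixed behaviour (on POSIX the generic and
native formats coincide, so the difference is only observable where the
native separator differs, e.g. Windows; link with -lstdc++fs):
  #include <experimental/filesystem>
  #include <string>
  #include <cassert>

  namespace fs = std::experimental::filesystem;

  int main ()
  {
    fs::path p = "dir/sub/file.txt";
    // The template overload must return the generic format, i.e. the path
    // using '/' as the directory separator, matching generic_wstring().
    std::wstring a = p.generic_wstring ();
    std::wstring b = p.generic_string<wchar_t> ();
    assert (a == b);
  }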
It's not possible to construct a path::string_type from an allocator of
a different type. Create the correct specialization of basic_string, and
adjust path::_S_str_convert to use a basic_string_view so that it is
independent of the allocator type.
PR libstdc++/94242
* include/bits/fs_path.h (path::_S_str_convert): Replace first
parameter with basic_string_view so that strings with different
allocators can be accepted.
(path::generic_string<C,T,A>()): Use basic_string object that uses the
right allocator type.
* testsuite/27_io/filesystem/path/generic/94242.cc: New test.
* testsuite/27_io/filesystem/path/generic/generic_string.cc: Improve
test coverage.
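A sketch of the kind of call this enables, using a libstdc++ extension
allocator purely as an example of an allocator type different from
std::allocator (depending on the GCC version, linking may also need
-lstdc++fs):
  #include <filesystem>
  #include <ext/new_allocator.h>
  #include <string>

  int main ()
  {
    std::filesystem::path p = "a/b/c";
    // Request a basic_string specialised on a different allocator type;
    // path::_S_str_convert must not assume path::string_type here.
    using Alloc = __gnu_cxx::new_allocator<char>;
    std::basic_string<char, std::char_traits<char>, Alloc> s =
      p.generic_string<char, std::char_traits<char>, Alloc> ();
    return s == "a/b/c" ? 0 : 1;
  }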
Qian Jianhua [Fri, 7 Aug 2020 09:39:40 +0000 (10:39 +0100)]
aarch64: Add A64FX machine model
This patch adds support for the Fujitsu A64FX, as the first step of
adding an A64FX machine model.
A64FX is used in the Fujitsu supercomputers PRIMEHPC FX1000 and
PRIMEHPC FX700, and in the supercomputer Fugaku.
The official microarchitecture information for A64FX is available at
https://github.com/fujitsu/A64FX.
2020-08-07 Qian jianhua <qianjh@cn.fujitsu.com>
gcc/
* config/aarch64/aarch64-cores.def (a64fx): New core.
* config/aarch64/aarch64-tune.md: Regenerated.
* config/aarch64/aarch64.c (a64fx_prefetch_tune, a64fx_tunings): New.
* doc/invoke.texi: Add a64fx to the list.
early-remat: Handle sets of multiple candidate regs [PR94605]
early-remat.c:process_block wasn't handling insns that set multiple
candidate registers, which led to an assertion failure at the end
of the main loop.
Instructions that set two pseudos aren't rematerialisation candidates in
themselves, but we still need to track them if another instruction that
sets the same register is a rematerialisation candidate.
gcc/
PR rtl-optimization/94605
* early-remat.c (early_remat::process_block): Handle insns that
set multiple candidate registers.
gcc/testsuite/
PR rtl-optimization/94605
* gcc.target/aarch64/sve/pr94605.c: New test.
ipa-devirt: Fix crash in obj_type_ref_class [PR95114]
The testcase has failed since r9-5035, because obj_type_ref_class
tries to look up an ODR type when no ODR type information is
available. (The information was available earlier in the
compilation, but was freed during pass_ipa_free_lang_data.)
We then crash dereferencing the null get_odr_type result.
The test passes with -O2. However, it fails again if -fdump-tree-all
is used, since obj_type_ref_class is called indirectly from the
dump routines.
Other code creates ODR type entries on the fly by passing “true”
as the insert parameter. But obj_type_ref_class can't do that
unconditionally, since it should have no side-effects when used
from the dumping code.
Following a suggestion from Honza, this patch adds parameters
to say whether the routines are being called from dump routines
and uses those to derive the insert parameter.
gcc/
PR middle-end/95114
* tree.h (virtual_method_call_p): Add a default-false parameter
that indicates whether the function is being called from dump
routines.
(obj_type_ref_class): Likewise.
* tree.c (virtual_method_call_p): Likewise.
* ipa-devirt.c (obj_type_ref_class): Likewise. Lazily add ODR
type information for the type when the parameter is false.
* tree-pretty-print.c (dump_generic_node): Update calls to
virtual_method_call_p and obj_type_ref_class accordingly.
gcc/testsuite/
PR middle-end/95114
* g++.target/aarch64/pr95114.C: New test.
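A heavily simplified sketch of the kind of C++ that exercises this path;
the real testcase g++.target/aarch64/pr95114.C targets AArch64 and its
exact contents and flags differ:
  // Compiled with tree dumps enabled (e.g. -fdump-tree-all); the dump
  // routines print the OBJ_TYPE_REF for the virtual call below.
  struct Base
  {
    virtual int f () const { return 0; }
    virtual ~Base () {}
  };

  int
  call_virtual (const Base &b)
  {
    return b.f ();
  }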
This patch introduces the mitigation for Straight Line Speculation past
the BLR instruction.
This mitigation replaces BLR instructions with a BL to a stub which uses
a BR to jump to the original value. These function stubs are then
appended with a speculation barrier to ensure no straight line
speculation happens after these jumps.
When optimising for speed we use a set of stubs for each function since
this should help the branch predictor make more accurate predictions
about where a stub should branch.
When optimising for size we use one set of stubs for all functions.
This set of stubs can have human readable names, and we are using
`__call_indirect_x<N>` for register x<N>.
When BTI branch protection is enabled the BLR instruction can jump to a
`BTI c` instruction using any register, while the BR instruction can
only jump to a `BTI c` instruction using the x16 or x17 registers.
Hence, in order to ensure this transformation is safe, we move (MOV) the
value of the original register into x16 and use x16 for the BR.
As an example when optimising for size:
a
BLR x0
instruction would get transformed to something like
BL __call_indirect_x0
where __call_indirect_x0 labels a thunk that contains
__call_indirect_x0:
MOV X16, X0
BR X16
<speculation barrier>
The first version of this patch used local symbols specific to a
compilation unit to try and avoid relocations.
This was mistaken since functions coming from the same compilation unit
can still be in different sections, and the assembler will insert
relocations at jumps between sections.
On any relocation the linker is permitted to emit a veneer to handle
jumps between symbols that are very far apart. The registers x16 and
x17 may be clobbered by these veneers.
Hence the function stubs cannot rely on the values of x16 and x17 being
the same as just before the function stub is called.
The same applies to the hot/cold partitioning of single functions, so
function-local stubs have the same restriction.
This updated version of the patch never emits function stubs for x16 and
x17, and instead forces other registers to be used.
Given the above, there is now no benefit to local symbols (since they
are not enough to avoid dealing with linker intricacies). This patch
now uses global symbols with hidden visibility each stored in their own
COMDAT section. This means stubs can be shared between compilation
units while still avoiding the PLT indirection.
This patch also removes the `__call_indirect_x30` stub (and
function-local equivalent) which would simply jump back to the original
location.
The function-local stubs are emitted to the assembly output file in one
chunk, which means we need not add the speculation barrier directly
after each one.
This is because we know for certain that the instructions directly after
the BR in all but the last function stub will be from another one of
these stubs and hence will not contain a speculation gadget.
Instead we add a speculation barrier at the end of the sequence of
stubs.
The global stubs are emitted in COMDAT/.linkonce sections by
themselves so that the linker can remove duplicates from multiple object
files. This means they are not emitted in one chunk, and each one must
include the speculation barrier.
Another difference is that since the global stubs are shared across
compilation units we do not know that all functions will be targeting an
architecture supporting the SB instruction.
Rather than provide multiple stubs for each architecture, we provide a
stub that will work for all architectures -- using the DSB+ISB barrier.
This mitigation does not apply for BLR instructions in the following
places:
- Some accesses to thread-local variables use a code sequence with a BLR
instruction. This code sequence is part of the binary interface between
compiler and linker. If this BLR instruction needs to be mitigated, it'd
probably be best to do so in the linker. It seems that the code sequence
for thread-local variable access is unlikely to lead to a Spectre
Revelation Gadget.
- PLT stubs are produced by the linker and each contains a BLR instruction.
It seems that a Spectre Revelation Gadget could appear at most after the
last PLT stub.
Testing:
Bootstrap and regtest on AArch64
(with BOOT_CFLAGS="-mharden-sls=retbr,blr")
Used a temporary hack(1) in gcc-dg.exp to use these options on every
test in the testsuite, a slight modification to emit the speculation
barrier after every function stub, and a script to check that the
output never emitted a BLR, or unmitigated BR or RET instruction.
Similar on an aarch64-none-elf cross-compiler.
1) Temporary hack emitted a speculation barrier at the end of every stub
function, and used a script to ensure that:
a) Every RET or BR is immediately followed by a speculation barrier.
b) No BLR instruction is emitted by the compiler.
* config/aarch64/aarch64-protos.h (aarch64_indirect_call_asm):
New declaration.
* config/aarch64/aarch64.c (aarch64_regno_regclass): Handle new
stub registers class.
(aarch64_class_max_nregs): Likewise.
(aarch64_register_move_cost): Likewise.
(aarch64_sls_shared_thunks): Global array to store stub labels.
(aarch64_sls_emit_function_stub): New.
(aarch64_create_blr_label): New.
(aarch64_sls_emit_blr_function_thunks): New.
(aarch64_sls_emit_shared_blr_thunks): New.
(aarch64_asm_file_end): New.
(aarch64_indirect_call_asm): New.
(TARGET_ASM_FILE_END): Use aarch64_asm_file_end.
(TARGET_ASM_FUNCTION_EPILOGUE): Use
aarch64_sls_emit_blr_function_thunks.
* config/aarch64/aarch64.h (STB_REGNUM_P): New.
(enum reg_class): Add STUB_REGS class.
(machine_function): Introduce `call_via` array for
function-local stub labels.
* config/aarch64/aarch64.md (*call_insn, *call_value_insn): Use
aarch64_indirect_call_asm to emit code when hardening BLR
instructions.
* config/aarch64/constraints.md (Ucr): New constraint
representing registers for indirect calls.  Usually GENERAL_REGS,
but STUB_REGS when hardening BLR instructions against SLS.
* config/aarch64/predicates.md (aarch64_general_reg): STUB_REGS class
is also a general register.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c: New test.
* gcc.target/aarch64/sls-mitigation/sls-miti-blr.c: New test.
aarch64: Introduce SLS mitigation for RET and BR instructions
Instructions following RET or BR are not necessarily executed. In order
to avoid speculation past RET and BR we can simply append a speculation
barrier.
Since these speculation barriers will not be architecturally executed,
they are not expected to add a high performance penalty.
The speculation barrier used is SB when targeting architectures that
support it, and DSB SY + ISB otherwise.
We add tests for each of the cases where such an instruction was seen.
This is implemented by modifying each machine description pattern that
emits either a RET or a BR instruction. We choose not to use something
like `TARGET_ASM_FUNCTION_EPILOGUE` since it does not affect the
`indirect_jump`, `jump`, `sibcall_insn` and `sibcall_value_insn`
patterns and we find it preferable to implement the functionality in the
same way for every pattern.
There is one particular case which is slightly tricky. The
implementation of TARGET_ASM_TRAMPOLINE_TEMPLATE uses a BR which needs
to be mitigated against. The trampoline template is used *once* per
compilation unit, and the TRAMPOLINE_SIZE is exposed to the user via the
builtin macro __LIBGCC_TRAMPOLINE_SIZE__.
In the future we may implement function specific attributes to turn on
and off hardening on a per-function basis.
The fixed nature of the trampoline described above implies it will be
safer to ensure this speculation barrier is always used.
Testing:
Bootstrap and regtest done on aarch64-none-linux
Used a temporary hack(1) to use these options on every test in the
testsuite and a script to check that the output never emitted an
unmitigated RET or BR.
1) Temporary hack was a change to the testsuite to always use
`-save-temps` and run a script on the assembly output of those
compilations which produced one to ensure every RET or BR is immediately
followed by a speculation barrier.
* config/aarch64/aarch64-protos.h (aarch64_sls_barrier): New.
* config/aarch64/aarch64.c (aarch64_output_casesi): Emit
speculation barrier after BR instruction if needs be.
(aarch64_trampoline_init): Handle ptr_mode value & adjust size
of code copied.
(aarch64_sls_barrier): New.
(aarch64_asm_trampoline_template): Add needed barriers.
* config/aarch64/aarch64.h (AARCH64_ISA_SB): New.
(TARGET_SB): New.
(TRAMPOLINE_SIZE): Account for barrier.
* config/aarch64/aarch64.md (indirect_jump, *casesi_dispatch,
simple_return, *do_return, *sibcall_insn, *sibcall_value_insn):
Emit barrier if needs be, also account for possible barrier using
"sls_length" attribute.
(sls_length): New attribute.
(length): Determine default using any non-default sls_length
value.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c: New test.
* gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c:
New test.
* gcc.target/aarch64/sls-mitigation/sls-mitigation.exp: New file.
* lib/target-supports.exp (check_effective_target_aarch64_asm_sb_ok):
New proc.
aarch64: New Straight Line Speculation (SLS) mitigation flags
Here we introduce the flags that will be used for straight line speculation.
The new flag introduced is `-mharden-sls=`.
This flag can take arguments of `none`, `all`, or a comma-separated list
of one or more of `retbr` or `blr`.
`none` indicates no special mitigation of the straight line speculation
vulnerability.
`all` requests all mitigations currently implemented.
`retbr` requests that the RET and BR instructions have a speculation
barrier inserted after them.
`blr` requests that BLR instructions are replaced by a BL to a function
stub using a BR with a speculation barrier after it.
Setting this on a per-function basis using attributes or the like is not
enabled, but may be in the future.
Neither intrinsic handled the case where the va_list object comes
from a ref parameter.
gcc/d/ChangeLog:
PR d/96140
* intrinsics.cc (expand_intrinsic_vaarg): Handle ref parameters as
arguments to va_arg().
(expand_intrinsic_vastart): Handle ref parameters as arguments to
va_start().
Mark Eggleston [Thu, 11 Jun 2020 10:05:40 +0000 (11:05 +0100)]
Fortran : ICE in gfc_check_pointer_assign PR95612
Output an error if the right-hand value is a zero-sized array or does
not have a symbol tree; otherwise continue checking.
2020-07-27 Steven G. Kargl <kargl@gcc.gnu.org>
gcc/fortran/
PR fortran/95612
* expr.c (gfc_check_pointer_assign): Output an error if the
rvalue is a zero-sized array or does not have a symbol tree.
2020-07-27 Mark Eggleston <markeggleston@gcc.gnu.org>
gcc/testsuite/
PR fortran/95612
* gfortran.dg/pr95612.f90: New test.
Thomas Koenig [Thu, 23 Jul 2020 18:26:10 +0000 (20:26 +0200)]
Fix handling of implicit_pure by checking if non-pure procedures are called.
Procedures are marked as implicit_pure if they fulfill the criteria of
pure procedures.  In this case, a procedure that called another procedure
was not marked as not implicit_pure, because the callee had not yet been
marked as not implicit_pure itself.
Fixed by iterating over all procedures, marking callers of procedures
that are neither pure nor implicit_pure as not implicit_pure, and
repeating until no more procedures change.
Thomas Koenig [Sun, 19 Jul 2020 15:27:45 +0000 (17:27 +0200)]
Always use name from c_interop_kinds_table for -fc-prototypes.
When a user specified a KIND that was a parameter taking the value
of an iso_c_binding KIND, the code used the name of that parameter
to look up the type name. Corrected by always looking it up in
the table of C interop kinds (which was previously done for
non-C-interop types, anyway).
gcc/fortran/ChangeLog:
PR fortran/96220
* dump-parse-tree.c (get_c_type_name): Always use the entries from
c_interop_kinds_table to find the correct C type.
David Edelsohn [Fri, 6 Mar 2020 01:41:08 +0000 (20:41 -0500)]
rs6000: Correct logic to disable NO_SUM_IN_TOC and NO_FP_IN_TOC [PR94065]
aix61.h, aix71.h and aix72.h intend to prevent SUM_IN_TOC and FP_IN_TOC
when cmodel=large.  This patch sets the variables associated with the
target options to 1 to _enable_ NO_SUM_IN_TOC and NO_FP_IN_TOC.
Bootstrapped on powerpc-ibm-aix7.2.0.0
2020-03-06 David Edelsohn <dje.gcc@gmail.com>
PR target/94065
* config/rs6000/aix61.h (TARGET_NO_SUM_IN_TOC): Set to 1 for
cmodel=large.
(TARGET_NO_FP_IN_TOC): Same.
* config/rs6000/aix71.h: Same.
* config/rs6000/aix72.h: Same.
Szabolcs Nagy [Thu, 28 May 2020 09:28:30 +0000 (10:28 +0100)]
doc: Clarify __builtin_return_address [PR94891]
The expected semantics and valid usage of __builtin_return_address are
not clear, since it exposes implementation internals that are normally
not meaningful to portable C code.
This documentation change tries to clarify the semantics in case the
return address is stored in a mangled form. This affects AArch64 when
pointer authentication is used for the return address signing (i.e.
-mbranch-protection=pac-ret).
2020-07-13 Szabolcs Nagy <szabolcs.nagy@arm.com>
gcc/ChangeLog:
PR target/94891
* doc/extend.texi: Update the text for __builtin_return_address.
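A small usage sketch of what the clarified documentation is about
(nothing here comes from the patch itself):
  #include <cstdio>

  // With -mbranch-protection=pac-ret on AArch64 the value returned by
  // __builtin_return_address (0) may be a mangled (signed) address, so it
  // should be treated as an opaque value (printed or compared), not
  // dereferenced or assumed to be a plain code pointer.
  __attribute__ ((noinline)) void
  report_caller ()
  {
    void *ra = __builtin_return_address (0);
    std::printf ("returning to %p (possibly mangled under pac-ret)\n", ra);
  }

  int
  main ()
  {
    report_caller ();
    return 0;
  }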
Note that a mangled address might not even fit into a void *; e.g. with
the AArch64 ILP32 ABI the return address is stored as a 64-bit value, so
the mangled return address cannot be accessed via _Unwind_GetPtr.
This patch changes the unwinder hooks as follows:
MD_POST_EXTRACT_ROOT_ADDR is removed: root address comes from
__builtin_return_address which is not mangled.
MD_POST_EXTRACT_FRAME_ADDR is renamed to MD_DEMANGLE_RETURN_ADDR,
it now operates on _Unwind_Word instead of void *, so the hook
should work when return address signing is enabled on AArch64 ilp32.
(But for that, __builtin_aarch64_autia1716 should be fixed to operate
on a 64-bit input instead of a void *.)
MD_POST_FROB_EH_HANDLER_ADDR is removed: it is the responsibility of
__builtin_eh_return to do the mangling if necessary.
Szabolcs Nagy [Thu, 4 Jun 2020 12:42:16 +0000 (13:42 +0100)]
aarch64: fix __builtin_eh_return with pac-ret [PR94891]
Currently __builtin_eh_return takes a signed return address, which can
cause ABI and API issues: 1) pointer representation problems if the
address is passed around before eh return, 2) the source code needs
pac-ret specific changes and needs to know if pac-ret is used in the
current frame, 3) a signed address may not be representable as void *
(with the ILP32 ABI).
Using address signing to protect eh return is ineffective because the
instruction sequence in the unwinder that starts from the address
signing and ends with a ret can be used as a return to anywhere gadget.
Using an indirect branch instead of ret, with bti j landing pads at the
target, can reduce the potential of such a gadget, which also implies
that __builtin_eh_return should not take a signed address.
This is a big hammer fix to the ABI and API issues: it turns pac-ret
off for the caller completely (not just on the eh return path). To
harden the caller against ROP attacks, it should use an indirect branch
instead of ret; this is not attempted here, so the patch remains small
and backportable.
2020-07-13 Szabolcs Nagy <szabolcs.nagy@arm.com>
gcc/ChangeLog:
PR target/94891
* config/aarch64/aarch64.c (aarch64_return_address_signing_enabled):
Disable return address signing if __builtin_eh_return is used.
Szabolcs Nagy [Tue, 2 Jun 2020 15:44:41 +0000 (16:44 +0100)]
aarch64: fix return address access with pac [PR94891][PR94791]
This is a big hammer fix for __builtin_return_address (PR target/94891)
returning signed addresses (sometimes, depending on whether lr happens
to be signed or not at the time of the call, which depends on
optimizations), and similarly for -pg possibly passing a signed return
address to _mcount (PR target/94791).
At the time of return address expansion we don't know if it's signed or
not so it is done unconditionally.
Szabolcs Nagy [Thu, 2 Jul 2020 16:12:05 +0000 (17:12 +0100)]
aarch64: Fix BTI support in libitm
sjlj.S did not have the GNU property note markup and the BTI c
instructions that are necessary when it is built with branch
protection.
The notes are only added when libitm is built with branch
protection, because old linkers mishandle the note (merge
them incorrectly or emit warnings); the BTI instructions
are added unconditionally.
2020-07-09 Szabolcs Nagy <szabolcs.nagy@arm.com>
libitm/ChangeLog:
* config/aarch64/sjlj.S: Add BTI marking and related definitions,
and add BTI c to function entries.
Szabolcs Nagy [Thu, 2 Jul 2020 16:11:56 +0000 (17:11 +0100)]
aarch64: Fix BTI support in libgcc [PR96001]
lse.S did not have the GNU property note markup and the BTI c
instructions that are necessary when it is built with branch
protection.
The notes are only added when libgcc is built with branch
protection, because old linkers mishandle the note (merge
them incorrectly or emit warnings); the BTI instructions
are added unconditionally.
Note: BTI c is only necessary at function entry if the function
may be called indirectly.  Currently the lse functions are not called
indirectly, but BTI is added for ABI reasons, e.g. to allow
linkers to later emit stub code with an indirect jump.
2020-07-09 Szabolcs Nagy <szabolcs.nagy@arm.com>
libgcc/ChangeLog:
PR target/96001
* config/aarch64/lse.S: Add BTI marking and related definitions,
and add BTI c to function entries.
Harald Anlauf [Thu, 2 Jul 2020 18:41:51 +0000 (20:41 +0200)]
PR fortran/93337 - ICE in gfc_dt_upper_string, at fortran/module.c:441
When declaring a polymorphic variable that is not a dummy, allocatable or
pointer, an ICE occurred due to a NULL pointer dereference. Check for
that situation and punt.
gcc/fortran/
PR fortran/93337
* class.c (gfc_find_derived_vtab): Punt if name is not set.
Will Schmidt [Wed, 24 Jun 2020 18:59:34 +0000 (13:59 -0500)]
Backport to gcc-9
[PATCH, PR target/94954] Fix wrong codegen for vec_pack_to_short_fp32() builtin
Fix codegen for builtin vec_pack_to_short_fp32. This includes adding
a define_insn for xvcvsphp, and adding a new define_expand for
convert_4f32_8f16.
[v2]
Comment on altivec.md "convert_4f32_8f16" enhanced.
Testsuite builtins-1-p9-runnable.c updated.
PR target/94954
2020-07-06 Will Schmidt <will_schmidt@vnet.ibm.com>
gcc/ChangeLog:
* config/rs6000/altivec.h (vec_pack_to_short_fp32): Update.
* config/rs6000/altivec.md (UNSPEC_CONVERT_4F32_8F16): New unspec.
(convert_4f32_8f16): New define_expand.
* config/rs6000/rs6000-builtin.def (convert_4f32_8f16): New builtin
definition and overload.
* config/rs6000/rs6000-c.c (P9V_BUILTIN_VEC_CONVERT_4F32_8F16): New
overloaded builtin entry.
* config/rs6000/vsx.md (UNSPEC_VSX_XVCVSPHP): New unspec.
(vsx_xvcvsphp): New define_insn.
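A usage sketch of the builtin whose code generation this fixes, assuming
the Power9 signature vector unsigned short vec_pack_to_short_fp32
(vector float, vector float) and something like -mcpu=power9 when
compiling:
  #include <altivec.h>

  // Converts eight single-precision values from the two input vectors into
  // IEEE half-precision bit patterns packed into one vector of unsigned
  // short (the xvcvsphp path touched by this patch).
  __vector unsigned short
  pack_to_fp16 (__vector float hi, __vector float lo)
  {
    return vec_pack_to_short_fp32 (hi, lo);
  }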
PR libstdc++/91807
* include/std/variant
(_Copy_assign_base::operator=(const _Copy_assign_base&)):
Do the move-assignment from a temporary so that the temporary
is constructed with an explicit index.
* testsuite/20_util/variant/91807.cc: New.
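A small sketch of the copy-assignment path this fix concerns (not the
original PR testcase):
  #include <variant>
  #include <string>
  #include <cassert>

  int main ()
  {
    // std::string's copy constructor may throw, so copy-assigning the
    // variant goes through the construct-a-temporary-then-move-assign
    // path; the temporary must be built with the explicit index of the
    // source alternative.
    std::variant<std::monostate, std::string> a, b;
    b = std::string ("hello");
    a = b;
    assert (std::get<std::string> (a) == "hello");
  }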
Martin Liska [Thu, 2 Jul 2020 08:51:06 +0000 (10:51 +0200)]
gcc-changelog: sync from master.
contrib/ChangeLog:
* gcc-changelog/git_check_commit.py: New file.
* gcc-changelog/git_commit.py: New file.
* gcc-changelog/git_email.py: New file.
* gcc-changelog/git_repository.py: New file.
* gcc-changelog/git_update_version.py: New file.
* gcc-changelog/test_email.py: New file.
* gcc-changelog/test_patches.txt: New file.