This patch introduces the mitigation for Straight Line Speculation past
the BLR instruction.
This mitigation replaces each BLR instruction with a BL to a stub, which
uses a BR to jump to the original target. These function stubs are then
followed by a speculation barrier to ensure no straight line speculation
happens after these jumps.
When optimising for speed we use a set of stubs for each function since
this should help the branch predictor make more accurate predictions
about where a stub should branch.
When optimising for size we use one set of stubs for all functions.
This set of stubs can have human-readable names; we use
`__call_indirect_x<N>` for register x<N>.
When BTI branch protection is enabled the BLR instruction can jump to a
`BTI c` instruction using any register, while the BR instruction can
only jump to a `BTI c` instruction using the x16 or x17 registers.
Hence, to ensure this transformation is safe, we move the value of the
original register into x16 and use x16 for the BR.
As an example when optimising for size:
a
BLR x0
instruction would get transformed to something like
BL __call_indirect_x0
where __call_indirect_x0 labels a thunk that contains
__call_indirect_x0:
MOV X16, X0
BR X16
<speculation barrier>
The first version of this patch used local symbols specific to a
compilation unit to try and avoid relocations.
This was mistaken since functions coming from the same compilation unit
can still be in different sections, and the assembler will insert
relocations at jumps between sections.
On any relocation the linker is permitted to emit a veneer to handle
jumps between symbols that are very far apart. The registers x16 and
x17 may be clobbered by these veneers.
Hence the function stubs cannot rely on the values of x16 and x17 being
the same as just before the function stub is called.
The same applies to the hot/cold partitioning of single functions, so
function-local stubs have the same restriction.
This updated version of the patch never emits function stubs for x16 and
x17, and instead forces other registers to be used.
Given the above, there is now no benefit to local symbols (since they
are not enough to avoid dealing with linker intricacies). This patch
now uses global symbols with hidden visibility each stored in their own
COMDAT section. This means stubs can be shared between compilation
units while still avoiding the PLT indirection.
This patch also removes the `__call_indirect_x30` stub (and
function-local equivalent) which would simply jump back to the original
location.
The function-local stubs are emitted to the assembly output file in one
chunk, which means we need not add the speculation barrier directly
after each one.
This is because we know for certain that the instructions directly after
the BR in all but the last function stub will be from another one of
these stubs and hence will not contain a speculation gadget.
Instead we add a speculation barrier at the end of the sequence of
stubs.
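For illustration, a chunk of function-local stubs looks roughly like the
following (the local label names are invented for this sketch, and the stub
bodies are shown in the same form as the __call_indirect_x0 example above):
.L_call_indirect_x0:
MOV X16, X0
BR X16
.L_call_indirect_x1:
MOV X16, X1
BR X16
...
<speculation barrier>
Only the final stub in the chunk is followed by the barrier.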
The global stubs are emitted in COMDAT/.linkonce sections by
themselves so that the linker can remove duplicates from multiple object
files. This means they are not emitted in one chunk, and each one must
include the speculation barrier.
Another difference is that since the global stubs are shared across
compilation units we do not know that all functions will be targeting an
architecture supporting the SB instruction.
Rather than provide multiple stubs for each architecture, we provide a
stub that will work for all architectures -- using the DSB+ISB barrier.
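For example, the shared stub for x0 is roughly
__call_indirect_x0:
MOV X16, X0
BR X16
DSB SY
ISB
which is safe whether or not the target supports the SB instruction.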
This mitigation does not apply to BLR instructions in the following
places:
- Some accesses to thread-local variables use a code sequence with a BLR
instruction. This code sequence is part of the binary interface between
compiler and linker. If this BLR instruction needs to be mitigated, it'd
probably be best to do so in the linker. It seems that the code sequence
for thread-local variable access is unlikely to lead to a Spectre
Revelation Gadget.
- PLT stubs are produced by the linker and each contains a BLR instruction.
It seems that a Spectre Revelation Gadget could appear at most after the
last PLT stub.
Testing:
Bootstrap and regtest on AArch64
(with BOOT_CFLAGS="-mharden-sls=retbr,blr")
Used a temporary hack(1) in gcc-dg.exp to use these options on every
test in the testsuite, a slight modification to emit the speculation
barrier after every function stub, and a script to check that the
output never emitted a BLR, or unmitigated BR or RET instruction.
Similar testing was done with an aarch64-none-elf cross-compiler.
1) Temporary hack emitted a speculation barrier at the end of every stub
function, and used a script to ensure that:
a) Every RET or BR is immediately followed by a speculation barrier.
b) No BLR instruction is emitted by the compiler.
* config/aarch64/aarch64-protos.h (aarch64_indirect_call_asm):
New declaration.
* config/aarch64/aarch64.c (aarch64_regno_regclass): Handle new
stub registers class.
(aarch64_class_max_nregs): Likewise.
(aarch64_register_move_cost): Likewise.
(aarch64_sls_shared_thunks): Global array to store stub labels.
(aarch64_sls_emit_function_stub): New.
(aarch64_create_blr_label): New.
(aarch64_sls_emit_blr_function_thunks): New.
(aarch64_sls_emit_shared_blr_thunks): New.
(aarch64_asm_file_end): New.
(aarch64_indirect_call_asm): New.
(TARGET_ASM_FILE_END): Use aarch64_asm_file_end.
(TARGET_ASM_FUNCTION_EPILOGUE): Use
aarch64_sls_emit_blr_function_thunks.
* config/aarch64/aarch64.h (STB_REGNUM_P): New.
(enum reg_class): Add STUB_REGS class.
(machine_function): Introduce `call_via` array for
function-local stub labels.
* config/aarch64/aarch64.md (*call_insn, *call_value_insn): Use
aarch64_indirect_call_asm to emit code when hardening BLR
instructions.
* config/aarch64/constraints.md (Ucr): New constraint
representing registers for indirect calls. Is GENERAL_REGS
usually, and STUB_REGS when hardening BLR instruction against
SLS.
* config/aarch64/predicates.md (aarch64_general_reg): STUB_REGS class
is also a general register.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c: New test.
* gcc.target/aarch64/sls-mitigation/sls-miti-blr.c: New test.
aarch64: Introduce SLS mitigation for RET and BR instructions
Instructions following RET or BR are not necessarily executed. In order
to avoid speculation past RET and BR we can simply append a speculation
barrier.
Since these speculation barriers will not be architecturally executed,
they are not expected to add a high performance penalty.
The speculation barrier used is SB when targeting architectures that
support it, and DSB SY + ISB otherwise.
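For example, a return is now emitted as
RET
SB
when SB is available, and as
RET
DSB SY
ISB
otherwise; the same barrier follows a bare BR.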
We add tests for each of the cases where such an instruction was seen.
This is implemented by modifying each machine description pattern that
emits either a RET or a BR instruction. We choose not to use something
like `TARGET_ASM_FUNCTION_EPILOGUE` since it does not affect the
`indirect_jump`, `jump`, `sibcall_insn` and `sibcall_value_insn`
patterns and we find it preferable to implement the functionality in the
same way for every pattern.
There is one particular case which is slightly tricky. The
implementation of TARGET_ASM_TRAMPOLINE_TEMPLATE uses a BR which needs
to be mitigated against. The trampoline template is used *once* per
compilation unit, and the TRAMPOLINE_SIZE is exposed to the user via the
builtin macro __LIBGCC_TRAMPOLINE_SIZE__.
In the future we may implement function-specific attributes to turn
hardening on and off on a per-function basis.
The fixed nature of the trampoline described above implies it will be
safer to ensure this speculation barrier is always used.
Testing:
Bootstrap and regtest done on aarch64-none-linux
Used a temporary hack(1) to use these options on every test in the
testsuite and a script to check that the output never emitted an
unmitigated RET or BR.
1) Temporary hack was a change to the testsuite to always use
`-save-temps` and run a script on the assembly output of those
compilations which produced one to ensure every RET or BR is immediately
followed by a speculation barrier.
* config/aarch64/aarch64-protos.h (aarch64_sls_barrier): New.
* config/aarch64/aarch64.c (aarch64_output_casesi): Emit
speculation barrier after BR instruction if need be.
(aarch64_trampoline_init): Handle ptr_mode value & adjust size
of code copied.
(aarch64_sls_barrier): New.
(aarch64_asm_trampoline_template): Add needed barriers.
* config/aarch64/aarch64.h (AARCH64_ISA_SB): New.
(TARGET_SB): New.
(TRAMPOLINE_SIZE): Account for barrier.
* config/aarch64/aarch64.md (indirect_jump, *casesi_dispatch,
simple_return, *do_return, *sibcall_insn, *sibcall_value_insn):
Emit barrier if need be, and account for a possible barrier with the
"sls_length" attribute.
(sls_length): New attribute.
(length): Determine default using any non-default sls_length
value.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c: New test.
* gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c:
New test.
* gcc.target/aarch64/sls-mitigation/sls-mitigation.exp: New file.
* lib/target-supports.exp (check_effective_target_aarch64_asm_sb_ok):
New proc.
aarch64: New Straight Line Speculation (SLS) mitigation flags
Here we introduce the flags that will be used for straight line speculation.
The new flag introduced is `-mharden-sls=`.
This flag can take arguments of `none`, `all`, or a comma-separated list
of one or more of `retbr` or `blr`.
`none` indicates no special mitigation of the straight line speculation
vulnerability.
`all` requests all mitigations currently implemented.
`retbr` requests that the RET and BR instructions have a speculation
barrier inserted after them.
`blr` requests that BLR instructions are replaced by a BL to a function
stub using a BR with a speculation barrier after it.
Setting this on a per-function basis using attributes or the like is not
enabled, but may be in the future.
Neither intrinsic handled the case where the va_list object comes
from a ref parameter.
gcc/d/ChangeLog:
PR d/96140
* intrinsics.cc (expand_intrinsic_vaarg): Handle ref parameters as
arguments to va_arg().
(expand_intrinsic_vastart): Handle ref parameters as arguments to
va_start().
Mark Eggleston [Thu, 11 Jun 2020 10:05:40 +0000 (11:05 +0100)]
Fortran : ICE in gfc_check_pointer_assign PR95612
Output an error if the right-hand value is a zero-sized array or
does not have a symbol tree; otherwise continue checking.
2020-07-27 Steven G. Kargl <kargl@gcc.gnu.org>
gcc/fortran/
PR fortran/95612
* expr.c (gfc_check_pointer_assign): Output an error if rvalue
is a zero-sized array or does not have a symbol tree.
2020-07-27 Mark Eggleston <markeggleston@gcc.gnu.org>
gcc/testsuite/
PR fortran/95612
* gfortran.dg/pr95612.f90: New test.
Thomas Koenig [Thu, 23 Jul 2020 18:26:10 +0000 (20:26 +0200)]
Fix handling of implicit_pure by checking if non-pure procedures are called.
Procedures are marked as implicit_pure if they fulfill the criteria of
pure procedures. In this case, a procedure which called another procedure
was not marked as not being implicit_pure, because the callee had not yet
been marked as not being implicit_pure.
Fixed by iterating over all procedures, marking callers of procedures
which are non-pure and non-implicit_pure as non-implicit_pure, and
repeating this until no more procedures change.
Thomas Koenig [Sun, 19 Jul 2020 15:27:45 +0000 (17:27 +0200)]
Always use name from c_interop_kinds_table for -fc-prototypes.
When a user specified a KIND that was a parameter taking the value
of an iso_c_binding KIND, the code used the name of that parameter
to look up the type name. Corrected by always looking it up in
the table of C interop kinds (which was previously done for
non-C-interop types, anyway).
gcc/fortran/ChangeLog:
PR fortran/96220
* dump-parse-tree.c (get_c_type_name): Always use the entries from
c_interop_kinds_table to find the correct C type.
David Edelsohn [Fri, 6 Mar 2020 01:41:08 +0000 (20:41 -0500)]
rs6000: Correct logic to disable NO_SUM_IN_TOC and NO_FP_IN_TOC [PR94065]
aix61.h, aix71.h and aix72.h intend to prevent SUM_IN_TOC and FP_IN_TOC
when cmodel=large. This patch sets the variables associated with the
target options to 1 to _enable_ NO_SUM_IN_TOC and NO_FP_IN_TOC.
Bootstrapped on powerpc-ibm-aix7.2.0.0
2020-03-06 David Edelsohn <dje.gcc@gmail.com>
PR target/94065
* config/rs6000/aix61.h (TARGET_NO_SUM_IN_TOC): Set to 1 for
cmodel=large.
(TARGET_NO_FP_IN_TOC): Same.
* config/rs6000/aix71.h: Same.
* config/rs6000/aix72.h: Same.
Szabolcs Nagy [Thu, 28 May 2020 09:28:30 +0000 (10:28 +0100)]
doc: Clarify __builtin_return_address [PR94891]
The expected semantics and valid usage of __builtin_return_address are
not clear, since it exposes implementation internals that are normally
not meaningful to portable C code.
This documentation change tries to clarify the semantics in case the
return address is stored in a mangled form. This affects AArch64 when
pointer authentication is used for the return address signing (i.e.
-mbranch-protection=pac-ret).
2020-07-13 Szabolcs Nagy <szabolcs.nagy@arm.com>
gcc/ChangeLog:
PR target/94891
* doc/extend.texi: Update the text for __builtin_return_address.
Note that a mangled address might not even fit into a void *, e.g. with
the AArch64 ilp32 ABI the return address is stored as 64-bit, so the
mangled return address cannot be accessed via _Unwind_GetPtr.
This patch changes the unwinder hooks as follows:
MD_POST_EXTRACT_ROOT_ADDR is removed: root address comes from
__builtin_return_address which is not mangled.
MD_POST_EXTRACT_FRAME_ADDR is renamed to MD_DEMANGLE_RETURN_ADDR,
it now operates on _Unwind_Word instead of void *, so the hook
should work when return address signing is enabled on AArch64 ilp32.
(But for that __builtin_aarch64_autia1716 should be fixed to operate
on 64-bit input instead of a void *.)
MD_POST_FROB_EH_HANDLER_ADDR is removed: it is the responsibility of
__builtin_eh_return to do the mangling if necessary.
Szabolcs Nagy [Thu, 4 Jun 2020 12:42:16 +0000 (13:42 +0100)]
aarch64: fix __builtin_eh_return with pac-ret [PR94891]
Currently __builtin_eh_return takes a signed return address, which can
cause ABI and API issues: 1) pointer representation problems if the
address is passed around before eh return, 2) the source code needs
pac-ret specific changes and needs to know if pac-ret is used in the
current frame, 3) signed address may not be representable as void *
(with the ilp32 ABI).
Using address signing to protect eh return is ineffective because the
instruction sequence in the unwinder that starts from the address
signing and ends with a ret can be used as a return to anywhere gadget.
Using indirect branch instead of ret with bti j landing pads at the
target can reduce the potential of such gadget, which also implies
that __builtin_eh_return should not take a signed address.
This is a big hammer fix to the ABI and API issues: it turns pac-ret
off for the caller completely (not just on the eh return path). To
harden the caller against ROP attacks, it should use an indirect branch
instead of ret; this is not attempted so that the patch remains small and
backportable.
2020-07-13 Szabolcs Nagy <szabolcs.nagy@arm.com>
gcc/ChangeLog:
PR target/94891
* config/aarch64/aarch64.c (aarch64_return_address_signing_enabled):
Disable return address signing if __builtin_eh_return is used.
Szabolcs Nagy [Tue, 2 Jun 2020 15:44:41 +0000 (16:44 +0100)]
aarch64: fix return address access with pac [PR94891][PR94791]
This is a big hammer fix for __builtin_return_address (PR target/94891)
returning signed addresses (sometimes, depending on whether lr happens
to be signed or not at the time of the call, which depends on optimizations),
and similarly -pg may pass signed return address to _mcount
(PR target/94791).
At the time of return address expansion we don't know if it's signed or
not so it is done unconditionally.
Szabolcs Nagy [Thu, 2 Jul 2020 16:12:05 +0000 (17:12 +0100)]
aarch64: Fix BTI support in libitm
sjlj.S did not have the GNU property note markup and the BTI c
instructions that are necessary when it is built with branch
protection.
The notes are only added when libitm is built with branch
protection, because old linkers mishandle the note (merge
them incorrectly or emit warnings); the BTI instructions
are added unconditionally.
2020-07-09 Szabolcs Nagy <szabolcs.nagy@arm.com>
libitm/ChangeLog:
* config/aarch64/sjlj.S: Add BTI marking and related definitions,
and add BTI c to function entries.
Szabolcs Nagy [Thu, 2 Jul 2020 16:11:56 +0000 (17:11 +0100)]
aarch64: Fix BTI support in libgcc [PR96001]
lse.S did not have the GNU property note markup and the BTI c
instructions that are necessary when it is built with branch
protection.
The notes are only added when libgcc is built with branch
protection, because old linkers mishandle the note (merge
them incorrectly or emit warnings); the BTI instructions
are added unconditionally.
Note: BTI c is only necessary at function entry if the function may be
called indirectly; currently the lse functions are not called indirectly,
but BTI is added for ABI reasons, e.g. to allow linkers to later emit
stub code with an indirect jump.
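For illustration, each function entry now begins with something like
foo:
BTI c
...
where the function name is just a placeholder; BTI c makes the entry a
valid landing pad for an indirect BR or BLR, such as one in a
linker-generated stub.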
2020-07-09 Szabolcs Nagy <szabolcs.nagy@arm.com>
libgcc/ChangeLog:
PR target/96001
* config/aarch64/lse.S: Add BTI marking and related definitions,
and add BTI c to function entries.
Harald Anlauf [Thu, 2 Jul 2020 18:41:51 +0000 (20:41 +0200)]
PR fortran/93337 - ICE in gfc_dt_upper_string, at fortran/module.c:441
When declaring a polymorphic variable that is not a dummy, allocatable or
pointer, an ICE occurred due to a NULL pointer dereference. Check for
that situation and punt.
gcc/fortran/
PR fortran/93337
* class.c (gfc_find_derived_vtab): Punt if name is not set.
Will Schmidt [Wed, 24 Jun 2020 18:59:34 +0000 (13:59 -0500)]
Backport to gcc-9
[PATCH, PR target/94954] Fix wrong codegen for vec_pack_to_short_fp32() builtin
Fix codegen for builtin vec_pack_to_short_fp32. This includes adding
a define_insn for xvcvsphp, and adding a new define_expand for
convert_4f32_8f16.
[v2]
Comment on altivec.md "convert_4f32_8f16" enhanced.
Testsuite builtins-1-p9-runnable.c updated.
PR target/94954
2020-07-06 Will Schmidt <will_schmidt@vnet.ibm.com>
gcc/ChangeLog:
* config/rs6000/altivec.h (vec_pack_to_short_fp32): Update.
* config/rs6000/altivec.md (UNSPEC_CONVERT_4F32_8F16): New unspec.
(convert_4f32_8f16): New define_expand.
* config/rs6000/rs6000-builtin.def (convert_4f32_8f16): New builtin define
and overload.
* config/rs6000/rs6000-c.c (P9V_BUILTIN_VEC_CONVERT_4F32_8F16): New
overloaded builtin entry.
* config/rs6000/vsx.md (UNSPEC_VSX_XVCVSPHP): New unspec.
(vsx_xvcvsphp): New define_insn.
PR libstdc++/91807
* include/std/variant
(_Copy_assign_base::operator=(const _Copy_assign_base&)):
Do the move-assignment from a temporary so that the temporary
is constructed with an explicit index.
* testsuite/20_util/variant/91807.cc: New.
Martin Liska [Thu, 2 Jul 2020 08:51:06 +0000 (10:51 +0200)]
gcc-changelog: sync from master.
contrib/ChangeLog:
* gcc-changelog/git_check_commit.py: New file.
* gcc-changelog/git_commit.py: New file.
* gcc-changelog/git_email.py: New file.
* gcc-changelog/git_repository.py: New file.
* gcc-changelog/git_update_version.py: New file.
* gcc-changelog/test_email.py: New file.
* gcc-changelog/test_patches.txt: New file.
Alex Coplan [Mon, 18 May 2020 15:29:04 +0000 (16:29 +0100)]
arm: Don't generate invalid LDRD insns
This fixes a bug in the arm backend where GCC generates invalid LDRD
instructions. The LDRD instruction requires the first transfer register to be
even, but GCC attempts to use odd registers here. For example, with the
following C code:
struct c {
double a;
} __attribute((aligned)) __attribute((packed));
struct c d;
struct c f(struct c);
void e() { f(d); }
The struct d is passed in registers r1 and r2 to the function f, and GCC
attempted to do this with an LDRD instruction when compiling with
-march=armv7-a on a soft-float toolchain.
The fix is analogous to the corresponding one for STRD in the same function:
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=52057dc4ac5295caebf83147f688d769c93cbc8d
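For example, loading the pair r1/r2 as
LDRD r1, r2, [r3]
is invalid because the first transfer register must be even; a valid way
to perform the same load is two single loads such as
LDR r1, [r3]
LDR r2, [r3, #4]
(the base register and offset above are purely illustrative).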
gcc/:
* config/arm/arm.c (output_move_double): Fix codegen when loading into
a register pair with an odd base register.
gcc/testsuite/:
* gcc.c-torture/compile/packed-aligned-1.c: New test.
* gcc.c-torture/execute/packed-aligned.c: New test.
Thomas Koenig [Mon, 29 Jun 2020 21:11:06 +0000 (23:11 +0200)]
Do not generate recursion check for compiler-generated procedures.
This one-line fix removes a check for recursion for procedures
which are compiler-generated, such as finalizers or deallocation.
These need to be recursive, even if the user code should not be.
gcc/fortran/ChangeLog:
PR fortran/95743
* trans-decl.c (gfc_generate_function_code): Do not generate
recursion check for compiler-generated procedures.
Harald Anlauf [Wed, 24 Jun 2020 20:44:11 +0000 (22:44 +0200)]
Revert "PR fortran/95689 - ICE in check_sym_interfaces, at fortran/interface.c:2015"
With submodules, name mangling of interfaces may result in long internal
symbols overflowing an internal buffer. We now check that we do not
exceed the enlarged buffer size.
gcc/fortran/
PR fortran/95689
* interface.c (check_sym_interfaces): Enlarge temporary buffer,
and add check on length on mangled name to prevent overflow.
gcc/testsuite/
PR fortran/95689
* gfortran.dg/pr95689.f90: New test.
Jonathan Wakely [Wed, 24 Jun 2020 10:45:01 +0000 (11:45 +0100)]
libstdc++: Fix std::from_chars to ignore leading zeros in base 2
The parser for binary numbers returned an error if the entire string
contained more digits than the result type has bits. Leading zeros
should be ignored.
libstdc++-v3/ChangeLog:
* include/std/charconv (__from_chars_binary): Ignore leading zeros.
* testsuite/20_util/from_chars/1.cc: Check "0x1" for all bases,
not just 10 and 16.
* testsuite/20_util/from_chars/3.cc: New test.
Harald Anlauf [Sat, 20 Jun 2020 14:09:45 +0000 (16:09 +0200)]
PR fortran/95689 - ICE in check_sym_interfaces, at fortran/interface.c:2015
With submodules, name mangling of interfaces may result in long internal
symbols overflowing an internal buffer. We now check that we do not
exceed the enlarged buffer size.
gcc/fortran/
PR fortran/95689
* interface.c (check_sym_interfaces): Enlarge temporary buffer,
and add check on length on mangled name to prevent overflow.
Harald Anlauf [Sat, 20 Jun 2020 14:05:13 +0000 (16:05 +0200)]
PR fortran/95587 - ICE in gfc_target_encode_expr, at fortran/target-memory.c:362
EQUIVALENCE objects are subject to constraints listed in the Fortran 2018
standard, section 8.10.1.1. These constraints must also be checked for
CLASS variables.
gcc/fortran/
PR fortran/95587
* match.c (gfc_match_equivalence): Check constraints on
EQUIVALENCE objects also for CLASS variables.
Eric Botcazou [Tue, 23 Jun 2020 16:33:28 +0000 (18:33 +0200)]
Fix memory corruption with vector and variant record
The problem is that Has_Constrained_Partial_View must be tested on the
base type of the designated type of an allocator.
gcc/ada/ChangeLog:
* gcc-interface/trans.c (gnat_to_gnu) <N_Allocator>: Minor tweaks.
Call Has_Constrained_Partial_View on base type of designated type.
Mark Eggleston [Mon, 22 Jun 2020 12:35:01 +0000 (13:35 +0100)]
Fortran : ICE in resolve_fl_procedure PR95708
Now issues an error "Intrinsic procedure 'num_images' not
allowed in PROCEDURE" instead of an ICE.
2020-06-22 Steven G. Kargl <kargl@gcc.gnu.org>
gcc/fortran/
PR fortran/95708
* intrinsic.c (add_functions): Replace CLASS_INQUIRY with
CLASS_TRANSFORMATIONAL for intrinsic num_images.
(make_generic): Replace ACTUAL_NO with ACTUAL_YES for
intrinsic team_number.
* resolve.c (resolve_fl_procedure): Check pointer ts.u.derived
exists before using it.
2020-06-22 Mark Eggleston <markeggleston@gcc.gnu.org>
gcc/testsuite/
PR fortran/95708
* gfortran.dg/pr95708.f90: New test.