From: Greg Kroah-Hartman Date: Wed, 9 Mar 2022 15:15:44 +0000 (+0100) Subject: 4.9-stable patches X-Git-Tag: v4.9.306~42 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=862b15ea565117e3a2f1054d2c7ea91c3de786cb;p=thirdparty%2Fkernel%2Fstable-queue.git 4.9-stable patches added patches: documentation-add-section-about-cpu-vulnerabilities-for-spectre.patch documentation-add-swapgs-description-to-the-spectre-v1-documentation.patch documentation-hw-vuln-update-spectre-doc.patch documentation-refer-to-config-randomize_base-for-kernel-address-space-randomization.patch x86-bugs-unconditionally-allow-spectre_v2-retpoline-amd.patch x86-retpoline-make-config_retpoline-depend-on-compiler-support.patch x86-retpoline-remove-minimal-retpoline-support.patch x86-speculation-add-eibrs-retpoline-options.patch x86-speculation-add-retpoline_amd-support-to-the-inline-asm-call_nospec-variant.patch x86-speculation-include-unprivileged-ebpf-status-in-spectre-v2-mitigation-reporting.patch x86-speculation-merge-one-test-in-spectre_v2_user_select_mitigation.patch x86-speculation-rename-retpoline_amd-to-retpoline_lfence.patch x86-speculation-update-link-to-amd-speculation-whitepaper.patch x86-speculation-use-generic-retpoline-by-default-on-amd.patch x86-speculation-warn-about-eibrs-lfence-unprivileged-ebpf-smt.patch x86-speculation-warn-about-spectre-v2-lfence-mitigation.patch --- diff --git a/queue-4.9/documentation-add-section-about-cpu-vulnerabilities-for-spectre.patch b/queue-4.9/documentation-add-section-about-cpu-vulnerabilities-for-spectre.patch new file mode 100644 index 00000000000..d4d7d3d30b5 --- /dev/null +++ b/queue-4.9/documentation-add-section-about-cpu-vulnerabilities-for-spectre.patch @@ -0,0 +1,745 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Tim Chen +Date: Thu, 20 Jun 2019 16:10:50 -0700 +Subject: Documentation: Add section about CPU vulnerabilities for Spectre + +From: Tim Chen + +commit 6e88559470f581741bcd0f2794f9054814ac9740 upstream. + +Add documentation for Spectre vulnerability and the mitigation mechanisms: + +- Explain the problem and risks +- Document the mitigation mechanisms +- Document the command line controls +- Document the sysfs files + +Co-developed-by: Andi Kleen +Signed-off-by: Andi Kleen +Co-developed-by: Tim Chen +Signed-off-by: Tim Chen +Reviewed-by: Randy Dunlap +Reviewed-by: Thomas Gleixner +Cc: stable@vger.kernel.org +Signed-off-by: Jonathan Corbet +[bwh: Backported to 4.9: + - Drop chnages in spec_ctrl.rst, which is a plain-text document here + - Adjust filenames and references to spec_ctrl.rst] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + Documentation/hw-vuln/index.rst | 1 + Documentation/hw-vuln/spectre.rst | 697 ++++++++++++++++++++++++++++++++++++++ + 2 files changed, 698 insertions(+) + create mode 100644 Documentation/hw-vuln/spectre.rst + +--- a/Documentation/hw-vuln/index.rst ++++ b/Documentation/hw-vuln/index.rst +@@ -9,6 +9,7 @@ are configurable at compile, boot or run + .. toctree:: + :maxdepth: 1 + ++ spectre + l1tf + mds + tsx_async_abort +--- /dev/null ++++ b/Documentation/hw-vuln/spectre.rst +@@ -0,0 +1,697 @@ ++.. SPDX-License-Identifier: GPL-2.0 ++ ++Spectre Side Channels ++===================== ++ ++Spectre is a class of side channel attacks that exploit branch prediction ++and speculative execution on modern CPUs to read memory, possibly ++bypassing access controls. Speculative execution side channel exploits ++do not modify memory but attempt to infer privileged data in the memory. 
++ ++This document covers Spectre variant 1 and Spectre variant 2. ++ ++Affected processors ++------------------- ++ ++Speculative execution side channel methods affect a wide range of modern ++high performance processors, since most modern high speed processors ++use branch prediction and speculative execution. ++ ++The following CPUs are vulnerable: ++ ++ - Intel Core, Atom, Pentium, and Xeon processors ++ ++ - AMD Phenom, EPYC, and Zen processors ++ ++ - IBM POWER and zSeries processors ++ ++ - Higher end ARM processors ++ ++ - Apple CPUs ++ ++ - Higher end MIPS CPUs ++ ++ - Likely most other high performance CPUs. Contact your CPU vendor for details. ++ ++Whether a processor is affected or not can be read out from the Spectre ++vulnerability files in sysfs. See :ref:`spectre_sys_info`. ++ ++Related CVEs ++------------ ++ ++The following CVE entries describe Spectre variants: ++ ++ ============= ======================= ================= ++ CVE-2017-5753 Bounds check bypass Spectre variant 1 ++ CVE-2017-5715 Branch target injection Spectre variant 2 ++ ============= ======================= ================= ++ ++Problem ++------- ++ ++CPUs use speculative operations to improve performance. That may leave ++traces of memory accesses or computations in the processor's caches, ++buffers, and branch predictors. Malicious software may be able to ++influence the speculative execution paths, and then use the side effects ++of the speculative execution in the CPUs' caches and buffers to infer ++privileged data touched during the speculative execution. ++ ++Spectre variant 1 attacks take advantage of speculative execution of ++conditional branches, while Spectre variant 2 attacks use speculative ++execution of indirect branches to leak privileged memory. ++See :ref:`[1] ` :ref:`[5] ` :ref:`[7] ` ++:ref:`[10] ` :ref:`[11] `. ++ ++Spectre variant 1 (Bounds Check Bypass) ++--------------------------------------- ++ ++The bounds check bypass attack :ref:`[2] ` takes advantage ++of speculative execution that bypasses conditional branch instructions ++used for memory access bounds check (e.g. checking if the index of an ++array results in memory access within a valid range). This results in ++memory accesses to invalid memory (with out-of-bound index) that are ++done speculatively before validation checks resolve. Such speculative ++memory accesses can leave side effects, creating side channels which ++leak information to the attacker. ++ ++There are some extensions of Spectre variant 1 attacks for reading data ++over the network, see :ref:`[12] `. However such attacks ++are difficult, low bandwidth, fragile, and are considered low risk. ++ ++Spectre variant 2 (Branch Target Injection) ++------------------------------------------- ++ ++The branch target injection attack takes advantage of speculative ++execution of indirect branches :ref:`[3] `. The indirect ++branch predictors inside the processor used to guess the target of ++indirect branches can be influenced by an attacker, causing gadget code ++to be speculatively executed, thus exposing sensitive data touched by ++the victim. The side effects left in the CPU's caches during speculative ++execution can be measured to infer data values. ++ ++.. _poison_btb: ++ ++In Spectre variant 2 attacks, the attacker can steer speculative indirect ++branches in the victim to gadget code by poisoning the branch target ++buffer of a CPU used for predicting indirect branch addresses. 
Such ++poisoning could be done by indirect branching into existing code, ++with the address offset of the indirect branch under the attacker's ++control. Since the branch prediction on impacted hardware does not ++fully disambiguate branch address and uses the offset for prediction, ++this could cause privileged code's indirect branch to jump to a gadget ++code with the same offset. ++ ++The most useful gadgets take an attacker-controlled input parameter (such ++as a register value) so that the memory read can be controlled. Gadgets ++without input parameters might be possible, but the attacker would have ++very little control over what memory can be read, reducing the risk of ++the attack revealing useful data. ++ ++One other variant 2 attack vector is for the attacker to poison the ++return stack buffer (RSB) :ref:`[13] ` to cause speculative ++subroutine return instruction execution to go to a gadget. An attacker's ++imbalanced subroutine call instructions might "poison" entries in the ++return stack buffer which are later consumed by a victim's subroutine ++return instructions. This attack can be mitigated by flushing the return ++stack buffer on context switch, or virtual machine (VM) exit. ++ ++On systems with simultaneous multi-threading (SMT), attacks are possible ++from the sibling thread, as level 1 cache and branch target buffer ++(BTB) may be shared between hardware threads in a CPU core. A malicious ++program running on the sibling thread may influence its peer's BTB to ++steer its indirect branch speculations to gadget code, and measure the ++speculative execution's side effects left in level 1 cache to infer the ++victim's data. ++ ++Attack scenarios ++---------------- ++ ++The following list of attack scenarios have been anticipated, but may ++not cover all possible attack vectors. ++ ++1. A user process attacking the kernel ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ The attacker passes a parameter to the kernel via a register or ++ via a known address in memory during a syscall. Such parameter may ++ be used later by the kernel as an index to an array or to derive ++ a pointer for a Spectre variant 1 attack. The index or pointer ++ is invalid, but bound checks are bypassed in the code branch taken ++ for speculative execution. This could cause privileged memory to be ++ accessed and leaked. ++ ++ For kernel code that has been identified where data pointers could ++ potentially be influenced for Spectre attacks, new "nospec" accessor ++ macros are used to prevent speculative loading of data. ++ ++ Spectre variant 2 attacker can :ref:`poison ` the branch ++ target buffer (BTB) before issuing syscall to launch an attack. ++ After entering the kernel, the kernel could use the poisoned branch ++ target buffer on indirect jump and jump to gadget code in speculative ++ execution. ++ ++ If an attacker tries to control the memory addresses leaked during ++ speculative execution, he would also need to pass a parameter to the ++ gadget, either through a register or a known address in memory. After ++ the gadget has executed, he can measure the side effect. ++ ++ The kernel can protect itself against consuming poisoned branch ++ target buffer entries by using return trampolines (also known as ++ "retpoline") :ref:`[3] ` :ref:`[9] ` for all ++ indirect branches. Return trampolines trap speculative execution paths ++ to prevent jumping to gadget code during speculative execution. 
++ x86 CPUs with Enhanced Indirect Branch Restricted Speculation ++ (Enhanced IBRS) available in hardware should use the feature to ++ mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is ++ more efficient than retpoline. ++ ++ There may be gadget code in firmware which could be exploited with ++ Spectre variant 2 attack by a rogue user process. To mitigate such ++ attacks on x86, Indirect Branch Restricted Speculation (IBRS) feature ++ is turned on before the kernel invokes any firmware code. ++ ++2. A user process attacking another user process ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ A malicious user process can try to attack another user process, ++ either via a context switch on the same hardware thread, or from the ++ sibling hyperthread sharing a physical processor core on simultaneous ++ multi-threading (SMT) system. ++ ++ Spectre variant 1 attacks generally require passing parameters ++ between the processes, which needs a data passing relationship, such ++ as remote procedure calls (RPC). Those parameters are used in gadget ++ code to derive invalid data pointers accessing privileged memory in ++ the attacked process. ++ ++ Spectre variant 2 attacks can be launched from a rogue process by ++ :ref:`poisoning ` the branch target buffer. This can ++ influence the indirect branch targets for a victim process that either ++ runs later on the same hardware thread, or running concurrently on ++ a sibling hardware thread sharing the same physical core. ++ ++ A user process can protect itself against Spectre variant 2 attacks ++ by using the prctl() syscall to disable indirect branch speculation ++ for itself. An administrator can also cordon off an unsafe process ++ from polluting the branch target buffer by disabling the process's ++ indirect branch speculation. This comes with a performance cost ++ from not using indirect branch speculation and clearing the branch ++ target buffer. When SMT is enabled on x86, for a process that has ++ indirect branch speculation disabled, Single Threaded Indirect Branch ++ Predictors (STIBP) :ref:`[4] ` are turned on to prevent the ++ sibling thread from controlling branch target buffer. In addition, ++ the Indirect Branch Prediction Barrier (IBPB) is issued to clear the ++ branch target buffer when context switching to and from such process. ++ ++ On x86, the return stack buffer is stuffed on context switch. ++ This prevents the branch target buffer from being used for branch ++ prediction when the return stack buffer underflows while switching to ++ a deeper call stack. Any poisoned entries in the return stack buffer ++ left by the previous process will also be cleared. ++ ++ User programs should use address space randomization to make attacks ++ more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2). ++ ++3. A virtualized guest attacking the host ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ The attack mechanism is similar to how user processes attack the ++ kernel. The kernel is entered via hyper-calls or other virtualization ++ exit paths. ++ ++ For Spectre variant 1 attacks, rogue guests can pass parameters ++ (e.g. in registers) via hyper-calls to derive invalid pointers to ++ speculate into privileged memory after entering the kernel. For places ++ where such kernel code has been identified, nospec accessor macros ++ are used to stop speculative memory access. 
++ ++ For Spectre variant 2 attacks, rogue guests can :ref:`poison ++ ` the branch target buffer or return stack buffer, causing ++ the kernel to jump to gadget code in the speculative execution paths. ++ ++ To mitigate variant 2, the host kernel can use return trampolines ++ for indirect branches to bypass the poisoned branch target buffer, ++ and flushing the return stack buffer on VM exit. This prevents rogue ++ guests from affecting indirect branching in the host kernel. ++ ++ To protect host processes from rogue guests, host processes can have ++ indirect branch speculation disabled via prctl(). The branch target ++ buffer is cleared before context switching to such processes. ++ ++4. A virtualized guest attacking other guest ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ A rogue guest may attack another guest to get data accessible by the ++ other guest. ++ ++ Spectre variant 1 attacks are possible if parameters can be passed ++ between guests. This may be done via mechanisms such as shared memory ++ or message passing. Such parameters could be used to derive data ++ pointers to privileged data in guest. The privileged data could be ++ accessed by gadget code in the victim's speculation paths. ++ ++ Spectre variant 2 attacks can be launched from a rogue guest by ++ :ref:`poisoning ` the branch target buffer or the return ++ stack buffer. Such poisoned entries could be used to influence ++ speculation execution paths in the victim guest. ++ ++ Linux kernel mitigates attacks to other guests running in the same ++ CPU hardware thread by flushing the return stack buffer on VM exit, ++ and clearing the branch target buffer before switching to a new guest. ++ ++ If SMT is used, Spectre variant 2 attacks from an untrusted guest ++ in the sibling hyperthread can be mitigated by the administrator, ++ by turning off the unsafe guest's indirect branch speculation via ++ prctl(). A guest can also protect itself by turning on microcode ++ based mitigations (such as IBPB or STIBP on x86) within the guest. ++ ++.. _spectre_sys_info: ++ ++Spectre system information ++-------------------------- ++ ++The Linux kernel provides a sysfs interface to enumerate the current ++mitigation status of the system for Spectre: whether the system is ++vulnerable, and which mitigations are active. ++ ++The sysfs file showing Spectre variant 1 mitigation status is: ++ ++ /sys/devices/system/cpu/vulnerabilities/spectre_v1 ++ ++The possible values in this file are: ++ ++ ======================================= ================================= ++ 'Mitigation: __user pointer sanitation' Protection in kernel on a case by ++ case base with explicit pointer ++ sanitation. ++ ======================================= ================================= ++ ++However, the protections are put in place on a case by case basis, ++and there is no guarantee that all possible attack vectors for Spectre ++variant 1 are covered. ++ ++The spectre_v2 kernel file reports if the kernel has been compiled with ++retpoline mitigation or if the CPU has hardware mitigation, and if the ++CPU has support for additional process-specific mitigation. ++ ++This file also reports CPU features enabled by microcode to mitigate ++attack between user processes: ++ ++1. Indirect Branch Prediction Barrier (IBPB) to add additional ++ isolation between processes of different users. ++2. Single Thread Indirect Branch Predictors (STIBP) to add additional ++ isolation between CPU threads running on the same core. 
++ ++These CPU features may impact performance when used and can be enabled ++per process on a case-by-case base. ++ ++The sysfs file showing Spectre variant 2 mitigation status is: ++ ++ /sys/devices/system/cpu/vulnerabilities/spectre_v2 ++ ++The possible values in this file are: ++ ++ - Kernel status: ++ ++ ==================================== ================================= ++ 'Not affected' The processor is not vulnerable ++ 'Vulnerable' Vulnerable, no mitigation ++ 'Mitigation: Full generic retpoline' Software-focused mitigation ++ 'Mitigation: Full AMD retpoline' AMD-specific software mitigation ++ 'Mitigation: Enhanced IBRS' Hardware-focused mitigation ++ ==================================== ================================= ++ ++ - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is ++ used to protect against Spectre variant 2 attacks when calling firmware (x86 only). ++ ++ ========== ============================================================= ++ 'IBRS_FW' Protection against user program attacks when calling firmware ++ ========== ============================================================= ++ ++ - Indirect branch prediction barrier (IBPB) status for protection between ++ processes of different users. This feature can be controlled through ++ prctl() per process, or through kernel command line options. This is ++ an x86 only feature. For more details see below. ++ ++ =================== ======================================================== ++ 'IBPB: disabled' IBPB unused ++ 'IBPB: always-on' Use IBPB on all tasks ++ 'IBPB: conditional' Use IBPB on SECCOMP or indirect branch restricted tasks ++ =================== ======================================================== ++ ++ - Single threaded indirect branch prediction (STIBP) status for protection ++ between different hyper threads. This feature can be controlled through ++ prctl per process, or through kernel command line options. This is x86 ++ only feature. For more details see below. ++ ++ ==================== ======================================================== ++ 'STIBP: disabled' STIBP unused ++ 'STIBP: forced' Use STIBP on all tasks ++ 'STIBP: conditional' Use STIBP on SECCOMP or indirect branch restricted tasks ++ ==================== ======================================================== ++ ++ - Return stack buffer (RSB) protection status: ++ ++ ============= =========================================== ++ 'RSB filling' Protection of RSB on context switch enabled ++ ============= =========================================== ++ ++Full mitigation might require a microcode update from the CPU ++vendor. When the necessary microcode is not available, the kernel will ++report vulnerability. ++ ++Turning on mitigation for Spectre variant 1 and Spectre variant 2 ++----------------------------------------------------------------- ++ ++1. Kernel mitigation ++^^^^^^^^^^^^^^^^^^^^ ++ ++ For the Spectre variant 1, vulnerable kernel code (as determined ++ by code audit or scanning tools) is annotated on a case by case ++ basis to use nospec accessor macros for bounds clipping :ref:`[2] ++ ` to avoid any usable disclosure gadgets. However, it may ++ not cover all attack vectors for Spectre variant 1. ++ ++ For Spectre variant 2 mitigation, the compiler turns indirect calls or ++ jumps in the kernel into equivalent return trampolines (retpolines) ++ :ref:`[3] ` :ref:`[9] ` to go to the target ++ addresses. 
Speculative execution paths under retpolines are trapped ++ in an infinite loop to prevent any speculative execution jumping to ++ a gadget. ++ ++ To turn on retpoline mitigation on a vulnerable CPU, the kernel ++ needs to be compiled with a gcc compiler that supports the ++ -mindirect-branch=thunk-extern -mindirect-branch-register options. ++ If the kernel is compiled with a Clang compiler, the compiler needs ++ to support -mretpoline-external-thunk option. The kernel config ++ CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with ++ the latest updated microcode. ++ ++ On Intel Skylake-era systems the mitigation covers most, but not all, ++ cases. See :ref:`[3] ` for more details. ++ ++ On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced ++ IBRS on x86), retpoline is automatically disabled at run time. ++ ++ The retpoline mitigation is turned on by default on vulnerable ++ CPUs. It can be forced on or off by the administrator ++ via the kernel command line and sysfs control files. See ++ :ref:`spectre_mitigation_control_command_line`. ++ ++ On x86, indirect branch restricted speculation is turned on by default ++ before invoking any firmware code to prevent Spectre variant 2 exploits ++ using the firmware. ++ ++ Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y ++ and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes ++ attacks on the kernel generally more difficult. ++ ++2. User program mitigation ++^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ User programs can mitigate Spectre variant 1 using LFENCE or "bounds ++ clipping". For more details see :ref:`[2] `. ++ ++ For Spectre variant 2 mitigation, individual user programs ++ can be compiled with return trampolines for indirect branches. ++ This protects them from consuming poisoned entries in the branch ++ target buffer left by malicious software. Alternatively, the ++ programs can disable their indirect branch speculation via prctl() ++ (See Documentation/spec_ctrl.txt). ++ On x86, this will turn on STIBP to guard against attacks from the ++ sibling thread when the user program is running, and use IBPB to ++ flush the branch target buffer when switching to/from the program. ++ ++ Restricting indirect branch speculation on a user program will ++ also prevent the program from launching a variant 2 attack ++ on x86. All sand-boxed SECCOMP programs have indirect branch ++ speculation restricted by default. Administrators can change ++ that behavior via the kernel command line and sysfs control files. ++ See :ref:`spectre_mitigation_control_command_line`. ++ ++ Programs that disable their indirect branch speculation will have ++ more overhead and run slower. ++ ++ User programs should use address space randomization ++ (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more ++ difficult. ++ ++3. VM mitigation ++^^^^^^^^^^^^^^^^ ++ ++ Within the kernel, Spectre variant 1 attacks from rogue guests are ++ mitigated on a case by case basis in VM exit paths. Vulnerable code ++ uses nospec accessor macros for "bounds clipping", to avoid any ++ usable disclosure gadgets. However, this may not cover all variant ++ 1 attack vectors. ++ ++ For Spectre variant 2 attacks from rogue guests to the kernel, the ++ Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of ++ poisoned entries in branch target buffer left by rogue guests. 
It also ++ flushes the return stack buffer on every VM exit to prevent a return ++ stack buffer underflow so poisoned branch target buffer could be used, ++ or attacker guests leaving poisoned entries in the return stack buffer. ++ ++ To mitigate guest-to-guest attacks in the same CPU hardware thread, ++ the branch target buffer is sanitized by flushing before switching ++ to a new guest on a CPU. ++ ++ The above mitigations are turned on by default on vulnerable CPUs. ++ ++ To mitigate guest-to-guest attacks from sibling thread when SMT is ++ in use, an untrusted guest running in the sibling thread can have ++ its indirect branch speculation disabled by administrator via prctl(). ++ ++ The kernel also allows guests to use any microcode based mitigation ++ they choose to use (such as IBPB or STIBP on x86) to protect themselves. ++ ++.. _spectre_mitigation_control_command_line: ++ ++Mitigation control on the kernel command line ++--------------------------------------------- ++ ++Spectre variant 2 mitigation can be disabled or force enabled at the ++kernel command line. ++ ++ nospectre_v2 ++ ++ [X86] Disable all mitigations for the Spectre variant 2 ++ (indirect branch prediction) vulnerability. System may ++ allow data leaks with this option, which is equivalent ++ to spectre_v2=off. ++ ++ ++ spectre_v2= ++ ++ [X86] Control mitigation of Spectre variant 2 ++ (indirect branch speculation) vulnerability. ++ The default operation protects the kernel from ++ user space attacks. ++ ++ on ++ unconditionally enable, implies ++ spectre_v2_user=on ++ off ++ unconditionally disable, implies ++ spectre_v2_user=off ++ auto ++ kernel detects whether your CPU model is ++ vulnerable ++ ++ Selecting 'on' will, and 'auto' may, choose a ++ mitigation method at run time according to the ++ CPU, the available microcode, the setting of the ++ CONFIG_RETPOLINE configuration option, and the ++ compiler with which the kernel was built. ++ ++ Selecting 'on' will also enable the mitigation ++ against user space to user space task attacks. ++ ++ Selecting 'off' will disable both the kernel and ++ the user space protections. ++ ++ Specific mitigations can also be selected manually: ++ ++ retpoline ++ replace indirect branches ++ retpoline,generic ++ google's original retpoline ++ retpoline,amd ++ AMD-specific minimal thunk ++ ++ Not specifying this option is equivalent to ++ spectre_v2=auto. ++ ++For user space mitigation: ++ ++ spectre_v2_user= ++ ++ [X86] Control mitigation of Spectre variant 2 ++ (indirect branch speculation) vulnerability between ++ user space tasks ++ ++ on ++ Unconditionally enable mitigations. Is ++ enforced by spectre_v2=on ++ ++ off ++ Unconditionally disable mitigations. Is ++ enforced by spectre_v2=off ++ ++ prctl ++ Indirect branch speculation is enabled, ++ but mitigation can be enabled via prctl ++ per thread. The mitigation control state ++ is inherited on fork. ++ ++ prctl,ibpb ++ Like "prctl" above, but only STIBP is ++ controlled per thread. IBPB is issued ++ always when switching between different user ++ space processes. ++ ++ seccomp ++ Same as "prctl" above, but all seccomp ++ threads will enable the mitigation unless ++ they explicitly opt out. ++ ++ seccomp,ibpb ++ Like "seccomp" above, but only STIBP is ++ controlled per thread. IBPB is issued ++ always when switching between different ++ user space processes. ++ ++ auto ++ Kernel selects the mitigation depending on ++ the available CPU features and vulnerability. 
++ ++ Default mitigation: ++ If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl" ++ ++ Not specifying this option is equivalent to ++ spectre_v2_user=auto. ++ ++ In general the kernel by default selects ++ reasonable mitigations for the current CPU. To ++ disable Spectre variant 2 mitigations, boot with ++ spectre_v2=off. Spectre variant 1 mitigations ++ cannot be disabled. ++ ++Mitigation selection guide ++-------------------------- ++ ++1. Trusted userspace ++^^^^^^^^^^^^^^^^^^^^ ++ ++ If all userspace applications are from trusted sources and do not ++ execute externally supplied untrusted code, then the mitigations can ++ be disabled. ++ ++2. Protect sensitive programs ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ For security-sensitive programs that have secrets (e.g. crypto ++ keys), protection against Spectre variant 2 can be put in place by ++ disabling indirect branch speculation when the program is running ++ (See Documentation/spec_ctrl.txt). ++ ++3. Sandbox untrusted programs ++^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ++ ++ Untrusted programs that could be a source of attacks can be cordoned ++ off by disabling their indirect branch speculation when they are run ++ (See Documentation/spec_ctrl.txt). ++ This prevents untrusted programs from polluting the branch target ++ buffer. All programs running in SECCOMP sandboxes have indirect ++ branch speculation restricted by default. This behavior can be ++ changed via the kernel command line and sysfs control files. See ++ :ref:`spectre_mitigation_control_command_line`. ++ ++3. High security mode ++^^^^^^^^^^^^^^^^^^^^^ ++ ++ All Spectre variant 2 mitigations can be forced on ++ at boot time for all programs (See the "on" option in ++ :ref:`spectre_mitigation_control_command_line`). This will add ++ overhead as indirect branch speculations for all programs will be ++ restricted. ++ ++ On x86, branch target buffer will be flushed with IBPB when switching ++ to a new program. STIBP is left on all the time to protect programs ++ against variant 2 attacks originating from programs running on ++ sibling threads. ++ ++ Alternatively, STIBP can be used only when running programs ++ whose indirect branch speculation is explicitly disabled, ++ while IBPB is still used all the time when switching to a new ++ program to clear the branch target buffer (See "ibpb" option in ++ :ref:`spectre_mitigation_control_command_line`). This "ibpb" option ++ has less performance cost than the "on" option, which leaves STIBP ++ on all the time. ++ ++References on Spectre ++--------------------- ++ ++Intel white papers: ++ ++.. _spec_ref1: ++ ++[1] `Intel analysis of speculative execution side channels `_. ++ ++.. _spec_ref2: ++ ++[2] `Bounds check bypass `_. ++ ++.. _spec_ref3: ++ ++[3] `Deep dive: Retpoline: A branch target injection mitigation `_. ++ ++.. _spec_ref4: ++ ++[4] `Deep Dive: Single Thread Indirect Branch Predictors `_. ++ ++AMD white papers: ++ ++.. _spec_ref5: ++ ++[5] `AMD64 technology indirect branch control extension `_. ++ ++.. _spec_ref6: ++ ++[6] `Software techniques for managing speculation on AMD processors `_. ++ ++ARM white papers: ++ ++.. _spec_ref7: ++ ++[7] `Cache speculation side-channels `_. ++ ++.. _spec_ref8: ++ ++[8] `Cache speculation issues update `_. ++ ++Google white paper: ++ ++.. _spec_ref9: ++ ++[9] `Retpoline: a software construct for preventing branch-target-injection `_. ++ ++MIPS white paper: ++ ++.. _spec_ref10: ++ ++[10] `MIPS: response on speculative execution and side channel vulnerabilities `_. ++ ++Academic papers: ++ ++.. 
_spec_ref11: ++ ++[11] `Spectre Attacks: Exploiting Speculative Execution `_. ++ ++.. _spec_ref12: ++ ++[12] `NetSpectre: Read Arbitrary Memory over Network `_. ++ ++.. _spec_ref13: ++ ++[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer `_. diff --git a/queue-4.9/documentation-add-swapgs-description-to-the-spectre-v1-documentation.patch b/queue-4.9/documentation-add-swapgs-description-to-the-spectre-v1-documentation.patch new file mode 100644 index 00000000000..ed030538e68 --- /dev/null +++ b/queue-4.9/documentation-add-swapgs-description-to-the-spectre-v1-documentation.patch @@ -0,0 +1,168 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Josh Poimboeuf +Date: Sat, 3 Aug 2019 21:21:54 +0200 +Subject: Documentation: Add swapgs description to the Spectre v1 documentation + +From: Josh Poimboeuf + +commit 4c92057661a3412f547ede95715641d7ee16ddac upstream. + +Add documentation to the Spectre document about the new swapgs variant of +Spectre v1. + +Signed-off-by: Josh Poimboeuf +Signed-off-by: Thomas Gleixner +[bwh: Backported to 4.9: adjust filename] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + Documentation/hw-vuln/spectre.rst | 88 ++++++++++++++++++++++++++++++++++---- + 1 file changed, 80 insertions(+), 8 deletions(-) + +--- a/Documentation/hw-vuln/spectre.rst ++++ b/Documentation/hw-vuln/spectre.rst +@@ -41,10 +41,11 @@ Related CVEs + + The following CVE entries describe Spectre variants: + +- ============= ======================= ================= ++ ============= ======================= ========================== + CVE-2017-5753 Bounds check bypass Spectre variant 1 + CVE-2017-5715 Branch target injection Spectre variant 2 +- ============= ======================= ================= ++ CVE-2019-1125 Spectre v1 swapgs Spectre variant 1 (swapgs) ++ ============= ======================= ========================== + + Problem + ------- +@@ -78,6 +79,13 @@ There are some extensions of Spectre var + over the network, see :ref:`[12] `. However such attacks + are difficult, low bandwidth, fragile, and are considered low risk. + ++Note that, despite "Bounds Check Bypass" name, Spectre variant 1 is not ++only about user-controlled array bounds checks. It can affect any ++conditional checks. The kernel entry code interrupt, exception, and NMI ++handlers all have conditional swapgs checks. Those may be problematic ++in the context of Spectre v1, as kernel code can speculatively run with ++a user GS. ++ + Spectre variant 2 (Branch Target Injection) + ------------------------------------------- + +@@ -132,6 +140,9 @@ not cover all possible attack vectors. + 1. A user process attacking the kernel + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + ++Spectre variant 1 ++~~~~~~~~~~~~~~~~~ ++ + The attacker passes a parameter to the kernel via a register or + via a known address in memory during a syscall. Such parameter may + be used later by the kernel as an index to an array or to derive +@@ -144,7 +155,40 @@ not cover all possible attack vectors. + potentially be influenced for Spectre attacks, new "nospec" accessor + macros are used to prevent speculative loading of data. + +- Spectre variant 2 attacker can :ref:`poison ` the branch ++Spectre variant 1 (swapgs) ++~~~~~~~~~~~~~~~~~~~~~~~~~~ ++ ++ An attacker can train the branch predictor to speculatively skip the ++ swapgs path for an interrupt or exception. 
If they initialize ++ the GS register to a user-space value, if the swapgs is speculatively ++ skipped, subsequent GS-related percpu accesses in the speculation ++ window will be done with the attacker-controlled GS value. This ++ could cause privileged memory to be accessed and leaked. ++ ++ For example: ++ ++ :: ++ ++ if (coming from user space) ++ swapgs ++ mov %gs:, %reg ++ mov (%reg), %reg1 ++ ++ When coming from user space, the CPU can speculatively skip the ++ swapgs, and then do a speculative percpu load using the user GS ++ value. So the user can speculatively force a read of any kernel ++ value. If a gadget exists which uses the percpu value as an address ++ in another load/store, then the contents of the kernel value may ++ become visible via an L1 side channel attack. ++ ++ A similar attack exists when coming from kernel space. The CPU can ++ speculatively do the swapgs, causing the user GS to get used for the ++ rest of the speculative window. ++ ++Spectre variant 2 ++~~~~~~~~~~~~~~~~~ ++ ++ A spectre variant 2 attacker can :ref:`poison ` the branch + target buffer (BTB) before issuing syscall to launch an attack. + After entering the kernel, the kernel could use the poisoned branch + target buffer on indirect jump and jump to gadget code in speculative +@@ -280,11 +324,18 @@ The sysfs file showing Spectre variant 1 + + The possible values in this file are: + +- ======================================= ================================= +- 'Mitigation: __user pointer sanitation' Protection in kernel on a case by +- case base with explicit pointer +- sanitation. +- ======================================= ================================= ++ .. list-table:: ++ ++ * - 'Not affected' ++ - The processor is not vulnerable. ++ * - 'Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers' ++ - The swapgs protections are disabled; otherwise it has ++ protection in the kernel on a case by case base with explicit ++ pointer sanitation and usercopy LFENCE barriers. ++ * - 'Mitigation: usercopy/swapgs barriers and __user pointer sanitization' ++ - Protection in the kernel on a case by case base with explicit ++ pointer sanitation, usercopy LFENCE barriers, and swapgs LFENCE ++ barriers. + + However, the protections are put in place on a case by case basis, + and there is no guarantee that all possible attack vectors for Spectre +@@ -366,12 +417,27 @@ Turning on mitigation for Spectre varian + 1. Kernel mitigation + ^^^^^^^^^^^^^^^^^^^^ + ++Spectre variant 1 ++~~~~~~~~~~~~~~~~~ ++ + For the Spectre variant 1, vulnerable kernel code (as determined + by code audit or scanning tools) is annotated on a case by case + basis to use nospec accessor macros for bounds clipping :ref:`[2] + ` to avoid any usable disclosure gadgets. However, it may + not cover all attack vectors for Spectre variant 1. + ++ Copy-from-user code has an LFENCE barrier to prevent the access_ok() ++ check from being mis-speculated. The barrier is done by the ++ barrier_nospec() macro. ++ ++ For the swapgs variant of Spectre variant 1, LFENCE barriers are ++ added to interrupt, exception and NMI entry where needed. These ++ barriers are done by the FENCE_SWAPGS_KERNEL_ENTRY and ++ FENCE_SWAPGS_USER_ENTRY macros. 
++ ++Spectre variant 2 ++~~~~~~~~~~~~~~~~~ ++ + For Spectre variant 2 mitigation, the compiler turns indirect calls or + jumps in the kernel into equivalent return trampolines (retpolines) + :ref:`[3] ` :ref:`[9] ` to go to the target +@@ -473,6 +539,12 @@ Mitigation control on the kernel command + Spectre variant 2 mitigation can be disabled or force enabled at the + kernel command line. + ++ nospectre_v1 ++ ++ [X86,PPC] Disable mitigations for Spectre Variant 1 ++ (bounds check bypass). With this option data leaks are ++ possible in the system. ++ + nospectre_v2 + + [X86] Disable all mitigations for the Spectre variant 2 diff --git a/queue-4.9/documentation-hw-vuln-update-spectre-doc.patch b/queue-4.9/documentation-hw-vuln-update-spectre-doc.patch new file mode 100644 index 00000000000..33438b5187d --- /dev/null +++ b/queue-4.9/documentation-hw-vuln-update-spectre-doc.patch @@ -0,0 +1,108 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Peter Zijlstra +Date: Wed, 16 Feb 2022 20:57:02 +0100 +Subject: Documentation/hw-vuln: Update spectre doc + +From: Peter Zijlstra + +commit 5ad3eb1132453b9795ce5fd4572b1c18b292cca9 upstream. + +Update the doc with the new fun. + + [ bp: Massage commit message. ] + +Signed-off-by: Peter Zijlstra (Intel) +Signed-off-by: Borislav Petkov +Reviewed-by: Thomas Gleixner +[fllinden@amazon.com: backported to 4.19] +Signed-off-by: Frank van der Linden +[bwh: Backported to 4.9: adjust filenames] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + Documentation/hw-vuln/spectre.rst | 42 ++++++++++++++++++++++++------------ + Documentation/kernel-parameters.txt | 8 +++++- + 2 files changed, 35 insertions(+), 15 deletions(-) + +--- a/Documentation/hw-vuln/spectre.rst ++++ b/Documentation/hw-vuln/spectre.rst +@@ -131,6 +131,19 @@ steer its indirect branch speculations t + speculative execution's side effects left in level 1 cache to infer the + victim's data. + ++Yet another variant 2 attack vector is for the attacker to poison the ++Branch History Buffer (BHB) to speculatively steer an indirect branch ++to a specific Branch Target Buffer (BTB) entry, even if the entry isn't ++associated with the source address of the indirect branch. Specifically, ++the BHB might be shared across privilege levels even in the presence of ++Enhanced IBRS. ++ ++Currently the only known real-world BHB attack vector is via ++unprivileged eBPF. Therefore, it's highly recommended to not enable ++unprivileged eBPF, especially when eIBRS is used (without retpolines). ++For a full mitigation against BHB attacks, it's recommended to use ++retpolines (or eIBRS combined with retpolines). 
++ + Attack scenarios + ---------------- + +@@ -364,13 +377,15 @@ The possible values in this file are: + + - Kernel status: + +- ==================================== ================================= +- 'Not affected' The processor is not vulnerable +- 'Vulnerable' Vulnerable, no mitigation +- 'Mitigation: Full generic retpoline' Software-focused mitigation +- 'Mitigation: Full AMD retpoline' AMD-specific software mitigation +- 'Mitigation: Enhanced IBRS' Hardware-focused mitigation +- ==================================== ================================= ++ ======================================== ================================= ++ 'Not affected' The processor is not vulnerable ++ 'Mitigation: None' Vulnerable, no mitigation ++ 'Mitigation: Retpolines' Use Retpoline thunks ++ 'Mitigation: LFENCE' Use LFENCE instructions ++ 'Mitigation: Enhanced IBRS' Hardware-focused mitigation ++ 'Mitigation: Enhanced IBRS + Retpolines' Hardware-focused + Retpolines ++ 'Mitigation: Enhanced IBRS + LFENCE' Hardware-focused + LFENCE ++ ======================================== ================================= + + - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is + used to protect against Spectre variant 2 attacks when calling firmware (x86 only). +@@ -584,12 +599,13 @@ kernel command line. + + Specific mitigations can also be selected manually: + +- retpoline +- replace indirect branches +- retpoline,generic +- google's original retpoline +- retpoline,amd +- AMD-specific minimal thunk ++ retpoline auto pick between generic,lfence ++ retpoline,generic Retpolines ++ retpoline,lfence LFENCE; indirect branch ++ retpoline,amd alias for retpoline,lfence ++ eibrs enhanced IBRS ++ eibrs,retpoline enhanced IBRS + Retpolines ++ eibrs,lfence enhanced IBRS + LFENCE + + Not specifying this option is equivalent to + spectre_v2=auto. +--- a/Documentation/kernel-parameters.txt ++++ b/Documentation/kernel-parameters.txt +@@ -4174,8 +4174,12 @@ bytes respectively. Such letter suffixes + Specific mitigations can also be selected manually: + + retpoline - replace indirect branches +- retpoline,generic - google's original retpoline +- retpoline,amd - AMD-specific minimal thunk ++ retpoline,generic - Retpolines ++ retpoline,lfence - LFENCE; indirect branch ++ retpoline,amd - alias for retpoline,lfence ++ eibrs - enhanced IBRS ++ eibrs,retpoline - enhanced IBRS + Retpolines ++ eibrs,lfence - enhanced IBRS + LFENCE + + Not specifying this option is equivalent to + spectre_v2=auto. diff --git a/queue-4.9/documentation-refer-to-config-randomize_base-for-kernel-address-space-randomization.patch b/queue-4.9/documentation-refer-to-config-randomize_base-for-kernel-address-space-randomization.patch new file mode 100644 index 00000000000..74744889fb1 --- /dev/null +++ b/queue-4.9/documentation-refer-to-config-randomize_base-for-kernel-address-space-randomization.patch @@ -0,0 +1,41 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Lukas Bulwahn +Date: Thu, 30 Dec 2021 18:19:40 +0100 +Subject: Documentation: refer to config RANDOMIZE_BASE for kernel address-space randomization + +From: Lukas Bulwahn + +commit 82ca67321f55a8d1da6ac3ed611da3c32818bb37 upstream. + +The config RANDOMIZE_SLAB does not exist, the authors probably intended to +refer to the config RANDOMIZE_BASE, which provides kernel address-space +randomization. 
They probably just confused SLAB with BASE (these two +four-letter words coincidentally share three common letters), as they also +point out the config SLAB_FREELIST_RANDOM as further randomization within +the same sentence. + +Fix the reference of the config for kernel address-space randomization to +the config that provides that. + +Fixes: 6e88559470f5 ("Documentation: Add section about CPU vulnerabilities for Spectre") +Signed-off-by: Lukas Bulwahn +Link: https://lore.kernel.org/r/20211230171940.27558-1-lukas.bulwahn@gmail.com +Signed-off-by: Jonathan Corbet +[bwh: Backported to 4.9: adjust filename] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + Documentation/hw-vuln/spectre.rst | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/Documentation/hw-vuln/spectre.rst ++++ b/Documentation/hw-vuln/spectre.rst +@@ -468,7 +468,7 @@ Spectre variant 2 + before invoking any firmware code to prevent Spectre variant 2 exploits + using the firmware. + +- Using kernel address space randomization (CONFIG_RANDOMIZE_SLAB=y ++ Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y + and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes + attacks on the kernel generally more difficult. + diff --git a/queue-4.9/series b/queue-4.9/series index aa87e2cb717..d87f9244685 100644 --- a/queue-4.9/series +++ b/queue-4.9/series @@ -1,3 +1,19 @@ +x86-speculation-add-retpoline_amd-support-to-the-inline-asm-call_nospec-variant.patch +x86-retpoline-make-config_retpoline-depend-on-compiler-support.patch +x86-retpoline-remove-minimal-retpoline-support.patch +documentation-add-section-about-cpu-vulnerabilities-for-spectre.patch +documentation-add-swapgs-description-to-the-spectre-v1-documentation.patch +documentation-refer-to-config-randomize_base-for-kernel-address-space-randomization.patch +x86-speculation-merge-one-test-in-spectre_v2_user_select_mitigation.patch +x86-bugs-unconditionally-allow-spectre_v2-retpoline-amd.patch +x86-speculation-rename-retpoline_amd-to-retpoline_lfence.patch +x86-speculation-add-eibrs-retpoline-options.patch +documentation-hw-vuln-update-spectre-doc.patch +x86-speculation-include-unprivileged-ebpf-status-in-spectre-v2-mitigation-reporting.patch +x86-speculation-use-generic-retpoline-by-default-on-amd.patch +x86-speculation-update-link-to-amd-speculation-whitepaper.patch +x86-speculation-warn-about-spectre-v2-lfence-mitigation.patch +x86-speculation-warn-about-eibrs-lfence-unprivileged-ebpf-smt.patch arm-arm64-provide-a-wrapper-for-smccc-1.1-calls.patch arm-arm64-smccc-psci-add-arm_smccc_1_1_get_conduit.patch arm-report-spectre-v2-status-through-sysfs.patch diff --git a/queue-4.9/x86-bugs-unconditionally-allow-spectre_v2-retpoline-amd.patch b/queue-4.9/x86-bugs-unconditionally-allow-spectre_v2-retpoline-amd.patch new file mode 100644 index 00000000000..5723c5c4073 --- /dev/null +++ b/queue-4.9/x86-bugs-unconditionally-allow-spectre_v2-retpoline-amd.patch @@ -0,0 +1,40 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Peter Zijlstra +Date: Tue, 26 Oct 2021 14:01:46 +0200 +Subject: x86,bugs: Unconditionally allow spectre_v2=retpoline,amd + +From: Peter Zijlstra + +commit f8a66d608a3e471e1202778c2a36cbdc96bae73b upstream. + +Currently Linux prevents usage of retpoline,amd on !AMD hardware, this +is unfriendly and gets in the way of testing. Remove this restriction. 
+ +Signed-off-by: Peter Zijlstra (Intel) +Reviewed-by: Borislav Petkov +Acked-by: Josh Poimboeuf +Tested-by: Alexei Starovoitov +Link: https://lore.kernel.org/r/20211026120310.487348118@infradead.org +[fllinden@amazon.com: backported to 4.19 (no Hygon in 4.19)] +Signed-off-by: Frank van der Linden +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/kernel/cpu/bugs.c | 6 ------ + 1 file changed, 6 deletions(-) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -838,12 +838,6 @@ static enum spectre_v2_mitigation_cmd __ + return SPECTRE_V2_CMD_AUTO; + } + +- if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD && +- boot_cpu_data.x86_vendor != X86_VENDOR_AMD) { +- pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n"); +- return SPECTRE_V2_CMD_AUTO; +- } +- + spec_v2_print_cond(mitigation_options[i].option, + mitigation_options[i].secure); + return cmd; diff --git a/queue-4.9/x86-retpoline-make-config_retpoline-depend-on-compiler-support.patch b/queue-4.9/x86-retpoline-make-config_retpoline-depend-on-compiler-support.patch new file mode 100644 index 00000000000..0669715270b --- /dev/null +++ b/queue-4.9/x86-retpoline-make-config_retpoline-depend-on-compiler-support.patch @@ -0,0 +1,120 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Zhenzhong Duan +Date: Fri, 2 Nov 2018 01:45:41 -0700 +Subject: x86/retpoline: Make CONFIG_RETPOLINE depend on compiler support + +From: Zhenzhong Duan + +commit 4cd24de3a0980bf3100c9dcb08ef65ca7c31af48 upstream. + +Since retpoline capable compilers are widely available, make +CONFIG_RETPOLINE hard depend on the compiler capability. + +Break the build when CONFIG_RETPOLINE is enabled and the compiler does not +support it. Emit an error message in that case: + + "arch/x86/Makefile:226: *** You are building kernel with non-retpoline + compiler, please update your compiler.. Stop." + +[dwmw: Fail the build with non-retpoline compiler] + +Suggested-by: Peter Zijlstra +Signed-off-by: Zhenzhong Duan +Signed-off-by: Thomas Gleixner +Cc: David Woodhouse +Cc: Borislav Petkov +Cc: Daniel Borkmann +Cc: H. Peter Anvin +Cc: Konrad Rzeszutek Wilk +Cc: Andy Lutomirski +Cc: Masahiro Yamada +Cc: Michal Marek +Cc: +Cc: stable@vger.kernel.org +Link: https://lkml.kernel.org/r/cca0cb20-f9e2-4094-840b-fb0f8810cd34@default +[bwh: Backported to 4.9: + - Drop change to objtool options + - Adjust context, indentation] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/Kconfig | 4 ---- + arch/x86/Makefile | 5 +++-- + arch/x86/include/asm/nospec-branch.h | 10 ++++++---- + arch/x86/kernel/cpu/bugs.c | 2 +- + 4 files changed, 10 insertions(+), 11 deletions(-) + +--- a/arch/x86/Kconfig ++++ b/arch/x86/Kconfig +@@ -418,10 +418,6 @@ config RETPOLINE + branches. Requires a compiler with -mindirect-branch=thunk-extern + support for full protection. The kernel may run slower. + +- Without compiler support, at least indirect branches in assembler +- code are eliminated. Since this includes the syscall entry path, +- it is not entirely pointless. 
+- + if X86_32 + config X86_EXTENDED_PLATFORM + bool "Support for extended (non-PC) x86 platforms" +--- a/arch/x86/Makefile ++++ b/arch/x86/Makefile +@@ -221,9 +221,10 @@ ifdef CONFIG_RETPOLINE + RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk + + RETPOLINE_CFLAGS += $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG))) +- ifneq ($(RETPOLINE_CFLAGS),) +- KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE ++ ifeq ($(RETPOLINE_CFLAGS),) ++ $(error You are building kernel with non-retpoline compiler, please update your compiler.) + endif ++ KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) + endif + + archscripts: scripts_basic +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -164,11 +164,12 @@ + _ASM_PTR " 999b\n\t" \ + ".popsection\n\t" + +-#if defined(CONFIG_X86_64) && defined(RETPOLINE) ++#ifdef CONFIG_RETPOLINE ++#ifdef CONFIG_X86_64 + + /* +- * Since the inline asm uses the %V modifier which is only in newer GCC, +- * the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE. ++ * Inline asm uses the %V modifier which is only in newer GCC ++ * which is ensured when CONFIG_RETPOLINE is defined. + */ + # define CALL_NOSPEC \ + ANNOTATE_NOSPEC_ALTERNATIVE \ +@@ -183,7 +184,7 @@ + X86_FEATURE_RETPOLINE_AMD) + # define THUNK_TARGET(addr) [thunk_target] "r" (addr) + +-#elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE) ++#else /* CONFIG_X86_32 */ + /* + * For i386 we use the original ret-equivalent retpoline, because + * otherwise we'll run out of registers. We don't care about CET +@@ -213,6 +214,7 @@ + X86_FEATURE_RETPOLINE_AMD) + + # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) ++#endif + #else /* No retpoline for C / inline asm */ + # define CALL_NOSPEC "call *%[thunk_target]\n" + # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -812,7 +812,7 @@ static void __init spec_v2_print_cond(co + + static inline bool retp_compiler(void) + { +- return __is_defined(RETPOLINE); ++ return __is_defined(CONFIG_RETPOLINE); + } + + static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) diff --git a/queue-4.9/x86-retpoline-remove-minimal-retpoline-support.patch b/queue-4.9/x86-retpoline-remove-minimal-retpoline-support.patch new file mode 100644 index 00000000000..4c3ed91fb1a --- /dev/null +++ b/queue-4.9/x86-retpoline-remove-minimal-retpoline-support.patch @@ -0,0 +1,81 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Zhenzhong Duan +Date: Fri, 2 Nov 2018 01:45:41 -0700 +Subject: x86/retpoline: Remove minimal retpoline support + +From: Zhenzhong Duan + +commit ef014aae8f1cd2793e4e014bbb102bed53f852b7 upstream. + +Now that CONFIG_RETPOLINE hard depends on compiler support, there is no +reason to keep the minimal retpoline support around which only provided +basic protection in the assembly files. + +Suggested-by: Peter Zijlstra +Signed-off-by: Zhenzhong Duan +Signed-off-by: Thomas Gleixner +Cc: David Woodhouse +Cc: Borislav Petkov +Cc: H. 
Peter Anvin +Cc: Konrad Rzeszutek Wilk +Cc: +Cc: stable@vger.kernel.org +Link: https://lkml.kernel.org/r/f06f0a89-5587-45db-8ed2-0a9d6638d5c0@default +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/include/asm/nospec-branch.h | 2 -- + arch/x86/kernel/cpu/bugs.c | 13 ++----------- + 2 files changed, 2 insertions(+), 13 deletions(-) + +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -223,8 +223,6 @@ + /* The Spectre V2 mitigation variants */ + enum spectre_v2_mitigation { + SPECTRE_V2_NONE, +- SPECTRE_V2_RETPOLINE_MINIMAL, +- SPECTRE_V2_RETPOLINE_MINIMAL_AMD, + SPECTRE_V2_RETPOLINE_GENERIC, + SPECTRE_V2_RETPOLINE_AMD, + SPECTRE_V2_IBRS_ENHANCED, +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -784,8 +784,6 @@ set_mode: + + static const char * const spectre_v2_strings[] = { + [SPECTRE_V2_NONE] = "Vulnerable", +- [SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline", +- [SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline", + [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", + [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", + [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", +@@ -810,11 +808,6 @@ static void __init spec_v2_print_cond(co + pr_info("%s selected on command line.\n", reason); + } + +-static inline bool retp_compiler(void) +-{ +- return __is_defined(CONFIG_RETPOLINE); +-} +- + static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) + { + enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO; +@@ -912,14 +905,12 @@ retpoline_auto: + pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); + goto retpoline_generic; + } +- mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD : +- SPECTRE_V2_RETPOLINE_MINIMAL_AMD; ++ mode = SPECTRE_V2_RETPOLINE_AMD; + setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD); + setup_force_cpu_cap(X86_FEATURE_RETPOLINE); + } else { + retpoline_generic: +- mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC : +- SPECTRE_V2_RETPOLINE_MINIMAL; ++ mode = SPECTRE_V2_RETPOLINE_GENERIC; + setup_force_cpu_cap(X86_FEATURE_RETPOLINE); + } + diff --git a/queue-4.9/x86-speculation-add-eibrs-retpoline-options.patch b/queue-4.9/x86-speculation-add-eibrs-retpoline-options.patch new file mode 100644 index 00000000000..bae62cc331f --- /dev/null +++ b/queue-4.9/x86-speculation-add-eibrs-retpoline-options.patch @@ -0,0 +1,273 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Peter Zijlstra +Date: Wed, 16 Feb 2022 20:57:01 +0100 +Subject: x86/speculation: Add eIBRS + Retpoline options + +From: Peter Zijlstra + +commit 1e19da8522c81bf46b335f84137165741e0d82b7 upstream. + +Thanks to the chaps at VUsec it is now clear that eIBRS is not +sufficient, therefore allow enabling of retpolines along with eIBRS. + +Add spectre_v2=eibrs, spectre_v2=eibrs,lfence and +spectre_v2=eibrs,retpoline options to explicitly pick your preferred +means of mitigation. + +Since there's new mitigations there's also user visible changes in +/sys/devices/system/cpu/vulnerabilities/spectre_v2 to reflect these +new mitigations. + + [ bp: Massage commit message, trim error messages, + do more precise eIBRS mode checking. 
] + +Co-developed-by: Josh Poimboeuf +Signed-off-by: Josh Poimboeuf +Signed-off-by: Peter Zijlstra (Intel) +Signed-off-by: Borislav Petkov +Reviewed-by: Patrick Colp +Reviewed-by: Thomas Gleixner +[fllinden@amazon.com: backported to 4.19 (no Hygon)] +Signed-off-by: Frank van der Linden +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/include/asm/nospec-branch.h | 4 - + arch/x86/kernel/cpu/bugs.c | 131 +++++++++++++++++++++++++---------- + 2 files changed, 98 insertions(+), 37 deletions(-) + +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -225,7 +225,9 @@ enum spectre_v2_mitigation { + SPECTRE_V2_NONE, + SPECTRE_V2_RETPOLINE, + SPECTRE_V2_LFENCE, +- SPECTRE_V2_IBRS_ENHANCED, ++ SPECTRE_V2_EIBRS, ++ SPECTRE_V2_EIBRS_RETPOLINE, ++ SPECTRE_V2_EIBRS_LFENCE, + }; + + /* The indirect branch speculation control variants */ +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -621,6 +621,9 @@ enum spectre_v2_mitigation_cmd { + SPECTRE_V2_CMD_RETPOLINE, + SPECTRE_V2_CMD_RETPOLINE_GENERIC, + SPECTRE_V2_CMD_RETPOLINE_LFENCE, ++ SPECTRE_V2_CMD_EIBRS, ++ SPECTRE_V2_CMD_EIBRS_RETPOLINE, ++ SPECTRE_V2_CMD_EIBRS_LFENCE, + }; + + enum spectre_v2_user_cmd { +@@ -693,6 +696,13 @@ spectre_v2_parse_user_cmdline(enum spect + return SPECTRE_V2_USER_CMD_AUTO; + } + ++static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode) ++{ ++ return (mode == SPECTRE_V2_EIBRS || ++ mode == SPECTRE_V2_EIBRS_RETPOLINE || ++ mode == SPECTRE_V2_EIBRS_LFENCE); ++} ++ + static void __init + spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) + { +@@ -760,7 +770,7 @@ spectre_v2_user_select_mitigation(enum s + */ + if (!boot_cpu_has(X86_FEATURE_STIBP) || + !smt_possible || +- spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) ++ spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + return; + + /* +@@ -782,7 +792,9 @@ static const char * const spectre_v2_str + [SPECTRE_V2_NONE] = "Vulnerable", + [SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines", + [SPECTRE_V2_LFENCE] = "Mitigation: LFENCE", +- [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", ++ [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS", ++ [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE", ++ [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines", + }; + + static const struct { +@@ -796,6 +808,9 @@ static const struct { + { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false }, + { "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false }, + { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, ++ { "eibrs", SPECTRE_V2_CMD_EIBRS, false }, ++ { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false }, ++ { "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false }, + { "auto", SPECTRE_V2_CMD_AUTO, false }, + }; + +@@ -833,15 +848,29 @@ static enum spectre_v2_mitigation_cmd __ + + if ((cmd == SPECTRE_V2_CMD_RETPOLINE || + cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE || +- cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) && ++ cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC || ++ cmd == SPECTRE_V2_CMD_EIBRS_LFENCE || ++ cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) && + !IS_ENABLED(CONFIG_RETPOLINE)) { +- pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option); ++ pr_err("%s selected but not compiled in. 
Switching to AUTO select\n", ++ mitigation_options[i].option); ++ return SPECTRE_V2_CMD_AUTO; ++ } ++ ++ if ((cmd == SPECTRE_V2_CMD_EIBRS || ++ cmd == SPECTRE_V2_CMD_EIBRS_LFENCE || ++ cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) && ++ !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { ++ pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n", ++ mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + +- if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE) && ++ if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE || ++ cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) && + !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { +- pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n", mitigation_options[i].option); ++ pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n", ++ mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + +@@ -850,6 +879,24 @@ static enum spectre_v2_mitigation_cmd __ + return cmd; + } + ++static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void) ++{ ++ if (!IS_ENABLED(CONFIG_RETPOLINE)) { ++ pr_err("Kernel not compiled with retpoline; no mitigation available!"); ++ return SPECTRE_V2_NONE; ++ } ++ ++ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { ++ if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { ++ pr_err("LFENCE not serializing, switching to generic retpoline\n"); ++ return SPECTRE_V2_RETPOLINE; ++ } ++ return SPECTRE_V2_LFENCE; ++ } ++ ++ return SPECTRE_V2_RETPOLINE; ++} ++ + static void __init spectre_v2_select_mitigation(void) + { + enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline(); +@@ -870,48 +917,60 @@ static void __init spectre_v2_select_mit + case SPECTRE_V2_CMD_FORCE: + case SPECTRE_V2_CMD_AUTO: + if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) { +- mode = SPECTRE_V2_IBRS_ENHANCED; +- /* Force it so VMEXIT will restore correctly */ +- x86_spec_ctrl_base |= SPEC_CTRL_IBRS; +- wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); +- goto specv2_set_mode; ++ mode = SPECTRE_V2_EIBRS; ++ break; + } +- if (IS_ENABLED(CONFIG_RETPOLINE)) +- goto retpoline_auto; ++ ++ mode = spectre_v2_select_retpoline(); + break; ++ + case SPECTRE_V2_CMD_RETPOLINE_LFENCE: +- if (IS_ENABLED(CONFIG_RETPOLINE)) +- goto retpoline_lfence; ++ mode = SPECTRE_V2_LFENCE; + break; ++ + case SPECTRE_V2_CMD_RETPOLINE_GENERIC: +- if (IS_ENABLED(CONFIG_RETPOLINE)) +- goto retpoline_generic; ++ mode = SPECTRE_V2_RETPOLINE; + break; ++ + case SPECTRE_V2_CMD_RETPOLINE: +- if (IS_ENABLED(CONFIG_RETPOLINE)) +- goto retpoline_auto; ++ mode = spectre_v2_select_retpoline(); ++ break; ++ ++ case SPECTRE_V2_CMD_EIBRS: ++ mode = SPECTRE_V2_EIBRS; ++ break; ++ ++ case SPECTRE_V2_CMD_EIBRS_LFENCE: ++ mode = SPECTRE_V2_EIBRS_LFENCE; ++ break; ++ ++ case SPECTRE_V2_CMD_EIBRS_RETPOLINE: ++ mode = SPECTRE_V2_EIBRS_RETPOLINE; + break; + } +- pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!"); +- return; + +-retpoline_auto: +- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { +- retpoline_lfence: +- if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { +- pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); +- goto retpoline_generic; +- } +- mode = SPECTRE_V2_LFENCE; ++ if (spectre_v2_in_eibrs_mode(mode)) { ++ /* Force it so VMEXIT will restore correctly */ ++ x86_spec_ctrl_base |= SPEC_CTRL_IBRS; ++ wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); ++ } ++ ++ switch (mode) { ++ case SPECTRE_V2_NONE: ++ case SPECTRE_V2_EIBRS: ++ break; ++ ++ case 
SPECTRE_V2_LFENCE: ++ case SPECTRE_V2_EIBRS_LFENCE: + setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE); ++ /* fallthrough */ ++ ++ case SPECTRE_V2_RETPOLINE: ++ case SPECTRE_V2_EIBRS_RETPOLINE: + setup_force_cpu_cap(X86_FEATURE_RETPOLINE); +- } else { +- retpoline_generic: +- mode = SPECTRE_V2_RETPOLINE; +- setup_force_cpu_cap(X86_FEATURE_RETPOLINE); ++ break; + } + +-specv2_set_mode: + spectre_v2_enabled = mode; + pr_info("%s\n", spectre_v2_strings[mode]); + +@@ -937,7 +996,7 @@ specv2_set_mode: + * the CPU supports Enhanced IBRS, kernel might un-intentionally not + * enable IBRS around firmware calls. + */ +- if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) { ++ if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) { + setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW); + pr_info("Enabling Restricted Speculation for firmware calls\n"); + } +@@ -1597,7 +1656,7 @@ static ssize_t tsx_async_abort_show_stat + + static char *stibp_state(void) + { +- if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) ++ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled)) + return ""; + + switch (spectre_v2_user_stibp) { diff --git a/queue-4.9/x86-speculation-add-retpoline_amd-support-to-the-inline-asm-call_nospec-variant.patch b/queue-4.9/x86-speculation-add-retpoline_amd-support-to-the-inline-asm-call_nospec-variant.patch new file mode 100644 index 00000000000..d5aa772c82e --- /dev/null +++ b/queue-4.9/x86-speculation-add-retpoline_amd-support-to-the-inline-asm-call_nospec-variant.patch @@ -0,0 +1,75 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Zhenzhong Duan +Date: Tue, 18 Sep 2018 07:45:00 -0700 +Subject: x86/speculation: Add RETPOLINE_AMD support to the inline asm CALL_NOSPEC variant + +From: Zhenzhong Duan + +commit 0cbb76d6285794f30953bfa3ab831714b59dd700 upstream. + +..so that they match their asm counterpart. + +Add the missing ANNOTATE_NOSPEC_ALTERNATIVE in CALL_NOSPEC, while at it. + +Signed-off-by: Zhenzhong Duan +Signed-off-by: Borislav Petkov +Cc: Daniel Borkmann +Cc: David Woodhouse +Cc: H. Peter Anvin +Cc: Ingo Molnar +Cc: Konrad Rzeszutek Wilk +Cc: Peter Zijlstra +Cc: Thomas Gleixner +Cc: Wang YanQing +Cc: dhaval.giani@oracle.com +Cc: srinivas.eeda@oracle.com +Link: http://lkml.kernel.org/r/c3975665-173e-4d70-8dee-06c926ac26ee@default +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/include/asm/nospec-branch.h | 17 +++++++++++++---- + 1 file changed, 13 insertions(+), 4 deletions(-) + +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -172,11 +172,15 @@ + */ + # define CALL_NOSPEC \ + ANNOTATE_NOSPEC_ALTERNATIVE \ +- ALTERNATIVE( \ ++ ALTERNATIVE_2( \ + ANNOTATE_RETPOLINE_SAFE \ + "call *%[thunk_target]\n", \ + "call __x86_indirect_thunk_%V[thunk_target]\n", \ +- X86_FEATURE_RETPOLINE) ++ X86_FEATURE_RETPOLINE, \ ++ "lfence;\n" \ ++ ANNOTATE_RETPOLINE_SAFE \ ++ "call *%[thunk_target]\n", \ ++ X86_FEATURE_RETPOLINE_AMD) + # define THUNK_TARGET(addr) [thunk_target] "r" (addr) + + #elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE) +@@ -186,7 +190,8 @@ + * here, anyway. 
+ */ + # define CALL_NOSPEC \ +- ALTERNATIVE( \ ++ ANNOTATE_NOSPEC_ALTERNATIVE \ ++ ALTERNATIVE_2( \ + ANNOTATE_RETPOLINE_SAFE \ + "call *%[thunk_target]\n", \ + " jmp 904f;\n" \ +@@ -201,7 +206,11 @@ + " ret;\n" \ + " .align 16\n" \ + "904: call 901b;\n", \ +- X86_FEATURE_RETPOLINE) ++ X86_FEATURE_RETPOLINE, \ ++ "lfence;\n" \ ++ ANNOTATE_RETPOLINE_SAFE \ ++ "call *%[thunk_target]\n", \ ++ X86_FEATURE_RETPOLINE_AMD) + + # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) + #else /* No retpoline for C / inline asm */ diff --git a/queue-4.9/x86-speculation-include-unprivileged-ebpf-status-in-spectre-v2-mitigation-reporting.patch b/queue-4.9/x86-speculation-include-unprivileged-ebpf-status-in-spectre-v2-mitigation-reporting.patch new file mode 100644 index 00000000000..cfcf1c73ce1 --- /dev/null +++ b/queue-4.9/x86-speculation-include-unprivileged-ebpf-status-in-spectre-v2-mitigation-reporting.patch @@ -0,0 +1,152 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Josh Poimboeuf +Date: Fri, 18 Feb 2022 11:49:08 -0800 +Subject: x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting + +From: Josh Poimboeuf + +commit 44a3918c8245ab10c6c9719dd12e7a8d291980d8 upstream. + +With unprivileged eBPF enabled, eIBRS (without retpoline) is vulnerable +to Spectre v2 BHB-based attacks. + +When both are enabled, print a warning message and report it in the +'spectre_v2' sysfs vulnerabilities file. + +Signed-off-by: Josh Poimboeuf +Signed-off-by: Borislav Petkov +Reviewed-by: Thomas Gleixner +[fllinden@amazon.com: backported to 4.19] +Signed-off-by: Frank van der Linden +[bwh: Backported to 4.9: adjust context] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/kernel/cpu/bugs.c | 35 +++++++++++++++++++++++++++++------ + include/linux/bpf.h | 11 +++++++++++ + kernel/sysctl.c | 8 ++++++++ + 3 files changed, 48 insertions(+), 6 deletions(-) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -30,6 +30,7 @@ + #include + #include + #include ++#include + + #include "cpu.h" + +@@ -606,6 +607,16 @@ static inline const char *spectre_v2_mod + static inline const char *spectre_v2_module_string(void) { return ""; } + #endif + ++#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n" ++ ++#ifdef CONFIG_BPF_SYSCALL ++void unpriv_ebpf_notify(int new_state) ++{ ++ if (spectre_v2_enabled == SPECTRE_V2_EIBRS && !new_state) ++ pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); ++} ++#endif ++ + static inline bool match_option(const char *arg, int arglen, const char *opt) + { + int len = strlen(opt); +@@ -949,6 +960,9 @@ static void __init spectre_v2_select_mit + break; + } + ++ if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) ++ pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); ++ + if (spectre_v2_in_eibrs_mode(mode)) { + /* Force it so VMEXIT will restore correctly */ + x86_spec_ctrl_base |= SPEC_CTRL_IBRS; +@@ -1686,6 +1700,20 @@ static char *ibpb_state(void) + return ""; + } + ++static ssize_t spectre_v2_show_state(char *buf) ++{ ++ if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) ++ return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n"); ++ ++ return sprintf(buf, "%s%s%s%s%s%s\n", ++ spectre_v2_strings[spectre_v2_enabled], ++ ibpb_state(), ++ boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", ++ stibp_state(), ++ boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? 
", RSB filling" : "", ++ spectre_v2_module_string()); ++} ++ + static ssize_t srbds_show_state(char *buf) + { + return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]); +@@ -1708,12 +1736,7 @@ static ssize_t cpu_show_common(struct de + return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]); + + case X86_BUG_SPECTRE_V2: +- return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], +- ibpb_state(), +- boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", +- stibp_state(), +- boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "", +- spectre_v2_module_string()); ++ return spectre_v2_show_state(buf); + + case X86_BUG_SPEC_STORE_BYPASS: + return sprintf(buf, "%s\n", ssb_strings[ssb_mode]); +--- a/include/linux/bpf.h ++++ b/include/linux/bpf.h +@@ -295,6 +295,11 @@ static inline void bpf_long_memcpy(void + + /* verify correctness of eBPF program */ + int bpf_check(struct bpf_prog **fp, union bpf_attr *attr); ++ ++static inline bool unprivileged_ebpf_enabled(void) ++{ ++ return !sysctl_unprivileged_bpf_disabled; ++} + #else + static inline void bpf_register_prog_type(struct bpf_prog_type_list *tl) + { +@@ -322,6 +327,12 @@ static inline struct bpf_prog *bpf_prog_ + { + return ERR_PTR(-EOPNOTSUPP); + } ++ ++static inline bool unprivileged_ebpf_enabled(void) ++{ ++ return false; ++} ++ + #endif /* CONFIG_BPF_SYSCALL */ + + /* verifier prototypes for helper functions called from eBPF programs */ +--- a/kernel/sysctl.c ++++ b/kernel/sysctl.c +@@ -222,6 +222,11 @@ static int sysrq_sysctl_handler(struct c + #endif + + #ifdef CONFIG_BPF_SYSCALL ++ ++void __weak unpriv_ebpf_notify(int new_state) ++{ ++} ++ + static int bpf_unpriv_handler(struct ctl_table *table, int write, + void *buffer, size_t *lenp, loff_t *ppos) + { +@@ -239,6 +244,9 @@ static int bpf_unpriv_handler(struct ctl + return -EPERM; + *(int *)table->data = unpriv_enable; + } ++ ++ unpriv_ebpf_notify(unpriv_enable); ++ + return ret; + } + #endif diff --git a/queue-4.9/x86-speculation-merge-one-test-in-spectre_v2_user_select_mitigation.patch b/queue-4.9/x86-speculation-merge-one-test-in-spectre_v2_user_select_mitigation.patch new file mode 100644 index 00000000000..0ee51b73cb2 --- /dev/null +++ b/queue-4.9/x86-speculation-merge-one-test-in-spectre_v2_user_select_mitigation.patch @@ -0,0 +1,64 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Borislav Petkov +Date: Mon, 15 Jun 2020 08:51:25 +0200 +Subject: x86/speculation: Merge one test in spectre_v2_user_select_mitigation() + +From: Borislav Petkov + +commit a5ce9f2bb665d1d2b31f139a02dbaa2dfbb62fa6 upstream. + +Merge the test whether the CPU supports STIBP into the test which +determines whether STIBP is required. Thus try to simplify what is +already an insane logic. + +Remove a superfluous newline in a comment, while at it. + +Signed-off-by: Borislav Petkov +Cc: Anthony Steinhauser +Link: https://lkml.kernel.org/r/20200615065806.GB14668@zn.tnic +[fllinden@amazon.com: fixed contextual conflict (comment) for 4.19] +Signed-off-by: Frank van der Linden +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/kernel/cpu/bugs.c | 13 ++++--------- + 1 file changed, 4 insertions(+), 9 deletions(-) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -755,10 +755,12 @@ spectre_v2_user_select_mitigation(enum s + } + + /* +- * If enhanced IBRS is enabled or SMT impossible, STIBP is not ++ * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not + * required. 
+ */ +- if (!smt_possible || spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) ++ if (!boot_cpu_has(X86_FEATURE_STIBP) || ++ !smt_possible || ++ spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) + return; + + /* +@@ -770,12 +772,6 @@ spectre_v2_user_select_mitigation(enum s + boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON)) + mode = SPECTRE_V2_USER_STRICT_PREFERRED; + +- /* +- * If STIBP is not available, clear the STIBP mode. +- */ +- if (!boot_cpu_has(X86_FEATURE_STIBP)) +- mode = SPECTRE_V2_USER_NONE; +- + spectre_v2_user_stibp = mode; + + set_mode: +@@ -1254,7 +1250,6 @@ static int ib_prctl_set(struct task_stru + if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE && + spectre_v2_user_stibp == SPECTRE_V2_USER_NONE) + return 0; +- + /* + * With strict mode for both IBPB and STIBP, the instruction + * code paths avoid checking this task flag and instead, diff --git a/queue-4.9/x86-speculation-rename-retpoline_amd-to-retpoline_lfence.patch b/queue-4.9/x86-speculation-rename-retpoline_amd-to-retpoline_lfence.patch new file mode 100644 index 00000000000..ac72065aa68 --- /dev/null +++ b/queue-4.9/x86-speculation-rename-retpoline_amd-to-retpoline_lfence.patch @@ -0,0 +1,195 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: "Peter Zijlstra (Intel)" +Date: Wed, 16 Feb 2022 20:57:00 +0100 +Subject: x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE + +From: "Peter Zijlstra (Intel)" + +commit d45476d9832409371537013ebdd8dc1a7781f97a upstream. + +The RETPOLINE_AMD name is unfortunate since it isn't necessarily +AMD only, in fact Hygon also uses it. Furthermore it will likely be +sufficient for some Intel processors. Therefore rename the thing to +RETPOLINE_LFENCE to better describe what it is. + +Add the spectre_v2=retpoline,lfence option as an alias to +spectre_v2=retpoline,amd to preserve existing setups. However, the output +of /sys/devices/system/cpu/vulnerabilities/spectre_v2 will be changed. + + [ bp: Fix typos, massage. 
] + +Co-developed-by: Josh Poimboeuf +Signed-off-by: Josh Poimboeuf +Signed-off-by: Peter Zijlstra (Intel) +Signed-off-by: Borislav Petkov +Reviewed-by: Thomas Gleixner +[fllinden@amazon.com: backported to 4.19] +Signed-off-by: Frank van der Linden +[bwh: Backported to 4.9: adjust context] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/include/asm/cpufeatures.h | 2 +- + arch/x86/include/asm/nospec-branch.h | 12 ++++++------ + arch/x86/kernel/cpu/bugs.c | 29 ++++++++++++++++++----------- + tools/arch/x86/include/asm/cpufeatures.h | 2 +- + 4 files changed, 26 insertions(+), 19 deletions(-) + +--- a/arch/x86/include/asm/cpufeatures.h ++++ b/arch/x86/include/asm/cpufeatures.h +@@ -195,7 +195,7 @@ + #define X86_FEATURE_FENCE_SWAPGS_USER ( 7*32+10) /* "" LFENCE in user entry SWAPGS path */ + #define X86_FEATURE_FENCE_SWAPGS_KERNEL ( 7*32+11) /* "" LFENCE in kernel entry SWAPGS path */ + #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ +-#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */ ++#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */ + + #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */ + #define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */ +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -119,7 +119,7 @@ + ANNOTATE_NOSPEC_ALTERNATIVE + ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *\reg), \ + __stringify(RETPOLINE_JMP \reg), X86_FEATURE_RETPOLINE, \ +- __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_AMD ++ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_LFENCE + #else + jmp *\reg + #endif +@@ -130,7 +130,7 @@ + ANNOTATE_NOSPEC_ALTERNATIVE + ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *\reg), \ + __stringify(RETPOLINE_CALL \reg), X86_FEATURE_RETPOLINE,\ +- __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_AMD ++ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_LFENCE + #else + call *\reg + #endif +@@ -181,7 +181,7 @@ + "lfence;\n" \ + ANNOTATE_RETPOLINE_SAFE \ + "call *%[thunk_target]\n", \ +- X86_FEATURE_RETPOLINE_AMD) ++ X86_FEATURE_RETPOLINE_LFENCE) + # define THUNK_TARGET(addr) [thunk_target] "r" (addr) + + #else /* CONFIG_X86_32 */ +@@ -211,7 +211,7 @@ + "lfence;\n" \ + ANNOTATE_RETPOLINE_SAFE \ + "call *%[thunk_target]\n", \ +- X86_FEATURE_RETPOLINE_AMD) ++ X86_FEATURE_RETPOLINE_LFENCE) + + # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) + #endif +@@ -223,8 +223,8 @@ + /* The Spectre V2 mitigation variants */ + enum spectre_v2_mitigation { + SPECTRE_V2_NONE, +- SPECTRE_V2_RETPOLINE_GENERIC, +- SPECTRE_V2_RETPOLINE_AMD, ++ SPECTRE_V2_RETPOLINE, ++ SPECTRE_V2_LFENCE, + SPECTRE_V2_IBRS_ENHANCED, + }; + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -620,7 +620,7 @@ enum spectre_v2_mitigation_cmd { + SPECTRE_V2_CMD_FORCE, + SPECTRE_V2_CMD_RETPOLINE, + SPECTRE_V2_CMD_RETPOLINE_GENERIC, +- SPECTRE_V2_CMD_RETPOLINE_AMD, ++ SPECTRE_V2_CMD_RETPOLINE_LFENCE, + }; + + enum spectre_v2_user_cmd { +@@ -780,8 +780,8 @@ set_mode: + + static const char * const spectre_v2_strings[] = { + [SPECTRE_V2_NONE] = "Vulnerable", +- [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", +- [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", ++ 
[SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines", ++ [SPECTRE_V2_LFENCE] = "Mitigation: LFENCE", + [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", + }; + +@@ -793,7 +793,8 @@ static const struct { + { "off", SPECTRE_V2_CMD_NONE, false }, + { "on", SPECTRE_V2_CMD_FORCE, true }, + { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, +- { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, ++ { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false }, ++ { "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false }, + { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, + { "auto", SPECTRE_V2_CMD_AUTO, false }, + }; +@@ -831,13 +832,19 @@ static enum spectre_v2_mitigation_cmd __ + } + + if ((cmd == SPECTRE_V2_CMD_RETPOLINE || +- cmd == SPECTRE_V2_CMD_RETPOLINE_AMD || ++ cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE || + cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) && + !IS_ENABLED(CONFIG_RETPOLINE)) { + pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option); + return SPECTRE_V2_CMD_AUTO; + } + ++ if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE) && ++ !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { ++ pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n", mitigation_options[i].option); ++ return SPECTRE_V2_CMD_AUTO; ++ } ++ + spec_v2_print_cond(mitigation_options[i].option, + mitigation_options[i].secure); + return cmd; +@@ -872,9 +879,9 @@ static void __init spectre_v2_select_mit + if (IS_ENABLED(CONFIG_RETPOLINE)) + goto retpoline_auto; + break; +- case SPECTRE_V2_CMD_RETPOLINE_AMD: ++ case SPECTRE_V2_CMD_RETPOLINE_LFENCE: + if (IS_ENABLED(CONFIG_RETPOLINE)) +- goto retpoline_amd; ++ goto retpoline_lfence; + break; + case SPECTRE_V2_CMD_RETPOLINE_GENERIC: + if (IS_ENABLED(CONFIG_RETPOLINE)) +@@ -890,17 +897,17 @@ static void __init spectre_v2_select_mit + + retpoline_auto: + if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { +- retpoline_amd: ++ retpoline_lfence: + if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { + pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); + goto retpoline_generic; + } +- mode = SPECTRE_V2_RETPOLINE_AMD; +- setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD); ++ mode = SPECTRE_V2_LFENCE; ++ setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE); + setup_force_cpu_cap(X86_FEATURE_RETPOLINE); + } else { + retpoline_generic: +- mode = SPECTRE_V2_RETPOLINE_GENERIC; ++ mode = SPECTRE_V2_RETPOLINE; + setup_force_cpu_cap(X86_FEATURE_RETPOLINE); + } + +--- a/tools/arch/x86/include/asm/cpufeatures.h ++++ b/tools/arch/x86/include/asm/cpufeatures.h +@@ -194,7 +194,7 @@ + #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ + + #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ +-#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */ ++#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */ + + #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */ + #define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */ diff --git a/queue-4.9/x86-speculation-update-link-to-amd-speculation-whitepaper.patch b/queue-4.9/x86-speculation-update-link-to-amd-speculation-whitepaper.patch new file mode 100644 index 00000000000..6cc81d575c8 --- /dev/null +++ b/queue-4.9/x86-speculation-update-link-to-amd-speculation-whitepaper.patch @@ -0,0 +1,43 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 
2022 +From: Kim Phillips +Date: Mon, 28 Feb 2022 11:23:16 -0600 +Subject: x86/speculation: Update link to AMD speculation whitepaper + +From: Kim Phillips + +commit e9b6013a7ce31535b04b02ba99babefe8a8599fa upstream. + +Update the link to the "Software Techniques for Managing Speculation +on AMD Processors" whitepaper. + +Signed-off-by: Kim Phillips +Signed-off-by: Borislav Petkov +[bwh: Backported to 4.9: adjust filename] +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + Documentation/hw-vuln/spectre.rst | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +--- a/Documentation/hw-vuln/spectre.rst ++++ b/Documentation/hw-vuln/spectre.rst +@@ -60,8 +60,8 @@ privileged data touched during the specu + Spectre variant 1 attacks take advantage of speculative execution of + conditional branches, while Spectre variant 2 attacks use speculative + execution of indirect branches to leak privileged memory. +-See :ref:`[1] ` :ref:`[5] ` :ref:`[7] ` +-:ref:`[10] ` :ref:`[11] `. ++See :ref:`[1] ` :ref:`[5] ` :ref:`[6] ` ++:ref:`[7] ` :ref:`[10] ` :ref:`[11] `. + + Spectre variant 1 (Bounds Check Bypass) + --------------------------------------- +@@ -746,7 +746,7 @@ AMD white papers: + + .. _spec_ref6: + +-[6] `Software techniques for managing speculation on AMD processors `_. ++[6] `Software techniques for managing speculation on AMD processors `_. + + ARM white papers: + diff --git a/queue-4.9/x86-speculation-use-generic-retpoline-by-default-on-amd.patch b/queue-4.9/x86-speculation-use-generic-retpoline-by-default-on-amd.patch new file mode 100644 index 00000000000..ab9a381388e --- /dev/null +++ b/queue-4.9/x86-speculation-use-generic-retpoline-by-default-on-amd.patch @@ -0,0 +1,42 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Kim Phillips +Date: Mon, 28 Feb 2022 11:23:15 -0600 +Subject: x86/speculation: Use generic retpoline by default on AMD + +From: Kim Phillips + +commit 244d00b5dd4755f8df892c86cab35fb2cfd4f14b upstream. + +AMD retpoline may be susceptible to speculation. The speculation +execution window for an incorrect indirect branch prediction using +LFENCE/JMP sequence may potentially be large enough to allow +exploitation using Spectre V2. + +By default, don't use retpoline,lfence on AMD. Instead, use the +generic retpoline. 
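
For clarity, once this hunk is applied the selection helper introduced by the
eIBRS patch above reduces to the following. This is a reconstruction from the
two hunks in this series, not a quote of the kernel tree; the LFENCE variant
remains reachable only via an explicit spectre_v2=retpoline,lfence command
line option:

	static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
	{
		/* Retpoline needs compiler support; without it there is
		 * nothing to select. */
		if (!IS_ENABLED(CONFIG_RETPOLINE)) {
			pr_err("Kernel not compiled with retpoline; no mitigation available!");
			return SPECTRE_V2_NONE;
		}

		/* No AMD special case any more: the generic retpoline is
		 * the default for all vendors. */
		return SPECTRE_V2_RETPOLINE;
	}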
+ +Signed-off-by: Kim Phillips +Signed-off-by: Borislav Petkov +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/kernel/cpu/bugs.c | 8 -------- + 1 file changed, 8 deletions(-) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -897,14 +897,6 @@ static enum spectre_v2_mitigation __init + return SPECTRE_V2_NONE; + } + +- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) { +- if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) { +- pr_err("LFENCE not serializing, switching to generic retpoline\n"); +- return SPECTRE_V2_RETPOLINE; +- } +- return SPECTRE_V2_LFENCE; +- } +- + return SPECTRE_V2_RETPOLINE; + } + diff --git a/queue-4.9/x86-speculation-warn-about-eibrs-lfence-unprivileged-ebpf-smt.patch b/queue-4.9/x86-speculation-warn-about-eibrs-lfence-unprivileged-ebpf-smt.patch new file mode 100644 index 00000000000..b422ffb7474 --- /dev/null +++ b/queue-4.9/x86-speculation-warn-about-eibrs-lfence-unprivileged-ebpf-smt.patch @@ -0,0 +1,94 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Josh Poimboeuf +Date: Fri, 25 Feb 2022 14:32:28 -0800 +Subject: x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT + +From: Josh Poimboeuf + +commit 0de05d056afdb00eca8c7bbb0c79a3438daf700c upstream. + +The commit + + 44a3918c8245 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting") + +added a warning for the "eIBRS + unprivileged eBPF" combination, which +has been shown to be vulnerable against Spectre v2 BHB-based attacks. + +However, there's no warning about the "eIBRS + LFENCE retpoline + +unprivileged eBPF" combo. The LFENCE adds more protection by shortening +the speculation window after a mispredicted branch. That makes an attack +significantly more difficult, even with unprivileged eBPF. So at least +for now the logic doesn't warn about that combination. + +But if you then add SMT into the mix, the SMT attack angle weakens the +effectiveness of the LFENCE considerably. + +So extend the "eIBRS + unprivileged eBPF" warning to also include the +"eIBRS + LFENCE + unprivileged eBPF + SMT" case. + + [ bp: Massage commit message. 
] + +Suggested-by: Alyssa Milburn +Signed-off-by: Josh Poimboeuf +Signed-off-by: Borislav Petkov +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/kernel/cpu/bugs.c | 27 +++++++++++++++++++++++++-- + 1 file changed, 25 insertions(+), 2 deletions(-) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -609,12 +609,27 @@ static inline const char *spectre_v2_mod + + #define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n" + #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n" ++#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n" + + #ifdef CONFIG_BPF_SYSCALL + void unpriv_ebpf_notify(int new_state) + { +- if (spectre_v2_enabled == SPECTRE_V2_EIBRS && !new_state) ++ if (new_state) ++ return; ++ ++ /* Unprivileged eBPF is enabled */ ++ ++ switch (spectre_v2_enabled) { ++ case SPECTRE_V2_EIBRS: + pr_err(SPECTRE_V2_EIBRS_EBPF_MSG); ++ break; ++ case SPECTRE_V2_EIBRS_LFENCE: ++ if (sched_smt_active()) ++ pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); ++ break; ++ default: ++ break; ++ } + } + #endif + +@@ -1074,6 +1089,10 @@ void arch_smt_update(void) + { + mutex_lock(&spec_ctrl_mutex); + ++ if (sched_smt_active() && unprivileged_ebpf_enabled() && ++ spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) ++ pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG); ++ + switch (spectre_v2_user_stibp) { + case SPECTRE_V2_USER_NONE: + break; +@@ -1700,7 +1719,11 @@ static ssize_t spectre_v2_show_state(cha + return sprintf(buf, "Vulnerable: LFENCE\n"); + + if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) +- return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n"); ++ return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n"); ++ ++ if (sched_smt_active() && unprivileged_ebpf_enabled() && ++ spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE) ++ return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n"); + + return sprintf(buf, "%s%s%s%s%s%s\n", + spectre_v2_strings[spectre_v2_enabled], diff --git a/queue-4.9/x86-speculation-warn-about-spectre-v2-lfence-mitigation.patch b/queue-4.9/x86-speculation-warn-about-spectre-v2-lfence-mitigation.patch new file mode 100644 index 00000000000..8cf8a9b1afa --- /dev/null +++ b/queue-4.9/x86-speculation-warn-about-spectre-v2-lfence-mitigation.patch @@ -0,0 +1,63 @@ +From foo@baz Wed Mar 9 04:10:24 PM CET 2022 +From: Josh Poimboeuf +Date: Fri, 25 Feb 2022 14:31:49 -0800 +Subject: x86/speculation: Warn about Spectre v2 LFENCE mitigation + +From: Josh Poimboeuf + +commit eafd987d4a82c7bb5aa12f0e3b4f8f3dea93e678 upstream. + +With: + + f8a66d608a3e ("x86,bugs: Unconditionally allow spectre_v2=retpoline,amd") + +it became possible to enable the LFENCE "retpoline" on Intel. However, +Intel doesn't recommend it, as it has some weaknesses compared to +retpoline. + +Now AMD doesn't recommend it either. + +It can still be left available as a cmdline option. It's faster than +retpoline but is weaker in certain scenarios -- particularly SMT, but +even non-SMT may be vulnerable in some cases. + +So just unconditionally warn if the user requests it on the cmdline. + + [ bp: Massage commit message. 
] + +Signed-off-by: Josh Poimboeuf +Signed-off-by: Borislav Petkov +Signed-off-by: Ben Hutchings +Signed-off-by: Greg Kroah-Hartman +--- + arch/x86/kernel/cpu/bugs.c | 5 +++++ + 1 file changed, 5 insertions(+) + +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -607,6 +607,7 @@ static inline const char *spectre_v2_mod + static inline const char *spectre_v2_module_string(void) { return ""; } + #endif + ++#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n" + #define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n" + + #ifdef CONFIG_BPF_SYSCALL +@@ -928,6 +929,7 @@ static void __init spectre_v2_select_mit + break; + + case SPECTRE_V2_CMD_RETPOLINE_LFENCE: ++ pr_err(SPECTRE_V2_LFENCE_MSG); + mode = SPECTRE_V2_LFENCE; + break; + +@@ -1694,6 +1696,9 @@ static char *ibpb_state(void) + + static ssize_t spectre_v2_show_state(char *buf) + { ++ if (spectre_v2_enabled == SPECTRE_V2_LFENCE) ++ return sprintf(buf, "Vulnerable: LFENCE\n"); ++ + if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled()) + return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n"); +