nvme-unblock-ctrl-state-transition-for-firmware-upda.patch
do_umount-add-missing-barrier-before-refcount-checks.patch
revert-net-phy-microchip-force-irq-polling-mode-for-lan88xx.patch
+x86-bpf-call-branch-history-clearing-sequence-on-exit.patch
+x86-bpf-add-ibhf-call-at-end-of-classic-bpf.patch
+x86-bhi-do-not-set-bhi_dis_s-in-32-bit-mode.patch
--- /dev/null
+From f04cab20573a9ba08c917e1f11be2edc0ae3bcc9 Mon Sep 17 00:00:00 2001
+From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+Date: Mon, 5 May 2025 14:35:12 -0700
+Subject: x86/bhi: Do not set BHI_DIS_S in 32-bit mode
+
+From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+
+commit 073fdbe02c69c43fb7c0d547ec265c7747d4a646 upstream.
+
+With the possibility of intra-mode BHI via cBPF, complete mitigation for
+BHI is to use the IBHF (history fence) instruction with BHI_DIS_S set. Since
+this new instruction is only available in 64-bit mode, setting BHI_DIS_S in
+32-bit mode is only a partial mitigation.
+
+Do not set BHI_DIS_S in 32-bit mode so as to avoid reporting a misleading
+mitigated status. Since IBHF won't be used in 32-bit mode with this change,
+also remove the CONFIG_X86_64 check from emit_spectre_bhb_barrier().
+
+Suggested-by: Josh Poimboeuf <jpoimboe@kernel.org>
+Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
+Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/bugs.c | 5 +++--
+ arch/x86/net/bpf_jit_comp.c | 5 +++--
+ 2 files changed, 6 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1656,10 +1656,11 @@ static void __init bhi_select_mitigation
+ return;
+ }
+
+- if (spec_ctrl_bhi_dis())
++ if (!IS_ENABLED(CONFIG_X86_64))
+ return;
+
+- if (!IS_ENABLED(CONFIG_X86_64))
++ /* Mitigate in hardware if supported */
++ if (spec_ctrl_bhi_dis())
+ return;
+
+ /* Mitigate KVM by default */
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -956,8 +956,7 @@ static int emit_spectre_bhb_barrier(u8 *
+ /* Insert IBHF instruction */
+ if ((cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP) &&
+ cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) ||
+- (cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_HW) &&
+- IS_ENABLED(CONFIG_X86_64))) {
++ cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_HW)) {
+ /*
+ * Add an Indirect Branch History Fence (IBHF). IBHF acts as a
+ * fence preventing branch history from before the fence from
+@@ -967,6 +966,8 @@ static int emit_spectre_bhb_barrier(u8 *
+ * hardware that doesn't need or support it. The REP and REX.W
+ * prefixes are required by the microcode, and they also ensure
+ * that the NOP is unlikely to be used in existing code.
++ *
++ * IBHF is not a valid instruction in 32-bit mode.
+ */
+ EMIT5(0xF3, 0x48, 0x0F, 0x1E, 0xF8); /* ibhf */
+ }
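
To make the reordering in bhi_select_mitigation() above concrete, the
following is a minimal user-space sketch of the selection logic. The
have_bhi_dis_s flag and the sizeof(long) test are hypothetical stand-ins
for the kernel's spec_ctrl_bhi_dis() and IS_ENABLED(CONFIG_X86_64); the
point is only that the 32-bit bail-out must now come before the hardware
check, so a 32-bit kernel never sets BHI_DIS_S and never reports a
misleadingly complete mitigation.

    #include <stdbool.h>
    #include <stdio.h>

    static bool have_bhi_dis_s;	/* stand-in for spec_ctrl_bhi_dis() */

    static void bhi_select_mitigation(void)
    {
    	/*
    	 * Bail out of 32-bit mode first: IBHF only exists in 64-bit
    	 * mode, so BHI_DIS_S alone would be a partial mitigation that
    	 * is still reported as complete.
    	 */
    	if (sizeof(long) != 8) {  /* stand-in for !IS_ENABLED(CONFIG_X86_64) */
    		printf("32-bit kernel: BHI mitigation not enabled\n");
    		return;
    	}

    	/* Mitigate in hardware if supported */
    	if (have_bhi_dis_s) {
    		printf("BHI_DIS_S set: mitigated in hardware\n");
    		return;
    	}

    	printf("fall back to the software clearing sequence\n");
    }

    int main(void)
    {
    	have_bhi_dis_s = true;
    	bhi_select_mitigation();	/* takes the hardware path on 64-bit */
    	return 0;
    }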
--- /dev/null
+From fd5b01fc8f9f4536300197d1686f4aed719c10a8 Mon Sep 17 00:00:00 2001
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Date: Mon, 5 May 2025 14:35:12 -0700
+Subject: x86/bpf: Add IBHF call at end of classic BPF
+
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+
+commit 9f725eec8fc0b39bdc07dcc8897283c367c1a163 upstream.
+
+Classic BPF programs can be run by unprivileged users, allowing
+unprivileged code to execute inside the kernel. Attackers can use this to
+craft branch history in kernel mode that can influence the target of
+indirect branches.
+
+BHI_DIS_S provides user-kernel isolation of branch history, but cBPF can be
+used to bypass this protection by crafting branch history in kernel mode.
+To stop intra-mode attacks via cBPF programs, Intel created a new
+instruction, Indirect Branch History Fence (IBHF). IBHF prevents the
+predicted targets of subsequent indirect branches from being influenced by
+branch history prior to the IBHF. IBHF is only effective while BHI_DIS_S is
+enabled.
+
+Add the IBHF instruction to cBPF jitted code's exit path. Emit the new
+fence when the hardware mitigation is enabled (i.e.,
+X86_FEATURE_CLEAR_BHB_HW is set), or after the software sequence
+(X86_FEATURE_CLEAR_BHB_LOOP) when that sequence is being used in a virtual
+machine. Note that X86_FEATURE_CLEAR_BHB_HW and X86_FEATURE_CLEAR_BHB_LOOP
+are mutually exclusive, so the JIT compiler will only emit the new fence,
+not the SW sequence, when X86_FEATURE_CLEAR_BHB_HW is set.
+
+Hardware that enumerates BHI_NO effectively has BHI_DIS_S protections
+always enabled, regardless of whether BHI_DIS_S is actually set. Since
+BHI_DIS_S doesn't protect against intra-mode attacks, enumerate the BHI bug
+on BHI_NO hardware as well.
+
+Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/common.c | 9 ++++++---
+ arch/x86/net/bpf_jit_comp.c | 19 +++++++++++++++++++
+ 2 files changed, 25 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1345,9 +1345,12 @@ static void __init cpu_set_bug_bits(stru
+ if (vulnerable_to_rfds(x86_arch_cap_msr))
+ setup_force_cpu_bug(X86_BUG_RFDS);
+
+- /* When virtualized, eIBRS could be hidden, assume vulnerable */
+- if (!(x86_arch_cap_msr & ARCH_CAP_BHI_NO) &&
+- !cpu_matches(cpu_vuln_whitelist, NO_BHI) &&
++ /*
++ * Intel parts with eIBRS are vulnerable to BHI attacks. Parts with
++ * BHI_NO still need to use the BHI mitigation to prevent Intra-mode
++ * attacks. When virtualized, eIBRS could be hidden, assume vulnerable.
++ */
++ if (!cpu_matches(cpu_vuln_whitelist, NO_BHI) &&
+ (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED) ||
+ boot_cpu_has(X86_FEATURE_HYPERVISOR)))
+ setup_force_cpu_bug(X86_BUG_BHI);
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -36,6 +36,8 @@ static u8 *emit_code(u8 *ptr, u32 bytes,
+ #define EMIT2(b1, b2) EMIT((b1) + ((b2) << 8), 2)
+ #define EMIT3(b1, b2, b3) EMIT((b1) + ((b2) << 8) + ((b3) << 16), 3)
+ #define EMIT4(b1, b2, b3, b4) EMIT((b1) + ((b2) << 8) + ((b3) << 16) + ((b4) << 24), 4)
++#define EMIT5(b1, b2, b3, b4, b5) \
++ do { EMIT1(b1); EMIT4(b2, b3, b4, b5); } while (0)
+
+ #define EMIT1_off32(b1, off) \
+ do { EMIT1(b1); EMIT(off, 4); } while (0)
+@@ -951,6 +953,23 @@ static int emit_spectre_bhb_barrier(u8 *
+ EMIT1(0x59); /* pop rcx */
+ EMIT1(0x58); /* pop rax */
+ }
++ /* Insert IBHF instruction */
++ if ((cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP) &&
++ cpu_feature_enabled(X86_FEATURE_HYPERVISOR)) ||
++ (cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_HW) &&
++ IS_ENABLED(CONFIG_X86_64))) {
++ /*
++ * Add an Indirect Branch History Fence (IBHF). IBHF acts as a
++ * fence preventing branch history from before the fence from
++ * affecting indirect branches after the fence. This is
++ * specifically used in cBPF jitted code to prevent Intra-mode
++ * BHI attacks. The IBHF instruction is designed to be a NOP on
++ * hardware that doesn't need or support it. The REP and REX.W
++ * prefixes are required by the microcode, and they also ensure
++ * that the NOP is unlikely to be used in existing code.
++ */
++ EMIT5(0xF3, 0x48, 0x0F, 0x1E, 0xF8); /* ibhf */
++ }
+ *pprog = prog;
+ return 0;
+ }
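
The EMIT5() macro added above simply chains the existing byte-packing
helpers. The following is a rough user-space model of that packing; the
emit_code()/EMIT* definitions below are simplified stand-ins for the
kernel macros (which also track a running length in the JIT context). It
shows how the five IBHF bytes land in the instruction buffer, in order,
on a little-endian x86 machine.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for the kernel's emit_code(): copy the low
     * "len" bytes of the packed word into the instruction buffer. On
     * x86 (little-endian) they come out in emission order. */
    static uint8_t *emit_code(uint8_t *ptr, uint32_t bytes, unsigned int len)
    {
    	memcpy(ptr, &bytes, len);
    	return ptr + len;
    }

    #define EMIT1(p, b1)	((p) = emit_code((p), (b1), 1))
    #define EMIT4(p, b1, b2, b3, b4) \
    	((p) = emit_code((p), (uint32_t)(b1) | ((uint32_t)(b2) << 8) | \
    			      ((uint32_t)(b3) << 16) | ((uint32_t)(b4) << 24), 4))
    /* EMIT5 in the patch is exactly this: one byte, then four. */
    #define EMIT5(p, b1, b2, b3, b4, b5) \
    	do { EMIT1(p, b1); EMIT4(p, b2, b3, b4, b5); } while (0)

    int main(void)
    {
    	uint8_t buf[8], *prog = buf;

    	/* REP (0xF3) and REX.W (0x48) prefixed 0F 1E F8: IBHF, decoded
    	 * as a hinted NOP by hardware that doesn't support it. */
    	EMIT5(prog, 0xF3, 0x48, 0x0F, 0x1E, 0xF8);

    	for (uint8_t *p = buf; p < prog; p++)
    		printf("%02x ", *p);		/* f3 48 0f 1e f8 */
    	printf("\n");
    	return 0;
    }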
--- /dev/null
+From ce0dbd7658ea5a3b1f5d2c7117caac5ff616daef Mon Sep 17 00:00:00 2001
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Date: Mon, 5 May 2025 14:35:12 -0700
+Subject: x86/bpf: Call branch history clearing sequence on exit
+
+From: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+
+commit d4e89d212d401672e9cdfe825d947ee3a9fbe3f5 upstream.
+
+Classic BPF programs have been identified as potential vectors for
+intra-mode Branch Target Injection (BTI) attacks. Classic BPF programs can
+be run by unprivileged users, allowing unprivileged code to execute inside
+the kernel. Attackers can use unprivileged cBPF to craft branch history in
+kernel mode that can influence the target of indirect branches.
+
+Introduce a branch history buffer (BHB) clearing sequence during the JIT
+compilation of classic BPF programs. The clearing sequence is the same one
+used in previous mitigations to protect syscalls. Since eBPF programs
+already have their own mitigations in place, only insert the call into
+classic programs that aren't run by privileged users.
+
+Signed-off-by: Daniel Sneddon <daniel.sneddon@linux.intel.com>
+Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/net/bpf_jit_comp.c | 32 ++++++++++++++++++++++++++++++++
+ 1 file changed, 32 insertions(+)
+
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -932,6 +932,29 @@ static void emit_nops(u8 **pprog, int le
+
+ #define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
+
++static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
++ struct bpf_prog *bpf_prog)
++{
++ u8 *prog = *pprog;
++ u8 *func;
++
++ if (cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP)) {
++ /* The clearing sequence clobbers eax and ecx. */
++ EMIT1(0x50); /* push rax */
++ EMIT1(0x51); /* push rcx */
++ ip += 2;
++
++ func = (u8 *)clear_bhb_loop;
++
++ if (emit_call(&prog, func, ip))
++ return -EINVAL;
++ EMIT1(0x59); /* pop rcx */
++ EMIT1(0x58); /* pop rax */
++ }
++ *pprog = prog;
++ return 0;
++}
++
+ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ int oldproglen, struct jit_context *ctx, bool jmp_padding)
+ {
+@@ -1737,6 +1760,15 @@ emit_jmp:
+ seen_exit = true;
+ /* Update cleanup_addr */
+ ctx->cleanup_addr = proglen;
++
++ if (bpf_prog_was_classic(bpf_prog) &&
++ !capable(CAP_SYS_ADMIN)) {
++ u8 *ip = image + addrs[i - 1];
++
++ if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog))
++ return -EINVAL;
++ }
++
+ pop_callee_regs(&prog, callee_regs_used);
+ EMIT1(0xC9); /* leave */
+ emit_return(&prog, image + addrs[i - 1] + (prog - temp));
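
In emit_spectre_bhb_barrier() above, rax and rcx are pushed because
clear_bhb_loop() clobbers them, and ip is advanced by 2 so the relative
call displacement is computed from the actual call site, after the two
one-byte pushes. The sketch below illustrates that displacement math;
emit_rel_call() and the addresses are hypothetical illustrations, not the
kernel's emit_call(), but the encoding rule is the standard x86 one: a
near CALL's rel32 is relative to the address of the next instruction,
i.e. rel32 = target - (ip + 5) for the 5-byte E8 form.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical helper: emit E8 + rel32, the 5-byte near CALL. */
    static uint8_t *emit_rel_call(uint8_t *prog, uint64_t ip, uint64_t target)
    {
    	int32_t rel = (int32_t)(target - (ip + 5));

    	*prog++ = 0xE8;		/* CALL rel32 opcode */
    	memcpy(prog, &rel, 4);	/* little-endian displacement */
    	return prog + 4;
    }

    int main(void)
    {
    	uint8_t buf[16];
    	uint8_t *end;

    	/* Two single-byte pushes precede the call, so the call site is
    	 * ip + 2 -- the reason the patch does "ip += 2" before
    	 * emit_call(). The addresses here are made up. */
    	uint64_t ip = 0x1000 + 2;	/* call-site address */
    	uint64_t target = 0x2000;	/* e.g. clear_bhb_loop */

    	end = emit_rel_call(buf, ip, target);
    	for (uint8_t *p = buf; p < end; p++)
    		printf("%02x ", *p);	/* e8 f9 0f 00 00 */
    	printf("\n");
    	return 0;
    }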