From: Sasha Levin
Date: Tue, 14 Feb 2023 17:48:28 +0000 (-0500)
Subject: Fixes for 5.15
X-Git-Tag: v6.1.12~1^2
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=c2cb1a1906da492353447de7ba3dec6758341d0c;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 5.15

Signed-off-by: Sasha Levin
---

diff --git a/queue-5.15/documentation-hw-vuln-add-documentation-for-cross-th.patch b/queue-5.15/documentation-hw-vuln-add-documentation-for-cross-th.patch
new file mode 100644
index 00000000000..e68bfc01a8c
--- /dev/null
+++ b/queue-5.15/documentation-hw-vuln-add-documentation-for-cross-th.patch
@@ -0,0 +1,132 @@
+From 5e2a1b2c02213c977036d38dc5ae03adff8296d1 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 14 Feb 2023 12:09:56 -0500
+Subject: Documentation/hw-vuln: Add documentation for Cross-Thread Return
+ Predictions
+
+From: Tom Lendacky
+
+[ Upstream commit 493a2c2d23ca91afba96ac32b6cbafb54382c2a3 ]
+
+Add the admin guide for the Cross-Thread Return Predictions vulnerability.
+
+Signed-off-by: Tom Lendacky
+Message-Id: <60f9c0b4396956ce70499ae180cb548720b25c7e.1675956146.git.thomas.lendacky@amd.com>
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Sasha Levin
+---
+ .../admin-guide/hw-vuln/cross-thread-rsb.rst  | 92 +++++++++++++++++++
+ Documentation/admin-guide/hw-vuln/index.rst   |  1 +
+ 2 files changed, 93 insertions(+)
+ create mode 100644 Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst
+
+diff --git a/Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst b/Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst
+new file mode 100644
+index 0000000000000..ec6e9f5bcf9e8
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/cross-thread-rsb.rst
+@@ -0,0 +1,92 @@
++
++.. SPDX-License-Identifier: GPL-2.0
++
++Cross-Thread Return Address Predictions
++=======================================
++
++Certain AMD and Hygon processors are subject to a cross-thread return address
++predictions vulnerability. When running in SMT mode and one sibling thread
++transitions out of C0 state, the other sibling thread could use return target
++predictions from the sibling thread that transitioned out of C0.
++
++The Spectre v2 mitigations protect the Linux kernel, as it fills the return
++address prediction entries with safe targets when context switching to the idle
++thread. However, KVM does allow a VMM to prevent exiting guest mode when
++transitioning out of C0. This could result in a guest-controlled return target
++being consumed by the sibling thread.
++
++Affected processors
++-------------------
++
++The following CPUs are vulnerable:
++
++  - AMD Family 17h processors
++  - Hygon Family 18h processors
++
++Related CVEs
++------------
++
++The following CVE entry is related to this issue:
++
++   ==============  =======================================
++   CVE-2022-27672  Cross-Thread Return Address Predictions
++   ==============  =======================================
++
++Problem
++-------
++
++Affected SMT-capable processors support 1T and 2T modes of execution when SMT
++is enabled. In 2T mode, both threads in a core are executing code. For the
++processor core to enter 1T mode, it is required that one of the threads
++requests to transition out of the C0 state. This can be communicated with the
++HLT instruction or with an MWAIT instruction that requests non-C0.
++When the thread re-enters the C0 state, the processor transitions back
++to 2T mode, assuming the other thread is also still in C0 state.
++
++In affected processors, the return address predictor (RAP) is partitioned
++depending on the SMT mode. For instance, in 2T mode each thread uses a private
++16-entry RAP, but in 1T mode, the active thread uses a 32-entry RAP. Upon
++transition between 1T/2T mode, the RAP contents are not modified but the RAP
++pointers (which control the next return target to use for predictions) may
++change. This behavior may result in return targets from one SMT thread being
++used by RET predictions in the sibling thread following a 1T/2T switch. In
++particular, a RET instruction executed immediately after a transition to 1T may
++use a return target from the thread that just became idle. In theory, this
++could lead to information disclosure if the return targets used do not come
++from trustworthy code.
++
++Attack scenarios
++----------------
++
++An attack can be mounted on affected processors by performing a series of CALL
++instructions with targeted return locations and then transitioning out of C0
++state.
++
++Mitigation mechanism
++--------------------
++
++Before entering idle state, the kernel context switches to the idle thread. The
++context switch fills the RAP entries (referred to as the RSB in Linux) with safe
++targets by performing a sequence of CALL instructions.
++
++Prevent a guest VM from directly putting the processor into an idle state by
++intercepting HLT and MWAIT instructions.
++
++Both mitigations are required to fully address this issue.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++
++Use existing Spectre v2 mitigations that will fill the RSB on context switch.
++
++Mitigation control for KVM - module parameter
++---------------------------------------------
++
++By default, the KVM hypervisor mitigates this issue by intercepting guest
++attempts to transition out of C0. A VMM can use the KVM_CAP_X86_DISABLE_EXITS
++capability to override those interceptions, but since this is not common, the
++mitigation that covers this path is not enabled by default.
++
++The mitigation for the KVM_CAP_X86_DISABLE_EXITS capability can be turned on
++using the boolean module parameter mitigate_smt_rsb, e.g.:
++	kvm.mitigate_smt_rsb=1
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 4df436e7c4177..e0614760a99e7 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -18,3 +18,4 @@ are configurable at compile, boot or run time.
+    core-scheduling.rst
+    l1d_flush.rst
+    processor_mmio_stale_data.rst
++   cross-thread-rsb.rst
+-- 
+2.39.0
+
diff --git a/queue-5.15/kvm-x86-mitigate-the-cross-thread-return-address-pre.patch b/queue-5.15/kvm-x86-mitigate-the-cross-thread-return-address-pre.patch
new file mode 100644
index 00000000000..8c7327a89e8
--- /dev/null
+++ b/queue-5.15/kvm-x86-mitigate-the-cross-thread-return-address-pre.patch
@@ -0,0 +1,108 @@
+From 8ddfb2e85aaeb8e7c056ef63b3cf9643934f7d20 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 14 Feb 2023 12:09:55 -0500
+Subject: KVM: x86: Mitigate the cross-thread return address predictions bug
+
+From: Tom Lendacky
+
+[ Upstream commit 6f0f2d5ef895d66a3f2b32dd05189ec34afa5a55 ]
+
+By default, KVM/SVM will intercept attempts by the guest to transition
+out of C0. However, the KVM_CAP_X86_DISABLE_EXITS capability can be used
+by a VMM to change this behavior. To mitigate the cross-thread return
+address predictions bug (X86_BUG_SMT_RSB), a VMM must not be allowed to
+override the default behavior to intercept C0 transitions.
+
+Use a module parameter to control the mitigation on processors that are
+vulnerable to X86_BUG_SMT_RSB. If the processor is vulnerable to the
+X86_BUG_SMT_RSB bug and the module parameter is set to mitigate the bug,
+KVM will not allow the disabling of the HLT, MWAIT and CSTATE exits.
+
+Signed-off-by: Tom Lendacky
+Message-Id: <4019348b5e07148eb4d593380a5f6713b93c9a16.1675956146.git.thomas.lendacky@amd.com>
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Sasha Levin
+---
+ arch/x86/kvm/x86.c | 43 ++++++++++++++++++++++++++++++++-----------
+ 1 file changed, 32 insertions(+), 11 deletions(-)
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index fcfa3fedf84f1..45a3d11bb70d9 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -184,6 +184,10 @@ module_param(force_emulation_prefix, bool, S_IRUGO);
+ int __read_mostly pi_inject_timer = -1;
+ module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR);
+ 
++/* Enable/disable SMT_RSB bug mitigation */
++bool __read_mostly mitigate_smt_rsb;
++module_param(mitigate_smt_rsb, bool, 0444);
++
+ /*
+  * Restoring the host value for MSRs that are only consumed when running in
+  * usermode, e.g. SYSCALL MSRs and TSC_AUX, can be deferred until the CPU
+@@ -4164,10 +4168,15 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ 		r = KVM_CLOCK_TSC_STABLE;
+ 		break;
+ 	case KVM_CAP_X86_DISABLE_EXITS:
+-		r |= KVM_X86_DISABLE_EXITS_HLT | KVM_X86_DISABLE_EXITS_PAUSE |
+-		      KVM_X86_DISABLE_EXITS_CSTATE;
+-		if(kvm_can_mwait_in_guest())
+-			r |= KVM_X86_DISABLE_EXITS_MWAIT;
++		r = KVM_X86_DISABLE_EXITS_PAUSE;
++
++		if (!mitigate_smt_rsb) {
++			r |= KVM_X86_DISABLE_EXITS_HLT |
++			     KVM_X86_DISABLE_EXITS_CSTATE;
++
++			if (kvm_can_mwait_in_guest())
++				r |= KVM_X86_DISABLE_EXITS_MWAIT;
++		}
+ 		break;
+ 	case KVM_CAP_X86_SMM:
+ 		/* SMBASE is usually relocated above 1M on modern chipsets,
+@@ -5746,15 +5755,26 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
+ 		if (cap->args[0] & ~KVM_X86_DISABLE_VALID_EXITS)
+ 			break;
+ 
+-		if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
+-			kvm_can_mwait_in_guest())
+-			kvm->arch.mwait_in_guest = true;
+-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
+-			kvm->arch.hlt_in_guest = true;
+ 		if (cap->args[0] & KVM_X86_DISABLE_EXITS_PAUSE)
+ 			kvm->arch.pause_in_guest = true;
+-		if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
+-			kvm->arch.cstate_in_guest = true;
++
++#define SMT_RSB_MSG "This processor is affected by the Cross-Thread Return Predictions vulnerability. " \
++		    "KVM_CAP_X86_DISABLE_EXITS should only be used with SMT disabled or trusted guests."
++
++		if (!mitigate_smt_rsb) {
++			if (boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible() &&
++			    (cap->args[0] & ~KVM_X86_DISABLE_EXITS_PAUSE))
++				pr_warn_once(SMT_RSB_MSG);
++
++			if ((cap->args[0] & KVM_X86_DISABLE_EXITS_MWAIT) &&
++				kvm_can_mwait_in_guest())
++				kvm->arch.mwait_in_guest = true;
++			if (cap->args[0] & KVM_X86_DISABLE_EXITS_HLT)
++				kvm->arch.hlt_in_guest = true;
++			if (cap->args[0] & KVM_X86_DISABLE_EXITS_CSTATE)
++				kvm->arch.cstate_in_guest = true;
++		}
++
+ 		r = 0;
+ 		break;
+ 	case KVM_CAP_MSR_PLATFORM_INFO:
+@@ -12796,6 +12816,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_vmgexit_msr_protocol_exit);
+ static int __init kvm_x86_init(void)
+ {
+ 	kvm_mmu_x86_module_init();
++	mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
+ 	return 0;
+ }
+ module_init(kvm_x86_init);
+-- 
+2.39.0
+
diff --git a/queue-5.15/series b/queue-5.15/series
index a14f534261c..dd9e664cdc2 100644
--- a/queue-5.15/series
+++ b/queue-5.15/series
@@ -61,3 +61,6 @@ fix-page-corruption-caused-by-racy-check-in-__free_pages.patch
 drm-amdgpu-fence-fix-oops-due-to-non-matching-drm_sched-init-fini.patch
 drm-i915-initialize-the-obj-flags-for-shmem-objects.patch
 drm-i915-fix-vbt-dsi-dvo-port-handling.patch
+x86-speculation-identify-processors-vulnerable-to-sm.patch
+kvm-x86-mitigate-the-cross-thread-return-address-pre.patch
+documentation-hw-vuln-add-documentation-for-cross-th.patch
diff --git a/queue-5.15/x86-speculation-identify-processors-vulnerable-to-sm.patch b/queue-5.15/x86-speculation-identify-processors-vulnerable-to-sm.patch
new file mode 100644
index 00000000000..2ec96733250
--- /dev/null
+++ b/queue-5.15/x86-speculation-identify-processors-vulnerable-to-sm.patch
@@ -0,0 +1,79 @@
+From c96c86ce4892cce7bb5c4b191f5bfae0c5c66242 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 14 Feb 2023 12:09:54 -0500
+Subject: x86/speculation: Identify processors vulnerable to SMT RSB
+ predictions
+
+From: Tom Lendacky
+
+[ Upstream commit be8de49bea505e7777a69ef63d60e02ac1712683 ]
+
+Certain AMD processors are vulnerable to a cross-thread return address
+predictions bug. When running in SMT mode and one of the sibling threads
+transitions out of C0 state, the other sibling thread could use return
+target predictions from the sibling thread that transitioned out of C0.
+
+The Spectre v2 mitigations cover the Linux kernel, as it fills the RSB
+when context switching to the idle thread. However, KVM allows a VMM to
+prevent exiting guest mode when transitioning out of C0. A guest could
+act maliciously in this situation, so create a new x86 BUG that can be
+used to detect if the processor is vulnerable.
+
+Reviewed-by: Borislav Petkov (AMD)
+Signed-off-by: Tom Lendacky
+Message-Id: <91cec885656ca1fcd4f0185ce403a53dd9edecb7.1675956146.git.thomas.lendacky@amd.com>
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Sasha Levin
+---
+ arch/x86/include/asm/cpufeatures.h | 1 +
+ arch/x86/kernel/cpu/common.c       | 9 +++++++--
+ 2 files changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index f3cb8c8bf8d9c..e31c7e75d6b02 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -452,5 +452,6 @@
+ #define X86_BUG_MMIO_UNKNOWN		X86_BUG(26) /* CPU is too old and its MMIO Stale Data status is unknown */
+ #define X86_BUG_RETBLEED		X86_BUG(27) /* CPU is affected by RETBleed */
+ #define X86_BUG_EIBRS_PBRSB		X86_BUG(28) /* EIBRS is vulnerable to Post Barrier RSB Predictions */
++#define X86_BUG_SMT_RSB			X86_BUG(29) /* CPU is vulnerable to Cross-Thread Return Address Predictions */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 9c1df6222df92..1698470dbea5f 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1125,6 +1125,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define MMIO_SBDS	BIT(2)
+ /* CPU is affected by RETbleed, speculating where you would not expect it */
+ #define RETBLEED	BIT(3)
++/* CPU is affected by SMT (cross-thread) return predictions */
++#define SMT_RSB		BIT(4)
+ 
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,	SRBDS),
+@@ -1156,8 +1158,8 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ 
+ 	VULNBL_AMD(0x15, RETBLEED),
+ 	VULNBL_AMD(0x16, RETBLEED),
+-	VULNBL_AMD(0x17, RETBLEED),
+-	VULNBL_HYGON(0x18, RETBLEED),
++	VULNBL_AMD(0x17, RETBLEED | SMT_RSB),
++	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB),
+ 	{}
+ };
+ 
+@@ -1275,6 +1277,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	    !(ia32_cap & ARCH_CAP_PBRSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_EIBRS_PBRSB);
+ 
++	if (cpu_matches(cpu_vuln_blacklist, SMT_RSB))
++		setup_force_cpu_bug(X86_BUG_SMT_RSB);
++
+ 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+-- 
+2.39.0
+
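
The three queued patches above gate the VMM-facing KVM_CAP_X86_DISABLE_EXITS path described in the commit messages. The sketch below is not part of the queued patches; it is only a minimal illustration, using the standard KVM userspace API (/dev/kvm, KVM_CREATE_VM, KVM_CHECK_EXTENSION, KVM_ENABLE_CAP) with error handling trimmed, of how a VMM queries which exits it may disable and requests them before creating vCPUs. On an affected processor booted with kvm.mitigate_smt_rsb=1, the mask reported by KVM_CHECK_EXTENSION shrinks to the PAUSE exit, so the HLT/MWAIT/CSTATE requests become no-ops.

/*
 * Illustrative sketch (not taken from the queued patches): a VMM asking KVM
 * to stop intercepting HLT/MWAIT/C-state transitions for its guest.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	if (kvm < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	if (vm < 0) {
		perror("KVM_CREATE_VM");
		return 1;
	}

	/* Bitmask of exits the host will let this VMM disable. */
	int allowed = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_X86_DISABLE_EXITS);
	printf("disableable exits mask: 0x%x\n", allowed);

	struct kvm_enable_cap cap;
	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_X86_DISABLE_EXITS;
	/* Request only what the host reports as allowed. */
	cap.args[0] = allowed & (KVM_X86_DISABLE_EXITS_HLT |
				 KVM_X86_DISABLE_EXITS_MWAIT |
				 KVM_X86_DISABLE_EXITS_CSTATE);

	if (ioctl(vm, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP");

	return 0;
}

Leaving the PAUSE exit disableable even with the mitigation enabled is consistent with the patch, since executing PAUSE does not take the sibling thread out of C0.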