From stable+bounces-27548-greg=kroah.com@vger.kernel.org Tue Mar 12 23:41:11 2024
From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Date: Tue, 12 Mar 2024 15:41:02 -0700
Subject: KVM/VMX: Move VERW closer to VMentry for MDS mitigation
To: stable@vger.kernel.org
Cc: Dave Hansen <dave.hansen@linux.intel.com>, Sean Christopherson <seanjc@google.com>
Message-ID: <20240312-delay-verw-backport-5-10-y-v2-7-ad081ccd89ca@linux.intel.com>
Content-Disposition: inline

From: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>

commit 43fb862de8f628c5db5e96831c915b9aebf62d33 upstream.

During VMentry VERW is executed to mitigate MDS. After VERW, any memory
access such as a register push onto the stack may put host data in
MDS-affected CPU buffers. A guest can then use MDS to sample host data.

Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out. Harden the MDS mitigation
by moving the VERW mitigation later in the VMentry path.

Note that VERW for MMIO Stale Data mitigation is unchanged because of
the complexity of per-guest conditional VERW which is not easy to handle
that late in asm with no GPRs available. If the CPU is also affected by
MDS, VERW is unconditionally executed late in asm regardless of guest
having MMIO access.

 [ pawan: conflict resolved in backport ]

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Acked-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/all/20240213-delay-verw-v8-6-a6216d83edb7%40linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kvm/vmx/vmenter.S |    3 +++
 arch/x86/kvm/vmx/vmx.c     |   12 ++++++++----
 2 files changed, 11 insertions(+), 4 deletions(-)

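[ note: CLEAR_CPU_BUFFERS used in the vmenter.S hunk below is the asm
  helper introduced earlier in this backport series ("x86/bugs: Add asm
  helpers for executing VERW"), not by this patch. As a rough sketch
  (the exact 5.10 definition may differ slightly), it issues a
  memory-operand VERW only when X86_FEATURE_CLEAR_CPU_BUF is set:

    .macro CLEAR_CPU_BUFFERS
        ALTERNATIVE "", __stringify(verw _ASM_RIP(mds_verw_sel)), X86_FEATURE_CLEAR_CPU_BUF
    .endm

  Only the memory-operand form of VERW clears the CPU buffers, and it
  clobbers EFLAGS.ZF, hence the "Clobbers EFLAGS.ZF" comment and the
  placement after the EFLAGS.CF test result is no longer needed until
  the jnc below. ]
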
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -99,6 +99,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
+	/* Clobbers EFLAGS.ZF */
+	CLEAR_CPU_BUFFERS
+
 	/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
 	jnc .Lvmlaunch
 
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -397,7 +397,8 @@ static __always_inline void vmx_enable_f
 
 static void vmx_update_fb_clear_dis(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx)
 {
-	vmx->disable_fb_clear = vmx_fb_clear_ctrl_available;
+	vmx->disable_fb_clear = !cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF) &&
+				vmx_fb_clear_ctrl_available;
 
 	/*
 	 * If guest will not execute VERW, there is no need to set FB_CLEAR_DIS
@@ -6792,11 +6793,14 @@ static noinstr void vmx_vcpu_enter_exit(
 	guest_enter_irqoff();
 	lockdep_hardirqs_on(CALLER_ADDR0);
 
-	/* L1D Flush includes CPU buffer clear to mitigate MDS */
+	/*
+	 * L1D Flush includes CPU buffer clear to mitigate MDS, but VERW
+	 * mitigation for MDS is done late in VMentry and is still
+	 * executed in spite of L1D Flush. This is because an extra VERW
+	 * should not matter much after the big hammer L1D Flush.
+	 */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
 		vmx_l1d_flush(vcpu);
-	else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
-		mds_clear_cpu_buffers();
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
 		mds_clear_cpu_buffers();