Linux 4.14.114
From 8f4dc2e77cdfaf7e644ef29693fa229db29ee1de Mon Sep 17 00:00:00 2001
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Tue, 2 Apr 2019 08:10:47 -0700
Subject: KVM: x86: Don't clear EFER during SMM transitions for 32-bit vCPU

From: Sean Christopherson <sean.j.christopherson@intel.com>

commit 8f4dc2e77cdfaf7e644ef29693fa229db29ee1de upstream.

Neither AMD nor Intel CPUs have an EFER field in the legacy SMRAM save
state area, i.e. don't save/restore EFER across SMM transitions.  KVM
somewhat models this, e.g. doesn't clear EFER on entry to SMM if the
guest doesn't support long mode.  But during RSM, KVM unconditionally
clears EFER so that it can get back to pure 32-bit mode in order to
start loading CRs with their actual non-SMM values.

Clear EFER only when it will be written when loading the non-SMM state
so as to preserve bits that can theoretically be set on 32-bit vCPUs,
e.g. KVM always emulates EFER_SCE.

And because CR4.PAE is cleared only to play nice with EFER, wrap that
code in the long mode check as well.  Note, this may result in a
compiler warning about cr4 being consumed uninitialized.  Re-read CR4
even though it's technically unnecessary, as doing so allows for more
readable code and RSM emulation is not a performance critical path.

Fixes: 660a5d517aaab ("KVM: x86: save/load state on SMM switch")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/kvm/emulate.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -2588,15 +2588,13 @@ static int em_rsm(struct x86_emulate_ctx
 	 * CR0/CR3/CR4/EFER.  It's all a bit more complicated if the vCPU
 	 * supports long mode.
 	 */
-	cr4 = ctxt->ops->get_cr(ctxt, 4);
 	if (emulator_has_longmode(ctxt)) {
 		struct desc_struct cs_desc;
 
 		/* Zero CR4.PCIDE before CR0.PG. */
-		if (cr4 & X86_CR4_PCIDE) {
+		cr4 = ctxt->ops->get_cr(ctxt, 4);
+		if (cr4 & X86_CR4_PCIDE)
 			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PCIDE);
-			cr4 &= ~X86_CR4_PCIDE;
-		}
 
 		/* A 32-bit code segment is required to clear EFER.LMA. */
 		memset(&cs_desc, 0, sizeof(cs_desc));
@@ -2610,13 +2608,16 @@ static int em_rsm(struct x86_emulate_ctx
 	if (cr0 & X86_CR0_PE)
 		ctxt->ops->set_cr(ctxt, 0, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
 
-	/* Now clear CR4.PAE (which must be done before clearing EFER.LME). */
-	if (cr4 & X86_CR4_PAE)
-		ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
-
-	/* And finally go back to 32-bit mode. */
-	efer = 0;
-	ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
+	if (emulator_has_longmode(ctxt)) {
+		/* Clear CR4.PAE before clearing EFER.LME. */
+		cr4 = ctxt->ops->get_cr(ctxt, 4);
+		if (cr4 & X86_CR4_PAE)
+			ctxt->ops->set_cr(ctxt, 4, cr4 & ~X86_CR4_PAE);
+
+		/* And finally go back to 32-bit mode. */
+		efer = 0;
+		ctxt->ops->set_msr(ctxt, MSR_EFER, efer);
+	}
 
 	smbase = ctxt->ops->get_smbase(ctxt);
 	if (emulator_has_longmode(ctxt))