From 946c522b603f281195af1df91837a1d4d1eb3bc9 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <sean.j.christopherson@intel.com>
Date: Wed, 23 Jan 2019 14:39:23 -0800
Subject: KVM: nVMX: Sign extend displacements of VMX instr's mem operands
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Sean Christopherson <sean.j.christopherson@intel.com>

commit 946c522b603f281195af1df91837a1d4d1eb3bc9 upstream.

The VMCS.EXIT_QUALIFICATION field reports the displacements of memory
operands for various instructions, including VMX instructions, as a
naturally sized unsigned value, but masks the value by the address
size, e.g. given a ModRM encoded as -0x28(%ebp), the -0x28 displacement
is reported as 0xffffffd8 for a 32-bit address size. Despite some weird
wording regarding sign extension, the SDM explicitly states that bits
beyond the instruction's address size are undefined:

  In all cases, bits of this field beyond the instruction’s address
  size are undefined.

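As a concrete illustration (a standalone sketch, not KVM code), masking
a negative displacement to a 32-bit address size yields exactly the
value described above:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      int64_t disp = -0x28; /* displacement from -0x28(%ebp) */
      /* masked to the 32-bit address size, as reported in the
       * exit qualification */
      uint64_t reported = (uint64_t)disp & 0xffffffffULL;

      printf("0x%llx\n", (unsigned long long)reported); /* 0xffffffd8 */
      return 0;
  }
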
Failure to sign extend the displacement results in KVM incorrectly
treating a negative displacement as a large positive displacement when
the address size of the VMX instruction is smaller than KVM's native
size, e.g. a 32-bit address size on a 64-bit KVM.

The very original decoding, added by commit 064aea774768 ("KVM: nVMX:
Decoding memory operands of VMX instructions"), sort of modeled sign
extension by truncating the final virtual/linear address for a 32-bit
address size. I.e. it messed up the effective address but made it work
by adjusting the final address.

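Roughly, that original behavior amounts to the following (a simplified
sketch with assumed names, not the actual KVM code):

  #include <stdint.h>

  /* Zero-extending the displacement yields a bogus effective address,
   * but truncating the final address to 32 bits hides the mistake. */
  uint32_t old_style_linear_addr(uint64_t seg_base, uint64_t ebp)
  {
      uint64_t off = 0xffffffd8ULL;     /* -0x28 zero-extended */
      uint64_t ea = ebp + off;          /* wrong, e.g. 0x100000fd8 */
      return (uint32_t)(seg_base + ea); /* truncation fixes it up */
  }
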
When segmentation checks were added, the truncation logic was kept
as-is and no sign extension logic was introduced. In other words, it
kept calculating the wrong effective address while mostly generating
the correct virtual/linear address. As the effective address is what's
used in the segment limit checks, this results in KVM incorrectly
injecting #GP/#SS faults due to non-existent segment violations when
a nested VMM uses negative displacements with an address size smaller
than KVM's native address size.

Using the -0x28(%ebp) example, an EBP value of 0x1000 will result in
KVM using 0x100000fd8 as the effective address when checking for a
segment limit violation. This causes a 100% failure rate when running
a 32-bit KVM build as L1 on top of a 64-bit KVM L0.

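A minimal sketch of that arithmetic, using the example values above
(the sign-extension cast mirrors what the fix below does):

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
      uint64_t ebp = 0x1000;
      uint64_t disp = 0xffffffd8ULL; /* -0x28 for a 32-bit address size */

      /* zero-extended: bogus effective address, fails the limit check */
      assert(ebp + disp == 0x100000fd8ULL);
      /* sign-extended: the intended effective address, well in bounds */
      assert(ebp + (uint64_t)(int64_t)(int32_t)disp == 0xfd8ULL);
      return 0;
  }
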
Fixes: f9eb4af67c9d ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

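For reference, the fix relies on the kernel's sign_extend64() helper
(include/linux/bitops.h), which sign-extends a value from a given bit
index; a minimal standalone equivalent looks like this:

  #include <stdint.h>

  /* Sign-extend 'value' from the sign bit at position 'index'. */
  static inline int64_t sign_extend64(uint64_t value, int index)
  {
      uint8_t shift = 63 - index;
      return (int64_t)(value << shift) >> shift;
  }

  /* e.g. sign_extend64(0xffffffd8, 31) == -0x28 */
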
---
 arch/x86/kvm/vmx/nested.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4035,6 +4035,10 @@ int get_vmx_mem_address(struct kvm_vcpu
 	/* Addr = segment_base + offset */
 	/* offset = base + [index * scale] + displacement */
 	off = exit_qualification; /* holds the displacement */
+	if (addr_size == 1)
+		off = (gva_t)sign_extend64(off, 31);
+	else if (addr_size == 0)
+		off = (gva_t)sign_extend64(off, 15);
 	if (base_is_valid)
 		off += kvm_register_read(vcpu, base_reg);
 	if (index_is_valid)