KVM: nSVM: Fail emulation of VMRUN/VMLOAD/VMSAVE if mapping vmcb12 fails
author		Yosry Ahmed <yosry@kernel.org>
		Mon, 16 Mar 2026 20:27:30 +0000 (20:27 +0000)
committer	Sean Christopherson <seanjc@google.com>
		Fri, 3 Apr 2026 23:08:04 +0000 (16:08 -0700)
KVM currently injects a #GP if mapping vmcb12 fails when emulating
VMRUN/VMLOAD/VMSAVE. This is not architectural behavior, as #GP should
only be injected if the physical address is not supported or not
aligned. Instead, handle it as an emulation failure, similar to how nVMX
handles failures to read/write guest memory in several emulation paths.

When virtual VMLOAD/VMSAVE is enabled, if vmcb12's GPA is not mapped in
the NPTs, a VMEXIT(#NPF) is generated; if there is no corresponding
memslot, KVM installs an MMIO SPTE and emulates the instruction.
x86_emulate_insn() then returns EMULATION_FAILED, as VMLOAD/VMSAVE are
not handled among the two-byte opcode (twobyte_insn) cases.

Even though this path also results in an emulation failure, it returns
straight to userspace only if KVM_CAP_EXIT_ON_EMULATION_FAILURE is
enabled. Otherwise, KVM injects #UD and exits to userspace only if the
vCPU is not in guest mode. So the behavior is slightly different when
virtual VMLOAD/VMSAVE is enabled.

Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry@kernel.org>
Link: https://patch.msgid.link/20260316202732.3164936-8-yosry@kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/svm/nested.c
arch/x86/kvm/svm/svm.c

index 16f4bc4f48f5e7e87de67537b25f1647589a9553..b42d95fc84990c595b2926d495a7b79c6b786f65 100644
@@ -1111,10 +1111,8 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
 
        ret = nested_svm_copy_vmcb12_to_cache(vcpu, vmcb12_gpa);
        if (ret) {
-               if (ret == -EFAULT) {
-                       kvm_inject_gp(vcpu, 0);
-                       return 1;
-               }
+               if (ret == -EFAULT)
+                       return kvm_handle_memory_failure(vcpu, X86EMUL_IO_NEEDED, NULL);
 
                /* Advance RIP past VMRUN as part of the nested #VMEXIT. */
                return kvm_skip_emulated_instruction(vcpu);
index b83d524a6e781fdb1b8ac1e8f371d3f1f0ee5f68..1e51cbb80e864d4e39fe0ab73bc0c23917174af9 100644
@@ -2204,10 +2204,8 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)
                return 1;
        }
 
-       if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map)) {
-               kvm_inject_gp(vcpu, 0);
-               return 1;
-       }
+       if (kvm_vcpu_map(vcpu, gpa_to_gfn(vmcb12_gpa), &map))
+               return kvm_handle_memory_failure(vcpu, X86EMUL_IO_NEEDED, NULL);
 
        vmcb12 = map.hva;