KVM: x86: Don't read guest CR3 when doing async pf while the MMU is direct
author     Xiaoyao Li <xiaoyao.li@intel.com>
           Fri, 12 Dec 2025 13:50:51 +0000 (21:50 +0800)
committer  Sean Christopherson <seanjc@google.com>
           Thu, 8 Jan 2026 20:04:58 +0000 (12:04 -0800)
Don't read guest CR3 in kvm_arch_setup_async_pf() if the MMU is direct
and use INVALID_GPA instead.

When KVM tries to perform the host-only async page fault for the shared
memory of TDX guests, the following WARNING is triggered:

  WARNING: CPU: 1 PID: 90922 at arch/x86/kvm/vmx/main.c:483 vt_cache_reg+0x16/0x20
  Call Trace:
  __kvm_mmu_faultin_pfn
  kvm_mmu_faultin_pfn
  kvm_tdp_page_fault
  kvm_mmu_do_page_fault
  kvm_mmu_page_fault
  tdx_handle_ept_violation

The WARNING is triggered by kvm_arch_setup_async_pf() calling
kvm_mmu_get_guest_pgd() to cache the guest CR3 for later use in
kvm_arch_async_page_ready(), which uses it to determine whether the page
fault can be fixed in the current vCPU context and thus save one VM-Exit.
However, when guest state is protected, KVM cannot read the guest CR3.
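
For reference, a minimal sketch of the offending path, paraphrased rather
than copied from KVM's mmu.h/mmu.c: in the common non-nested case the PGD
getter is the CR3 accessor, and filling the CR3 register cache is what lands
in vt_cache_reg() for TDX guests.

  /* Paraphrased sketch, not verbatim kernel code. */
  static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
  {
          /* Filling the CR3 register cache is what hits vt_cache_reg(). */
          return kvm_read_cr3(vcpu);
  }

  static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
                                                    struct kvm_mmu *mmu)
  {
          /* get_guest_pgd is get_guest_cr3 unless nested paging remaps it. */
          return mmu->get_guest_pgd(vcpu);
  }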

Since protected guests aren't compatible with shadow paging, i.e., they
must use a direct MMU, avoid calling kvm_mmu_get_guest_pgd() to read the
guest CR3 when the MMU is direct and use INVALID_GPA instead.

Note that for protected guests, mmu->root_role.direct is always true, so
the kvm_mmu_get_guest_pgd() call in kvm_arch_async_page_ready() is never
reached.
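
Why that is safe can be seen from a paraphrased sketch of the consumer side
(not a verbatim copy of kvm_arch_async_page_ready()): the cached CR3 is only
consulted on the !direct path, so the INVALID_GPA placeholder is never
examined for a direct MMU.

  /* Paraphrased sketch, not verbatim kernel code. */
  void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
  {
          if (vcpu->arch.mmu->root_role.direct != work->arch.direct_map ||
              work->wakeup_all)
                  return;

          if (kvm_mmu_reload(vcpu))
                  return;

          /* Only a shadow (!direct) MMU ever consults the cached CR3. */
          if (!vcpu->arch.mmu->root_role.direct &&
              work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
                  return;

          /* ... fix the fault in the current vCPU context ... */
  }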

Reported-by: Farrah Chen <farrah.chen@intel.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://patch.msgid.link/20251212135051.2155280-1-xiaoyao.li@intel.com
[sean: explicitly cast to "unsigned long" to make 32-bit builds happy]
Signed-off-by: Sean Christopherson <seanjc@google.com>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f173245469005808c460be4064cc8276e730f8d8..3911ac9bddfd5d67318fcf84601304cced36f80b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4521,7 +4521,10 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu,
        arch.gfn = fault->gfn;
        arch.error_code = fault->error_code;
        arch.direct_map = vcpu->arch.mmu->root_role.direct;
-       arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
+       if (arch.direct_map)
+               arch.cr3 = (unsigned long)INVALID_GPA;
+       else
+               arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
 
        return kvm_setup_async_pf(vcpu, fault->addr,
                                  kvm_vcpu_gfn_to_hva(vcpu, fault->gfn), &arch);