From: Xiaoyao Li
Date: Fri, 12 Dec 2025 13:50:51 +0000 (+0800)
Subject: KVM: x86: Don't read guest CR3 when doing async pf while the MMU is direct
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=57a7b47ab30f6201769988392dc8b1c0822a3369;p=thirdparty%2Flinux.git

KVM: x86: Don't read guest CR3 when doing async pf while the MMU is direct

Don't read guest CR3 in kvm_arch_setup_async_pf() if the MMU is direct;
use INVALID_GPA instead.

When KVM tries to perform the host-only async page fault for the shared
memory of TDX guests, the following WARNING is triggered:

  WARNING: CPU: 1 PID: 90922 at arch/x86/kvm/vmx/main.c:483 vt_cache_reg+0x16/0x20
  Call Trace:
   __kvm_mmu_faultin_pfn
   kvm_mmu_faultin_pfn
   kvm_tdp_page_fault
   kvm_mmu_do_page_fault
   kvm_mmu_page_fault
   tdx_handle_ept_violation

The WARNING is triggered by calling kvm_mmu_get_guest_pgd() to cache the
guest CR3 in kvm_arch_setup_async_pf() for later use in
kvm_arch_async_page_ready(), which determines whether the page fault can
be fixed in the current vCPU context to save one VM exit. However, when
guest state is protected, KVM cannot read the guest CR3.

Since protected guests aren't compatible with shadow paging, i.e. they
must use a direct MMU, avoid calling kvm_mmu_get_guest_pgd() to read
guest CR3 when the MMU is direct and use INVALID_GPA instead. Note that
for protected guests mmu->root_role.direct is always true, so the
kvm_mmu_get_guest_pgd() call in kvm_arch_async_page_ready() won't be
reached.
Reported-by: Farrah Chen
Suggested-by: Sean Christopherson
Signed-off-by: Xiaoyao Li
Link: https://patch.msgid.link/20251212135051.2155280-1-xiaoyao.li@intel.com
[sean: explicitly cast to "unsigned long" to make 32-bit builds happy]
Signed-off-by: Sean Christopherson
---

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f173245469005..3911ac9bddfd5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4521,7 +4521,10 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu,
 	arch.gfn = fault->gfn;
 	arch.error_code = fault->error_code;
 	arch.direct_map = vcpu->arch.mmu->root_role.direct;
-	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
+	if (arch.direct_map)
+		arch.cr3 = (unsigned long)INVALID_GPA;
+	else
+		arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
 
 	return kvm_setup_async_pf(vcpu, fault->addr,
 				  kvm_vcpu_gfn_to_hva(vcpu, fault->gfn), &arch);
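
The branch added by the patch can be modeled in isolation as below. This is
an illustrative sketch, not KVM code: INVALID_GPA is given a stand-in
definition (in KVM it is the all-ones gpa_t value), and async_pf_cr3() is a
hypothetical helper that mirrors the new direct-map check.

```c
#include <stdint.h>

/* Stand-in for KVM's INVALID_GPA (all-ones guest physical address). */
#define INVALID_GPA (~(uint64_t)0)

/*
 * Hypothetical helper mirroring the patched logic: with a direct MMU
 * (always the case for protected guests such as TDX), guest CR3 must
 * not be read, so record INVALID_GPA instead of the cached guest PGD.
 * The cast to unsigned long matches the patch's fix for 32-bit builds.
 */
static unsigned long async_pf_cr3(int direct_map, unsigned long guest_pgd)
{
	if (direct_map)
		return (unsigned long)INVALID_GPA;
	return guest_pgd;
}
```

With direct_map set, the returned value is INVALID_GPA regardless of the
guest PGD argument, so kvm_arch_async_page_ready() can never match it
against a real CR3 and falls back to letting the guest re-fault.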