From: Sean Christopherson
Date: Thu, 10 Oct 2024 18:23:52 +0000 (-0700)
Subject: KVM: VMX: Hold mmu_lock until page is released when updating APIC access page
X-Git-Tag: v6.13-rc1~97^2~17^2~35
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=cb444acb697943c0aaeab085c43e07727cb0b85c;p=thirdparty%2Fkernel%2Flinux.git

KVM: VMX: Hold mmu_lock until page is released when updating APIC access page

Hold mmu_lock across kvm_release_pfn_clean() when refreshing the APIC
access page address to ensure that KVM doesn't mark a page/folio as
accessed after it has been unmapped.  Practically speaking, marking a
folio accessed is benign in this scenario, as KVM does hold a reference
(it's really just marking folios dirty that is problematic), but there's
no reason not to be paranoid (moving the APIC access page isn't a hot
path), and no reason to be different from other mmu_notifier-protected
flows in KVM.

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
Tested-by: Dmitry Osipenko
Signed-off-by: Paolo Bonzini
Message-ID: <20241010182427.1434605-51-seanjc@google.com>
---

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 81ed596e4454c..bee90089c8a7b 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6832,25 +6832,22 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
 		return;
 
 	read_lock(&vcpu->kvm->mmu_lock);
-	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
+	if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn))
 		kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
-		read_unlock(&vcpu->kvm->mmu_lock);
-		goto out;
-	}
-
-	vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));
-	read_unlock(&vcpu->kvm->mmu_lock);
+	else
+		vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));
 
-	/*
-	 * No need for a manual TLB flush at this point, KVM has already done a
-	 * flush if there were SPTEs pointing at the previous page.
-	 */
-out:
 	/*
 	 * Do not pin apic access page in memory, the MMU notifier
 	 * will call us again if it is migrated or swapped out.
 	 */
 	kvm_release_pfn_clean(pfn);
+
+	/*
+	 * No need for a manual TLB flush at this point, KVM has already done a
+	 * flush if there were SPTEs pointing at the previous page.
+	 */
+	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
 void vmx_hwapic_isr_update(int max_isr)
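
For context, below is a sketch of how vmx_set_apic_access_page_addr()
flows once this patch is applied.  Everything above the read_lock() is
outside the hunk and is reconstructed from KVM's standard mmu_notifier
retry pattern, so the memslot/pfn lookup helpers (id_to_memslot(),
gfn_to_pfn_memslot()) are assumptions rather than verbatim source, and
the guest-mode deferral and APICv early-outs are omitted for brevity:

	/*
	 * Sketch (assumed setup, not verbatim) of the post-patch flow.
	 */
	void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
	{
		const gfn_t gfn = APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT;
		struct kvm *kvm = vcpu->kvm;
		struct kvm_memory_slot *slot;
		unsigned long mmu_seq;
		kvm_pfn_t pfn;

		/* Assumed: resolve KVM's internal APIC access page memslot. */
		slot = id_to_memslot(kvm_memslots(kvm),
				     APIC_ACCESS_PAGE_PRIVATE_MEMSLOT);
		if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
			return;

		/*
		 * Assumed: snapshot the invalidation sequence count *before*
		 * resolving the pfn so that a racing mmu_notifier event is
		 * caught by mmu_invalidate_retry_gfn() below.
		 */
		mmu_seq = kvm->mmu_invalidate_seq;
		smp_rmb();

		pfn = gfn_to_pfn_memslot(slot, gfn);
		if (is_error_noslot_pfn(pfn))
			return;

		read_lock(&kvm->mmu_lock);

		/* Ask for a reload (i.e. a retry) if an invalidation raced. */
		if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn))
			kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu);
		else
			vmcs_write64(APIC_ACCESS_ADDR, pfn_to_hpa(pfn));

		/*
		 * The point of the patch: release the page *before* dropping
		 * mmu_lock, so the folio can't be marked accessed after an
		 * mmu_notifier has already unmapped it.
		 */
		kvm_release_pfn_clean(pfn);

		read_unlock(&kvm->mmu_lock);
	}

The design choice mirrors KVM's other mmu_notifier-protected paths: any
access to the page, including the release that marks it accessed, stays
inside the mmu_lock critical section that validates mmu_seq.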