From: Sean Christopherson
Date: Thu, 10 Oct 2024 18:23:55 +0000 (-0700)
Subject: KVM: PPC: e500: Mark "struct page" pfn accessed before dropping mmu_lock
X-Git-Tag: v6.13-rc1~97^2~17^2~32
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=84cf78dcd9d65c45ab73998d4ad50f433d53fb93;p=thirdparty%2Fkernel%2Flinux.git

KVM: PPC: e500: Mark "struct page" pfn accessed before dropping mmu_lock

Mark pages accessed before dropping mmu_lock when faulting in guest
memory so that shadow_map() can convert to kvm_release_faultin_page()
without tripping its lockdep assertion on mmu_lock being held.

Marking pages accessed outside of mmu_lock is ok (not great, but safe),
but marking pages _dirty_ outside of mmu_lock can make filesystems
unhappy.

Signed-off-by: Sean Christopherson
Tested-by: Dmitry Osipenko
Signed-off-by: Paolo Bonzini
Message-ID: <20241010182427.1434605-54-seanjc@google.com>
---

diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 5c2adfd19e123..334dd96f80814 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -498,11 +498,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	kvmppc_mmu_flush_icache(pfn);
 
 out:
-	spin_unlock(&kvm->mmu_lock);
-
 	/* Drop refcount on page, so that mmu notifiers can clear it */
 	kvm_release_pfn_clean(pfn);
-
+	spin_unlock(&kvm->mmu_lock);
 	return ret;
 }