From: Sean Christopherson
Date: Thu, 10 Oct 2024 18:23:40 +0000 (-0700)
Subject: KVM: x86/mmu: Put direct prefetched pages via kvm_release_page_clean()
X-Git-Tag: v6.13-rc1~97^2~17^2~47
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=64d5cd99f78ec39edbc691bb332f34e6c22c32c9;p=thirdparty%2Fkernel%2Flinux.git

KVM: x86/mmu: Put direct prefetched pages via kvm_release_page_clean()

Use kvm_release_page_clean() to put prefetched pages instead of calling
put_page() directly.  This will allow de-duplicating the prefetch code
between indirect and direct MMUs.

Note, there's a small functional change as kvm_release_page_clean() marks
the page/folio as accessed.  While it's not strictly guaranteed that the
guest will access the page, KVM won't intercept guest accesses, i.e. won't
mark the page accessed if it _is_ accessed by the guest (unless A/D bits
are disabled, but running without A/D bits is effectively limited to
pre-HSW Intel CPUs).

Tested-by: Alex Bennée
Signed-off-by: Sean Christopherson
Tested-by: Dmitry Osipenko
Signed-off-by: Paolo Bonzini
Message-ID: <20241010182427.1434605-39-seanjc@google.com>
---

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 7dcd34628f49d..5e0c91f017f77 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2972,7 +2972,7 @@ static int direct_pte_prefetch_many(struct kvm_vcpu *vcpu,
 	for (i = 0; i < ret; i++, gfn++, start++) {
 		mmu_set_spte(vcpu, slot, start, access, gfn,
 			     page_to_pfn(pages[i]), NULL);
-		put_page(pages[i]);
+		kvm_release_page_clean(pages[i]);
 	}

 	return 0;