From: Sean Christopherson
Date: Wed, 23 Sep 2020 18:37:28 +0000 (-0700)
Subject: KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages
X-Git-Tag: v5.8.17~561
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=f7b5e3c6ab6ebdde1854f77ea531206f372b60a2;p=thirdparty%2Fkernel%2Fstable.git

KVM: x86/mmu: Commit zap of remaining invalid pages when recovering lpages

commit e89505698c9f70125651060547da4ff5046124fc upstream.

Call kvm_mmu_commit_zap_page() after exiting the "prepare zap" loop in
kvm_recover_nx_lpages() to finish zapping pages in the unlikely event
that the loop exited due to lpage_disallowed_mmu_pages being empty.
Because the recovery thread drops mmu_lock() when rescheduling, it's
possible that lpage_disallowed_mmu_pages could be emptied by a different
thread without to_zap reaching zero despite to_zap being derived from
the number of disallowed lpages.

Fixes: 1aa9b9572b105 ("kvm: x86: mmu: Recovery of shattered NX large pages")
Cc: Junaid Shahid
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson
Message-Id: <20200923183735.584-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1e6724c30cc05..57cd70801216f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6341,6 +6341,7 @@ static void kvm_recover_nx_lpages(struct kvm *kvm)
 			cond_resched_lock(&kvm->mmu_lock);
 		}
 	}
+	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 	spin_unlock(&kvm->mmu_lock);
 	srcu_read_unlock(&kvm->srcu, rcu_idx);
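
To illustrate why the trailing commit matters, below is a minimal userspace
sketch of the prepare/commit pattern the fix relies on. It is not kernel
code: struct page, prepare_zap() and commit_zap() are invented stand-ins
for the KVM structures and helpers, and the list running dry before to_zap
reaches zero stands in for lpage_disallowed_mmu_pages being emptied by
another thread while mmu_lock is dropped.

	/*
	 * Hypothetical model of the prepare/commit zap pattern: "prepare"
	 * moves entries onto a local invalid list, "commit" actually frees
	 * them.  All names here are invented for illustration.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	struct page {
		int id;
		struct page *next;
	};

	/* The "prepare" step: move the head of *src onto *invalid_list. */
	static void prepare_zap(struct page **src, struct page **invalid_list)
	{
		struct page *p = *src;

		*src = p->next;
		p->next = *invalid_list;
		*invalid_list = p;
	}

	/* The "commit" step: free everything that was prepared. */
	static void commit_zap(struct page **invalid_list)
	{
		while (*invalid_list) {
			struct page *p = *invalid_list;

			*invalid_list = p->next;
			printf("freeing page %d\n", p->id);
			free(p);
		}
	}

	int main(void)
	{
		struct page *disallowed = NULL, *invalid_list = NULL;
		unsigned long to_zap = 10;
		int i;

		/* Build a small "disallowed" list to recover from. */
		for (i = 0; i < 3; i++) {
			struct page *p = malloc(sizeof(*p));

			p->id = i;
			p->next = disallowed;
			disallowed = p;
		}

		/*
		 * The list empties before to_zap reaches zero, so the loop
		 * exits with prepared entries still on invalid_list.
		 */
		while (to_zap-- && disallowed)
			prepare_zap(&disallowed, &invalid_list);

		/*
		 * The fix: one unconditional commit after the loop, so no
		 * prepared entry is left behind regardless of why the loop
		 * terminated.
		 */
		commit_zap(&invalid_list);
		return 0;
	}

Without that final commit_zap() after the loop, anything prepared in the
last partial batch would never be freed; placing the commit unconditionally
after the loop makes the exit path correct no matter which condition ended
the loop, which mirrors what the one-line patch above does in
kvm_recover_nx_lpages().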