KVM: x86/mmu: Voluntarily reschedule as needed when zapping MMIO sptes
author Sean Christopherson <sean.j.christopherson@intel.com>
Tue, 5 Feb 2019 21:01:24 +0000 (13:01 -0800)
committer Paolo Bonzini <pbonzini@redhat.com>
Wed, 20 Feb 2019 21:48:40 +0000 (22:48 +0100)
Call cond_resched_lock() when zapping MMIO to reschedule if needed or to
release and reacquire mmu_lock in case of contention.  There is no need
to flush or zap when temporarily dropping mmu_lock as zapping MMIO sptes
is done when holding the memslots lock and with the "update in-progress"
bit set in the memslots generation, which disables MMIO spte caching.
The walk does need to be restarted if mmu_lock is dropped as the active
pages list may be modified.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
arch/x86/kvm/mmu.c

index d80c1558b23cb90a015ba3466a40a4f926d647ae..2190679eda398720e736a8f92645af3284fa8c83 100644 (file)
@@ -5954,7 +5954,8 @@ restart:
        list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
                if (!sp->mmio_cached)
                        continue;
-               if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+               if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+                   cond_resched_lock(&kvm->mmu_lock))
                        goto restart;
        }