KVM: Assert that slots_lock is held when resetting per-vCPU dirty rings
Author:     Sean Christopherson <seanjc@google.com>
AuthorDate: Fri, 16 May 2025 21:35:40 +0000 (14:35 -0700)
Commit:     Sean Christopherson <seanjc@google.com>
CommitDate: Fri, 20 Jun 2025 20:41:04 +0000 (13:41 -0700)
Assert that slots_lock is held in kvm_dirty_ring_reset() and add a comment
to explain _why_ slots_lock needs to be held for the duration of the reset.

Link: https://lore.kernel.org/all/aCSns6Q5oTkdXUEe@google.com
Suggested-by: James Houghton <jthoughton@google.com>
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20250516213540.2546077-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
virt/kvm/dirty_ring.c

index 4caa63e610d261d8bdd5d76a415e5eb894716925..02bc6b00d76cbd247329a7a07927294279e9ce09 100644
@@ -122,6 +122,14 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
        unsigned long mask = 0;
        struct kvm_dirty_gfn *entry;
 
+       /*
+        * Ensure concurrent calls to KVM_RESET_DIRTY_RINGS are serialized,
+        * e.g. so that KVM fully resets all entries processed by a given call
+        * before returning to userspace.  Holding slots_lock also protects
+        * the various memslot accesses.
+        */
+       lockdep_assert_held(&kvm->slots_lock);
+
        while (likely((*nr_entries_reset) < INT_MAX)) {
                if (signal_pending(current))
                        return -EINTR;
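
For context, the assertion documents an existing requirement on the caller: the
KVM_RESET_DIRTY_RINGS ioctl path is expected to take slots_lock before walking the
vCPUs' dirty rings. The sketch below illustrates that calling pattern under the
signature visible in the hunk above; the function name example_reset_dirty_rings
and the exact error handling are illustrative assumptions, not the code in
virt/kvm/kvm_main.c.

/*
 * Illustrative caller, modelled on the KVM_RESET_DIRTY_RINGS ioctl path:
 * slots_lock is held around the whole walk so that concurrent resets are
 * serialized and the lockdep_assert_held() added above is satisfied.
 */
static int example_reset_dirty_rings(struct kvm *kvm)
{
	int r = 0, nr_entries_reset = 0;
	unsigned long i;
	struct kvm_vcpu *vcpu;

	mutex_lock(&kvm->slots_lock);

	kvm_for_each_vcpu(i, vcpu, kvm) {
		r = kvm_dirty_ring_reset(kvm, &vcpu->dirty_ring,
					 &nr_entries_reset);
		if (r)
			break;
	}

	mutex_unlock(&kvm->slots_lock);

	return r ? r : nr_entries_reset;
}

With lockdep enabled, a path that reaches kvm_dirty_ring_reset() without holding
slots_lock now triggers a lockdep warning instead of silently racing with other
resets or with memslot updates.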