author    Sean Christopherson <seanjc@google.com>
          Tue, 4 Feb 2025 00:40:37 +0000 (00:40 +0000)
committer Sean Christopherson <seanjc@google.com>
          Fri, 14 Feb 2025 15:17:44 +0000 (07:17 -0800)
commit    bb6c7749ccee3452a4aff95dda2616e73fe14982
tree      0c68c92d20a41d72f1e0eb0659ae7bef997ea560
parent    4834eaded91e5c90141540ccfb1af2bd40a4ac80
KVM: x86/mmu: Add support for lockless walks of rmap SPTEs

Add a lockless version of for_each_rmap_spte(), which is pretty much the
same as the normal version, except that it doesn't BUG() the host if a
non-present SPTE is encountered.  When mmu_lock is held, it should be
impossible for a different task to zap a SPTE, _and_ zapped SPTEs must
be removed from their rmap chain prior to dropping mmu_lock.  Thus, the
normal walker BUG()s if a non-present SPTE is encountered, as that means
something is wildly broken.

When walking rmaps without holding mmu_lock, the SPTEs pointed at by the
rmap chain can be zapped/dropped, and so a lockless walk can observe a
non-present SPTE if it runs concurrently with a different operation that
is zapping SPTEs.

Signed-off-by: James Houghton <jthoughton@google.com>
Link: https://lore.kernel.org/r/20250204004038.1680123-11-jthoughton@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/mmu.c