git.ipfire.org Git - thirdparty/kernel/stable.git/commit
KVM: x86/mmu: Walk rmaps (shadow MMU) without holding mmu_lock when aging gfns
author	Sean Christopherson <seanjc@google.com>
	Tue, 4 Feb 2025 00:40:38 +0000 (00:40 +0000)
committer	Sean Christopherson <seanjc@google.com>
	Fri, 14 Feb 2025 15:17:47 +0000 (07:17 -0800)
commit	af3b6a9eba48419732b07a0472db7160282f0f39
tree	5c89ac6cb79eff7d9abd3c89d1b09e17d4f5bd7e
parent	bb6c7749ccee3452a4aff95dda2616e73fe14982
KVM: x86/mmu: Walk rmaps (shadow MMU) without holding mmu_lock when aging gfns

Convert the shadow MMU to use per-rmap locking instead of the per-VM
mmu_lock to protect rmaps when aging SPTEs.  When A/D bits are enabled, it
is safe to simply clear the Accessed bits, i.e. KVM just needs to ensure
the parent page table isn't freed.

The less obvious case is marking SPTEs for access tracking in the
non-A/D case (for EPT only).  Because aging a gfn means making the SPTE
not-present, KVM needs to play nice with the case where the CPU has TLB
entries for a SPTE that is not-present in memory.  For example, when doing
dirty tracking, if KVM encounters a non-present SPTE that is marked for
access tracking, KVM must know to do a TLB invalidation.

Fortunately, KVM already provides (and relies upon) the necessary
functionality.  E.g. KVM doesn't flush TLBs when aging pages (even in the
clear_flush_young() case), and when harvesting dirty bitmaps, KVM flushes
based on the dirty bitmaps, not on SPTEs.

Co-developed-by: James Houghton <jthoughton@google.com>
Signed-off-by: James Houghton <jthoughton@google.com>
Link: https://lore.kernel.org/r/20250204004038.1680123-12-jthoughton@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/mmu.c