KVM: x86/mmu: WARN if huge page recovery triggered during dirty logging
author	David Matlack <dmatlack@google.com>
Fri, 23 Aug 2024 23:56:48 +0000 (16:56 -0700)
committer	Sean Christopherson <seanjc@google.com>
Tue, 5 Nov 2024 02:37:23 +0000 (18:37 -0800)
WARN and bail out of recover_huge_pages_range() if dirty logging is
enabled. KVM shouldn't be recovering huge pages during dirty logging
anyway, since KVM needs to track writes at 4KiB granularity. However,
it's not outside the realm of possibility that this changes in the
future.

If KVM wants to recover huge pages during dirty logging, make_huge_spte()
must be updated to write-protect the new huge page mapping. Otherwise,
writes through the newly recovered huge page mapping will not be tracked.
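As a purely illustrative sketch (not part of this patch, and the helper
name is made up), write-protecting the recovered mapping would amount to
clearing the hardware-writable bit on the new huge SPTE while leaving
KVM's MMU-writable tracking alone, so the first write through the huge
mapping takes a write-protect fault that KVM can log:

  /*
   * Hypothetical helper, for illustration only: make a freshly built
   * huge SPTE trigger write-protect faults so dirty logging still sees
   * every write through the recovered mapping.
   */
  static u64 huge_spte_wrprot_for_dirty_log(u64 huge_spte)
  {
          /*
           * Clear hardware write access; KVM's fast page fault path can
           * restore it and mark the page dirty on the first write.
           */
          return huge_spte & ~PT_WRITABLE_MASK;
  }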

Note that this potential risk did not exist back when KVM zapped to
recover huge page mappings, since subsequent accesses would just be
faulted in at PG_LEVEL_4K if dirty logging was enabled.

Signed-off-by: David Matlack <dmatlack@google.com>
Link: https://lore.kernel.org/r/20240823235648.3236880-7-dmatlack@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/tdp_mmu.c

index 233570e2cd35d00e3bac5cba3997096818d9e088..4508d868f1cdcf54c2cd0db2a24318d8480b5110 100644 (file)
@@ -1590,6 +1590,9 @@ static void recover_huge_pages_range(struct kvm *kvm,
        u64 huge_spte;
        int r;
 
+       if (WARN_ON_ONCE(kvm_slot_dirty_track_enabled(slot)))
+               return;
+
        rcu_read_lock();
 
        for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {