From: David Matlack
Date: Fri, 23 Aug 2024 23:56:48 +0000 (-0700)
Subject: KVM: x86/mmu: WARN if huge page recovery triggered during dirty logging
X-Git-Tag: v6.13-rc1~97^2~13^2~2
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=06c4cd957b5cfc8ce995474d3dc935cf89bcf454;p=thirdparty%2Flinux.git

KVM: x86/mmu: WARN if huge page recovery triggered during dirty logging

WARN and bail out of recover_huge_pages_range() if dirty logging is
enabled. KVM shouldn't be recovering huge pages during dirty logging
anyway, since KVM needs to track writes at 4KiB granularity. However,
it's not outside the realm of possibility that this changes in the
future.

If KVM ever wants to recover huge pages during dirty logging,
make_huge_spte() must be updated to write-protect the new huge page
mapping. Otherwise, writes through the newly recovered huge page
mapping will not be tracked.

Note that this potential risk did not exist back when KVM zapped to
recover huge page mappings, since subsequent accesses would just be
faulted in at PG_LEVEL_4K if dirty logging was enabled.

Signed-off-by: David Matlack
Link: https://lore.kernel.org/r/20240823235648.3236880-7-dmatlack@google.com
Signed-off-by: Sean Christopherson
---

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 233570e2cd35d..4508d868f1cdc 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1590,6 +1590,9 @@ static void recover_huge_pages_range(struct kvm *kvm,
 	u64 huge_spte;
 	int r;
 
+	if (WARN_ON_ONCE(kvm_slot_dirty_track_enabled(slot)))
+		return;
+
 	rcu_read_lock();
 
 	for_each_tdp_pte_min_level(iter, root, PG_LEVEL_2M, start, end) {