KVM: x86/mmu: Further check old SPTE is leaf for spurious prefetch fault
author     Yan Zhao <yan.y.zhao@intel.com>
           Tue, 18 Mar 2025 01:31:11 +0000 (09:31 +0800)
committer  Sean Christopherson <seanjc@google.com>
           Mon, 28 Apr 2025 18:03:06 +0000 (11:03 -0700)
Instead of simply treating a prefetch fault as spurious when there's a
shadow-present old SPTE, further check whether the old SPTE is a leaf to
determine if the prefetch fault is spurious.

It's not reasonable to treat a prefetch fault as spurious when there's a
shadow-present non-leaf SPTE without a corresponding shadow-present leaf
SPTE. For example, in the following sequence, a prefetch fault should not
be considered spurious:
1. add a memslot with size 4K
2. prefault GPA A in the memslot
3. delete the memslot (zap all disabled)
4. re-add the memslot with size 2M
5. prefault GPA A again
In step 5, the prefetch fault attempts to install a 2M huge entry.
Since step 3 zaps the leaf SPTE for GPA A while keeping the non-leaf SPTE,
the leaf entry will remain empty after step 5 if the prefetch fault is
regarded as spurious due to a shadow-present non-leaf SPTE.
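
As a minimal standalone sketch (user-space C, not kernel code), the decision
this patch changes can be modeled as below. The SPTE bit layout and the helper
bodies are simplified stand-ins chosen purely for illustration; only the names
is_shadow_present_pte() and is_last_spte() mirror the real helpers used in the
hunks that follow.

/* Standalone sketch: simplified model of the spurious-prefetch decision. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit layout, not the real x86 SPTE encoding. */
#define SPTE_PRESENT	(1ull << 0)	/* stand-in for the shadow-present bit */
#define SPTE_LARGE	(1ull << 1)	/* stand-in for the large-page bit */

#define PG_LEVEL_4K	1
#define PG_LEVEL_2M	2

/* Simplified stand-in for the kernel's is_shadow_present_pte(). */
static bool is_shadow_present_pte(uint64_t pte)
{
	return pte & SPTE_PRESENT;
}

/*
 * Simplified stand-in for the kernel's is_last_spte(): an entry is a leaf
 * if it sits at the 4K level or maps a large page.
 */
static bool is_last_spte(uint64_t pte, int level)
{
	return level == PG_LEVEL_4K || (pte & SPTE_LARGE);
}

/* Old logic: any shadow-present old SPTE made a prefetch fault spurious. */
static bool spurious_before_fix(uint64_t old_spte, int level, bool prefetch)
{
	(void)level;	/* the old check ignored the faulting level */
	return prefetch && is_shadow_present_pte(old_spte);
}

/* New logic: the old SPTE must also be a leaf at the faulting level. */
static bool spurious_after_fix(uint64_t old_spte, int level, bool prefetch)
{
	return prefetch && is_shadow_present_pte(old_spte) &&
	       is_last_spte(old_spte, level);
}

int main(void)
{
	/*
	 * Step 5 above: the fault wants a 2M leaf, but the old SPTE at that
	 * level is a shadow-present *non-leaf* entry (present, not large).
	 */
	uint64_t old_spte = SPTE_PRESENT;

	printf("before fix: spurious=%d (2M leaf is never installed)\n",
	       spurious_before_fix(old_spte, PG_LEVEL_2M, true));
	printf("after fix:  spurious=%d (prefetch proceeds to install it)\n",
	       spurious_after_fix(old_spte, PG_LEVEL_2M, true));
	return 0;
}

With a shadow-present non-leaf old SPTE, as in step 5 above, the old check
reports the fault as spurious, while the new check lets the prefetch proceed
and install the 2M leaf entry.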

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20250318013111.5648-1-yan.y.zhao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/mmu.c
arch/x86/kvm/mmu/tdp_mmu.c

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a284dce227a08ecee060ada56997b156f4cfaac3..b50d9e715806aed3cd65886ec2314da609c78e8f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3020,7 +3020,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
        }
 
        if (is_shadow_present_pte(*sptep)) {
-               if (prefetch)
+               if (prefetch && is_last_spte(*sptep, level))
                        return RET_PF_SPURIOUS;
 
                /*
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 405874f4d08803b1de724c2d34a16e429ad33a43..f534022572178cc292832ee1d22f86d878f1fb9c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1153,7 +1153,8 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
        if (WARN_ON_ONCE(sp->role.level != fault->goal_level))
                return RET_PF_RETRY;
 
-       if (fault->prefetch && is_shadow_present_pte(iter->old_spte))
+       if (fault->prefetch && is_shadow_present_pte(iter->old_spte) &&
+           is_last_spte(iter->old_spte, iter->level))
                return RET_PF_SPURIOUS;
 
        if (is_shadow_present_pte(iter->old_spte) &&