KVM: x86: Avoid calling kvm_is_mmio_pfn() when kvm_x86_ops.get_mt_mask is NULL
author     Sean Christopherson <seanjc@google.com>
           Fri, 23 May 2025 01:17:52 +0000 (18:17 -0700)
committer  Sean Christopherson <seanjc@google.com>
           Tue, 24 Jun 2025 20:29:31 +0000 (13:29 -0700)
Guard the call to kvm_x86_call(get_mt_mask) with an explicit check on
kvm_x86_ops.get_mt_mask so as to avoid unnecessarily calling
kvm_is_mmio_pfn(), which is moderately expensive for some backing types.
E.g. lookup_memtype() conditionally takes a system-wide spinlock if KVM
ends up calling pat_pfn_immune_to_uc_mtrr(), e.g. for DAX memory.

While the call to kvm_x86_ops.get_mt_mask() itself is elided when the hook
is NULL, the compiler still needs to compute all of its parameters, as it
can't know at build time that the call will be squashed.

   <+243>:   call   0xffffffff812ad880 <kvm_is_mmio_pfn>
   <+248>:   mov    %r13,%rsi
   <+251>:   mov    %rbx,%rdi
   <+254>:   movzbl %al,%edx
   <+257>:   call   0xffffffff81c26af0 <__SCT__kvm_x86_get_mt_mask>

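For illustration, a minimal userspace C sketch of the same pattern (the
names below are hypothetical stand-ins, not kernel code): C evaluates
argument expressions before a call, so even a call that is patched out
still pays for an expensive argument, whereas an explicit NULL check on
the hook skips the argument computation entirely.

    #include <stdio.h>

    /* Stand-in for kvm_is_mmio_pfn(): pretend this is expensive. */
    static int expensive_lookup(unsigned long pfn)
    {
            printf("expensive lookup for pfn %lu\n", pfn);
            return 0;
    }

    struct ops {
            unsigned long (*get_mt_mask)(unsigned long gfn, int is_mmio);
    };

    static unsigned long make_spte(const struct ops *ops,
                                   unsigned long gfn, unsigned long pfn)
    {
            unsigned long spte = 0;

            /*
             * Guarded call: when ops->get_mt_mask is NULL, neither the
             * call nor the expensive_lookup() argument is evaluated.
             * Without the guard, the argument would be computed first,
             * even if the call itself could be elided.
             */
            if (ops->get_mt_mask)
                    spte |= ops->get_mt_mask(gfn, expensive_lookup(pfn));

            return spte;
    }

    int main(void)
    {
            struct ops null_ops = { .get_mt_mask = NULL };

            /* Prints only "spte = 0": the expensive lookup is skipped. */
            printf("spte = %lu\n", make_spte(&null_ops, 1, 2));
            return 0;
    }
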
Fixes: 3fee4837ef40 ("KVM: x86: remove shadow_memtype_mask")
Tested-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Link: https://lore.kernel.org/r/20250523011756.3243624-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kvm/mmu/spte.c

index cfce03d8f123405a3b8b14f2fdb5fea239626a87..f262c380f40efce9c559d6c9ee29838c9a7e3ecd 100644 (file)
@@ -209,7 +209,9 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
        if (level > PG_LEVEL_4K)
                spte |= PT_PAGE_SIZE_MASK;
 
-       spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
+       if (kvm_x86_ops.get_mt_mask)
+               spte |= kvm_x86_call(get_mt_mask)(vcpu, gfn, kvm_is_mmio_pfn(pfn));
+
        if (host_writable)
                spte |= shadow_host_writable_mask;
        else