From: Sean Christopherson
Date: Wed, 8 Jan 2020 00:12:10 +0000 (-0800)
Subject: KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM
X-Git-Tag: v3.16.84~39
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=94c3d6738673cf797e0aeda359c90101f2ad657f;p=thirdparty%2Fkernel%2Fstable.git

KVM: x86/mmu: Apply max PA check for MMIO sptes to 32-bit KVM

commit e30a7d623dccdb3f880fbcad980b0cb589a1da45 upstream.

Remove the bogus 64-bit only condition from the check that disables MMIO
spte optimization when the system supports the max PA, i.e. doesn't have
any reserved PA bits.  32-bit KVM always uses PAE paging for the shadow
MMU, and per Intel's SDM:

  PAE paging translates 32-bit linear addresses to 52-bit physical
  addresses.

The kernel's restrictions on max physical addresses are limits on how
much memory the kernel can reasonably use, not what physical addresses
are supported by hardware.

Fixes: ce88decffd17 ("KVM: MMU: mmio page fault support")
Signed-off-by: Sean Christopherson
Signed-off-by: Paolo Bonzini
[bwh: Backported to 3.16: adjust filename, context]
Signed-off-by: Ben Hutchings
---

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index d516e0a4584a0..c73ff06cbd716 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5734,7 +5734,7 @@ static void kvm_set_mmio_spte_mask(void)
 	 * If reserved bit is not supported, clear the present bit to disable
 	 * mmio page fault.
 	 */
-	if (IS_ENABLED(CONFIG_X86_64) && maxphyaddr == 52)
+	if (maxphyaddr == 52)
 		mask &= ~1ull;
 
 	kvm_mmu_set_mmio_spte_mask(mask);