From: Vitaly Kuznetsov
Date: Mon, 28 Jun 2021 10:44:20 +0000 (+0200)
Subject: KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA
X-Git-Tag: v5.12.19~283
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=f98191691c325380ec6d6fb6ea201366c02c44fa;p=thirdparty%2Fkernel%2Fstable.git

KVM: nSVM: Check the value written to MSR_VM_HSAVE_PA

commit fce7e152ffc8f89d02a80617b16c7aa1527847c8 upstream.

APM states that #GP is raised upon write to MSR_VM_HSAVE_PA when
the supplied address is not page-aligned or is outside of "maximum
supported physical address for this implementation".
page_address_valid() check seems suitable. Also, forcefully page-align
the address when it's written from VMM.

Signed-off-by: Vitaly Kuznetsov
Message-Id: <20210628104425.391276-2-vkuznets@redhat.com>
Cc: stable@vger.kernel.org
Reviewed-by: Maxim Levitsky
[Add comment about behavior for host-provided values. - Paolo]
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 30569bbbca9ac..f262edf7551b3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2982,7 +2982,16 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			svm_disable_lbrv(vcpu);
 		break;
 	case MSR_VM_HSAVE_PA:
-		svm->nested.hsave_msr = data;
+		/*
+		 * Old kernels did not validate the value written to
+		 * MSR_VM_HSAVE_PA.  Allow KVM_SET_MSR to set an invalid
+		 * value to allow live migrating buggy or malicious guests
+		 * originating from those kernels.
+		 */
+		if (!msr->host_initiated && !page_address_valid(vcpu, data))
+			return 1;
+
+		svm->nested.hsave_msr = data & PAGE_MASK;
 		break;
 	case MSR_VM_CR:
 		return svm_set_vm_cr(vcpu, data);
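
Editor's note: as a minimal, hypothetical userspace sketch of the semantics the patch relies on, the snippet below mimics what a page_address_valid()-style check does: reject a guest physical address that is not page aligned or does not fit within the guest's reported physical address width. The real helper lives in KVM (arch/x86/kvm/x86.h) and derives the address width from guest CPUID; the fixed GUEST_MAXPHYADDR and the sample addresses here are stand-ins, not KVM code.

/*
 * Illustrative sketch only, not kernel code.  Assumes a 4 KiB page size
 * and a fixed 48-bit guest physical address width.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1ULL << PAGE_SHIFT)
#define PAGE_MASK		(~(PAGE_SIZE - 1))

/* Assumed guest physical address width; KVM reads this from guest CPUID. */
#define GUEST_MAXPHYADDR	48

static bool gpa_is_valid(uint64_t gpa)
{
	/* Must be page aligned and representable in GUEST_MAXPHYADDR bits. */
	return !(gpa & ~PAGE_MASK) && !(gpa >> GUEST_MAXPHYADDR);
}

int main(void)
{
	const uint64_t writes[] = {
		0x0000000012345000ULL,	/* aligned, in range -> accepted        */
		0x0000000012345678ULL,	/* not page aligned  -> rejected (#GP)  */
		0x0001000000000000ULL,	/* above MAXPHYADDR  -> rejected (#GP)  */
	};

	for (unsigned int i = 0; i < 3; i++)
		printf("%#018llx -> %s\n",
		       (unsigned long long)writes[i],
		       gpa_is_valid(writes[i]) ? "accept" : "#GP");
	return 0;
}

The patch applies this check only for guest-initiated writes (!msr->host_initiated), so KVM_SET_MSR from the VMM can still restore a value an older, non-validating kernel let through, while the stored value is page-aligned with "data & PAGE_MASK" in either case.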