From: Paolo Bonzini
Date: Mon, 13 Apr 2026 11:19:36 +0000 (+0200)
Subject: Merge tag 'kvm-x86-svm-7.1' of https://github.com/kvm-x86/linux into HEAD
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=92cdeac6a417391349481933aa32e3216a1cc217;p=thirdparty%2Fkernel%2Flinux.git

Merge tag 'kvm-x86-svm-7.1' of https://github.com/kvm-x86/linux into HEAD

KVM SVM changes for 7.1

 - Fix and optimize IRQ window inhibit handling for AVIC (the tracking
   needs to be per-vCPU, e.g. so that KVM doesn't prematurely re-enable
   AVIC if multiple vCPUs have to-be-injected IRQs).

 - Fix an undefined behavior warning where a crafty userspace can read the
   "avic" module param before it's fully initialized.

 - Fix a (likely benign) bug in the "OS-visible workarounds" handling,
   where KVM could clobber state when enabling virtualization on multiple
   CPUs in parallel, and clean up and optimize the code.

 - Drop a WARN in KVM_MEMORY_ENCRYPT_REG_REGION where KVM complains about
   a "too large" size based purely on user input, and clean up and harden
   the related pinning code.

 - Disallow synchronizing a VMSA of an already-launched/encrypted vCPU, as
   doing so for an SNP guest will trigger an RMP violation #PF and crash
   the host.

 - Protect all of sev_mem_enc_register_region() with kvm->lock to ensure
   sev_guest() is stable for the entirety of the function.

 - Lock all vCPUs when synchronizing VMSAs for SNP guests to ensure the
   VMSA page isn't actively being used.

 - Overhaul KVM's APIs for detecting SEV+ guests so that VM-scoped queries
   are required to hold kvm->lock (KVM has had multiple bugs due to
   "is SEV?" checks becoming stale), enforced by lockdep.  Add and use
   vCPU-scoped APIs when possible/appropriate, as all checks that
   originate from a vCPU are guaranteed to be stable.

 - Convert a pile of kvm->lock SEV code to guard().
---

92cdeac6a417391349481933aa32e3216a1cc217
diff --cc arch/x86/kvm/svm/avic.c
index 2885c5993ebcb,7056c4891f93e..adf211860949a
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@@ -226,9 -237,6 +237,9 @@@ static void avic_deactivate_vmcb(struc
  	vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
  	vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
  
- 	if (!sev_es_guest(svm->vcpu.kvm))
++	if (!is_sev_es_guest(&svm->vcpu))
 +		svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
 +
  	/*
  	 * If running nested and the guest uses its own MSR bitmap, there
  	 * is no need to update L0's msr bitmap
diff --cc arch/x86/kvm/svm/svm.c
index 9a9e081e9554f,20d36f78104c9..e7fdd7a9c280d
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@@ -256,11 -242,9 +257,11 @@@ int svm_set_efer(struct kvm_vcpu *vcpu
  		 * Never intercept #GP for SEV guests, KVM can't
  		 * decrypt guest memory to workaround the erratum.
  		 */
- 		if (svm_gp_erratum_intercept && !sev_guest(vcpu->kvm))
+ 		if (svm_gp_erratum_intercept && !is_sev_guest(vcpu))
  			set_exception_intercept(svm, GP_VECTOR);
  	}
 +
 +	kvm_make_request(KVM_REQ_RECALC_INTERCEPTS, vcpu);
  	}
  
  	svm->vmcb->save.efer = efer | EFER_SVME;
@@@ -851,8 -881,8 +868,8 @@@ void svm_enable_lbrv(struct kvm_vcpu *v
  
  static void __svm_disable_lbrv(struct kvm_vcpu *vcpu)
  {
- 	KVM_BUG_ON(sev_es_guest(vcpu->kvm), vcpu->kvm);
+ 	KVM_BUG_ON(is_sev_es_guest(vcpu), vcpu->kvm);
- 	to_svm(vcpu)->vmcb->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
+ 	to_svm(vcpu)->vmcb->control.misc_ctl2 &= ~SVM_MISC2_ENABLE_V_LBR;
  }
  
  void svm_update_lbrv(struct kvm_vcpu *vcpu)