From: Sean Christopherson
Date: Tue, 10 Mar 2026 23:48:23 +0000 (-0700)
Subject: KVM: SEV: Assert that kvm->lock is held when querying SEV+ support
X-Git-Url: http://git.ipfire.org/index.cgi?a=commitdiff_plain;h=ba903f7382490776d2df2fca6bf5c8ef2eb4663f;p=thirdparty%2Flinux.git

KVM: SEV: Assert that kvm->lock is held when querying SEV+ support

Assert that kvm->lock is held when checking if a VM is an SEV+ VM, as
KVM sets *and* resets the relevant flags when initializing SEV state,
i.e. it's extremely easy to end up with TOCTOU bugs if kvm->lock isn't
held.  Add waivers for a VM being torn down (refcount is '0') and for
there being a loaded vCPU, with comments for both explaining why they're
safe.

Note, the "vCPU loaded" waiver is necessary to avoid splats on the SNP
checks in sev_gmem_prepare() and sev_gmem_max_mapping_level(), which are
currently called when handling nested page faults.  Alternatively, those
checks could key off KVM_X86_SNP_VM, as kvm_arch.vm_type is stable early
in VM creation.  Prioritize consistency, at least for now, and leave a
"reminder" that the max mapping level code in particular likely needs
special attention if/when KVM supports dirty logging for SNP guests.

Link: https://patch.msgid.link/20260310234829.2608037-16-seanjc@google.com
Signed-off-by: Sean Christopherson
---

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index ed8bb60341ae3..57f3ec36b62a0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -107,17 +107,42 @@ static unsigned int nr_asids;
 static unsigned long *sev_asid_bitmap;
 static unsigned long *sev_reclaim_asid_bitmap;
 
+static __always_inline void kvm_lockdep_assert_sev_lock_held(struct kvm *kvm)
+{
+#ifdef CONFIG_PROVE_LOCKING
+	/*
+	 * Querying SEV+ support is safe if there are no other references, i.e.
+	 * if concurrent initialization of SEV+ is impossible.
+	 */
+	if (!refcount_read(&kvm->users_count))
+		return;
+
+	/*
+	 * Querying SEV+ support from vCPU context is always safe, as vCPUs can
+	 * only be created after SEV+ is initialized (and KVM disallows all SEV
+	 * sub-ioctls while vCPU creation is in-progress).
+	 */
+	if (kvm_get_running_vcpu())
+		return;
+
+	lockdep_assert_held(&kvm->lock);
+#endif
+}
+
 static bool sev_guest(struct kvm *kvm)
 {
+	kvm_lockdep_assert_sev_lock_held(kvm);
 	return ____sev_guest(kvm);
 }
 
 static bool sev_es_guest(struct kvm *kvm)
 {
+	kvm_lockdep_assert_sev_lock_held(kvm);
 	return ____sev_es_guest(kvm);
 }
 
 static bool sev_snp_guest(struct kvm *kvm)
 {
+	kvm_lockdep_assert_sev_lock_held(kvm);
 	return ____sev_snp_guest(kvm);
 }