From: Will Deacon
Date: Mon, 30 Mar 2026 14:48:04 +0000 (+0100)
Subject: KVM: arm64: Move handle check into pkvm_pgtable_stage2_destroy_range()
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9f02deef471e1b6637a6641ce6bf9e2a1dd7d2c1;p=thirdparty%2Fkernel%2Fstable.git

KVM: arm64: Move handle check into pkvm_pgtable_stage2_destroy_range()

When pKVM is enabled, a VM has a 'handle' allocated by the hypervisor in
kvm_arch_init_vm() and released later by kvm_arch_destroy_vm().
Consequently, the only time __pkvm_pgtable_stage2_unmap() can run into
an uninitialised 'handle' is on the kvm_arch_init_vm() failure path,
where we destroy the empty stage-2 page-table if we fail to allocate a
handle.

Move the handle check into pkvm_pgtable_stage2_destroy_range(), which
will additionally handle protected VMs in subsequent patches.

Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
Tested-by: Mostafa Saleh
Signed-off-by: Will Deacon
Link: https://patch.msgid.link/20260330144841.26181-4-will@kernel.org
Signed-off-by: Marc Zyngier
---

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index d7a0f69a9982..7797813f4dbe 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -329,9 +329,6 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 	struct pkvm_mapping *mapping;
 	int ret;
 
-	if (!handle)
-		return 0;
-
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
 		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle,
 					mapping->gfn, mapping->nr_pages);
@@ -347,6 +344,12 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 
 void pkvm_pgtable_stage2_destroy_range(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
+	pkvm_handle_t handle = kvm->arch.pkvm.handle;
+
+	if (!handle)
+		return;
+
 	__pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }