author    Li RongQing <lirongqing@baidu.com>
          Tue, 22 Jul 2025 11:00:05 +0000 (19:00 +0800)
committer Sean Christopherson <seanjc@google.com>
          Thu, 11 Sep 2025 15:58:37 +0000 (08:58 -0700)
commit    960550503965094b0babd7e8c83ec66c8a763b0b
tree      547cb9a8ee76730e40cbb9f1bd77d81edb6f4384
parent    657bf7048d77c1db6baf0841dd1a65c60d7fc4c7
x86/kvm: Prefer native qspinlock for dedicated vCPUs irrespective of PV_UNHALT

Commit b2798ba0b876 ("KVM: X86: Choose qspinlock when dedicated
physical CPUs are available") states that when PV_DEDICATED=1 (each
vCPU has a dedicated pCPU), qspinlock should be preferred regardless of
PV_UNHALT.  However, the current implementation doesn't reflect this:
when PV_UNHALT=0, the guest still uses virt_spin_lock() even with
dedicated pCPUs.

This is suboptimal because:
1. Native qspinlocks should outperform virt_spin_lock() for dedicated
   vCPUs irrespective of HALT exiting
2. virt_spin_lock() should only be preferred when vCPUs may be preempted
   (non-dedicated case)

So reorder the PV spinlock checks to:
1. First handle the dedicated pCPU case (disable virt_spin_lock_key)
2. Then check for a single CPU and the nopvspin configuration
3. Only then check for PV_UNHALT support

This ensures the guest always uses native qspinlock for dedicated vCPUs,
delivering significant performance gains at high contention levels.

Signed-off-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Tested-by: Wangyang Guo <wangyang.guo@intel.com>
Link: https://lore.kernel.org/r/20250722110005.4988-1-lirongqing@baidu.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
arch/x86/kernel/kvm.c