From: Shrikanth Hegde
Date: Mon, 23 Mar 2026 19:36:27 +0000 (+0530)
Subject: sched/fair: Get this cpu once in find_new_ilb()
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=76504bce4ee6b8757647e07bc1710dcac9acdc2e;p=thirdparty%2Fkernel%2Flinux.git

sched/fair: Get this cpu once in find_new_ilb()

Calling smp_processor_id():

- With CONFIG_DEBUG_PREEMPT=y: if preemption/irqs are disabled, it does
  not print any warning.
- With CONFIG_DEBUG_PREEMPT=n: it does nothing apart from reading
  __smp_processor_id().

So with both CONFIG_DEBUG_PREEMPT=y/n, in a preemption-disabled section
it is better to cache the value. It could save a few cycles; though
tiny, when repeated in a loop this could add up.

find_new_ilb() is called in interrupt context, so preemption is
disabled. Hence, hoist the smp_processor_id() call out of the loop.

Signed-off-by: Shrikanth Hegde
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Mukesh Kumar Chaurasiya (IBM)
Reviewed-by: K Prateek Nayak
Link: https://patch.msgid.link/20260323193630.640311-2-sshegde@linux.ibm.com
---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0a35a82e47920..226509231e673 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12614,14 +12614,14 @@ static inline int on_null_domain(struct rq *rq)
  */
 static inline int find_new_ilb(void)
 {
+	int this_cpu = smp_processor_id();
 	const struct cpumask *hk_mask;
 	int ilb_cpu;
 
 	hk_mask = housekeeping_cpumask(HK_TYPE_KERNEL_NOISE);
 
 	for_each_cpu_and(ilb_cpu, nohz.idle_cpus_mask, hk_mask) {
-
-		if (ilb_cpu == smp_processor_id())
+		if (ilb_cpu == this_cpu)
 			continue;
 
 		if (idle_cpu(ilb_cpu))