From: Shrikanth Hegde
Date: Mon, 23 Mar 2026 19:36:28 +0000 (+0530)
Subject: sched/core: Get this cpu once in ttwu_queue_cond()
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=0e81fe79fec5a639700f09f39c8ab680c3312ba2;p=thirdparty%2Fkernel%2Flinux.git

sched/core: Get this cpu once in ttwu_queue_cond()

Calling smp_processor_id() behaves as follows:

- With CONFIG_DEBUG_PREEMPT=y: if preemption/irqs are disabled, it does
  not print any warning.
- With CONFIG_DEBUG_PREEMPT=n: it does nothing beyond fetching
  __smp_processor_id().

So with both CONFIG_DEBUG_PREEMPT=y and =n, in a preemption-disabled
section it is better to cache the value. It could save a few cycles;
though tiny per call, repeated calls could add up.

ttwu_queue_cond() is called with interrupts disabled, so preemption is
disabled. Hence cache the value once instead of querying it repeatedly.

Signed-off-by: Shrikanth Hegde
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Mukesh Kumar Chaurasiya (IBM)
Link: https://patch.msgid.link/20260323193630.640311-3-sshegde@linux.ibm.com
---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 64b467c1d5b6e..7c7d4bf686d71 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3842,6 +3842,8 @@ bool cpus_share_resources(int this_cpu, int that_cpu)
 static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
 {
+	int this_cpu = smp_processor_id();
+
 	/* See SCX_OPS_ALLOW_QUEUED_WAKEUP. */
 	if (!scx_allow_ttwu_queue(p))
 		return false;
@@ -3866,10 +3868,10 @@ static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
 	 * If the CPU does not share cache, then queue the task on the
 	 * remote rqs wakelist to avoid accessing remote data.
 	 */
-	if (!cpus_share_cache(smp_processor_id(), cpu))
+	if (!cpus_share_cache(this_cpu, cpu))
 		return true;
 
-	if (cpu == smp_processor_id())
+	if (cpu == this_cpu)
 		return false;
 
 	/*