From: Tejun Heo
Date: Sat, 25 Apr 2026 00:31:36 +0000 (-1000)
Subject: sched_ext: Pass held rq to SCX_CALL_OP() for core_sched_before
X-Git-Tag: v7.1-rc2~27^2~7
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=4155fb489fa175ec74eedde7d02219cf2fe74303;p=thirdparty%2Fkernel%2Flinux.git

sched_ext: Pass held rq to SCX_CALL_OP() for core_sched_before

scx_prio_less() runs from core-sched's pick_next_task() path with the rq
locked, but invokes ops.core_sched_before() with a NULL locked_rq, leaving
scx_locked_rq_state NULL. If the BPF callback then calls a kfunc that
re-acquires the rq based on scx_locked_rq() - e.g. scx_bpf_cpuperf_set(cpu) -
it re-acquires the already-held rq. Pass task_rq(a) instead.

Fixes: 7b0888b7cc19 ("sched_ext: Implement core-sched support")
Cc: stable@vger.kernel.org # v6.12+
Reported-by: Chris Mason
Signed-off-by: Tejun Heo
Reviewed-by: Andrea Righi
---

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 73d629559d6d..ba977154273c 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3198,7 +3198,7 @@ bool scx_prio_less(const struct task_struct *a, const struct task_struct *b,
 	if (sch_a == sch_b && SCX_HAS_OP(sch_a, core_sched_before) &&
 	    !scx_bypassing(sch_a, task_cpu(a)))
 		return SCX_CALL_OP_2TASKS_RET(sch_a, core_sched_before,
-					      NULL,
+					      task_rq(a),
 					      (struct task_struct *)a,
 					      (struct task_struct *)b);
 	else