From: Tejun Heo
Date: Sun, 19 Apr 2026 15:33:41 +0000 (-1000)
Subject: sched_ext: Mark scx_sched_hash insecure_elasticity
X-Git-Tag: v7.1-rc2~27^2~24
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=87019cb6c26178cef8fb9f9265b6ab7c4bda5262;p=thirdparty%2Fkernel%2Flinux.git

sched_ext: Mark scx_sched_hash insecure_elasticity

Entries are inserted into scx_sched_hash under scx_sched_lock (a
raw_spinlock_irq) in scx_link_sched(). rhashtable's synchronous grow
path calls get_random_u32() and performs a GFP_ATOMIC allocation; both
acquire regular spinlocks, which is unsafe under a raw_spinlock_t. Set
insecure_elasticity to skip the synchronous grow.

v2: - Dropped dsq_hash changes. Insertion is not under raw_spin_lock.
    - Switched from the no_sync_grow flag to insecure_elasticity.

Fixes: 25037af712eb ("sched_ext: Add rhashtable lookup for sub-schedulers")
Signed-off-by: Tejun Heo
---

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 012ca8bd70fb..7edd46f3ac43 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -32,6 +32,7 @@ static const struct rhashtable_params scx_sched_hash_params = {
 	.key_len = sizeof_field(struct scx_sched, ops.sub_cgroup_id),
 	.key_offset = offsetof(struct scx_sched, ops.sub_cgroup_id),
 	.head_offset = offsetof(struct scx_sched, hash_node),
+	.insecure_elasticity = true,	/* inserted under scx_sched_lock */
 };
 
 static struct rhashtable scx_sched_hash;