From: Greg Kroah-Hartman
Date: Thu, 23 Oct 2025 15:11:27 +0000 (+0200)
Subject: drop 5.4 patch that shouldn't have gone there
X-Git-Tag: v5.4.301~48
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=6b7c90c5ef818bf0de3d232b916600e8152e8af3;p=thirdparty%2Fkernel%2Fstable-queue.git

drop 5.4 patch that shouldn't have gone there
---

diff --git a/queue-5.4/sched-fair-block-delayed-tasks-on-throttled-hierarchy-during-dequeue.patch b/queue-5.4/sched-fair-block-delayed-tasks-on-throttled-hierarchy-during-dequeue.patch
deleted file mode 100644
index 102e23d86e..0000000000
--- a/queue-5.4/sched-fair-block-delayed-tasks-on-throttled-hierarchy-during-dequeue.patch
+++ /dev/null
@@ -1,103 +0,0 @@
-From kprateek.nayak@amd.com Thu Oct 23 17:10:28 2025
-From: K Prateek Nayak
-Date: Thu, 23 Oct 2025 04:03:59 +0000
-Subject: sched/fair: Block delayed tasks on throttled hierarchy during dequeue
-To: Greg Kroah-Hartman , Sasha Levin , , Matt Fleming , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot ,
-Cc: Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , , Matt Fleming , "Oleg Nesterov" , John Stultz , Chris Arges , "Luis Claudio R. Goncalves" , "K Prateek Nayak"
-Message-ID: <20251023040359.39021-1-kprateek.nayak@amd.com>
-
-From: K Prateek Nayak
-
-Dequeuing a fair task on a throttled hierarchy returns early on
-encountering a throttled cfs_rq since the throttle path has already
-dequeued the hierarchy above and has adjusted the h_nr_* accounting till
-the root cfs_rq.
-
-dequeue_entities() crucially misses calling __block_task() for delayed
-tasks being dequeued on the throttled hierarchies, but this was mostly
-harmless until commit b7ca5743a260 ("sched/core: Tweak
-wait_task_inactive() to force dequeue sched_delayed tasks") since all
-existing cases would re-enqueue the task if task_on_rq_queued() returned
-true and the task would eventually be blocked at pick after the
-hierarchy was unthrottled.
-
-wait_task_inactive() is special as it expects the delayed task on
-throttled hierarchy to reach the blocked state on dequeue but since
-__block_task() is never called, task_on_rq_queued() continues to return
-true. Furthermore, since the task is now off the hierarchy, the pick
-never reaches it to fully block the task even after unthrottle, leading
-to wait_task_inactive() looping endlessly.
-
-Remedy this by calling __block_task() if a delayed task is being
-dequeued on a throttled hierarchy.
-
-This fix is only required for stable kernels implementing delay dequeue
-(>= v6.12) before v6.18 since upstream commit e1fad12dcb66 ("sched/fair:
-Switch to task based throttle model") indirectly fixes this by removing
-the early return conditions in dequeue_entities() as part of the per-task
-throttle feature.
-
-Cc: stable@vger.kernel.org
-Reported-by: Matt Fleming
-Closes: https://lore.kernel.org/all/20250925133310.1843863-1-matt@readmodwrite.com/
-Fixes: b7ca5743a260 ("sched/core: Tweak wait_task_inactive() to force dequeue sched_delayed tasks")
-Tested-by: Matt Fleming
-Signed-off-by: K Prateek Nayak
-Signed-off-by: Greg Kroah-Hartman
----
- kernel/sched/fair.c | 9 ++++++---
- 1 file changed, 6 insertions(+), 3 deletions(-)
-
-diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
-index 8ce56a8d507f..f0a4d9d7424d 100644
---- a/kernel/sched/fair.c
-+++ b/kernel/sched/fair.c
-@@ -6969,6 +6969,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
- 	int h_nr_runnable = 0;
- 	struct cfs_rq *cfs_rq;
- 	u64 slice = 0;
-+	int ret = 0;
- 
- 	if (entity_is_task(se)) {
- 		p = task_of(se);
-@@ -6998,7 +6999,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
- 
- 		/* end evaluation on encountering a throttled cfs_rq */
- 		if (cfs_rq_throttled(cfs_rq))
--			return 0;
-+			goto out;
- 
- 		/* Don't dequeue parent if it has other entities besides us */
- 		if (cfs_rq->load.weight) {
-@@ -7039,7 +7040,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
- 
- 		/* end evaluation on encountering a throttled cfs_rq */
- 		if (cfs_rq_throttled(cfs_rq))
--			return 0;
-+			goto out;
- 	}
- 
- 	sub_nr_running(rq, h_nr_queued);
- 
-@@ -7048,6 +7049,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
- 	if (unlikely(!was_sched_idle && sched_idle_rq(rq)))
- 		rq->next_balance = jiffies;
- 
-+	ret = 1;
-+out:
- 	if (p && task_delayed) {
- 		WARN_ON_ONCE(!task_sleep);
- 		WARN_ON_ONCE(p->on_rq != 1);
-@@ -7063,7 +7066,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
- 		__block_task(rq, p);
- 	}
- 
--	return 1;
-+	return ret;
- }
- 
- /*
-
-base-commit: 6c7871823908a4330e145d635371582f76ce1407
--- 
-2.34.1
-
diff --git a/queue-5.4/series b/queue-5.4/series
index 53c658d90a..36a20c2d30 100644
--- a/queue-5.4/series
+++ b/queue-5.4/series
@@ -169,4 +169,3 @@ sched-balancing-rename-newidle_balance-sched_balance.patch
 sched-fair-fix-pelt-lost-idle-time-detection.patch
 alsa-firewire-amdtp-stream-fix-enum-kernel-doc-warni.patch
 hfsplus-fix-slab-out-of-bounds-read-in-hfsplus_strcasecmp.patch
-sched-fair-block-delayed-tasks-on-throttled-hierarchy-during-dequeue.patch