From: Greg Kroah-Hartman
Date: Thu, 19 Mar 2026 14:09:35 +0000 (+0100)
Subject: 6.1-stable patches
X-Git-Tag: v6.18.19~2
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=180fa314df429a646a7d9f76a4ee08da0a62cbc5;p=thirdparty%2Fkernel%2Fstable-queue.git

6.1-stable patches

added patches:
	io_uring-io-wq-check-io_wq_bit_exit-inside-work-run-loop.patch
	io_uring-kbuf-check-if-target-buffer-list-is-still-legacy-on-recycle.patch
---

diff --git a/queue-6.1/io_uring-io-wq-check-io_wq_bit_exit-inside-work-run-loop.patch b/queue-6.1/io_uring-io-wq-check-io_wq_bit_exit-inside-work-run-loop.patch
new file mode 100644
index 0000000000..35470e3b97
--- /dev/null
+++ b/queue-6.1/io_uring-io-wq-check-io_wq_bit_exit-inside-work-run-loop.patch
@@ -0,0 +1,91 @@
+From 10dc959398175736e495f71c771f8641e1ca1907 Mon Sep 17 00:00:00 2001
+From: Jens Axboe
+Date: Tue, 20 Jan 2026 07:42:50 -0700
+Subject: io_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop
+
+From: Jens Axboe
+
+commit 10dc959398175736e495f71c771f8641e1ca1907 upstream.
+
+Currently this is checked before running the pending work. Normally this
+is quite fine, as work items either end up blocking (which will create a
+new worker for other items), or they complete fairly quickly. But syzbot
+reports an issue where io-wq takes seemingly forever to exit, and with a
+bit of debugging, this turns out to be because it queues a bunch of big
+(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't
+support ->read_iter(), loop_rw_iter() ends up handling them. Each read
+returns 16MB of data read, which takes 20 (!!) seconds. With a bunch of
+these pending, processing the whole chain can take a long time. Easily
+longer than the syzbot uninterruptible sleep timeout of 140 seconds.
+This then triggers a complaint off the io-wq exit path:
+
+INFO: task syz.4.135:6326 blocked for more than 143 seconds.
+      Not tainted syzkaller #0
+      Blocked by coredump.
+"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+task:syz.4.135 state:D stack:26824 pid:6326 tgid:6324 ppid:5957 task_flags:0x400548 flags:0x00080000
+Call Trace:
+ <TASK>
+ context_switch kernel/sched/core.c:5256 [inline]
+ __schedule+0x1139/0x6150 kernel/sched/core.c:6863
+ __schedule_loop kernel/sched/core.c:6945 [inline]
+ schedule+0xe7/0x3a0 kernel/sched/core.c:6960
+ schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
+ do_wait_for_common kernel/sched/completion.c:100 [inline]
+ __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
+ io_wq_exit_workers io_uring/io-wq.c:1328 [inline]
+ io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356
+ io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203
+ io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651
+ io_uring_files_cancel include/linux/io_uring.h:19 [inline]
+ do_exit+0x2ce/0x2bd0 kernel/exit.c:911
+ do_group_exit+0xd3/0x2a0 kernel/exit.c:1112
+ get_signal+0x2671/0x26d0 kernel/signal.c:3034
+ arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337
+ __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
+ exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75
+ __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
+ syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
+ syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
+ syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
+ do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+RIP: 0033:0x7fa02738f749
+RSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
+RAX: fffffffffffffe00 RBX: 00007fa0275e6098 RCX: 00007fa02738f749
+RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098
+RBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000
+R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
+R13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98
+
+There's really nothing wrong here, outside of processing these reads
+will take a LONG time. However, we can speed up the exit by checking the
+IO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will
+exit the ring after queueing up all of these reads. Then once the first
+item is processed, io-wq will simply cancel the rest. That should avoid
+syzbot running into this complaint again.
+
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/all/68a2decc.050a0220.e29e5.0099.GAE@google.com/
+Reported-by: syzbot+4eb282331cab6d5b6588@syzkaller.appspotmail.com
+Signed-off-by: Jens Axboe
+[ Minor conflict resolved. ]
+Signed-off-by: Jianqiang kang
+Signed-off-by: Greg Kroah-Hartman
+---
+ io_uring/io-wq.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -554,9 +554,9 @@ static void io_worker_handle_work(struct
+ 	struct io_wqe_acct *acct = io_wqe_get_acct(worker);
+ 	struct io_wqe *wqe = worker->wqe;
+ 	struct io_wq *wq = wqe->wq;
+-	bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
+ 
+ 	do {
++		bool do_kill = test_bit(IO_WQ_BIT_EXIT, &wq->state);
+ 		struct io_wq_work *work;
+ 
+ 		/*
diff --git a/queue-6.1/io_uring-kbuf-check-if-target-buffer-list-is-still-legacy-on-recycle.patch b/queue-6.1/io_uring-kbuf-check-if-target-buffer-list-is-still-legacy-on-recycle.patch
new file mode 100644
index 0000000000..b7f4b8eda9
--- /dev/null
+++ b/queue-6.1/io_uring-kbuf-check-if-target-buffer-list-is-still-legacy-on-recycle.patch
@@ -0,0 +1,44 @@
+From 35660c08085dad169e708326998a50ef59aefcbc Mon Sep 17 00:00:00 2001
+From: Jens Axboe
+Date: Thu, 12 Mar 2026 08:59:25 -0600
+Subject: io_uring/kbuf: check if target buffer list is still legacy on recycle
+
+From: Jens Axboe
+
+Commit c2c185be5c85d37215397c8e8781abf0a69bec1f upstream.
+
+There's a gap between when the buffer was grabbed and when it
+potentially gets recycled, where if the list is empty, someone could've
+upgraded it to a ring provided type. This can happen if the request
+is forced via io-wq. The legacy recycling is missing checking if the
+buffer_list still exists, and if it's of the correct type. Add those
+checks.
+
+Cc: stable@vger.kernel.org
+Fixes: c7fb19428d67 ("io_uring: add support for ring mapped supplied buffers")
+Reported-by: Keenan Dong
+Signed-off-by: Jens Axboe
+Signed-off-by: Greg Kroah-Hartman
+---
+ io_uring/kbuf.c |    8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -69,9 +69,15 @@ void io_kbuf_recycle_legacy(struct io_ki
+ 
+ 	buf = req->kbuf;
+ 	bl = io_buffer_get_list(ctx, buf->bgid);
+-	list_add(&buf->list, &bl->buf_list);
++	/*
++	 * If the buffer list was upgraded to a ring-based one, or removed,
++	 * while the request was in-flight in io-wq, drop it.
++	 */
++	if (bl && !bl->buf_nr_pages)
++		list_add(&buf->list, &bl->buf_list);
+ 	req->flags &= ~REQ_F_BUFFER_SELECTED;
+ 	req->buf_index = buf->bgid;
++	req->kbuf = NULL;
+ 
+ 	io_ring_submit_unlock(ctx, issue_flags);
+ 	return;
diff --git a/queue-6.1/series b/queue-6.1/series
index bbbcc112f1..ac0199d8ce 100644
--- a/queue-6.1/series
+++ b/queue-6.1/series
@@ -277,3 +277,5 @@ drm-bridge-ti-sn65dsi86-add-support-for-displayport-mode-with-hpd.patch
 ipv6-use-rcu-in-ip6_xmit.patch
 bpf-forget-ranges-when-refining-tnum-after-jset.patch
 l2tp-do-not-use-sock_hold-in-pppol2tp_session_get_sock.patch
+io_uring-io-wq-check-io_wq_bit_exit-inside-work-run-loop.patch
+io_uring-kbuf-check-if-target-buffer-list-is-still-legacy-on-recycle.patch