From: Zheng Yejian
Date: Wed, 6 Sep 2023 08:19:30 +0000 (+0800)
Subject: ring-buffer: Avoid softlockup in ring_buffer_resize()
X-Git-Tag: v6.6-rc2~33^2~9
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a;p=thirdparty%2Fkernel%2Flinux.git

ring-buffer: Avoid softlockup in ring_buffer_resize()

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop. If the kernel preemption model is PREEMPT_NONE and there
are many CPUs with many buffer pages to allocate, the loop may not give
up the CPU for a long time, eventually causing a softlockup. To avoid
this, call cond_resched() after each per-CPU buffer allocation.

Link: https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyejian1@huawei.com
Cc:
Signed-off-by: Zheng Yejian
Signed-off-by: Steven Rostedt (Google)
---

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 78502d4c7214e..72ccf75defd0f 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2198,6 +2198,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
 			err = -ENOMEM;
 			goto out_err;
 		}
+
+		cond_resched();
 	}
 
 	cpus_read_lock();
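The pattern the patch applies can be sketched in userspace C. This is an illustrative analogue only, not the kernel's actual code: `NR_CPUS`, `PAGES_PER_CPU`, `alloc_cpu_buffer_pages()`, and `resize_all_buffers()` are made-up names, and `sched_yield()` stands in for the kernel's `cond_resched()` (which is unavailable outside the kernel). The point is the placement of the yield: once per CPU iteration, after that CPU's pages are allocated, so a long multi-CPU allocation never monopolizes the processor.

```c
#include <sched.h>
#include <stdlib.h>

#define NR_CPUS       8    /* illustrative; the real value is per-system */
#define PAGES_PER_CPU 1024 /* illustrative page count per CPU buffer */

/* Stand-in for allocating one CPU's buffer pages; 0 on success, -1 on OOM. */
static int alloc_cpu_buffer_pages(void **pages, int nr_pages)
{
	for (int i = 0; i < nr_pages; i++) {
		pages[i] = malloc(4096);
		if (!pages[i]) {
			while (i-- > 0)
				free(pages[i]); /* unwind partial allocation */
			return -1;
		}
	}
	return 0;
}

/* Analogue of the ring_buffer_resize() loop: allocate per CPU, then yield. */
static int resize_all_buffers(void)
{
	void *pages[PAGES_PER_CPU];

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (alloc_cpu_buffer_pages(pages, PAGES_PER_CPU))
			return -1; /* the kernel code does 'goto out_err' here */

		for (int i = 0; i < PAGES_PER_CPU; i++)
			free(pages[i]); /* sketch only keeps memory briefly */

		/* The patch's fix: give the scheduler a chance after each
		 * CPU's allocation. sched_yield() plays cond_resched() here. */
		sched_yield();
	}
	return 0;
}
```

Without the per-iteration yield, a PREEMPT_NONE kernel only reschedules at explicit points, so a loop over many CPUs each allocating many pages can hold the CPU past the softlockup watchdog's threshold; one `cond_resched()` per iteration bounds the non-preemptible stretch to a single CPU's allocation.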