--- /dev/null
+From 49aa8a1f4d6800721c7971ed383078257f12e8f9 Mon Sep 17 00:00:00 2001
+From: Zheng Yejian <zhengyejian@huaweicloud.com>
+Date: Tue, 27 Aug 2024 20:46:54 +0800
+Subject: tracing: Avoid possible softlockup in tracing_iter_reset()
+
+From: Zheng Yejian <zhengyejian@huaweicloud.com>
+
+commit 49aa8a1f4d6800721c7971ed383078257f12e8f9 upstream.
+
+In __tracing_open(), when a max latency tracer has run on a CPU, the
+start time of that CPU's buffer is updated, and event entries with
+timestamps earlier than the buffer's start time are then skipped
+(see tracing_iter_reset()).
+
+A softlockup can occur if the kernel is non-preemptible and too many
+entries are skipped in the loop that resets every CPU buffer, so add
+cond_resched() there to avoid it.
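+
+For context, the skip loop in tracing_iter_reset() looks roughly like
+the sketch below once this change is applied (the ring_buffer_iter_peek()
+call and the time_start comparison are reconstructed from the code
+surrounding the hunk and are not part of this diff). tracing_iter_reset()
+is reached from __tracing_open() in process context, so sleeping in
+cond_resched() is safe here:
+
+	while ((event = ring_buffer_iter_peek(buf_iter, &ts))) {
+		/* Stop once entries fall inside the trace window */
+		if (ts >= iter->array_buffer->time_start)
+			break;
+		entries++;
+		ring_buffer_iter_advance(buf_iter);
+		/* This could be a big loop */
+		cond_resched();
+	}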
+
+Cc: stable@vger.kernel.org
+Fixes: 2f26ebd549b9a ("tracing: use timestamp to determine start of latency traces")
+Link: https://lore.kernel.org/20240827124654.3817443-1-zhengyejian@huaweicloud.com
+Suggested-by: Steven Rostedt <rostedt@goodmis.org>
+Signed-off-by: Zheng Yejian <zhengyejian@huaweicloud.com>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4068,6 +4068,8 @@ void tracing_iter_reset(struct trace_ite
+ break;
+ entries++;
+ ring_buffer_iter_advance(buf_iter);
++ /* This could be a big loop */
++ cond_resched();
+ }
+
+ per_cpu_ptr(iter->array_buffer->data, cpu)->skipped_entries = entries;