+++ /dev/null
-From nstange@suse.de Fri Jan 29 11:09:13 2021
-From: Nicolai Stange <nstange@suse.de>
-Date: Wed, 27 Jan 2021 14:34:43 +0100
-Subject: [PATCH for stable 5.4] io_uring: Fix current->fs handling in io_sq_wq_submit_work()
-To: stable@vger.kernel.org
-Cc: Jens Axboe <axboe@kernel.dk>, Nicolai Stange <nstange@suse.de>
-Message-ID: <20210127133443.2413-1-nstange@suse.de>
-
-From: Nicolai Stange <nstange@suse.de>
-
-No upstream commit, this is a fix to a stable 5.4 specific backport.
-
-The intention of backport commit cac68d12c531 ("io_uring: grab ->fs as part
-of async offload") as found in the stable 5.4 tree was to make
-io_sq_wq_submit_work() switch the workqueue task's ->fs over to the
-submitting task's ->fs for the duration of the IO operation.
-
-However, due to a small logic error, this change turned out to not have any
-actual effect. From a high level, the relevant code in
-io_sq_wq_submit_work() looks like
-
- old_fs_struct = current->fs;
- do {
- ...
- if (req->fs != current->fs && current->fs != old_fs_struct) {
- task_lock(current);
- if (req->fs)
- current->fs = req->fs;
- else
- current->fs = old_fs_struct;
- task_unlock(current);
- }
- ...
- } while (req);
-
-The if condition is supposed to cover the case that current->fs doesn't
-match what's needed for processing the request, but it can never evaluate
-to true because of the second clause: current->fs != old_fs_struct is
-false in the first iteration, as per the initialization of old_fs_struct,
-and since this prevents current->fs from ever getting replaced, the same
-follows inductively for all subsequent iterations.
-
-Fix said if condition such that
-- if req->fs is set and doesn't match current->fs, the latter will be
- switched to the former
-- or if req->fs is unset, the switch back to the initial old_fs_struct
- will be made, if necessary.
-
-While at it, also correct the condition for the ->fs related cleanup right
-before the return of io_sq_wq_submit_work(): currently, old_fs_struct is
-restored only if it's non-NULL. However, it is always non-NULL and thus,
-the if-condition is redundant. Presumably, the motivation had been to
-optimize and avoid switching current->fs back to the initial old_fs_struct
-in case it is found to have the desired value already. Make it so.
-
-Cc: stable@vger.kernel.org # v5.4
-Fixes: cac68d12c531 ("io_uring: grab ->fs as part of async offload")
-Reviewed-by: Jens Axboe <axboe@kernel.dk>
-Signed-off-by: Nicolai Stange <nstange@suse.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
-Tested on top of v5.4.90.
-
- fs/io_uring.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/fs/io_uring.c b/fs/io_uring.c
-index 4127ea027a14..478df7e10767 100644
---- a/fs/io_uring.c
-+++ b/fs/io_uring.c
-@@ -2226,7 +2226,8 @@ static void io_sq_wq_submit_work(struct work_struct *work)
- /* Ensure we clear previously set non-block flag */
- req->rw.ki_flags &= ~IOCB_NOWAIT;
-
-- if (req->fs != current->fs && current->fs != old_fs_struct) {
-+ if ((req->fs && req->fs != current->fs) ||
-+ (!req->fs && current->fs != old_fs_struct)) {
- task_lock(current);
- if (req->fs)
- current->fs = req->fs;
-@@ -2351,7 +2352,7 @@ static void io_sq_wq_submit_work(struct work_struct *work)
- mmput(cur_mm);
- }
- revert_creds(old_cred);
-- if (old_fs_struct) {
-+ if (old_fs_struct != current->fs) {
- task_lock(current);
- current->fs = old_fs_struct;
- task_unlock(current);
---
-2.26.2
-
--- /dev/null
+From bbeb97464eefc65f506084fd9f18f21653e01137 Mon Sep 17 00:00:00 2001
+From: Gaurav Kohli <gkohli@codeaurora.org>
+Date: Tue, 6 Oct 2020 15:03:53 +0530
+Subject: tracing: Fix race in trace_open and buffer resize call
+
+From: Gaurav Kohli <gkohli@codeaurora.org>
+
+commit bbeb97464eefc65f506084fd9f18f21653e01137 upstream.
+
+The race below can occur if trace_open and a resize of the cpu buffer
+run in parallel on different cpus:
+CPUX                                 CPUY
+                                     ring_buffer_resize
+                                     atomic_read(&buffer->resize_disabled)
+tracing_open
+tracing_reset_online_cpus
+ring_buffer_reset_cpu
+rb_reset_cpu
+                                     rb_update_pages
+                                     remove/insert pages
+resetting pointer
+
+This race can cause a data abort or sometimes an infinite loop in
+rb_remove_pages and rb_insert_pages while sanity-checking the
+pages.
+
+Take the buffer lock to fix this.
+
+Link: https://lkml.kernel.org/r/1601976833-24377-1-git-send-email-gkohli@codeaurora.org
+
+Cc: stable@vger.kernel.org
+Fixes: 83f40318dab00 ("ring-buffer: Make removal of ring buffer pages atomic")
+Reported-by: Denis Efremov <efremov@linux.com>
+Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
+Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/trace/ring_buffer.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4262,6 +4262,8 @@ void ring_buffer_reset_cpu(struct ring_b
+
+ if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ return;
++ /* prevent another thread from changing buffer sizes */
++ mutex_lock(&buffer->mutex);
+
+ atomic_inc(&buffer->resize_disabled);
+ atomic_inc(&cpu_buffer->record_disabled);
+@@ -4285,6 +4287,8 @@ void ring_buffer_reset_cpu(struct ring_b
+
+ atomic_dec(&cpu_buffer->record_disabled);
+ atomic_dec(&buffer->resize_disabled);
++
++ mutex_unlock(&buffer->mutex);
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);
+