--- /dev/null
+From 77de62ad3de3967818c3dbe656b7336ebee461d2 Mon Sep 17 00:00:00 2001
+From: Haocheng Yu <yuhaocheng035@gmail.com>
+Date: Tue, 3 Feb 2026 00:20:56 +0800
+Subject: perf/core: Fix refcount bug and potential UAF in perf_mmap
+
+From: Haocheng Yu <yuhaocheng035@gmail.com>
+
+commit 77de62ad3de3967818c3dbe656b7336ebee461d2 upstream.
+
+Syzkaller reported a "refcount_t: addition on 0; use-after-free"
+warning in perf_mmap().
+
+The issue is caused by a race condition between a failing mmap() setup
+and a concurrent mmap() on a dependent event (e.g., using output
+redirection).
+
+In perf_mmap(), the ring_buffer (rb) is allocated and assigned to
+event->rb with mmap_mutex held. The mutex is then released before
+map_range() is called.
+
+If map_range() fails, perf_mmap_close() is called to clean up.
+However, since the mutex was dropped, another thread attaching to
+this event (via inherited events or output redirection) can acquire
+the mutex, observe the valid event->rb pointer, and attempt to
+increment its reference count. If the cleanup path has already
+dropped the reference count to zero, this results in a
+use-after-free or refcount saturation warning.
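+
+The race can be sketched as follows (the interleaving shown is
+illustrative; T1 is the failing mmap(), T2 a concurrent mmap() on a
+dependent event):
+
+  T1: perf_mmap()                    T2: perf_mmap()
+  ----------------------------       ----------------------------
+  mutex_lock(&event->mmap_mutex)
+  event->rb = rb   /* refcount 1 */
+  mutex_unlock(&event->mmap_mutex)
+  map_range() fails
+  perf_mmap_close()
+    refcount drops to 0, rb freed
+                                     mutex_lock(&event->mmap_mutex)
+                                     observes stale event->rb
+                                     refcount_inc() on freed rb
+                                       -> addition on 0 / UAF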
+
+Fix this by extending the scope of mmap_mutex to cover the
+map_range() call. This ensures that ring buffer initialization and
+mapping (or cleanup on failure) happen atomically with respect to
+the mutex, preventing other threads from accessing a
+half-initialized or dying ring buffer.
+
+Closes: https://lore.kernel.org/oe-kbuild-all/202602020208.m7KIjdzW-lkp@intel.com/
+Reported-by: kernel test robot <lkp@intel.com>
+Signed-off-by: Haocheng Yu <yuhaocheng035@gmail.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://patch.msgid.link/20260202162057.7237-1-yuhaocheng035@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/events/core.c | 38 +++++++++++++++++++-------------------
+ 1 file changed, 19 insertions(+), 19 deletions(-)
+
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7188,28 +7188,28 @@ static int perf_mmap(struct file *file,
+ ret = perf_mmap_aux(vma, event, nr_pages);
+ if (ret)
+ return ret;
+- }
+
+- /*
+- * Since pinned accounting is per vm we cannot allow fork() to copy our
+- * vma.
+- */
+- vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
+- vma->vm_ops = &perf_mmap_vmops;
++ /*
++ * Since pinned accounting is per vm we cannot allow fork() to copy our
++ * vma.
++ */
++ vm_flags_set(vma, VM_DONTCOPY | VM_DONTEXPAND | VM_DONTDUMP);
++ vma->vm_ops = &perf_mmap_vmops;
+
+- mapped = get_mapped(event, event_mapped);
+- if (mapped)
+- mapped(event, vma->vm_mm);
++ mapped = get_mapped(event, event_mapped);
++ if (mapped)
++ mapped(event, vma->vm_mm);
+
+- /*
+- * Try to map it into the page table. On fail, invoke
+- * perf_mmap_close() to undo the above, as the callsite expects
+- * full cleanup in this case and therefore does not invoke
+- * vmops::close().
+- */
+- ret = map_range(event->rb, vma);
+- if (ret)
+- perf_mmap_close(vma);
++ /*
++ * Try to map it into the page table. On fail, invoke
++ * perf_mmap_close() to undo the above, as the callsite expects
++ * full cleanup in this case and therefore does not invoke
++ * vmops::close().
++ */
++ ret = map_range(event->rb, vma);
++ if (ret)
++ perf_mmap_close(vma);
++ }
+
+ return ret;
+ }