From: Peter Zijlstra
Date: Mon, 4 Nov 2024 13:39:26 +0000 (+0100)
Subject: perf/core: Remove retry loop from perf_mmap()
X-Git-Tag: v6.15-rc1~217^2~22
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=8eaec7bb723c9a0addfc0457e2f28e41735607af;p=thirdparty%2Fkernel%2Flinux.git

perf/core: Remove retry loop from perf_mmap()

AFAICT there is no actual benefit from the mutex drop on re-try. The
'worst' case scenario is that we instantly re-gain the mutex without
perf_mmap_close() getting it. So might as well make that the normal
case.

Reflow the code to make the ring buffer detach case naturally flow
into the no ring buffer case.

[ mingo: Forward ported it ]

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Reviewed-by: Ravi Bangoria
Link: https://lore.kernel.org/r/20241104135519.463607258@infradead.org
---

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 4cd3494c65e20..ca4c1242c29b5 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6719,28 +6719,33 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 			return -EINVAL;
 
 		WARN_ON_ONCE(event->ctx->parent_ctx);
-again:
 		mutex_lock(&event->mmap_mutex);
+
 		if (event->rb) {
 			if (data_page_nr(event->rb) != nr_pages) {
 				ret = -EINVAL;
 				goto unlock;
 			}
 
-			if (!atomic_inc_not_zero(&event->rb->mmap_count)) {
+			if (atomic_inc_not_zero(&event->rb->mmap_count)) {
 				/*
-				 * Raced against perf_mmap_close(); remove the
-				 * event and try again.
+				 * Success -- managed to mmap() the same buffer
+				 * multiple times.
 				 */
-				ring_buffer_attach(event, NULL);
-				mutex_unlock(&event->mmap_mutex);
-				goto again;
+				ret = 0;
+				/* We need the rb to map pages. */
+				rb = event->rb;
+				goto unlock;
 			}
 
-			/* We need the rb to map pages. */
-			rb = event->rb;
-			goto unlock;
+			/*
+			 * Raced against perf_mmap_close()'s
+			 * atomic_dec_and_mutex_lock() remove the
+			 * event and continue as if !event->rb
+			 */
+			ring_buffer_attach(event, NULL);
 		}
+
 	} else {
 		/*
 		 * AUX area mapping: if rb->aux_nr_pages != 0, it's already
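
[ To make the control-flow change above concrete, here is a minimal
  userspace sketch of the before/after locking shape, built on pthreads
  and C11 atomics. All identifiers in it -- buf_mmap_retry(),
  buf_mmap_straight(), inc_not_zero(), attach_fresh_buffer() -- are
  hypothetical stand-ins, not kernel symbols; only the shape of the
  code mirrors perf_mmap(). ]

/*
 * Illustrative sketch, not kernel code: models the retry-loop pattern
 * this commit removes, and the straight-line pattern that replaces it.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t mmap_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_int mmap_count;		/* 0 => buffer is being torn down */
static int buffer, *rb;			/* stand-in for event->rb */

/* Userspace analogue of the kernel's atomic_inc_not_zero(). */
static bool inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	while (old != 0) {
		if (atomic_compare_exchange_weak(v, &old, old + 1))
			return true;
	}
	return false;
}

/* Attach a fresh buffer; caller holds mmap_mutex. */
static void attach_fresh_buffer(void)
{
	rb = &buffer;
	atomic_store(&mmap_count, 1);
}

/* Old shape: on the race, drop the mutex and start over via a label. */
static int buf_mmap_retry(void)
{
again:
	pthread_mutex_lock(&mmap_mutex);
	if (rb) {
		if (!inc_not_zero(&mmap_count)) {
			/* Raced against teardown; unlock and retry. */
			rb = NULL;
			pthread_mutex_unlock(&mmap_mutex);
			goto again;
		}
		pthread_mutex_unlock(&mmap_mutex);
		return 0;		/* mapped the existing buffer */
	}
	attach_fresh_buffer();
	pthread_mutex_unlock(&mmap_mutex);
	return 0;
}

/* New shape: keep the mutex; the detach case falls through to !rb. */
static int buf_mmap_straight(void)
{
	pthread_mutex_lock(&mmap_mutex);
	if (rb) {
		if (inc_not_zero(&mmap_count)) {
			pthread_mutex_unlock(&mmap_mutex);
			return 0;	/* mapped the existing buffer */
		}
		/* Raced against teardown; continue as if !rb. */
		rb = NULL;
	}
	attach_fresh_buffer();
	pthread_mutex_unlock(&mmap_mutex);
	return 0;
}

int main(void)
{
	buf_mmap_straight();		/* first caller attaches a buffer */
	buf_mmap_retry();		/* second caller bumps the count   */
	printf("mmap_count = %d\n", atomic_load(&mmap_count));
	return 0;
}

[ The 'worst' case the changelog mentions is visible in
  buf_mmap_straight(): when inc_not_zero() fails, the caller already
  holds mmap_mutex and simply falls through to the fresh-buffer path --
  exactly the state the old goto-again path had to drop and re-acquire
  the mutex to reach. ]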