From: Greg Kroah-Hartman Date: Sun, 14 Feb 2021 08:17:55 +0000 (+0100) Subject: 4.9-stable patches X-Git-Tag: v5.4.99~45 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=78e8af30c15073e2cd016428238a23641ec7422c;p=thirdparty%2Fkernel%2Fstable-queue.git 4.9-stable patches added patches: tracing-check-length-before-giving-out-the-filter-buffer.patch tracing-do-not-count-ftrace-events-in-top-level-enable-output.patch --- diff --git a/queue-4.9/series b/queue-4.9/series index a8ec603d22a..cf3f658e085 100644 --- a/queue-4.9/series +++ b/queue-4.9/series @@ -16,3 +16,5 @@ futex-cure-exit-race.patch squashfs-add-more-sanity-checks-in-id-lookup.patch squashfs-add-more-sanity-checks-in-inode-lookup.patch squashfs-add-more-sanity-checks-in-xattr-id-lookup.patch +tracing-do-not-count-ftrace-events-in-top-level-enable-output.patch +tracing-check-length-before-giving-out-the-filter-buffer.patch diff --git a/queue-4.9/tracing-check-length-before-giving-out-the-filter-buffer.patch b/queue-4.9/tracing-check-length-before-giving-out-the-filter-buffer.patch new file mode 100644 index 00000000000..c32580ea227 --- /dev/null +++ b/queue-4.9/tracing-check-length-before-giving-out-the-filter-buffer.patch @@ -0,0 +1,46 @@ +From b220c049d5196dd94d992dd2dc8cba1a5e6123bf Mon Sep 17 00:00:00 2001 +From: "Steven Rostedt (VMware)" +Date: Wed, 10 Feb 2021 11:53:22 -0500 +Subject: tracing: Check length before giving out the filter buffer + +From: Steven Rostedt (VMware) + +commit b220c049d5196dd94d992dd2dc8cba1a5e6123bf upstream. + +When filters are used by trace events, a page is allocated on each CPU and +used to copy the trace event fields to this page before writing to the ring +buffer. The reason to use the filter and not write directly into the ring +buffer is that a filter may discard the event, and there's more overhead +in discarding from the ring buffer than in the extra copy. + +The problem here is that there is no check against the size being allocated +when using this page.
If an event asks for more than a page size while being +filtered, it will get only a page, leading to the caller writing more than +what was allocated. + +Check the length of the request, and if it is more than PAGE_SIZE minus the +header, fall back to allocating from the ring buffer directly. The ring +buffer may reject the event if it's too big anyway, but it won't overflow. + +Link: https://lore.kernel.org/ath10k/1612839593-2308-1-git-send-email-wgong@codeaurora.org/ + +Cc: stable@vger.kernel.org +Fixes: 0fc1b09ff1ff4 ("tracing: Use temp buffer when filtering events") +Reported-by: Wen Gong +Signed-off-by: Steven Rostedt (VMware) +Signed-off-by: Greg Kroah-Hartman +--- + kernel/trace/trace.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -2090,7 +2090,7 @@ trace_event_buffer_lock_reserve(struct r + (entry = this_cpu_read(trace_buffered_event))) { + /* Try to use the per cpu buffer first */ + val = this_cpu_inc_return(trace_buffered_event_cnt); +- if (val == 1) { ++ if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) { + trace_event_setup(entry, type, flags, pc); + entry->array[0] = len; + return entry; diff --git a/queue-4.9/tracing-do-not-count-ftrace-events-in-top-level-enable-output.patch b/queue-4.9/tracing-do-not-count-ftrace-events-in-top-level-enable-output.patch new file mode 100644 index 00000000000..5b16f4bc078 --- /dev/null +++ b/queue-4.9/tracing-do-not-count-ftrace-events-in-top-level-enable-output.patch @@ -0,0 +1,48 @@ +From 256cfdd6fdf70c6fcf0f7c8ddb0ebd73ce8f3bc9 Mon Sep 17 00:00:00 2001 +From: "Steven Rostedt (VMware)" +Date: Fri, 5 Feb 2021 15:40:04 -0500 +Subject: tracing: Do not count ftrace events in top level enable output + +From: Steven Rostedt (VMware) + +commit 256cfdd6fdf70c6fcf0f7c8ddb0ebd73ce8f3bc9 upstream. + +The file /sys/kernel/tracing/events/enable is used to enable all events by +echoing in "1", or disabling all events when echoing in "0".
To know if all +events are enabled, disabled, or some are enabled but not all of them, +cat-ing the file should show either "1" (all enabled), "0" (all disabled), or +"X" (some enabled but not all of them). This works the same as the "enable" +files in the individual system directories (like tracing/events/sched/enable). + +But when all events are enabled, the top level "enable" file shows "X". The +reason is that it's checking the "ftrace" events, which are special events +that only exist for their format files. These include the format for the +function tracer events, which are enabled when the function tracer is +enabled, but not by the "enable" file. The check includes these events, +which will always be disabled, and even though all true events are enabled, +the top level "enable" file will show "X" instead of "1". + +To fix this, have the check test the event's flags to see if it has the +"IGNORE_ENABLE" flag set, and if so, skip it. + +Cc: stable@vger.kernel.org +Fixes: 553552ce1796c ("tracing: Combine event filter_active and enable into single flags field") +Reported-by: "Yordan Karadzhov (VMware)" +Signed-off-by: Steven Rostedt (VMware) +Signed-off-by: Greg Kroah-Hartman +--- + kernel/trace/trace_events.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +--- a/kernel/trace/trace_events.c ++++ b/kernel/trace/trace_events.c +@@ -1105,7 +1105,8 @@ system_enable_read(struct file *filp, ch + mutex_lock(&event_mutex); + list_for_each_entry(file, &tr->events, list) { + call = file->event_call; +- if (!trace_event_name(call) || !call->class || !call->class->reg) ++ if ((call->flags & TRACE_EVENT_FL_IGNORE_ENABLE) || ++ !trace_event_name(call) || !call->class || !call->class->reg) + continue; + + if (system && strcmp(call->class->system, system->name) != 0)