--- /dev/null
+From b220c049d5196dd94d992dd2dc8cba1a5e6123bf Mon Sep 17 00:00:00 2001
+From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
+Date: Wed, 10 Feb 2021 11:53:22 -0500
+Subject: tracing: Check length before giving out the filter buffer
+
+From: Steven Rostedt (VMware) <rostedt@goodmis.org>
+
+commit b220c049d5196dd94d992dd2dc8cba1a5e6123bf upstream.
+
+When filters are used by trace events, a page is allocated on each CPU and
+used to copy the trace event fields to this page before writing to the ring
+buffer. The reason to use the filter and not write directly into the ring
+buffer is because a filter may discard the event and there's more overhead
+on discarding from the ring buffer than the extra copy.
+
+The problem here is that there is no check of the requested size against the
+size of this page. If a filtered event asks for more than a page, it still
+gets only a page, leading to the caller writing more than what was allocated.
+
+Check the length of the request, and if it is more than PAGE_SIZE minus the
+header, fall back to allocating from the ring buffer directly. The ring
+buffer may still reject the event if it's too big, but it won't overflow.
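The guard added below can be illustrated with a standalone sketch. The names here (`fits_in_temp_buffer`, the header struct, the `PAGE_SIZE` value) are hypothetical stand-ins for the kernel's types, not the actual tracing code:

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096          /* stand-in for the kernel's page size */

/* hypothetical stand-in for the per-CPU buffered event header */
struct ring_buffer_event_hdr {
	unsigned int array[1];
};

/*
 * Decide whether a filtered event of 'len' bytes may use the per-CPU
 * temp page: only if the payload plus the header fits within one page.
 * Otherwise the caller must allocate from the ring buffer directly.
 */
static bool fits_in_temp_buffer(size_t len)
{
	return len < PAGE_SIZE - sizeof(struct ring_buffer_event_hdr);
}
```

With this check in place, an oversized request is routed to the ring buffer path, which can reject it cleanly instead of overrunning the page.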
+
+Link: https://lore.kernel.org/ath10k/1612839593-2308-1-git-send-email-wgong@codeaurora.org/
+
+Cc: stable@vger.kernel.org
+Fixes: 0fc1b09ff1ff4 ("tracing: Use temp buffer when filtering events")
+Reported-by: Wen Gong <wgong@codeaurora.org>
+Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2090,7 +2090,7 @@ trace_event_buffer_lock_reserve(struct r
+ (entry = this_cpu_read(trace_buffered_event))) {
+ /* Try to use the per cpu buffer first */
+ val = this_cpu_inc_return(trace_buffered_event_cnt);
+- if (val == 1) {
++ if ((len < (PAGE_SIZE - sizeof(*entry))) && val == 1) {
+ trace_event_setup(entry, type, flags, pc);
+ entry->array[0] = len;
+ return entry;
--- /dev/null
+From 256cfdd6fdf70c6fcf0f7c8ddb0ebd73ce8f3bc9 Mon Sep 17 00:00:00 2001
+From: "Steven Rostedt (VMware)" <rostedt@goodmis.org>
+Date: Fri, 5 Feb 2021 15:40:04 -0500
+Subject: tracing: Do not count ftrace events in top level enable output
+
+From: Steven Rostedt (VMware) <rostedt@goodmis.org>
+
+commit 256cfdd6fdf70c6fcf0f7c8ddb0ebd73ce8f3bc9 upstream.
+
+The file /sys/kernel/tracing/events/enable is used to enable all events by
+echoing in "1", or to disable all events by echoing in "0". To know whether
+all events are enabled, all disabled, or only some enabled, reading the file
+should show either "1" (all enabled), "0" (all disabled), or "X" (some
+enabled but not all of them). This works the same as the "enable" files in
+the individual system directories (like tracing/events/sched/enable).
+
+But when all events are enabled, the top level "enable" file shows "X". The
+reason is that it's checking the "ftrace" events, which are special events
+that exist only for their format files. These include the format for the
+function tracer events, which are enabled when the function tracer is
+enabled, but not by the "enable" file. The check includes these events,
+which will always be disabled, so even though all true events are enabled,
+the top level "enable" file shows "X" instead of "1".
+
+To fix this, have the check test the event's flags for the "IGNORE_ENABLE"
+flag, and if it is set, skip that event.
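The effect of skipping flagged events can be sketched in isolation. This is a simplified model, not the kernel code: `EVENT_ENABLED` is a hypothetical bit standing in for the real enabled state, and `enable_state` mimics what the top-level "enable" read reports:

```c
/* stand-in for the kernel's TRACE_EVENT_FL_IGNORE_ENABLE flag */
#define TRACE_EVENT_FL_IGNORE_ENABLE 0x01
/* hypothetical "this event is enabled" bit for the sketch */
#define EVENT_ENABLED                0x02

/*
 * Compute what the top-level "enable" file would report for a set of
 * event flags: '1' (all enabled), '0' (all disabled), or 'X' (mixed).
 * Events carrying IGNORE_ENABLE (the format-only "ftrace" events) are
 * skipped, so a permanently disabled format event no longer turns an
 * all-enabled report into 'X'.
 */
static char enable_state(const unsigned int *flags, int n)
{
	int seen = 0;

	for (int i = 0; i < n; i++) {
		if (flags[i] & TRACE_EVENT_FL_IGNORE_ENABLE)
			continue;
		seen |= (flags[i] & EVENT_ENABLED) ? 1 : 2;
	}
	return seen == 3 ? 'X' : (seen == 1 ? '1' : '0');
}
```

With two enabled events plus one IGNORE_ENABLE event, this reports '1'; without the skip, the ignored event would count as disabled and force 'X'.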
+
+Cc: stable@vger.kernel.org
+Fixes: 553552ce1796c ("tracing: Combine event filter_active and enable into single flags field")
+Reported-by: "Yordan Karadzhov (VMware)" <y.karadz@gmail.com>
+Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_events.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -1105,7 +1105,8 @@ system_enable_read(struct file *filp, ch
+ mutex_lock(&event_mutex);
+ list_for_each_entry(file, &tr->events, list) {
+ call = file->event_call;
+- if (!trace_event_name(call) || !call->class || !call->class->reg)
++ if ((call->flags & TRACE_EVENT_FL_IGNORE_ENABLE) ||
++ !trace_event_name(call) || !call->class || !call->class->reg)
+ continue;
+
+ if (system && strcmp(call->class->system, system->name) != 0)