From: Greg Kroah-Hartman
Date: Fri, 9 Oct 2020 14:11:04 +0000 (+0200)
Subject: 4.9-stable patches
X-Git-Tag: v4.4.239~55
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=4fe2d569bfddf8bebab69a436c224663a419fd56;p=thirdparty%2Fkernel%2Fstable-queue.git

4.9-stable patches

added patches:
	ftrace-move-rcu-is-watching-check-after-recursion-check.patch
---

diff --git a/queue-4.9/ftrace-move-rcu-is-watching-check-after-recursion-check.patch b/queue-4.9/ftrace-move-rcu-is-watching-check-after-recursion-check.patch
new file mode 100644
index 00000000000..23ce268f5a3
--- /dev/null
+++ b/queue-4.9/ftrace-move-rcu-is-watching-check-after-recursion-check.patch
@@ -0,0 +1,57 @@
+From b40341fad6cc2daa195f8090fd3348f18fff640a Mon Sep 17 00:00:00 2001
+From: "Steven Rostedt (VMware)"
+Date: Tue, 29 Sep 2020 12:40:31 -0400
+Subject: ftrace: Move RCU is watching check after recursion check
+
+From: Steven Rostedt (VMware)
+
+commit b40341fad6cc2daa195f8090fd3348f18fff640a upstream.
+
+The first thing that the ftrace function callback helper functions should do
+is to check for recursion. Peter Zijlstra found that when
+"rcu_is_watching()" had its notrace removed, it caused perf function tracing
+to crash. This is because the call to rcu_is_watching() is made before
+function recursion is checked, and if it is traced, it will cause an
+infinite recursion loop.
+
+rcu_is_watching() should still stay notrace, but to prevent this, it should
+never have crashed in the first place. The recursion prevention must be the
+first thing done in callback functions.
+
+Link: https://lore.kernel.org/r/20200929112541.GM2628@hirez.programming.kicks-ass.net
+
+Cc: stable@vger.kernel.org
+Cc: Paul McKenney
+Fixes: c68c0fa293417 ("ftrace: Have ftrace_ops_get_func() handle RCU and PER_CPU flags too")
+Acked-by: Peter Zijlstra (Intel)
+Reported-by: Peter Zijlstra (Intel)
+Signed-off-by: Steven Rostedt (VMware)
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ kernel/trace/ftrace.c |    8 +++-----
+ 1 file changed, 3 insertions(+), 5 deletions(-)
+
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5334,17 +5334,15 @@ static void ftrace_ops_assist_func(unsig
+ {
+ 	int bit;
+ 
+-	if ((op->flags & FTRACE_OPS_FL_RCU) && !rcu_is_watching())
+-		return;
+-
+ 	bit = trace_test_and_set_recursion(TRACE_LIST_START, TRACE_LIST_MAX);
+ 	if (bit < 0)
+ 		return;
+ 
+ 	preempt_disable_notrace();
+ 
+-	if (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
+-	    !ftrace_function_local_disabled(op)) {
++	if ((!(op->flags & FTRACE_OPS_FL_RCU) || rcu_is_watching()) &&
++	    (!(op->flags & FTRACE_OPS_FL_PER_CPU) ||
++	     !ftrace_function_local_disabled(op))) {
+ 		op->func(ip, parent_ip, op, regs);
+ 	}
+ 
diff --git a/queue-4.9/series b/queue-4.9/series
index 0e2d26d71d2..818166c68af 100644
--- a/queue-4.9/series
+++ b/queue-4.9/series
@@ -31,3 +31,4 @@ platform-x86-thinkpad_acpi-re-initialize-acpi-buffer-size-when-reuse.patch
 driver-core-fix-probe_count-imbalance-in-really_probe.patch
 perf-top-fix-stdio-interface-input-handling-with-glibc-2.28.patch
 mtd-rawnand-sunxi-fix-the-probe-error-path.patch
+ftrace-move-rcu-is-watching-check-after-recursion-check.patch
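
For readers following the queue, here is a minimal user-space sketch of why the ordering matters. This is not kernel code: in_callback and watched_helper are illustrative stand-ins for trace_test_and_set_recursion() and rcu_is_watching(), and the only point it demonstrates is that the recursion guard must be taken before calling any helper that might itself be traced, otherwise every call of the helper re-enters the callback and never terminates.

    /* Sketch only; models the ordering fixed by the patch, not the ftrace API. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool in_callback;   /* stand-in for trace_test_and_set_recursion() */
    static int  rejected;      /* re-entries stopped by the recursion guard   */

    static void ftrace_callback(void);

    /*
     * Stand-in for rcu_is_watching(): pretend it lost its notrace marking,
     * so every call to it re-enters the callback.
     */
    static bool watched_helper(void)
    {
            ftrace_callback();
            return true;
    }

    static void ftrace_callback(void)
    {
            /* Recursion check first: the re-entry from watched_helper() stops here. */
            if (in_callback) {
                    rejected++;
                    return;
            }
            in_callback = true;

            /* Only now is it safe to call helpers that may themselves be traced. */
            if (watched_helper())
                    printf("callback body ran once, %d re-entry rejected\n", rejected);

            in_callback = false;
    }

    int main(void)
    {
            ftrace_callback();
            return 0;
    }

With the guard taken first, the re-entry from the (hypothetically traced) helper bounces off the guard instead of recursing, which is the same effect the patch achieves by moving the rcu_is_watching() test after the recursion check in ftrace_ops_assist_func().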