From: Greg Kroah-Hartman
Date: Mon, 27 Nov 2017 12:23:42 +0000 (+0100)
Subject: 4.9-stable patches
X-Git-Tag: v3.18.85~45
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=4a32d83decfeff379e5f0399a8c2ae3d65c8c022;p=thirdparty%2Fkernel%2Fstable-queue.git

4.9-stable patches

added patches:
      lib-mpi-call-cond_resched-from-mpi_powm-loop.patch
      sched-make-resched_cpu-unconditional.patch
---

diff --git a/queue-4.9/lib-mpi-call-cond_resched-from-mpi_powm-loop.patch b/queue-4.9/lib-mpi-call-cond_resched-from-mpi_powm-loop.patch
new file mode 100644
index 00000000000..063e4bcdd62
--- /dev/null
+++ b/queue-4.9/lib-mpi-call-cond_resched-from-mpi_powm-loop.patch
@@ -0,0 +1,49 @@
+From 1d9ddde12e3c9bab7f3d3484eb9446315e3571ca Mon Sep 17 00:00:00 2001
+From: Eric Biggers
+Date: Tue, 7 Nov 2017 14:15:27 -0800
+Subject: lib/mpi: call cond_resched() from mpi_powm() loop
+
+From: Eric Biggers
+
+commit 1d9ddde12e3c9bab7f3d3484eb9446315e3571ca upstream.
+
+On a non-preemptible kernel, if KEYCTL_DH_COMPUTE is called with the
+largest permitted inputs (16384 bits), the kernel spends 10+ seconds
+doing modular exponentiation in mpi_powm() without rescheduling. If all
+threads do it, it locks up the system. Moreover, it can cause
+rcu_sched-stall warnings.
+
+Notwithstanding the insanity of doing this calculation in kernel mode
+rather than in userspace, fix it by calling cond_resched() as each bit
+from the exponent is processed. It's still noninterruptible, but at
+least it's preemptible now.
+
+Do the cond_resched() once per bit rather than once per MPI limb because
+each limb might still easily take 100+ milliseconds on slow CPUs.
+
+Signed-off-by: Eric Biggers
+Signed-off-by: Herbert Xu
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ lib/mpi/mpi-pow.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/lib/mpi/mpi-pow.c
++++ b/lib/mpi/mpi-pow.c
+@@ -26,6 +26,7 @@
+  * however I decided to publish this code under the plain GPL.
+  */
+ 
++#include <linux/sched.h>
+ #include <string.h>
+ #include "mpi-internal.h"
+ #include "longlong.h"
+@@ -256,6 +257,7 @@ int mpi_powm(MPI res, MPI base, MPI exp,
+             }
+             e <<= 1;
+             c--;
++            cond_resched();
+         }
+ 
+         i--;
diff --git a/queue-4.9/sched-make-resched_cpu-unconditional.patch b/queue-4.9/sched-make-resched_cpu-unconditional.patch
new file mode 100644
index 00000000000..746dfbfe233
--- /dev/null
+++ b/queue-4.9/sched-make-resched_cpu-unconditional.patch
@@ -0,0 +1,66 @@
+From 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 Mon Sep 17 00:00:00 2001
+From: "Paul E. McKenney"
+Date: Mon, 18 Sep 2017 08:54:40 -0700
+Subject: sched: Make resched_cpu() unconditional
+
+From: Paul E. McKenney
+
+commit 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 upstream.
+
+The current implementation of synchronize_sched_expedited() incorrectly
+assumes that resched_cpu() is unconditional, which it is not. This means
+that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
+fails as follows (analysis by Neeraj Upadhyay):
+
+o CPU1 is waiting for expedited wait to complete:
+
+    sync_rcu_exp_select_cpus
+        rdp->exp_dynticks_snap & 0x1 // returns 1 for CPU5
+        IPI sent to CPU5
+
+    synchronize_sched_expedited_wait
+        ret = swait_event_timeout(rsp->expedited_wq,
+                                  sync_rcu_preempt_exp_done(rnp_root),
+                                  jiffies_stall);
+
+    expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())
+
+o CPU5 handles IPI and fails to acquire rq lock.
+
+    Handles IPI
+        sync_sched_exp_handler
+            resched_cpu
+                returns while failing to try lock acquire rq->lock
+    need_resched is not set
+
+o CPU5 calls rcu_idle_enter() and as need_resched is not set, goes to
+  idle (schedule() is not called).
+
+o CPU 1 reports RCU stall.
+
+Given that resched_cpu() is now used only by RCU, this commit fixes the
+assumption by making resched_cpu() unconditional.
+
+Reported-by: Neeraj Upadhyay
+Suggested-by: Neeraj Upadhyay
+Signed-off-by: Paul E. McKenney
+Acked-by: Steven Rostedt (VMware)
+Acked-by: Peter Zijlstra (Intel)
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ kernel/sched/core.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -507,8 +507,7 @@ void resched_cpu(int cpu)
+     struct rq *rq = cpu_rq(cpu);
+     unsigned long flags;
+ 
+-    if (!raw_spin_trylock_irqsave(&rq->lock, flags))
+-        return;
++    raw_spin_lock_irqsave(&rq->lock, flags);
+     resched_curr(rq);
+     raw_spin_unlock_irqrestore(&rq->lock, flags);
+ }
diff --git a/queue-4.9/series b/queue-4.9/series
index e8c2d37fdba..5da297831a6 100644
--- a/queue-4.9/series
+++ b/queue-4.9/series
@@ -6,3 +6,5 @@ acpi-ec-fix-regression-related-to-triggering-source-of-ec-event-handling.patch
 x86-mm-fix-use-after-free-of-vma-during-userfaultfd-fault.patch
 ipv6-only-call-ip6_route_dev_notify-once-for-netdev_unregister.patch
 vsock-use-new-wait-api-for-vsock_stream_sendmsg.patch
+sched-make-resched_cpu-unconditional.patch
+lib-mpi-call-cond_resched-from-mpi_powm-loop.patch