--- /dev/null
+From 1d9ddde12e3c9bab7f3d3484eb9446315e3571ca Mon Sep 17 00:00:00 2001
+From: Eric Biggers <ebiggers@google.com>
+Date: Tue, 7 Nov 2017 14:15:27 -0800
+Subject: lib/mpi: call cond_resched() from mpi_powm() loop
+
+From: Eric Biggers <ebiggers@google.com>
+
+commit 1d9ddde12e3c9bab7f3d3484eb9446315e3571ca upstream.
+
+On a non-preemptible kernel, if KEYCTL_DH_COMPUTE is called with the
+largest permitted inputs (16384 bits), the kernel spends 10+ seconds
+doing modular exponentiation in mpi_powm() without rescheduling. If all
+threads do it, it locks up the system. Moreover, it can trigger
+rcu_sched stall warnings.
+
+Notwithstanding the insanity of doing this calculation in kernel mode
+rather than in userspace, fix it by calling cond_resched() as each bit
+from the exponent is processed. It's still noninterruptible, but at
+least it's preemptible now.
+
+Do the cond_resched() once per bit rather than once per MPI limb because
+each limb might still easily take 100+ milliseconds on slow CPUs.
+
+Signed-off-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ lib/mpi/mpi-pow.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/lib/mpi/mpi-pow.c
++++ b/lib/mpi/mpi-pow.c
+@@ -26,6 +26,7 @@
+ * however I decided to publish this code under the plain GPL.
+ */
+
++#include <linux/sched.h>
+ #include <linux/string.h>
+ #include "mpi-internal.h"
+ #include "longlong.h"
+@@ -256,6 +257,7 @@ int mpi_powm(MPI res, MPI base, MPI exp,
+ }
+ e <<= 1;
+ c--;
++ cond_resched();
+ }
+
+ i--;
--- /dev/null
+From 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 Mon Sep 17 00:00:00 2001
+From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
+Date: Mon, 18 Sep 2017 08:54:40 -0700
+Subject: sched: Make resched_cpu() unconditional
+
+From: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+
+commit 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 upstream.
+
+The current implementation of synchronize_sched_expedited() incorrectly
+assumes that resched_cpu() is unconditional, which it is not. This means
+that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
+fails as follows (analysis by Neeraj Upadhyay):
+
+o CPU1 is waiting for expedited wait to complete:
+
+ sync_rcu_exp_select_cpus
+ rdp->exp_dynticks_snap & 0x1 // returns 1 for CPU5
+ IPI sent to CPU5
+
+ synchronize_sched_expedited_wait
+ ret = swait_event_timeout(rsp->expedited_wq,
+ sync_rcu_preempt_exp_done(rnp_root),
+ jiffies_stall);
+
+ expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())
+
+o CPU5 handles IPI and fails to acquire rq lock.
+
+ Handles IPI
+ sync_sched_exp_handler
+ resched_cpu
+      returns after raw_spin_trylock_irqsave() fails to acquire rq->lock
+ need_resched is not set
+
+o CPU5 calls rcu_idle_enter() and as need_resched is not set, goes to
+ idle (schedule() is not called).
+
+o CPU 1 reports RCU stall.
+
+Given that resched_cpu() is now used only by RCU, this commit fixes the
+assumption by making resched_cpu() unconditional.
+
+Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
+Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
+Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
+Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/sched/core.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -632,8 +632,7 @@ void resched_cpu(int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
+- if (!raw_spin_trylock_irqsave(&rq->lock, flags))
+- return;
++ raw_spin_lock_irqsave(&rq->lock, flags);
+ resched_curr(rq);
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+ }