--- /dev/null
+From f9bc9bbe8afdf83412728f0b464979a72a3b9ec2 Mon Sep 17 00:00:00 2001
+From: Nicholas Piggin <npiggin@gmail.com>
+Date: Mon, 16 Oct 2023 22:43:00 +1000
+Subject: powerpc/qspinlock: Fix stale propagated yield_cpu
+
+From: Nicholas Piggin <npiggin@gmail.com>
+
+commit f9bc9bbe8afdf83412728f0b464979a72a3b9ec2 upstream.
+
+yield_cpu is a sample of a preempted lock holder that gets propagated
+back through the queue. Queued waiters use this to yield to the
+preempted lock holder without continually sampling the lock word (which
+would defeat the purpose of MCS queueing by bouncing the cache line).
+
+The problem is that yield_cpu can become stale. It can take some time to
+be passed down the chain, and if any queued waiter gets preempted then
+it will cease to propagate the yield_cpu to later waiters.
+
+This can result in yielding to a CPU that no longer holds the lock,
+which is bad enough on its own. Worse, if that CPU is currently in
+H_CEDE (idle), it appears to be preempted, and some hypervisors
+(PowerVM) can impose very long H_CONFER latencies while waiting for
+the H_CEDE wakeup. This results in latency spikes and hard lockups on
+oversubscribed partitions with lock contention.
+
+This is a minimal fix. Before yielding to yield_cpu, sample the lock
+word to confirm yield_cpu is still the owner, and bail out if it is not.
+
+Thanks to a bunch of people who reported this and tracked down the
+exact problem using tracepoints and dispatch trace logs.
+
+Fixes: 28db61e207ea ("powerpc/qspinlock: allow propagation of yield CPU down the queue")
+Cc: stable@vger.kernel.org # v6.2+
+Reported-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
+Reported-by: Laurent Dufour <ldufour@linux.ibm.com>
+Reported-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
+Debugged-by: "Nysal Jan K.A" <nysal@linux.ibm.com>
+Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
+Tested-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://msgid.link/20231016124305.139923-2-npiggin@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/lib/qspinlock.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/arch/powerpc/lib/qspinlock.c b/arch/powerpc/lib/qspinlock.c
+index 253620979d0c..6dd2f46bd3ef 100644
+--- a/arch/powerpc/lib/qspinlock.c
++++ b/arch/powerpc/lib/qspinlock.c
+@@ -406,6 +406,9 @@ static __always_inline bool yield_to_prev(struct qspinlock *lock, struct qnode *
+ if ((yield_count & 1) == 0)
+ goto yield_prev; /* owner vcpu is running */
+
++ if (get_owner_cpu(READ_ONCE(lock->val)) != yield_cpu)
++ goto yield_prev; /* re-sample lock owner */
++
+ spin_end();
+
+ preempted = true;
+--
+2.42.0
+