commit    acc55cf5339a99389669139f83463ada2b1a6559
tree      8d5909bcb5d6f327ba50c699eb1b1a5ee2c0b7b2
parent    5637fc0d84fc4dd499b4f59ea2390175ca154906
author    Peter Zijlstra <peterz@infradead.org>
          Fri, 20 May 2016 16:04:36 +0000 (18:04 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Wed, 1 Jun 2016 19:17:02 +0000 (12:17 -0700)
locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()

commit 54cf809b9512be95f53ed4a5e3b631d1ac42f0fa upstream.

Similar to commits:

  51d7d5205d33 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
  d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")

qspinlock suffers from the fact that the _Q_LOCKED_VAL store is
unordered inside the ACQUIRE of the lock.

And while this is not a problem for the regular mutually exclusive
critical section usage of spinlocks, it breaks creative locking like:

	CPU0				CPU1

	spin_lock(A)			spin_lock(B)
	spin_unlock_wait(B)		if (!spin_is_locked(A))
	do_something()			  do_something()

Here both CPUs can end up running do_something() at the same time,
because the _Q_LOCKED_VAL store can drop past the spin_unlock_wait() /
spin_is_locked() loads (even on x86!!).
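
Concretely this is the classic store-buffer pattern. A minimal sketch
(a_locked/b_locked, cpu0()/cpu1() and r0/r1 are illustrative names,
not the actual qspinlock fields):

	/* Shared state, initially zero; stand-ins for locks A and B. */
	static int a_locked, b_locked;
	static int r0, r1;

	/* CPU0: spin_lock(A); spin_unlock_wait(B); */
	static void cpu0(void)
	{
		WRITE_ONCE(a_locked, 1);	/* unordered _Q_LOCKED_VAL store */
		r0 = READ_ONCE(b_locked);	/* spin_unlock_wait(B)'s load    */
	}

	/* CPU1: spin_lock(B); if (!spin_is_locked(A)) do_something(); */
	static void cpu1(void)
	{
		WRITE_ONCE(b_locked, 1);	/* unordered _Q_LOCKED_VAL store */
		r1 = READ_ONCE(a_locked);	/* spin_is_locked(A)'s load      */
	}

	/*
	 * Absent a full barrier between each store and the following
	 * load, the outcome r0 == 0 && r1 == 0 is permitted: each CPU's
	 * store can still sit in its store buffer when the other CPU
	 * loads, so both see the other lock as free and both end up
	 * running do_something().
	 */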

To avoid making the normal case slower, add the required smp_mb()s on
the less used spin_unlock_wait() / spin_is_locked() side of things.
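
The shape of the change to include/asm-generic/qspinlock.h is roughly
the following (a sketch, not the verbatim patch; the in-tree comments
explain the barrier pairing in more detail):

	static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
	{
		/*
		 * Order this CPU's own (possibly still buffered)
		 * _Q_LOCKED_VAL store before sampling the other lock.
		 */
		smp_mb();
		return atomic_read(&lock->val);
	}

	static inline void queued_spin_unlock_wait(struct qspinlock *lock)
	{
		/* See queued_spin_is_locked(). */
		smp_mb();
		while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
			cpu_relax();
	}

The barrier sits on the spin_is_locked() / spin_unlock_wait() side
only, so the lock/unlock fast paths are unaffected.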

Reported-and-tested-by: Davidlohr Bueso <dave@stgolabs.net>
Reported-by: Giovanni Gherdovich <ggherdovich@suse.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
include/asm-generic/qspinlock.h