arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()
author		Will Deacon <will.deacon@arm.com>
		Mon, 5 Sep 2016 10:56:05 +0000 (11:56 +0100)
committer	Willy Tarreau <w@1wt.eu>
		Mon, 6 Feb 2017 22:32:54 +0000 (23:32 +0100)
commit		d65df5171a258ac766fa959cf453c6c7b9851cf3
tree		be03113208020b891f65167f2cd25a5396d706bc
parent		1fd5c7b6545fc2f0dca50d302ac998e668ee2fde
arm64: spinlocks: implement smp_mb__before_spinlock() as smp_mb()

commit 872c63fbf9e153146b07f0cece4da0d70b283eeb upstream.

smp_mb__before_spinlock() is intended to upgrade a spin_lock() operation
to a full barrier, such that prior stores are ordered with respect to
loads and stores occurring inside the critical section.
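
For illustration, the intended usage pattern looks roughly like the
sketch below; the names shared_flag, my_lock and do_locked_work() are
made up for this example and do not come from the patch:

	WRITE_ONCE(shared_flag, 1);	/* store made before taking the lock  */
	smp_mb__before_spinlock();	/* upgrade the following spin_lock()  */
					/* so the store above is ordered      */
					/* against the critical section       */
	spin_lock(&my_lock);
	do_locked_work();		/* loads/stores here must observe the */
					/* prior store on other CPUs          */
	spin_unlock(&my_lock);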

Unfortunately, the core code defines the barrier as smp_wmb(), which
is insufficient to provide the required ordering guarantees when used in
conjunction with our load-acquire-based spinlock implementation.
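
For reference, the generic fallback in the core code is along these
lines (a sketch, assuming the usual #ifndef guard that lets an
architecture supply its own definition):

	#ifndef smp_mb__before_spinlock
	#define smp_mb__before_spinlock()	smp_wmb()
	#endif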

This patch overrides the arm64 definition of smp_mb__before_spinlock()
to map to a full smp_mb().
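
The override itself amounts to a one-line macro in
arch/arm64/include/asm/spinlock.h, roughly as sketched below; the
explanatory comment is paraphrased rather than copied from the
upstream patch:

	/*
	 * arm64 spin_lock() is built on a load-acquire, which only orders
	 * the lock acquisition against later accesses.  A full barrier is
	 * needed so that stores issued before the lock are also ordered
	 * against the critical section.
	 */
	#define smp_mb__before_spinlock()	smp_mb()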

Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
arch/arm64/include/asm/spinlock.h