From: H.J. Lu
Date: Wed, 3 Nov 2021 01:33:07 +0000 (-0700)
Subject: Add LLL_MUTEX_READ_LOCK [BZ #28537]
X-Git-Tag: glibc-2.35~332
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=d672a98a1af106bd68deb15576710cd61363f7a6;p=thirdparty%2Fglibc.git

Add LLL_MUTEX_READ_LOCK [BZ #28537]

The CAS instruction is expensive.  From the x86 CPU's point of view,
getting a cache line for writing is more expensive than getting it for
reading.  See Appendix A.2 Spinlock in:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

A full compare-and-swap grabs the cache line in exclusive state and
causes excessive cache line bouncing.

Add LLL_MUTEX_READ_LOCK to do an atomic load and skip the CAS in the
spinlock loop when the compare would fail, to reduce cache line
bouncing on contended locks.

Reviewed-by: Szabolcs Nagy
---

diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
index 59a28cff1aa..762059b230b 100644
--- a/nptl/pthread_mutex_lock.c
+++ b/nptl/pthread_mutex_lock.c
@@ -64,6 +64,11 @@ lll_mutex_lock_optimized (pthread_mutex_t *mutex)
 # define PTHREAD_MUTEX_VERSIONS 1
 #endif
 
+#ifndef LLL_MUTEX_READ_LOCK
+# define LLL_MUTEX_READ_LOCK(mutex) \
+  atomic_load_relaxed (&(mutex)->__data.__lock)
+#endif
+
 static int __pthread_mutex_lock_full (pthread_mutex_t *mutex)
      __attribute_noinline__;
 
@@ -141,6 +146,8 @@ PTHREAD_MUTEX_LOCK (pthread_mutex_t *mutex)
 	      break;
 	    }
 	  atomic_spin_nop ();
+	  if (LLL_MUTEX_READ_LOCK (mutex) != 0)
+	    continue;
 	}
       while (LLL_MUTEX_TRYLOCK (mutex) != 0);
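
For background, the change applies the classic test-and-test-and-set
pattern: spin on a plain load (which keeps the cache line in shared
state) and attempt the CAS only once the lock word reads as free.  A
minimal standalone sketch of that pattern in C11 (not glibc code; the
names tats_lock and tats_unlock and the use of <stdatomic.h> are
illustrative assumptions):

#include <stdatomic.h>

static atomic_int lock_word;	/* 0 = unlocked, 1 = locked.  */

static void
tats_lock (void)
{
  int expected = 0;
  /* Each failed CAS still requests the cache line in exclusive
     state, so a pure CAS loop bounces the line between waiters.  */
  while (!atomic_compare_exchange_weak_explicit (&lock_word, &expected,
						 1, memory_order_acquire,
						 memory_order_relaxed))
    {
      /* Spin on a relaxed load instead: the line stays shared, and
	 the CAS is retried only once the lock looks free.  */
      while (atomic_load_explicit (&lock_word, memory_order_relaxed) != 0)
	;
      expected = 0;	/* A failed CAS overwrote the expected value.  */
    }
}

static void
tats_unlock (void)
{
  atomic_store_explicit (&lock_word, 0, memory_order_release);
}

In the patch itself, LLL_MUTEX_READ_LOCK supplies the relaxed load and
LLL_MUTEX_TRYLOCK the CAS.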