From: Bernd Edlinger
Date: Mon, 24 Feb 2025 06:51:16 +0000 (+0100)
Subject: Revert wrong macos RCU fix
X-Git-Tag: openssl-3.5.0-alpha1~91
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=a6f512a1e6c1c2e3e1efaad51a6fcf65f260bbb1;p=thirdparty%2Fopenssl.git

Revert wrong macos RCU fix

This reverts #23974 which seems to be no longer needed now,
due to other fixes nearby. Most likely the change did just
slightly decrease the performance of the reader threads,
and did therefore create the wrong impression that it
fixed the issue.

Reviewed-by: Paul Dale
Reviewed-by: Tomas Mraz
(Merged from https://github.com/openssl/openssl/pull/26881)
---

diff --git a/crypto/threads_pthread.c b/crypto/threads_pthread.c
index a36900c796e..71a489b271d 100644
--- a/crypto/threads_pthread.c
+++ b/crypto/threads_pthread.c
@@ -93,38 +93,7 @@ typedef void *pvoid;
 
 # if defined(__GNUC__) && defined(__ATOMIC_ACQUIRE) && !defined(BROKEN_CLANG_ATOMICS) \
     && !defined(USE_ATOMIC_FALLBACKS)
-#  if defined(__APPLE__) && defined(__clang__) && defined(__aarch64__) && defined(__LP64__)
-/*
- * For pointers, Apple M1 virtualized cpu seems to have some problem using the
- * ldapr instruction (see https://github.com/openssl/openssl/pull/23974)
- * When using the native apple clang compiler, this instruction is emitted for
- * atomic loads, which is bad. So, if
- * 1) We are building on a target that defines __APPLE__ AND
- * 2) We are building on a target using clang (__clang__) AND
- * 3) We are building for an M1 processor (__aarch64__) AND
- * 4) We are building with 64 bit pointers
- * Then we should not use __atomic_load_n and instead implement our own
- * function to issue the ldar instruction instead, which produces the proper
- * sequencing guarantees
- */
-static inline void *apple_atomic_load_n_pvoid(void **p,
-                                              ossl_unused int memorder)
-{
-    void *ret;
-
-    __asm volatile("ldar %0, [%1]" : "=r" (ret): "r" (p):);
-
-    return ret;
-}
-
-/* For uint64_t, we should be fine, though */
-#   define apple_atomic_load_n_uint32_t(p, o) __atomic_load_n(p, o)
-#   define apple_atomic_load_n_uint64_t(p, o) __atomic_load_n(p, o)
-
-#   define ATOMIC_LOAD_N(t, p, o) apple_atomic_load_n_##t(p, o)
-#  else
-#   define ATOMIC_LOAD_N(t, p, o) __atomic_load_n(p, o)
-#  endif
+#  define ATOMIC_LOAD_N(t, p, o) __atomic_load_n(p, o)
 #  define ATOMIC_STORE_N(t, p, v, o) __atomic_store_n(p, v, o)
 #  define ATOMIC_STORE(t, p, v, o) __atomic_store(p, v, o)
 #  define ATOMIC_ADD_FETCH(p, v, o) __atomic_add_fetch(p, v, o)