net: use try_cmpxchg() in lock_sock_nested()
Add a fast path in lock_sock_nested(): when the socket is neither
owned nor contended, a single try_cmpxchg() sets @owned to one,
avoiding this spinlock round trip:
spin_lock_bh(&sk->sk_lock.slock);
if (unlikely(sock_owned_by_user_nocheck(sk)))
__lock_sock(sk);
sk->sk_lock.owned = 1;
spin_unlock_bh(&sk->sk_lock.slock);
On x86_64, the compiler generates something quite efficient:
00000000000077c0 <lock_sock_nested>:
77c0: f3 0f 1e fa endbr64
77c4: e8 00 00 00 00 call __fentry__
77c9: b9 01 00 00 00 mov $0x1,%ecx
77ce: 31 c0 xor %eax,%eax
77d0: f0 48 0f b1 8f 48 01 00 00 lock cmpxchg %rcx,0x148(%rdi)
77d9: 75 06 jne slow_path
77db: 2e e9 00 00 00 00 cs jmp __x86_return_thunk-0x4
slow_path: ...
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Link: https://patch.msgid.link/20260226021215.1764237-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>