xsk: move cq_cached_prod_lock to avoid touching a cacheline in sending path
author Jason Xing <kernelxing@tencent.com>
Sun, 4 Jan 2026 01:21:25 +0000 (09:21 +0800)
committer Paolo Abeni <pabeni@redhat.com>
Thu, 15 Jan 2026 09:07:45 +0000 (10:07 +0100)
commit a2cb2e23b2bcc5e376a7aa63964e04a5b059d7a1
tree 858293f26809e20f182270f851898b025ccb7746
parent cee715d907d0f93411542f19a4eb9161450e782b
xsk: move cq_cached_prod_lock to avoid touching a cacheline in sending path

We (Paolo and I) noticed that touching an extra cacheline in the sending
path because of cq_cached_prod_lock hurts performance. After moving the
lock from struct xsk_buff_pool to struct xsk_queue, performance improves
by ~5%, as measured with xdpsock.
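To illustrate the idea, here is a rough userspace sketch of the layout
change; apart from cq_cached_prod_lock, the struct and field names are
placeholders for illustration, not the real kernel layout:

	#include <pthread.h>
	#include <stdint.h>

	/* Before: the lock lives in the shared pool, on a cacheline the
	 * sending path otherwise has no reason to touch.
	 */
	struct pool_before {
		/* ... other pool state ... */
		pthread_spinlock_t cq_cached_prod_lock;
	};

	/* After: the lock sits next to the producer state of the
	 * completion queue that the sending path already updates, so no
	 * extra cacheline is pulled in.
	 */
	struct queue_after {
		uint32_t cached_prod;                   /* cached producer index */
		pthread_spinlock_t cq_cached_prod_lock; /* shares the hot line above */
		/* ... ring pointers, sizes, ... */
	};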

An alternative approach [1] would be to use atomic_try_cmpxchg() to
achieve the same effect. Unfortunately I don't have conclusive
performance numbers showing that the atomic approach beats the current
patch. Its advantage is reducing contention among multiple xsks sharing
the same pool; its disadvantage is that it is harder to maintain. The
full discussion can be found at the link below.

[1]: https://lore.kernel.org/all/20251128134601.54678-1-kerneljasonxing@gmail.com/
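As a rough sketch of what the lock-free alternative would look like,
with userspace C11 atomics standing in for the kernel's
atomic_try_cmpxchg(); the names and the fullness check are illustrative
only and not taken from the posted patch:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdint.h>

	struct cq_sketch {
		_Atomic uint32_t cached_prod;   /* shared cached producer index */
		uint32_t cached_cons;           /* consumer index (sketch only) */
		uint32_t size;                  /* ring size */
	};

	/* Reserve one completion-queue slot without a lock: retry a
	 * compare-and-swap on cached_prod until it succeeds or the ring
	 * turns out to be full.
	 */
	static bool cq_reserve_one(struct cq_sketch *q)
	{
		uint32_t old = atomic_load(&q->cached_prod);

		do {
			if (q->size - (old - q->cached_cons) == 0)
				return false;   /* no free entries */
		} while (!atomic_compare_exchange_weak(&q->cached_prod,
						       &old, old + 1));
		return true;
	}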

Suggested-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Jason Xing <kernelxing@tencent.com>
Link: https://patch.msgid.link/20260104012125.44003-3-kerneljasonxing@gmail.com
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
include/net/xsk_buff_pool.h
net/xdp/xsk.c
net/xdp/xsk_buff_pool.c
net/xdp/xsk_queue.h