xdp, libeth: make the xdp_init_buff() micro-optimization generic
author    Alexander Lobakin <aleksander.lobakin@intel.com>
          Tue, 26 Aug 2025 15:54:55 +0000 (17:54 +0200)
committer Tony Nguyen <anthony.l.nguyen@intel.com>
          Mon, 8 Sep 2025 17:26:25 +0000 (10:26 -0700)
commit    17d370a70bae277678b6ea82d71ef5892e7aaa97
tree      0110bb6e5ef7ee62b15b2e8935e71a5a1baadc28
parent    c6142e1913de563ab772f7b0e4ae78d6de9cc5b1

Oftentimes, compilers are unable to merge two consecutive 32-bit
writes into a single 64-bit write on architectures where that is
possible. This applies to xdp_init_buff(), which is called for every
received frame (or at least once per 64 frames when the frag size is
fixed).
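
As a self-contained illustration of the pattern (plain userspace C;
the type and function names here are invented for the example, not
taken from the kernel sources):

/* Two adjacent 32-bit fields, mirroring ::frame_sz and ::flags. */
#include <stdint.h>

struct buf {
	uint32_t frame_sz;
	uint32_t flags;
};

void buf_init(struct buf *b, uint32_t frame_sz)
{
	/* Two separate 32-bit stores; the compiler often emits them
	 * as-is instead of fusing them into one 64-bit store.
	 */
	b->frame_sz = frame_sz;
	b->flags = 0;
}

On x86_64, for instance, this may compile to two 4-byte mov
instructions where a single 8-byte mov would suffice.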
Move the not-so-pretty hack from libeth_xdp straight into
xdp_init_buff(), this time using a proper union around ::frame_sz and
::flags.
The optimization is limited to LE architectures due to the structure
layout: only on little-endian does ::frame_sz occupy the low 32 bits
of the combined 64-bit word, so that a single store writes the frame
size and zeroes the flags at once.
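
A hedged sketch of the resulting layout and helper, written as
standalone C; apart from ::frame_sz and ::flags, the names (the union
alias, the struct, the helper) are assumptions for illustration, not
necessarily the exact ones from the patch in include/net/xdp.h:

/* Illustrative only; not the kernel's actual declarations. */
#include <stdint.h>

struct xdp_buff_like {
	/* ... preceding fields elided ... */
	union {
		struct {
			uint32_t frame_sz;
			uint32_t flags;
		};
		/* Assumed alias field: one 64-bit store initializes
		 * both ::frame_sz and ::flags.
		 */
		uint64_t frame_sz_flags_init;
	};
};

static inline void init_buff(struct xdp_buff_like *xdp,
			     uint32_t frame_sz)
{
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	/* On LE, frame_sz lands in the low 32 bits and the high
	 * 32 bits (::flags) become zero in a single 64-bit store.
	 */
	xdp->frame_sz_flags_init = frame_sz;
#else
	/* On BE the halves would be swapped, so fall back to two
	 * plain 32-bit stores.
	 */
	xdp->frame_sz = frame_sz;
	xdp->flags = 0;
#endif
}

The union does not change the size or the offsets of ::frame_sz and
::flags, so existing readers of the two fields are unaffected.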

One simple example from idpf with the XDP series applied (Clang 22-git,
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE => -O2):

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-27 (-27)
Function                                     old     new   delta
idpf_vport_splitq_napi_poll                 5076    5049     -27

The perf difference with XDP_DROP is around +0.8-1%, which I see as
more than satisfying.

Suggested-by: Simon Horman <horms@kernel.org>
Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Tested-by: Ramu R <ramu.r@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
include/net/libeth/xdp.h
include/net/xdp.h