From: Nimrod Oren
Date: Mon, 9 Mar 2026 08:13:01 +0000 (+0200)
Subject: net: page_pool: scale alloc cache with PAGE_SIZE
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=15abbe7c82661209c1dc67c21903c07e2fff5aae;p=thirdparty%2Flinux.git

net: page_pool: scale alloc cache with PAGE_SIZE

The current page_pool alloc-cache size and refill values were chosen to
match the NAPI budget and to leave headroom for XDP_DROP recycling.
These fixed values do not scale well with large pages, as they
significantly increase a given page_pool's memory footprint.

Scale these values to better balance memory footprint across page
sizes, while keeping behavior on 4KB-page systems unchanged.

Reviewed-by: Dragos Tatulea
Reviewed-by: Tariq Toukan
Signed-off-by: Nimrod Oren
Link: https://patch.msgid.link/20260309081301.103152-1-noren@nvidia.com
Signed-off-by: Jakub Kicinski
---

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index cdd95477af7a2..03da138722f58 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -44,6 +44,8 @@
  * use-case. The NAPI budget is 64 packets. After a NAPI poll the RX
  * ring is usually refilled and the max consumed elements will be 64,
  * thus a natural max size of objects needed in the cache.
+ * The refill watermark is set to 64 for 4KB pages,
+ * and scales to balance its size in bytes across page sizes.
  *
  * Keeping room for more objects, is due to XDP_DROP use-case. As
  * XDP_DROP allows the opportunity to recycle objects directly into
@@ -51,8 +53,15 @@
  * cache is already full (or partly full) then the XDP_DROP recycles
  * would have to take a slower code path.
  */
-#define PP_ALLOC_CACHE_SIZE	128
+#if PAGE_SIZE >= SZ_64K
+#define PP_ALLOC_CACHE_REFILL	4
+#elif PAGE_SIZE >= SZ_16K
+#define PP_ALLOC_CACHE_REFILL	16
+#else
 #define PP_ALLOC_CACHE_REFILL	64
+#endif
+
+#define PP_ALLOC_CACHE_SIZE	(PP_ALLOC_CACHE_REFILL * 2)
 
 struct pp_alloc_cache {
 	u32 count;
 	netmem_ref cache[PP_ALLOC_CACHE_SIZE];