From: Eric Dumazet
Date: Mon, 29 Sep 2025 18:21:12 +0000 (+0000)
Subject: Revert "net: group sk_backlog and sk_receive_queue"
X-Git-Tag: v6.18-rc1~132^2~22
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=7d452516b67add4a53e63bfa496d8df930a66b9a;p=thirdparty%2Flinux.git

Revert "net: group sk_backlog and sk_receive_queue"

This reverts commit 4effb335b5dab08cb6e2c38d038910f8b527cfc9.

This was a benefit for the UDP flood case, which was later greatly
improved with commits 6471658dc66c ("udp: use skb_attempt_defer_free()")
and b650bf0977d3 ("udp: remove busylock and add per NUMA queues").

Apparently the blamed commit added a regression for RAW sockets,
possibly because they do not use the dual RX queue strategy that
UDP has.

sock_queue_rcv_skb_reason() and RAW recvmsg() compete for
sk_receive_queue and sk_rmem_alloc changes, and having them in the
same cache line reduces performance.

Fixes: 4effb335b5da ("net: group sk_backlog and sk_receive_queue")
Reported-by: kernel test robot
Closes: https://lore.kernel.org/oe-lkp/202509281326.f605b4eb-lkp@intel.com
Signed-off-by: Eric Dumazet
Cc: Willem de Bruijn
Cc: David Ahern
Cc: Kuniyuki Iwashima
Link: https://patch.msgid.link/20250929182112.824154-1-edumazet@google.com
Signed-off-by: Jakub Kicinski
---

diff --git a/include/net/sock.h b/include/net/sock.h
index 8c5b64f41ab72..60bcb13f045c3 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -395,6 +395,7 @@ struct sock {
 
 	atomic_t		sk_drops;
 	__s32			sk_peek_off;
+	struct sk_buff_head	sk_error_queue;
 	struct sk_buff_head	sk_receive_queue;
 	/*
 	 * The backlog queue is special, it is always used with
@@ -412,7 +413,6 @@ struct sock {
 	} sk_backlog;
 #define sk_rmem_alloc sk_backlog.rmem_alloc
 
-	struct sk_buff_head	sk_error_queue;
 	__cacheline_group_end(sock_write_rx);
 
 	__cacheline_group_begin(sock_read_rx);
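
The changelog attributes the RAW socket regression to producer and consumer
paths touching fields that share a cache line. Below is a minimal user-space
sketch of that effect, not part of the patch and not kernel code: every name
in it is invented for illustration, and it assumes 64-byte cache lines. Two
threads increment counters that either sit on the same line or on separate
lines; the shared-line run is typically much slower because the line bounces
between cores, which is the same kind of ping-pong described above for
sk_receive_queue and sk_rmem_alloc.

/*
 * Hypothetical demo (user space, not kernel code).
 * Build: gcc -O2 -pthread cacheline_demo.c -o cacheline_demo
 */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL

struct shared_line {
	_Alignas(64) atomic_long a;	/* both counters share one 64-byte line */
	atomic_long b;
};

struct split_lines {
	_Alignas(64) atomic_long a;	/* each counter gets its own line */
	_Alignas(64) atomic_long b;
};

/* Thread body: hammer one counter with relaxed atomic increments. */
static void *bump(void *arg)
{
	atomic_long *ctr = arg;

	for (unsigned long i = 0; i < ITERS; i++)
		atomic_fetch_add_explicit(ctr, 1, memory_order_relaxed);
	return NULL;
}

/* Run two threads, one per counter, and return elapsed seconds. */
static double run_pair(atomic_long *a, atomic_long *b)
{
	struct timespec t0, t1;
	pthread_t p, c;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	pthread_create(&p, NULL, bump, a);
	pthread_create(&c, NULL, bump, b);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
	static struct shared_line shared;
	static struct split_lines split;

	printf("same cache line      : %.2f s\n",
	       run_pair(&shared.a, &shared.b));
	printf("separate cache lines : %.2f s\n",
	       run_pair(&split.a, &split.b));
	return 0;
}

On a typical multi-core machine the shared-line run is expected to be several
times slower than the split-line run; exact numbers depend on the CPU and on
which cores the threads land.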