From: Jesper Dangaard Brouer
Date: Wed, 19 Nov 2025 16:28:36 +0000 (+0100)
Subject: veth: reduce XDP no_direct return section to fix race
X-Git-Tag: v6.18~19^2~26
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=a14602fcae17a3f1cb8a8521bedf31728f9e7e39;p=thirdparty%2Flinux.git

veth: reduce XDP no_direct return section to fix race

As explained in commit fa349e396e48 ("veth: Fix race with AF_XDP exposing
old or uninitialized descriptors"), for veth there is a chance that, after
napi_complete_done(), another CPU can manage to start another NAPI
instance running veth_poll(). For NAPI itself this is handled correctly,
as the napi_schedule_prep() check prevents multiple instances from
getting scheduled, but the remaining code in veth_poll() can run
concurrently with the newly started NAPI instance.

The problem/race is that xdp_clear_return_frame_no_direct() isn't
designed to be nested. Prior to commit 401cb7dae813 ("net: Reference
bpf_redirect_info via task_struct on PREEMPT_RT.") the temporary BPF net
context bpf_redirect_info was stored per CPU, where this wasn't an issue.
Since that commit the BPF context is stored in the 'current' task_struct.
When running veth in threaded-NAPI mode, the kthread becomes the storage
area. Now a race exists between two concurrent veth_poll() function
calls, one exiting NAPI and one running a new NAPI instance, both using
the same BPF net context.

The race occurs when another CPU gets inside the
xdp_set_return_frame_no_direct() section before the exiting veth_poll()
has called the clear function xdp_clear_return_frame_no_direct().

Fixes: 401cb7dae8130 ("net: Reference bpf_redirect_info via task_struct on PREEMPT_RT.")
Signed-off-by: Jesper Dangaard Brouer
Link: https://patch.msgid.link/176356963888.337072.4805242001928705046.stgit@firesoul
Signed-off-by: Jakub Kicinski
---

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 35dd89aff4a9..cc502bf022d5 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -975,6 +975,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
 	if (stats.xdp_redirect > 0)
 		xdp_do_flush();
 
+	if (stats.xdp_tx > 0)
+		veth_xdp_flush(rq, &bq);
+	xdp_clear_return_frame_no_direct();
 	if (done < budget && napi_complete_done(napi, done)) {
 		/* Write rx_notify_masked before reading ptr_ring */
 		smp_store_mb(rq->rx_notify_masked, false);
@@ -987,10 +990,6 @@ static int veth_poll(struct napi_struct *napi, int budget)
 		}
 	}
 
-	if (stats.xdp_tx > 0)
-		veth_xdp_flush(rq, &bq);
-	xdp_clear_return_frame_no_direct();
-
 	/* Release backpressure per NAPI poll */
 	smp_rmb(); /* Paired with netif_tx_stop_queue set_bit */
 	if (peer_txq && netif_tx_queue_stopped(peer_txq)) {
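
To make the ordering argument concrete, below is a minimal, hypothetical
userspace sketch (plain C with pthreads, not kernel code and not part of
the patch) of why a non-nestable set/clear flag must be cleared before
the point at which a new poll instance may start. The names poller_exit,
poller_new, no_direct and napi_completed are illustrative stand-ins for
the exiting and newly scheduled veth_poll() calls, the no_direct flag in
the shared per-task BPF net context, and napi_complete_done(); the
usleep() calls only make the bad interleaving likely, they are not a
guaranteed reproducer.

	/* Hypothetical model of the race: a shared, non-nestable flag
	 * cleared *after* the completion point can be clobbered by the
	 * next instance that has already set it again.
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdio.h>
	#include <unistd.h>

	static bool no_direct;          /* stands in for the per-task no_direct flag */
	static bool napi_completed;     /* stands in for napi_complete_done() having run */
	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

	/* Exiting poll instance with the *old* ordering: it announces
	 * completion (after which a new instance may start) and only then
	 * clears the flag.
	 */
	static void *poller_exit(void *arg)
	{
		(void)arg;
		no_direct = true;               /* set_return_frame_no_direct */

		pthread_mutex_lock(&lock);      /* completion: new instance may start */
		napi_completed = true;
		pthread_cond_signal(&cond);
		pthread_mutex_unlock(&lock);

		usleep(1000);                   /* remaining work after completion */
		no_direct = false;              /* late clear_return_frame_no_direct */
		return NULL;
	}

	/* Newly scheduled poll instance: it sets the flag for its own
	 * no_direct section and expects it to stay set until it clears it.
	 */
	static void *poller_new(void *arg)
	{
		(void)arg;
		pthread_mutex_lock(&lock);
		while (!napi_completed)
			pthread_cond_wait(&cond, &lock);
		pthread_mutex_unlock(&lock);

		no_direct = true;               /* set_return_frame_no_direct */
		usleep(2000);                   /* XDP processing in progress */
		if (!no_direct)                 /* clobbered by the exiting instance */
			puts("race: no_direct cleared underneath the running poll");
		no_direct = false;              /* clear_return_frame_no_direct */
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, poller_exit, NULL);
		pthread_create(&b, NULL, poller_new, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}

With the ordering the patch introduces, the clear moves before the
completion point, so the window in which a second instance's flag can be
clobbered no longer exists.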