From: Greg Kroah-Hartman
Date: Tue, 29 Apr 2025 12:52:44 +0000 (+0200)
Subject: 6.6-stable patches
X-Git-Tag: v5.4.293~33
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=d64a4d333a55d8f069dd880d0e9b008f6178744d;p=thirdparty%2Fkernel%2Fstable-queue.git

6.6-stable patches

added patches:
	vmxnet3-fix-malformed-packet-sizing-in-vmxnet3_process_xdp.patch
---

diff --git a/queue-6.6/series b/queue-6.6/series
index 44dcf91475..98b1259479 100644
--- a/queue-6.6/series
+++ b/queue-6.6/series
@@ -190,3 +190,4 @@ x86-pvh-call-c-code-via-the-kernel-virtual-mapping.patch
 revert-drivers-core-synchronize-really_probe-and-dev_uevent.patch
 driver-core-introduce-device_set_driver-helper.patch
 driver-core-fix-potential-null-pointer-dereference-in-dev_uevent.patch
+vmxnet3-fix-malformed-packet-sizing-in-vmxnet3_process_xdp.patch
diff --git a/queue-6.6/vmxnet3-fix-malformed-packet-sizing-in-vmxnet3_process_xdp.patch b/queue-6.6/vmxnet3-fix-malformed-packet-sizing-in-vmxnet3_process_xdp.patch
new file mode 100644
index 0000000000..12a743fdfa
--- /dev/null
+++ b/queue-6.6/vmxnet3-fix-malformed-packet-sizing-in-vmxnet3_process_xdp.patch
@@ -0,0 +1,69 @@
+From 4c2227656d9003f4d77afc76f34dd81b95e4c2c4 Mon Sep 17 00:00:00 2001
+From: Daniel Borkmann
+Date: Wed, 23 Apr 2025 15:36:00 +0200
+Subject: vmxnet3: Fix malformed packet sizing in vmxnet3_process_xdp
+
+From: Daniel Borkmann
+
+commit 4c2227656d9003f4d77afc76f34dd81b95e4c2c4 upstream.
+
+vmxnet3 driver's XDP handling is buggy for packet sizes using ring0 (that
+is, packet sizes between 128 and 3K bytes).
+
+We noticed MTU-related connectivity issues with Cilium's service load-
+balancing in case of vmxnet3 as NIC underneath. A simple curl to an HTTP
+backend service where the XDP LB was doing IPIP encap led to overly large
+packet sizes, but only for *some* of the packets (e.g. the HTTP GET request)
+while others (e.g. the prior TCP 3WHS) looked completely fine on the wire.
+
+In fact, the pcap recording on the backend node revealed that the
+node with the XDP LB was leaking uninitialized kernel data onto the wire
+for the affected packets: for example, while the packets should have been
+152 bytes, their actual size was 1482 bytes, so the remainder after 152
+bytes was padded with whatever other data was in that page at the time
+(e.g. we saw user/payload data from prior processed packets).
+
+We only noticed this through an MTU issue: when the XDP LB node and
+the backend node both had the same MTU (e.g. 1500), the curl request
+got dropped on the backend node's NIC given the packet was too large, even
+though the IPIP-encapped packet normally would never even come close to
+the MTU limit. Lowering the MTU on the XDP LB (e.g. to 1480) allowed the
+curl request to succeed (which also indicates that the kernel ignored the
+padding, and thus the issue wasn't very user-visible).
+
+Commit e127ce7699c1 ("vmxnet3: Fix missing reserved tailroom") was too eager
+to also switch xdp_prepare_buff() from rcd->len to rbi->len. It really needs
+to stick to rcd->len, which is the actual packet length from the descriptor.
+The latter we also feed into vmxnet3_process_xdp_small(), by the way, and
+it indicates the correct length needed to initialize the xdp->{data,data_end}
+parts. For e127ce7699c1 ("vmxnet3: Fix missing reserved tailroom") the
+relevant part was adapting xdp_init_buff() to address the warning, given
+xdp_data_hard_end() depends on xdp->frame_sz. With that fixed, traffic on
+the wire looks good again.
+
+Fixes: e127ce7699c1 ("vmxnet3: Fix missing reserved tailroom")
+Signed-off-by: Daniel Borkmann
+Tested-by: Andrew Sauber
+Cc: Anton Protopopov
+Cc: William Tu
+Cc: Martin Zaharinov
+Cc: Ronak Doshi
+Reviewed-by: Simon Horman
+Link: https://patch.msgid.link/20250423133600.176689-1-daniel@iogearbox.net
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/net/vmxnet3/vmxnet3_xdp.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
++++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
+@@ -397,7 +397,7 @@ vmxnet3_process_xdp(struct vmxnet3_adapt
+ 
+ 	xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
+ 	xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
+-			 rbi->len, false);
++			 rcd->len, false);
+ 	xdp_buff_clear_frags_flag(&xdp);
+ 
+ 	xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
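
For context beyond the one-line diff: rcd->len is the packet length the
device reports in the RX completion descriptor, while rbi->len is the
capacity of the ring0 receive buffer backing it (up to 3K). The userspace
C sketch below is purely illustrative and not driver code; page, rcd_len,
and rbi_len are stand-in names for the driver's fields. It shows why
deriving the XDP frame's data_end from the buffer capacity, rather than
from the descriptor length, puts stale page contents on the wire.

/*
 * Illustrative userspace sketch only; not the vmxnet3 driver. The names
 * page, rcd_len, and rbi_len are stand-ins for the driver's rcd/rbi fields.
 * Build: cc -Wall -o xdp_len_demo xdp_len_demo.c
 */
#include <stdio.h>
#include <string.h>

#define PAGE_SZ 4096

int main(void)
{
	static unsigned char page[PAGE_SZ];

	/* The recycled page still holds payload from an earlier packet. */
	memset(page, 'X', sizeof(page));

	/* A short packet arrives: the device writes only rcd_len bytes
	 * (the length from the RX descriptor); the tail of the page keeps
	 * its old contents. rbi_len is the buffer capacity, not the
	 * packet size. */
	const size_t rcd_len = 152;   /* actual packet length */
	const size_t rbi_len = 3072;  /* ring0 buffer capacity */
	memset(page, 'P', rcd_len);

	/* Buggy sizing: frame length taken from the buffer capacity. The
	 * bytes between rcd_len and rbi_len are stale data ('X') that a
	 * subsequent XDP_TX or redirect would transmit. */
	printf("sized by rbi_len: %zu bytes on the wire, %zu stale (tail byte '%c')\n",
	       rbi_len, rbi_len - rcd_len, page[rcd_len]);

	/* Correct sizing: frame length taken from the descriptor. */
	printf("sized by rcd_len: %zu bytes on the wire, no stale tail\n",
	       rcd_len);
	return 0;
}

Running it reports a 2920-byte stale tail for the capacity-based sizing,
the same class of uninitialized-data leak the pcap on the backend node
showed (152 bytes expected vs. 1482 observed); the exact byte counts here
are chosen for illustration.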