From foo@baz Mon 16 Sep 2019 01:18:05 PM CEST
From: Shmulik Ladkani <shmulik@metanetworks.com>
Date: Fri, 6 Sep 2019 12:23:50 +0300
Subject: net: gso: Fix skb_segment splat when splitting gso_size mangled skb having linear-headed frag_list

From: Shmulik Ladkani <shmulik@metanetworks.com>

[ Upstream commit 3dcbdb134f329842a38f0e6797191b885ab00a00 ]

Historically, support for frag_list packets entering skb_segment() was
limited to frag_list members terminating on exact same gso_size
boundaries. This is verified with a BUG_ON since commit 89319d3801d1
("net: Add frag_list support to skb_segment"), quote:

    As such we require all frag_list members terminate on exact MSS
    boundaries.  This is checked using BUG_ON.
    As there should only be one producer in the kernel of such packets,
    namely GRO, this requirement should not be difficult to maintain.

However, since commit 6578171a7ff0 ("bpf: add bpf_skb_change_proto helper"),
the "exact MSS boundaries" assumption no longer holds:
An eBPF program using bpf_skb_change_proto() DOES modify 'gso_size', but
leaves the frag_list members as originally merged by GRO with the
original 'gso_size'. Examples of such programs are bpf-based NAT46 or
NAT64.

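As an illustration only (not part of this patch), a minimal tc/BPF sketch
of such a program could look as follows; the program name and structure
are hypothetical, the relevant part is the bpf_skb_change_proto() call and
its gso_size/SKB_GSO_DODGY side effect:

  /* Hypothetical NAT46-style sketch; compile with clang -target bpf. */
  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/pkt_cls.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("tc")
  int nat46_sketch(struct __sk_buff *skb)
  {
          /* Convert the skb from IPv4 to IPv6. On a GRO'ed skb the helper
           * adjusts gso_size for the 20-byte header delta and marks the
           * skb SKB_GSO_DODGY, while the frag_list members keep the
           * geometry GRO built with the original gso_size.
           */
          if (bpf_skb_change_proto(skb, bpf_htons(ETH_P_IPV6), 0))
                  return TC_ACT_SHOT;

          /* ... actual v4->v6 header rewrite omitted ... */

          return TC_ACT_OK;
  }

  char _license[] SEC("license") = "GPL";
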
This led to a kernel BUG_ON for flows involving:
 - GRO generating a frag_list skb
 - bpf program performing bpf_skb_change_proto() or bpf_skb_adjust_room()
 - skb_segment() of the skb

See example BUG_ON reports in [0].

In commit 13acc94eff12 ("net: permit skb_segment on head_frag frag_list skb"),
skb_segment() was modified to support the "gso_size mangling" case of
a frag_list GRO'ed skb, but *only* for frag_list members having
head_frag==true (having a page-fragment head).

Alas, GRO packets having frag_list members with a linear kmalloced head
(head_frag==false) still hit the BUG_ON.

This commit adds support to skb_segment() for a 'head_skb' packet having
a frag_list whose members are *non* head_frag, with gso_size mangled, by
disabling SG and thus falling back to copying the data from the given
'head_skb' into the generated segmented skbs - as suggested by Willem de
Bruijn [1].

Since this approach involves the penalty of skb_copy_and_csum_bits()
when building the segments, care was taken in order to enable this
solution only when required (an illustrative example follows the list):
 - untrusted gso_size, by testing SKB_GSO_DODGY is set
   (SKB_GSO_DODGY is set by any gso_size mangling functions in
   net/core/filter.c)
 - the frag_list is non empty, its item is a non head_frag, *and* the
   headlen of the given 'head_skb' does not match the gso_size.

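For illustration only, with hypothetical numbers (not taken from the
reports in [0]):

  GRO merges three 1448-byte TCP segments delivered as linear skbs:
    gso_size              = 1448
    skb_headlen(head_skb) = 1448  (payload in the linear area, headers
                                   already pulled at segmentation time)
    frag_list             = 2 members, each with a 1448-byte linear
                            (non head_frag) head

  A NAT46 program calls bpf_skb_change_proto(), shrinking gso_size by
  the 20-byte v4->v6 header delta and marking the gso_size untrusted:
    gso_size              = 1428
    gso_type             |= SKB_GSO_DODGY

  At segmentation time, mss (1428) != skb_headlen(head_skb) (1448), so
  the frag_list members cannot end on gso_size boundaries; the new check
  clears NETIF_F_SG and the copy fallback is taken instead of hitting
  the BUG_ON.
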
[0]
https://lore.kernel.org/netdev/20190826170724.25ff616f@pixies/
https://lore.kernel.org/netdev/9265b93f-253d-6b8c-f2b8-4b54eff1835c@fb.com/

[1]
https://lore.kernel.org/netdev/CA+FuTSfVsgNDi7c=GUU8nMg2hWxF2SjCNLXetHeVPdnxAW5K-w@mail.gmail.com/

Fixes: 6578171a7ff0 ("bpf: add bpf_skb_change_proto helper")
Suggested-by: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 net/core/skbuff.c |   19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3094,6 +3094,25 @@ struct sk_buff *skb_segment(struct sk_bu
 	int pos;
 	int dummy;
 
+	if (list_skb && !list_skb->head_frag && skb_headlen(list_skb) &&
+	    (skb_shinfo(head_skb)->gso_type & SKB_GSO_DODGY)) {
+		/* gso_size is untrusted, and we have a frag_list with a linear
+		 * non head_frag head.
+		 *
+		 * (we assume checking the first list_skb member suffices;
+		 * i.e if either of the list_skb members have non head_frag
+		 * head, then the first one has too).
+		 *
+		 * If head_skb's headlen does not fit requested gso_size, it
+		 * means that the frag_list members do NOT terminate on exact
+		 * gso_size boundaries. Hence we cannot perform skb_frag_t page
+		 * sharing. Therefore we must fallback to copying the frag_list
+		 * skbs; we do so by disabling SG.
+		 */
+		if (mss != GSO_BY_FRAGS && mss != skb_headlen(head_skb))
+			features &= ~NETIF_F_SG;
+	}
+
 	__skb_push(head_skb, doffset);
 	proto = skb_network_protocol(head_skb, &dummy);
 	if (unlikely(!proto))