From foo@baz Fri Mar 8 10:00:48 CET 2019
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Date: Thu, 28 Feb 2019 12:48:03 +0000
Subject: xen-netback: fix occasional leak of grant ref mappings under memory pressure

From: Igor Druzhinin <igor.druzhinin@citrix.com>

[ Upstream commit 99e87f56b48f490fb16b6e0f74691c1e664dea95 ]

Zero-copy callback flag is not yet set on frag list skb at the moment
xenvif_handle_frag_list() returns -ENOMEM. This eventually results in
leaking grant ref mappings since xenvif_zerocopy_callback() is never
called for these fragments. Those eventually build up and cause Xen
to kill Dom0 as the slots get reused for new mappings:

"d0v0 Attempt to implicitly unmap a granted PTE c010000329fce005"

That behavior is observed under certain workloads where sudden spikes
of page cache writes coexist with active atomic skb allocations from
network traffic. Additionally, rework the logic to deal with frag_list
deallocation in a single place.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/xen-netback/netback.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1538,11 +1538,6 @@ static int xenvif_handle_frag_list(struc
 		skb_frag_size_set(&frags[i], len);
 	}
 
-	/* Copied all the bits from the frag list -- free it. */
-	skb_frag_list_init(skb);
-	xenvif_skb_zerocopy_prepare(queue, nskb);
-	kfree_skb(nskb);
-
 	/* Release all the original (foreign) frags. */
 	for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
 		skb_frag_unref(skb, f);
@@ -1611,6 +1606,8 @@ static int xenvif_tx_submit(struct xenvi
 		xenvif_fill_frags(queue, skb);
 
 		if (unlikely(skb_has_frag_list(skb))) {
+			struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
+			xenvif_skb_zerocopy_prepare(queue, nskb);
 			if (xenvif_handle_frag_list(queue, skb)) {
 				if (net_ratelimit())
 					netdev_err(queue->vif->dev,
@@ -1619,6 +1616,9 @@ static int xenvif_tx_submit(struct xenvi
 				kfree_skb(skb);
 				continue;
 			}
+			/* Copied all the bits from the frag list -- free it. */
+			skb_frag_list_init(skb);
+			kfree_skb(nskb);
 		}
 
 		skb->dev = queue->vif->dev;