From foo@baz Fri Mar  8 10:00:48 CET 2019
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Date: Thu, 28 Feb 2019 12:48:03 +0000
Subject: xen-netback: fix occasional leak of grant ref mappings under memory pressure

From: Igor Druzhinin <igor.druzhinin@citrix.com>

[ Upstream commit 99e87f56b48f490fb16b6e0f74691c1e664dea95 ]

Zero-copy callback flag is not yet set on frag list skb at the moment
xenvif_handle_frag_list() returns -ENOMEM. This eventually results in
leaking grant ref mappings since xenvif_zerocopy_callback() is never
called for these fragments. Those eventually build up and cause Xen
to kill Dom0 as the slots get reused for new mappings:

"d0v0 Attempt to implicitly unmap a granted PTE c010000329fce005"

That behavior is observed under certain workloads where sudden spikes
of page cache writes coexist with active atomic skb allocations from
network traffic. Additionally, rework the logic to deal with frag_list
deallocation in a single place.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/net/xen-netback/netback.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -1538,11 +1538,6 @@ static int xenvif_handle_frag_list(struc
 		skb_frag_size_set(&frags[i], len);
 	}
 
-	/* Copied all the bits from the frag list -- free it. */
-	skb_frag_list_init(skb);
-	xenvif_skb_zerocopy_prepare(queue, nskb);
-	kfree_skb(nskb);
-
 	/* Release all the original (foreign) frags. */
 	for (f = 0; f < skb_shinfo(skb)->nr_frags; f++)
 		skb_frag_unref(skb, f);
@@ -1611,6 +1606,8 @@ static int xenvif_tx_submit(struct xenvi
 		xenvif_fill_frags(queue, skb);
 
 		if (unlikely(skb_has_frag_list(skb))) {
+			struct sk_buff *nskb = skb_shinfo(skb)->frag_list;
+			xenvif_skb_zerocopy_prepare(queue, nskb);
 			if (xenvif_handle_frag_list(queue, skb)) {
 				if (net_ratelimit())
 					netdev_err(queue->vif->dev,
@@ -1619,6 +1616,9 @@ static int xenvif_tx_submit(struct xenvi
 				kfree_skb(skb);
 				continue;
 			}
+			/* Copied all the bits from the frag list -- free it. */
+			skb_frag_list_init(skb);
+			kfree_skb(nskb);
 		}
 
 		skb->dev      = queue->vif->dev;