net: don't wait for order-3 page allocation
author    Shaohua Li <shli@fb.com>
          Thu, 11 Jun 2015 23:50:48 +0000 (16:50 -0700)
committer Luis Henriques <luis.henriques@canonical.com>
          Mon, 6 Jul 2015 09:20:01 +0000 (10:20 +0100)
commit fb05e7a89f500cfc06ae277bdc911b281928995d upstream.

We saw excessive direct memory compaction triggered by skb_page_frag_refill.
This causes performance issues and adds latency. Commit 5640f7685831e0
introduced the order-3 allocation. According to its changelog, the order-3
allocation isn't a must-have but only an optimization to improve performance.
Direct memory compaction, however, has high overhead, and the benefit of the
order-3 allocation can't compensate for that overhead.

This patch makes the order-3 page allocation atomic. If there is no memory
pressure and memory isn't fragmented, the allocation will still succeed, so we
don't sacrifice the order-3 benefit here. If the atomic allocation fails,
direct memory compaction will not be triggered and skb_page_frag_refill will
fall back to order-0 immediately, so the direct memory compaction overhead is
avoided. In the allocation-failure case, kswapd is woken up and does the
compaction, so chances are the allocation will succeed next time.
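
For illustration only, the allocation pattern described above boils down to
the sketch below. It is not part of the patch; the function name
frag_alloc_sketch is made up and the fixed order of 3 is an assumption here,
while the real change lives in the loops of the diff further down. The point
is that the high-order attempt clears __GFP_WAIT so it fails fast instead of
triggering direct compaction, and the caller then falls back to order-0:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch only: opportunistic high-order allocation with order-0 fallback. */
static struct page *frag_alloc_sketch(gfp_t gfp_mask)
{
	unsigned int order = 3;		/* assumed preferred order */
	struct page *page;
	gfp_t gfp = gfp_mask;

	/* Opportunistic attempt: compound page, no warning, no retry... */
	gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
	/* ...and no sleeping, so direct compaction is never entered. */
	gfp &= ~__GFP_WAIT;
	page = alloc_pages(gfp, order);
	if (page)
		return page;

	/* Order-0 fallback keeps the caller's original gfp flags. */
	return alloc_pages(gfp_mask, 0);
}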

alloc_skb_with_frags is the same.

The Mellanox driver does a similar thing; if this change is accepted, we must
fix that driver too.

V3: fix the same issue in alloc_skb_with_frags as pointed out by Eric
V2: make the changelog clearer

Cc: Eric Dumazet <edumazet@google.com>
Cc: Chris Mason <clm@fb.com>
Cc: Debabrata Banerjee <dbavatar@gmail.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[ luis: backported to 3.16: used davem's backport to 3.14 ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
net/core/skbuff.c
net/core/sock.c

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index dc27721ece4d83fbd6c75fa5180bfb05b43545f6..167a92c896b9b2281bf8bc890072cf86feb2eaa2 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -368,9 +368,11 @@ refill:
                for (order = NETDEV_FRAG_PAGE_MAX_ORDER; ;) {
                        gfp_t gfp = gfp_mask;
 
-                       if (order)
+                       if (order) {
                                gfp |= __GFP_COMP | __GFP_NOWARN |
                                       __GFP_NOMEMALLOC;
+                               gfp &= ~__GFP_WAIT;
+                       }
                        nc->frag.page = alloc_pages(gfp, order);
                        if (likely(nc->frag.page))
                                break;
diff --git a/net/core/sock.c b/net/core/sock.c
index a6ddd4ada3157cb9b4307ba1a59943f1d47df829..9956e854d0e6e4b39e4529477572b2eb4952f7f3 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1914,8 +1914,10 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
        do {
                gfp_t gfp = prio;
 
-               if (order)
+               if (order) {
                        gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
+                       gfp &= ~__GFP_WAIT;
+               }
                pfrag->page = alloc_pages(gfp, order);
                if (likely(pfrag->page)) {
                        pfrag->offset = 0;