From f06590bd718ed950c98828e30ef93204028f3210 Mon Sep 17 00:00:00 2001
From: Minchan Kim <minchan.kim@gmail.com>
Date: Tue, 24 May 2011 17:11:11 -0700
Subject: mm: vmscan: correctly check if reclaimer should schedule during shrink_slab

From: Minchan Kim <minchan.kim@gmail.com>

commit f06590bd718ed950c98828e30ef93204028f3210 upstream.

It has been reported on some laptops that kswapd is consuming large
amounts of CPU and not being scheduled when SLUB is enabled during
large amounts of file copying.  It is expected that this is due to
kswapd missing every cond_resched() point because:

shrink_page_list() calls cond_resched() only if inactive pages were
isolated, which in turn may not happen if all_unreclaimable is set in
shrink_zones().  If, for whatever reason, all_unreclaimable is set on
all zones, we can miss calling cond_resched().

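To make the starvation pattern concrete, here is a minimal, hypothetical
sketch (the helper names are illustrative and are not the real
mm/vmscan.c interfaces).  When the rescheduling point is guarded by work
actually having been done, a loop that keeps finding nothing to isolate
never yields:

	/* Hypothetical sketch of the pattern, not the real kernel code. */
	extern int reclaim_needed(void);
	extern long isolate_inactive_pages(void);
	extern void reclaim_pages(long nr);
	extern void cond_resched(void);

	static void reclaim_loop(void)
	{
		while (reclaim_needed()) {
			long nr_isolated = isolate_inactive_pages();

			if (nr_isolated) {
				reclaim_pages(nr_isolated);
				cond_resched();	/* only reached if pages were isolated */
			}
			/*
			 * If every zone is all_unreclaimable, nr_isolated
			 * stays 0, so the loop spins without ever calling
			 * cond_resched().
			 */
		}
	}
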
balance_pgdat() only calls cond_resched() if the zones are not
balanced.  For a high-order allocation that is balanced, it checks
order-0 again.  During that window, order-0 might have become
unbalanced, so it loops again for order-0 and returns to kswapd() that
it was reclaiming for order-0.  It can then find that a caller has
woken kswapd again for a high-order allocation and re-enter
balance_pgdat() without ever having called cond_resched().

shrink_slab() only calls cond_resched() if we are reclaiming slab
pages.  If there are a large number of direct reclaimers, the
shrinker_rwsem can be contended and prevent kswapd from calling
cond_resched().

This patch modifies the shrink_slab() case.  If the semaphore is
contended, the caller will still call cond_resched().  After each
successful call into a shrinker, the call to cond_resched() remains in
case one shrinker is particularly slow.
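
In effect, the locking path of shrink_slab() ends up shaped roughly
like the hypothetical sketch below (heavily simplified; the shrinker
walk is collapsed into a stand-in helper and the declarations stand in
for the real kernel ones), so cond_resched() is reached whether or not
the semaphore could be taken:

	/*
	 * Hypothetical, simplified model of the post-patch control flow;
	 * this is not a verbatim copy of mm/vmscan.c.
	 */
	struct rw_semaphore;
	extern struct rw_semaphore shrinker_rwsem;
	extern int down_read_trylock(struct rw_semaphore *sem);
	extern void up_read(struct rw_semaphore *sem);
	extern void cond_resched(void);
	extern unsigned long run_all_shrinkers(void);	/* stand-in for the shrinker walk */

	static unsigned long shrink_slab_sketch(void)
	{
		unsigned long ret = 0;

		if (!down_read_trylock(&shrinker_rwsem)) {
			/* Assume we'll be able to shrink next time */
			ret = 1;
			goto out;
		}

		/* cond_resched() is still called after each individual shrinker */
		ret = run_all_shrinkers();

		up_read(&shrinker_rwsem);
	out:
		cond_resched();	/* now reached on the contended path as well */
		return ret;
	}
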
38
39 [mgorman@suse.de: preserve call to cond_resched after each call into shrinker]
40 Signed-off-by: Mel Gorman <mgorman@suse.de>
41 Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
42 Cc: Rik van Riel <riel@redhat.com>
43 Cc: Johannes Weiner <hannes@cmpxchg.org>
44 Cc: Wu Fengguang <fengguang.wu@intel.com>
45 Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
46 Tested-by: Colin King <colin.king@canonical.com>
47 Cc: Raghavendra D Prabhu <raghu.prabhu13@gmail.com>
48 Cc: Jan Kara <jack@suse.cz>
49 Cc: Chris Mason <chris.mason@oracle.com>
50 Cc: Christoph Lameter <cl@linux.com>
51 Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

---
 mm/vmscan.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -230,8 +230,11 @@ unsigned long shrink_slab(unsigned long
 	if (scanned == 0)
 		scanned = SWAP_CLUSTER_MAX;
 
-	if (!down_read_trylock(&shrinker_rwsem))
-		return 1;	/* Assume we'll be able to shrink next time */
+	if (!down_read_trylock(&shrinker_rwsem)) {
+		/* Assume we'll be able to shrink next time */
+		ret = 1;
+		goto out;
+	}
 
 	list_for_each_entry(shrinker, &shrinker_list, list) {
 		unsigned long long delta;
@@ -282,6 +285,8 @@ unsigned long shrink_slab(unsigned long
 		shrinker->nr += total_scan;
 	}
 	up_read(&shrinker_rwsem);
+out:
+	cond_resched();
 	return ret;
 }
 