From: Greg Kroah-Hartman
Date: Sun, 4 May 2014 00:32:28 +0000 (-0400)
Subject: delete mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch
X-Git-Tag: v3.4.89~9
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=3811ab017a2ec6d3ef7120d846c7e5c5657acad2;p=thirdparty%2Fkernel%2Fstable-queue.git

delete mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch
---
diff --git a/queue-3.10/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch b/queue-3.10/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch
deleted file mode 100644
index 509e46524d5..00000000000
--- a/queue-3.10/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch
+++ /dev/null
@@ -1,73 +0,0 @@
-From 0bf1457f0cfca7bc026a82323ad34bcf58ad035d Mon Sep 17 00:00:00 2001
-From: Johannes Weiner
-Date: Tue, 8 Apr 2014 16:04:10 -0700
-Subject: mm: vmscan: do not swap anon pages just because free+file is low
-
-From: Johannes Weiner
-
-commit 0bf1457f0cfca7bc026a82323ad34bcf58ad035d upstream.
-
-Page reclaim force-scans / swaps anonymous pages when file cache drops
-below the high watermark of a zone in order to prevent what little cache
-remains from thrashing.
-
-However, on bigger machines the high watermark value can be quite large
-and when the workload is dominated by a static anonymous/shmem set, the
-file set might just be a small window of used-once cache. In such
-situations, the VM starts swapping heavily when instead it should be
-recycling the no longer used cache.
-
-This is a longer-standing problem, but it's more likely to trigger after
-commit 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
-because file pages can no longer accumulate in a single zone and are
-dispersed into smaller fractions among the available zones.
-
-To resolve this, do not force scan anon when file pages are low but
-instead rely on the scan/rotation ratios to make the right prediction.
-
-Signed-off-by: Johannes Weiner
-Acked-by: Rafael Aquini
-Cc: Rik van Riel
-Cc: Mel Gorman
-Cc: Hugh Dickins
-Cc: Suleiman Souhlal
-Signed-off-by: Andrew Morton
-Signed-off-by: Linus Torvalds
-Signed-off-by: Greg Kroah-Hartman
-
----
- mm/vmscan.c | 16 +---------------
- 1 file changed, 1 insertion(+), 15 deletions(-)
-
---- a/mm/vmscan.c
-+++ b/mm/vmscan.c
-@@ -1659,7 +1659,7 @@ static void get_scan_count(struct lruvec
- 	struct zone *zone = lruvec_zone(lruvec);
- 	unsigned long anon_prio, file_prio;
- 	enum scan_balance scan_balance;
--	unsigned long anon, file, free;
-+	unsigned long anon, file;
- 	bool force_scan = false;
- 	unsigned long ap, fp;
- 	enum lru_list lru;
-@@ -1713,20 +1713,6 @@ static void get_scan_count(struct lruvec
- 		get_lru_size(lruvec, LRU_INACTIVE_FILE);
-
- 	/*
--	 * If it's foreseeable that reclaiming the file cache won't be
--	 * enough to get the zone back into a desirable shape, we have
--	 * to swap. Better start now and leave the - probably heavily
--	 * thrashing - remaining file pages alone.
--	 */
--	if (global_reclaim(sc)) {
--		free = zone_page_state(zone, NR_FREE_PAGES);
--		if (unlikely(file + free <= high_wmark_pages(zone))) {
--			scan_balance = SCAN_ANON;
--			goto out;
--		}
--	}
--
--	/*
- 	 * There is enough inactive page cache, do not reclaim
- 	 * anything from the anonymous working set right now.
- */ diff --git a/queue-3.10/series b/queue-3.10/series index 34f5c81df5d..e18769902b6 100644 --- a/queue-3.10/series +++ b/queue-3.10/series @@ -77,7 +77,6 @@ mtip32xx-set-queue-bounce-limit.patch sh-fix-format-string-bug-in-stack-tracer.patch mm-try_to_unmap_cluster-should-lock_page-before-mlocking.patch mm-hugetlb-fix-softlockup-when-a-large-number-of-hugepages-are-freed.patch -mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch hung_task-check-the-value-of-sysctl_hung_task_timeout_sec.patch ocfs2-dlm-fix-lock-migration-crash.patch ocfs2-dlm-fix-recovery-hung.patch diff --git a/queue-3.14/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch b/queue-3.14/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch deleted file mode 100644 index 24c63b34dac..00000000000 --- a/queue-3.14/mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch +++ /dev/null @@ -1,73 +0,0 @@ -From 0bf1457f0cfca7bc026a82323ad34bcf58ad035d Mon Sep 17 00:00:00 2001 -From: Johannes Weiner -Date: Tue, 8 Apr 2014 16:04:10 -0700 -Subject: mm: vmscan: do not swap anon pages just because free+file is low - -From: Johannes Weiner - -commit 0bf1457f0cfca7bc026a82323ad34bcf58ad035d upstream. - -Page reclaim force-scans / swaps anonymous pages when file cache drops -below the high watermark of a zone in order to prevent what little cache -remains from thrashing. - -However, on bigger machines the high watermark value can be quite large -and when the workload is dominated by a static anonymous/shmem set, the -file set might just be a small window of used-once cache. In such -situations, the VM starts swapping heavily when instead it should be -recycling the no longer used cache. 
-
-This is a longer-standing problem, but it's more likely to trigger after
-commit 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
-because file pages can no longer accumulate in a single zone and are
-dispersed into smaller fractions among the available zones.
-
-To resolve this, do not force scan anon when file pages are low but
-instead rely on the scan/rotation ratios to make the right prediction.
-
-Signed-off-by: Johannes Weiner
-Acked-by: Rafael Aquini
-Cc: Rik van Riel
-Cc: Mel Gorman
-Cc: Hugh Dickins
-Cc: Suleiman Souhlal
-Signed-off-by: Andrew Morton
-Signed-off-by: Linus Torvalds
-Signed-off-by: Greg Kroah-Hartman
-
----
- mm/vmscan.c | 16 +---------------
- 1 file changed, 1 insertion(+), 15 deletions(-)
-
---- a/mm/vmscan.c
-+++ b/mm/vmscan.c
-@@ -1848,7 +1848,7 @@ static void get_scan_count(struct lruvec
- 	struct zone *zone = lruvec_zone(lruvec);
- 	unsigned long anon_prio, file_prio;
- 	enum scan_balance scan_balance;
--	unsigned long anon, file, free;
-+	unsigned long anon, file;
- 	bool force_scan = false;
- 	unsigned long ap, fp;
- 	enum lru_list lru;
-@@ -1902,20 +1902,6 @@ static void get_scan_count(struct lruvec
- 		get_lru_size(lruvec, LRU_INACTIVE_FILE);
-
- 	/*
--	 * If it's foreseeable that reclaiming the file cache won't be
--	 * enough to get the zone back into a desirable shape, we have
--	 * to swap. Better start now and leave the - probably heavily
--	 * thrashing - remaining file pages alone.
--	 */
--	if (global_reclaim(sc)) {
--		free = zone_page_state(zone, NR_FREE_PAGES);
--		if (unlikely(file + free <= high_wmark_pages(zone))) {
--			scan_balance = SCAN_ANON;
--			goto out;
--		}
--	}
--
--	/*
- 	 * There is enough inactive page cache, do not reclaim
- 	 * anything from the anonymous working set right now.
- */ diff --git a/queue-3.14/series b/queue-3.14/series index 94e7f4201a1..ab97d95218e 100644 --- a/queue-3.14/series +++ b/queue-3.14/series @@ -136,7 +136,6 @@ sh-fix-format-string-bug-in-stack-tracer.patch mm-page_alloc-spill-to-remote-nodes-before-waking-kswapd.patch mm-try_to_unmap_cluster-should-lock_page-before-mlocking.patch mm-hugetlb-fix-softlockup-when-a-large-number-of-hugepages-are-freed.patch -mm-vmscan-do-not-swap-anon-pages-just-because-free-file-is-low.patch hung_task-check-the-value-of-sysctl_hung_task_timeout_sec.patch xattr-guard-against-simultaneous-glibc-header-inclusion.patch ocfs2-dlm-fix-lock-migration-crash.patch