From 9625456cc76391b7f3f2809579126542a8ed4d39 Mon Sep 17 00:00:00 2001
From: Shaohua Li <shli@fb.com>
Date: Tue, 3 Oct 2017 16:15:32 -0700
Subject: mm: fix data corruption caused by lazyfree page

From: Shaohua Li <shli@fb.com>

commit 9625456cc76391b7f3f2809579126542a8ed4d39 upstream.

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked). There is no lock to prevent page reclaim from
adding the page to the swap cache between these two steps. If page
reclaim finds such a page, it will simply add the page to the swap
cache without paging it out to swap, because the page is marked clean.
The next page fault will then read data from a swap slot that doesn't
hold the original data, so we have data corruption. To fix the issue,
we mark the page dirty and page it out.

However, we shouldn't dirty every page that is clean and in the swap
cache: a swapped-in page is in the swap cache and clean too. So we only
dirty pages added to the swap cache by page reclaim, which cannot be
swapped-in pages. As Minchan suggested, simply dirtying the page in
add_to_swap does the job.

Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Link: http://lkml.kernel.org/r/08c84256b007bf3f63c91d94383bd9eb6fee2daa.1506446061.git.shli@fb.com
Signed-off-by: Shaohua Li <shli@fb.com>
Reported-by: Artem Savkov <asavkov@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/swap_state.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -219,6 +219,17 @@ int add_to_swap(struct page *page)
 		 * clear SWAP_HAS_CACHE flag.
 		 */
 		goto fail;
+	/*
+	 * Normally the page will be dirtied in unmap because its pte should be
+	 * dirty. A special case is MADV_FREE page. The page's pte could have
+	 * dirty bit cleared but the page's SwapBacked bit is still set because
+	 * clearing the dirty bit and SwapBacked bit has no lock protected. For
+	 * such page, unmap will not set dirty bit for it, so page reclaim will
+	 * not write the page out. This can cause data corruption when the page
+	 * is swapped in later. Always setting the dirty bit for the page solves
+	 * the problem.
+	 */
+	set_page_dirty(page);
 
 	return 1;
 