From 9e2ce7c907960946158565cc010c4f14bf9c60af Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Date: Mon, 13 May 2019 17:19:11 -0700
Subject: mm: page_mkclean vs MADV_DONTNEED race

[ Upstream commit 024eee0e83f0df52317be607ca521e0fc572aa07 ]

MADV_DONTNEED is handled with mmap_sem taken in read mode.  We call
page_mkclean without holding mmap_sem.

MADV_DONTNEED implies that pages in the region are unmapped and subsequent
access to the pages in that range is handled as a new page fault.  This
implies that if we don't have parallel access to the region when
MADV_DONTNEED is run we expect those ranges to be unallocated.
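
To illustrate those semantics from userspace (an editor's sketch, not
part of the patch): after MADV_DONTNEED on a private anonymous mapping,
the next access must fault in a fresh zero-filled page, so the old
contents must not reappear.

    #include <assert.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 4096;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            assert(p != MAP_FAILED);
            memset(p, 0xaa, len);            /* dirty the page */
            madvise(p, len, MADV_DONTNEED);  /* drop the range */
            assert(p[0] == 0);               /* old contents are gone */
            return 0;
    }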
15
16 w.r.t page_mkclean() we need to make sure that we don't break the
17 MADV_DONTNEED semantics. MADV_DONTNEED check for pmd_none without holding
18 pmd_lock. This implies we skip the pmd if we temporarily mark pmd none.
19 Avoid doing that while marking the page clean.
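
Schematically, the race being guarded against (an editor's illustration,
not code from this patch; the unlocked check is modelled on the
pmd_none_or_trans_huge_or_clear_bad() test used on the zap path):

    /*
     *  CPU 0: page_mkclean_one()          CPU 1: madvise(MADV_DONTNEED)
     *  -------------------------------    -----------------------------
     *  entry = pmdp_huge_clear_flush();
     *    // *pmd is transiently none
     *                                     if (pmd_none(*pmd))
     *                                             goto next;  // skipped!
     *  set_pmd_at(mm, addr, pmd, entry);
     *    // mapping reappears after the
     *    // zap already gave up on it
     */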

Keep the sequence the same for dax too, even though we don't support
MADV_DONTNEED for dax mappings.

The bug was noticed by code review and I didn't observe any failures
in test runs.  This is similar to

commit 58ceeb6bec86d9140f9d91d71a710e963523d063
Author: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Date:   Thu Apr 13 14:56:26 2017 -0700

    thp: fix MADV_DONTNEED vs. MADV_FREE race

commit ced108037c2aa542b3ed8b7afd1576064ad1362a
Author: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Date:   Thu Apr 13 14:56:20 2017 -0700

    thp: fix MADV_DONTNEED vs. numa balancing race

Link: http://lkml.kernel.org/r/20190321040610.14226-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 fs/dax.c  | 2 +-
 mm/rmap.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 004c8ac1117c..75a289c31c7e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -908,7 +908,7 @@ static void dax_mapping_entry_mkclean(struct address_space *mapping,
 			goto unlock_pmd;
 
 		flush_cache_page(vma, address, pfn);
-		pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+		pmd = pmdp_invalidate(vma, address, pmdp);
 		pmd = pmd_wrprotect(pmd);
 		pmd = pmd_mkclean(pmd);
 		set_pmd_at(vma->vm_mm, address, pmdp, pmd);
diff --git a/mm/rmap.c b/mm/rmap.c
index 85b7f9423352..f048c2651954 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -926,7 +926,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			continue;
 
 		flush_cache_page(vma, address, page_to_pfn(page));
-		entry = pmdp_huge_clear_flush(vma, address, pmd);
+		entry = pmdp_invalidate(vma, address, pmd);
 		entry = pmd_wrprotect(entry);
 		entry = pmd_mkclean(entry);
 		set_pmd_at(vma->vm_mm, address, pmd, entry);
-- 
2.20.1

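The crux of the one-line change, schematically (an editor's note; see
the generic pmdp_invalidate() in mm/pgtable-generic.c for the real
helper):

    /*
     * Before:
     *   entry = pmdp_huge_clear_flush(vma, address, pmd);
     *     // *pmd is none until set_pmd_at() below, so a concurrent
     *     // unlocked pmd_none check can skip the range
     *   set_pmd_at(vma->vm_mm, address, pmd, entry);
     *
     * After:
     *   entry = pmdp_invalidate(vma, address, pmd);
     *     // *pmd keeps a non-none (invalid) entry; concurrent walkers
     *     // never observe pmd_none during the mkclean sequence
     *   set_pmd_at(vma->vm_mm, address, pmd, entry);
     */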