From a7f40cfe3b7ada57af9b62fd28430eeb4a7cfcb7 Mon Sep 17 00:00:00 2001
From: Yang Shi <yang.shi@linux.alibaba.com>
Date: Thu, 28 Mar 2019 20:43:55 -0700
Subject: mm: mempolicy: make mbind() return -EIO when MPOL_MF_STRICT is specified

From: Yang Shi <yang.shi@linux.alibaba.com>

commit a7f40cfe3b7ada57af9b62fd28430eeb4a7cfcb7 upstream.

When MPOL_MF_STRICT was specified and an existing page was already on a
node that does not follow the policy, mbind() should return -EIO. But
commit 6f4576e3687b ("mempolicy: apply page table walker on
queue_pages_range()") broke the rule.

And commit c8633798497c ("mm: mempolicy: mbind and migrate_pages support
thp migration") didn't return the correct value for THP mbind() either.

If MPOL_MF_STRICT is set, ignore vma_migratable() to make sure the walk
reaches queue_pages_to_pte_range() or queue_pages_pmd(), which check
whether an existing page is already on a node that does not follow the
policy. And, since a non-migratable vma may now be walked, also return
-EIO if MPOL_MF_MOVE or MPOL_MF_MOVE_ALL was specified.

Tested with https://github.com/metan-ucw/ltp/blob/master/testcases/kernel/syscalls/mbind/mbind02.c

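For reference, the user-visible contract this restores looks roughly like
the minimal sketch below (illustrative only, not the LTP test above; it
assumes a machine with at least two online NUMA nodes where node 0 has
memory, uses the mbind() wrapper from libnuma's <numaif.h>, trims most
error handling, and the file name is made up):

/* Build (assumption): gcc mbind-eio.c -lnuma */
#include <errno.h>
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned long node0 = 1UL << 0, node1 = 1UL << 1;
	unsigned long maxnode = sizeof(unsigned long) * 8;
	char *p;

	p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return 1;

	/* Bind the range to node 0, then fault the page in there. */
	if (mbind(p, page, MPOL_BIND, &node0, maxnode, 0))
		return 1;
	p[0] = 1;

	/*
	 * Strict bind to node 1 only, without MPOL_MF_MOVE*: the page
	 * already sitting on node 0 violates the new policy, so the call
	 * should fail with EIO once this fix is applied.
	 */
	if (mbind(p, page, MPOL_BIND, &node1, maxnode, MPOL_MF_STRICT))
		printf("mbind: %s (EIO expected)\n", strerror(errno));
	else
		printf("mbind unexpectedly succeeded\n");

	munmap(p, page);
	return 0;
}

On an unpatched kernel the second mbind() returns 0 even though the page
stays on node 0; with this patch it fails with -EIO, matching the
documented MPOL_MF_STRICT semantics.
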
[akpm@linux-foundation.org: tweak code comment]
Link: http://lkml.kernel.org/r/1553020556-38583-1-git-send-email-yang.shi@linux.alibaba.com
Fixes: 6f4576e3687b ("mempolicy: apply page table walker on queue_pages_range()")
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reported-by: Cyril Hrubis <chrubis@suse.cz>
Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Rafael Aquini <aquini@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/mempolicy.c |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -547,11 +547,16 @@ retry:
 			goto retry;
 		}
 
-		migrate_page_add(page, qp->pagelist, flags);
+		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+			if (!vma_migratable(vma))
+				break;
+			migrate_page_add(page, qp->pagelist, flags);
+		} else
+			break;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
-	return 0;
+	return addr != end ? -EIO : 0;
 }
 
 static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,
@@ -623,7 +628,12 @@ static int queue_pages_test_walk(unsigne
 	unsigned long endvma = vma->vm_end;
 	unsigned long flags = qp->flags;
 
-	if (!vma_migratable(vma))
+	/*
+	 * Need check MPOL_MF_STRICT to return -EIO if possible
+	 * regardless of vma_migratable
+	 */
+	if (!vma_migratable(vma) &&
+	    !(flags & MPOL_MF_STRICT))
 		return 1;
 
 	if (endvma > end)
@@ -650,7 +660,7 @@ static int queue_pages_test_walk(unsigne
 	}
 
 	/* queue pages from current vma */
-	if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+	if (flags & MPOL_MF_VALID)
 		return 0;
 	return 1;
 }