--- /dev/null
+From 1de13ee59225dfc98d483f8cce7d83f97c0b31de Mon Sep 17 00:00:00 2001
+From: Ralph Campbell <rcampbell@nvidia.com>
+Date: Tue, 13 Aug 2019 15:37:11 -0700
+Subject: mm/hmm: fix bad subpage pointer in try_to_unmap_one
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Ralph Campbell <rcampbell@nvidia.com>
+
+commit 1de13ee59225dfc98d483f8cce7d83f97c0b31de upstream.
+
+When migrating an anonymous private page to a ZONE_DEVICE private page,
+the source page->mapping and page->index fields are copied to the
+destination ZONE_DEVICE struct page and the page_mapcount() is
+increased. This is so rmap_walk() can be used to unmap and migrate the
+page back to system memory.
+
+However, try_to_unmap_one() computes the subpage pointer from a swap
+pte, which yields an invalid page pointer, and a kernel panic results,
+such as:
+
+ BUG: unable to handle page fault for address: ffffea1fffffffc8
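+
+For context, the subpage pointer is computed earlier in
+try_to_unmap_one() roughly as follows (a simplified sketch of the
+mm/rmap.c logic, surrounding details elided):
+
+ /* subpage is normally derived from the pfn stored in the pte */
+ subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
+
+For a migrating device private page, the pte holds a swap entry rather
+than a valid pfn, so pte_pfn() yields a meaningless value and subpage
+ends up pointing at an invalid struct page.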
+
+Currently, only single pages can be migrated to device private memory,
+so no subpage computation is needed and the pointer can simply be set
+to "page".
+
+[rcampbell@nvidia.com: add comment]
+ Link: http://lkml.kernel.org/r/20190724232700.23327-4-rcampbell@nvidia.com
+Link: http://lkml.kernel.org/r/20190719192955.30462-4-rcampbell@nvidia.com
+Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration")
+Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
+Cc: "Jérôme Glisse" <jglisse@redhat.com>
+Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
+Cc: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: Jason Gunthorpe <jgg@mellanox.com>
+Cc: John Hubbard <jhubbard@nvidia.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
+Cc: Christoph Lameter <cl@linux.com>
+Cc: Dan Williams <dan.j.williams@intel.com>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Cc: Ira Weiny <ira.weiny@intel.com>
+Cc: Jan Kara <jack@suse.cz>
+Cc: Lai Jiangshan <jiangshanlai@gmail.com>
+Cc: Logan Gunthorpe <logang@deltatee.com>
+Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
+Cc: Matthew Wilcox <willy@infradead.org>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Pekka Enberg <penberg@kernel.org>
+Cc: Randy Dunlap <rdunlap@infradead.org>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/rmap.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1467,7 +1467,15 @@ static bool try_to_unmap_one(struct page
+ /*
+ * No need to invalidate here it will synchronize on
+ * against the special swap migration pte.
++ *
++ * The assignment to subpage above was computed from a
++ * swap PTE which results in an invalid pointer.
++ * Since only PAGE_SIZE pages can currently be
++ * migrated, just set it to page. This will need to be
++ * changed when hugepage migrations to device private
++ * memory are supported.
+ */
++ subpage = page;
+ goto discard;
+ }
+
--- /dev/null
+From 54a83d6bcbf8f4700013766b974bf9190d40b689 Mon Sep 17 00:00:00 2001
+From: Miles Chen <miles.chen@mediatek.com>
+Date: Tue, 13 Aug 2019 15:37:28 -0700
+Subject: mm/memcontrol.c: fix use after free in mem_cgroup_iter()
+
+From: Miles Chen <miles.chen@mediatek.com>
+
+commit 54a83d6bcbf8f4700013766b974bf9190d40b689 upstream.
+
+This patch is sent to report a use-after-free in mem_cgroup_iter()
+that still occurs after merging commit be2657752e9e ("mm: memcg: fix
+use after free in mem_cgroup_iter()").
+
+I work with the android kernel trees (4.9 & 4.14), and commit
+be2657752e9e ("mm: memcg: fix use after free in mem_cgroup_iter()") has
+been merged into both trees. However, I can still observe the
+use-after-free issues addressed in commit be2657752e9e (on low-end
+devices, a few times this month).
+
+backtrace:
+ css_tryget <- crash here
+ mem_cgroup_iter
+ shrink_node
+ shrink_zones
+ do_try_to_free_pages
+ try_to_free_pages
+ __perform_reclaim
+ __alloc_pages_direct_reclaim
+ __alloc_pages_slowpath
+ __alloc_pages_nodemask
+
+To debug, I poisoned mem_cgroup before freeing it:
+
+ static void __mem_cgroup_free(struct mem_cgroup *memcg)
+ for_each_node(node)
+ free_mem_cgroup_per_node_info(memcg, node);
+ free_percpu(memcg->stat);
+ + /* poison memcg before freeing it */
+ + memset(memcg, 0x78, sizeof(struct mem_cgroup));
+ kfree(memcg);
+ }
+
+The coredump shows that position=0xdbbc2a00 has already been freed:
+
+ (gdb) p/x ((struct mem_cgroup_per_node *)0xe5009e00)->iter[8]
+ $13 = {position = 0xdbbc2a00, generation = 0x2efd}
+
+ 0xdbbc2a00: 0xdbbc2e00 0x00000000 0xdbbc2800 0x00000100
+ 0xdbbc2a10: 0x00000200 0x78787878 0x00026218 0x00000000
+ 0xdbbc2a20: 0xdcad6000 0x00000001 0x78787800 0x00000000
+ 0xdbbc2a30: 0x78780000 0x00000000 0x0068fb84 0x78787878
+ 0xdbbc2a40: 0x78787878 0x78787878 0x78787878 0xe3fa5cc0
+ 0xdbbc2a50: 0x78787878 0x78787878 0x00000000 0x00000000
+ 0xdbbc2a60: 0x00000000 0x00000000 0x00000000 0x00000000
+ 0xdbbc2a70: 0x00000000 0x00000000 0x00000000 0x00000000
+ 0xdbbc2a80: 0x00000000 0x00000000 0x00000000 0x00000000
+ 0xdbbc2a90: 0x00000001 0x00000000 0x00000000 0x00100000
+ 0xdbbc2aa0: 0x00000001 0xdbbc2ac8 0x00000000 0x00000000
+ 0xdbbc2ab0: 0x00000000 0x00000000 0x00000000 0x00000000
+ 0xdbbc2ac0: 0x00000000 0x00000000 0xe5b02618 0x00001000
+ 0xdbbc2ad0: 0x00000000 0x78787878 0x78787878 0x78787878
+ 0xdbbc2ae0: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2af0: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b00: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b10: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b20: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b30: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b40: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b50: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b60: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b70: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2b80: 0x78787878 0x78787878 0x00000000 0x78787878
+ 0xdbbc2b90: 0x78787878 0x78787878 0x78787878 0x78787878
+ 0xdbbc2ba0: 0x78787878 0x78787878 0x78787878 0x78787878
+
+In the reclaim path, try_to_free_pages() does not set up
+sc.target_mem_cgroup, and sc is passed down to do_try_to_free_pages(),
+..., shrink_node().
+
+In mem_cgroup_iter(), root is set to root_mem_cgroup because
+sc->target_mem_cgroup is NULL. It is possible to assign a memcg to
+root_mem_cgroup.nodeinfo.iter in mem_cgroup_iter().
+
+ try_to_free_pages
+ struct scan_control sc = {...}, target_mem_cgroup is 0x0;
+ do_try_to_free_pages
+ shrink_zones
+ shrink_node
+ mem_cgroup *root = sc->target_mem_cgroup;
+ memcg = mem_cgroup_iter(root, NULL, &reclaim);
+ mem_cgroup_iter()
+ if (!root)
+ root = root_mem_cgroup;
+ ...
+
+ css = css_next_descendant_pre(css, &root->css);
+ memcg = mem_cgroup_from_css(css);
+ cmpxchg(&iter->position, pos, memcg);
+
+My device uses memcg in non-hierarchical mode. When we release a memcg,
+invalidate_reclaim_iterators() reaches only dead_memcg and its parents.
+In non-hierarchical mode, invalidate_reclaim_iterators() therefore never
+reaches root_mem_cgroup.
+
+ static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
+ {
+ struct mem_cgroup *memcg = dead_memcg;
+
+ for (; memcg; memcg = parent_mem_cgroup(memcg)
+ ...
+ }
+
+So the use after free scenario looks like:
+
+ CPU1 CPU2
+
+ try_to_free_pages
+ do_try_to_free_pages
+ shrink_zones
+ shrink_node
+ mem_cgroup_iter()
+ if (!root)
+ root = root_mem_cgroup;
+ ...
+ css = css_next_descendant_pre(css, &root->css);
+ memcg = mem_cgroup_from_css(css);
+ cmpxchg(&iter->position, pos, memcg);
+
+ invalidate_reclaim_iterators(memcg);
+ ...
+ __mem_cgroup_free()
+ kfree(memcg);
+
+ try_to_free_pages
+ do_try_to_free_pages
+ shrink_zones
+ shrink_node
+ mem_cgroup_iter()
+ if (!root)
+ root = root_mem_cgroup;
+ ...
+ mz = mem_cgroup_nodeinfo(root, reclaim->pgdat->node_id);
+ iter = &mz->iter[reclaim->priority];
+ pos = READ_ONCE(iter->position);
+ css_tryget(&pos->css) <- use after free
+
+To avoid this, we should also invalidate root_mem_cgroup.nodeinfo.iter
+in invalidate_reclaim_iterators().
+
+[cai@lca.pw: fix -Wparentheses compilation warning]
+ Link: http://lkml.kernel.org/r/1564580753-17531-1-git-send-email-cai@lca.pw
+Link: http://lkml.kernel.org/r/20190730015729.4406-1-miles.chen@mediatek.com
+Fixes: 5ac8fb31ad2e ("mm: memcontrol: convert reclaim iterator to simple css refcounting")
+Signed-off-by: Miles Chen <miles.chen@mediatek.com>
+Signed-off-by: Qian Cai <cai@lca.pw>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/memcontrol.c | 39 +++++++++++++++++++++++++++++----------
+ 1 file changed, 29 insertions(+), 10 deletions(-)
+
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1037,26 +1037,45 @@ void mem_cgroup_iter_break(struct mem_cg
+ css_put(&prev->css);
+ }
+
+-static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
++static void __invalidate_reclaim_iterators(struct mem_cgroup *from,
++ struct mem_cgroup *dead_memcg)
+ {
+- struct mem_cgroup *memcg = dead_memcg;
+ struct mem_cgroup_reclaim_iter *iter;
+ struct mem_cgroup_per_node *mz;
+ int nid;
+ int i;
+
+- for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+- for_each_node(nid) {
+- mz = mem_cgroup_nodeinfo(memcg, nid);
+- for (i = 0; i <= DEF_PRIORITY; i++) {
+- iter = &mz->iter[i];
+- cmpxchg(&iter->position,
+- dead_memcg, NULL);
+- }
++ for_each_node(nid) {
++ mz = mem_cgroup_nodeinfo(from, nid);
++ for (i = 0; i <= DEF_PRIORITY; i++) {
++ iter = &mz->iter[i];
++ cmpxchg(&iter->position,
++ dead_memcg, NULL);
+ }
+ }
+ }
+
++static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg)
++{
++ struct mem_cgroup *memcg = dead_memcg;
++ struct mem_cgroup *last;
++
++ do {
++ __invalidate_reclaim_iterators(memcg, dead_memcg);
++ last = memcg;
++ } while ((memcg = parent_mem_cgroup(memcg)));
++
++ /*
++ * When cgroup1 non-hierarchy mode is used,
++ * parent_mem_cgroup() does not walk all the way up to the
++ * cgroup root (root_mem_cgroup). So we have to handle
++ * dead_memcg from cgroup root separately.
++ */
++ if (last != root_mem_cgroup)
++ __invalidate_reclaim_iterators(root_mem_cgroup,
++ dead_memcg);
++}
++
+ /**
+ * mem_cgroup_scan_tasks - iterate over tasks of a memory cgroup hierarchy
+ * @memcg: hierarchy root
--- /dev/null
+From a53190a4aaa36494f4d7209fd1fcc6f2ee08e0e0 Mon Sep 17 00:00:00 2001
+From: Yang Shi <yang.shi@linux.alibaba.com>
+Date: Tue, 13 Aug 2019 15:37:18 -0700
+Subject: mm: mempolicy: handle vma with unmovable pages mapped correctly in mbind
+
+From: Yang Shi <yang.shi@linux.alibaba.com>
+
+commit a53190a4aaa36494f4d7209fd1fcc6f2ee08e0e0 upstream.
+
+When running syzkaller internally, we ran into the below bug on a 4.9.x
+kernel:
+
+ kernel BUG at mm/huge_memory.c:2124!
+ invalid opcode: 0000 [#1] SMP KASAN
+ CPU: 0 PID: 1518 Comm: syz-executor107 Not tainted 4.9.168+ #2
+ Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.5.1 01/01/2011
+ task: ffff880067b34900 task.stack: ffff880068998000
+ RIP: split_huge_page_to_list+0x8fb/0x1030 mm/huge_memory.c:2124
+ Call Trace:
+ split_huge_page include/linux/huge_mm.h:100 [inline]
+ queue_pages_pte_range+0x7e1/0x1480 mm/mempolicy.c:538
+ walk_pmd_range mm/pagewalk.c:50 [inline]
+ walk_pud_range mm/pagewalk.c:90 [inline]
+ walk_pgd_range mm/pagewalk.c:116 [inline]
+ __walk_page_range+0x44a/0xdb0 mm/pagewalk.c:208
+ walk_page_range+0x154/0x370 mm/pagewalk.c:285
+ queue_pages_range+0x115/0x150 mm/mempolicy.c:694
+ do_mbind mm/mempolicy.c:1241 [inline]
+ SYSC_mbind+0x3c3/0x1030 mm/mempolicy.c:1370
+ SyS_mbind+0x46/0x60 mm/mempolicy.c:1352
+ do_syscall_64+0x1d2/0x600 arch/x86/entry/common.c:282
+ entry_SYSCALL_64_after_swapgs+0x5d/0xdb
+ Code: c7 80 1c 02 00 e8 26 0a 76 01 <0f> 0b 48 c7 c7 40 46 45 84 e8 4c
+ RIP [<ffffffff81895d6b>] split_huge_page_to_list+0x8fb/0x1030 mm/huge_memory.c:2124
+ RSP <ffff88006899f980>
+
+with the below test:
+
+ uint64_t r[1] = {0xffffffffffffffff};
+
+ int main(void)
+ {
+ syscall(__NR_mmap, 0x20000000, 0x1000000, 3, 0x32, -1, 0);
+ intptr_t res = 0;
+ res = syscall(__NR_socket, 0x11, 3, 0x300);
+ if (res != -1)
+ r[0] = res;
+ *(uint32_t*)0x20000040 = 0x10000;
+ *(uint32_t*)0x20000044 = 1;
+ *(uint32_t*)0x20000048 = 0xc520;
+ *(uint32_t*)0x2000004c = 1;
+ syscall(__NR_setsockopt, r[0], 0x107, 0xd, 0x20000040, 0x10);
+ syscall(__NR_mmap, 0x20fed000, 0x10000, 0, 0x8811, r[0], 0);
+ *(uint64_t*)0x20000340 = 2;
+ syscall(__NR_mbind, 0x20ff9000, 0x4000, 0x4002, 0x20000340, 0x45d4, 3);
+ return 0;
+ }
+
+Actually the test does:
+
+ mmap(0x20000000, 16777216, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x20000000
+ socket(AF_PACKET, SOCK_RAW, 768) = 3
+ setsockopt(3, SOL_PACKET, PACKET_TX_RING, {block_size=65536, block_nr=1, frame_size=50464, frame_nr=1}, 16) = 0
+ mmap(0x20fed000, 65536, PROT_NONE, MAP_SHARED|MAP_FIXED|MAP_POPULATE|MAP_DENYWRITE, 3, 0) = 0x20fed000
+ mbind(..., MPOL_MF_STRICT|MPOL_MF_MOVE) = 0
+
+The setsockopt() would allocate compound pages (16 pages in this test)
+for packet tx ring, then the mmap() would call packet_mmap() to map the
+pages into the user address space specified by the mmap() call.
+
+When mbind() is called, it scans the vma to queue the pages for
+migration to the new node. It splits any huge page since 4.9 doesn't
+support THP migration; however, the packet tx ring compound pages are
+not THP and are not even movable. So the above bug is triggered.
+
+However, later kernels are not hit by this issue thanks to commit
+d44d363f6578 ("mm: don't assume anonymous pages have SwapBacked flag"),
+which removes the PageSwapBacked check for a different reason.
+
+But there is a deeper issue. According to the semantics of mbind(), it
+should return -EIO if MPOL_MF_MOVE or MPOL_MF_MOVE_ALL was specified
+together with MPOL_MF_STRICT but the kernel was unable to move all
+existing pages in the range. The tx ring of the packet socket is
+definitely not movable; however, mbind() returns success for this case.
+
+Although most socket files are associated with non-movable pages, XDP
+may have movable pages from gup, so it is not sufficient to just check
+the underlying file type of the vma in vma_migratable().
+
+Change migrate_page_add() to check whether the page is movable; if it
+is unmovable, just return -EIO. But do not abort the pte walk
+immediately, since there may be pages that are only temporarily off
+the LRU, and other pages should still be migrated if MPOL_MF_MOVE* is
+specified. Set the has_unmovable flag if some pages could not be
+moved, then eventually return -EIO from mbind().
+
+With this change the above test would return -EIO as expected.
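+
+From userspace the new behavior can be checked roughly like this (an
+illustrative sketch only; "ring" and "len" stand for the mmap()ed
+packet tx ring from the reproducer above):
+
+ #include <errno.h>
+ #include <numaif.h>
+
+ static int check_tx_ring_mbind(void *ring, unsigned long len)
+ {
+         unsigned long nodemask = 1UL << 0;      /* target node 0 */
+
+         /* MPOL_MF_MOVE | MPOL_MF_STRICT over unmovable pages now
+          * fails with EIO instead of silently returning success */
+         if (mbind(ring, len, MPOL_BIND, &nodemask,
+                   8 * sizeof(nodemask),
+                   MPOL_MF_MOVE | MPOL_MF_STRICT) == -1 &&
+             errno == EIO)
+                 return -1;      /* some pages could not be migrated */
+         return 0;
+ }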
+
+[yang.shi@linux.alibaba.com: fix review comments from Vlastimil]
+ Link: http://lkml.kernel.org/r/1563556862-54056-3-git-send-email-yang.shi@linux.alibaba.com
+Link: http://lkml.kernel.org/r/1561162809-59140-3-git-send-email-yang.shi@linux.alibaba.com
+Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
+Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/mempolicy.c | 32 +++++++++++++++++++++++++-------
+ 1 file changed, 25 insertions(+), 7 deletions(-)
+
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -403,7 +403,7 @@ static const struct mempolicy_operations
+ },
+ };
+
+-static void migrate_page_add(struct page *page, struct list_head *pagelist,
++static int migrate_page_add(struct page *page, struct list_head *pagelist,
+ unsigned long flags);
+
+ struct queue_pages {
+@@ -463,12 +463,11 @@ static int queue_pages_pmd(pmd_t *pmd, s
+ flags = qp->flags;
+ /* go to thp migration */
+ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+- if (!vma_migratable(walk->vma)) {
++ if (!vma_migratable(walk->vma) ||
++ migrate_page_add(page, qp->pagelist, flags)) {
+ ret = 1;
+ goto unlock;
+ }
+-
+- migrate_page_add(page, qp->pagelist, flags);
+ } else
+ ret = -EIO;
+ unlock:
+@@ -532,7 +531,14 @@ static int queue_pages_pte_range(pmd_t *
+ has_unmovable = true;
+ break;
+ }
+- migrate_page_add(page, qp->pagelist, flags);
++
++ /*
++ * Do not abort immediately since there may be
++ * temporary off LRU pages in the range. Still
++ * need migrate other LRU pages.
++ */
++ if (migrate_page_add(page, qp->pagelist, flags))
++ has_unmovable = true;
+ } else
+ break;
+ }
+@@ -947,7 +953,7 @@ static long do_get_mempolicy(int *policy
+ /*
+ * page migration, thp tail pages can be passed.
+ */
+-static void migrate_page_add(struct page *page, struct list_head *pagelist,
++static int migrate_page_add(struct page *page, struct list_head *pagelist,
+ unsigned long flags)
+ {
+ struct page *head = compound_head(page);
+@@ -960,8 +966,19 @@ static void migrate_page_add(struct page
+ mod_node_page_state(page_pgdat(head),
+ NR_ISOLATED_ANON + page_is_file_cache(head),
+ hpage_nr_pages(head));
++ } else if (flags & MPOL_MF_STRICT) {
++ /*
++ * Non-movable page may reach here. And, there may be
++ * temporary off LRU pages or non-LRU movable pages.
++ * Treat them as unmovable pages since they can't be
++ * isolated, so they can't be moved at the moment. It
++ * should return -EIO for this case too.
++ */
++ return -EIO;
+ }
+ }
++
++ return 0;
+ }
+
+ /* page allocation callback for NUMA node migration */
+@@ -1164,9 +1181,10 @@ static struct page *new_page(struct page
+ }
+ #else
+
+-static void migrate_page_add(struct page *page, struct list_head *pagelist,
++static int migrate_page_add(struct page *page, struct list_head *pagelist,
+ unsigned long flags)
+ {
++ return -EIO;
+ }
+
+ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
--- /dev/null
+From d883544515aae54842c21730b880172e7894fde9 Mon Sep 17 00:00:00 2001
+From: Yang Shi <yang.shi@linux.alibaba.com>
+Date: Tue, 13 Aug 2019 15:37:15 -0700
+Subject: mm: mempolicy: make the behavior consistent when MPOL_MF_MOVE* and MPOL_MF_STRICT were specified
+
+From: Yang Shi <yang.shi@linux.alibaba.com>
+
+commit d883544515aae54842c21730b880172e7894fde9 upstream.
+
+When both MPOL_MF_MOVE* and MPOL_MF_STRICT are specified, mbind() should
+try its best to migrate misplaced pages; if some of the pages could not
+be migrated, it should then return -EIO.
+
+There are three different sub-cases:
+ 1. vma is not migratable
+ 2. vma is migratable, but there are unmovable pages
+ 3. vma is migratable, pages are movable, but migrate_pages() fails
+
+If #1 happens, the kernel just aborts immediately and returns -EIO,
+since commit a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when
+MPOL_MF_STRICT is specified").
+
+If #3 happens, the kernel sets the policy and migrates the pages on a
+best-effort basis, but does not roll back the already-migrated pages or
+reset the policy.
+
+Before that commit, the two cases behaved in the same way, and it'd be
+better to keep their behavior consistent. But rolling back the migrated
+pages and resetting the policy is not feasible, so just make #1 behave
+the same as #3.
+
+Userspace will know that not everything was successfully migrated (via
+-EIO), and can take whatever steps it deems necessary - attempt
+rollback, determine which exact page(s) are violating the policy, etc.
+
+Make queue_pages_range() return 1 to indicate that there are unmovable
+pages or that the vma is not migratable.
+
+Case #2 is not handled correctly in the current kernel; the following
+patch will fix it.
+
+[yang.shi@linux.alibaba.com: fix review comments from Vlastimil]
+ Link: http://lkml.kernel.org/r/1563556862-54056-2-git-send-email-yang.shi@linux.alibaba.com
+Link: http://lkml.kernel.org/r/1561162809-59140-2-git-send-email-yang.shi@linux.alibaba.com
+Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
+Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/mempolicy.c | 68 ++++++++++++++++++++++++++++++++++++++++-----------------
+ 1 file changed, 48 insertions(+), 20 deletions(-)
+
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -429,11 +429,14 @@ static inline bool queue_pages_required(
+ }
+
+ /*
+- * queue_pages_pmd() has three possible return values:
+- * 1 - pages are placed on the right node or queued successfully.
+- * 0 - THP was split.
+- * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing
+- * page was already on a node that does not follow the policy.
++ * queue_pages_pmd() has four possible return values:
++ * 0 - pages are placed on the right node or queued successfully.
++ * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
++ * specified.
++ * 2 - THP was split.
++ * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
++ * existing page was already on a node that does not follow the
++ * policy.
+ */
+ static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+@@ -451,19 +454,17 @@ static int queue_pages_pmd(pmd_t *pmd, s
+ if (is_huge_zero_page(page)) {
+ spin_unlock(ptl);
+ __split_huge_pmd(walk->vma, pmd, addr, false, NULL);
++ ret = 2;
+ goto out;
+ }
+- if (!queue_pages_required(page, qp)) {
+- ret = 1;
++ if (!queue_pages_required(page, qp))
+ goto unlock;
+- }
+
+- ret = 1;
+ flags = qp->flags;
+ /* go to thp migration */
+ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+ if (!vma_migratable(walk->vma)) {
+- ret = -EIO;
++ ret = 1;
+ goto unlock;
+ }
+
+@@ -479,6 +480,13 @@ out:
+ /*
+ * Scan through pages checking if pages follow certain conditions,
+ * and move them to the pagelist if they do.
++ *
++ * queue_pages_pte_range() has three possible return values:
++ * 0 - pages are placed on the right node or queued successfully.
++ * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were
++ * specified.
++ * -EIO - only MPOL_MF_STRICT was specified and an existing page was already
++ * on a node that does not follow the policy.
+ */
+ static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr,
+ unsigned long end, struct mm_walk *walk)
+@@ -488,17 +496,17 @@ static int queue_pages_pte_range(pmd_t *
+ struct queue_pages *qp = walk->private;
+ unsigned long flags = qp->flags;
+ int ret;
++ bool has_unmovable = false;
+ pte_t *pte;
+ spinlock_t *ptl;
+
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ ret = queue_pages_pmd(pmd, ptl, addr, end, walk);
+- if (ret > 0)
+- return 0;
+- else if (ret < 0)
++ if (ret != 2)
+ return ret;
+ }
++ /* THP was split, fall through to pte walk */
+
+ if (pmd_trans_unstable(pmd))
+ return 0;
+@@ -519,14 +527,21 @@ static int queue_pages_pte_range(pmd_t *
+ if (!queue_pages_required(page, qp))
+ continue;
+ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+- if (!vma_migratable(vma))
++ /* MPOL_MF_STRICT must be specified if we get here */
++ if (!vma_migratable(vma)) {
++ has_unmovable = true;
+ break;
++ }
+ migrate_page_add(page, qp->pagelist, flags);
+ } else
+ break;
+ }
+ pte_unmap_unlock(pte - 1, ptl);
+ cond_resched();
++
++ if (has_unmovable)
++ return 1;
++
+ return addr != end ? -EIO : 0;
+ }
+
+@@ -639,7 +654,13 @@ static int queue_pages_test_walk(unsigne
+ *
+ * If pages found in a given range are on a set of nodes (determined by
+ * @nodes and @flags,) it's isolated and queued to the pagelist which is
+- * passed via @private.)
++ * passed via @private.
++ *
++ * queue_pages_range() has three possible return values:
++ * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were
++ * specified.
++ * 0 - queue pages successfully or no misplaced page.
++ * -EIO - there is misplaced page and only MPOL_MF_STRICT was specified.
+ */
+ static int
+ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
+@@ -1168,6 +1189,7 @@ static long do_mbind(unsigned long start
+ struct mempolicy *new;
+ unsigned long end;
+ int err;
++ int ret;
+ LIST_HEAD(pagelist);
+
+ if (flags & ~(unsigned long)MPOL_MF_VALID)
+@@ -1229,10 +1251,15 @@ static long do_mbind(unsigned long start
+ if (err)
+ goto mpol_out;
+
+- err = queue_pages_range(mm, start, end, nmask,
++ ret = queue_pages_range(mm, start, end, nmask,
+ flags | MPOL_MF_INVERT, &pagelist);
+- if (!err)
+- err = mbind_range(mm, start, end, new);
++
++ if (ret < 0) {
++ err = -EIO;
++ goto up_out;
++ }
++
++ err = mbind_range(mm, start, end, new);
+
+ if (!err) {
+ int nr_failed = 0;
+@@ -1245,13 +1272,14 @@ static long do_mbind(unsigned long start
+ putback_movable_pages(&pagelist);
+ }
+
+- if (nr_failed && (flags & MPOL_MF_STRICT))
++ if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT)))
+ err = -EIO;
+ } else
+ putback_movable_pages(&pagelist);
+
++up_out:
+ up_write(&mm->mmap_sem);
+- mpol_out:
++mpol_out:
+ mpol_put(new);
+ return err;
+ }
--- /dev/null
+From 951531691c4bcaa59f56a316e018bc2ff1ddf855 Mon Sep 17 00:00:00 2001
+From: "Isaac J. Manjarres" <isaacm@codeaurora.org>
+Date: Tue, 13 Aug 2019 15:37:37 -0700
+Subject: mm/usercopy: use memory range to be accessed for wraparound check
+
+From: Isaac J. Manjarres <isaacm@codeaurora.org>
+
+commit 951531691c4bcaa59f56a316e018bc2ff1ddf855 upstream.
+
+Currently, when checking to see if accessing n bytes starting at address
+"ptr" will cause a wraparound in the memory addresses, the check in
+check_bogus_address() adds an extra byte, which is incorrect, as the
+range of addresses that will be accessed is [ptr, ptr + (n - 1)].
+
+This can lead to incorrectly detecting a wraparound in the memory
+address when trying to read 4 KB from memory that is mapped to the
+last possible page in the virtual address space, when in fact accessing
+that range of memory would not cause a wraparound to occur.
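+
+For example, for a 4 KB copy that ends exactly at the top of the
+address space (a sketch with made-up example numbers on a 64-bit
+kernel):
+
+ unsigned long ptr = 0xfffffffffffff000UL; /* last 4 KB page */
+ unsigned long n   = 0x1000;               /* 4 KB access    */
+
+ ptr + n       == 0x0;                /* old check: looks like a wrap */
+ ptr + (n - 1) == 0xffffffffffffffff; /* new check: last byte touched,
+                                         no wraparound                */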
+
+Use the memory range that will actually be accessed when considering if
+accessing a certain amount of bytes will cause the memory address to
+wrap around.
+
+Link: http://lkml.kernel.org/r/1564509253-23287-1-git-send-email-isaacm@codeaurora.org
+Fixes: f5509cc18daa ("mm: Hardened usercopy")
+Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
+Signed-off-by: Isaac J. Manjarres <isaacm@codeaurora.org>
+Co-developed-by: Prasad Sodagudi <psodagud@codeaurora.org>
+Reviewed-by: William Kucharski <william.kucharski@oracle.com>
+Acked-by: Kees Cook <keescook@chromium.org>
+Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Trilok Soni <tsoni@codeaurora.org>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ mm/usercopy.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/usercopy.c
++++ b/mm/usercopy.c
+@@ -151,7 +151,7 @@ static inline void check_bogus_address(c
+ bool to_user)
+ {
+ /* Reject if object wraps past end of memory. */
+- if (ptr + n < ptr)
++ if (ptr + (n - 1) < ptr)
+ usercopy_abort("wrapped address", NULL, to_user, 0, ptr + n);
+
+ /* Reject if NULL or ZERO-allocation. */
--- /dev/null
+From 6a2aeab59e97101b4001bac84388fc49a992f87e Mon Sep 17 00:00:00 2001
+From: NeilBrown <neilb@suse.com>
+Date: Tue, 13 Aug 2019 15:37:44 -0700
+Subject: seq_file: fix problem when seeking mid-record
+
+From: NeilBrown <neilb@suse.com>
+
+commit 6a2aeab59e97101b4001bac84388fc49a992f87e upstream.
+
+If you use lseek or similar (e.g. pread) to access a location in a
+seq_file file that is within a record, rather than at a record boundary,
+then the first read will return the remainder of the record, and the
+second read will return the whole of that same record (instead of the
+next record). When seeking to a record boundary, the next record is
+correctly returned.
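+
+A minimal way to reproduce the access pattern described above (an
+illustrative sketch; any seq_file-backed file such as /proc/self/maps
+will do):
+
+ #include <fcntl.h>
+ #include <unistd.h>
+
+ int main(void)
+ {
+         char buf[256];
+         int fd = open("/proc/self/maps", O_RDONLY);
+
+         lseek(fd, 10, SEEK_SET);    /* seek into the middle of a record */
+         read(fd, buf, sizeof(buf)); /* remainder of that record */
+         read(fd, buf, sizeof(buf)); /* before this fix: the same record
+                                        is returned again instead of the
+                                        next one */
+         return 0;
+ }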
+
+This bug was introduced by a recent patch (identified below). Before
+that patch, seq_read() would increment m->index when the last of the
+buffer was returned (m->count == 0). After that patch, we rely on
+->next to increment m->index after filling the buffer - but there was
+one place where that didn't happen.
+
+Link: https://lkml.kernel.org/lkml/877e7xl029.fsf@notabene.neil.brown.name/
+Fixes: 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and interface")
+Signed-off-by: NeilBrown <neilb@suse.com>
+Reported-by: Sergei Turchanov <turchanov@farpost.com>
+Tested-by: Sergei Turchanov <turchanov@farpost.com>
+Cc: Alexander Viro <viro@zeniv.linux.org.uk>
+Cc: Markus Elfring <Markus.Elfring@web.de>
+Cc: <stable@vger.kernel.org> [4.19+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/seq_file.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -119,6 +119,7 @@ static int traverse(struct seq_file *m,
+ }
+ if (seq_has_overflowed(m))
+ goto Eoverflow;
++ p = m->op->next(m, p, &m->index);
+ if (pos + m->count > offset) {
+ m->from = offset - pos;
+ m->count -= m->from;
+@@ -126,7 +127,6 @@ static int traverse(struct seq_file *m,
+ }
+ pos += m->count;
+ m->count = 0;
+- p = m->op->next(m, p, &m->index);
+ if (pos == offset)
+ break;
+ }
--- /dev/null
+sh-kernel-hw_breakpoint-fix-missing-break-in-switch-statement.patch
+seq_file-fix-problem-when-seeking-mid-record.patch
+mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one.patch
+mm-mempolicy-make-the-behavior-consistent-when-mpol_mf_move-and-mpol_mf_strict-were-specified.patch
+mm-mempolicy-handle-vma-with-unmovable-pages-mapped-correctly-in-mbind.patch
+mm-memcontrol.c-fix-use-after-free-in-mem_cgroup_iter.patch
+mm-usercopy-use-memory-range-to-be-accessed-for-wraparound-check.patch
--- /dev/null
+From 1ee1119d184bb06af921b48c3021d921bbd85bac Mon Sep 17 00:00:00 2001
+From: "Gustavo A. R. Silva" <gustavo@embeddedor.com>
+Date: Fri, 9 Aug 2019 23:43:56 -0500
+Subject: sh: kernel: hw_breakpoint: Fix missing break in switch statement
+
+From: Gustavo A. R. Silva <gustavo@embeddedor.com>
+
+commit 1ee1119d184bb06af921b48c3021d921bbd85bac upstream.
+
+Add missing break statement in order to prevent the code from falling
+through to case SH_BREAKPOINT_WRITE.
+
+Fixes: 09a072947791 ("sh: hw-breakpoints: Add preliminary support for SH-4A UBC.")
+Cc: stable@vger.kernel.org
+Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
+Reviewed-by: Guenter Roeck <linux@roeck-us.net>
+Tested-by: Guenter Roeck <linux@roeck-us.net>
+Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/sh/kernel/hw_breakpoint.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/sh/kernel/hw_breakpoint.c
++++ b/arch/sh/kernel/hw_breakpoint.c
+@@ -160,6 +160,7 @@ int arch_bp_generic_fields(int sh_len, i
+ switch (sh_type) {
+ case SH_BREAKPOINT_READ:
+ *gen_type = HW_BREAKPOINT_R;
++ break;
+ case SH_BREAKPOINT_WRITE:
+ *gen_type = HW_BREAKPOINT_W;
+ break;