From: Greg Kroah-Hartman Date: Fri, 16 Aug 2019 10:05:57 +0000 (+0200) Subject: 5.2-stable patches X-Git-Tag: v4.19.68~72 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=c3aa6e9e53780452f7864c2a29744958feef3760;p=thirdparty%2Fkernel%2Fstable-queue.git 5.2-stable patches added patches: keys-trusted-allow-module-init-if-tpm-is-inactive-or-deactivated.patch mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one.patch mm-memcontrol.c-fix-use-after-free-in-mem_cgroup_iter.patch mm-mempolicy-handle-vma-with-unmovable-pages-mapped-correctly-in-mbind.patch mm-mempolicy-make-the-behavior-consistent-when-mpol_mf_move-and-mpol_mf_strict-were-specified.patch mm-usercopy-use-memory-range-to-be-accessed-for-wraparound-check.patch mm-vmscan-do-not-special-case-slab-reclaim-when-watermarks-are-boosted.patch mm-z3fold.c-fix-z3fold_destroy_pool-ordering.patch mm-z3fold.c-fix-z3fold_destroy_pool-race-condition.patch seq_file-fix-problem-when-seeking-mid-record.patch sh-kernel-hw_breakpoint-fix-missing-break-in-switch-statement.patch --- diff --git a/queue-5.2/keys-trusted-allow-module-init-if-tpm-is-inactive-or-deactivated.patch b/queue-5.2/keys-trusted-allow-module-init-if-tpm-is-inactive-or-deactivated.patch new file mode 100644 index 00000000000..5eee7f3469d --- /dev/null +++ b/queue-5.2/keys-trusted-allow-module-init-if-tpm-is-inactive-or-deactivated.patch @@ -0,0 +1,65 @@ +From 2d6c25215ab26bb009de3575faab7b685f138e92 Mon Sep 17 00:00:00 2001 +From: Roberto Sassu +Date: Mon, 5 Aug 2019 18:44:27 +0200 +Subject: KEYS: trusted: allow module init if TPM is inactive or deactivated + +From: Roberto Sassu + +commit 2d6c25215ab26bb009de3575faab7b685f138e92 upstream. + +Commit c78719203fc6 ("KEYS: trusted: allow trusted.ko to initialize w/o a +TPM") allows the trusted module to be loaded even if a TPM is not found, to +avoid module dependency problems. + +However, trusted module initialization can still fail if the TPM is +inactive or deactivated. tpm_get_random() returns an error. + +This patch removes the call to tpm_get_random() and instead extends the PCR +specified by the user with zeros. The security of this alternative is +equivalent to the previous one, as either option prevents with a PCR update +unsealing and misuse of sealed data by a user space process. + +Even if a PCR is extended with zeros, instead of random data, it is still +computationally infeasible to find a value as input for a new PCR extend +operation, to obtain again the PCR value that would allow unsealing. 
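
(For reference, once the tpm_get_random() call is removed by the diff below, init_digests() reduces to the allocation sketched here — reconstructed from that diff, with the globals chip and digests coming from the surrounding file. kcalloc() returns zeroed memory, so each per-bank digest used for the PCR extend is implicitly all zeros, which is exactly the behaviour the paragraph above argues is sufficient:

  static int __init init_digests(void)
  {
          digests = kcalloc(chip->nr_allocated_banks, sizeof(*digests),
                            GFP_KERNEL);
          if (!digests)
                  return -ENOMEM;

          return 0;
  }
)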
+ +Cc: stable@vger.kernel.org +Fixes: 240730437deb ("KEYS: trusted: explicitly use tpm_chip structure...") +Signed-off-by: Roberto Sassu +Reviewed-by: Tyler Hicks +Suggested-by: Mimi Zohar +Reviewed-by: Jarkko Sakkinen +Signed-off-by: Jarkko Sakkinen +Signed-off-by: Greg Kroah-Hartman + +--- + security/keys/trusted.c | 13 ------------- + 1 file changed, 13 deletions(-) + +--- a/security/keys/trusted.c ++++ b/security/keys/trusted.c +@@ -1228,24 +1228,11 @@ hashalg_fail: + + static int __init init_digests(void) + { +- u8 digest[TPM_MAX_DIGEST_SIZE]; +- int ret; +- int i; +- +- ret = tpm_get_random(chip, digest, TPM_MAX_DIGEST_SIZE); +- if (ret < 0) +- return ret; +- if (ret < TPM_MAX_DIGEST_SIZE) +- return -EFAULT; +- + digests = kcalloc(chip->nr_allocated_banks, sizeof(*digests), + GFP_KERNEL); + if (!digests) + return -ENOMEM; + +- for (i = 0; i < chip->nr_allocated_banks; i++) +- memcpy(digests[i].digest, digest, TPM_MAX_DIGEST_SIZE); +- + return 0; + } + diff --git a/queue-5.2/mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one.patch b/queue-5.2/mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one.patch new file mode 100644 index 00000000000..aaa3a8be256 --- /dev/null +++ b/queue-5.2/mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one.patch @@ -0,0 +1,81 @@ +From 1de13ee59225dfc98d483f8cce7d83f97c0b31de Mon Sep 17 00:00:00 2001 +From: Ralph Campbell +Date: Tue, 13 Aug 2019 15:37:11 -0700 +Subject: mm/hmm: fix bad subpage pointer in try_to_unmap_one +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Ralph Campbell + +commit 1de13ee59225dfc98d483f8cce7d83f97c0b31de upstream. + +When migrating an anonymous private page to a ZONE_DEVICE private page, +the source page->mapping and page->index fields are copied to the +destination ZONE_DEVICE struct page and the page_mapcount() is +increased. This is so rmap_walk() can be used to unmap and migrate the +page back to system memory. + +However, try_to_unmap_one() computes the subpage pointer from a swap pte +which computes an invalid page pointer and a kernel panic results such +as: + + BUG: unable to handle page fault for address: ffffea1fffffffc8 + +Currently, only single pages can be migrated to device private memory so +no subpage computation is needed and it can be set to "page". + +[rcampbell@nvidia.com: add comment] + Link: http://lkml.kernel.org/r/20190724232700.23327-4-rcampbell@nvidia.com +Link: http://lkml.kernel.org/r/20190719192955.30462-4-rcampbell@nvidia.com +Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in migration") +Signed-off-by: Ralph Campbell +Cc: "Jérôme Glisse" +Cc: "Kirill A. Shutemov" +Cc: Mike Kravetz +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: John Hubbard +Cc: Andrea Arcangeli +Cc: Andrey Ryabinin +Cc: Christoph Lameter +Cc: Dan Williams +Cc: Dave Hansen +Cc: Ira Weiny +Cc: Jan Kara +Cc: Lai Jiangshan +Cc: Logan Gunthorpe +Cc: Martin Schwidefsky +Cc: Matthew Wilcox +Cc: Mel Gorman +Cc: Michal Hocko +Cc: Pekka Enberg +Cc: Randy Dunlap +Cc: Vlastimil Babka +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/rmap.c | 8 ++++++++ + 1 file changed, 8 insertions(+) + +--- a/mm/rmap.c ++++ b/mm/rmap.c +@@ -1475,7 +1475,15 @@ static bool try_to_unmap_one(struct page + /* + * No need to invalidate here it will synchronize on + * against the special swap migration pte. ++ * ++ * The assignment to subpage above was computed from a ++ * swap PTE which results in an invalid pointer. 
++ * Since only PAGE_SIZE pages can currently be ++ * migrated, just set it to page. This will need to be ++ * changed when hugepage migrations to device private ++ * memory are supported. + */ ++ subpage = page; + goto discard; + } + diff --git a/queue-5.2/mm-memcontrol.c-fix-use-after-free-in-mem_cgroup_iter.patch b/queue-5.2/mm-memcontrol.c-fix-use-after-free-in-mem_cgroup_iter.patch new file mode 100644 index 00000000000..9296959047b --- /dev/null +++ b/queue-5.2/mm-memcontrol.c-fix-use-after-free-in-mem_cgroup_iter.patch @@ -0,0 +1,225 @@ +From 54a83d6bcbf8f4700013766b974bf9190d40b689 Mon Sep 17 00:00:00 2001 +From: Miles Chen +Date: Tue, 13 Aug 2019 15:37:28 -0700 +Subject: mm/memcontrol.c: fix use after free in mem_cgroup_iter() + +From: Miles Chen + +commit 54a83d6bcbf8f4700013766b974bf9190d40b689 upstream. + +This patch is sent to report an use after free in mem_cgroup_iter() +after merging commit be2657752e9e ("mm: memcg: fix use after free in +mem_cgroup_iter()"). + +I work with android kernel tree (4.9 & 4.14), and commit be2657752e9e +("mm: memcg: fix use after free in mem_cgroup_iter()") has been merged +to the trees. However, I can still observe use after free issues +addressed in the commit be2657752e9e. (on low-end devices, a few times +this month) + +backtrace: + css_tryget <- crash here + mem_cgroup_iter + shrink_node + shrink_zones + do_try_to_free_pages + try_to_free_pages + __perform_reclaim + __alloc_pages_direct_reclaim + __alloc_pages_slowpath + __alloc_pages_nodemask + +To debug, I poisoned mem_cgroup before freeing it: + + static void __mem_cgroup_free(struct mem_cgroup *memcg) + for_each_node(node) + free_mem_cgroup_per_node_info(memcg, node); + free_percpu(memcg->stat); + + /* poison memcg before freeing it */ + + memset(memcg, 0x78, sizeof(struct mem_cgroup)); + kfree(memcg); + } + +The coredump shows the position=0xdbbc2a00 is freed. 
+ + (gdb) p/x ((struct mem_cgroup_per_node *)0xe5009e00)->iter[8] + $13 = {position = 0xdbbc2a00, generation = 0x2efd} + + 0xdbbc2a00: 0xdbbc2e00 0x00000000 0xdbbc2800 0x00000100 + 0xdbbc2a10: 0x00000200 0x78787878 0x00026218 0x00000000 + 0xdbbc2a20: 0xdcad6000 0x00000001 0x78787800 0x00000000 + 0xdbbc2a30: 0x78780000 0x00000000 0x0068fb84 0x78787878 + 0xdbbc2a40: 0x78787878 0x78787878 0x78787878 0xe3fa5cc0 + 0xdbbc2a50: 0x78787878 0x78787878 0x00000000 0x00000000 + 0xdbbc2a60: 0x00000000 0x00000000 0x00000000 0x00000000 + 0xdbbc2a70: 0x00000000 0x00000000 0x00000000 0x00000000 + 0xdbbc2a80: 0x00000000 0x00000000 0x00000000 0x00000000 + 0xdbbc2a90: 0x00000001 0x00000000 0x00000000 0x00100000 + 0xdbbc2aa0: 0x00000001 0xdbbc2ac8 0x00000000 0x00000000 + 0xdbbc2ab0: 0x00000000 0x00000000 0x00000000 0x00000000 + 0xdbbc2ac0: 0x00000000 0x00000000 0xe5b02618 0x00001000 + 0xdbbc2ad0: 0x00000000 0x78787878 0x78787878 0x78787878 + 0xdbbc2ae0: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2af0: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b00: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b10: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b20: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b30: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b40: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b50: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b60: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b70: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2b80: 0x78787878 0x78787878 0x00000000 0x78787878 + 0xdbbc2b90: 0x78787878 0x78787878 0x78787878 0x78787878 + 0xdbbc2ba0: 0x78787878 0x78787878 0x78787878 0x78787878 + +In the reclaim path, try_to_free_pages() does not setup +sc.target_mem_cgroup and sc is passed to do_try_to_free_pages(), ..., +shrink_node(). + +In mem_cgroup_iter(), root is set to root_mem_cgroup because +sc->target_mem_cgroup is NULL. It is possible to assign a memcg to +root_mem_cgroup.nodeinfo.iter in mem_cgroup_iter(). + + try_to_free_pages + struct scan_control sc = {...}, target_mem_cgroup is 0x0; + do_try_to_free_pages + shrink_zones + shrink_node + mem_cgroup *root = sc->target_mem_cgroup; + memcg = mem_cgroup_iter(root, NULL, &reclaim); + mem_cgroup_iter() + if (!root) + root = root_mem_cgroup; + ... + + css = css_next_descendant_pre(css, &root->css); + memcg = mem_cgroup_from_css(css); + cmpxchg(&iter->position, pos, memcg); + +My device uses memcg non-hierarchical mode. When we release a memcg: +invalidate_reclaim_iterators() reaches only dead_memcg and its parents. +If non-hierarchical mode is used, invalidate_reclaim_iterators() never +reaches root_mem_cgroup. + + static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg) + { + struct mem_cgroup *memcg = dead_memcg; + + for (; memcg; memcg = parent_mem_cgroup(memcg) + ... + } + +So the use after free scenario looks like: + + CPU1 CPU2 + + try_to_free_pages + do_try_to_free_pages + shrink_zones + shrink_node + mem_cgroup_iter() + if (!root) + root = root_mem_cgroup; + ... + css = css_next_descendant_pre(css, &root->css); + memcg = mem_cgroup_from_css(css); + cmpxchg(&iter->position, pos, memcg); + + invalidate_reclaim_iterators(memcg); + ... + __mem_cgroup_free() + kfree(memcg); + + try_to_free_pages + do_try_to_free_pages + shrink_zones + shrink_node + mem_cgroup_iter() + if (!root) + root = root_mem_cgroup; + ... 
+ mz = mem_cgroup_nodeinfo(root, reclaim->pgdat->node_id); + iter = &mz->iter[reclaim->priority]; + pos = READ_ONCE(iter->position); + css_tryget(&pos->css) <- use after free + +To avoid this, we should also invalidate root_mem_cgroup.nodeinfo.iter +in invalidate_reclaim_iterators(). + +[cai@lca.pw: fix -Wparentheses compilation warning] + Link: http://lkml.kernel.org/r/1564580753-17531-1-git-send-email-cai@lca.pw +Link: http://lkml.kernel.org/r/20190730015729.4406-1-miles.chen@mediatek.com +Fixes: 5ac8fb31ad2e ("mm: memcontrol: convert reclaim iterator to simple css refcounting") +Signed-off-by: Miles Chen +Signed-off-by: Qian Cai +Acked-by: Michal Hocko +Cc: Johannes Weiner +Cc: Vladimir Davydov +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/memcontrol.c | 39 +++++++++++++++++++++++++++++---------- + 1 file changed, 29 insertions(+), 10 deletions(-) + +--- a/mm/memcontrol.c ++++ b/mm/memcontrol.c +@@ -1126,26 +1126,45 @@ void mem_cgroup_iter_break(struct mem_cg + css_put(&prev->css); + } + +-static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg) ++static void __invalidate_reclaim_iterators(struct mem_cgroup *from, ++ struct mem_cgroup *dead_memcg) + { +- struct mem_cgroup *memcg = dead_memcg; + struct mem_cgroup_reclaim_iter *iter; + struct mem_cgroup_per_node *mz; + int nid; + int i; + +- for (; memcg; memcg = parent_mem_cgroup(memcg)) { +- for_each_node(nid) { +- mz = mem_cgroup_nodeinfo(memcg, nid); +- for (i = 0; i <= DEF_PRIORITY; i++) { +- iter = &mz->iter[i]; +- cmpxchg(&iter->position, +- dead_memcg, NULL); +- } ++ for_each_node(nid) { ++ mz = mem_cgroup_nodeinfo(from, nid); ++ for (i = 0; i <= DEF_PRIORITY; i++) { ++ iter = &mz->iter[i]; ++ cmpxchg(&iter->position, ++ dead_memcg, NULL); + } + } + } + ++static void invalidate_reclaim_iterators(struct mem_cgroup *dead_memcg) ++{ ++ struct mem_cgroup *memcg = dead_memcg; ++ struct mem_cgroup *last; ++ ++ do { ++ __invalidate_reclaim_iterators(memcg, dead_memcg); ++ last = memcg; ++ } while ((memcg = parent_mem_cgroup(memcg))); ++ ++ /* ++ * When cgruop1 non-hierarchy mode is used, ++ * parent_mem_cgroup() does not walk all the way up to the ++ * cgroup root (root_mem_cgroup). So we have to handle ++ * dead_memcg from cgroup root separately. ++ */ ++ if (last != root_mem_cgroup) ++ __invalidate_reclaim_iterators(root_mem_cgroup, ++ dead_memcg); ++} ++ + /** + * mem_cgroup_scan_tasks - iterate over tasks of a memory cgroup hierarchy + * @memcg: hierarchy root diff --git a/queue-5.2/mm-mempolicy-handle-vma-with-unmovable-pages-mapped-correctly-in-mbind.patch b/queue-5.2/mm-mempolicy-handle-vma-with-unmovable-pages-mapped-correctly-in-mbind.patch new file mode 100644 index 00000000000..0b80d38a0fd --- /dev/null +++ b/queue-5.2/mm-mempolicy-handle-vma-with-unmovable-pages-mapped-correctly-in-mbind.patch @@ -0,0 +1,196 @@ +From a53190a4aaa36494f4d7209fd1fcc6f2ee08e0e0 Mon Sep 17 00:00:00 2001 +From: Yang Shi +Date: Tue, 13 Aug 2019 15:37:18 -0700 +Subject: mm: mempolicy: handle vma with unmovable pages mapped correctly in mbind + +From: Yang Shi + +commit a53190a4aaa36494f4d7209fd1fcc6f2ee08e0e0 upstream. + +When running syzkaller internally, we ran into the below bug on 4.9.x +kernel: + + kernel BUG at mm/huge_memory.c:2124! 
+ invalid opcode: 0000 [#1] SMP KASAN + CPU: 0 PID: 1518 Comm: syz-executor107 Not tainted 4.9.168+ #2 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.5.1 01/01/2011 + task: ffff880067b34900 task.stack: ffff880068998000 + RIP: split_huge_page_to_list+0x8fb/0x1030 mm/huge_memory.c:2124 + Call Trace: + split_huge_page include/linux/huge_mm.h:100 [inline] + queue_pages_pte_range+0x7e1/0x1480 mm/mempolicy.c:538 + walk_pmd_range mm/pagewalk.c:50 [inline] + walk_pud_range mm/pagewalk.c:90 [inline] + walk_pgd_range mm/pagewalk.c:116 [inline] + __walk_page_range+0x44a/0xdb0 mm/pagewalk.c:208 + walk_page_range+0x154/0x370 mm/pagewalk.c:285 + queue_pages_range+0x115/0x150 mm/mempolicy.c:694 + do_mbind mm/mempolicy.c:1241 [inline] + SYSC_mbind+0x3c3/0x1030 mm/mempolicy.c:1370 + SyS_mbind+0x46/0x60 mm/mempolicy.c:1352 + do_syscall_64+0x1d2/0x600 arch/x86/entry/common.c:282 + entry_SYSCALL_64_after_swapgs+0x5d/0xdb + Code: c7 80 1c 02 00 e8 26 0a 76 01 <0f> 0b 48 c7 c7 40 46 45 84 e8 4c + RIP [] split_huge_page_to_list+0x8fb/0x1030 mm/huge_memory.c:2124 + RSP + +with the below test: + + uint64_t r[1] = {0xffffffffffffffff}; + + int main(void) + { + syscall(__NR_mmap, 0x20000000, 0x1000000, 3, 0x32, -1, 0); + intptr_t res = 0; + res = syscall(__NR_socket, 0x11, 3, 0x300); + if (res != -1) + r[0] = res; + *(uint32_t*)0x20000040 = 0x10000; + *(uint32_t*)0x20000044 = 1; + *(uint32_t*)0x20000048 = 0xc520; + *(uint32_t*)0x2000004c = 1; + syscall(__NR_setsockopt, r[0], 0x107, 0xd, 0x20000040, 0x10); + syscall(__NR_mmap, 0x20fed000, 0x10000, 0, 0x8811, r[0], 0); + *(uint64_t*)0x20000340 = 2; + syscall(__NR_mbind, 0x20ff9000, 0x4000, 0x4002, 0x20000340, 0x45d4, 3); + return 0; + } + +Actually the test does: + + mmap(0x20000000, 16777216, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x20000000 + socket(AF_PACKET, SOCK_RAW, 768) = 3 + setsockopt(3, SOL_PACKET, PACKET_TX_RING, {block_size=65536, block_nr=1, frame_size=50464, frame_nr=1}, 16) = 0 + mmap(0x20fed000, 65536, PROT_NONE, MAP_SHARED|MAP_FIXED|MAP_POPULATE|MAP_DENYWRITE, 3, 0) = 0x20fed000 + mbind(..., MPOL_MF_STRICT|MPOL_MF_MOVE) = 0 + +The setsockopt() would allocate compound pages (16 pages in this test) +for packet tx ring, then the mmap() would call packet_mmap() to map the +pages into the user address space specified by the mmap() call. + +When calling mbind(), it would scan the vma to queue the pages for +migration to the new node. It would split any huge page since 4.9 +doesn't support THP migration, however, the packet tx ring compound +pages are not THP and even not movable. So, the above bug is triggered. + +However, the later kernel is not hit by this issue due to commit +d44d363f6578 ("mm: don't assume anonymous pages have SwapBacked flag"), +which just removes the PageSwapBacked check for a different reason. + +But, there is a deeper issue. According to the semantic of mbind(), it +should return -EIO if MPOL_MF_MOVE or MPOL_MF_MOVE_ALL was specified and +MPOL_MF_STRICT was also specified, but the kernel was unable to move all +existing pages in the range. The tx ring of the packet socket is +definitely not movable, however, mbind() returns success for this case. + +Although the most socket file associates with non-movable pages, but XDP +may have movable pages from gup. So, it sounds not fine to just check +the underlying file type of vma in vma_migratable(). + +Change migrate_page_add() to check if the page is movable or not, if it +is unmovable, just return -EIO. 
But do not abort pte walk immediately, +since there may be pages off LRU temporarily. We should migrate other +pages if MPOL_MF_MOVE* is specified. Set has_unmovable flag if some +paged could not be not moved, then return -EIO for mbind() eventually. + +With this change the above test would return -EIO as expected. + +[yang.shi@linux.alibaba.com: fix review comments from Vlastimil] + Link: http://lkml.kernel.org/r/1563556862-54056-3-git-send-email-yang.shi@linux.alibaba.com +Link: http://lkml.kernel.org/r/1561162809-59140-3-git-send-email-yang.shi@linux.alibaba.com +Signed-off-by: Yang Shi +Reviewed-by: Vlastimil Babka +Cc: Michal Hocko +Cc: Mel Gorman +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/mempolicy.c | 32 +++++++++++++++++++++++++------- + 1 file changed, 25 insertions(+), 7 deletions(-) + +--- a/mm/mempolicy.c ++++ b/mm/mempolicy.c +@@ -403,7 +403,7 @@ static const struct mempolicy_operations + }, + }; + +-static void migrate_page_add(struct page *page, struct list_head *pagelist, ++static int migrate_page_add(struct page *page, struct list_head *pagelist, + unsigned long flags); + + struct queue_pages { +@@ -463,12 +463,11 @@ static int queue_pages_pmd(pmd_t *pmd, s + flags = qp->flags; + /* go to thp migration */ + if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { +- if (!vma_migratable(walk->vma)) { ++ if (!vma_migratable(walk->vma) || ++ migrate_page_add(page, qp->pagelist, flags)) { + ret = 1; + goto unlock; + } +- +- migrate_page_add(page, qp->pagelist, flags); + } else + ret = -EIO; + unlock: +@@ -532,7 +531,14 @@ static int queue_pages_pte_range(pmd_t * + has_unmovable = true; + break; + } +- migrate_page_add(page, qp->pagelist, flags); ++ ++ /* ++ * Do not abort immediately since there may be ++ * temporary off LRU pages in the range. Still ++ * need migrate other LRU pages. ++ */ ++ if (migrate_page_add(page, qp->pagelist, flags)) ++ has_unmovable = true; + } else + break; + } +@@ -961,7 +967,7 @@ static long do_get_mempolicy(int *policy + /* + * page migration, thp tail pages can be passed. + */ +-static void migrate_page_add(struct page *page, struct list_head *pagelist, ++static int migrate_page_add(struct page *page, struct list_head *pagelist, + unsigned long flags) + { + struct page *head = compound_head(page); +@@ -974,8 +980,19 @@ static void migrate_page_add(struct page + mod_node_page_state(page_pgdat(head), + NR_ISOLATED_ANON + page_is_file_cache(head), + hpage_nr_pages(head)); ++ } else if (flags & MPOL_MF_STRICT) { ++ /* ++ * Non-movable page may reach here. And, there may be ++ * temporary off LRU pages or non-LRU movable pages. ++ * Treat them as unmovable pages since they can't be ++ * isolated, so they can't be moved at the moment. It ++ * should return -EIO for this case too. 
++ */ ++ return -EIO; + } + } ++ ++ return 0; + } + + /* page allocation callback for NUMA node migration */ +@@ -1178,9 +1195,10 @@ static struct page *new_page(struct page + } + #else + +-static void migrate_page_add(struct page *page, struct list_head *pagelist, ++static int migrate_page_add(struct page *page, struct list_head *pagelist, + unsigned long flags) + { ++ return -EIO; + } + + int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from, diff --git a/queue-5.2/mm-mempolicy-make-the-behavior-consistent-when-mpol_mf_move-and-mpol_mf_strict-were-specified.patch b/queue-5.2/mm-mempolicy-make-the-behavior-consistent-when-mpol_mf_move-and-mpol_mf_strict-were-specified.patch new file mode 100644 index 00000000000..ce1ae60a427 --- /dev/null +++ b/queue-5.2/mm-mempolicy-make-the-behavior-consistent-when-mpol_mf_move-and-mpol_mf_strict-were-specified.patch @@ -0,0 +1,219 @@ +From d883544515aae54842c21730b880172e7894fde9 Mon Sep 17 00:00:00 2001 +From: Yang Shi +Date: Tue, 13 Aug 2019 15:37:15 -0700 +Subject: mm: mempolicy: make the behavior consistent when MPOL_MF_MOVE* and MPOL_MF_STRICT were specified + +From: Yang Shi + +commit d883544515aae54842c21730b880172e7894fde9 upstream. + +When both MPOL_MF_MOVE* and MPOL_MF_STRICT was specified, mbind() should +try best to migrate misplaced pages, if some of the pages could not be +migrated, then return -EIO. + +There are three different sub-cases: + 1. vma is not migratable + 2. vma is migratable, but there are unmovable pages + 3. vma is migratable, pages are movable, but migrate_pages() fails + +If #1 happens, kernel would just abort immediately, then return -EIO, +after a7f40cfe3b7a ("mm: mempolicy: make mbind() return -EIO when +MPOL_MF_STRICT is specified"). + +If #3 happens, kernel would set policy and migrate pages with +best-effort, but won't rollback the migrated pages and reset the policy +back. + +Before that commit, they behaves in the same way. It'd better to keep +their behavior consistent. But, rolling back the migrated pages and +resetting the policy back sounds not feasible, so just make #1 behave as +same as #3. + +Userspace will know that not everything was successfully migrated (via +-EIO), and can take whatever steps it deems necessary - attempt +rollback, determine which exact page(s) are violating the policy, etc. + +Make queue_pages_range() return 1 to indicate there are unmovable pages +or vma is not migratable. + +The #2 is not handled correctly in the current kernel, the following +patch will fix it. + +[yang.shi@linux.alibaba.com: fix review comments from Vlastimil] + Link: http://lkml.kernel.org/r/1563556862-54056-2-git-send-email-yang.shi@linux.alibaba.com +Link: http://lkml.kernel.org/r/1561162809-59140-2-git-send-email-yang.shi@linux.alibaba.com +Signed-off-by: Yang Shi +Reviewed-by: Vlastimil Babka +Cc: Michal Hocko +Cc: Mel Gorman +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/mempolicy.c | 68 ++++++++++++++++++++++++++++++++++++++++----------------- + 1 file changed, 48 insertions(+), 20 deletions(-) + +--- a/mm/mempolicy.c ++++ b/mm/mempolicy.c +@@ -429,11 +429,14 @@ static inline bool queue_pages_required( + } + + /* +- * queue_pages_pmd() has three possible return values: +- * 1 - pages are placed on the right node or queued successfully. +- * 0 - THP was split. +- * -EIO - is migration entry or MPOL_MF_STRICT was specified and an existing +- * page was already on a node that does not follow the policy. 
++ * queue_pages_pmd() has four possible return values: ++ * 0 - pages are placed on the right node or queued successfully. ++ * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were ++ * specified. ++ * 2 - THP was split. ++ * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an ++ * existing page was already on a node that does not follow the ++ * policy. + */ + static int queue_pages_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr, + unsigned long end, struct mm_walk *walk) +@@ -451,19 +454,17 @@ static int queue_pages_pmd(pmd_t *pmd, s + if (is_huge_zero_page(page)) { + spin_unlock(ptl); + __split_huge_pmd(walk->vma, pmd, addr, false, NULL); ++ ret = 2; + goto out; + } +- if (!queue_pages_required(page, qp)) { +- ret = 1; ++ if (!queue_pages_required(page, qp)) + goto unlock; +- } + +- ret = 1; + flags = qp->flags; + /* go to thp migration */ + if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { + if (!vma_migratable(walk->vma)) { +- ret = -EIO; ++ ret = 1; + goto unlock; + } + +@@ -479,6 +480,13 @@ out: + /* + * Scan through pages checking if pages follow certain conditions, + * and move them to the pagelist if they do. ++ * ++ * queue_pages_pte_range() has three possible return values: ++ * 0 - pages are placed on the right node or queued successfully. ++ * 1 - there is unmovable page, and MPOL_MF_MOVE* & MPOL_MF_STRICT were ++ * specified. ++ * -EIO - only MPOL_MF_STRICT was specified and an existing page was already ++ * on a node that does not follow the policy. + */ + static int queue_pages_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, struct mm_walk *walk) +@@ -488,17 +496,17 @@ static int queue_pages_pte_range(pmd_t * + struct queue_pages *qp = walk->private; + unsigned long flags = qp->flags; + int ret; ++ bool has_unmovable = false; + pte_t *pte; + spinlock_t *ptl; + + ptl = pmd_trans_huge_lock(pmd, vma); + if (ptl) { + ret = queue_pages_pmd(pmd, ptl, addr, end, walk); +- if (ret > 0) +- return 0; +- else if (ret < 0) ++ if (ret != 2) + return ret; + } ++ /* THP was split, fall through to pte walk */ + + if (pmd_trans_unstable(pmd)) + return 0; +@@ -519,14 +527,21 @@ static int queue_pages_pte_range(pmd_t * + if (!queue_pages_required(page, qp)) + continue; + if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) { +- if (!vma_migratable(vma)) ++ /* MPOL_MF_STRICT must be specified if we get here */ ++ if (!vma_migratable(vma)) { ++ has_unmovable = true; + break; ++ } + migrate_page_add(page, qp->pagelist, flags); + } else + break; + } + pte_unmap_unlock(pte - 1, ptl); + cond_resched(); ++ ++ if (has_unmovable) ++ return 1; ++ + return addr != end ? -EIO : 0; + } + +@@ -639,7 +654,13 @@ static int queue_pages_test_walk(unsigne + * + * If pages found in a given range are on a set of nodes (determined by + * @nodes and @flags,) it's isolated and queued to the pagelist which is +- * passed via @private.) ++ * passed via @private. ++ * ++ * queue_pages_range() has three possible return values: ++ * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were ++ * specified. ++ * 0 - queue pages successfully or no misplaced page. ++ * -EIO - there is misplaced page and only MPOL_MF_STRICT was specified. 
+ */ + static int + queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end, +@@ -1182,6 +1203,7 @@ static long do_mbind(unsigned long start + struct mempolicy *new; + unsigned long end; + int err; ++ int ret; + LIST_HEAD(pagelist); + + if (flags & ~(unsigned long)MPOL_MF_VALID) +@@ -1243,10 +1265,15 @@ static long do_mbind(unsigned long start + if (err) + goto mpol_out; + +- err = queue_pages_range(mm, start, end, nmask, ++ ret = queue_pages_range(mm, start, end, nmask, + flags | MPOL_MF_INVERT, &pagelist); +- if (!err) +- err = mbind_range(mm, start, end, new); ++ ++ if (ret < 0) { ++ err = -EIO; ++ goto up_out; ++ } ++ ++ err = mbind_range(mm, start, end, new); + + if (!err) { + int nr_failed = 0; +@@ -1259,13 +1286,14 @@ static long do_mbind(unsigned long start + putback_movable_pages(&pagelist); + } + +- if (nr_failed && (flags & MPOL_MF_STRICT)) ++ if ((ret > 0) || (nr_failed && (flags & MPOL_MF_STRICT))) + err = -EIO; + } else + putback_movable_pages(&pagelist); + ++up_out: + up_write(&mm->mmap_sem); +- mpol_out: ++mpol_out: + mpol_put(new); + return err; + } diff --git a/queue-5.2/mm-usercopy-use-memory-range-to-be-accessed-for-wraparound-check.patch b/queue-5.2/mm-usercopy-use-memory-range-to-be-accessed-for-wraparound-check.patch new file mode 100644 index 00000000000..44eeadcc879 --- /dev/null +++ b/queue-5.2/mm-usercopy-use-memory-range-to-be-accessed-for-wraparound-check.patch @@ -0,0 +1,52 @@ +From 951531691c4bcaa59f56a316e018bc2ff1ddf855 Mon Sep 17 00:00:00 2001 +From: "Isaac J. Manjarres" +Date: Tue, 13 Aug 2019 15:37:37 -0700 +Subject: mm/usercopy: use memory range to be accessed for wraparound check + +From: Isaac J. Manjarres + +commit 951531691c4bcaa59f56a316e018bc2ff1ddf855 upstream. + +Currently, when checking to see if accessing n bytes starting at address +"ptr" will cause a wraparound in the memory addresses, the check in +check_bogus_address() adds an extra byte, which is incorrect, as the +range of addresses that will be accessed is [ptr, ptr + (n - 1)]. + +This can lead to incorrectly detecting a wraparound in the memory +address, when trying to read 4 KB from memory that is mapped to the the +last possible page in the virtual address space, when in fact, accessing +that range of memory would not cause a wraparound to occur. + +Use the memory range that will actually be accessed when considering if +accessing a certain amount of bytes will cause the memory address to +wrap around. + +Link: http://lkml.kernel.org/r/1564509253-23287-1-git-send-email-isaacm@codeaurora.org +Fixes: f5509cc18daa ("mm: Hardened usercopy") +Signed-off-by: Prasad Sodagudi +Signed-off-by: Isaac J. Manjarres +Co-developed-by: Prasad Sodagudi +Reviewed-by: William Kucharski +Acked-by: Kees Cook +Cc: Greg Kroah-Hartman +Cc: Trilok Soni +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/usercopy.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/mm/usercopy.c ++++ b/mm/usercopy.c +@@ -147,7 +147,7 @@ static inline void check_bogus_address(c + bool to_user) + { + /* Reject if object wraps past end of memory. */ +- if (ptr + n < ptr) ++ if (ptr + (n - 1) < ptr) + usercopy_abort("wrapped address", NULL, to_user, 0, ptr + n); + + /* Reject if NULL or ZERO-allocation. 
*/ diff --git a/queue-5.2/mm-vmscan-do-not-special-case-slab-reclaim-when-watermarks-are-boosted.patch b/queue-5.2/mm-vmscan-do-not-special-case-slab-reclaim-when-watermarks-are-boosted.patch new file mode 100644 index 00000000000..ed3615e1be3 --- /dev/null +++ b/queue-5.2/mm-vmscan-do-not-special-case-slab-reclaim-when-watermarks-are-boosted.patch @@ -0,0 +1,285 @@ +From 28360f398778d7623a5ff8a8e90958c0d925e120 Mon Sep 17 00:00:00 2001 +From: Mel Gorman +Date: Tue, 13 Aug 2019 15:37:57 -0700 +Subject: mm, vmscan: do not special-case slab reclaim when watermarks are boosted + +From: Mel Gorman + +commit 28360f398778d7623a5ff8a8e90958c0d925e120 upstream. + +Dave Chinner reported a problem pointing a finger at commit 1c30844d2dfe +("mm: reclaim small amounts of memory when an external fragmentation +event occurs"). + +The report is extensive: + + https://lore.kernel.org/linux-mm/20190807091858.2857-1-david@fromorbit.com/ + +and it's worth recording the most relevant parts (colorful language and +typos included). + + When running a simple, steady state 4kB file creation test to + simulate extracting tarballs larger than memory full of small + files into the filesystem, I noticed that once memory fills up + the cache balance goes to hell. + + The workload is creating one dirty cached inode for every dirty + page, both of which should require a single IO each to clean and + reclaim, and creation of inodes is throttled by the rate at which + dirty writeback runs at (via balance dirty pages). Hence the ingest + rate of new cached inodes and page cache pages is identical and + steady. As a result, memory reclaim should quickly find a steady + balance between page cache and inode caches. + + The moment memory fills, the page cache is reclaimed at a much + faster rate than the inode cache, and evidence suggests that + the inode cache shrinker is not being called when large batches + of pages are being reclaimed. In roughly the same time period + that it takes to fill memory with 50% pages and 50% slab caches, + memory reclaim reduces the page cache down to just dirty pages + and slab caches fill the entirety of memory. + + The LRU is largely full of dirty pages, and we're getting spikes + of random writeback from memory reclaim so it's all going to shit. + Behaviour never recovers, the page cache remains pinned at just + dirty pages, and nothing I could tune would make any difference. + vfs_cache_pressure makes no difference - I would set it so high + it should trim the entire inode caches in a single pass, yet it + didn't do anything. It was clear from tracing and live telemetry + that the shrinkers were pretty much not running except when + there was absolutely no memory free at all, and then they did + the minimum necessary to free memory to make progress. + + So I went looking at the code, trying to find places where pages + got reclaimed and the shrinkers weren't called. There's only one + - kswapd doing boosted reclaim as per commit 1c30844d2dfe ("mm: + reclaim small amounts of memory when an external fragmentation + event occurs"). + +The watermark boosting introduced by the commit is triggered in response +to an allocation "fragmentation event". The boosting was not intended +to target THP specifically and triggers even if THP is disabled. +However, with Dave's perfectly reasonable workload, fragmentation events +can be very common given the ratio of slab to page cache allocations so +boosting remains active for long periods of time. 
+ +As high-order allocations might use compaction and compaction cannot +move slab pages the decision was made in the commit to special-case +kswapd when watermarks are boosted -- kswapd avoids reclaiming slab as +reclaiming slab does not directly help compaction. + +As Dave notes, this decision means that slab can be artificially +protected for long periods of time and messes up the balance with slab +and page caches. + +Removing the special casing can still indirectly help avoid +fragmentation by avoiding fragmentation-causing events due to slab +allocation as pages from a slab pageblock will have some slab objects +freed. Furthermore, with the special casing, reclaim behaviour is +unpredictable as kswapd sometimes examines slab and sometimes does not +in a manner that is tricky to tune or analyse. + +This patch removes the special casing. The downside is that this is not +a universal performance win. Some benchmarks that depend on the +residency of data when rereading metadata may see a regression when slab +reclaim is restored to its original behaviour. Similarly, some +benchmarks that only read-once or write-once may perform better when +page reclaim is too aggressive. The primary upside is that slab +shrinker is less surprising (arguably more sane but that's a matter of +opinion), behaves consistently regardless of the fragmentation state of +the system and properly obeys VM sysctls. + +A fsmark benchmark configuration was constructed similar to what Dave +reported and is codified by the mmtest configuration +config-io-fsmark-small-file-stream. It was evaluated on a 1-socket +machine to avoid dealing with NUMA-related issues and the timing of +reclaim. The storage was an SSD Samsung Evo and a fresh trimmed XFS +filesystem was used for the test data. + +This is not an exact replication of Dave's setup. The configuration +scales its parameters depending on the memory size of the SUT to behave +similarly across machines. The parameters mean the first sample +reported by fs_mark is using 50% of RAM which will barely be throttled +and look like a big outlier. Dave used fake NUMA to have multiple +kswapd instances which I didn't replicate. Finally, the number of +iterations differ from Dave's test as the target disk was not large +enough. While not identical, it should be representative. 
+ + fsmark + 5.3.0-rc3 5.3.0-rc3 + vanilla shrinker-v1r1 + Min 1-files/sec 4444.80 ( 0.00%) 4765.60 ( 7.22%) + 1st-qrtle 1-files/sec 5005.10 ( 0.00%) 5091.70 ( 1.73%) + 2nd-qrtle 1-files/sec 4917.80 ( 0.00%) 4855.60 ( -1.26%) + 3rd-qrtle 1-files/sec 4667.40 ( 0.00%) 4831.20 ( 3.51%) + Max-1 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) + Max-5 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) + Max-10 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) + Max-90 1-files/sec 4649.60 ( 0.00%) 4780.70 ( 2.82%) + Max-95 1-files/sec 4491.00 ( 0.00%) 4768.20 ( 6.17%) + Max-99 1-files/sec 4491.00 ( 0.00%) 4768.20 ( 6.17%) + Max 1-files/sec 11421.50 ( 0.00%) 9999.30 ( -12.45%) + Hmean 1-files/sec 5004.75 ( 0.00%) 5075.96 ( 1.42%) + Stddev 1-files/sec 1778.70 ( 0.00%) 1369.66 ( 23.00%) + CoeffVar 1-files/sec 33.70 ( 0.00%) 26.05 ( 22.71%) + BHmean-99 1-files/sec 5053.72 ( 0.00%) 5101.52 ( 0.95%) + BHmean-95 1-files/sec 5053.72 ( 0.00%) 5101.52 ( 0.95%) + BHmean-90 1-files/sec 5107.05 ( 0.00%) 5131.41 ( 0.48%) + BHmean-75 1-files/sec 5208.45 ( 0.00%) 5206.68 ( -0.03%) + BHmean-50 1-files/sec 5405.53 ( 0.00%) 5381.62 ( -0.44%) + BHmean-25 1-files/sec 6179.75 ( 0.00%) 6095.14 ( -1.37%) + + 5.3.0-rc3 5.3.0-rc3 + vanillashrinker-v1r1 + Duration User 501.82 497.29 + Duration System 4401.44 4424.08 + Duration Elapsed 8124.76 8358.05 + +This is showing a slight skew for the max result representing a large +outlier for the 1st, 2nd and 3rd quartile are similar indicating that +the bulk of the results show little difference. Note that an earlier +version of the fsmark configuration showed a regression but that +included more samples taken while memory was still filling. + +Note that the elapsed time is higher. Part of this is that the +configuration included time to delete all the test files when the test +completes -- the test automation handles the possibility of testing +fsmark with multiple thread counts. Without the patch, many of these +objects would be memory resident which is part of what the patch is +addressing. + +There are other important observations that justify the patch. + +1. With the vanilla kernel, the number of dirty pages in the system is + very low for much of the test. With this patch, dirty pages is + generally kept at 10% which matches vm.dirty_background_ratio which + is normal expected historical behaviour. + +2. With the vanilla kernel, the ratio of Slab/Pagecache is close to + 0.95 for much of the test i.e. Slab is being left alone and + dominating memory consumption. With the patch applied, the ratio + varies between 0.35 and 0.45 with the bulk of the measured ratios + roughly half way between those values. This is a different balance to + what Dave reported but it was at least consistent. + +3. Slabs are scanned throughout the entire test with the patch applied. + The vanille kernel has periods with no scan activity and then + relatively massive spikes. + +4. Without the patch, kswapd scan rates are very variable. With the + patch, the scan rates remain quite steady. + +4. 
Overall vmstats are closer to normal expectations + + 5.3.0-rc3 5.3.0-rc3 + vanilla shrinker-v1r1 + Ops Direct pages scanned 99388.00 328410.00 + Ops Kswapd pages scanned 45382917.00 33451026.00 + Ops Kswapd pages reclaimed 30869570.00 25239655.00 + Ops Direct pages reclaimed 74131.00 5830.00 + Ops Kswapd efficiency % 68.02 75.45 + Ops Kswapd velocity 5585.75 4002.25 + Ops Page reclaim immediate 1179721.00 430927.00 + Ops Slabs scanned 62367361.00 73581394.00 + Ops Direct inode steals 2103.00 1002.00 + Ops Kswapd inode steals 570180.00 5183206.00 + + o Vanilla kernel is hitting direct reclaim more frequently, + not very much in absolute terms but the fact the patch + reduces it is interesting + o "Page reclaim immediate" in the vanilla kernel indicates + dirty pages are being encountered at the tail of the LRU. + This is generally bad and means in this case that the LRU + is not long enough for dirty pages to be cleaned by the + background flush in time. This is much reduced by the + patch. + o With the patch, kswapd is reclaiming 10 times more slab + pages than with the vanilla kernel. This is indicative + of the watermark boosting over-protecting slab + +A more complete set of tests were run that were part of the basis for +introducing boosting and while there are some differences, they are well +within tolerances. + +Bottom line, the special casing kswapd to avoid slab behaviour is +unpredictable and can lead to abnormal results for normal workloads. + +This patch restores the expected behaviour that slab and page cache is +balanced consistently for a workload with a steady allocation ratio of +slab/pagecache pages. It also means that if there are workloads that +favour the preservation of slab over pagecache that it can be tuned via +vm.vfs_cache_pressure where as the vanilla kernel effectively ignores +the parameter when boosting is active. + +Link: http://lkml.kernel.org/r/20190808182946.GM2739@techsingularity.net +Fixes: 1c30844d2dfe ("mm: reclaim small amounts of memory when an external fragmentation event occurs") +Signed-off-by: Mel Gorman +Reviewed-by: Dave Chinner +Acked-by: Vlastimil Babka +Cc: Michal Hocko +Cc: [5.0+] +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/vmscan.c | 13 ++----------- + 1 file changed, 2 insertions(+), 11 deletions(-) + +--- a/mm/vmscan.c ++++ b/mm/vmscan.c +@@ -88,9 +88,6 @@ struct scan_control { + /* Can pages be swapped as part of reclaim? */ + unsigned int may_swap:1; + +- /* e.g. boosted watermark reclaim leaves slabs alone */ +- unsigned int may_shrinkslab:1; +- + /* + * Cgroups are not reclaimed below their configured memory.low, + * unless we threaten to OOM. 
If any cgroups are skipped due to +@@ -2669,10 +2666,8 @@ static bool shrink_node(pg_data_t *pgdat + shrink_node_memcg(pgdat, memcg, sc, &lru_pages); + node_lru_pages += lru_pages; + +- if (sc->may_shrinkslab) { +- shrink_slab(sc->gfp_mask, pgdat->node_id, +- memcg, sc->priority); +- } ++ shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, ++ sc->priority); + + /* Record the group's reclaim efficiency */ + vmpressure(sc->gfp_mask, memcg, false, +@@ -3149,7 +3144,6 @@ unsigned long try_to_free_pages(struct z + .may_writepage = !laptop_mode, + .may_unmap = 1, + .may_swap = 1, +- .may_shrinkslab = 1, + }; + + /* +@@ -3191,7 +3185,6 @@ unsigned long mem_cgroup_shrink_node(str + .may_unmap = 1, + .reclaim_idx = MAX_NR_ZONES - 1, + .may_swap = !noswap, +- .may_shrinkslab = 1, + }; + unsigned long lru_pages; + +@@ -3236,7 +3229,6 @@ unsigned long try_to_free_mem_cgroup_pag + .may_writepage = !laptop_mode, + .may_unmap = 1, + .may_swap = may_swap, +- .may_shrinkslab = 1, + }; + + /* +@@ -3545,7 +3537,6 @@ restart: + */ + sc.may_writepage = !laptop_mode && !nr_boost_reclaim; + sc.may_swap = !nr_boost_reclaim; +- sc.may_shrinkslab = !nr_boost_reclaim; + + /* + * Do some background aging of the anon list, to give diff --git a/queue-5.2/mm-z3fold.c-fix-z3fold_destroy_pool-ordering.patch b/queue-5.2/mm-z3fold.c-fix-z3fold_destroy_pool-ordering.patch new file mode 100644 index 00000000000..a6ef65076d0 --- /dev/null +++ b/queue-5.2/mm-z3fold.c-fix-z3fold_destroy_pool-ordering.patch @@ -0,0 +1,67 @@ +From 6051d3bd3b91e96c59e62b8be2dba1cc2b19ee40 Mon Sep 17 00:00:00 2001 +From: Henry Burns +Date: Tue, 13 Aug 2019 15:37:21 -0700 +Subject: mm/z3fold.c: fix z3fold_destroy_pool() ordering + +From: Henry Burns + +commit 6051d3bd3b91e96c59e62b8be2dba1cc2b19ee40 upstream. + +The constraint from the zpool use of z3fold_destroy_pool() is there are +no outstanding handles to memory (so no active allocations), but it is +possible for there to be outstanding work on either of the two wqs in +the pool. + +If there is work queued on pool->compact_workqueue when it is called, +z3fold_destroy_pool() will do: + + z3fold_destroy_pool() + destroy_workqueue(pool->release_wq) + destroy_workqueue(pool->compact_wq) + drain_workqueue(pool->compact_wq) + do_compact_page(zhdr) + kref_put(&zhdr->refcount) + __release_z3fold_page(zhdr, ...) + queue_work_on(pool->release_wq, &pool->work) *BOOM* + +So compact_wq needs to be destroyed before release_wq. + +Link: http://lkml.kernel.org/r/20190726224810.79660-1-henryburns@google.com +Fixes: 5d03a6613957 ("mm/z3fold.c: use kref to prevent page free/compact race") +Signed-off-by: Henry Burns +Reviewed-by: Shakeel Butt +Reviewed-by: Jonathan Adams +Cc: Vitaly Vul +Cc: Vitaly Wool +Cc: David Howells +Cc: Thomas Gleixner +Cc: Al Viro +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/z3fold.c | 9 ++++++++- + 1 file changed, 8 insertions(+), 1 deletion(-) + +--- a/mm/z3fold.c ++++ b/mm/z3fold.c +@@ -820,8 +820,15 @@ static void z3fold_destroy_pool(struct z + { + kmem_cache_destroy(pool->c_handle); + z3fold_unregister_migration(pool); +- destroy_workqueue(pool->release_wq); ++ ++ /* ++ * We need to destroy pool->compact_wq before pool->release_wq, ++ * as any pending work on pool->compact_wq will call ++ * queue_work(pool->release_wq, &pool->work). 
++ */ ++ + destroy_workqueue(pool->compact_wq); ++ destroy_workqueue(pool->release_wq); + kfree(pool); + } + diff --git a/queue-5.2/mm-z3fold.c-fix-z3fold_destroy_pool-race-condition.patch b/queue-5.2/mm-z3fold.c-fix-z3fold_destroy_pool-race-condition.patch new file mode 100644 index 00000000000..ab09e273bea --- /dev/null +++ b/queue-5.2/mm-z3fold.c-fix-z3fold_destroy_pool-race-condition.patch @@ -0,0 +1,62 @@ +From b997052bc3ac444a0bceab1093aff7ae71ed419e Mon Sep 17 00:00:00 2001 +From: Henry Burns +Date: Tue, 13 Aug 2019 15:37:25 -0700 +Subject: mm/z3fold.c: fix z3fold_destroy_pool() race condition + +From: Henry Burns + +commit b997052bc3ac444a0bceab1093aff7ae71ed419e upstream. + +The constraint from the zpool use of z3fold_destroy_pool() is there are +no outstanding handles to memory (so no active allocations), but it is +possible for there to be outstanding work on either of the two wqs in +the pool. + +Calling z3fold_deregister_migration() before the workqueues are drained +means that there can be allocated pages referencing a freed inode, +causing any thread in compaction to be able to trip over the bad pointer +in PageMovable(). + +Link: http://lkml.kernel.org/r/20190726224810.79660-2-henryburns@google.com +Fixes: 1f862989b04a ("mm/z3fold.c: support page migration") +Signed-off-by: Henry Burns +Reviewed-by: Shakeel Butt +Reviewed-by: Jonathan Adams +Cc: Vitaly Vul +Cc: Vitaly Wool +Cc: David Howells +Cc: Thomas Gleixner +Cc: Al Viro +Cc: Henry Burns +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/z3fold.c | 5 ++++- + 1 file changed, 4 insertions(+), 1 deletion(-) + +--- a/mm/z3fold.c ++++ b/mm/z3fold.c +@@ -819,16 +819,19 @@ out: + static void z3fold_destroy_pool(struct z3fold_pool *pool) + { + kmem_cache_destroy(pool->c_handle); +- z3fold_unregister_migration(pool); + + /* + * We need to destroy pool->compact_wq before pool->release_wq, + * as any pending work on pool->compact_wq will call + * queue_work(pool->release_wq, &pool->work). ++ * ++ * There are still outstanding pages until both workqueues are drained, ++ * so we cannot unregister migration until then. + */ + + destroy_workqueue(pool->compact_wq); + destroy_workqueue(pool->release_wq); ++ z3fold_unregister_migration(pool); + kfree(pool); + } + diff --git a/queue-5.2/seq_file-fix-problem-when-seeking-mid-record.patch b/queue-5.2/seq_file-fix-problem-when-seeking-mid-record.patch new file mode 100644 index 00000000000..40ccd35d7ea --- /dev/null +++ b/queue-5.2/seq_file-fix-problem-when-seeking-mid-record.patch @@ -0,0 +1,56 @@ +From 6a2aeab59e97101b4001bac84388fc49a992f87e Mon Sep 17 00:00:00 2001 +From: NeilBrown +Date: Tue, 13 Aug 2019 15:37:44 -0700 +Subject: seq_file: fix problem when seeking mid-record + +From: NeilBrown + +commit 6a2aeab59e97101b4001bac84388fc49a992f87e upstream. + +If you use lseek or similar (e.g. pread) to access a location in a +seq_file file that is within a record, rather than at a record boundary, +then the first read will return the remainder of the record, and the +second read will return the whole of that same record (instead of the +next record). When seeking to a record boundary, the next record is +correctly returned. + +This bug was introduced by a recent patch (identified below). Before +that patch, seq_read() would increment m->index when the last of the +buffer was returned (m->count == 0). 
After that patch, we rely on +->next to increment m->index after filling the buffer - but there was +one place where that didn't happen. + +Link: https://lkml.kernel.org/lkml/877e7xl029.fsf@notabene.neil.brown.name/ +Fixes: 1f4aace60b0e ("fs/seq_file.c: simplify seq_file iteration code and interface") +Signed-off-by: NeilBrown +Reported-by: Sergei Turchanov +Tested-by: Sergei Turchanov +Cc: Alexander Viro +Cc: Markus Elfring +Cc: [4.19+] +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + fs/seq_file.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/fs/seq_file.c ++++ b/fs/seq_file.c +@@ -119,6 +119,7 @@ static int traverse(struct seq_file *m, + } + if (seq_has_overflowed(m)) + goto Eoverflow; ++ p = m->op->next(m, p, &m->index); + if (pos + m->count > offset) { + m->from = offset - pos; + m->count -= m->from; +@@ -126,7 +127,6 @@ static int traverse(struct seq_file *m, + } + pos += m->count; + m->count = 0; +- p = m->op->next(m, p, &m->index); + if (pos == offset) + break; + } diff --git a/queue-5.2/series b/queue-5.2/series new file mode 100644 index 00000000000..7c4805c3671 --- /dev/null +++ b/queue-5.2/series @@ -0,0 +1,11 @@ +keys-trusted-allow-module-init-if-tpm-is-inactive-or-deactivated.patch +sh-kernel-hw_breakpoint-fix-missing-break-in-switch-statement.patch +seq_file-fix-problem-when-seeking-mid-record.patch +mm-hmm-fix-bad-subpage-pointer-in-try_to_unmap_one.patch +mm-mempolicy-make-the-behavior-consistent-when-mpol_mf_move-and-mpol_mf_strict-were-specified.patch +mm-mempolicy-handle-vma-with-unmovable-pages-mapped-correctly-in-mbind.patch +mm-z3fold.c-fix-z3fold_destroy_pool-ordering.patch +mm-z3fold.c-fix-z3fold_destroy_pool-race-condition.patch +mm-memcontrol.c-fix-use-after-free-in-mem_cgroup_iter.patch +mm-usercopy-use-memory-range-to-be-accessed-for-wraparound-check.patch +mm-vmscan-do-not-special-case-slab-reclaim-when-watermarks-are-boosted.patch diff --git a/queue-5.2/sh-kernel-hw_breakpoint-fix-missing-break-in-switch-statement.patch b/queue-5.2/sh-kernel-hw_breakpoint-fix-missing-break-in-switch-statement.patch new file mode 100644 index 00000000000..190916ee97f --- /dev/null +++ b/queue-5.2/sh-kernel-hw_breakpoint-fix-missing-break-in-switch-statement.patch @@ -0,0 +1,34 @@ +From 1ee1119d184bb06af921b48c3021d921bbd85bac Mon Sep 17 00:00:00 2001 +From: "Gustavo A. R. Silva" +Date: Fri, 9 Aug 2019 23:43:56 -0500 +Subject: sh: kernel: hw_breakpoint: Fix missing break in switch statement + +From: Gustavo A. R. Silva + +commit 1ee1119d184bb06af921b48c3021d921bbd85bac upstream. + +Add missing break statement in order to prevent the code from falling +through to case SH_BREAKPOINT_WRITE. + +Fixes: 09a072947791 ("sh: hw-breakpoints: Add preliminary support for SH-4A UBC.") +Cc: stable@vger.kernel.org +Reviewed-by: Geert Uytterhoeven +Reviewed-by: Guenter Roeck +Tested-by: Guenter Roeck +Signed-off-by: Gustavo A. R. Silva +Signed-off-by: Greg Kroah-Hartman + +--- + arch/sh/kernel/hw_breakpoint.c | 1 + + 1 file changed, 1 insertion(+) + +--- a/arch/sh/kernel/hw_breakpoint.c ++++ b/arch/sh/kernel/hw_breakpoint.c +@@ -157,6 +157,7 @@ int arch_bp_generic_fields(int sh_len, i + switch (sh_type) { + case SH_BREAKPOINT_READ: + *gen_type = HW_BREAKPOINT_R; ++ break; + case SH_BREAKPOINT_WRITE: + *gen_type = HW_BREAKPOINT_W; + break;
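
As a footnote to the final patch, a stand-alone userspace sketch of the bug it fixes; the enum values here are made up for illustration and are not the kernel's definitions, but the control flow mirrors the pre-patch switch: without the added break, a read-breakpoint request falls through and is reported as a write breakpoint.

  #include <stdio.h>

  enum sh_bp_type { SH_BREAKPOINT_READ = 1, SH_BREAKPOINT_WRITE = 2 };
  enum gen_bp_type { HW_BREAKPOINT_R = 1, HW_BREAKPOINT_W = 2 };

  /* Mirrors the pre-patch switch: no break after the READ case. */
  static int to_generic_type(enum sh_bp_type sh_type)
  {
          int gen_type = 0;

          switch (sh_type) {
          case SH_BREAKPOINT_READ:
                  gen_type = HW_BREAKPOINT_R;
                  /* missing break: execution falls into the WRITE case */
          case SH_BREAKPOINT_WRITE:
                  gen_type = HW_BREAKPOINT_W;
                  break;
          }

          return gen_type;
  }

  int main(void)
  {
          /* Prints 2 (HW_BREAKPOINT_W) although a read breakpoint was requested. */
          printf("%d\n", to_generic_type(SH_BREAKPOINT_READ));
          return 0;
  }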