From: Greg Kroah-Hartman
Date: Sun, 23 Aug 2020 12:37:44 +0000 (+0200)
Subject: 4.4-stable patches
X-Git-Tag: v4.4.234~51
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=1845d960561dd410675a965c4f6ce383a31f4135;p=thirdparty%2Fkernel%2Fstable-queue.git

4.4-stable patches

added patches:
	mm-include-cma-pages-in-lowmem_reserve-at-boot.patch
	mm-page_alloc-fix-core-hung-in-free_pcppages_bulk.patch
	romfs-fix-uninitialized-memory-leak-in-romfs_dev_read.patch
---

diff --git a/queue-4.4/mm-include-cma-pages-in-lowmem_reserve-at-boot.patch b/queue-4.4/mm-include-cma-pages-in-lowmem_reserve-at-boot.patch
new file mode 100644
index 00000000000..731e709c62e
--- /dev/null
+++ b/queue-4.4/mm-include-cma-pages-in-lowmem_reserve-at-boot.patch
@@ -0,0 +1,85 @@
+From e08d3fdfe2dafa0331843f70ce1ff6c1c4900bf4 Mon Sep 17 00:00:00 2001
+From: Doug Berger
+Date: Thu, 20 Aug 2020 17:42:24 -0700
+Subject: mm: include CMA pages in lowmem_reserve at boot
+
+From: Doug Berger
+
+commit e08d3fdfe2dafa0331843f70ce1ff6c1c4900bf4 upstream.
+
+The lowmem_reserve arrays provide a means of applying pressure against
+allocations from lower zones that were targeted at higher zones.  Their
+values are a function of the number of pages managed by higher zones and
+are assigned by a call to the setup_per_zone_lowmem_reserve() function.
+
+The function is initially called at boot time by
+init_per_zone_wmark_min() and may be called later by accesses of the
+/proc/sys/vm/lowmem_reserve_ratio sysctl file.
+
+The function init_per_zone_wmark_min() was moved up from a module_init to
+a core_initcall to resolve a sequencing issue with khugepaged.
+Unfortunately this created a sequencing issue with CMA page accounting.
+
+The CMA pages are added to the managed page count of a zone when
+cma_init_reserved_areas() is called at boot, also as a core_initcall.
+This makes it uncertain whether the CMA pages will be added to the
+managed page counts of their zones before or after the call to
+init_per_zone_wmark_min(), as it becomes dependent on link order.  With
+the current link order the pages are added to the managed count after the
+lowmem_reserve arrays are initialized at boot.
+
+This means the lowmem_reserve values at boot may be lower than the values
+used later if /proc/sys/vm/lowmem_reserve_ratio is accessed, even if the
+ratio values are unchanged.
+
+In many cases the difference is not significant, but, for example, an ARM
+platform with 1GB of memory and the following memory layout
+
+  cma: Reserved 256 MiB at 0x0000000030000000
+  Zone ranges:
+    DMA      [mem 0x0000000000000000-0x000000002fffffff]
+    Normal   empty
+    HighMem  [mem 0x0000000030000000-0x000000003fffffff]
+
+would result in a lowmem_reserve of 0 for the DMA zone.  This would allow
+userspace to deplete the DMA zone easily.
+
+Funnily enough
+
+  $ cat /proc/sys/vm/lowmem_reserve_ratio
+
+would fix up the situation because, as a side effect, it forces a call to
+setup_per_zone_lowmem_reserve().
+
+This commit breaks the link-order dependency by invoking
+init_per_zone_wmark_min() as a postcore_initcall, so that the CMA pages
+have the chance to be properly accounted in their zone(s), allowing the
+lowmem_reserve arrays to receive consistent values.
+
+Fixes: bc22af74f271 ("mm: update min_free_kbytes from khugepaged after core initialization")
+Signed-off-by: Doug Berger
+Signed-off-by: Andrew Morton
+Acked-by: Michal Hocko
+Cc: Jason Baron
+Cc: David Rientjes
+Cc: "Kirill A. Shutemov"
+Cc:
+Link: http://lkml.kernel.org/r/1597423766-27849-1-git-send-email-opendmb@gmail.com
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ mm/page_alloc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -6285,7 +6285,7 @@ int __meminit init_per_zone_wmark_min(vo
+ 	setup_per_zone_inactive_ratio();
+ 	return 0;
+ }
+-core_initcall(init_per_zone_wmark_min)
++postcore_initcall(init_per_zone_wmark_min)
+
+ /*
+  * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
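
[Editor's note: the following is an illustrative sketch, not part of the
patch above.  It models why moving from core_initcall to postcore_initcall
removes the link-order dependency: initcalls within the same level run in
link order, while a later level is guaranteed to run after every initcall
of an earlier level.  The function names are hypothetical stand-ins for
cma_init_reserved_areas() and init_per_zone_wmark_min().]

#include <linux/init.h>

/* Stand-in for cma_init_reserved_areas(): adds the CMA pages to
 * zone->managed_pages.  It runs at the core_initcall level. */
static int __init sketch_account_cma_pages(void)
{
	return 0;
}
core_initcall(sketch_account_cma_pages);

/* Stand-in for init_per_zone_wmark_min(): computes the lowmem_reserve
 * arrays from the managed page counts.  As a core_initcall its position
 * relative to the CMA accounting depended on link order; as a
 * postcore_initcall it always runs after every core_initcall has
 * completed, so it sees the CMA pages already accounted. */
static int __init sketch_compute_lowmem_reserve(void)
{
	return 0;
}
postcore_initcall(sketch_compute_lowmem_reserve);
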
Shutemov" +Cc: +Link: http://lkml.kernel.org/r/1597423766-27849-1-git-send-email-opendmb@gmail.com +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/page_alloc.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -6285,7 +6285,7 @@ int __meminit init_per_zone_wmark_min(vo + setup_per_zone_inactive_ratio(); + return 0; + } +-core_initcall(init_per_zone_wmark_min) ++postcore_initcall(init_per_zone_wmark_min) + + /* + * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so diff --git a/queue-4.4/mm-page_alloc-fix-core-hung-in-free_pcppages_bulk.patch b/queue-4.4/mm-page_alloc-fix-core-hung-in-free_pcppages_bulk.patch new file mode 100644 index 00000000000..dc2edd0f85b --- /dev/null +++ b/queue-4.4/mm-page_alloc-fix-core-hung-in-free_pcppages_bulk.patch @@ -0,0 +1,100 @@ +From 88e8ac11d2ea3acc003cf01bb5a38c8aa76c3cfd Mon Sep 17 00:00:00 2001 +From: Charan Teja Reddy +Date: Thu, 20 Aug 2020 17:42:27 -0700 +Subject: mm, page_alloc: fix core hung in free_pcppages_bulk() + +From: Charan Teja Reddy + +commit 88e8ac11d2ea3acc003cf01bb5a38c8aa76c3cfd upstream. + +The following race is observed with the repeated online, offline and a +delay between two successive online of memory blocks of movable zone. + +P1 P2 + +Online the first memory block in +the movable zone. The pcp struct +values are initialized to default +values,i.e., pcp->high = 0 & +pcp->batch = 1. + + Allocate the pages from the + movable zone. + +Try to Online the second memory +block in the movable zone thus it +entered the online_pages() but yet +to call zone_pcp_update(). + This process is entered into + the exit path thus it tries + to release the order-0 pages + to pcp lists through + free_unref_page_commit(). + As pcp->high = 0, pcp->count = 1 + proceed to call the function + free_pcppages_bulk(). +Update the pcp values thus the +new pcp values are like, say, +pcp->high = 378, pcp->batch = 63. + Read the pcp's batch value using + READ_ONCE() and pass the same to + free_pcppages_bulk(), pcp values + passed here are, batch = 63, + count = 1. + + Since num of pages in the pcp + lists are less than ->batch, + then it will stuck in + while(list_empty(list)) loop + with interrupts disabled thus + a core hung. + +Avoid this by ensuring free_pcppages_bulk() is called with proper count of +pcp list pages. + +The mentioned race is some what easily reproducible without [1] because +pcp's are not updated for the first memory block online and thus there is +a enough race window for P2 between alloc+free and pcp struct values +update through onlining of second memory block. + +With [1], the race still exists but it is very narrow as we update the pcp +struct values for the first memory block online itself. + +This is not limited to the movable zone, it could also happen in cases +with the normal zone (e.g., hotplug to a node that only has DMA memory, or +no other memory yet). 
diff --git a/queue-4.4/romfs-fix-uninitialized-memory-leak-in-romfs_dev_read.patch b/queue-4.4/romfs-fix-uninitialized-memory-leak-in-romfs_dev_read.patch
new file mode 100644
index 00000000000..9a75279c636
--- /dev/null
+++ b/queue-4.4/romfs-fix-uninitialized-memory-leak-in-romfs_dev_read.patch
@@ -0,0 +1,55 @@
+From bcf85fcedfdd17911982a3e3564fcfec7b01eebd Mon Sep 17 00:00:00 2001
+From: Jann Horn
+Date: Thu, 20 Aug 2020 17:42:11 -0700
+Subject: romfs: fix uninitialized memory leak in romfs_dev_read()
+
+From: Jann Horn
+
+commit bcf85fcedfdd17911982a3e3564fcfec7b01eebd upstream.
+
+romfs has a superblock field that limits the size of the filesystem; data
+beyond that limit is never accessed.
+
+romfs_dev_read() fetches a caller-supplied number of bytes from the
+backing device.  It returns 0 on success or an error code on failure;
+therefore, its API can't represent short reads: it's all-or-nothing.
+
+However, when romfs_dev_read() detects that the requested operation would
+cross the filesystem size limit, it currently silently truncates the
+requested number of bytes.  This means, for example, that when the
+content of a file with size 0x1000 starts one byte before the filesystem
+size limit, ->readpage() will only fill a single byte of the supplied
+page while leaving the rest uninitialized, leaking that uninitialized
+memory to userspace.
+
+Fix it by returning an error code instead of truncating the read when the
+requested read operation would go beyond the end of the filesystem.
+
+Fixes: da4458bda237 ("NOMMU: Make it possible for RomFS to use MTD devices directly")
+Signed-off-by: Jann Horn
+Signed-off-by: Andrew Morton
+Reviewed-by: Greg Kroah-Hartman
+Cc: David Howells
+Cc:
+Link: http://lkml.kernel.org/r/20200818013202.2246365-1-jannh@google.com
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ fs/romfs/storage.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+--- a/fs/romfs/storage.c
++++ b/fs/romfs/storage.c
+@@ -221,10 +221,8 @@ int romfs_dev_read(struct super_block *s
+ 	size_t limit;
+
+ 	limit = romfs_maxsize(sb);
+-	if (pos >= limit)
++	if (pos >= limit || buflen > limit - pos)
+ 		return -EIO;
+-	if (buflen > limit - pos)
+-		buflen = limit - pos;
+
+ #ifdef CONFIG_ROMFS_ON_MTD
+ 	if (sb->s_mtd)
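
[Editor's note: the following is an illustrative sketch, not part of the
patch above.  It restates, in a self-contained userspace form, the
bounds-check pattern the fix applies: because the helper cannot report a
short read, silently trimming the length leaves part of the caller's
buffer uninitialized, so any request that would cross the limit is
rejected outright.  The function name and the main() scenario are
hypothetical.]

#include <stdio.h>
#include <stddef.h>
#include <errno.h>

/* All-or-nothing bounds check: fail the whole request if any byte of it
 * lies beyond 'limit', instead of silently shortening 'buflen'. */
static int bounded_read(size_t pos, size_t buflen, size_t limit)
{
	if (pos >= limit || buflen > limit - pos)
		return -EIO;

	/* ... perform the full 'buflen'-byte read from the device here ... */
	return 0;
}

int main(void)
{
	/* A 0x1000-byte file whose data starts one byte before the limit:
	 * the old behaviour filled 1 byte and reported success, leaving
	 * 0xfff bytes of the page uninitialized; now the request fails. */
	int ret = bounded_read(0xffff, 0x1000, 0x10000);

	printf("read returned %d (%s)\n", ret, ret ? "rejected" : "ok");
	return 0;
}
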
diff --git a/queue-4.4/series b/queue-4.4/series
index 79cf9f1e121..1fde23ebb37 100644
--- a/queue-4.4/series
+++ b/queue-4.4/series
@@ -8,3 +8,6 @@ khugepaged-khugepaged_test_exit-check-mmget_still_va.patch
 khugepaged-adjust-vm_bug_on_mm-in-__khugepaged_enter.patch
 btrfs-export-helpers-for-subvolume-name-id-resolutio.patch
 btrfs-don-t-show-full-path-of-bind-mounts-in-subvol.patch
+romfs-fix-uninitialized-memory-leak-in-romfs_dev_read.patch
+mm-include-cma-pages-in-lowmem_reserve-at-boot.patch
+mm-page_alloc-fix-core-hung-in-free_pcppages_bulk.patch