From 520495fe96d74e05db585fc748351e0504d8f40d Mon Sep 17 00:00:00 2001
From: Cannon Matthews <cannonmatthews@google.com>
Date: Tue, 3 Jul 2018 17:02:43 -0700
Subject: mm: hugetlb: yield when prepping struct pages

From: Cannon Matthews <cannonmatthews@google.com>

commit 520495fe96d74e05db585fc748351e0504d8f40d upstream.

When booting with very large numbers of gigantic (i.e. 1G) pages, the
operations in the loop of gather_bootmem_prealloc, and specifically
prep_compound_gigantic_page, take a very long time and can cause a
softlockup if enough pages are requested at boot.

For example, booting with 3844 1G pages requires prepping
(set_compound_head, init the count) over 1 billion 4K tail pages, which
takes considerable time.

Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
prevent this lockup.

Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and
no softlockup is reported, and the hugepages are reported as
successfully set up.

Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews <cannonmatthews@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2038,6 +2038,7 @@ static void __init gather_bootmem_preall
 		 */
 		if (hstate_is_gigantic(h))
 			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
 	}
 }