From 520495fe96d74e05db585fc748351e0504d8f40d Mon Sep 17 00:00:00 2001
From: Cannon Matthews <cannonmatthews@google.com>
Date: Tue, 3 Jul 2018 17:02:43 -0700
Subject: mm: hugetlb: yield when prepping struct pages

From: Cannon Matthews <cannonmatthews@google.com>

commit 520495fe96d74e05db585fc748351e0504d8f40d upstream.

When booting with very large numbers of gigantic (i.e. 1G) pages, the
operations in the loop of gather_bootmem_prealloc, and specifically
prep_compound_gigantic_page, take a very long time, and can cause a
softlockup if enough pages are requested at boot.

For example, booting with 3844 1G pages requires prepping
(set_compound_head, init the count) over 1 billion 4K tail pages, which
takes considerable time.

Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
prevent this lockup.

Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844;
no softlockup is reported, and the hugepages are reported as
successfully set up.

Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews <cannonmatthews@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 mm/hugetlb.c |    1 +
 1 file changed, 1 insertion(+)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2038,6 +2038,7 @@ static void __init gather_bootmem_preall
 		 */
 		if (hstate_is_gigantic(h))
 			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
 	}
 }