From 00e5a2bbcc31d5fea853f8daeba0f06c1c88c3ff Mon Sep 17 00:00:00 2001
From: Baoquan He <bhe@redhat.com>
Date: Thu, 23 May 2019 10:57:44 +0800
Subject: x86/mm/KASLR: Compute the size of the vmemmap section properly

From: Baoquan He <bhe@redhat.com>

commit 00e5a2bbcc31d5fea853f8daeba0f06c1c88c3ff upstream.

The size of the vmemmap section is hardcoded to 1 TB to support the
maximum amount of system RAM in 4-level paging mode - 64 TB.

However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
the size of struct page is 64 bytes, supporting 4 PB of system RAM in
5-level mode requires roughly 64 TB of vmemmap area:

  4 PB = 4 * 1000^5 bytes; 4 * 1000^5 / 4096 bytes page size * 64 bytes per struct page / 1000^4 bytes per TB = 62.5 TB.

This hardcoding may cause vmemmap to corrupt the following
cpu_entry_area section, if KASLR puts vmemmap very close to it and the
actual vmemmap size is bigger than 1 TB.

So calculate the actual size of the vmemmap region needed and then align
it up to a 1 TB boundary.

In 4-level paging mode it is always 1 TB; in 5-level mode it is adjusted
on demand. The current code reserves 0.5 PB for vmemmap in 5-level mode.
With this change, that space is saved and can instead be used to increase
the entropy available for randomization.
30
31 [ bp: Spell out how the 64 TB needed for vmemmap is computed and massage commit
32 message. ]
33
34 Fixes: eedb92abb9bb ("x86/mm: Make virtual memory layout dynamic for CONFIG_X86_5LEVEL=y")
35 Signed-off-by: Baoquan He <bhe@redhat.com>
36 Signed-off-by: Borislav Petkov <bp@suse.de>
37 Reviewed-by: Kees Cook <keescook@chromium.org>
38 Acked-by: Kirill A. Shutemov <kirill@linux.intel.com>
39 Cc: Andy Lutomirski <luto@kernel.org>
40 Cc: Dave Hansen <dave.hansen@linux.intel.com>
41 Cc: "H. Peter Anvin" <hpa@zytor.com>
42 Cc: Ingo Molnar <mingo@kernel.org>
43 Cc: kirill.shutemov@linux.intel.com
44 Cc: Peter Zijlstra <peterz@infradead.org>
45 Cc: stable <stable@vger.kernel.org>
46 Cc: Thomas Gleixner <tglx@linutronix.de>
47 Cc: x86-ml <x86@kernel.org>
48 Link: https://lkml.kernel.org/r/20190523025744.3756-1-bhe@redhat.com
49 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
50
51 ---
52 arch/x86/mm/kaslr.c | 11 ++++++++++-
53 1 file changed, 10 insertions(+), 1 deletion(-)
54
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -51,7 +51,7 @@ static __initdata struct kaslr_memory_re
 } kaslr_regions[] = {
 	{ &page_offset_base, 0 },
 	{ &vmalloc_base, 0 },
-	{ &vmemmap_base, 1 },
+	{ &vmemmap_base, 0 },
 };
 
 /* Get size in bytes used by the memory region */
@@ -77,6 +77,7 @@ void __init kernel_randomize_memory(void
 	unsigned long rand, memory_tb;
 	struct rnd_state rand_state;
 	unsigned long remain_entropy;
+	unsigned long vmemmap_size;
 
 	vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
 	vaddr = vaddr_start;
@@ -108,6 +109,14 @@ void __init kernel_randomize_memory(void
 	if (memory_tb < kaslr_regions[0].size_tb)
 		kaslr_regions[0].size_tb = memory_tb;
 
+	/*
+	 * Calculate the vmemmap region size in TBs, aligned to a TB
+	 * boundary.
+	 */
+	vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
+		sizeof(struct page);
+	kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
+
 	/* Calculate entropy available between regions */
 	remain_entropy = vaddr_end - vaddr_start;
 	for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)