From 0d02113b31b2017dd349ec9df2314e798a90fa6e Mon Sep 17 00:00:00 2001
From: Qian Cai <cai@lca.pw>
Date: Tue, 23 Apr 2019 12:58:11 -0400
Subject: x86/mm: Fix a crash with kmemleak_scan()

From: Qian Cai <cai@lca.pw>

commit 0d02113b31b2017dd349ec9df2314e798a90fa6e upstream.

The first kmemleak_scan() call after boot would trigger the crash below
because this callpath:

  kernel_init
    free_initmem
      mem_encrypt_free_decrypted_mem
        free_init_pages

unmaps memory inside the .bss when DEBUG_PAGEALLOC=y.

kmemleak_init() will register the .data/.bss sections and then
kmemleak_scan() will scan those addresses and dereference them looking
for pointer references. If free_init_pages() frees and unmaps pages in
those sections, kmemleak_scan() will crash if referencing one of those
addresses:

  BUG: unable to handle kernel paging request at ffffffffbd402000
  CPU: 12 PID: 325 Comm: kmemleak Not tainted 5.1.0-rc4+ #4
  RIP: 0010:scan_block
  Call Trace:
   scan_gray_list
   kmemleak_scan
   kmemleak_scan_thread
   kthread
   ret_from_fork

Since kmemleak_free_part() is tolerant to unknown objects (not tracked
by kmemleak), it is fine to call it from free_init_pages() even if not
all address ranges passed to this function are known to kmemleak.

[ bp: Massage. ]

Fixes: b3f0907c71e0 ("x86/mm: Add .bss..decrypted section to hold shared variables")
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190423165811.36699-1-cai@lca.pw
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/mm/init.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -5,6 +5,7 @@
 #include <linux/memblock.h>
 #include <linux/swapfile.h>
 #include <linux/swapops.h>
+#include <linux/kmemleak.h>
 
 #include <asm/set_memory.h>
 #include <asm/e820/api.h>
@@ -766,6 +767,11 @@ void free_init_pages(const char *what, u
 	if (debug_pagealloc_enabled()) {
 		pr_info("debug: unmapping init [mem %#010lx-%#010lx]\n",
 			begin, end - 1);
+		/*
+		 * Inform kmemleak about the hole in the memory since the
+		 * corresponding pages will be unmapped.
+		 */
+		kmemleak_free_part((void *)begin, end - begin);
 		set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
 	} else {
 		/*