From 101e03d1c6833a88f1653b995557194bebd55ce8 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Fri, 1 Feb 2019 14:20:34 -0800
Subject: mm, memory_hotplug: is_mem_section_removable do not pass the end of a
 zone

[ Upstream commit efad4e475c312456edb3c789d0996d12ed744c13 ]

Patch series "mm, memory_hotplug: fix uninitialized pages fallouts", v2.

Mikhail Zaslonko posted fixes for these two bugs quite some time ago
[1]. I pushed back on those fixes because I believed it was much better
to plug the problem at initialization time rather than play
whack-a-mole all over the hotplug code, finding all the places which
expect a full memory section to be initialized.

We ended up merging commit 2830bf6f05fb ("mm, memory_hotplug:
initialize struct pages for the full memory section"), which caused a
regression [2][3]. The reason is that there are memory layouts where
two NUMA nodes share the same memory section, so the merged fix is
simply incorrect.

In order to plug this hole we really have to be zone-range aware in
those handlers. I have split the original patch into two. One is
unchanged (patch 2), and I took a different approach for the
`removable' crash.

[1] http://lkml.kernel.org/r/20181105150401.97287-2-zaslonko@linux.ibm.com
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1666948
[3] http://lkml.kernel.org/r/20190125163938.GA20411@dhcp22.suse.cz

This patch (of 2):

Mikhail has reported the following VM_BUG_ON triggered when reading the
sysfs removable state of a memory block:

 page:000003d08300c000 is uninitialized and poisoned
 page dumped because: VM_BUG_ON_PAGE(PagePoisoned(p))
 Call Trace:
  is_mem_section_removable+0xb4/0x190
  show_mem_removable+0x9a/0xd8
  dev_attr_show+0x34/0x70
  sysfs_kf_seq_show+0xc8/0x148
  seq_read+0x204/0x480
  __vfs_read+0x32/0x178
  vfs_read+0x82/0x138
  ksys_read+0x5a/0xb0
  system_call+0xdc/0x2d8
 Last Breaking-Event-Address:
  is_mem_section_removable+0xb4/0x190
 Kernel panic - not syncing: Fatal exception: panic_on_oops

The reason is that the memory block spans a zone boundary and we
stumble over an uninitialized struct page. Fix this by enforcing the
zone range in is_mem_section_removable so that we never walk past the
end of a zone.

Link: http://lkml.kernel.org/r/20190128144506.15603-2-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Mikhail Zaslonko <zaslonko@linux.ibm.com>
Debugged-by: Mikhail Zaslonko <zaslonko@linux.ibm.com>
Tested-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/memory_hotplug.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 05014e89efae..3fb2067c36a4 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1321,7 +1321,8 @@ static struct page *next_active_pageblock(struct page *page)
 int is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)
 {
 	struct page *page = pfn_to_page(start_pfn);
-	struct page *end_page = page + nr_pages;
+	unsigned long end_pfn = min(start_pfn + nr_pages, zone_end_pfn(page_zone(page)));
+	struct page *end_page = pfn_to_page(end_pfn);
 
 	/* Check the starting page of each pageblock within the range */
 	for (; page < end_page; page = next_active_pageblock(page)) {
-- 
2.19.1