From 02e9affead235d0c14313d33ec565b58256cb28e Mon Sep 17 00:00:00 2001
From: Linxu Fang <fanglinxu@huawei.com>
Date: Mon, 13 May 2019 17:19:17 -0700
Subject: mem-hotplug: fix node spanned pages when we have a node with only
 ZONE_MOVABLE

[ Upstream commit 299c83dce9ea3a79bb4b5511d2cb996b6b8e5111 ]

Commit 342332e6a925 ("mm/page_alloc.c: introduce kernelcore=mirror
option") and later patches rewrote the calculation of node spanned
pages.

Commit e506b99696a2 ("mem-hotplug: fix node spanned pages when we have a
movable node") addressed a related case, but the current code still has
a problem: when we have a node that contains only ZONE_MOVABLE and whose
node id is not zero, the node's spanned pages are counted twice.

That's because such a node has an empty Normal zone, and zone_start_pfn
or zone_end_pfn does not lie between arch_zone_lowest_possible_pfn and
arch_zone_highest_possible_pfn, so we need to use clamp() to constrain
the range, just as commit 96e907d13602 ("bootmem: Reimplement
__absent_pages_in_range() using for_each_mem_pfn_range()") did.

e.g.
Zone ranges:
  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Movable zone start for each node
  Node 0: 0x0000000100000000
  Node 1: 0x0000000140000000
Early memory node ranges
  node 0: [mem 0x0000000000001000-0x000000000009efff]
  node 0: [mem 0x0000000000100000-0x00000000bffdffff]
  node 0: [mem 0x0000000100000000-0x000000013fffffff]
  node 1: [mem 0x0000000140000000-0x000000023fffffff]

node 0 DMA spanned:0xfff present:0xf9e absent:0x61
node 0 DMA32 spanned:0xff000 present:0xbefe0 absent:0x40020
node 0 Normal spanned:0 present:0 absent:0
node 0 Movable spanned:0x40000 present:0x40000 absent:0
On node 0 totalpages(node_present_pages): 1048446
node_spanned_pages:1310719
node 1 DMA spanned:0 present:0 absent:0
node 1 DMA32 spanned:0 present:0 absent:0
node 1 Normal spanned:0x100000 present:0x100000 absent:0
node 1 Movable spanned:0x100000 present:0x100000 absent:0
On node 1 totalpages(node_present_pages): 2097152
node_spanned_pages:2097152
Memory: 6967796K/12582392K available (16388K kernel code, 3686K rwdata,
4468K rodata, 2160K init, 10444K bss, 5614596K reserved, 0K
cma-reserved)

This shows that node 1's memory is counted twice.
After this patch, the problem is fixed:

node 0 DMA spanned:0xfff present:0xf9e absent:0x61
node 0 DMA32 spanned:0xff000 present:0xbefe0 absent:0x40020
node 0 Normal spanned:0 present:0 absent:0
node 0 Movable spanned:0x40000 present:0x40000 absent:0
On node 0 totalpages(node_present_pages): 1048446
node_spanned_pages:1310719
node 1 DMA spanned:0 present:0 absent:0
node 1 DMA32 spanned:0 present:0 absent:0
node 1 Normal spanned:0 present:0 absent:0
node 1 Movable spanned:0x100000 present:0x100000 absent:0
On node 1 totalpages(node_present_pages): 1048576
node_spanned_pages:1048576
Memory: 6967796K/8388088K available (16388K kernel code, 3686K rwdata,
4468K rodata, 2160K init, 10444K bss, 1420292K reserved, 0K
cma-reserved)

Link: http://lkml.kernel.org/r/1554178276-10372-1-git-send-email-fanglinxu@huawei.com
Signed-off-by: Linxu Fang <fanglinxu@huawei.com>
Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Pavel Tatashin <pavel.tatashin@microsoft.com>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 mm/page_alloc.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c02cff1ed56e..475ca5b1a824 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6244,13 +6244,15 @@ static unsigned long __init zone_spanned_pages_in_node(int nid,
 					unsigned long *zone_end_pfn,
 					unsigned long *ignored)
 {
+	unsigned long zone_low = arch_zone_lowest_possible_pfn[zone_type];
+	unsigned long zone_high = arch_zone_highest_possible_pfn[zone_type];
 	/* When hotadd a new node from cpu_up(), the node should be empty */
 	if (!node_start_pfn && !node_end_pfn)
 		return 0;

 	/* Get the start and end of the zone */
-	*zone_start_pfn = arch_zone_lowest_possible_pfn[zone_type];
-	*zone_end_pfn = arch_zone_highest_possible_pfn[zone_type];
+	*zone_start_pfn = clamp(node_start_pfn, zone_low, zone_high);
+	*zone_end_pfn = clamp(node_end_pfn, zone_low, zone_high);
 	adjust_zone_range_for_zone_movable(nid, zone_type,
 				node_start_pfn, node_end_pfn,
 				zone_start_pfn, zone_end_pfn);
--
2.20.1