From b9bcc919931611498e856eae9bf66337330d04cc Mon Sep 17 00:00:00 2001
From: Dave P Martin <Dave.Martin@arm.com>
Date: Tue, 16 Jun 2015 17:38:47 +0100
Subject: arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

From: Dave P Martin <Dave.Martin@arm.com>

commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size onto the base.  However,
if SPARSEMEM is enabled then the value (start) used for the base
may already have been rounded downwards to work out which memmap
entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when
computing the block end address to ensure the correct value is used.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/mm/init.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -260,7 +260,7 @@ static void __init free_unused_memmap(vo
 	 * memmap entries are valid from the bank end aligned to
 	 * MAX_ORDER_NR_PAGES.
 	 */
-	prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+	prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
 			 MAX_ORDER_NR_PAGES);
 }