From b9bcc919931611498e856eae9bf66337330d04cc Mon Sep 17 00:00:00 2001
From: Dave P Martin <Dave.Martin@arm.com>
Date: Tue, 16 Jun 2015 17:38:47 +0100
Subject: arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP

From: Dave P Martin <Dave.Martin@arm.com>

commit b9bcc919931611498e856eae9bf66337330d04cc upstream.

The memmap freeing code in free_unused_memmap() computes the end of
each memblock by adding the memblock size onto the base.  However,
if SPARSEMEM is enabled then the value (start) used for the base
may already have been rounded downwards to work out which memmap
entries to free after the previous memblock.

This may cause memmap entries that are in use to get freed.

In general, you're not likely to hit this problem unless there
are at least 2 memblocks and one of them is not aligned to a
sparsemem section boundary.  Note that carve-outs can increase
the number of memblocks by splitting the regions listed in the
device tree.

This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
vmemmap code deals with freeing the unused regions of the memmap
instead of requiring the arch code to do it.

This patch gets the memblock base out of the memblock directly when
computing the block end address to ensure the correct value is used.
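
For illustration only (not part of the patch itself), here is a minimal
userspace sketch of the arithmetic above.  The bank layout, section
size and phys_to_pfn() helper are hypothetical stand-ins for the
kernel's memblock data and __phys_to_pfn(); the real code also rounds
prev_end up to MAX_ORDER_NR_PAGES, which is omitted here.

  #include <stdio.h>

  #define PFN_SHIFT    12          /* assume 4K pages */
  #define SECTION_PFNS 0x1000UL    /* pretend sparsemem section, in pages */

  static unsigned long phys_to_pfn(unsigned long phys)
  {
          return phys >> PFN_SHIFT;
  }

  int main(void)
  {
          /* A bank whose base is not section-aligned (hypothetical). */
          unsigned long base = 0x40123000UL;
          unsigned long size = 0x00800000UL;

          /*
           * With SPARSEMEM, the working start may already have been
           * rounded downwards while working out which memmap entries
           * to free after the previous memblock.
           */
          unsigned long start = phys_to_pfn(base) & ~(SECTION_PFNS - 1);

          unsigned long buggy_end = start + phys_to_pfn(size);  /* old code */
          unsigned long fixed_end = phys_to_pfn(base + size);   /* this patch */

          printf("buggy end pfn %#lx, fixed end pfn %#lx, short by %lu pages\n",
                 buggy_end, fixed_end, fixed_end - buggy_end);
          return 0;
  }

Because the buggy end underestimates where the bank really finishes,
the next iteration frees the gap starting too early and releases memmap
entries that are still in use; deriving the end from reg->base directly
avoids that.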

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/mm/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -260,7 +260,7 @@ static void __init free_unused_memmap(vo
 		 * memmap entries are valid from the bank end aligned to
 		 * MAX_ORDER_NR_PAGES.
 		 */
-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
+		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
 			       MAX_ORDER_NR_PAGES);
 	}
