From e13d918a19a7b6cba62b32884f5e336e764c2cc6 Mon Sep 17 00:00:00 2001
From: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Date: Tue, 27 Oct 2015 17:29:10 +0000
Subject: arm64: kernel: fix tcr_el1.t0sz restore on systems with extended idmap

From: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>

commit e13d918a19a7b6cba62b32884f5e336e764c2cc6 upstream.

Commit dd006da21646 ("arm64: mm: increase VA range of identity map")
introduced a mechanism to extend the virtual memory map range
to support arm64 systems with system RAM located at a very high offset,
where the identity mapping used to enable/disable the MMU requires
additional translation levels to map the physical memory at an equal
virtual offset.

The kernel detects at boot time the tcr_el1.t0sz value required by the
identity mapping and sets up the tcr_el1.t0sz register field accordingly,
any time the identity map is required in the kernel (i.e. when enabling
the MMU).
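
As a rough illustration (standalone C, not kernel code; the 39-bit and
48-bit figures are only example configurations), tcr_el1.t0sz encodes the
TTBR0_EL1 input address size as 64 minus the number of address bits, so
an identity map that must reach RAM at a high physical offset needs a
smaller t0sz than the one matching the configured kernel VA range:

  #include <stdio.h>

  /* T0SZ = 64 - (number of input address bits translated via TTBR0_EL1) */
  static unsigned long t0sz(unsigned int addr_bits)
  {
          return 64UL - addr_bits;
  }

  int main(void)
  {
          /* Example only: 39-bit kernel VA space, idmap needing 48 bits */
          printf("default t0sz: %lu\n", t0sz(39));   /* 25 */
          printf("idmap t0sz:   %lu\n", t0sz(48));   /* 16 */
          return 0;
  }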

In the cold boot path, after enabling the MMU the kernel resets
tcr_el1.t0sz to its default value (i.e. the value configured for the
system virtual address space) so that the memory space translated by
ttbr0_el1 is restored as expected.

Commit dd006da21646 ("arm64: mm: increase VA range of identity map")
also added code to set up the tcr_el1.t0sz value when the kernel resumes
from low-power states with the MMU off through cpu_resume(), so that the
identity mapping can be used to enable the MMU. However, it failed to add
the code required to restore tcr_el1.t0sz to its default value once the
core returns to the kernel with the MMU enabled. As a result, the kernel
may end up running with a tcr_el1.t0sz value set up for the identity
mapping, which can be lower than the value required by the actual virtual
address space, resulting in an erroneous set-up.

This patch adds code in the resume path that restores the tcr_el1.t0sz
default value upon core resume, mirroring the cold boot path behaviour
and therefore fixing the issue.
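
On arm64 the restore is a read-modify-write of the T0SZ field, which sits
in bits [5:0] of TCR_EL1, followed by an isb; the snippet below is only a
minimal userspace model of that field update, with made-up register
values rather than the kernel's actual helpers:

  #include <stdint.h>
  #include <stdio.h>

  #define T0SZ_SHIFT  0                    /* TCR_EL1.T0SZ occupies bits [5:0] */
  #define T0SZ_WIDTH  6

  static uint64_t set_t0sz(uint64_t tcr, uint64_t t0sz)
  {
          uint64_t mask = ((UINT64_C(1) << T0SZ_WIDTH) - 1) << T0SZ_SHIFT;

          return (tcr & ~mask) | ((t0sz << T0SZ_SHIFT) & mask);
  }

  int main(void)
  {
          uint64_t tcr = set_t0sz(0, 16);  /* idmap value while the MMU comes up */

          tcr = set_t0sz(tcr, 25);         /* default for a 39-bit VA space */
          printf("TCR.T0SZ = %llu\n", (unsigned long long)(tcr & 0x3f));
          return 0;
  }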

Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: dd006da21646 ("arm64: mm: increase VA range of identity map")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/kernel/suspend.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -80,17 +80,21 @@ int cpu_suspend(unsigned long arg, int (
 	if (ret == 0) {
 		/*
 		 * We are resuming from reset with TTBR0_EL1 set to the
-		 * idmap to enable the MMU; restore the active_mm mappings in
-		 * TTBR0_EL1 unless the active_mm == &init_mm, in which case
-		 * the thread entered cpu_suspend with TTBR0_EL1 set to
-		 * reserved TTBR0 page tables and should be restored as such.
+		 * idmap to enable the MMU; set the TTBR0 to the reserved
+		 * page tables to prevent speculative TLB allocations, flush
+		 * the local tlb and set the default tcr_el1.t0sz so that
+		 * the TTBR0 address space set-up is properly restored.
+		 * If the current active_mm != &init_mm we entered cpu_suspend
+		 * with mappings in TTBR0 that must be restored, so we switch
+		 * them back to complete the address space configuration
+		 * restoration before returning.
 		 */
-		if (mm == &init_mm)
-			cpu_set_reserved_ttbr0();
-		else
-			cpu_switch_mm(mm->pgd, mm);
-
+		cpu_set_reserved_ttbr0();
 		flush_tlb_all();
+		cpu_set_default_tcr_t0sz();
+
+		if (mm != &init_mm)
+			cpu_switch_mm(mm->pgd, mm);
 
 		/*
 		 * Restore per-cpu offset before any kernel