From e13d918a19a7b6cba62b32884f5e336e764c2cc6 Mon Sep 17 00:00:00 2001
From: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Date: Tue, 27 Oct 2015 17:29:10 +0000
Subject: arm64: kernel: fix tcr_el1.t0sz restore on systems with extended idmap

From: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>

commit e13d918a19a7b6cba62b32884f5e336e764c2cc6 upstream.

Commit dd006da21646 ("arm64: mm: increase VA range of identity map")
introduced a mechanism to extend the virtual memory map range to
support arm64 systems with system RAM located at a very high offset,
where the identity mapping used to enable/disable the MMU requires
additional translation levels to map the physical memory at an equal
virtual offset.

The kernel detects at boot time the tcr_el1.t0sz value required by the
identity mapping and sets up the tcr_el1.t0sz register field accordingly
any time the identity map is required in the kernel (ie when enabling
the MMU).
21 | ||
22 | After enabling the MMU, in the cold boot path the kernel resets the | |
23 | tcr_el1.t0sz to its default value (ie the actual configuration value for | |
24 | the system virtual address space) so that after enabling the MMU the | |
25 | memory space translated by ttbr0_el1 is restored as expected. | |
26 | ||
27 | Commit dd006da21646 ("arm64: mm: increase VA range of identity map") | |
28 | also added code to set-up the tcr_el1.t0sz value when the kernel resumes | |
29 | from low-power states with the MMU off through cpu_resume() in order to | |
30 | effectively use the identity mapping to enable the MMU but failed to add | |
31 | the code required to restore the tcr_el1.t0sz to its default value, when | |
32 | the core returns to the kernel with the MMU enabled, so that the kernel | |
33 | might end up running with tcr_el1.t0sz value set-up for the identity | |
34 | mapping which can be lower than the value required by the actual virtual | |
35 | address space, resulting in an erroneous set-up. | |

This patch adds code in the resume path that restores the tcr_el1.t0sz
default value upon core resume, mirroring the cold boot path behaviour
and thereby fixing the issue.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Fixes: dd006da21646 ("arm64: mm: increase VA range of identity map")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/kernel/suspend.c |   22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -80,17 +80,21 @@ int cpu_suspend(unsigned long arg, int (
 	if (ret == 0) {
 		/*
 		 * We are resuming from reset with TTBR0_EL1 set to the
-		 * idmap to enable the MMU; restore the active_mm mappings in
-		 * TTBR0_EL1 unless the active_mm == &init_mm, in which case
-		 * the thread entered cpu_suspend with TTBR0_EL1 set to
-		 * reserved TTBR0 page tables and should be restored as such.
+		 * idmap to enable the MMU; set the TTBR0 to the reserved
+		 * page tables to prevent speculative TLB allocations, flush
+		 * the local tlb and set the default tcr_el1.t0sz so that
+		 * the TTBR0 address space set-up is properly restored.
+		 * If the current active_mm != &init_mm we entered cpu_suspend
+		 * with mappings in TTBR0 that must be restored, so we switch
+		 * them back to complete the address space configuration
+		 * restoration before returning.
 		 */
-		if (mm == &init_mm)
-			cpu_set_reserved_ttbr0();
-		else
-			cpu_switch_mm(mm->pgd, mm);
-
+		cpu_set_reserved_ttbr0();
 		flush_tlb_all();
+		cpu_set_default_tcr_t0sz();
+
+		if (mm != &init_mm)
+			cpu_switch_mm(mm->pgd, mm);
 
 		/*
 		 * Restore per-cpu offset before any kernel