[thirdparty/kernel/stable-queue.git] / releases / 4.8.5 / arm64-kvm-take-s1-walks-into-account-when-determining-s2-write-faults.patch
From 60e21a0ef54cd836b9eb22c7cb396989b5b11648 Mon Sep 17 00:00:00 2001
From: Will Deacon <will.deacon@arm.com>
Date: Thu, 29 Sep 2016 12:37:01 +0100
Subject: arm64: KVM: Take S1 walks into account when determining S2 write faults

From: Will Deacon <will.deacon@arm.com>

commit 60e21a0ef54cd836b9eb22c7cb396989b5b11648 upstream.

The WnR bit in the HSR/ESR_EL2 indicates whether a data abort was
generated by a read or a write instruction. For stage 2 data aborts
generated by a stage 1 translation table walk (i.e. the actual page
table access faults at EL2), the WnR bit therefore reports whether the
instruction generating the walk was a load or a store, *not* whether the
page table walker was reading or writing the entry.

For page tables marked as read-only at stage 2 (e.g. due to KSM merging
them with the tables from another guest), this could result in livelock,
where a page table walk generated by a load instruction attempts to
set the access flag in the stage 1 descriptor, but fails to trigger
CoW in the host since only a read fault is reported.

This patch modifies the arm64 kvm_vcpu_dabt_iswrite function to
take into account stage 2 faults in stage 1 walks. Since DBM cannot be
disabled at EL2 for CPUs that implement it, we assume that these faults
are always caused by writes, avoiding the livelock situation at the
expense of occasional, spurious CoWs.

We could, in theory, do a bit better by checking the guest TCR
configuration and inspecting the page table to see why the PTE faulted.
However, I doubt this is measurable in practice, and the threat of
livelock is real.

Cc: Julien Grall <julien.grall@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/include/asm/kvm_emulate.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -167,11 +167,6 @@ static inline bool kvm_vcpu_dabt_isvalid
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
 }
 
-static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
-{
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
-}
-
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
@@ -192,6 +187,12 @@ static inline bool kvm_vcpu_dabt_iss1tw(
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
+static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
+{
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR) ||
+		kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+}
+
 static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
 {
 	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_CM);