From 4c36a156738887c1edd78589fe192d757989bcde Mon Sep 17 00:00:00 2001
From: Will Deacon <will@kernel.org>
Date: Wed, 27 Mar 2024 12:48:53 +0000
Subject: KVM: arm64: Ensure target address is granule-aligned for range TLBI

From: Will Deacon <will@kernel.org>

commit 4c36a156738887c1edd78589fe192d757989bcde upstream.

When zapping a table entry in stage2_try_break_pte(), we issue range
TLB invalidation for the region that was mapped by the table. However,
we neglect to align the base address down to the granule size and so
if we ended up reaching the table entry via a misaligned address then
we will accidentally skip invalidation for some prefix of the affected
address range.

Align 'ctx->addr' down to the granule size when performing TLB
invalidation for an unmapped table in stage2_try_break_pte().

Cc: Raghavendra Rao Ananta <rananta@google.com>
Cc: Gavin Shan <gshan@redhat.com>
Cc: Shaoqin Huang <shahuang@redhat.com>
Cc: Quentin Perret <qperret@google.com>
Fixes: defc8cc7abf0 ("KVM: arm64: Invalidate the table entries upon a range")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Shaoqin Huang <shahuang@redhat.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240327124853.11206-5-will@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/arm64/kvm/hyp/pgtable.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -805,12 +805,15 @@ static bool stage2_try_break_pte(const s
 	 * Perform the appropriate TLB invalidation based on the
 	 * evicted pte value (if any).
 	 */
-	if (kvm_pte_table(ctx->old, ctx->level))
-		kvm_tlb_flush_vmid_range(mmu, ctx->addr,
-					 kvm_granule_size(ctx->level));
-	else if (kvm_pte_valid(ctx->old))
+	if (kvm_pte_table(ctx->old, ctx->level)) {
+		u64 size = kvm_granule_size(ctx->level);
+		u64 addr = ALIGN_DOWN(ctx->addr, size);
+
+		kvm_tlb_flush_vmid_range(mmu, addr, size);
+	} else if (kvm_pte_valid(ctx->old)) {
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
 			     ctx->addr, ctx->level);
+	}
 	}
 
 	if (stage2_pte_is_counted(ctx->old))