From: Will Deacon
Date: Thu, 22 Aug 2019 14:03:45 +0000 (+0100)
Subject: arm64: tlb: Ensure we execute an ISB following walk cache invalidation
X-Git-Tag: v5.2.19~58
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=ef2fa63bbe95f6d658c30c7a0cb7dc0004e7ac4d;p=thirdparty%2Fkernel%2Fstable.git

arm64: tlb: Ensure we execute an ISB following walk cache invalidation

commit 51696d346c49c6cf4f29e9b20d6e15832a2e3408 upstream.

05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
added a new TLB invalidation helper which is used when freeing
intermediate levels of page table used for kernel mappings, but is
missing the required ISB instruction after completion of the TLBI
instruction.

Add the missing barrier.

Cc:
Fixes: 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland
Signed-off-by: Will Deacon
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 8af7a85f76bdb..bc39490647259 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -251,6 +251,7 @@ static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
 	dsb(ishst);
 	__tlbi(vaae1is, addr);
 	dsb(ish);
+	isb();
 }
 
 #endif
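
For reference, below is a sketch of how the fixed helper reads in
arch/arm64/include/asm/tlbflush.h once the hunk above is applied. The
__TLBI_VADDR() line and the barrier/TLBI macros are assumed from the
surrounding arm64 headers rather than shown in the diff context; treat
this as an illustrative reading of the barrier sequence, not a copy of
the patched file.

static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
{
	/* Assumed from the surrounding header: encode the VA operand for TLBI. */
	unsigned long addr = __TLBI_VADDR(kaddr, 0);

	/* Make the page-table update visible to the walker before the TLBI. */
	dsb(ishst);
	/* Invalidate entries (including walk-cache entries) for this kernel VA, all ASIDs. */
	__tlbi(vaae1is, addr);
	/* Wait for the invalidation to complete across the Inner Shareable domain. */
	dsb(ish);
	/* Added by this patch: synchronize context so later translations cannot use stale entries. */
	isb();
}

The ISB matters because the freed intermediate page-table pages may be
reallocated immediately afterwards; without the context synchronization,
the CPU could still translate subsequent accesses through cached walk
entries that point at the old tables.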