From: Will Deacon
Date: Tue, 11 Jun 2019 11:47:34 +0000 (+0100)
Subject: arm64: tlbflush: Ensure start/end of address range are aligned to stride
X-Git-Tag: v5.2-rc5~15^2~1
X-Git-Url: http://git.ipfire.org/?p=thirdparty%2Fkernel%2Fstable.git;a=commitdiff_plain;h=01d57485fcdb9f9101a10a18e32d5f8b023cab86

arm64: tlbflush: Ensure start/end of address range are aligned to stride

Since commit 3d65b6bbc01e ("arm64: tlbi: Set MAX_TLBI_OPS to
PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to
perform more than PTRS_PER_PTE invalidation instructions in a single
call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather
code does not ensure that the end address of the range is rounded up to
the stride when freeing intermediate page tables in pXX_free_tlb(),
which defeats our range checking.

Align the bounds passed into __flush_tlb_range().

Cc: Catalin Marinas
Cc: Peter Zijlstra
Reported-by: Hanjun Guo
Tested-by: Hanjun Guo
Reviewed-by: Hanjun Guo
Signed-off-by: Will Deacon
---

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3a18702289469..dff8f9ea5754f 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -195,6 +195,9 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	unsigned long asid = ASID(vma->vm_mm);
 	unsigned long addr;
 
+	start = round_down(start, stride);
+	end = round_up(end, stride);
+
 	if ((end - start) >= (MAX_TLBI_OPS * stride)) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
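
As an illustration of the missed invalidation the message above describes,
here is a standalone userspace sketch (not part of the patch): walk() models
the per-stride loop in __flush_tlb_range(), treating each TLBI as
invalidating the stride-sized block that contains the given address. The
round_down()/round_up() macros are local stand-ins for the kernel helpers
and assume a power-of-two stride; the example addresses are hypothetical.

/*
 * Userspace sketch only: shows how stepping by the stride from
 * unaligned bounds can skip the final stride-sized block.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define round_down(x, a)	((x) & ~((uint64_t)(a) - 1))
#define round_up(x, a)		round_down((x) + (a) - 1, (a))

static void walk(uint64_t start, uint64_t end, uint64_t stride)
{
	/* Each "tlbi" covers the stride-sized block containing addr. */
	for (uint64_t addr = start; addr < end; addr += stride)
		printf("  tlbi -> block %#" PRIx64 "\n",
		       round_down(addr, stride));
}

int main(void)
{
	uint64_t stride = 0x200000;		/* 2MiB, i.e. PMD_SIZE */
	uint64_t start = 0x201000, end = 0x601000;

	/* Unaligned bounds: the loop never reaches an address inside
	 * the last block, so block 0x600000 is never invalidated. */
	puts("unaligned:");
	walk(start, end, stride);

	/* With the fix applied, every block overlapping the original
	 * [start, end) range is hit, including 0x600000. */
	puts("aligned:");
	walk(round_down(start, stride), round_up(end, stride), stride);
	return 0;
}

With the unaligned bounds the walk touches blocks 0x200000 and 0x400000
only, leaving the entry for 0x600000 stale; once the bounds are rounded,
all three blocks overlapping [start, end) are invalidated, which is the
scenario the rounding in the hunk above guards against.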