arm64: tlbflush: Ensure start/end of address range are aligned to stride

author    Will Deacon <will.deacon@arm.com>
          Tue, 11 Jun 2019 11:47:34 +0000 (12:47 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Wed, 10 Jul 2019 07:52:27 +0000 (09:52 +0200)
commit    e2beb74acae6e8a036b3b6a2dc28d577e1c07df0
tree      75218b8472cb93da7177d21f269c417b9a61e198
parent    60bc55dea8053f69cf3677adcc7a23f94e4b3c5c

arm64: tlbflush: Ensure start/end of address range are aligned to stride

[ Upstream commit 01d57485fcdb9f9101a10a18e32d5f8b023cab86 ]

Since commit 3d65b6bbc01e ("arm64: tlbi: Set MAX_TLBI_OPS to
PTRS_PER_PTE"), we resort to per-ASID invalidation when attempting to
perform more than PTRS_PER_PTE invalidation instructions in a single
call to __flush_tlb_range(). Whilst this is beneficial, the mmu_gather
code does not ensure that the end address of the range is rounded up
to the stride when freeing intermediate page tables in pXX_free_tlb(),
which defeats our range checking.
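
To make the failure mode concrete, here is a hypothetical user-space
sketch (the addresses and the 2 MiB stride are made up for
illustration; this is not the kernel loop itself): stepping by the
stride from an unaligned start can stop short of an unaligned end,
leaving the tail of the range uninvalidated.

  #include <stdio.h>

  #define STRIDE (2UL << 20) /* 2 MiB, e.g. a PMD-level stride */

  int main(void)
  {
          unsigned long start = 0x1000UL;   /* not aligned down to STRIDE */
          unsigned long end   = 0x200004UL; /* not aligned up to STRIDE   */
          unsigned long addr;

          for (addr = start; addr < end; addr += STRIDE)
                  printf("invalidate the block containing 0x%lx\n", addr);

          /*
           * Only 0x1000 is visited, so the block covering
           * [0x200000, 0x200004) never receives an invalidation.
           * With start rounded down to 0x0 and end rounded up to
           * 0x400000, the loop would visit both blocks.
           */
          return 0;
  }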

Align the bounds passed into __flush_tlb_range().
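
A minimal sketch of that alignment, assuming it sits at the top of
__flush_tlb_range() (placement inferred from the message and the
touched file, not quoted from the patch). round_down()/round_up()
are the kernel's power-of-two rounding helpers, and page-table
strides are always powers of two:

  /*
   * Round the bounds to the stride before the MAX_TLBI_OPS check
   * and the stride-stepped TLBI loop, so an unaligned range from
   * pXX_free_tlb() can neither defeat the check nor leave a
   * partial stride uninvalidated.
   */
  start = round_down(start, stride);
  end = round_up(end, stride);

  /* The range check added by commit 3d65b6bbc01e then sees
   * stride-aligned bounds: */
  if ((end - start) >= (MAX_TLBI_OPS * stride)) {
          flush_tlb_mm(vma->vm_mm);  /* fall back to per-ASID flush */
          return;
  }
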

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hanjun Guo <guohanjun@huawei.com>
Tested-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
arch/arm64/include/asm/tlbflush.h