From: Ryan Roberts
Date: Thu, 6 Nov 2025 16:09:42 +0000 (+0000)
Subject: arm64: mm: Optimize range_split_to_ptes()
X-Git-Tag: v6.18-rc6~38^2~5
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=40a292f701474f7c21b27911677485efa233e94e;p=thirdparty%2Flinux.git

arm64: mm: Optimize range_split_to_ptes()

Enter lazy_mmu mode while splitting a range of memory to pte mappings.
This causes barriers, which would otherwise be emitted after every pte
(and pmd/pud) write, to be deferred until exiting lazy_mmu mode.

For large systems, this is expected to significantly speed up fallback
to pte-mapping the linear map for the case where the boot CPU has
BBML2_NOABORT, but secondary CPUs do not. I haven't directly measured
it, but this is equivalent to commit 1fcb7cea8a5f ("arm64: mm: Batch
dsb and isb when populating pgtables").

Note that for the path from arch_kfence_init_pool(), we may sleep while
allocating memory inside the lazy_mmu mode. Sleeping is not allowed by
generic code inside lazy_mmu, but we know that the arm64 implementation
is sleep-safe. So this is ok and follows the same pattern already used
by split_kernel_leaf_mapping().

Signed-off-by: Ryan Roberts
Reviewed-by: Yang Shi
Signed-off-by: Will Deacon
---

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a364ac2c9c61..652bb8c14035 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -832,8 +832,14 @@ static const struct mm_walk_ops split_to_ptes_ops = {
 
 static int range_split_to_ptes(unsigned long start, unsigned long end, gfp_t gfp)
 {
-	return walk_kernel_page_table_range_lockless(start, end,
+	int ret;
+
+	arch_enter_lazy_mmu_mode();
+	ret = walk_kernel_page_table_range_lockless(start, end,
 					&split_to_ptes_ops, NULL, &gfp);
+	arch_leave_lazy_mmu_mode();
+
+	return ret;
 }
 
 static bool linear_map_requires_bbml2 __initdata;
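
For context, the barrier batching that lazy_mmu mode provides can be pictured
with the simplified sketch below. This is an illustration only, not the actual
arm64 implementation: the sketch_* helpers and the two bool flags are
hypothetical stand-ins for the real per-thread lazy_mmu state, and only the
dsb(ishst)/isb() pattern is taken from the arm64 page-table code.

#include <linux/types.h>
#include <asm/barrier.h>

/*
 * Illustrative sketch of barrier batching under lazy_mmu mode.
 * Hypothetical helpers; the real arm64 code keeps equivalent state
 * per thread rather than in globals.
 */
static bool sketch_lazy_mmu_active;
static bool sketch_barriers_pending;

static void sketch_enter_lazy_mmu(void)
{
	sketch_lazy_mmu_active = true;
}

/* Called after each pte/pmd/pud write instead of emitting barriers directly. */
static void sketch_emit_or_queue_barriers(void)
{
	if (sketch_lazy_mmu_active) {
		/* Defer: remember that a barrier pair is owed. */
		sketch_barriers_pending = true;
		return;
	}
	dsb(ishst);	/* make the table write visible to the table walker */
	isb();		/* synchronize the local execution context */
}

static void sketch_leave_lazy_mmu(void)
{
	sketch_lazy_mmu_active = false;
	if (sketch_barriers_pending) {
		sketch_barriers_pending = false;
		/* One barrier pair covers every write made in the batch. */
		dsb(ishst);
		isb();
	}
}

With this structure, splitting a large range costs one dsb/isb pair per
range_split_to_ptes() call rather than one per page-table entry written,
which is where the expected speedup on large systems comes from.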