From b81c688426a9785bcae93bab3705ed2cd5fab175 Mon Sep 17 00:00:00 2001
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Mon, 12 May 2025 11:22:40 +0100
Subject: [PATCH] arm64/mm: Disable barrier batching in interrupt contexts

Commit 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel
mappings") enabled arm64 kernels to track "lazy mmu mode" using TIF
flags in order to defer barriers until exiting the mode. At the same
time, it added warnings to check that pte manipulations were never
performed in interrupt context, because the tracking implementation
could not deal with nesting.

But it turns out that some debug features (e.g. KFENCE, DEBUG_PAGEALLOC)
do manipulate ptes in softirq context, which triggered the warnings.

So let's take the simplest and safest route and disable the batching
optimization in interrupt contexts. This makes these users no worse off
than prior to the optimization. Additionally, the known offenders are
debug features that only manipulate a single PTE, so there is no
performance gain anyway.

There may be some obscure case of encrypted/decrypted DMA with
dma_free_coherent() called from an interrupt context, but again, this
is no worse off than prior to the commit.

Some options for supporting nesting were considered, but there is a
difficult-to-solve problem if any code manipulates ptes within interrupt
context but *outside of* a lazy mmu region. If this case exists, the
code would expect the updates to be immediate, but because the task
context may have already been in lazy mmu mode, the updates would be
deferred, which could cause incorrect behaviour. This problem is avoided
by always ensuring updates within interrupt context are immediate.

Fixes: 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings")
Reported-by: syzbot+5c0d9392e042f41d45c5@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-arm-kernel/681f2a09.050a0220.f2294.0006.GAE@google.com/
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20250512102242.4156463-1-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/pgtable.h | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index ab4a1b19e5961..e65083ec35cb8 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -64,7 +64,11 @@ static inline void queue_pte_barriers(void)
 {
 	unsigned long flags;
 
-	VM_WARN_ON(in_interrupt());
+	if (in_interrupt()) {
+		emit_pte_barriers();
+		return;
+	}
+
 	flags = read_thread_flags();
 
 	if (flags & BIT(TIF_LAZY_MMU)) {
@@ -79,7 +83,9 @@ static inline void queue_pte_barriers(void)
 #define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-	VM_WARN_ON(in_interrupt());
+	if (in_interrupt())
+		return;
+
 	VM_WARN_ON(test_thread_flag(TIF_LAZY_MMU));
 
 	set_thread_flag(TIF_LAZY_MMU);
@@ -87,12 +93,18 @@ static inline void arch_enter_lazy_mmu_mode(void)
 
 static inline void arch_flush_lazy_mmu_mode(void)
 {
+	if (in_interrupt())
+		return;
+
 	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
 		emit_pte_barriers();
 }
 
 static inline void arch_leave_lazy_mmu_mode(void)
 {
+	if (in_interrupt())
+		return;
+
 	arch_flush_lazy_mmu_mode();
 	clear_thread_flag(TIF_LAZY_MMU);
 }
-- 
2.47.2