From: Alexander Gordeev
Date: Mon, 15 Dec 2025 15:03:10 +0000 (+0000)
Subject: powerpc/64s: do not re-activate batched TLB flush
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=58852f24f9566602340130804bf7f4474a3f5f2a;p=thirdparty%2Fkernel%2Flinux.git

powerpc/64s: do not re-activate batched TLB flush

Patch series "Nesting support for lazy MMU mode", v6.

When the lazy MMU mode was introduced eons ago, it wasn't made clear
whether such a sequence was legal:

	arch_enter_lazy_mmu_mode()
	...
		arch_enter_lazy_mmu_mode()
		...
		arch_leave_lazy_mmu_mode()
	...
	arch_leave_lazy_mmu_mode()

It seems fair to say that nested calls to
arch_{enter,leave}_lazy_mmu_mode() were not expected, and most
architectures never explicitly supported it.

Nesting does in fact occur in certain configurations, and avoiding it
has proved difficult.  This series therefore enables lazy_mmu sections
to nest, on all architectures.

Nesting is handled using a counter in task_struct (patch 8), like other
stateless APIs such as pagefault_{disable,enable}().  This is fully
handled in a new generic layer in <linux/pgtable.h>; the arch_* API
remains unchanged.  A new pair of calls, lazy_mmu_mode_{pause,resume}(),
is also introduced to allow functions that are called with the lazy MMU
mode enabled to temporarily pause it, regardless of nesting.

An arch now opts in to using the lazy MMU mode by selecting
CONFIG_ARCH_LAZY_MMU; this is more appropriate now that we have a
generic API, especially with state conditionally added to task_struct.
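
To make the intended semantics concrete, here is a minimal user-space
sketch of the counting scheme described above.  It is only an
illustration: the lazy_mmu_mode_enable()/lazy_mmu_mode_disable() names,
the helper bodies, the lazy_mmu_nest/lazy_mmu_paused variables and the
printed messages are made up for this example, and the real
implementation keeps its state in task_struct rather than in globals.

	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	static unsigned int lazy_mmu_nest;	/* would live in task_struct */
	static bool lazy_mmu_paused;

	/* Stand-ins for the per-arch hooks. */
	static void arch_enter_lazy_mmu_mode(void) { puts("arch: enter"); }
	static void arch_leave_lazy_mmu_mode(void) { puts("arch: leave"); }

	static void lazy_mmu_mode_enable(void)
	{
		/* Only the outermost enable reaches the arch hook. */
		if (lazy_mmu_nest++ == 0 && !lazy_mmu_paused)
			arch_enter_lazy_mmu_mode();
	}

	static void lazy_mmu_mode_disable(void)
	{
		assert(lazy_mmu_nest > 0);
		/* Only the outermost disable reaches the arch hook. */
		if (--lazy_mmu_nest == 0 && !lazy_mmu_paused)
			arch_leave_lazy_mmu_mode();
	}

	static void lazy_mmu_mode_pause(void)
	{
		/* Leave the mode immediately, whatever the nesting depth. */
		if (lazy_mmu_nest > 0 && !lazy_mmu_paused)
			arch_leave_lazy_mmu_mode();
		lazy_mmu_paused = true;
	}

	static void lazy_mmu_mode_resume(void)
	{
		lazy_mmu_paused = false;
		/* Re-enter only if some section is still active. */
		if (lazy_mmu_nest > 0)
			arch_enter_lazy_mmu_mode();
	}

	int main(void)
	{
		lazy_mmu_mode_enable();		/* outermost: arch hook fires */
		lazy_mmu_mode_enable();		/* nested: counter only */
		lazy_mmu_mode_pause();		/* callee needs the mode off */
		lazy_mmu_mode_resume();
		lazy_mmu_mode_disable();	/* still nested: counter only */
		lazy_mmu_mode_disable();	/* outermost: arch hook fires */
		return 0;
	}
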
This patch (of 14):

Since commit b9ef323ea168 ("powerpc/64s: Disable preemption in hash lazy
mmu mode") a task cannot be preempted while in lazy MMU mode.
Therefore, the batch re-activation code is never called, so remove it.

Link: https://lkml.kernel.org/r/20251215150323.2218608-1-kevin.brodsky@arm.com
Link: https://lkml.kernel.org/r/20251215150323.2218608-2-kevin.brodsky@arm.com
Signed-off-by: Alexander Gordeev
Signed-off-by: Kevin Brodsky
Reviewed-by: David Hildenbrand
Reviewed-by: Ritesh Harjani (IBM)
Reviewed-by: Ryan Roberts
Tested-by: Venkat Rao Bagalkote
Reviewed-by: Yeoreum Yun
Cc: Andreas Larsson
Cc: Anshuman Khandual
Cc: Borislav Petkov
Cc: Boris Ostrovsky
Cc: Catalin Marinas
Cc: Christophe Leroy
Cc: David S. Miller
Cc: David Woodhouse
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Jann Horn
Cc: Juergen Gross
Cc: levi.yun
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Nicholas Piggin
Cc: Peter Zijlstra
Cc: Suren Baghdasaryan
Cc: Thomas Gleixner
Cc: Vlastimil Babka
Cc: Will Deacon
Cc: David Hildenbrand (Red Hat)
Signed-off-by: Andrew Morton
---

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index b0f200aba2b3d..97f35f9b1a96e 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -154,12 +154,10 @@ void arch_setup_new_exec(void);
 /* Don't move TLF_NAPPING without adjusting the code in entry_32.S */
 #define TLF_NAPPING		0	/* idle thread enabled NAP mode */
 #define TLF_SLEEPING		1	/* suspend code enabled SLEEP mode */
-#define TLF_LAZY_MMU		3	/* tlb_batch is active */
 #define TLF_RUNLATCH		4	/* Is the runlatch enabled? */
 
 #define _TLF_NAPPING		(1 << TLF_NAPPING)
 #define _TLF_SLEEPING		(1 << TLF_SLEEPING)
-#define _TLF_LAZY_MMU		(1 << TLF_LAZY_MMU)
 #define _TLF_RUNLATCH		(1 << TLF_RUNLATCH)
 
 #ifndef __ASSEMBLER__
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index a45fe147868bc..a15d0b619b1f1 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1281,9 +1281,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
 {
 	struct thread_struct *new_thread, *old_thread;
 	struct task_struct *last;
-#ifdef CONFIG_PPC_64S_HASH_MMU
-	struct ppc64_tlb_batch *batch;
-#endif
 
 	new_thread = &new->thread;
 	old_thread = &current->thread;
@@ -1291,14 +1288,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	WARN_ON(!irqs_disabled());
 
 #ifdef CONFIG_PPC_64S_HASH_MMU
-	batch = this_cpu_ptr(&ppc64_tlb_batch);
-	if (batch->active) {
-		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
-		if (batch->index)
-			__flush_tlb_pending(batch);
-		batch->active = 0;
-	}
-
 	/*
 	 * On POWER9 the copy-paste buffer can only paste into
 	 * foreign real addresses, so unprivileged processes can not
@@ -1369,20 +1358,6 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 
 #ifdef CONFIG_PPC_BOOK3S_64
-#ifdef CONFIG_PPC_64S_HASH_MMU
-	/*
-	 * This applies to a process that was context switched while inside
-	 * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
-	 * deactivated above, before _switch(). This will never be the case
-	 * for new tasks.
-	 */
-	if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
-		current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
-		batch = this_cpu_ptr(&ppc64_tlb_batch);
-		batch->active = 1;
-	}
-#endif
-
 	/*
 	 * Math facilities are masked out of the child MSR in copy_thread.
 	 * A new task does not need to restore_math because it will