From: Greg Kroah-Hartman
Date: Thu, 17 Apr 2025 11:53:03 +0000 (+0200)
Subject: 5.4-stable patches
X-Git-Tag: v6.12.24~69
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=ef085086543f0b71e0d4c5e8b847264470c3b8ba;p=thirdparty%2Fkernel%2Fstable-queue.git

5.4-stable patches

added patches:
	mm-add-missing-release-barrier-on-pgdat_reclaim_locked-unlock.patch
	sparc-mm-disable-preemption-in-lazy-mmu-mode.patch
---

diff --git a/queue-5.4/mm-add-missing-release-barrier-on-pgdat_reclaim_locked-unlock.patch b/queue-5.4/mm-add-missing-release-barrier-on-pgdat_reclaim_locked-unlock.patch
new file mode 100644
index 0000000000..fddb94827e
--- /dev/null
+++ b/queue-5.4/mm-add-missing-release-barrier-on-pgdat_reclaim_locked-unlock.patch
@@ -0,0 +1,62 @@
+From c0ebbb3841e07c4493e6fe351698806b09a87a37 Mon Sep 17 00:00:00 2001
+From: Mathieu Desnoyers
+Date: Wed, 12 Mar 2025 10:10:13 -0400
+Subject: mm: add missing release barrier on PGDAT_RECLAIM_LOCKED unlock
+
+From: Mathieu Desnoyers
+
+commit c0ebbb3841e07c4493e6fe351698806b09a87a37 upstream.
+
+The PGDAT_RECLAIM_LOCKED bit is used to provide mutual exclusion of node
+reclaim for struct pglist_data using a single bit.
+
+It is "locked" with a test_and_set_bit (similarly to a try lock) which
+provides full ordering with respect to loads and stores done within
+__node_reclaim().
+
+It is "unlocked" with clear_bit(), which does not provide any ordering
+with respect to loads and stores done before clearing the bit.
+
+The lack of clear_bit() memory ordering with respect to stores within
+__node_reclaim() can cause a subsequent CPU to fail to observe stores from
+a prior node reclaim. This is not an issue in practice on TSO (e.g.
+x86), but it is an issue on weakly-ordered architectures (e.g. arm64).
+
+Fix this by using clear_bit_unlock rather than clear_bit to clear
+PGDAT_RECLAIM_LOCKED with a release memory ordering semantic.
+
+This provides stronger memory ordering (release rather than relaxed).
+
+Link: https://lkml.kernel.org/r/20250312141014.129725-1-mathieu.desnoyers@efficios.com
+Fixes: d773ed6b856a ("mm: test and set zone reclaim lock before starting reclaim")
+Signed-off-by: Mathieu Desnoyers
+Cc: Lorenzo Stoakes
+Cc: Matthew Wilcox
+Cc: Alan Stern
+Cc: Andrea Parri
+Cc: Will Deacon
+Cc: Peter Zijlstra
+Cc: Boqun Feng
+Cc: Nicholas Piggin
+Cc: David Howells
+Cc: Jade Alglave
+Cc: Luc Maranget
+Cc: "Paul E. McKenney"
+Cc:
+Signed-off-by: Andrew Morton
+Signed-off-by: Greg Kroah-Hartman
+---
+ mm/vmscan.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4331,7 +4331,7 @@ int node_reclaim(struct pglist_data *pgd
+ 		return NODE_RECLAIM_NOSCAN;
+ 
+ 	ret = __node_reclaim(pgdat, gfp_mask, order);
+-	clear_bit(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
++	clear_bit_unlock(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
+ 
+ 	if (!ret)
+ 		count_vm_event(PGSCAN_ZONE_RECLAIM_FAILED);
diff --git a/queue-5.4/series b/queue-5.4/series
index ec09ec827f..5c78a3351d 100644
--- a/queue-5.4/series
+++ b/queue-5.4/series
@@ -63,3 +63,5 @@ lib-scatterlist-fix-sg_split_phys-to-preserve-original-scatterlist-offsets.patch
 mtd-inftlcore-add-error-check-for-inftl_read_oob.patch
 mtd-rawnand-add-status-chack-in-r852_ready.patch
 arm64-dts-mediatek-mt8173-fix-disp-pwm-compatible-string.patch
+sparc-mm-disable-preemption-in-lazy-mmu-mode.patch
+mm-add-missing-release-barrier-on-pgdat_reclaim_locked-unlock.patch
diff --git a/queue-5.4/sparc-mm-disable-preemption-in-lazy-mmu-mode.patch b/queue-5.4/sparc-mm-disable-preemption-in-lazy-mmu-mode.patch
new file mode 100644
index 0000000000..d78717c2db
--- /dev/null
+++ b/queue-5.4/sparc-mm-disable-preemption-in-lazy-mmu-mode.patch
@@ -0,0 +1,70 @@
+From a1d416bf9faf4f4871cb5a943614a07f80a7d70f Mon Sep 17 00:00:00 2001
+From: Ryan Roberts
+Date: Mon, 3 Mar 2025 14:15:37 +0000
+Subject: sparc/mm: disable preemption in lazy mmu mode
+
+From: Ryan Roberts
+
+commit a1d416bf9faf4f4871cb5a943614a07f80a7d70f upstream.
+
+Since commit 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy
+updates") it's been possible for arch_[enter|leave]_lazy_mmu_mode() to be
+called without holding a page table lock (for the kernel mappings case),
+and therefore it is possible that preemption may occur while in the lazy
+mmu mode. The Sparc lazy mmu implementation is not robust to preemption
+since it stores the lazy mode state in a per-cpu structure and does not
+attempt to manage that state on task switch.
+
+Powerpc had the same issue and fixed it by explicitly disabling preemption
+in arch_enter_lazy_mmu_mode() and re-enabling in
+arch_leave_lazy_mmu_mode(). See commit b9ef323ea168 ("powerpc/64s:
+Disable preemption in hash lazy mmu mode").
+
+Given Sparc's lazy mmu mode is based on powerpc's, let's fix it in the
+same way here.
+
+Link: https://lkml.kernel.org/r/20250303141542.3371656-4-ryan.roberts@arm.com
+Fixes: 38e0edb15bd0 ("mm/apply_to_range: call pte function with lazy updates")
+Signed-off-by: Ryan Roberts
+Acked-by: David Hildenbrand
+Acked-by: Andreas Larsson
+Acked-by: Juergen Gross
+Cc: Borislav Betkov
+Cc: Boris Ostrovsky
+Cc: Catalin Marinas
+Cc: Dave Hansen
+Cc: David S. Miller
+Cc: "H. Peter Anvin"
+Cc: Ingo Molnar
+Cc: Juegren Gross
+Cc: Matthew Wilcow (Oracle)
+Cc: Thomas Gleinxer
+Cc:
+Signed-off-by: Andrew Morton
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/sparc/mm/tlb.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+--- a/arch/sparc/mm/tlb.c
++++ b/arch/sparc/mm/tlb.c
+@@ -53,8 +53,10 @@ out:
+ 
+ void arch_enter_lazy_mmu_mode(void)
+ {
+-	struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
++	struct tlb_batch *tb;
+ 
++	preempt_disable();
++	tb = this_cpu_ptr(&tlb_batch);
+ 	tb->active = 1;
+ }
+ 
+@@ -65,6 +67,7 @@ void arch_leave_lazy_mmu_mode(void)
+ 	if (tb->tlb_nr)
+ 		flush_tlb_pending();
+ 	tb->active = 0;
++	preempt_enable();
+ }
+ 
+ static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,