From: Greg Kroah-Hartman Date: Thu, 11 Sep 2025 10:59:43 +0000 (+0200) Subject: 6.12-stable patches X-Git-Tag: v5.10.244~16 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=984cadf3c6df4432156f1100b564c5780b3e328d;p=thirdparty%2Fkernel%2Fstable-queue.git 6.12-stable patches added patches: kunit-kasan_test-disable-fortify-string-checker-on-kasan_strings-test.patch mm-introduce-and-use-pgd-p4d-_populate_kernel.patch net-mlx5-hws-change-error-flow-on-matcher-disconnect.patch --- diff --git a/queue-6.12/kunit-kasan_test-disable-fortify-string-checker-on-kasan_strings-test.patch b/queue-6.12/kunit-kasan_test-disable-fortify-string-checker-on-kasan_strings-test.patch new file mode 100644 index 0000000000..94f1cdc7bc --- /dev/null +++ b/queue-6.12/kunit-kasan_test-disable-fortify-string-checker-on-kasan_strings-test.patch @@ -0,0 +1,57 @@ +From 7a19afee6fb39df63ddea7ce78976d8c521178c6 Mon Sep 17 00:00:00 2001 +From: Yeoreum Yun +Date: Fri, 1 Aug 2025 13:02:36 +0100 +Subject: kunit: kasan_test: disable fortify string checker on kasan_strings() test +
+From: Yeoreum Yun +
+commit 7a19afee6fb39df63ddea7ce78976d8c521178c6 upstream. +
+Similar to commit 09c6304e38e4 ("kasan: test: fix compatibility with +FORTIFY_SOURCE"), the kernel is panicking in kasan_strings(). +
+This is due to `src` and `ptr` not being hidden from the optimizer; +hiding them disables the runtime fortify string checker. 
+ +Call trace: + __fortify_panic+0x10/0x20 (P) + kasan_strings+0x980/0x9b0 + kunit_try_run_case+0x68/0x190 + kunit_generic_run_threadfn_adapter+0x34/0x68 + kthread+0x1c4/0x228 + ret_from_fork+0x10/0x20 + Code: d503233f a9bf7bfd 910003fd 9424b243 (d4210000) + ---[ end trace 0000000000000000 ]--- + note: kunit_try_catch[128] exited with irqs disabled + note: kunit_try_catch[128] exited with preempt_count 1 + # kasan_strings: try faulted: last +** replaying previous printk message ** + # kasan_strings: try faulted: last line seen mm/kasan/kasan_test_c.c:1600 + # kasan_strings: internal error occurred preventing test case from running: -4 + +Link: https://lkml.kernel.org/r/20250801120236.2962642-1-yeoreum.yun@arm.com +Fixes: 73228c7ecc5e ("KASAN: port KASAN Tests to KUnit") +Signed-off-by: Yeoreum Yun +Cc: Alexander Potapenko +Cc: Andrey Konovalov +Cc: Andrey Ryabinin +Cc: Dmitriy Vyukov +Cc: Vincenzo Frascino +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Yeoreum Yun +Signed-off-by: Greg Kroah-Hartman +--- + mm/kasan/kasan_test_c.c | 1 + + 1 file changed, 1 insertion(+) + +--- a/mm/kasan/kasan_test_c.c ++++ b/mm/kasan/kasan_test_c.c +@@ -1500,6 +1500,7 @@ static void kasan_memchr(struct kunit *t + + ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO); + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr); ++ OPTIMIZER_HIDE_VAR(ptr); + + OPTIMIZER_HIDE_VAR(ptr); + OPTIMIZER_HIDE_VAR(size); diff --git a/queue-6.12/mm-introduce-and-use-pgd-p4d-_populate_kernel.patch b/queue-6.12/mm-introduce-and-use-pgd-p4d-_populate_kernel.patch new file mode 100644 index 0000000000..d367b51480 --- /dev/null +++ b/queue-6.12/mm-introduce-and-use-pgd-p4d-_populate_kernel.patch @@ -0,0 +1,287 @@ +From f2d2f9598ebb0158a3fe17cda0106d7752e654a2 Mon Sep 17 00:00:00 2001 +From: Harry Yoo +Date: Mon, 18 Aug 2025 11:02:05 +0900 +Subject: mm: introduce and use {pgd,p4d}_populate_kernel() + +From: Harry Yoo + +commit f2d2f9598ebb0158a3fe17cda0106d7752e654a2 upstream. 
+ +Introduce and use {pgd,p4d}_populate_kernel() in core MM code when +populating PGD and P4D entries for the kernel address space. These +helpers ensure proper synchronization of page tables when updating the +kernel portion of top-level page tables. + +Until now, the kernel has relied on each architecture to handle +synchronization of top-level page tables in an ad-hoc manner. For +example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for direct +mapping and vmemmap mapping changes"). + +However, this approach has proven fragile for the following reasons: + + 1) It is easy to forget to perform the necessary page table + synchronization when introducing new changes. + For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory + savings for compound devmaps") overlooked the need to synchronize + page tables for the vmemmap area. + + 2) It is also easy to overlook that the vmemmap and direct mapping areas + must not be accessed before explicit page table synchronization. + For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated + sub-pmd ranges") caused crashes by accessing the vmemmap area + before calling sync_global_pgds(). + +To address this, as suggested by Dave Hansen, introduce _kernel() variants +of the page table population helpers, which invoke architecture-specific +hooks to properly synchronize page tables. These are introduced in a new +header file, include/linux/pgalloc.h, so they can be called from common +code. + +They reuse existing infrastructure for vmalloc and ioremap. +Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK, +and the actual synchronization is performed by +arch_sync_kernel_mappings(). + +This change currently targets only x86_64, so only PGD and P4D level +helpers are introduced. For now, these helpers are no-ops since no +architecture sets PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK. 
+ +In theory, PUD and PMD level helpers can be added later if needed by other +architectures. For now, 32-bit architectures (x86-32 and arm) only handle +PGTBL_PMD_MODIFIED, so p*d_populate_kernel() will never affect them unless +we introduce a PMD level helper. + +[harry.yoo@oracle.com: fix KASAN build error due to p*d_populate_kernel()] + Link: https://lkml.kernel.org/r/20250822020727.202749-1-harry.yoo@oracle.com +Link: https://lkml.kernel.org/r/20250818020206.4517-3-harry.yoo@oracle.com +Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges") +Signed-off-by: Harry Yoo +Suggested-by: Dave Hansen +Acked-by: Kiryl Shutsemau +Reviewed-by: Mike Rapoport (Microsoft) +Reviewed-by: Lorenzo Stoakes +Acked-by: David Hildenbrand +Cc: Alexander Potapenko +Cc: Alistair Popple +Cc: Andrey Konovalov +Cc: Andrey Ryabinin +Cc: Andy Lutomirski +Cc: "Aneesh Kumar K.V" +Cc: Anshuman Khandual +Cc: Ard Biesheuvel +Cc: Arnd Bergmann +Cc: bibo mao +Cc: Borislav Betkov +Cc: Christoph Lameter (Ampere) +Cc: Dennis Zhou +Cc: Dev Jain +Cc: Dmitriy Vyukov +Cc: Gwan-gyeong Mun +Cc: Ingo Molnar +Cc: Jane Chu +Cc: Joao Martins +Cc: Joerg Roedel +Cc: John Hubbard +Cc: Kevin Brodsky +Cc: Liam Howlett +Cc: Michal Hocko +Cc: Oscar Salvador +Cc: Peter Xu +Cc: Peter Zijlstra +Cc: Qi Zheng +Cc: Ryan Roberts +Cc: Suren Baghdasaryan +Cc: Tejun Heo +Cc: Thomas Gleinxer +Cc: Thomas Huth +Cc: "Uladzislau Rezki (Sony)" +Cc: Vincenzo Frascino +Cc: Vlastimil Babka +Cc: +Signed-off-by: Andrew Morton +[ Adjust context ] +Signed-off-by: Harry Yoo +Signed-off-by: Greg Kroah-Hartman +--- + include/linux/pgalloc.h | 29 +++++++++++++++++++++++++++++ + include/linux/pgtable.h | 13 +++++++------ + mm/kasan/init.c | 12 ++++++------ + mm/percpu.c | 6 +++--- + mm/sparse-vmemmap.c | 6 +++--- + 5 files changed, 48 insertions(+), 18 deletions(-) + create mode 100644 include/linux/pgalloc.h + +--- /dev/null ++++ b/include/linux/pgalloc.h +@@ -0,0 +1,29 @@ ++/* SPDX-License-Identifier: GPL-2.0 */ ++#ifndef 
_LINUX_PGALLOC_H ++#define _LINUX_PGALLOC_H ++ ++#include <linux/pgtable.h> ++#include <asm/pgalloc.h> ++ ++/* ++ * {pgd,p4d}_populate_kernel() are defined as macros to allow ++ * compile-time optimization based on the configured page table levels. ++ * Without this, linking may fail because callers (e.g., KASAN) may rely ++ * on calls to these functions being optimized away when passing symbols ++ * that exist only for certain page table levels. ++ */ ++#define pgd_populate_kernel(addr, pgd, p4d) \ ++ do { \ ++ pgd_populate(&init_mm, pgd, p4d); \ ++ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED) \ ++ arch_sync_kernel_mappings(addr, addr); \ ++ } while (0) ++ ++#define p4d_populate_kernel(addr, p4d, pud) \ ++ do { \ ++ p4d_populate(&init_mm, p4d, pud); \ ++ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED) \ ++ arch_sync_kernel_mappings(addr, addr); \ ++ } while (0) ++ ++#endif /* _LINUX_PGALLOC_H */ +--- a/include/linux/pgtable.h ++++ b/include/linux/pgtable.h +@@ -1699,8 +1699,8 @@ static inline int pmd_protnone(pmd_t pmd + + /* + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values +- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings() +- * needs to be called. ++ * and let generic vmalloc, ioremap and page table update code know when ++ * arch_sync_kernel_mappings() needs to be called. + */ + #ifndef ARCH_PAGE_TABLE_SYNC_MASK + #define ARCH_PAGE_TABLE_SYNC_MASK 0 +@@ -1833,10 +1833,11 @@ static inline bool arch_has_pfn_modify_c + /* + * Page Table Modification bits for pgtbl_mod_mask. + * +- * These are used by the p?d_alloc_track*() set of functions an in the generic +- * vmalloc/ioremap code to track at which page-table levels entries have been +- * modified. Based on that the code can better decide when vmalloc and ioremap +- * mapping changes need to be synchronized to other page-tables in the system. 
++ * These are used by the p?d_alloc_track*() and p*d_populate_kernel() ++ * functions in the generic vmalloc, ioremap and page table update code ++ * to track at which page-table levels entries have been modified. ++ * Based on that the code can better decide when page table changes need ++ * to be synchronized to other page-tables in the system. + */ + #define __PGTBL_PGD_MODIFIED 0 + #define __PGTBL_P4D_MODIFIED 1 +--- a/mm/kasan/init.c ++++ b/mm/kasan/init.c +@@ -13,9 +13,9 @@ + #include + #include + #include ++#include + + #include +-#include + + #include "kasan.h" + +@@ -203,7 +203,7 @@ static int __ref zero_p4d_populate(pgd_t + pud_t *pud; + pmd_t *pmd; + +- p4d_populate(&init_mm, p4d, ++ p4d_populate_kernel(addr, p4d, + lm_alias(kasan_early_shadow_pud)); + pud = pud_offset(p4d, addr); + pud_populate(&init_mm, pud, +@@ -224,7 +224,7 @@ static int __ref zero_p4d_populate(pgd_t + } else { + p = early_alloc(PAGE_SIZE, NUMA_NO_NODE); + pud_init(p); +- p4d_populate(&init_mm, p4d, p); ++ p4d_populate_kernel(addr, p4d, p); + } + } + zero_pud_populate(p4d, addr, next); +@@ -263,10 +263,10 @@ int __ref kasan_populate_early_shadow(co + * puds,pmds, so pgd_populate(), pud_populate() + * is noops. 
+ */ +- pgd_populate(&init_mm, pgd, ++ pgd_populate_kernel(addr, pgd, + lm_alias(kasan_early_shadow_p4d)); + p4d = p4d_offset(pgd, addr); +- p4d_populate(&init_mm, p4d, ++ p4d_populate_kernel(addr, p4d, + lm_alias(kasan_early_shadow_pud)); + pud = pud_offset(p4d, addr); + pud_populate(&init_mm, pud, +@@ -285,7 +285,7 @@ int __ref kasan_populate_early_shadow(co + if (!p) + return -ENOMEM; + } else { +- pgd_populate(&init_mm, pgd, ++ pgd_populate_kernel(addr, pgd, + early_alloc(PAGE_SIZE, NUMA_NO_NODE)); + } + } +--- a/mm/percpu.c ++++ b/mm/percpu.c +@@ -3129,7 +3129,7 @@ out_free: + #endif /* BUILD_EMBED_FIRST_CHUNK */ + + #ifdef BUILD_PAGE_FIRST_CHUNK +-#include ++#include + + #ifndef P4D_TABLE_SIZE + #define P4D_TABLE_SIZE PAGE_SIZE +@@ -3157,7 +3157,7 @@ void __init __weak pcpu_populate_pte(uns + p4d = memblock_alloc(P4D_TABLE_SIZE, P4D_TABLE_SIZE); + if (!p4d) + goto err_alloc; +- pgd_populate(&init_mm, pgd, p4d); ++ pgd_populate_kernel(addr, pgd, p4d); + } + + p4d = p4d_offset(pgd, addr); +@@ -3165,7 +3165,7 @@ void __init __weak pcpu_populate_pte(uns + pud = memblock_alloc(PUD_TABLE_SIZE, PUD_TABLE_SIZE); + if (!pud) + goto err_alloc; +- p4d_populate(&init_mm, p4d, pud); ++ p4d_populate_kernel(addr, p4d, pud); + } + + pud = pud_offset(p4d, addr); +--- a/mm/sparse-vmemmap.c ++++ b/mm/sparse-vmemmap.c +@@ -27,9 +27,9 @@ + #include + #include + #include ++#include + + #include +-#include + + /* + * Allocate a block of memory to be used to back the virtual memory map +@@ -230,7 +230,7 @@ p4d_t * __meminit vmemmap_p4d_populate(p + if (!p) + return NULL; + pud_init(p); +- p4d_populate(&init_mm, p4d, p); ++ p4d_populate_kernel(addr, p4d, p); + } + return p4d; + } +@@ -242,7 +242,7 @@ pgd_t * __meminit vmemmap_pgd_populate(u + void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node); + if (!p) + return NULL; +- pgd_populate(&init_mm, pgd, p); ++ pgd_populate_kernel(addr, pgd, p); + } + return pgd; + } diff --git 
a/queue-6.12/net-mlx5-hws-change-error-flow-on-matcher-disconnect.patch b/queue-6.12/net-mlx5-hws-change-error-flow-on-matcher-disconnect.patch new file mode 100644 index 0000000000..aa0bd9f766 --- /dev/null +++ b/queue-6.12/net-mlx5-hws-change-error-flow-on-matcher-disconnect.patch @@ -0,0 +1,93 @@ +From 1ce840c7a659aa53a31ef49f0271b4fd0dc10296 Mon Sep 17 00:00:00 2001 +From: Yevgeny Kliteynik +Date: Thu, 2 Jan 2025 20:14:05 +0200 +Subject: net/mlx5: HWS, change error flow on matcher disconnect +
+From: Yevgeny Kliteynik +
+commit 1ce840c7a659aa53a31ef49f0271b4fd0dc10296 upstream. +
+Currently, when a firmware failure occurs during the matcher disconnect +flow, the function's error flow reconnects the matcher and returns an +error; the calling function then continues and eventually frees the +matcher that is being disconnected. +This leads to a case where we have a freed matcher on the matchers list, +which in turn leads to use-after-free and an eventual crash. +
+This patch fixes that by not trying to reconnect the matcher when an FW +command fails during disconnect. +
+Note that we're dealing here with an FW error. We can't overcome this +problem. This might lead to bad steering state (e.g. wrong connection +between matchers), and will also lead to resource leakage, as is the +case with any other error handling during resource destruction. +
+However, the goal here is to allow the driver to continue and not crash +the machine with a use-after-free error. 
+ +Signed-off-by: Yevgeny Kliteynik +Signed-off-by: Itamar Gozlan +Reviewed-by: Mark Bloch +Signed-off-by: Tariq Toukan +Link: https://patch.msgid.link/20250102181415.1477316-7-tariqt@nvidia.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Jan Alexander Preissler +Signed-off-by: Sujana Subramaniam +Signed-off-by: Greg Kroah-Hartman +--- + drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_matcher.c | 24 +++------- + 1 file changed, 8 insertions(+), 16 deletions(-) + +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_matcher.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_matcher.c +@@ -165,14 +165,14 @@ static int hws_matcher_disconnect(struct + next->match_ste.rtc_0_id, + next->match_ste.rtc_1_id); + if (ret) { +- mlx5hws_err(tbl->ctx, "Failed to disconnect matcher\n"); +- goto matcher_reconnect; ++ mlx5hws_err(tbl->ctx, "Fatal error, failed to disconnect matcher\n"); ++ return ret; + } + } else { + ret = mlx5hws_table_connect_to_miss_table(tbl, tbl->default_miss.miss_tbl); + if (ret) { +- mlx5hws_err(tbl->ctx, "Failed to disconnect last matcher\n"); +- goto matcher_reconnect; ++ mlx5hws_err(tbl->ctx, "Fatal error, failed to disconnect last matcher\n"); ++ return ret; + } + } + +@@ -180,27 +180,19 @@ static int hws_matcher_disconnect(struct + if (prev_ft_id == tbl->ft_id) { + ret = mlx5hws_table_update_connected_miss_tables(tbl); + if (ret) { +- mlx5hws_err(tbl->ctx, "Fatal error, failed to update connected miss table\n"); +- goto matcher_reconnect; ++ mlx5hws_err(tbl->ctx, ++ "Fatal error, failed to update connected miss table\n"); ++ return ret; + } + } + + ret = mlx5hws_table_ft_set_default_next_ft(tbl, prev_ft_id); + if (ret) { + mlx5hws_err(tbl->ctx, "Fatal error, failed to restore matcher ft default miss\n"); +- goto matcher_reconnect; ++ return ret; + } + + return 0; +- +-matcher_reconnect: +- if (list_empty(&tbl->matchers_list) || !prev) +- list_add(&matcher->list_node, &tbl->matchers_list); +- else +- /* 
insert after prev matcher */ +- list_add(&matcher->list_node, &prev->list_node); +- +- return ret; + } + + static void hws_matcher_set_rtc_attr_sz(struct mlx5hws_matcher *matcher, diff --git a/queue-6.12/series b/queue-6.12/series index 7730352279..c4f9186f0c 100644 --- a/queue-6.12/series +++ b/queue-6.12/series @@ -4,3 +4,6 @@ dma-mapping-trace-dma_alloc-free-direction.patch dma-mapping-use-trace_dma_alloc-for-dma_alloc-instea.patch dma-mapping-trace-more-error-paths.patch dma-debug-don-t-enforce-dma-mapping-check-on-noncohe.patch +kunit-kasan_test-disable-fortify-string-checker-on-kasan_strings-test.patch +net-mlx5-hws-change-error-flow-on-matcher-disconnect.patch +mm-introduce-and-use-pgd-p4d-_populate_kernel.patch