From: Greg Kroah-Hartman Date: Thu, 20 Nov 2025 15:58:43 +0000 (+0100) Subject: 6.12-stable patches X-Git-Tag: v6.6.117~36 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=dc4c3bbf6c8f964ca314f2f8d942f3d6c92c88b0;p=thirdparty%2Fkernel%2Fstable-queue.git 6.12-stable patches added patches: dma-mapping-benchmark-restore-padding-to-ensure-uabi-remained-consistent.patch gcov-add-support-for-gcc-15.patch ksm-use-range-walk-function-to-jump-over-holes-in-scan_get_next_rmap_item.patch ksmbd-close-accepted-socket-when-per-ip-limit-rejects-connection.patch kvm-svm-mark-vmcb_lbr-dirty-when-msr_ia32_debugctlmsr-is-updated.patch loongarch-kvm-add-delay-until-timer-interrupt-injected.patch loongarch-kvm-restore-guest-pmu-if-it-is-enabled.patch loongarch-let-pte-pmd-_modify-record-the-status-of-_page_dirty.patch loongarch-use-correct-accessor-to-read-fwpc-mwpc.patch nfsd-add-missing-fattr4_word2_clone_blksize-from-supported-attributes.patch nfsd-fix-refcount-leak-in-nfsd_set_fh_dentry.patch nfsd-free-copynotify-stateid-in-nfs4_free_ol_stateid.patch strparser-fix-signed-unsigned-mismatch-bug.patch --- diff --git a/queue-6.12/dma-mapping-benchmark-restore-padding-to-ensure-uabi-remained-consistent.patch b/queue-6.12/dma-mapping-benchmark-restore-padding-to-ensure-uabi-remained-consistent.patch new file mode 100644 index 0000000000..e480e83d25 --- /dev/null +++ b/queue-6.12/dma-mapping-benchmark-restore-padding-to-ensure-uabi-remained-consistent.patch @@ -0,0 +1,41 @@ +From 23ee8a2563a0f24cf4964685ced23c32be444ab8 Mon Sep 17 00:00:00 2001 +From: Qinxin Xia +Date: Tue, 28 Oct 2025 20:08:59 +0800 +Subject: dma-mapping: benchmark: Restore padding to ensure uABI remained consistent + +From: Qinxin Xia + +commit 23ee8a2563a0f24cf4964685ced23c32be444ab8 upstream. + +The padding field in the structure was previously reserved to +maintain a stable interface for potential new fields, ensuring +compatibility with user-space shared data structures. 
+However, it was accidentally removed by tiantao in a prior commit,
+which may lead to incompatibility between user space and the kernel.
+
+This patch reinstates the padding to restore the original structure
+layout and preserve compatibility.
+
+Fixes: 8ddde07a3d28 ("dma-mapping: benchmark: extract a common header file for map_benchmark definition")
+Cc: stable@vger.kernel.org
+Acked-by: Barry Song
+Signed-off-by: Qinxin Xia
+Reported-by: Barry Song
+Closes: https://lore.kernel.org/lkml/CAGsJ_4waiZ2+NBJG+SCnbNk+nQ_ZF13_Q5FHJqZyxyJTcEop2A@mail.gmail.com/
+Reviewed-by: Jonathan Cameron
+Signed-off-by: Marek Szyprowski
+Link: https://lore.kernel.org/r/20251028120900.2265511-2-xiaqinxin@huawei.com
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/map_benchmark.h | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/include/linux/map_benchmark.h
++++ b/include/linux/map_benchmark.h
+@@ -27,5 +27,6 @@ struct map_benchmark {
+ __u32 dma_dir; /* DMA data direction */
+ __u32 dma_trans_ns; /* time for DMA transmission in ns */
+ __u32 granule; /* how many PAGE_SIZE will do map/unmap once a time */
++ __u8 expansion[76]; /* For future use */
+ };
+ #endif /* _KERNEL_DMA_BENCHMARK_H */
diff --git a/queue-6.12/gcov-add-support-for-gcc-15.patch b/queue-6.12/gcov-add-support-for-gcc-15.patch
new file mode 100644
index 0000000000..a5b974af2b
--- /dev/null
+++ b/queue-6.12/gcov-add-support-for-gcc-15.patch
@@ -0,0 +1,40 @@
+From ec4d11fc4b2dd4a2fa8c9d801ee9753b74623554 Mon Sep 17 00:00:00 2001
+From: Peter Oberparleiter
+Date: Tue, 28 Oct 2025 12:51:25 +0100
+Subject: gcov: add support for GCC 15
+
+From: Peter Oberparleiter
+
+commit ec4d11fc4b2dd4a2fa8c9d801ee9753b74623554 upstream.
+
+Using gcov on kernels compiled with GCC 15 results in truncated 16-byte
+long .gcda files with no usable data. To fix this, update GCOV_COUNTERS
+to match the value defined by GCC 15.
+
+Tested with GCC 14.3.0 and GCC 15.2.0.
+
+Link: https://lkml.kernel.org/r/20251028115125.1319410-1-oberpar@linux.ibm.com
+Signed-off-by: Peter Oberparleiter
+Reported-by: Matthieu Baerts
+Closes: https://github.com/linux-test-project/lcov/issues/445
+Tested-by: Matthieu Baerts
+Cc: 
+Signed-off-by: Andrew Morton
+Signed-off-by: Greg Kroah-Hartman
+---
+ kernel/gcov/gcc_4_7.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/kernel/gcov/gcc_4_7.c
++++ b/kernel/gcov/gcc_4_7.c
+@@ -18,7 +18,9 @@
+ #include 
+ #include "gcov.h"
+
+-#if (__GNUC__ >= 14)
++#if (__GNUC__ >= 15)
++#define GCOV_COUNTERS 10
++#elif (__GNUC__ >= 14)
+ #define GCOV_COUNTERS 9
+ #elif (__GNUC__ >= 10)
+ #define GCOV_COUNTERS 8
diff --git a/queue-6.12/ksm-use-range-walk-function-to-jump-over-holes-in-scan_get_next_rmap_item.patch b/queue-6.12/ksm-use-range-walk-function-to-jump-over-holes-in-scan_get_next_rmap_item.patch
new file mode 100644
index 0000000000..b7cc688f1a
--- /dev/null
+++ b/queue-6.12/ksm-use-range-walk-function-to-jump-over-holes-in-scan_get_next_rmap_item.patch
@@ -0,0 +1,212 @@
+From f5548c318d6520d4fa3c5ed6003eeb710763cbc5 Mon Sep 17 00:00:00 2001
+From: Pedro Demarchi Gomes
+Date: Wed, 22 Oct 2025 12:30:59 -0300
+Subject: ksm: use range-walk function to jump over holes in scan_get_next_rmap_item
+
+From: Pedro Demarchi Gomes
+
+commit f5548c318d6520d4fa3c5ed6003eeb710763cbc5 upstream.
+
+Currently, scan_get_next_rmap_item() walks every page address in a VMA to
+locate mergeable pages. This becomes highly inefficient when scanning
+large virtual memory areas that contain mostly unmapped regions, causing
+ksmd to use a large amount of CPU without deduplicating many pages.
+
+This patch replaces the per-address lookup with a range walk using
+walk_page_range(). The range walker allows KSM to skip over entire
+unmapped holes in a VMA, avoiding unnecessary lookups. This problem was
+previously discussed in [1].
+
+Consider the following test program which creates a 32 TiB mapping in the
+virtual address space but only populates a single page:
+
+#include 
+#include 
+#include 
+
+/* 32 TiB */
+const size_t size = 32ul * 1024 * 1024 * 1024 * 1024;
+
+int main() {
+ char *area = mmap(NULL, size, PROT_READ | PROT_WRITE,
+ MAP_NORESERVE | MAP_PRIVATE | MAP_ANON, -1, 0);
+
+ if (area == MAP_FAILED) {
+ perror("mmap() failed\n");
+ return -1;
+ }
+
+ /* Populate a single page such that we get an anon_vma. */
+ *area = 0;
+
+ /* Enable KSM. */
+ madvise(area, size, MADV_MERGEABLE);
+ pause();
+ return 0;
+}
+
+$ ./ksm-sparse &
+$ echo 1 > /sys/kernel/mm/ksm/run
+
+Without this patch ksmd uses 100% of the CPU for a long time (more than 1
+hour in my test machine) scanning all the 32 TiB virtual address space
+that contains only one mapped page. This makes ksmd essentially deadlocked,
+unable to deduplicate anything of value. With this patch ksmd walks
+only the one mapped page and skips the rest of the 32 TiB virtual address
+space, making the scan fast while using little CPU.
+ +Link: https://lkml.kernel.org/r/20251023035841.41406-1-pedrodemargomes@gmail.com +Link: https://lkml.kernel.org/r/20251022153059.22763-1-pedrodemargomes@gmail.com +Link: https://lore.kernel.org/linux-mm/423de7a3-1c62-4e72-8e79-19a6413e420c@redhat.com/ [1] +Fixes: 31dbd01f3143 ("ksm: Kernel SamePage Merging") +Signed-off-by: Pedro Demarchi Gomes +Co-developed-by: David Hildenbrand +Signed-off-by: David Hildenbrand +Reported-by: craftfever +Closes: https://lkml.kernel.org/r/020cf8de6e773bb78ba7614ef250129f11a63781@murena.io +Suggested-by: David Hildenbrand +Acked-by: David Hildenbrand +Cc: Chengming Zhou +Cc: xu xin +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Greg Kroah-Hartman +--- + mm/ksm.c | 113 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++------ + 1 file changed, 104 insertions(+), 9 deletions(-) + +--- a/mm/ksm.c ++++ b/mm/ksm.c +@@ -2447,6 +2447,95 @@ static bool should_skip_rmap_item(struct + return true; + } + ++struct ksm_next_page_arg { ++ struct folio *folio; ++ struct page *page; ++ unsigned long addr; ++}; ++ ++static int ksm_next_page_pmd_entry(pmd_t *pmdp, unsigned long addr, unsigned long end, ++ struct mm_walk *walk) ++{ ++ struct ksm_next_page_arg *private = walk->private; ++ struct vm_area_struct *vma = walk->vma; ++ pte_t *start_ptep = NULL, *ptep, pte; ++ struct mm_struct *mm = walk->mm; ++ struct folio *folio; ++ struct page *page; ++ spinlock_t *ptl; ++ pmd_t pmd; ++ ++ if (ksm_test_exit(mm)) ++ return 0; ++ ++ cond_resched(); ++ ++ pmd = pmdp_get_lockless(pmdp); ++ if (!pmd_present(pmd)) ++ return 0; ++ ++ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && pmd_leaf(pmd)) { ++ ptl = pmd_lock(mm, pmdp); ++ pmd = pmdp_get(pmdp); ++ ++ if (!pmd_present(pmd)) { ++ goto not_found_unlock; ++ } else if (pmd_leaf(pmd)) { ++ page = vm_normal_page_pmd(vma, addr, pmd); ++ if (!page) ++ goto not_found_unlock; ++ folio = page_folio(page); ++ ++ if (folio_is_zone_device(folio) || !folio_test_anon(folio)) ++ goto not_found_unlock; ++ ++ 
page += ((addr & (PMD_SIZE - 1)) >> PAGE_SHIFT); ++ goto found_unlock; ++ } ++ spin_unlock(ptl); ++ } ++ ++ start_ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl); ++ if (!start_ptep) ++ return 0; ++ ++ for (ptep = start_ptep; addr < end; ptep++, addr += PAGE_SIZE) { ++ pte = ptep_get(ptep); ++ ++ if (!pte_present(pte)) ++ continue; ++ ++ page = vm_normal_page(vma, addr, pte); ++ if (!page) ++ continue; ++ folio = page_folio(page); ++ ++ if (folio_is_zone_device(folio) || !folio_test_anon(folio)) ++ continue; ++ goto found_unlock; ++ } ++ ++not_found_unlock: ++ spin_unlock(ptl); ++ if (start_ptep) ++ pte_unmap(start_ptep); ++ return 0; ++found_unlock: ++ folio_get(folio); ++ spin_unlock(ptl); ++ if (start_ptep) ++ pte_unmap(start_ptep); ++ private->page = page; ++ private->folio = folio; ++ private->addr = addr; ++ return 1; ++} ++ ++static struct mm_walk_ops ksm_next_page_ops = { ++ .pmd_entry = ksm_next_page_pmd_entry, ++ .walk_lock = PGWALK_RDLOCK, ++}; ++ + static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page) + { + struct mm_struct *mm; +@@ -2534,21 +2623,27 @@ next_mm: + ksm_scan.address = vma->vm_end; + + while (ksm_scan.address < vma->vm_end) { ++ struct ksm_next_page_arg ksm_next_page_arg; + struct page *tmp_page = NULL; +- struct folio_walk fw; + struct folio *folio; + + if (ksm_test_exit(mm)) + break; + +- folio = folio_walk_start(&fw, vma, ksm_scan.address, 0); +- if (folio) { +- if (!folio_is_zone_device(folio) && +- folio_test_anon(folio)) { +- folio_get(folio); +- tmp_page = fw.page; +- } +- folio_walk_end(&fw, vma); ++ int found; ++ ++ found = walk_page_range_vma(vma, ksm_scan.address, ++ vma->vm_end, ++ &ksm_next_page_ops, ++ &ksm_next_page_arg); ++ ++ if (found > 0) { ++ folio = ksm_next_page_arg.folio; ++ tmp_page = ksm_next_page_arg.page; ++ ksm_scan.address = ksm_next_page_arg.addr; ++ } else { ++ VM_WARN_ON_ONCE(found < 0); ++ ksm_scan.address = vma->vm_end - PAGE_SIZE; + } + + if (tmp_page) { diff --git 
a/queue-6.12/ksmbd-close-accepted-socket-when-per-ip-limit-rejects-connection.patch b/queue-6.12/ksmbd-close-accepted-socket-when-per-ip-limit-rejects-connection.patch new file mode 100644 index 0000000000..6f654b7f31 --- /dev/null +++ b/queue-6.12/ksmbd-close-accepted-socket-when-per-ip-limit-rejects-connection.patch @@ -0,0 +1,42 @@ +From 98a5fd31cbf72d46bf18e50b3ab0ce86d5f319a9 Mon Sep 17 00:00:00 2001 +From: Joshua Rogers +Date: Sat, 8 Nov 2025 22:59:23 +0800 +Subject: ksmbd: close accepted socket when per-IP limit rejects connection + +From: Joshua Rogers + +commit 98a5fd31cbf72d46bf18e50b3ab0ce86d5f319a9 upstream. + +When the per-IP connection limit is exceeded in ksmbd_kthread_fn(), +the code sets ret = -EAGAIN and continues the accept loop without +closing the just-accepted socket. That leaks one socket per rejected +attempt from a single IP and enables a trivial remote DoS. + +Release client_sk before continuing. + +This bug was found with ZeroPath. + +Cc: stable@vger.kernel.org +Signed-off-by: Joshua Rogers +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Greg Kroah-Hartman +--- + fs/smb/server/transport_tcp.c | 5 ++++- + 1 file changed, 4 insertions(+), 1 deletion(-) + +--- a/fs/smb/server/transport_tcp.c ++++ b/fs/smb/server/transport_tcp.c +@@ -286,8 +286,11 @@ static int ksmbd_kthread_fn(void *p) + } + } + up_read(&conn_list_lock); +- if (ret == -EAGAIN) ++ if (ret == -EAGAIN) { ++ /* Per-IP limit hit: release the just-accepted socket. 
*/ ++ sock_release(client_sk); + continue; ++ } + + skip_max_ip_conns_limit: + if (server_conf.max_connections && diff --git a/queue-6.12/kvm-svm-mark-vmcb_lbr-dirty-when-msr_ia32_debugctlmsr-is-updated.patch b/queue-6.12/kvm-svm-mark-vmcb_lbr-dirty-when-msr_ia32_debugctlmsr-is-updated.patch new file mode 100644 index 0000000000..ac49ee7cd4 --- /dev/null +++ b/queue-6.12/kvm-svm-mark-vmcb_lbr-dirty-when-msr_ia32_debugctlmsr-is-updated.patch @@ -0,0 +1,47 @@ +From dc55b3c3f61246e483e50c85d8d5366f9567e188 Mon Sep 17 00:00:00 2001 +From: Yosry Ahmed +Date: Sat, 8 Nov 2025 00:45:19 +0000 +Subject: KVM: SVM: Mark VMCB_LBR dirty when MSR_IA32_DEBUGCTLMSR is updated + +From: Yosry Ahmed + +commit dc55b3c3f61246e483e50c85d8d5366f9567e188 upstream. + +The APM lists the DbgCtlMsr field as being tracked by the VMCB_LBR clean +bit. Always clear the bit when MSR_IA32_DEBUGCTLMSR is updated. + +The history is complicated, it was correctly cleared for L1 before +commit 1d5a1b5860ed ("KVM: x86: nSVM: correctly virtualize LBR msrs when +L2 is running"). At that point svm_set_msr() started to rely on +svm_update_lbrv() to clear the bit, but when nested virtualization +is enabled the latter does not always clear it even if MSR_IA32_DEBUGCTLMSR +changed. Go back to clearing it directly in svm_set_msr(). 
+
+Fixes: 1d5a1b5860ed ("KVM: x86: nSVM: correctly virtualize LBR msrs when L2 is running")
+Reported-by: Matteo Rizzo
+Reported-by: evn@google.com
+Co-developed-by: Jim Mattson
+Signed-off-by: Jim Mattson
+Signed-off-by: Yosry Ahmed
+Link: https://patch.msgid.link/20251108004524.1600006-2-yosry.ahmed@linux.dev
+Cc: stable@vger.kernel.org
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/x86/kvm/svm/svm.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3257,7 +3257,11 @@ static int svm_set_msr(struct kvm_vcpu *
+ if (data & DEBUGCTL_RESERVED_BITS)
+ return 1;
+
++ if (svm_get_lbr_vmcb(svm)->save.dbgctl == data)
++ break;
++
+ svm_get_lbr_vmcb(svm)->save.dbgctl = data;
++ vmcb_mark_dirty(svm->vmcb, VMCB_LBR);
+ svm_update_lbrv(vcpu);
+ break;
+ case MSR_VM_HSAVE_PA:
diff --git a/queue-6.12/loongarch-kvm-add-delay-until-timer-interrupt-injected.patch b/queue-6.12/loongarch-kvm-add-delay-until-timer-interrupt-injected.patch
new file mode 100644
index 0000000000..269b5a96ae
--- /dev/null
+++ b/queue-6.12/loongarch-kvm-add-delay-until-timer-interrupt-injected.patch
@@ -0,0 +1,46 @@
+From d3c9515e4f9d10ccb113adb4809db5cc31e7ef65 Mon Sep 17 00:00:00 2001
+From: Bibo Mao
+Date: Sun, 9 Nov 2025 16:02:09 +0800
+Subject: LoongArch: KVM: Add delay until timer interrupt injected
+
+From: Bibo Mao
+
+commit d3c9515e4f9d10ccb113adb4809db5cc31e7ef65 upstream.
+
+When the timer fires in oneshot mode, CSR.TVAL will stop with value -1
+rather than 0. However, when the register CSR.TVAL is restored, it will
+continue to count down rather than stop there.
+
+Now the method is to write 0 to CSR.TVAL, wait for it to count down at
+least 1 cycle, which is 10ns with a 100MHz timer, and then restore timer
+interrupt status. Here add a delay of 2 cycles to ensure that the timer
+interrupt is injected.
+
+With this patch, the timer selftest case always passes.
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Bibo Mao
+Signed-off-by: Huacai Chen
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/loongarch/kvm/timer.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/arch/loongarch/kvm/timer.c
++++ b/arch/loongarch/kvm/timer.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include 
++#include 
+ #include 
+ #include 
+
+@@ -95,6 +96,7 @@ void kvm_restore_timer(struct kvm_vcpu *
+ * and set CSR TVAL with -1
+ */
+ write_gcsr_timertick(0);
++ __delay(2); /* Wait cycles until timer interrupt injected */
+
+ /*
+ * Writing CSR_TINTCLR_TI to LOONGARCH_CSR_TINTCLR will clear
diff --git a/queue-6.12/loongarch-kvm-restore-guest-pmu-if-it-is-enabled.patch b/queue-6.12/loongarch-kvm-restore-guest-pmu-if-it-is-enabled.patch
new file mode 100644
index 0000000000..56bd098781
--- /dev/null
+++ b/queue-6.12/loongarch-kvm-restore-guest-pmu-if-it-is-enabled.patch
@@ -0,0 +1,49 @@
+From 5001bcf86edf2de02f025a0f789bcac37fa040e6 Mon Sep 17 00:00:00 2001
+From: Bibo Mao
+Date: Sun, 9 Nov 2025 16:02:09 +0800
+Subject: LoongArch: KVM: Restore guest PMU if it is enabled
+
+From: Bibo Mao
+
+commit 5001bcf86edf2de02f025a0f789bcac37fa040e6 upstream.
+
+On LoongArch systems, guest PMU hardware is shared by guest and host but
+the PMU interrupt is separate. The PMU is passed through to the VM, and there
+is a PMU context switch when exiting to the host and returning to the guest.
+
+There is an optimization to check whether the PMU is enabled by the guest:
+if not, there is no need to restore the PMU context when returning to the
+guest; if it is enabled, the guest PMU context needs to be switched on.
+Now the KVM_REQ_PMU notification is set on vCPU context switch, but it is
+missed if there is no vCPU context switch while the guest VM uses the PMU, so fix it.
+ +Cc: +Fixes: f4e40ea9f78f ("LoongArch: KVM: Add PMU support for guest") +Signed-off-by: Bibo Mao +Signed-off-by: Huacai Chen +Signed-off-by: Greg Kroah-Hartman +--- + arch/loongarch/kvm/vcpu.c | 5 +++++ + 1 file changed, 5 insertions(+) + +--- a/arch/loongarch/kvm/vcpu.c ++++ b/arch/loongarch/kvm/vcpu.c +@@ -127,6 +127,9 @@ static void kvm_lose_pmu(struct kvm_vcpu + * Clear KVM_LARCH_PMU if the guest is not using PMU CSRs when + * exiting the guest, so that the next time trap into the guest. + * We don't need to deal with PMU CSRs contexts. ++ * ++ * Otherwise set the request bit KVM_REQ_PMU to restore guest PMU ++ * before entering guest VM + */ + val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL0); + val |= kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL1); +@@ -134,6 +137,8 @@ static void kvm_lose_pmu(struct kvm_vcpu + val |= kvm_read_sw_gcsr(csr, LOONGARCH_CSR_PERFCTRL3); + if (!(val & KVM_PMU_EVENT_ENABLED)) + vcpu->arch.aux_inuse &= ~KVM_LARCH_PMU; ++ else ++ kvm_make_request(KVM_REQ_PMU, vcpu); + + kvm_restore_host_pmu(vcpu); + } diff --git a/queue-6.12/loongarch-let-pte-pmd-_modify-record-the-status-of-_page_dirty.patch b/queue-6.12/loongarch-let-pte-pmd-_modify-record-the-status-of-_page_dirty.patch new file mode 100644 index 0000000000..4d535c5148 --- /dev/null +++ b/queue-6.12/loongarch-let-pte-pmd-_modify-record-the-status-of-_page_dirty.patch @@ -0,0 +1,56 @@ +From a073d637c8cfbfbab39b7272226a3fbf3b887580 Mon Sep 17 00:00:00 2001 +From: Tianyang Zhang +Date: Sun, 9 Nov 2025 16:02:01 +0800 +Subject: LoongArch: Let {pte,pmd}_modify() record the status of _PAGE_DIRTY + +From: Tianyang Zhang + +commit a073d637c8cfbfbab39b7272226a3fbf3b887580 upstream. + +Now if the PTE/PMD is dirty with _PAGE_DIRTY but without _PAGE_MODIFIED, +after {pte,pmd}_modify() we lose _PAGE_DIRTY, then {pte,pmd}_dirty() +return false and lead to data loss. 
This can happen in certain scenarios +such as HW PTW doesn't set _PAGE_MODIFIED automatically, so here we need +_PAGE_MODIFIED to record the dirty status (_PAGE_DIRTY). + +The new modification involves checking whether the original PTE/PMD has +the _PAGE_DIRTY flag. If it exists, the _PAGE_MODIFIED bit is also set, +ensuring that the {pte,pmd}_dirty() interface can always return accurate +information. + +Cc: stable@vger.kernel.org +Co-developed-by: Liupu Wang +Signed-off-by: Liupu Wang +Signed-off-by: Tianyang Zhang +Signed-off-by: Greg Kroah-Hartman +--- + arch/loongarch/include/asm/pgtable.h | 11 ++++++++--- + 1 file changed, 8 insertions(+), 3 deletions(-) + +--- a/arch/loongarch/include/asm/pgtable.h ++++ b/arch/loongarch/include/asm/pgtable.h +@@ -431,6 +431,9 @@ static inline unsigned long pte_accessib + + static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) + { ++ if (pte_val(pte) & _PAGE_DIRTY) ++ pte_val(pte) |= _PAGE_MODIFIED; ++ + return __pte((pte_val(pte) & _PAGE_CHG_MASK) | + (pgprot_val(newprot) & ~_PAGE_CHG_MASK)); + } +@@ -565,9 +568,11 @@ static inline struct page *pmd_page(pmd_ + + static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) + { +- pmd_val(pmd) = (pmd_val(pmd) & _HPAGE_CHG_MASK) | +- (pgprot_val(newprot) & ~_HPAGE_CHG_MASK); +- return pmd; ++ if (pmd_val(pmd) & _PAGE_DIRTY) ++ pmd_val(pmd) |= _PAGE_MODIFIED; ++ ++ return __pmd((pmd_val(pmd) & _HPAGE_CHG_MASK) | ++ (pgprot_val(newprot) & ~_HPAGE_CHG_MASK)); + } + + static inline pmd_t pmd_mkinvalid(pmd_t pmd) diff --git a/queue-6.12/loongarch-use-correct-accessor-to-read-fwpc-mwpc.patch b/queue-6.12/loongarch-use-correct-accessor-to-read-fwpc-mwpc.patch new file mode 100644 index 0000000000..c5e3504f51 --- /dev/null +++ b/queue-6.12/loongarch-use-correct-accessor-to-read-fwpc-mwpc.patch @@ -0,0 +1,38 @@ +From eeeeaafa62ea0cd4b86390f657dc0aea73bff4f5 Mon Sep 17 00:00:00 2001 +From: Huacai Chen +Date: Sun, 9 Nov 2025 16:02:01 +0800 +Subject: LoongArch: Use correct accessor to 
read FWPC/MWPC + +From: Huacai Chen + +commit eeeeaafa62ea0cd4b86390f657dc0aea73bff4f5 upstream. + +CSR.FWPC and CSR.MWPC are 32bit registers, so use csr_read32() rather +than csr_read64() to read the values of FWPC/MWPC. + +Cc: stable@vger.kernel.org +Fixes: edffa33c7bb5a73 ("LoongArch: Add hardware breakpoints/watchpoints support") +Signed-off-by: Huacai Chen +Signed-off-by: Greg Kroah-Hartman +--- + arch/loongarch/include/asm/hw_breakpoint.h | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +--- a/arch/loongarch/include/asm/hw_breakpoint.h ++++ b/arch/loongarch/include/asm/hw_breakpoint.h +@@ -134,13 +134,13 @@ static inline void hw_breakpoint_thread_ + /* Determine number of BRP registers available. */ + static inline int get_num_brps(void) + { +- return csr_read64(LOONGARCH_CSR_FWPC) & CSR_FWPC_NUM; ++ return csr_read32(LOONGARCH_CSR_FWPC) & CSR_FWPC_NUM; + } + + /* Determine number of WRP registers available. */ + static inline int get_num_wrps(void) + { +- return csr_read64(LOONGARCH_CSR_MWPC) & CSR_MWPC_NUM; ++ return csr_read32(LOONGARCH_CSR_MWPC) & CSR_MWPC_NUM; + } + + #endif /* __KERNEL__ */ diff --git a/queue-6.12/nfsd-add-missing-fattr4_word2_clone_blksize-from-supported-attributes.patch b/queue-6.12/nfsd-add-missing-fattr4_word2_clone_blksize-from-supported-attributes.patch new file mode 100644 index 0000000000..d73e04186c --- /dev/null +++ b/queue-6.12/nfsd-add-missing-fattr4_word2_clone_blksize-from-supported-attributes.patch @@ -0,0 +1,32 @@ +From 4d3dbc2386fe051e44efad663e0ec828b98ab53f Mon Sep 17 00:00:00 2001 +From: Olga Kornievskaia +Date: Thu, 9 Oct 2025 16:37:59 -0400 +Subject: nfsd: add missing FATTR4_WORD2_CLONE_BLKSIZE from supported attributes + +From: Olga Kornievskaia + +commit 4d3dbc2386fe051e44efad663e0ec828b98ab53f upstream. + +RFC 7862 Section 4.1.2 says that if the server supports CLONE it MUST +support clone_blksize attribute. 
+ +Fixes: d6ca7d2643ee ("NFSD: Implement FATTR4_CLONE_BLKSIZE attribute") +Cc: stable@vger.kernel.org +Signed-off-by: Olga Kornievskaia +Reviewed-by: Jeff Layton +Signed-off-by: Chuck Lever +Signed-off-by: Greg Kroah-Hartman +--- + fs/nfsd/nfsd.h | 1 + + 1 file changed, 1 insertion(+) + +--- a/fs/nfsd/nfsd.h ++++ b/fs/nfsd/nfsd.h +@@ -458,6 +458,7 @@ enum { + #define NFSD4_2_SUPPORTED_ATTRS_WORD2 \ + (NFSD4_1_SUPPORTED_ATTRS_WORD2 | \ + FATTR4_WORD2_MODE_UMASK | \ ++ FATTR4_WORD2_CLONE_BLKSIZE | \ + NFSD4_2_SECURITY_ATTRS | \ + FATTR4_WORD2_XATTR_SUPPORT) + diff --git a/queue-6.12/nfsd-fix-refcount-leak-in-nfsd_set_fh_dentry.patch b/queue-6.12/nfsd-fix-refcount-leak-in-nfsd_set_fh_dentry.patch new file mode 100644 index 0000000000..31c43008b2 --- /dev/null +++ b/queue-6.12/nfsd-fix-refcount-leak-in-nfsd_set_fh_dentry.patch @@ -0,0 +1,60 @@ +From 8a7348a9ed70bda1c1f51d3f1815bcbdf9f3b38c Mon Sep 17 00:00:00 2001 +From: NeilBrown +Date: Wed, 8 Oct 2025 09:52:25 -0400 +Subject: nfsd: fix refcount leak in nfsd_set_fh_dentry() + +From: NeilBrown + +commit 8a7348a9ed70bda1c1f51d3f1815bcbdf9f3b38c upstream. + +nfsd exports a "pseudo root filesystem" which is used by NFSv4 to find +the various exported filesystems using LOOKUP requests from a known root +filehandle. NFSv3 uses the MOUNT protocol to find those exported +filesystems and so is not given access to the pseudo root filesystem. + +If a v3 (or v2) client uses a filehandle from that filesystem, +nfsd_set_fh_dentry() will report an error, but still stores the export +in "struct svc_fh" even though it also drops the reference (exp_put()). +This means that when fh_put() is called an extra reference will be dropped +which can lead to use-after-free and possible denial of service. + +Normal NFS usage will not provide a pseudo-root filehandle to a v3 +client. This bug can only be triggered by the client synthesising an +incorrect filehandle. 
+
+To fix this we move the assignments to the svc_fh later, after all
+possible error cases have been detected.
+
+Reported-and-tested-by: tianshuo han
+Fixes: ef7f6c4904d0 ("nfsd: move V4ROOT version check to nfsd_set_fh_dentry()")
+Signed-off-by: NeilBrown
+Reviewed-by: Jeff Layton
+Cc: stable@vger.kernel.org
+Signed-off-by: Chuck Lever
+Signed-off-by: Greg Kroah-Hartman
+---
+ fs/nfsd/nfsfh.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -268,9 +268,6 @@ static __be32 nfsd_set_fh_dentry(struct
+ dentry);
+ }
+
+- fhp->fh_dentry = dentry;
+- fhp->fh_export = exp;
+-
+ switch (fhp->fh_maxsize) {
+ case NFS4_FHSIZE:
+ if (dentry->d_sb->s_export_op->flags & EXPORT_OP_NOATOMIC_ATTR)
+@@ -292,6 +289,9 @@ static __be32 nfsd_set_fh_dentry(struct
+ goto out;
+ }
+
++ fhp->fh_dentry = dentry;
++ fhp->fh_export = exp;
++
+ return 0;
+ out:
+ exp_put(exp);
diff --git a/queue-6.12/nfsd-free-copynotify-stateid-in-nfs4_free_ol_stateid.patch b/queue-6.12/nfsd-free-copynotify-stateid-in-nfs4_free_ol_stateid.patch
new file mode 100644
index 0000000000..a85901f29e
--- /dev/null
+++ b/queue-6.12/nfsd-free-copynotify-stateid-in-nfs4_free_ol_stateid.patch
@@ -0,0 +1,85 @@
+From 4aa17144d5abc3c756883e3a010246f0dba8b468 Mon Sep 17 00:00:00 2001
+From: Olga Kornievskaia
+Date: Tue, 14 Oct 2025 13:59:59 -0400
+Subject: NFSD: free copynotify stateid in nfs4_free_ol_stateid()
+
+From: Olga Kornievskaia
+
+commit 4aa17144d5abc3c756883e3a010246f0dba8b468 upstream.
+
+Typically a copynotify stateid is freed either when the parent stateid
+is being closed/freed or in nfsd4_laundromat if the stateid hasn't
+been used in a lease period.
+
+However, consider the case where the server got an OPEN (which created
+a parent stateid), followed by a COPY_NOTIFY using that stateid,
+followed by a client reboot. The new client instance, while doing
+CREATE_SESSION, would force-expire the previous state of this client.
+It leads to the open state being freed thru release_openowner-> +nfs4_free_ol_stateid() and it finds that it still has copynotify +stateid associated with it. We currently print a warning and is +triggerred + +WARNING: CPU: 1 PID: 8858 at fs/nfsd/nfs4state.c:1550 nfs4_free_ol_stateid+0xb0/0x100 [nfsd] + +This patch, instead, frees the associated copynotify stateid here. + +If the parent stateid is freed (without freeing the copynotify +stateids associated with it), it leads to the list corruption +when laundromat ends up freeing the copynotify state later. + +[ 1626.839430] Internal error: Oops - BUG: 00000000f2000800 [#1] SMP +[ 1626.842828] Modules linked in: nfnetlink_queue nfnetlink_log bluetooth cfg80211 rpcrdma rdma_cm iw_cm ib_cm ib_core nfsd nfs_acl lockd grace nfs_localio ext4 crc16 mbcache jbd2 overlay uinput snd_seq_dummy snd_hrtimer qrtr rfkill vfat fat uvcvideo snd_hda_codec_generic videobuf2_vmalloc videobuf2_memops snd_hda_intel uvc snd_intel_dspcfg videobuf2_v4l2 videobuf2_common snd_hda_codec snd_hda_core videodev snd_hwdep snd_seq mc snd_seq_device snd_pcm snd_timer snd soundcore sg loop auth_rpcgss vsock_loopback vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs 8021q garp stp llc mrp nvme ghash_ce e1000e nvme_core sr_mod nvme_keyring nvme_auth cdrom vmwgfx drm_ttm_helper ttm sunrpc dm_mirror dm_region_hash dm_log iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi fuse dm_multipath dm_mod nfnetlink +[ 1626.855594] CPU: 2 UID: 0 PID: 199 Comm: kworker/u24:33 Kdump: loaded Tainted: G B W 6.17.0-rc7+ #22 PREEMPT(voluntary) +[ 1626.857075] Tainted: [B]=BAD_PAGE, [W]=WARN +[ 1626.857573] Hardware name: VMware, Inc. 
VMware20,1/VBSA, BIOS VMW201.00V.24006586.BA64.2406042154 06/04/2024 +[ 1626.858724] Workqueue: nfsd4 laundromat_main [nfsd] +[ 1626.859304] pstate: 61400005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--) +[ 1626.860010] pc : __list_del_entry_valid_or_report+0x148/0x200 +[ 1626.860601] lr : __list_del_entry_valid_or_report+0x148/0x200 +[ 1626.861182] sp : ffff8000881d7a40 +[ 1626.861521] x29: ffff8000881d7a40 x28: 0000000000000018 x27: ffff0000c2a98200 +[ 1626.862260] x26: 0000000000000600 x25: 0000000000000000 x24: ffff8000881d7b20 +[ 1626.862986] x23: ffff0000c2a981e8 x22: 1fffe00012410e7d x21: ffff0000920873e8 +[ 1626.863701] x20: ffff0000920873e8 x19: ffff000086f22998 x18: 0000000000000000 +[ 1626.864421] x17: 20747562202c3839 x16: 3932326636383030 x15: 3030666666662065 +[ 1626.865092] x14: 6220646c756f6873 x13: 0000000000000001 x12: ffff60004fd9e4a3 +[ 1626.865713] x11: 1fffe0004fd9e4a2 x10: ffff60004fd9e4a2 x9 : dfff800000000000 +[ 1626.866320] x8 : 00009fffb0261b5e x7 : ffff00027ecf2513 x6 : 0000000000000001 +[ 1626.866938] x5 : ffff00027ecf2510 x4 : ffff60004fd9e4a3 x3 : 0000000000000000 +[ 1626.867553] x2 : 0000000000000000 x1 : ffff000096069640 x0 : 000000000000006d +[ 1626.868167] Call trace: +[ 1626.868382] __list_del_entry_valid_or_report+0x148/0x200 (P) +[ 1626.868876] _free_cpntf_state_locked+0xd0/0x268 [nfsd] +[ 1626.869368] nfs4_laundromat+0x6f8/0x1058 [nfsd] +[ 1626.869813] laundromat_main+0x24/0x60 [nfsd] +[ 1626.870231] process_one_work+0x584/0x1050 +[ 1626.870595] worker_thread+0x4c4/0xc60 +[ 1626.870893] kthread+0x2f8/0x398 +[ 1626.871146] ret_from_fork+0x10/0x20 +[ 1626.871422] Code: aa1303e1 aa1403e3 910e8000 97bc55d7 (d4210000) +[ 1626.871892] SMP: stopping secondary CPUs + +Reported-by: rtm@csail.mit.edu +Closes: https://lore.kernel.org/linux-nfs/d8f064c1-a26f-4eed-b4f0-1f7f608f415f@oracle.com/T/#t +Fixes: 624322f1adc5 ("NFSD add COPY_NOTIFY operation") +Cc: stable@vger.kernel.org +Signed-off-by: Olga Kornievskaia +Signed-off-by: Chuck 
Lever +Signed-off-by: Greg Kroah-Hartman +--- + fs/nfsd/nfs4state.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1528,7 +1528,8 @@ static void nfs4_free_ol_stateid(struct + release_all_access(stp); + if (stp->st_stateowner) + nfs4_put_stateowner(stp->st_stateowner); +- WARN_ON(!list_empty(&stid->sc_cp_list)); ++ if (!list_empty(&stid->sc_cp_list)) ++ nfs4_free_cpntf_statelist(stid->sc_client->net, stid); + kmem_cache_free(stateid_slab, stid); + } + diff --git a/queue-6.12/series b/queue-6.12/series index 5c8450da16..64d3985617 100644 --- a/queue-6.12/series +++ b/queue-6.12/series @@ -104,3 +104,16 @@ arm-dts-bcm53573-fix-address-of-luxul-xap-1440-s-eth.patch hid-playstation-fix-memory-leak-in-dualshock4_get_ca.patch hid-uclogic-fix-potential-memory-leak-in-error-path.patch net-dsa-sja1105-fix-kasan-out-of-bounds-warning-in-s.patch +loongarch-kvm-restore-guest-pmu-if-it-is-enabled.patch +loongarch-kvm-add-delay-until-timer-interrupt-injected.patch +kvm-svm-mark-vmcb_lbr-dirty-when-msr_ia32_debugctlmsr-is-updated.patch +nfsd-fix-refcount-leak-in-nfsd_set_fh_dentry.patch +nfsd-add-missing-fattr4_word2_clone_blksize-from-supported-attributes.patch +nfsd-free-copynotify-stateid-in-nfs4_free_ol_stateid.patch +gcov-add-support-for-gcc-15.patch +ksmbd-close-accepted-socket-when-per-ip-limit-rejects-connection.patch +ksm-use-range-walk-function-to-jump-over-holes-in-scan_get_next_rmap_item.patch +strparser-fix-signed-unsigned-mismatch-bug.patch +dma-mapping-benchmark-restore-padding-to-ensure-uabi-remained-consistent.patch +loongarch-use-correct-accessor-to-read-fwpc-mwpc.patch +loongarch-let-pte-pmd-_modify-record-the-status-of-_page_dirty.patch diff --git a/queue-6.12/strparser-fix-signed-unsigned-mismatch-bug.patch b/queue-6.12/strparser-fix-signed-unsigned-mismatch-bug.patch new file mode 100644 index 0000000000..036a265db9 --- /dev/null +++ 
b/queue-6.12/strparser-fix-signed-unsigned-mismatch-bug.patch @@ -0,0 +1,47 @@ +From 4da4e4bde1c453ac5cc2dce5def81d504ae257ee Mon Sep 17 00:00:00 2001 +From: Nate Karstens +Date: Thu, 6 Nov 2025 16:28:33 -0600 +Subject: strparser: Fix signed/unsigned mismatch bug + +From: Nate Karstens + +commit 4da4e4bde1c453ac5cc2dce5def81d504ae257ee upstream. + +The `len` member of the sk_buff is an unsigned int. This is cast to +`ssize_t` (a signed type) for the first sk_buff in the comparison, +but not the second sk_buff. On 32-bit systems, this can result in +an integer underflow for certain values because unsigned arithmetic +is being used. + +This appears to be an oversight: if the intention was to use unsigned +arithmetic, then the first cast would have been omitted. The change +ensures both len values are cast to `ssize_t`. + +The underflow causes an issue with ktls when multiple TLS PDUs are +included in a single TCP segment. The mainline kernel does not use +strparser for ktls anymore, but this is still useful for other +features that still use strparser, and for backporting. + +Signed-off-by: Nate Karstens +Cc: stable@vger.kernel.org +Fixes: 43a0c6751a32 ("strparser: Stream parser for messages") +Reviewed-by: Jacob Keller +Reviewed-by: Sabrina Dubroca +Link: https://patch.msgid.link/20251106222835.1871628-1-nate.karstens@garmin.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Greg Kroah-Hartman +--- + net/strparser/strparser.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/net/strparser/strparser.c ++++ b/net/strparser/strparser.c +@@ -238,7 +238,7 @@ static int __strp_recv(read_descriptor_t + strp_parser_err(strp, -EMSGSIZE, desc); + break; + } else if (len <= (ssize_t)head->len - +- skb->len - stm->strp.offset) { ++ (ssize_t)skb->len - stm->strp.offset) { + /* Length must be into new skb (and also + * greater than zero) + */