--- /dev/null
+From e9418da50d9e5c496c22fe392e4ad74c038a94eb Mon Sep 17 00:00:00 2001
+From: Harin Lee <me@harin.net>
+Date: Mon, 6 Apr 2026 16:48:57 +0900
+Subject: ALSA: ctxfi: Limit PTP to a single page
+
+From: Harin Lee <me@harin.net>
+
+commit e9418da50d9e5c496c22fe392e4ad74c038a94eb upstream.
+
+Commit 391e69143d0a increased CT_PTP_NUM from 1 to 4 to support 256
+playback streams, but the additional pages are not used by the card
+correctly. The CT20K2 hardware already has multiple VMEM_PTPAL
+registers, but using them separately would require refactoring the
+entire virtual memory allocation logic.
+
+ct_vm_map() always uses PTEs in vm->ptp[0].area regardless of
+CT_PTP_NUM. On AMD64 systems, a single PTP covers 512 PTEs (2M). When
+aggregate memory allocations exceed this limit, ct_vm_map() tries to
+access beyond the allocated space and causes a page fault:
+
+ BUG: unable to handle page fault for address: ffffd4ae8a10a000
+ Oops: Oops: 0002 [#1] SMP PTI
+ RIP: 0010:ct_vm_map+0x17c/0x280 [snd_ctxfi]
+ Call Trace:
+ atc_pcm_playback_prepare+0x225/0x3b0
+ ct_pcm_playback_prepare+0x38/0x60
+ snd_pcm_do_prepare+0x2f/0x50
+ snd_pcm_action_single+0x36/0x90
+ snd_pcm_action_nonatomic+0xbf/0xd0
+ snd_pcm_ioctl+0x28/0x40
+ __x64_sys_ioctl+0x97/0xe0
+ do_syscall_64+0x81/0x610
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+
+Revert CT_PTP_NUM to 1. The 256 SRC_RESOURCE_NUM and playback_count
+remain unchanged.
+
+Fixes: 391e69143d0a ("ALSA: ctxfi: Bump playback substreams to 256")
+Cc: stable@vger.kernel.org
+Signed-off-by: Harin Lee <me@harin.net>
+Link: https://patch.msgid.link/20260406074857.216034-1-me@harin.net
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ sound/pci/ctxfi/ctvmem.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/sound/pci/ctxfi/ctvmem.h
++++ b/sound/pci/ctxfi/ctvmem.h
+@@ -15,7 +15,7 @@
+ #ifndef CTVMEM_H
+ #define CTVMEM_H
+
+-#define CT_PTP_NUM 4 /* num of device page table pages */
++#define CT_PTP_NUM 1 /* num of device page table pages */
+
+ #include <linux/mutex.h>
+ #include <linux/list.h>
--- /dev/null
+From 15bfba1ad77fad8e45a37aae54b3c813b33fe27c Mon Sep 17 00:00:00 2001
+From: Ryan Roberts <ryan.roberts@arm.com>
+Date: Mon, 30 Mar 2026 17:17:03 +0100
+Subject: arm64: mm: Handle invalid large leaf mappings correctly
+
+From: Ryan Roberts <ryan.roberts@arm.com>
+
+commit 15bfba1ad77fad8e45a37aae54b3c813b33fe27c upstream.
+
+It has been possible for a long time to mark ptes in the linear map as
+invalid. This is done for secretmem, kfence, realm dma memory un/share,
+and others, by simply clearing the PTE_VALID bit. But until commit
+a166563e7ec37 ("arm64: mm: support large block mapping when
+rodata=full") large leaf mappings were never made invalid in this way.
+
+It turns out various parts of the code base are not equipped to handle
+invalid large leaf mappings (in the way they are currently encoded) and
+I've observed a kernel panic while booting a realm guest on a
+BBML2_NOABORT system as a result:
+
+[ 15.432706] software IO TLB: Memory encryption is active and system is using DMA bounce buffers
+[ 15.476896] Unable to handle kernel paging request at virtual address ffff000019600000
+[ 15.513762] Mem abort info:
+[ 15.527245] ESR = 0x0000000096000046
+[ 15.548553] EC = 0x25: DABT (current EL), IL = 32 bits
+[ 15.572146] SET = 0, FnV = 0
+[ 15.592141] EA = 0, S1PTW = 0
+[ 15.612694] FSC = 0x06: level 2 translation fault
+[ 15.640644] Data abort info:
+[ 15.661983] ISV = 0, ISS = 0x00000046, ISS2 = 0x00000000
+[ 15.694875] CM = 0, WnR = 1, TnD = 0, TagAccess = 0
+[ 15.723740] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
+[ 15.755776] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000081f3f000
+[ 15.800410] [ffff000019600000] pgd=0000000000000000, p4d=180000009ffff403, pud=180000009fffe403, pmd=00e8000199600704
+[ 15.855046] Internal error: Oops: 0000000096000046 [#1] SMP
+[ 15.886394] Modules linked in:
+[ 15.900029] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 7.0.0-rc4-dirty #4 PREEMPT
+[ 15.935258] Hardware name: linux,dummy-virt (DT)
+[ 15.955612] pstate: 21400005 (nzCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
+[ 15.986009] pc : __pi_memcpy_generic+0x128/0x22c
+[ 16.006163] lr : swiotlb_bounce+0xf4/0x158
+[ 16.024145] sp : ffff80008000b8f0
+[ 16.038896] x29: ffff80008000b8f0 x28: 0000000000000000 x27: 0000000000000000
+[ 16.069953] x26: ffffb3976d261ba8 x25: 0000000000000000 x24: ffff000019600000
+[ 16.100876] x23: 0000000000000001 x22: ffff0000043430d0 x21: 0000000000007ff0
+[ 16.131946] x20: 0000000084570010 x19: 0000000000000000 x18: ffff00001ffe3fcc
+[ 16.163073] x17: 0000000000000000 x16: 00000000003fffff x15: 646e612065766974
+[ 16.194131] x14: 0000000000000000 x13: 0000000000000000 x12: 0000000000000000
+[ 16.225059] x11: 0000000000000000 x10: 0000000000000010 x9 : 0000000000000018
+[ 16.256113] x8 : 0000000000000018 x7 : 0000000000000000 x6 : 0000000000000000
+[ 16.287203] x5 : ffff000019607ff0 x4 : ffff000004578000 x3 : ffff000019600000
+[ 16.318145] x2 : 0000000000007ff0 x1 : ffff000004570010 x0 : ffff000019600000
+[ 16.349071] Call trace:
+[ 16.360143] __pi_memcpy_generic+0x128/0x22c (P)
+[ 16.380310] swiotlb_tbl_map_single+0x154/0x2b4
+[ 16.400282] swiotlb_map+0x5c/0x228
+[ 16.415984] dma_map_phys+0x244/0x2b8
+[ 16.432199] dma_map_page_attrs+0x44/0x58
+[ 16.449782] virtqueue_map_page_attrs+0x38/0x44
+[ 16.469596] virtqueue_map_single_attrs+0xc0/0x130
+[ 16.490509] virtnet_rq_alloc.isra.0+0xa4/0x1fc
+[ 16.510355] try_fill_recv+0x2a4/0x584
+[ 16.526989] virtnet_open+0xd4/0x238
+[ 16.542775] __dev_open+0x110/0x24c
+[ 16.558280] __dev_change_flags+0x194/0x20c
+[ 16.576879] netif_change_flags+0x24/0x6c
+[ 16.594489] dev_change_flags+0x48/0x7c
+[ 16.611462] ip_auto_config+0x258/0x1114
+[ 16.628727] do_one_initcall+0x80/0x1c8
+[ 16.645590] kernel_init_freeable+0x208/0x2f0
+[ 16.664917] kernel_init+0x24/0x1e0
+[ 16.680295] ret_from_fork+0x10/0x20
+[ 16.696369] Code: 927cec03 cb0e0021 8b0e0042 a9411c26 (a900340c)
+[ 16.723106] ---[ end trace 0000000000000000 ]---
+[ 16.752866] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
+[ 16.792556] Kernel Offset: 0x3396ea200000 from 0xffff800080000000
+[ 16.818966] PHYS_OFFSET: 0xfff1000080000000
+[ 16.837237] CPU features: 0x0000000,00060005,13e38581,957e772f
+[ 16.862904] Memory Limit: none
+[ 16.876526] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
+
+This panic occurs because the swiotlb memory was previously shared to
+the host (__set_memory_enc_dec()), which involves transitioning the
+(large) leaf mappings to invalid, sharing to the host, then marking the
+mappings valid again. But pageattr_p[mu]d_entry() would only update the
+entry if it was a section mapping, since otherwise it concluded it must
+be a table entry that shouldn't be modified. But p[mu]d_sect() only
+returns true if the entry is valid. So the result was that the large
+leaf entry was made invalid in the first pass then ignored in the second
+pass. It remains invalid until the above code tries to access it and
+blows up.
+
+The simple fix would be to update pageattr_pmd_entry() to use
+!pmd_table() instead of pmd_sect(). That would solve this problem.
+
+But the ptdump code also suffers from a similar issue. It checks
+pmd_leaf() and doesn't call into the arch-specific note_page() machinery
+if it returns false. As a result of this, ptdump wasn't even able to
+show the invalid large leaf mappings; it looked like they were valid,
+which made this super fun to debug. The ptdump code is core-mm and
+pmd_table() is arm64-specific, so we can't use the same trick to solve
+that.
+
+But we already support the concept of "present-invalid" for user space
+entries. And even better, pmd_leaf() will return true for a leaf mapping
+that is marked present-invalid. So let's just use that encoding for
+present-invalid kernel mappings too. Then we can use pmd_leaf() where we
+previously used pmd_sect() and everything is magically fixed.
+
+Additionally, from inspection kernel_page_present() was broken in a
+similar way, so I'm also updating that to use pmd_leaf().
+
+The transitional page tables component was also similarly broken; it
+creates a copy of the kernel page tables, making RO leaf mappings RW in
+the process. It also makes invalid (but-not-none) pte mappings valid.
+But it was not doing this for large leaf mappings. This could have
+resulted in crashes at kexec- or hibernate-time. This code is fixed to
+flip "present-invalid" mappings back to "present-valid" at all levels.
+
+Finally, I have hardened split_pmd()/split_pud() so that if it is passed
+a "present-invalid" leaf, it will maintain that property in the split
+leaves, since I wasn't able to convince myself that it would only ever
+be called for "present-valid" leaves.
+
+Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
+Cc: stable@vger.kernel.org
+Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/include/asm/pgtable-prot.h | 2 +
+ arch/arm64/include/asm/pgtable.h | 9 ++++--
+ arch/arm64/mm/mmu.c | 4 ++
+ arch/arm64/mm/pageattr.c | 50 +++++++++++++++++++---------------
+ arch/arm64/mm/trans_pgd.c | 42 ++++------------------------
+ 5 files changed, 48 insertions(+), 59 deletions(-)
+
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -25,6 +25,8 @@
+ */
+ #define PTE_PRESENT_INVALID (PTE_NG) /* only when !PTE_VALID */
+
++#define PTE_PRESENT_VALID_KERNEL (PTE_VALID | PTE_MAYBE_NG)
++
+ #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+ #define PTE_UFFD_WP (_AT(pteval_t, 1) << 58) /* uffd-wp tracking */
+ #define PTE_SWP_UFFD_WP (_AT(pteval_t, 1) << 3) /* only for swp ptes */
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -353,9 +353,11 @@ static inline pte_t pte_mknoncont(pte_t
+ return clear_pte_bit(pte, __pgprot(PTE_CONT));
+ }
+
+-static inline pte_t pte_mkvalid(pte_t pte)
++static inline pte_t pte_mkvalid_k(pte_t pte)
+ {
+- return set_pte_bit(pte, __pgprot(PTE_VALID));
++ pte = clear_pte_bit(pte, __pgprot(PTE_PRESENT_INVALID));
++ pte = set_pte_bit(pte, __pgprot(PTE_PRESENT_VALID_KERNEL));
++ return pte;
+ }
+
+ static inline pte_t pte_mkinvalid(pte_t pte)
+@@ -625,6 +627,7 @@ static inline int pmd_protnone(pmd_t pmd
+ #define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
+ #define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd)))
+ #define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
++#define pmd_mkvalid_k(pmd) pte_pmd(pte_mkvalid_k(pmd_pte(pmd)))
+ #define pmd_mkinvalid(pmd) pte_pmd(pte_mkinvalid(pmd_pte(pmd)))
+ #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
+ #define pmd_uffd_wp(pmd) pte_uffd_wp(pmd_pte(pmd))
+@@ -666,6 +669,8 @@ static inline pmd_t pmd_mkspecial(pmd_t
+
+ #define pud_young(pud) pte_young(pud_pte(pud))
+ #define pud_mkyoung(pud) pte_pud(pte_mkyoung(pud_pte(pud)))
++#define pud_mkwrite_novma(pud) pte_pud(pte_mkwrite_novma(pud_pte(pud)))
++#define pud_mkvalid_k(pud) pte_pud(pte_mkvalid_k(pud_pte(pud)))
+ #define pud_write(pud) pte_write(pud_pte(pud))
+
+ static inline pud_t pud_mkhuge(pud_t pud)
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -604,6 +604,8 @@ static int split_pmd(pmd_t *pmdp, pmd_t
+ tableprot |= PMD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PTE_TYPE_MASK) | PTE_TYPE_PAGE);
++ if (!pmd_valid(pmd))
++ prot = pte_pgprot(pte_mkinvalid(pfn_pte(0, prot)));
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+@@ -649,6 +651,8 @@ static int split_pud(pud_t *pudp, pud_t
+ tableprot |= PUD_TABLE_PXN;
+
+ prot = __pgprot((pgprot_val(prot) & ~PMD_TYPE_MASK) | PMD_TYPE_SECT);
++ if (!pud_valid(pud))
++ prot = pmd_pgprot(pmd_mkinvalid(pfn_pmd(0, prot)));
+ prot = __pgprot(pgprot_val(prot) & ~PTE_CONT);
+ if (to_cont)
+ prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+--- a/arch/arm64/mm/pageattr.c
++++ b/arch/arm64/mm/pageattr.c
+@@ -25,6 +25,11 @@ static ptdesc_t set_pageattr_masks(ptdes
+ {
+ struct page_change_data *masks = walk->private;
+
++ /*
++ * Some users clear and set bits which alias each other (e.g. PTE_NG and
++ * PTE_PRESENT_INVALID). It is therefore important that we always clear
++ * first then set.
++ */
+ val &= ~(pgprot_val(masks->clear_mask));
+ val |= (pgprot_val(masks->set_mask));
+
+@@ -36,7 +41,7 @@ static int pageattr_pud_entry(pud_t *pud
+ {
+ pud_t val = pudp_get(pud);
+
+- if (pud_sect(val)) {
++ if (pud_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PUD_SIZE))
+ return -EINVAL;
+ val = __pud(set_pageattr_masks(pud_val(val), walk));
+@@ -52,7 +57,7 @@ static int pageattr_pmd_entry(pmd_t *pmd
+ {
+ pmd_t val = pmdp_get(pmd);
+
+- if (pmd_sect(val)) {
++ if (pmd_leaf(val)) {
+ if (WARN_ON_ONCE((next - addr) != PMD_SIZE))
+ return -EINVAL;
+ val = __pmd(set_pageattr_masks(pmd_val(val), walk));
+@@ -132,11 +137,12 @@ static int __change_memory_common(unsign
+ ret = update_range_prot(start, size, set_mask, clear_mask);
+
+ /*
+- * If the memory is being made valid without changing any other bits
+- * then a TLBI isn't required as a non-valid entry cannot be cached in
+- * the TLB.
++ * If the memory is being switched from present-invalid to valid without
++ * changing any other bits then a TLBI isn't required as a non-valid
++ * entry cannot be cached in the TLB.
+ */
+- if (pgprot_val(set_mask) != PTE_VALID || pgprot_val(clear_mask))
++ if (pgprot_val(set_mask) != PTE_PRESENT_VALID_KERNEL ||
++ pgprot_val(clear_mask) != PTE_PRESENT_INVALID)
+ flush_tlb_kernel_range(start, start + size);
+ return ret;
+ }
+@@ -234,18 +240,18 @@ int set_memory_valid(unsigned long addr,
+ {
+ if (enable)
+ return __change_memory_common(addr, PAGE_SIZE * numpages,
+- __pgprot(PTE_VALID),
+- __pgprot(0));
++ __pgprot(PTE_PRESENT_VALID_KERNEL),
++ __pgprot(PTE_PRESENT_INVALID));
+ else
+ return __change_memory_common(addr, PAGE_SIZE * numpages,
+- __pgprot(0),
+- __pgprot(PTE_VALID));
++ __pgprot(PTE_PRESENT_INVALID),
++ __pgprot(PTE_PRESENT_VALID_KERNEL));
+ }
+
+ int set_direct_map_invalid_noflush(struct page *page)
+ {
+- pgprot_t clear_mask = __pgprot(PTE_VALID);
+- pgprot_t set_mask = __pgprot(0);
++ pgprot_t clear_mask = __pgprot(PTE_PRESENT_VALID_KERNEL);
++ pgprot_t set_mask = __pgprot(PTE_PRESENT_INVALID);
+
+ if (!can_set_direct_map())
+ return 0;
+@@ -256,8 +262,8 @@ int set_direct_map_invalid_noflush(struc
+
+ int set_direct_map_default_noflush(struct page *page)
+ {
+- pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
+- pgprot_t clear_mask = __pgprot(PTE_RDONLY);
++ pgprot_t set_mask = __pgprot(PTE_PRESENT_VALID_KERNEL | PTE_WRITE);
++ pgprot_t clear_mask = __pgprot(PTE_PRESENT_INVALID | PTE_RDONLY);
+
+ if (!can_set_direct_map())
+ return 0;
+@@ -293,8 +299,8 @@ static int __set_memory_enc_dec(unsigned
+ * entries or Synchronous External Aborts caused by RIPAS_EMPTY
+ */
+ ret = __change_memory_common(addr, PAGE_SIZE * numpages,
+- __pgprot(set_prot),
+- __pgprot(clear_prot | PTE_VALID));
++ __pgprot(set_prot | PTE_PRESENT_INVALID),
++ __pgprot(clear_prot | PTE_PRESENT_VALID_KERNEL));
+
+ if (ret)
+ return ret;
+@@ -308,8 +314,8 @@ static int __set_memory_enc_dec(unsigned
+ return ret;
+
+ return __change_memory_common(addr, PAGE_SIZE * numpages,
+- __pgprot(PTE_VALID),
+- __pgprot(0));
++ __pgprot(PTE_PRESENT_VALID_KERNEL),
++ __pgprot(PTE_PRESENT_INVALID));
+ }
+
+ static int realm_set_memory_encrypted(unsigned long addr, int numpages)
+@@ -401,15 +407,15 @@ bool kernel_page_present(struct page *pa
+ pud = READ_ONCE(*pudp);
+ if (pud_none(pud))
+ return false;
+- if (pud_sect(pud))
+- return true;
++ if (pud_leaf(pud))
++ return pud_valid(pud);
+
+ pmdp = pmd_offset(pudp, addr);
+ pmd = READ_ONCE(*pmdp);
+ if (pmd_none(pmd))
+ return false;
+- if (pmd_sect(pmd))
+- return true;
++ if (pmd_leaf(pmd))
++ return pmd_valid(pmd);
+
+ ptep = pte_offset_kernel(pmdp, addr);
+ return pte_valid(__ptep_get(ptep));
+--- a/arch/arm64/mm/trans_pgd.c
++++ b/arch/arm64/mm/trans_pgd.c
+@@ -31,36 +31,6 @@ static void *trans_alloc(struct trans_pg
+ return info->trans_alloc_page(info->trans_alloc_arg);
+ }
+
+-static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
+-{
+- pte_t pte = __ptep_get(src_ptep);
+-
+- if (pte_valid(pte)) {
+- /*
+- * Resume will overwrite areas that may be marked
+- * read only (code, rodata). Clear the RDONLY bit from
+- * the temporary mappings we use during restore.
+- */
+- __set_pte(dst_ptep, pte_mkwrite_novma(pte));
+- } else if (!pte_none(pte)) {
+- /*
+- * debug_pagealloc will removed the PTE_VALID bit if
+- * the page isn't in use by the resume kernel. It may have
+- * been in use by the original kernel, in which case we need
+- * to put it back in our copy to do the restore.
+- *
+- * Other cases include kfence / vmalloc / memfd_secret which
+- * may call `set_direct_map_invalid_noflush()`.
+- *
+- * Before marking this entry valid, check the pfn should
+- * be mapped.
+- */
+- BUG_ON(!pfn_valid(pte_pfn(pte)));
+-
+- __set_pte(dst_ptep, pte_mkvalid(pte_mkwrite_novma(pte)));
+- }
+-}
+-
+ static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp,
+ pmd_t *src_pmdp, unsigned long start, unsigned long end)
+ {
+@@ -76,7 +46,11 @@ static int copy_pte(struct trans_pgd_inf
+
+ src_ptep = pte_offset_kernel(src_pmdp, start);
+ do {
+- _copy_pte(dst_ptep, src_ptep, addr);
++ pte_t pte = __ptep_get(src_ptep);
++
++ if (pte_none(pte))
++ continue;
++ __set_pte(dst_ptep, pte_mkvalid_k(pte_mkwrite_novma(pte)));
+ } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end);
+
+ return 0;
+@@ -109,8 +83,7 @@ static int copy_pmd(struct trans_pgd_inf
+ if (copy_pte(info, dst_pmdp, src_pmdp, addr, next))
+ return -ENOMEM;
+ } else {
+- set_pmd(dst_pmdp,
+- __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY));
++ set_pmd(dst_pmdp, pmd_mkvalid_k(pmd_mkwrite_novma(pmd)));
+ }
+ } while (dst_pmdp++, src_pmdp++, addr = next, addr != end);
+
+@@ -145,8 +118,7 @@ static int copy_pud(struct trans_pgd_inf
+ if (copy_pmd(info, dst_pudp, src_pudp, addr, next))
+ return -ENOMEM;
+ } else {
+- set_pud(dst_pudp,
+- __pud(pud_val(pud) & ~PUD_SECT_RDONLY));
++ set_pud(dst_pudp, pud_mkvalid_k(pud_mkwrite_novma(pud)));
+ }
+ } while (dst_pudp++, src_pudp++, addr = next, addr != end);
+
--- /dev/null
+From f08fe8891c3eeb63b73f9f1f6d97aa629c821579 Mon Sep 17 00:00:00 2001
+From: Zhihao Cheng <chengzhihao1@huawei.com>
+Date: Fri, 30 Jan 2026 11:48:53 +0800
+Subject: dcache: Limit the minimal number of buckets to two
+
+From: Zhihao Cheng <chengzhihao1@huawei.com>
+
+commit f08fe8891c3eeb63b73f9f1f6d97aa629c821579 upstream.
+
+There is an OOB read problem on dentry_hashtable when user sets
+'dhash_entries=1':
+ BUG: unable to handle page fault for address: ffff888b30b774b0
+ #PF: supervisor read access in kernel mode
+ #PF: error_code(0x0000) - not-present page
+ Oops: Oops: 0000 [#1] SMP PTI
+ RIP: 0010:__d_lookup+0x56/0x120
+ Call Trace:
+ d_lookup.cold+0x16/0x5d
+ lookup_dcache+0x27/0xf0
+ lookup_one_qstr_excl+0x2a/0x180
+ start_dirop+0x55/0xa0
+ simple_start_creating+0x8d/0xa0
+ debugfs_start_creating+0x8c/0x180
+ debugfs_create_dir+0x1d/0x1c0
+ pinctrl_init+0x6d/0x140
+ do_one_initcall+0x6d/0x3d0
+ kernel_init_freeable+0x39f/0x460
+ kernel_init+0x2a/0x260
+
+There will be only one bucket in dentry_hashtable when dhash_entries is
+set to one, and d_hash_shift is calculated as 32 by dcache_init(). Then,
+the following process will access more than one bucket (whose memory
+region is not allocated) in dentry_hashtable:
+ d_lookup
+ b = d_hash(hash)
+ dentry_hashtable + ((u32)hashlen >> d_hash_shift)
+ // The C standard defines the behavior of right shift amounts
+ // exceeding the bit width of the operand as undefined. The
+ // result of '(u32)hashlen >> d_hash_shift' becomes 'hashlen',
+ // so 'b' will point to an unallocated memory region.
+ hlist_bl_for_each_entry_rcu(b)
+ hlist_bl_first_rcu(head)
+ h->first // read OOB!
+
+Fix it by limiting the minimal number of dentry_hashtable buckets to two,
+so that 'd_hash_shift' won't exceed the bit width of type u32.
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
+Link: https://patch.msgid.link/20260130034853.215819-1-chengzhihao1@huawei.com
+Reviewed-by: Yang Erkun <yangerkun@huawei.com>
+Signed-off-by: Christian Brauner <brauner@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/dcache.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -3207,7 +3207,7 @@ static void __init dcache_init_early(voi
+ HASH_EARLY | HASH_ZERO,
+ &d_hash_shift,
+ NULL,
+- 0,
++ 2,
+ 0);
+ d_hash_shift = 32 - d_hash_shift;
+
+@@ -3238,7 +3238,7 @@ static void __init dcache_init(void)
+ HASH_ZERO,
+ &d_hash_shift,
+ NULL,
+- 0,
++ 2,
+ 0);
+ d_hash_shift = 32 - d_hash_shift;
+
--- /dev/null
+From 0beba407d4585a15b0dc09f2064b5b3ddcb0e857 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Sun, 29 Mar 2026 08:30:49 -0700
+Subject: Docs/admin-guide/mm/damon/reclaim: warn commit_inputs vs param updates race
+
+From: SeongJae Park <sj@kernel.org>
+
+commit 0beba407d4585a15b0dc09f2064b5b3ddcb0e857 upstream.
+
+Patch series "Docs/admin-guide/mm/damon: warn commit_inputs vs other
+params race".
+
+Writing 'Y' to the commit_inputs parameter of DAMON_RECLAIM and
+DAMON_LRU_SORT and then writing other parameters before the commit_inputs
+request is completely processed can cause race conditions. While the
+consequences can be bad, the documentation does not clearly describe
+that. Add clear warnings.
+
+The issue was discovered [1,2] by sashiko.
+
+
+This patch (of 2):
+
+DAMON_RECLAIM handles the commit_inputs request inside the kdamond
+thread, reading the module parameters. If the user updates the module
+parameters while the kdamond thread is reading them, races can happen.
+To avoid this, the commit_inputs parameter shows whether the request is
+still in progress, assuming users won't update parameters in the middle
+of the work. Some users might ignore that. Add a warning about the
+behavior.
+
+The issue was discovered in [1] by sashiko.
+
+Link: https://lore.kernel.org/20260329153052.46657-2-sj@kernel.org
+Link: https://lore.kernel.org/20260319161620.189392-3-objecting@objecting.org [1]
+Link: https://lore.kernel.org/20260319161620.189392-2-objecting@objecting.org [2]
+Fixes: 81a84182c343 ("Docs/admin-guide/mm/damon/reclaim: document 'commit_inputs' parameter")
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Cc: <stable@vger.kernel.org> # 5.19.x
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/admin-guide/mm/damon/reclaim.rst | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/Documentation/admin-guide/mm/damon/reclaim.rst
++++ b/Documentation/admin-guide/mm/damon/reclaim.rst
+@@ -71,6 +71,10 @@ of parametrs except ``enabled`` again.
+ parameter is set as ``N``. If invalid parameters are found while the
+ re-reading, DAMON_RECLAIM will be disabled.
+
++Once ``Y`` is written to this parameter, the user must not write to any
++parameters until reading ``commit_inputs`` again returns ``N``. If users
++violate this rule, the kernel may exhibit undefined behavior.
++
+ min_age
+ -------
+
--- /dev/null
+From a31e4518bec70333a0a98f2946a12b53b45fe5b9 Mon Sep 17 00:00:00 2001
+From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Date: Thu, 9 Apr 2026 15:23:46 +0200
+Subject: fbdev: udlfb: avoid divide-by-zero on FBIOPUT_VSCREENINFO
+
+From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+commit a31e4518bec70333a0a98f2946a12b53b45fe5b9 upstream.
+
+Much like commit 19f953e74356 ("fbdev: fb_pm2fb: Avoid potential divide
+by zero error"), we also need to prevent the same crash from happening
+in the udlfb driver, as it divides by pixclock directly and will crash
+when pixclock is zero.
+
+Cc: Bernie Thompson <bernie@plugable.com>
+Cc: Helge Deller <deller@gmx.de>
+Fixes: 59277b679f8b ("Staging: udlfb: add dynamic modeset support")
+Assisted-by: gregkh_clanker_t1000
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Helge Deller <deller@gmx.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/video/fbdev/udlfb.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/drivers/video/fbdev/udlfb.c
++++ b/drivers/video/fbdev/udlfb.c
+@@ -1018,6 +1018,9 @@ static int dlfb_ops_check_var(struct fb_
+ struct fb_videomode mode;
+ struct dlfb_data *dlfb = info->par;
+
++ if (!var->pixclock)
++ return -EINVAL;
++
+ /* set device-specific elements of var unrelated to mode */
+ dlfb_var_color_format(var);
+
--- /dev/null
+From stable+bounces-236080-greg=kroah.com@vger.kernel.org Mon Apr 13 15:07:41 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 13 Apr 2026 09:01:24 -0400
+Subject: KVM: Remove subtle "struct kvm_stats_desc" pseudo-overlay
+To: stable@vger.kernel.org
+Cc: Sean Christopherson <seanjc@google.com>, "Gustavo A. R. Silva" <gustavoars@kernel.org>, Marc Zyngier <maz@kernel.org>, Christian Borntraeger <borntraeger@linux.ibm.com>, Anup Patel <anup@brainfault.org>, Bibo Mao <maobibo@loongson.cn>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260413130125.2879436-1-sashal@kernel.org>
+
+From: Sean Christopherson <seanjc@google.com>
+
+[ Upstream commit da142f3d373a6ddaca0119615a8db2175ddc4121 ]
+
+Remove KVM's internal pseudo-overlay of kvm_stats_desc, which subtly
+aliases the flexible name[] in the uAPI definition with a fixed-size array
+of the same name. The unusual embedded structure results in compiler
+warnings due to -Wflex-array-member-not-at-end, and also necessitates an
+extra level of dereferencing in KVM. To avoid the "overlay", define the
+uAPI structure to have a fixed-size name when building for the kernel.
+
+Opportunistically clean up the indentation for the stats macros, and
+replace spaces with tabs.
+
+No functional change intended.
+
+Reported-by: Gustavo A. R. Silva <gustavoars@kernel.org>
+Closes: https://lore.kernel.org/all/aPfNKRpLfhmhYqfP@kspp
+Acked-by: Marc Zyngier <maz@kernel.org>
+Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
+[..]
+Acked-by: Anup Patel <anup@brainfault.org>
+Reviewed-by: Bibo Mao <maobibo@loongson.cn>
+Acked-by: Gustavo A. R. Silva <gustavoars@kernel.org>
+Link: https://patch.msgid.link/20251205232655.445294-1-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Stable-dep-of: 2619da73bb2f ("KVM: x86: Use __DECLARE_FLEX_ARRAY() for UAPI structures with VLAs")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/kvm/guest.c | 4 +-
+ arch/loongarch/kvm/vcpu.c | 2 -
+ arch/loongarch/kvm/vm.c | 2 -
+ arch/mips/kvm/mips.c | 4 +-
+ arch/powerpc/kvm/book3s.c | 4 +-
+ arch/powerpc/kvm/booke.c | 4 +-
+ arch/riscv/kvm/vcpu.c | 2 -
+ arch/riscv/kvm/vm.c | 2 -
+ arch/s390/kvm/kvm-s390.c | 4 +-
+ arch/x86/kvm/x86.c | 4 +-
+ include/linux/kvm_host.h | 83 +++++++++++++++++++---------------------------
+ include/uapi/linux/kvm.h | 8 ++++
+ virt/kvm/binary_stats.c | 2 -
+ virt/kvm/kvm_main.c | 20 +++++------
+ 14 files changed, 70 insertions(+), 75 deletions(-)
+
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -29,7 +29,7 @@
+
+ #include "trace.h"
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS()
+ };
+
+@@ -42,7 +42,7 @@ const struct kvm_stats_header kvm_vm_sta
+ sizeof(kvm_vm_stats_desc),
+ };
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, hvc_exit_stat),
+ STATS_DESC_COUNTER(VCPU, wfe_exit_stat),
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -13,7 +13,7 @@
+ #define CREATE_TRACE_POINTS
+ #include "trace.h"
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, int_exits),
+ STATS_DESC_COUNTER(VCPU, idle_exits),
+--- a/arch/loongarch/kvm/vm.c
++++ b/arch/loongarch/kvm/vm.c
+@@ -9,7 +9,7 @@
+ #include <asm/kvm_eiointc.h>
+ #include <asm/kvm_pch_pic.h>
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS(),
+ STATS_DESC_ICOUNTER(VM, pages),
+ STATS_DESC_ICOUNTER(VM, hugepages),
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -38,7 +38,7 @@
+ #define VECTORSPACING 0x100 /* for EI/VI mode */
+ #endif
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS()
+ };
+
+@@ -51,7 +51,7 @@ const struct kvm_stats_header kvm_vm_sta
+ sizeof(kvm_vm_stats_desc),
+ };
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, wait_exits),
+ STATS_DESC_COUNTER(VCPU, cache_exits),
+--- a/arch/powerpc/kvm/book3s.c
++++ b/arch/powerpc/kvm/book3s.c
+@@ -38,7 +38,7 @@
+
+ /* #define EXIT_DEBUG */
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS(),
+ STATS_DESC_ICOUNTER(VM, num_2M_pages),
+ STATS_DESC_ICOUNTER(VM, num_1G_pages)
+@@ -53,7 +53,7 @@ const struct kvm_stats_header kvm_vm_sta
+ sizeof(kvm_vm_stats_desc),
+ };
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, sum_exits),
+ STATS_DESC_COUNTER(VCPU, mmio_exits),
+--- a/arch/powerpc/kvm/booke.c
++++ b/arch/powerpc/kvm/booke.c
+@@ -36,7 +36,7 @@
+
+ unsigned long kvmppc_booke_handlers;
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS(),
+ STATS_DESC_ICOUNTER(VM, num_2M_pages),
+ STATS_DESC_ICOUNTER(VM, num_1G_pages)
+@@ -51,7 +51,7 @@ const struct kvm_stats_header kvm_vm_sta
+ sizeof(kvm_vm_stats_desc),
+ };
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, sum_exits),
+ STATS_DESC_COUNTER(VCPU, mmio_exits),
+--- a/arch/riscv/kvm/vcpu.c
++++ b/arch/riscv/kvm/vcpu.c
+@@ -24,7 +24,7 @@
+ #define CREATE_TRACE_POINTS
+ #include "trace.h"
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, ecall_exit_stat),
+ STATS_DESC_COUNTER(VCPU, wfi_exit_stat),
+--- a/arch/riscv/kvm/vm.c
++++ b/arch/riscv/kvm/vm.c
+@@ -13,7 +13,7 @@
+ #include <linux/kvm_host.h>
+ #include <asm/kvm_mmu.h>
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS()
+ };
+ static_assert(ARRAY_SIZE(kvm_vm_stats_desc) ==
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -64,7 +64,7 @@
+ #define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
+ (KVM_MAX_VCPUS + LOCAL_IRQS))
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS(),
+ STATS_DESC_COUNTER(VM, inject_io),
+ STATS_DESC_COUNTER(VM, inject_float_mchk),
+@@ -90,7 +90,7 @@ const struct kvm_stats_header kvm_vm_sta
+ sizeof(kvm_vm_stats_desc),
+ };
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, exit_userspace),
+ STATS_DESC_COUNTER(VCPU, exit_null),
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -242,7 +242,7 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_ip
+ bool __read_mostly enable_device_posted_irqs = true;
+ EXPORT_SYMBOL_FOR_KVM_INTERNAL(enable_device_posted_irqs);
+
+-const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
++const struct kvm_stats_desc kvm_vm_stats_desc[] = {
+ KVM_GENERIC_VM_STATS(),
+ STATS_DESC_COUNTER(VM, mmu_shadow_zapped),
+ STATS_DESC_COUNTER(VM, mmu_pte_write),
+@@ -268,7 +268,7 @@ const struct kvm_stats_header kvm_vm_sta
+ sizeof(kvm_vm_stats_desc),
+ };
+
+-const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
++const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
+ KVM_GENERIC_VCPU_STATS(),
+ STATS_DESC_COUNTER(VCPU, pf_taken),
+ STATS_DESC_COUNTER(VCPU, pf_fixed),
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1932,56 +1932,43 @@ enum kvm_stat_kind {
+
+ struct kvm_stat_data {
+ struct kvm *kvm;
+- const struct _kvm_stats_desc *desc;
++ const struct kvm_stats_desc *desc;
+ enum kvm_stat_kind kind;
+ };
+
+-struct _kvm_stats_desc {
+- struct kvm_stats_desc desc;
+- char name[KVM_STATS_NAME_SIZE];
+-};
+-
+-#define STATS_DESC_COMMON(type, unit, base, exp, sz, bsz) \
+- .flags = type | unit | base | \
+- BUILD_BUG_ON_ZERO(type & ~KVM_STATS_TYPE_MASK) | \
+- BUILD_BUG_ON_ZERO(unit & ~KVM_STATS_UNIT_MASK) | \
+- BUILD_BUG_ON_ZERO(base & ~KVM_STATS_BASE_MASK), \
+- .exponent = exp, \
+- .size = sz, \
++#define STATS_DESC_COMMON(type, unit, base, exp, sz, bsz) \
++ .flags = type | unit | base | \
++ BUILD_BUG_ON_ZERO(type & ~KVM_STATS_TYPE_MASK) | \
++ BUILD_BUG_ON_ZERO(unit & ~KVM_STATS_UNIT_MASK) | \
++ BUILD_BUG_ON_ZERO(base & ~KVM_STATS_BASE_MASK), \
++ .exponent = exp, \
++ .size = sz, \
+ .bucket_size = bsz
+
+-#define VM_GENERIC_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
+- { \
+- { \
+- STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
+- .offset = offsetof(struct kvm_vm_stat, generic.stat) \
+- }, \
+- .name = #stat, \
+- }
+-#define VCPU_GENERIC_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
+- { \
+- { \
+- STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
+- .offset = offsetof(struct kvm_vcpu_stat, generic.stat) \
+- }, \
+- .name = #stat, \
+- }
+-#define VM_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
+- { \
+- { \
+- STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
+- .offset = offsetof(struct kvm_vm_stat, stat) \
+- }, \
+- .name = #stat, \
+- }
+-#define VCPU_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
+- { \
+- { \
+- STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
+- .offset = offsetof(struct kvm_vcpu_stat, stat) \
+- }, \
+- .name = #stat, \
+- }
++#define VM_GENERIC_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
++{ \
++ STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
++ .offset = offsetof(struct kvm_vm_stat, generic.stat), \
++ .name = #stat, \
++}
++#define VCPU_GENERIC_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
++{ \
++ STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
++ .offset = offsetof(struct kvm_vcpu_stat, generic.stat), \
++ .name = #stat, \
++}
++#define VM_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
++{ \
++ STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
++ .offset = offsetof(struct kvm_vm_stat, stat), \
++ .name = #stat, \
++}
++#define VCPU_STATS_DESC(stat, type, unit, base, exp, sz, bsz) \
++{ \
++ STATS_DESC_COMMON(type, unit, base, exp, sz, bsz), \
++ .offset = offsetof(struct kvm_vcpu_stat, stat), \
++ .name = #stat, \
++}
+ /* SCOPE: VM, VM_GENERIC, VCPU, VCPU_GENERIC */
+ #define STATS_DESC(SCOPE, stat, type, unit, base, exp, sz, bsz) \
+ SCOPE##_STATS_DESC(stat, type, unit, base, exp, sz, bsz)
+@@ -2058,7 +2045,7 @@ struct _kvm_stats_desc {
+ STATS_DESC_IBOOLEAN(VCPU_GENERIC, blocking)
+
+ ssize_t kvm_stats_read(char *id, const struct kvm_stats_header *header,
+- const struct _kvm_stats_desc *desc,
++ const struct kvm_stats_desc *desc,
+ void *stats, size_t size_stats,
+ char __user *user_buffer, size_t size, loff_t *offset);
+
+@@ -2103,9 +2090,9 @@ static inline void kvm_stats_log_hist_up
+
+
+ extern const struct kvm_stats_header kvm_vm_stats_header;
+-extern const struct _kvm_stats_desc kvm_vm_stats_desc[];
++extern const struct kvm_stats_desc kvm_vm_stats_desc[];
+ extern const struct kvm_stats_header kvm_vcpu_stats_header;
+-extern const struct _kvm_stats_desc kvm_vcpu_stats_desc[];
++extern const struct kvm_stats_desc kvm_vcpu_stats_desc[];
+
+ #ifdef CONFIG_KVM_GENERIC_MMU_NOTIFIER
+ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq)
+--- a/include/uapi/linux/kvm.h
++++ b/include/uapi/linux/kvm.h
+@@ -14,6 +14,10 @@
+ #include <linux/ioctl.h>
+ #include <asm/kvm.h>
+
++#ifdef __KERNEL__
++#include <linux/kvm_types.h>
++#endif
++
+ #define KVM_API_VERSION 12
+
+ /*
+@@ -1568,7 +1572,11 @@ struct kvm_stats_desc {
+ __u16 size;
+ __u32 offset;
+ __u32 bucket_size;
++#ifdef __KERNEL__
++ char name[KVM_STATS_NAME_SIZE];
++#else
+ char name[];
++#endif
+ };
+
+ #define KVM_GET_STATS_FD _IO(KVMIO, 0xce)
+--- a/virt/kvm/binary_stats.c
++++ b/virt/kvm/binary_stats.c
+@@ -50,7 +50,7 @@
+ * Return: the number of bytes that has been successfully read
+ */
+ ssize_t kvm_stats_read(char *id, const struct kvm_stats_header *header,
+- const struct _kvm_stats_desc *desc,
++ const struct kvm_stats_desc *desc,
+ void *stats, size_t size_stats,
+ char __user *user_buffer, size_t size, loff_t *offset)
+ {
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -982,9 +982,9 @@ static void kvm_free_memslots(struct kvm
+ kvm_free_memslot(kvm, memslot);
+ }
+
+-static umode_t kvm_stats_debugfs_mode(const struct _kvm_stats_desc *pdesc)
++static umode_t kvm_stats_debugfs_mode(const struct kvm_stats_desc *desc)
+ {
+- switch (pdesc->desc.flags & KVM_STATS_TYPE_MASK) {
++ switch (desc->flags & KVM_STATS_TYPE_MASK) {
+ case KVM_STATS_TYPE_INSTANT:
+ return 0444;
+ case KVM_STATS_TYPE_CUMULATIVE:
+@@ -1019,7 +1019,7 @@ static int kvm_create_vm_debugfs(struct
+ struct dentry *dent;
+ char dir_name[ITOA_MAX_LEN * 2];
+ struct kvm_stat_data *stat_data;
+- const struct _kvm_stats_desc *pdesc;
++ const struct kvm_stats_desc *pdesc;
+ int i, ret = -ENOMEM;
+ int kvm_debugfs_num_entries = kvm_vm_stats_header.num_desc +
+ kvm_vcpu_stats_header.num_desc;
+@@ -6160,11 +6160,11 @@ static int kvm_stat_data_get(void *data,
+ switch (stat_data->kind) {
+ case KVM_STAT_VM:
+ r = kvm_get_stat_per_vm(stat_data->kvm,
+- stat_data->desc->desc.offset, val);
++ stat_data->desc->offset, val);
+ break;
+ case KVM_STAT_VCPU:
+ r = kvm_get_stat_per_vcpu(stat_data->kvm,
+- stat_data->desc->desc.offset, val);
++ stat_data->desc->offset, val);
+ break;
+ }
+
+@@ -6182,11 +6182,11 @@ static int kvm_stat_data_clear(void *dat
+ switch (stat_data->kind) {
+ case KVM_STAT_VM:
+ r = kvm_clear_stat_per_vm(stat_data->kvm,
+- stat_data->desc->desc.offset);
++ stat_data->desc->offset);
+ break;
+ case KVM_STAT_VCPU:
+ r = kvm_clear_stat_per_vcpu(stat_data->kvm,
+- stat_data->desc->desc.offset);
++ stat_data->desc->offset);
+ break;
+ }
+
+@@ -6334,7 +6334,7 @@ static void kvm_uevent_notify_change(uns
+ static void kvm_init_debug(void)
+ {
+ const struct file_operations *fops;
+- const struct _kvm_stats_desc *pdesc;
++ const struct kvm_stats_desc *pdesc;
+ int i;
+
+ kvm_debugfs_dir = debugfs_create_dir("kvm", NULL);
+@@ -6347,7 +6347,7 @@ static void kvm_init_debug(void)
+ fops = &vm_stat_readonly_fops;
+ debugfs_create_file(pdesc->name, kvm_stats_debugfs_mode(pdesc),
+ kvm_debugfs_dir,
+- (void *)(long)pdesc->desc.offset, fops);
++ (void *)(long)pdesc->offset, fops);
+ }
+
+ for (i = 0; i < kvm_vcpu_stats_header.num_desc; ++i) {
+@@ -6358,7 +6358,7 @@ static void kvm_init_debug(void)
+ fops = &vcpu_stat_readonly_fops;
+ debugfs_create_file(pdesc->name, kvm_stats_debugfs_mode(pdesc),
+ kvm_debugfs_dir,
+- (void *)(long)pdesc->desc.offset, fops);
++ (void *)(long)pdesc->offset, fops);
+ }
+ }
+
--- /dev/null
+From 25a642b6abc98bbbabbf2baef9fc498bbea6aee6 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Tue, 10 Mar 2026 16:48:09 -0700
+Subject: KVM: selftests: Remove duplicate LAUNCH_UPDATE_VMSA call in SEV-ES migrate test
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 25a642b6abc98bbbabbf2baef9fc498bbea6aee6 upstream.
+
+Drop the explicit KVM_SEV_LAUNCH_UPDATE_VMSA call when creating an SEV-ES
+VM in the SEV migration test, as sev_vm_create() automatically updates the
+VMSA pages for SEV-ES guests. The only reason the duplicate call doesn't
+cause visible problems is because the test doesn't actually try to run the
+vCPUs. That will change when KVM adds a check to prevent userspace from
+re-launching a VMSA (which corrupts the VMSA page due to KVM writing
+encrypted private memory).
+
+Fixes: 69f8e15ab61f ("KVM: selftests: Use the SEV library APIs in the intra-host migration test")
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260310234829.2608037-2-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/kvm/x86/sev_migrate_tests.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+--- a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
++++ b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c
+@@ -36,8 +36,6 @@ static struct kvm_vm *sev_vm_create(bool
+
+ sev_vm_launch(vm, es ? SEV_POLICY_ES : 0);
+
+- if (es)
+- vm_sev_ioctl(vm, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
+ return vm;
+ }
+
--- /dev/null
+From 624bf3440d7214b62c22d698a0a294323f331d5d Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Tue, 10 Mar 2026 16:48:12 -0700
+Subject: KVM: SEV: Disallow LAUNCH_FINISH if vCPUs are actively being created
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 624bf3440d7214b62c22d698a0a294323f331d5d upstream.
+
+Reject LAUNCH_FINISH for SEV-ES and SNP VMs if KVM is actively creating
+one or more vCPUs, as KVM needs to process and encrypt each vCPU's VMSA.
+Letting userspace create vCPUs while LAUNCH_FINISH is in-progress is
+"fine", at least in the current code base, as kvm_for_each_vcpu() operates
+on online_vcpus, LAUNCH_FINISH (all SEV+ sub-ioctls) holds kvm->mutex, and
+fully onlining a vCPU in kvm_vm_ioctl_create_vcpu() is done under
+kvm->mutex. I.e. there's no difference between an in-progress vCPU and a
+vCPU that is created entirely after LAUNCH_FINISH.
+
+However, given that concurrent LAUNCH_FINISH and vCPU creation can't
+possibly work (for any reasonable definition of "work"), since userspace
+can't guarantee whether a particular vCPU will be encrypted or not,
+disallow the combination as a hardening measure, to reduce the probability
+of introducing bugs in the future, and to avoid having to reason about the
+safety of future changes related to LAUNCH_FINISH.
+
+Cc: Jethro Beekman <jethro@fortanix.com>
+Closes: https://lore.kernel.org/all/b31f7c6e-2807-4662-bcdd-eea2c1e132fa@fortanix.com
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260310234829.2608037-5-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/sev.c | 10 ++++++++--
+ include/linux/kvm_host.h | 7 +++++++
+ 2 files changed, 15 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1019,6 +1019,9 @@ static int sev_launch_update_vmsa(struct
+ if (!sev_es_guest(kvm))
+ return -ENOTTY;
+
++ if (kvm_is_vcpu_creation_in_progress(kvm))
++ return -EBUSY;
++
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ ret = mutex_lock_killable(&vcpu->mutex);
+ if (ret)
+@@ -2039,8 +2042,8 @@ static int sev_check_source_vcpus(struct
+ struct kvm_vcpu *src_vcpu;
+ unsigned long i;
+
+- if (src->created_vcpus != atomic_read(&src->online_vcpus) ||
+- dst->created_vcpus != atomic_read(&dst->online_vcpus))
++ if (kvm_is_vcpu_creation_in_progress(src) ||
++ kvm_is_vcpu_creation_in_progress(dst))
+ return -EBUSY;
+
+ if (!sev_es_guest(src))
+@@ -2446,6 +2449,9 @@ static int snp_launch_update_vmsa(struct
+ unsigned long i;
+ int ret;
+
++ if (kvm_is_vcpu_creation_in_progress(kvm))
++ return -EBUSY;
++
+ data.gctx_paddr = __psp_pa(sev->snp_context);
+ data.page_type = SNP_PAGE_TYPE_VMSA;
+
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1030,6 +1030,13 @@ static inline struct kvm_vcpu *kvm_get_v
+ return NULL;
+ }
+
++static inline bool kvm_is_vcpu_creation_in_progress(struct kvm *kvm)
++{
++ lockdep_assert_held(&kvm->lock);
++
++ return kvm->created_vcpus != atomic_read(&kvm->online_vcpus);
++}
++
+ void kvm_destroy_vcpus(struct kvm *kvm);
+
+ int kvm_trylock_all_vcpus(struct kvm *kvm);
--- /dev/null
+From 8acffeef5ef720c35e513e322ab08e32683f32f2 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Thu, 12 Mar 2026 17:32:58 -0700
+Subject: KVM: SEV: Drop WARN on large size for KVM_MEMORY_ENCRYPT_REG_REGION
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 8acffeef5ef720c35e513e322ab08e32683f32f2 upstream.
+
+Drop the WARN in sev_pin_memory() on npages overflowing an int, as the
+WARN is comically trivial to trigger from userspace, e.g. by doing:
+
+ struct kvm_enc_region range = {
+ .addr = 0,
+ .size = -1ul,
+ };
+
+ __vm_ioctl(vm, KVM_MEMORY_ENCRYPT_REG_REGION, &range);
+
+Note, the checks in sev_mem_enc_register_region() that presumably exist to
+verify the incoming address+size are completely worthless, as both "addr"
+and "size" are u64s and SEV is 64-bit only, i.e. they _can't_ be greater
+than ULONG_MAX. That wart will be cleaned up in the near future.
+
+ if (range->addr > ULONG_MAX || range->size > ULONG_MAX)
+ return -EINVAL;
+
+Opportunistically add a comment to explain why the code calculates the
+number of pages the "hard" way, e.g. instead of just shifting @ulen.
+
+Fixes: 78824fabc72e ("KVM: SVM: fix svn_pin_memory()'s use of get_user_pages_fast()")
+Cc: stable@vger.kernel.org
+Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
+Tested-by: Liam Merwick <liam.merwick@oracle.com>
+Link: https://patch.msgid.link/20260313003302.3136111-2-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/sev.c | 11 +++++++----
+ 1 file changed, 7 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -679,10 +679,16 @@ static struct page **sev_pin_memory(stru
+ if (ulen == 0 || uaddr + ulen < uaddr)
+ return ERR_PTR(-EINVAL);
+
+- /* Calculate number of pages. */
++ /*
++ * Calculate the number of pages that need to be pinned to cover the
++ * entire range. Note! This isn't simply ulen >> PAGE_SHIFT, as KVM
++ * doesn't require the incoming address+size to be page aligned!
++ */
+ first = (uaddr & PAGE_MASK) >> PAGE_SHIFT;
+ last = ((uaddr + ulen - 1) & PAGE_MASK) >> PAGE_SHIFT;
+ npages = (last - first + 1);
++ if (npages > INT_MAX)
++ return ERR_PTR(-EINVAL);
+
+ locked = sev->pages_locked + npages;
+ lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+@@ -691,9 +697,6 @@ static struct page **sev_pin_memory(stru
+ return ERR_PTR(-ENOMEM);
+ }
+
+- if (WARN_ON_ONCE(npages > INT_MAX))
+- return ERR_PTR(-EINVAL);
+-
+ /* Avoid using vmalloc for smaller buffers. */
+ size = npages * sizeof(struct page *);
+ if (size > PAGE_SIZE)
--- /dev/null
+From cb923ee6a80f4e604e6242a4702b59251e61a380 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Tue, 10 Mar 2026 16:48:13 -0700
+Subject: KVM: SEV: Lock all vCPUs when synchronizing VMSAs for SNP launch finish
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit cb923ee6a80f4e604e6242a4702b59251e61a380 upstream.
+
+Lock all vCPUs when synchronizing and encrypting VMSAs for SNP guests, as
+allowing userspace to manipulate and/or run a vCPU while its state is being
+synchronized would at best corrupt vCPU state, and at worst crash the host
+kernel.
+
+Opportunistically assert that vcpu->mutex is held when synchronizing its
+VMSA (the SEV-ES path already locks vCPUs).
+
+Fixes: ad27ce155566 ("KVM: SEV: Add KVM_SEV_SNP_LAUNCH_FINISH command")
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260310234829.2608037-6-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/sev.c | 16 ++++++++++++----
+ 1 file changed, 12 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -871,6 +871,8 @@ static int sev_es_sync_vmsa(struct vcpu_
+ u8 *d;
+ int i;
+
++ lockdep_assert_held(&vcpu->mutex);
++
+ if (vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
+@@ -2452,6 +2454,10 @@ static int snp_launch_update_vmsa(struct
+ if (kvm_is_vcpu_creation_in_progress(kvm))
+ return -EBUSY;
+
++ ret = kvm_lock_all_vcpus(kvm);
++ if (ret)
++ return ret;
++
+ data.gctx_paddr = __psp_pa(sev->snp_context);
+ data.page_type = SNP_PAGE_TYPE_VMSA;
+
+@@ -2461,12 +2467,12 @@ static int snp_launch_update_vmsa(struct
+
+ ret = sev_es_sync_vmsa(svm);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Transition the VMSA page to a firmware state. */
+ ret = rmp_make_private(pfn, INITIAL_VMSA_GPA, PG_LEVEL_4K, sev->asid, true);
+ if (ret)
+- return ret;
++ goto out;
+
+ /* Issue the SNP command to encrypt the VMSA */
+ data.address = __sme_pa(svm->sev_es.vmsa);
+@@ -2475,7 +2481,7 @@ static int snp_launch_update_vmsa(struct
+ if (ret) {
+ snp_page_reclaim(kvm, pfn);
+
+- return ret;
++ goto out;
+ }
+
+ svm->vcpu.arch.guest_state_protected = true;
+@@ -2489,7 +2495,9 @@ static int snp_launch_update_vmsa(struct
+ svm_enable_lbrv(vcpu);
+ }
+
+- return 0;
++out:
++ kvm_unlock_all_vcpus(kvm);
++ return ret;
+ }
+
+ static int snp_launch_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
--- /dev/null
+From b6408b6cec5df76a165575777800ef2aba12b109 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Tue, 10 Mar 2026 16:48:11 -0700
+Subject: KVM: SEV: Protect *all* of sev_mem_enc_register_region() with kvm->lock
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit b6408b6cec5df76a165575777800ef2aba12b109 upstream.
+
+Take and hold kvm->lock before checking sev_guest() in
+sev_mem_enc_register_region(), as sev_guest() isn't stable unless kvm->lock
+is held (or KVM can guarantee KVM_SEV_INIT{2} has completed and can't
+rollback state). If KVM_SEV_INIT{2} fails, KVM can end up trying to add to
+a not-yet-initialized sev->regions_list, e.g. triggering a #GP
+
+ Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
+ KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
+ CPU: 110 UID: 0 PID: 72717 Comm: syz.15.11462 Tainted: G U W O 6.16.0-smp-DEV #1 NONE
+ Tainted: [U]=USER, [W]=WARN, [O]=OOT_MODULE
+ Hardware name: Google, Inc. Arcadia_IT_80/Arcadia_IT_80, BIOS 12.52.0-0 10/28/2024
+ RIP: 0010:sev_mem_enc_register_region+0x3f0/0x4f0 ../include/linux/list.h:83
+ Code: <41> 80 3c 04 00 74 08 4c 89 ff e8 f1 c7 a2 00 49 39 ed 0f 84 c6 00
+ RSP: 0018:ffff88838647fbb8 EFLAGS: 00010256
+ RAX: dffffc0000000000 RBX: 1ffff92015cf1e0b RCX: dffffc0000000000
+ RDX: 0000000000000000 RSI: 0000000000001000 RDI: ffff888367870000
+ RBP: ffffc900ae78f050 R08: ffffea000d9e0007 R09: 1ffffd4001b3c000
+ R10: dffffc0000000000 R11: fffff94001b3c001 R12: 0000000000000000
+ R13: ffff8982ab0bde00 R14: ffffc900ae78f058 R15: 0000000000000000
+ FS: 00007f34e9dc66c0(0000) GS:ffff89ee64d33000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 00007fe180adef98 CR3: 000000047210e000 CR4: 0000000000350ef0
+ Call Trace:
+ <TASK>
+ kvm_arch_vm_ioctl+0xa72/0x1240 ../arch/x86/kvm/x86.c:7371
+ kvm_vm_ioctl+0x649/0x990 ../virt/kvm/kvm_main.c:5363
+ __se_sys_ioctl+0x101/0x170 ../fs/ioctl.c:51
+ do_syscall_x64 ../arch/x86/entry/syscall_64.c:63 [inline]
+ do_syscall_64+0x6f/0x1f0 ../arch/x86/entry/syscall_64.c:94
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ RIP: 0033:0x7f34e9f7e9a9
+ Code: <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
+ RSP: 002b:00007f34e9dc6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+ RAX: ffffffffffffffda RBX: 00007f34ea1a6080 RCX: 00007f34e9f7e9a9
+ RDX: 0000200000000280 RSI: 000000008010aebb RDI: 0000000000000007
+ RBP: 00007f34ea000d69 R08: 0000000000000000 R09: 0000000000000000
+ R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
+ R13: 0000000000000000 R14: 00007f34ea1a6080 R15: 00007ffce77197a8
+ </TASK>
+
+with a syzlang reproducer that looks like:
+
+ syz_kvm_add_vcpu$x86(0x0, &(0x7f0000000040)={0x0, &(0x7f0000000180)=ANY=[], 0x70}) (async)
+ syz_kvm_add_vcpu$x86(0x0, &(0x7f0000000080)={0x0, &(0x7f0000000180)=ANY=[@ANYBLOB="..."], 0x4f}) (async)
+ r0 = openat$kvm(0xffffffffffffff9c, &(0x7f0000000200), 0x0, 0x0)
+ r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
+ r2 = openat$kvm(0xffffffffffffff9c, &(0x7f0000000240), 0x0, 0x0)
+ r3 = ioctl$KVM_CREATE_VM(r2, 0xae01, 0x0)
+ ioctl$KVM_SET_CLOCK(r3, 0xc008aeba, &(0x7f0000000040)={0x1, 0x8, 0x0, 0x5625e9b0}) (async)
+ ioctl$KVM_SET_PIT2(r3, 0x8010aebb, &(0x7f0000000280)={[...], 0x5}) (async)
+ ioctl$KVM_SET_PIT2(r1, 0x4070aea0, 0x0) (async)
+ r4 = ioctl$KVM_CREATE_VM(0xffffffffffffffff, 0xae01, 0x0)
+ openat$kvm(0xffffffffffffff9c, 0x0, 0x0, 0x0) (async)
+ ioctl$KVM_SET_USER_MEMORY_REGION(r4, 0x4020ae46, &(0x7f0000000400)={0x0, 0x0, 0x0, 0x2000, &(0x7f0000001000/0x2000)=nil}) (async)
+ r5 = ioctl$KVM_CREATE_VCPU(r4, 0xae41, 0x2)
+ close(r0) (async)
+ openat$kvm(0xffffffffffffff9c, &(0x7f0000000000), 0x8000, 0x0) (async)
+ ioctl$KVM_SET_GUEST_DEBUG(r5, 0x4048ae9b, &(0x7f0000000300)={0x4376ea830d46549b, 0x0, [0x46, 0x0, 0x0, 0x0, 0x0, 0x1000]}) (async)
+ ioctl$KVM_RUN(r5, 0xae80, 0x0)
+
+Opportunistically use guard() to avoid having to define a new error label
+and goto usage.
+
+Fixes: 1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")
+Cc: stable@vger.kernel.org
+Reported-by: Alexander Potapenko <glider@google.com>
+Tested-by: Alexander Potapenko <glider@google.com>
+Link: https://patch.msgid.link/20260310234829.2608037-4-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/sev.c | 6 ++----
+ 1 file changed, 2 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -2687,6 +2687,8 @@ int sev_mem_enc_register_region(struct k
+ struct enc_region *region;
+ int ret = 0;
+
++ guard(mutex)(&kvm->lock);
++
+ if (!sev_guest(kvm))
+ return -ENOTTY;
+
+@@ -2701,12 +2703,10 @@ int sev_mem_enc_register_region(struct k
+ if (!region)
+ return -ENOMEM;
+
+- mutex_lock(&kvm->lock);
+ region->pages = sev_pin_memory(kvm, range->addr, range->size, ®ion->npages,
+ FOLL_WRITE | FOLL_LONGTERM);
+ if (IS_ERR(region->pages)) {
+ ret = PTR_ERR(region->pages);
+- mutex_unlock(&kvm->lock);
+ goto e_free;
+ }
+
+@@ -2724,8 +2724,6 @@ int sev_mem_enc_register_region(struct k
+ region->size = range->size;
+
+ list_add_tail(®ion->list, &sev->regions_list);
+- mutex_unlock(&kvm->lock);
+-
+ return ret;
+
+ e_free:
--- /dev/null
+From 9b9f7962e3e879d12da2bf47e02a24ec51690e3d Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Tue, 10 Mar 2026 16:48:10 -0700
+Subject: KVM: SEV: Reject attempts to sync VMSA of an already-launched/encrypted vCPU
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 9b9f7962e3e879d12da2bf47e02a24ec51690e3d upstream.
+
+Reject synchronizing vCPU state to its associated VMSA if the vCPU has
+already been launched, i.e. if the VMSA has already been encrypted. On a
+host with SNP enabled, accessing guest-private memory generates an RMP #PF
+and panics the host.
+
+ BUG: unable to handle page fault for address: ff1276cbfdf36000
+ #PF: supervisor write access in kernel mode
+ #PF: error_code(0x80000003) - RMP violation
+ PGD 5a31801067 P4D 5a31802067 PUD 40ccfb5063 PMD 40e5954063 PTE 80000040fdf36163
+ SEV-SNP: PFN 0x40fdf36, RMP entry: [0x6010fffffffff001 - 0x000000000000001f]
+ Oops: Oops: 0003 [#1] SMP NOPTI
+ CPU: 33 UID: 0 PID: 996180 Comm: qemu-system-x86 Tainted: G OE
+ Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
+ Hardware name: Dell Inc. PowerEdge R7625/0H1TJT, BIOS 1.5.8 07/21/2023
+ RIP: 0010:sev_es_sync_vmsa+0x54/0x4c0 [kvm_amd]
+ Call Trace:
+ <TASK>
+ snp_launch_update_vmsa+0x19d/0x290 [kvm_amd]
+ snp_launch_finish+0xb6/0x380 [kvm_amd]
+ sev_mem_enc_ioctl+0x14e/0x720 [kvm_amd]
+ kvm_arch_vm_ioctl+0x837/0xcf0 [kvm]
+ kvm_vm_ioctl+0x3fd/0xcc0 [kvm]
+ __x64_sys_ioctl+0xa3/0x100
+ x64_sys_call+0xfe0/0x2350
+ do_syscall_64+0x81/0x10f0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ RIP: 0033:0x7ffff673287d
+ </TASK>
+
+Note, the KVM flaw has been present since commit ad73109ae7ec ("KVM: SVM:
+Provide support to launch and run an SEV-ES guest"), but has only been
+actively dangerous for the host since SNP support was added. With SEV-ES,
+KVM would "just" clobber guest state, which is totally fine from a host
+kernel perspective since userspace can clobber guest state any time before
+sev_launch_update_vmsa().
+
+Fixes: ad27ce155566 ("KVM: SEV: Add KVM_SEV_SNP_LAUNCH_FINISH command")
+Reported-by: Jethro Beekman <jethro@fortanix.com>
+Closes: https://lore.kernel.org/all/d98692e2-d96b-4c36-8089-4bc1e5cc3d57@fortanix.com
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260310234829.2608037-3-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/sev.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -871,6 +871,9 @@ static int sev_es_sync_vmsa(struct vcpu_
+ u8 *d;
+ int i;
+
++ if (vcpu->arch.guest_state_protected)
++ return -EINVAL;
++
+ /* Check some debug related fields before encrypting the VMSA */
+ if (svm->vcpu.guest_debug || (svm->vmcb->save.dr7 & ~DR7_FIXED_1))
+ return -EINVAL;
--- /dev/null
+From stable+bounces-236081-greg=kroah.com@vger.kernel.org Mon Apr 13 15:07:45 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 13 Apr 2026 09:01:25 -0400
+Subject: KVM: x86: Use __DECLARE_FLEX_ARRAY() for UAPI structures with VLAs
+To: stable@vger.kernel.org
+Cc: David Woodhouse <dwmw@amazon.co.uk>, Sean Christopherson <seanjc@google.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260413130125.2879436-2-sashal@kernel.org>
+
+From: David Woodhouse <dwmw@amazon.co.uk>
+
+[ Upstream commit 2619da73bb2f10d88f7e1087125c40144fdf0987 ]
+
+Commit 94dfc73e7cf4 ("treewide: uapi: Replace zero-length arrays with
+flexible-array members") broke the userspace API for C++.
+
+These structures ending in VLAs are typically a *header*, which can be
+followed by an arbitrary number of entries. Userspace typically creates
+a larger structure with some non-zero number of entries, for example in
+QEMU's kvm_arch_get_supported_msr_feature():
+
+ struct {
+ struct kvm_msrs info;
+ struct kvm_msr_entry entries[1];
+ } msr_data = {};
+
+While that works in C, it fails in C++ with an error like:
+ flexible array member 'kvm_msrs::entries' not at end of 'struct msr_data'
+
+Fix this by using __DECLARE_FLEX_ARRAY() for the VLA, which uses [0]
+for C++ compilation.
+
+Fixes: 94dfc73e7cf4 ("treewide: uapi: Replace zero-length arrays with flexible-array members")
+Cc: stable@vger.kernel.org
+Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
+Link: https://patch.msgid.link/3abaf6aefd6e5efeff3b860ac38421d9dec908db.camel@infradead.org
+[sean: tag for stable@]
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/uapi/asm/kvm.h | 12 ++++++------
+ include/uapi/linux/kvm.h | 11 ++++++-----
+ 2 files changed, 12 insertions(+), 11 deletions(-)
+
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -197,13 +197,13 @@ struct kvm_msrs {
+ __u32 nmsrs; /* number of msrs in entries */
+ __u32 pad;
+
+- struct kvm_msr_entry entries[];
++ __DECLARE_FLEX_ARRAY(struct kvm_msr_entry, entries);
+ };
+
+ /* for KVM_GET_MSR_INDEX_LIST */
+ struct kvm_msr_list {
+ __u32 nmsrs; /* number of msrs in entries */
+- __u32 indices[];
++ __DECLARE_FLEX_ARRAY(__u32, indices);
+ };
+
+ /* Maximum size of any access bitmap in bytes */
+@@ -245,7 +245,7 @@ struct kvm_cpuid_entry {
+ struct kvm_cpuid {
+ __u32 nent;
+ __u32 padding;
+- struct kvm_cpuid_entry entries[];
++ __DECLARE_FLEX_ARRAY(struct kvm_cpuid_entry, entries);
+ };
+
+ struct kvm_cpuid_entry2 {
+@@ -267,7 +267,7 @@ struct kvm_cpuid_entry2 {
+ struct kvm_cpuid2 {
+ __u32 nent;
+ __u32 padding;
+- struct kvm_cpuid_entry2 entries[];
++ __DECLARE_FLEX_ARRAY(struct kvm_cpuid_entry2, entries);
+ };
+
+ /* for KVM_GET_PIT and KVM_SET_PIT */
+@@ -398,7 +398,7 @@ struct kvm_xsave {
+ * the contents of CPUID leaf 0xD on the host.
+ */
+ __u32 region[1024];
+- __u32 extra[];
++ __DECLARE_FLEX_ARRAY(__u32, extra);
+ };
+
+ #define KVM_MAX_XCRS 16
+@@ -564,7 +564,7 @@ struct kvm_pmu_event_filter {
+ __u32 fixed_counter_bitmap;
+ __u32 flags;
+ __u32 pad[4];
+- __u64 events[];
++ __DECLARE_FLEX_ARRAY(__u64, events);
+ };
+
+ #define KVM_PMU_EVENT_ALLOW 0
+--- a/include/uapi/linux/kvm.h
++++ b/include/uapi/linux/kvm.h
+@@ -11,6 +11,7 @@
+ #include <linux/const.h>
+ #include <linux/types.h>
+ #include <linux/compiler.h>
++#include <linux/stddef.h>
+ #include <linux/ioctl.h>
+ #include <asm/kvm.h>
+
+@@ -523,7 +524,7 @@ struct kvm_coalesced_mmio {
+
+ struct kvm_coalesced_mmio_ring {
+ __u32 first, last;
+- struct kvm_coalesced_mmio coalesced_mmio[];
++ __DECLARE_FLEX_ARRAY(struct kvm_coalesced_mmio, coalesced_mmio);
+ };
+
+ #define KVM_COALESCED_MMIO_MAX \
+@@ -573,7 +574,7 @@ struct kvm_clear_dirty_log {
+ /* for KVM_SET_SIGNAL_MASK */
+ struct kvm_signal_mask {
+ __u32 len;
+- __u8 sigset[];
++ __DECLARE_FLEX_ARRAY(__u8, sigset);
+ };
+
+ /* for KVM_TPR_ACCESS_REPORTING */
+@@ -1029,7 +1030,7 @@ struct kvm_irq_routing_entry {
+ struct kvm_irq_routing {
+ __u32 nr;
+ __u32 flags;
+- struct kvm_irq_routing_entry entries[];
++ __DECLARE_FLEX_ARRAY(struct kvm_irq_routing_entry, entries);
+ };
+
+ #define KVM_IRQFD_FLAG_DEASSIGN (1 << 0)
+@@ -1120,7 +1121,7 @@ struct kvm_dirty_tlb {
+
+ struct kvm_reg_list {
+ __u64 n; /* number of regs */
+- __u64 reg[];
++ __DECLARE_FLEX_ARRAY(__u64, reg);
+ };
+
+ struct kvm_one_reg {
+@@ -1575,7 +1576,7 @@ struct kvm_stats_desc {
+ #ifdef __KERNEL__
+ char name[KVM_STATS_NAME_SIZE];
+ #else
+- char name[];
++ __DECLARE_FLEX_ARRAY(char, name);
+ #endif
+ };
+
--- /dev/null
+From f8e1fc918a9fe67103bcda01d20d745f264d00a7 Mon Sep 17 00:00:00 2001
+From: Ruslan Valiyev <linuxoid@gmail.com>
+Date: Tue, 3 Mar 2026 11:27:54 +0000
+Subject: media: vidtv: fix NULL pointer dereference in vidtv_channel_pmt_match_sections
+
+From: Ruslan Valiyev <linuxoid@gmail.com>
+
+commit f8e1fc918a9fe67103bcda01d20d745f264d00a7 upstream.
+
+syzbot reported a general protection fault in vidtv_psi_desc_assign [1].
+
+vidtv_psi_pmt_stream_init() can return NULL on memory allocation
+failure, but vidtv_channel_pmt_match_sections() does not check for
+this. When tail is NULL, the subsequent call to
+vidtv_psi_desc_assign(&tail->descriptor, desc) dereferences a NULL
+pointer offset, causing a general protection fault.
+
+Add a NULL check after vidtv_psi_pmt_stream_init(). On failure, clean
+up the already-allocated stream chain and return.
+
+[1]
+Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN PTI
+KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
+RIP: 0010:vidtv_psi_desc_assign+0x24/0x90 drivers/media/test-drivers/vidtv/vidtv_psi.c:629
+Call Trace:
+ <TASK>
+ vidtv_channel_pmt_match_sections drivers/media/test-drivers/vidtv/vidtv_channel.c:349 [inline]
+ vidtv_channel_si_init+0x1445/0x1a50 drivers/media/test-drivers/vidtv/vidtv_channel.c:479
+ vidtv_mux_init+0x526/0xbe0 drivers/media/test-drivers/vidtv/vidtv_mux.c:519
+ vidtv_start_streaming drivers/media/test-drivers/vidtv/vidtv_bridge.c:194 [inline]
+ vidtv_start_feed+0x33e/0x4d0 drivers/media/test-drivers/vidtv/vidtv_bridge.c:239
+
+Fixes: f90cf6079bf67 ("media: vidtv: add a bridge driver")
+Cc: stable@vger.kernel.org
+Reported-by: syzbot+1f5bcc7c919ec578777a@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=1f5bcc7c919ec578777a
+Signed-off-by: Ruslan Valiyev <linuxoid@gmail.com>
+Signed-off-by: Hans Verkuil <hverkuil+cisco@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/media/test-drivers/vidtv/vidtv_channel.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/drivers/media/test-drivers/vidtv/vidtv_channel.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_channel.c
+@@ -341,6 +341,10 @@ vidtv_channel_pmt_match_sections(struct
+ tail = vidtv_psi_pmt_stream_init(tail,
+ s->type,
+ e_pid);
++ if (!tail) {
++ vidtv_psi_pmt_stream_destroy(head);
++ return;
++ }
+
+ if (!head)
+ head = tail;
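The pattern of the fix above, in a self-contained sketch (hypothetical `stream_init`/`stream_destroy` names, not the vidtv API): when a chain is grown node by node, a failed allocation mid-build must free the nodes already linked before bailing out, and the NULL tail must never be dereferenced.

```c
#include <assert.h>
#include <stdlib.h>

struct stream {
	int type;
	struct stream *next;
};

/* Free an entire chain starting at head. */
static void stream_destroy(struct stream *head)
{
	while (head) {
		struct stream *next = head->next;

		free(head);
		head = next;
	}
}

/* Append a node after tail; returns the new tail, or NULL on failure. */
static struct stream *stream_init(struct stream *tail, int type)
{
	struct stream *s = calloc(1, sizeof(*s));

	if (!s)
		return NULL;
	s->type = type;
	if (tail)
		tail->next = s;
	return s;
}

/* Build a chain of n nodes; on mid-build failure, destroy what exists. */
static struct stream *stream_chain(int n)
{
	struct stream *head = NULL, *tail = NULL;

	for (int i = 0; i < n; i++) {
		tail = stream_init(tail, i);
		if (!tail) {			/* the check the fix adds */
			stream_destroy(head);
			return NULL;
		}
		if (!head)
			head = tail;
	}
	return head;
}
```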
--- /dev/null
+From stable+bounces-237711-greg=kroah.com@vger.kernel.org Tue Apr 14 05:31:44 2026
+From: Li hongliang <1468888505@139.com>
+Date: Tue, 14 Apr 2026 11:31:29 +0800
+Subject: netfilter: conntrack: add missing netlink policy validations
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org, fw@strlen.de
+Cc: patches@lists.linux.dev, linux-kernel@vger.kernel.org, pablo@netfilter.org, kadlec@netfilter.org, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, horms@kernel.org, kaber@trash.net, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, netdev@vger.kernel.org, imv4bel@gmail.com
+Message-ID: <20260414033129.48460-1-1468888505@139.com>
+
+From: Florian Westphal <fw@strlen.de>
+
+[ Upstream commit f900e1d77ee0ef87bfb5ab3fe60f0b3d8ad5ba05 ]
+
+Hyunwoo Kim reports out-of-bounds access in sctp and ctnetlink.
+
+These attributes are used by the kernel without any validation.
+Extend the netlink policies accordingly.
+
+Quoting the reporter:
+ nlattr_to_sctp() assigns the user-supplied CTA_PROTOINFO_SCTP_STATE
+ value directly to ct->proto.sctp.state without checking that it is
+ within the valid range. [..]
+
+ and: ... with exp->dir = 100, the access at
+ ct->master->tuplehash[100] reads 5600 bytes past the start of a
+ 320-byte nf_conn object, causing a slab-out-of-bounds read confirmed by
+ UBSAN.
+
+Fixes: 076a0ca02644 ("netfilter: ctnetlink: add NAT support for expectations")
+Fixes: a258860e01b8 ("netfilter: ctnetlink: add full support for SCTP to ctnetlink")
+Reported-by: Hyunwoo Kim <imv4bel@gmail.com>
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Li hongliang <1468888505@139.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/netfilter/nf_conntrack_netlink.c | 2 +-
+ net/netfilter/nf_conntrack_proto_sctp.c | 3 ++-
+ 2 files changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
+index 879413b9fa06..2bb9eb2d25fb 100644
+--- a/net/netfilter/nf_conntrack_netlink.c
++++ b/net/netfilter/nf_conntrack_netlink.c
+@@ -3465,7 +3465,7 @@ ctnetlink_change_expect(struct nf_conntrack_expect *x,
+
+ #if IS_ENABLED(CONFIG_NF_NAT)
+ static const struct nla_policy exp_nat_nla_policy[CTA_EXPECT_NAT_MAX+1] = {
+- [CTA_EXPECT_NAT_DIR] = { .type = NLA_U32 },
++ [CTA_EXPECT_NAT_DIR] = NLA_POLICY_MAX(NLA_BE32, IP_CT_DIR_REPLY),
+ [CTA_EXPECT_NAT_TUPLE] = { .type = NLA_NESTED },
+ };
+ #endif
+diff --git a/net/netfilter/nf_conntrack_proto_sctp.c b/net/netfilter/nf_conntrack_proto_sctp.c
+index 7c6f7c9f7332..645d2c43ebf7 100644
+--- a/net/netfilter/nf_conntrack_proto_sctp.c
++++ b/net/netfilter/nf_conntrack_proto_sctp.c
+@@ -582,7 +582,8 @@ static int sctp_to_nlattr(struct sk_buff *skb, struct nlattr *nla,
+ }
+
+ static const struct nla_policy sctp_nla_policy[CTA_PROTOINFO_SCTP_MAX+1] = {
+- [CTA_PROTOINFO_SCTP_STATE] = { .type = NLA_U8 },
++ [CTA_PROTOINFO_SCTP_STATE] = NLA_POLICY_MAX(NLA_U8,
++ SCTP_CONNTRACK_HEARTBEAT_SENT),
+ [CTA_PROTOINFO_SCTP_VTAG_ORIGINAL] = { .type = NLA_U32 },
+ [CTA_PROTOINFO_SCTP_VTAG_REPLY] = { .type = NLA_U32 },
+ };
+--
+2.34.1
+
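What `NLA_POLICY_MAX()` buys, reduced to its essence (simplified stand-ins, not the netlink API): an untrusted integer from userspace must be range-checked before it indexes a fixed-size array, otherwise a crafted `dir == 100` reads far past the end of a two-entry table, as in the report above.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define IP_CT_DIR_ORIGINAL 0
#define IP_CT_DIR_REPLY    1
#define IP_CT_DIR_MAX      2

/* Stand-in for ct->master->tuplehash[]: exactly two valid entries. */
static const int tuplehash[IP_CT_DIR_MAX] = { 11, 22 };

/* Reject out-of-range directions instead of trusting the sender,
 * mirroring NLA_POLICY_MAX(NLA_BE32, IP_CT_DIR_REPLY). */
static int lookup_dir(uint32_t dir, int *out)
{
	if (dir > IP_CT_DIR_REPLY)
		return -ERANGE;
	*out = tuplehash[dir];
	return 0;
}
```

With the policy in place, the kernel's netlink core performs this check generically before the protocol code ever sees the attribute.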
--- /dev/null
+From stable+bounces-236133-greg=kroah.com@vger.kernel.org Mon Apr 13 17:25:42 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 13 Apr 2026 11:19:56 -0400
+Subject: ocfs2: add inline inode consistency check to ocfs2_validate_inode_block()
+To: stable@vger.kernel.org
+Cc: Dmitry Antipov <dmantipov@yandex.ru>, syzbot+c16daba279a1161acfb0@syzkaller.appspotmail.com, Joseph Qi <joseph.qi@linux.alibaba.com>, Joseph Qi <jiangqi903@gmail.com>, Mark Fasheh <mark@fasheh.com>, Joel Becker <jlbec@evilplan.org>, Junxiao Bi <junxiao.bi@oracle.com>, Changwei Ge <gechangwei@live.cn>, Jun Piao <piaojun@huawei.com>, Heming Zhao <heming.zhao@suse.com>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260413151958.3014725-1-sashal@kernel.org>
+
+From: Dmitry Antipov <dmantipov@yandex.ru>
+
+[ Upstream commit a2b1c419ff72ec62ff5831684e30cd1d4f0b09ee ]
+
+In 'ocfs2_validate_inode_block()', add an extra check that an inode
+with inline data (i.e. self-contained) has no clusters, thus preventing
+an invalid inode from being passed to 'ocfs2_evict_inode()' and below.
+
+Link: https://lkml.kernel.org/r/20251023141650.417129-1-dmantipov@yandex.ru
+Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
+Reported-by: syzbot+c16daba279a1161acfb0@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=c16daba279a1161acfb0
+Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Cc: Joseph Qi <jiangqi903@gmail.com>
+Cc: Mark Fasheh <mark@fasheh.com>
+Cc: Joel Becker <jlbec@evilplan.org>
+Cc: Junxiao Bi <junxiao.bi@oracle.com>
+Cc: Changwei Ge <gechangwei@live.cn>
+Cc: Jun Piao <piaojun@huawei.com>
+Cc: Heming Zhao <heming.zhao@suse.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Stable-dep-of: 7bc5da4842be ("ocfs2: fix out-of-bounds write in ocfs2_write_end_inline")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ocfs2/inode.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -1505,6 +1505,14 @@ int ocfs2_validate_inode_block(struct su
+ goto bail;
+ }
+
++ if ((le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL) &&
++ le32_to_cpu(di->i_clusters)) {
++ rc = ocfs2_error(sb, "Invalid dinode %llu: %u clusters\n",
++ (unsigned long long)bh->b_blocknr,
++ le32_to_cpu(di->i_clusters));
++ goto bail;
++ }
++
+ rc = 0;
+
+ bail:
--- /dev/null
+From stable+bounces-236135-greg=kroah.com@vger.kernel.org Mon Apr 13 17:26:00 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 13 Apr 2026 11:19:58 -0400
+Subject: ocfs2: fix out-of-bounds write in ocfs2_write_end_inline
+To: stable@vger.kernel.org
+Cc: Joseph Qi <joseph.qi@linux.alibaba.com>, syzbot+62c1793956716ea8b28a@syzkaller.appspotmail.com, Mark Fasheh <mark@fasheh.com>, Joel Becker <jlbec@evilplan.org>, Junxiao Bi <junxiao.bi@oracle.com>, Changwei Ge <gechangwei@live.cn>, Jun Piao <piaojun@huawei.com>, Heming Zhao <heming.zhao@suse.com>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260413151958.3014725-3-sashal@kernel.org>
+
+From: Joseph Qi <joseph.qi@linux.alibaba.com>
+
+[ Upstream commit 7bc5da4842bed3252d26e742213741a4d0ac1b14 ]
+
+KASAN reports a use-after-free write of 4086 bytes in
+ocfs2_write_end_inline, called from ocfs2_write_end_nolock during a
+copy_file_range splice fallback on a corrupted ocfs2 filesystem mounted on
+a loop device. The actual bug is an out-of-bounds write past the inode
+block buffer, not a true use-after-free. The write overflows into an
+adjacent freed page, which KASAN reports as UAF.
+
+The root cause is that ocfs2_try_to_write_inline_data trusts the on-disk
+id_count field to determine whether a write fits in inline data. On a
+corrupted filesystem, id_count can exceed the physical maximum inline data
+capacity, causing writes to overflow the inode block buffer.
+
+Call trace (crash path):
+
+ vfs_copy_file_range (fs/read_write.c:1634)
+ do_splice_direct
+ splice_direct_to_actor
+ iter_file_splice_write
+ ocfs2_file_write_iter
+ generic_perform_write
+ ocfs2_write_end
+ ocfs2_write_end_nolock (fs/ocfs2/aops.c:1949)
+ ocfs2_write_end_inline (fs/ocfs2/aops.c:1915)
+ memcpy_from_folio <-- KASAN: write OOB
+
+So add an id_count upper bound check in ocfs2_validate_inode_block()
+alongside the existing i_size check to fix it.
+
+Link: https://lkml.kernel.org/r/20260403063830.3662739-1-joseph.qi@linux.alibaba.com
+Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Reported-by: syzbot+62c1793956716ea8b28a@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=62c1793956716ea8b28a
+Cc: Mark Fasheh <mark@fasheh.com>
+Cc: Joel Becker <jlbec@evilplan.org>
+Cc: Junxiao Bi <junxiao.bi@oracle.com>
+Cc: Changwei Ge <gechangwei@live.cn>
+Cc: Jun Piao <piaojun@huawei.com>
+Cc: Heming Zhao <heming.zhao@suse.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ocfs2/inode.c | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -1516,6 +1516,16 @@ int ocfs2_validate_inode_block(struct su
+ goto bail;
+ }
+
++ if (le16_to_cpu(data->id_count) >
++ ocfs2_max_inline_data_with_xattr(sb, di)) {
++ rc = ocfs2_error(sb,
++ "Invalid dinode #%llu: inline data id_count %u exceeds max %d\n",
++ (unsigned long long)bh->b_blocknr,
++ le16_to_cpu(data->id_count),
++ ocfs2_max_inline_data_with_xattr(sb, di));
++ goto bail;
++ }
++
+ if (le64_to_cpu(di->i_size) > le16_to_cpu(data->id_count)) {
+ rc = ocfs2_error(sb,
+ "Invalid dinode #%llu: inline data i_size %llu exceeds id_count %u\n",
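The two checks the series adds, as one self-consistency rule in a standalone sketch (simplified fields, not the on-disk ocfs2 layout; `INLINE_DATA_MAX` stands in for `ocfs2_max_inline_data_with_xattr()`): a disk-supplied `id_count` must itself be capped by the physical inline-data capacity before any write is sized by it, and `i_size` must in turn fit within `id_count`.

```c
#include <assert.h>
#include <stdint.h>

#define INLINE_DATA_MAX 2048	/* physical inline-data capacity */

struct dinode {
	uint64_t i_size;	/* claimed file size */
	uint16_t id_count;	/* claimed inline-data capacity */
};

/* Returns 0 if the inode's inline-data fields are self-consistent,
 * -1 if the on-disk values could drive an out-of-bounds access. */
static int validate_inline(const struct dinode *di)
{
	if (di->id_count > INLINE_DATA_MAX)
		return -1;	/* capacity claim exceeds physical space */
	if (di->i_size > di->id_count)
		return -1;	/* size claim exceeds claimed capacity */
	return 0;
}
```

Validating once at inode-read time means every downstream path (splice, directory iteration, write_end) can rely on the invariant instead of rechecking it.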
--- /dev/null
+From b02da26a992db0c0e2559acbda0fc48d4a2fd337 Mon Sep 17 00:00:00 2001
+From: Joseph Qi <joseph.qi@linux.alibaba.com>
+Date: Fri, 6 Mar 2026 11:22:11 +0800
+Subject: ocfs2: fix possible deadlock between unlink and dio_end_io_write
+
+From: Joseph Qi <joseph.qi@linux.alibaba.com>
+
+commit b02da26a992db0c0e2559acbda0fc48d4a2fd337 upstream.
+
+ocfs2_unlink takes orphan dir inode_lock first and then ip_alloc_sem,
+while in ocfs2_dio_end_io_write, it acquires these locks in reverse order.
+This creates an ABBA lock ordering violation on lock classes
+ocfs2_sysfile_lock_key[ORPHAN_DIR_SYSTEM_INODE] and
+ocfs2_file_ip_alloc_sem_key.
+
+Lock Chain #0 (orphan dir inode_lock -> ip_alloc_sem):
+ocfs2_unlink
+ ocfs2_prepare_orphan_dir
+ ocfs2_lookup_lock_orphan_dir
+ inode_lock(orphan_dir_inode) <- lock A
+ __ocfs2_prepare_orphan_dir
+ ocfs2_prepare_dir_for_insert
+ ocfs2_extend_dir
+ ocfs2_expand_inline_dir
+ down_write(&oi->ip_alloc_sem) <- Lock B
+
+Lock Chain #1 (ip_alloc_sem -> orphan dir inode_lock):
+ocfs2_dio_end_io_write
+ down_write(&oi->ip_alloc_sem) <- Lock B
+ ocfs2_del_inode_from_orphan()
+ inode_lock(orphan_dir_inode) <- Lock A
+
+Deadlock Scenario:
+ CPU0 (unlink) CPU1 (dio_end_io_write)
+ ------ ------
+ inode_lock(orphan_dir_inode)
+ down_write(ip_alloc_sem)
+ down_write(ip_alloc_sem)
+ inode_lock(orphan_dir_inode)
+
+Since ip_alloc_sem protects allocation changes, it is unrelated to the
+operations in ocfs2_del_inode_from_orphan. So move
+ocfs2_del_inode_from_orphan out of ip_alloc_sem to fix the deadlock.
+
+Link: https://lkml.kernel.org/r/20260306032211.1016452-1-joseph.qi@linux.alibaba.com
+Reported-by: syzbot+67b90111784a3eac8c04@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=67b90111784a3eac8c04
+Fixes: a86a72a4a4e0 ("ocfs2: take ip_alloc_sem in ocfs2_dio_get_block & ocfs2_dio_end_io_write")
+Signed-off-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Reviewed-by: Heming Zhao <heming.zhao@suse.com>
+Cc: Mark Fasheh <mark@fasheh.com>
+Cc: Joel Becker <jlbec@evilplan.org>
+Cc: Junxiao Bi <junxiao.bi@oracle.com>
+Cc: Joseph Qi <jiangqi903@gmail.com>
+Cc: Changwei Ge <gechangwei@live.cn>
+Cc: Jun Piao <piaojun@huawei.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ocfs2/aops.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/fs/ocfs2/aops.c
++++ b/fs/ocfs2/aops.c
+@@ -2295,8 +2295,6 @@ static int ocfs2_dio_end_io_write(struct
+ goto out;
+ }
+
+- down_write(&oi->ip_alloc_sem);
+-
+ /* Delete orphan before acquire i_rwsem. */
+ if (dwc->dw_orphaned) {
+ BUG_ON(dwc->dw_writer_pid != task_pid_nr(current));
+@@ -2309,6 +2307,7 @@ static int ocfs2_dio_end_io_write(struct
+ mlog_errno(ret);
+ }
+
++ down_write(&oi->ip_alloc_sem);
+ di = (struct ocfs2_dinode *)di_bh->b_data;
+
+ ocfs2_init_dinode_extent_tree(&et, INODE_CACHE(inode), di_bh);
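The fix works by making both paths take the locks in the same A-then-B order. A minimal pthread sketch of the repaired ordering (mutexes standing in for the orphan dir inode_lock and ip_alloc_sem; the function names are illustrative only): once no path ever holds B while waiting for A, the circular wait cannot form.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* orphan dir */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* ip_alloc_sem */
static int progress;

/* unlink path: A then B, as before the fix. */
static void *unlink_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);
	pthread_mutex_lock(&lock_b);
	progress++;
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* dio end_io path after the fix: orphan-dir work under A completes
 * before B is taken, so the order is A then B here too. */
static void *dio_end_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);
	progress++;			/* del_inode_from_orphan work */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_lock(&lock_b);
	progress++;			/* allocation changes */
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

/* Run both paths concurrently; consistent ordering cannot deadlock. */
static int run_both(void)
{
	pthread_t t1, t2;

	progress = 0;
	pthread_create(&t1, NULL, unlink_path, NULL);
	pthread_create(&t2, NULL, dio_end_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return progress;
}
```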
--- /dev/null
+From 7de554cabf160e331e4442e2a9ad874ca9875921 Mon Sep 17 00:00:00 2001
+From: Tejas Bharambe <tejas.bharambe@outlook.com>
+Date: Fri, 10 Apr 2026 01:38:16 -0700
+Subject: ocfs2: fix use-after-free in ocfs2_fault() when VM_FAULT_RETRY
+
+From: Tejas Bharambe <tejas.bharambe@outlook.com>
+
+commit 7de554cabf160e331e4442e2a9ad874ca9875921 upstream.
+
+filemap_fault() may drop the mmap_lock before returning VM_FAULT_RETRY,
+as documented in mm/filemap.c:
+
+ "If our return value has VM_FAULT_RETRY set, it's because the mmap_lock
+ may be dropped before doing I/O or by lock_folio_maybe_drop_mmap()."
+
+When this happens, a concurrent munmap() can call remove_vma() and free
+the vm_area_struct via RCU. The saved 'vma' pointer in ocfs2_fault() then
+becomes a dangling pointer, and the subsequent trace_ocfs2_fault() call
+dereferences it -- a use-after-free.
+
+Fix this by saving ip_blkno as a plain integer before calling
+filemap_fault(), and removing vma from the trace event. Since
+ip_blkno is copied by value before the lock can be dropped, it
+remains valid regardless of what happens to the vma or inode
+afterward.
+
+Link: https://lkml.kernel.org/r/20260410083816.34951-1-tejas.bharambe@outlook.com
+Fixes: 614a9e849ca6 ("ocfs2: Remove FILE_IO from masklog.")
+Signed-off-by: Tejas Bharambe <tejas.bharambe@outlook.com>
+Reported-by: syzbot+a49010a0e8fcdeea075f@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=a49010a0e8fcdeea075f
+Suggested-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Cc: Mark Fasheh <mark@fasheh.com>
+Cc: Joel Becker <jlbec@evilplan.org>
+Cc: Junxiao Bi <junxiao.bi@oracle.com>
+Cc: Changwei Ge <gechangwei@live.cn>
+Cc: Jun Piao <piaojun@huawei.com>
+Cc: Heming Zhao <heming.zhao@suse.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ocfs2/mmap.c | 7 +++----
+ fs/ocfs2/ocfs2_trace.h | 10 ++++------
+ 2 files changed, 7 insertions(+), 10 deletions(-)
+
+--- a/fs/ocfs2/mmap.c
++++ b/fs/ocfs2/mmap.c
+@@ -30,7 +30,8 @@
+
+ static vm_fault_t ocfs2_fault(struct vm_fault *vmf)
+ {
+- struct vm_area_struct *vma = vmf->vma;
++ unsigned long long ip_blkno =
++ OCFS2_I(file_inode(vmf->vma->vm_file))->ip_blkno;
+ sigset_t oldset;
+ vm_fault_t ret;
+
+@@ -38,11 +39,9 @@ static vm_fault_t ocfs2_fault(struct vm_
+ ret = filemap_fault(vmf);
+ ocfs2_unblock_signals(&oldset);
+
+- trace_ocfs2_fault(OCFS2_I(vma->vm_file->f_mapping->host)->ip_blkno,
+- vma, vmf->page, vmf->pgoff);
++ trace_ocfs2_fault(ip_blkno, vmf->page, vmf->pgoff);
+ return ret;
+ }
+-
+ static vm_fault_t __ocfs2_page_mkwrite(struct file *file,
+ struct buffer_head *di_bh, struct folio *folio)
+ {
+--- a/fs/ocfs2/ocfs2_trace.h
++++ b/fs/ocfs2/ocfs2_trace.h
+@@ -1246,22 +1246,20 @@ TRACE_EVENT(ocfs2_write_end_inline,
+
+ TRACE_EVENT(ocfs2_fault,
+ TP_PROTO(unsigned long long ino,
+- void *area, void *page, unsigned long pgoff),
+- TP_ARGS(ino, area, page, pgoff),
++ void *page, unsigned long pgoff),
++ TP_ARGS(ino, page, pgoff),
+ TP_STRUCT__entry(
+ __field(unsigned long long, ino)
+- __field(void *, area)
+ __field(void *, page)
+ __field(unsigned long, pgoff)
+ ),
+ TP_fast_assign(
+ __entry->ino = ino;
+- __entry->area = area;
+ __entry->page = page;
+ __entry->pgoff = pgoff;
+ ),
+- TP_printk("%llu %p %p %lu",
+- __entry->ino, __entry->area, __entry->page, __entry->pgoff)
++ TP_printk("%llu %p %lu",
++ __entry->ino, __entry->page, __entry->pgoff)
+ );
+
+ /* End of trace events for fs/ocfs2/mmap.c. */
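The idea behind saving ip_blkno as a plain integer, in a self-contained sketch (hypothetical structs, with `fault_and_free()` standing in for a `filemap_fault()` that drops mmap_lock and lets a racing munmap() free the vma): snapshot the scalar you need by value before the call that can invalidate the object, and only use the copy afterwards.

```c
#include <assert.h>
#include <stdlib.h>

struct vma {
	unsigned long long ip_blkno;
};

/* Stand-in for filemap_fault() returning VM_FAULT_RETRY after the
 * lock protecting v was dropped and v was freed by a racing thread. */
static void fault_and_free(struct vma *v)
{
	free(v);
}

/* Safe variant: the block number is copied before v can go away. */
static unsigned long long do_fault(struct vma *v)
{
	unsigned long long ip_blkno = v->ip_blkno;	/* snapshot by value */

	fault_and_free(v);	/* v is now dangling */
	return ip_blkno;	/* still valid: plain integer copy */
}
```

This is why the trace event also loses its `area` field: a raw `vma` pointer cannot be made safe this way, only values copied out of it can.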
--- /dev/null
+From 4a1c0ddc6e7bcf2e9db0eeaab9340dcfe97f448f Mon Sep 17 00:00:00 2001
+From: ZhengYuan Huang <gality369@gmail.com>
+Date: Wed, 1 Apr 2026 17:23:03 +0800
+Subject: ocfs2: handle invalid dinode in ocfs2_group_extend
+
+From: ZhengYuan Huang <gality369@gmail.com>
+
+commit 4a1c0ddc6e7bcf2e9db0eeaab9340dcfe97f448f upstream.
+
+[BUG]
+kernel BUG at fs/ocfs2/resize.c:308!
+Oops: invalid opcode: 0000 [#1] SMP KASAN NOPTI
+RIP: 0010:ocfs2_group_extend+0x10aa/0x1ae0 fs/ocfs2/resize.c:308
+Code: 8b8520ff ffff83f8 860f8580 030000e8 5cc3c1fe
+Call Trace:
+ ...
+ ocfs2_ioctl+0x175/0x6e0 fs/ocfs2/ioctl.c:869
+ vfs_ioctl fs/ioctl.c:51 [inline]
+ __do_sys_ioctl fs/ioctl.c:597 [inline]
+ __se_sys_ioctl fs/ioctl.c:583 [inline]
+ __x64_sys_ioctl+0x197/0x1e0 fs/ioctl.c:583
+ x64_sys_call+0x1144/0x26a0 arch/x86/include/generated/asm/syscalls_64.h:17
+ do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
+ do_syscall_64+0x93/0xf80 arch/x86/entry/syscall_64.c:94
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ ...
+
+[CAUSE]
+ocfs2_group_extend() assumes that the global bitmap inode block
+returned from ocfs2_inode_lock() has already been validated and
+BUG_ONs when the signature is not a dinode. That assumption is too
+strong for crafted filesystems because the JBD2-managed buffer path
+can bypass structural validation and return an invalid dinode to the
+resize ioctl.
+
+[FIX]
+Validate the dinode explicitly in ocfs2_group_extend(). If the global
+bitmap buffer does not contain a valid dinode, report filesystem
+corruption with ocfs2_error() and fail the resize operation instead of
+crashing the kernel.
+
+Link: https://lkml.kernel.org/r/20260401092303.3709187-1-gality369@gmail.com
+Fixes: 10995aa2451a ("ocfs2: Morph the haphazard OCFS2_IS_VALID_DINODE() checks.")
+Signed-off-by: ZhengYuan Huang <gality369@gmail.com>
+Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Cc: Mark Fasheh <mark@fasheh.com>
+Cc: Joel Becker <jlbec@evilplan.org>
+Cc: Junxiao Bi <junxiao.bi@oracle.com>
+Cc: Changwei Ge <gechangwei@live.cn>
+Cc: Jun Piao <piaojun@huawei.com>
+Cc: Heming Zhao <heming.zhao@suse.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ocfs2/resize.c | 10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+--- a/fs/ocfs2/resize.c
++++ b/fs/ocfs2/resize.c
+@@ -303,9 +303,13 @@ int ocfs2_group_extend(struct inode * in
+
+ fe = (struct ocfs2_dinode *)main_bm_bh->b_data;
+
+- /* main_bm_bh is validated by inode read inside ocfs2_inode_lock(),
+- * so any corruption is a code bug. */
+- BUG_ON(!OCFS2_IS_VALID_DINODE(fe));
++ /* JBD-managed buffers can bypass validation, so treat this as corruption. */
++ if (!OCFS2_IS_VALID_DINODE(fe)) {
++ ret = ocfs2_error(main_bm_inode->i_sb,
++ "Invalid dinode #%llu\n",
++ (unsigned long long)OCFS2_I(main_bm_inode)->ip_blkno);
++ goto out_unlock;
++ }
+
+ if (le16_to_cpu(fe->id2.i_chain.cl_cpg) !=
+ ocfs2_group_bitmap_size(osb->sb, 0,
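The validate-don't-crash pattern the fix applies, sketched standalone (simplified structures, not ocfs2's real dinode; `-EUCLEAN` is the kernel's "filesystem corrupted" errno): data that ultimately comes from disk is untrusted input, so a bad signature should produce an error the caller can handle rather than a `BUG_ON()` that takes the machine down on a crafted image.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

struct dinode {
	char i_signature[8];
};

/* Stand-in for OCFS2_IS_VALID_DINODE(): signature must match. */
static int is_valid_dinode(const struct dinode *di)
{
	return memcmp(di->i_signature, "INODE01", 8) == 0;
}

/* Fail gracefully on corruption rather than crashing the kernel. */
static int group_extend(const struct dinode *di)
{
	if (!is_valid_dinode(di))
		return -EUCLEAN;	/* report corruption to the caller */
	/* ... proceed with the resize ... */
	return 0;
}
```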
--- /dev/null
+From stable+bounces-236134-greg=kroah.com@vger.kernel.org Mon Apr 13 17:25:54 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 13 Apr 2026 11:19:57 -0400
+Subject: ocfs2: validate inline data i_size during inode read
+To: stable@vger.kernel.org
+Cc: Deepanshu Kartikey <kartikey406@gmail.com>, syzbot+c897823f699449cc3eb4@syzkaller.appspotmail.com, Joseph Qi <joseph.qi@linux.alibaba.com>, Mark Fasheh <mark@fasheh.com>, Joel Becker <jlbec@evilplan.org>, Junxiao Bi <junxiao.bi@oracle.com>, Changwei Ge <gechangwei@live.cn>, Jun Piao <piaojun@huawei.com>, Heming Zhao <heming.zhao@suse.com>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260413151958.3014725-2-sashal@kernel.org>
+
+From: Deepanshu Kartikey <kartikey406@gmail.com>
+
+[ Upstream commit 1524af3685b35feac76662cc551cbc37bd14775f ]
+
+When reading an inode from disk, ocfs2_validate_inode_block() performs
+various sanity checks but does not validate the size of inline data. If
+the filesystem is corrupted, an inode's i_size can exceed the actual
+inline data capacity (id_count).
+
+This causes ocfs2_dir_foreach_blk_id() to iterate beyond the inline data
+buffer, triggering a use-after-free when accessing directory entries from
+freed memory.
+
+In the syzbot report:
+ - i_size was 1099511627576 bytes (~1TB)
+ - Actual inline data capacity (id_count) is typically <256 bytes
+ - A garbage rec_len (54648) caused ctx->pos to jump out of bounds
+ - This triggered a UAF in ocfs2_check_dir_entry()
+
+Fix by adding a validation check in ocfs2_validate_inode_block() to ensure
+inodes with inline data have i_size <= id_count. This catches the
+corruption early during inode read and prevents all downstream code from
+operating on invalid data.
+
+Link: https://lkml.kernel.org/r/20251212052132.16750-1-kartikey406@gmail.com
+Signed-off-by: Deepanshu Kartikey <kartikey406@gmail.com>
+Reported-by: syzbot+c897823f699449cc3eb4@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=c897823f699449cc3eb4
+Tested-by: syzbot+c897823f699449cc3eb4@syzkaller.appspotmail.com
+Link: https://lore.kernel.org/all/20251211115231.3560028-1-kartikey406@gmail.com/T/ [v1]
+Link: https://lore.kernel.org/all/20251212040400.6377-1-kartikey406@gmail.com/T/ [v2]
+Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
+Cc: Mark Fasheh <mark@fasheh.com>
+Cc: Joel Becker <jlbec@evilplan.org>
+Cc: Junxiao Bi <junxiao.bi@oracle.com>
+Cc: Changwei Ge <gechangwei@live.cn>
+Cc: Jun Piao <piaojun@huawei.com>
+Cc: Heming Zhao <heming.zhao@suse.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Stable-dep-of: 7bc5da4842be ("ocfs2: fix out-of-bounds write in ocfs2_write_end_inline")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ocfs2/inode.c | 25 +++++++++++++++++++------
+ 1 file changed, 19 insertions(+), 6 deletions(-)
+
+--- a/fs/ocfs2/inode.c
++++ b/fs/ocfs2/inode.c
+@@ -1505,12 +1505,25 @@ int ocfs2_validate_inode_block(struct su
+ goto bail;
+ }
+
+- if ((le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL) &&
+- le32_to_cpu(di->i_clusters)) {
+- rc = ocfs2_error(sb, "Invalid dinode %llu: %u clusters\n",
+- (unsigned long long)bh->b_blocknr,
+- le32_to_cpu(di->i_clusters));
+- goto bail;
++ if (le16_to_cpu(di->i_dyn_features) & OCFS2_INLINE_DATA_FL) {
++ struct ocfs2_inline_data *data = &di->id2.i_data;
++
++ if (le32_to_cpu(di->i_clusters)) {
++ rc = ocfs2_error(sb,
++ "Invalid dinode %llu: %u clusters\n",
++ (unsigned long long)bh->b_blocknr,
++ le32_to_cpu(di->i_clusters));
++ goto bail;
++ }
++
++ if (le64_to_cpu(di->i_size) > le16_to_cpu(data->id_count)) {
++ rc = ocfs2_error(sb,
++ "Invalid dinode #%llu: inline data i_size %llu exceeds id_count %u\n",
++ (unsigned long long)bh->b_blocknr,
++ (unsigned long long)le64_to_cpu(di->i_size),
++ le16_to_cpu(data->id_count));
++ goto bail;
++ }
+ }
+
+ rc = 0;
--- /dev/null
+From 0da63230d3ec1ec5fcc443a2314233e95bfece54 Mon Sep 17 00:00:00 2001
+From: Koichiro Den <den@valinux.co.jp>
+Date: Thu, 26 Feb 2026 17:41:38 +0900
+Subject: PCI: endpoint: pci-epf-vntb: Remove duplicate resource teardown
+
+From: Koichiro Den <den@valinux.co.jp>
+
+commit 0da63230d3ec1ec5fcc443a2314233e95bfece54 upstream.
+
+epf_ntb_epc_destroy() duplicates the teardown that the caller is
+supposed to perform later. This leads to an oops when .allow_link fails
+or when .drop_link is performed. The following is an example oops of the
+former case:
+
+ Unable to handle kernel paging request at virtual address dead000000000108
+ [...]
+ [dead000000000108] address between user and kernel address ranges
+ Internal error: Oops: 0000000096000044 [#1] SMP
+ [...]
+ Call trace:
+ pci_epc_remove_epf+0x78/0xe0 (P)
+ pci_primary_epc_epf_link+0x88/0xa8
+ configfs_symlink+0x1f4/0x5a0
+ vfs_symlink+0x134/0x1d8
+ do_symlinkat+0x88/0x138
+ __arm64_sys_symlinkat+0x74/0xe0
+ [...]
+
+Remove the helper, and drop pci_epc_put(). EPC device refcounting is
+tied to the configfs EPC group lifetime, and pci_epc_put() in the
+.drop_link path is sufficient.
+
+Fixes: e35f56bb0330 ("PCI: endpoint: Support NTB transfer between RC and EP")
+Signed-off-by: Koichiro Den <den@valinux.co.jp>
+Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
+Reviewed-by: Frank Li <Frank.Li@nxp.com>
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260226084142.2226875-2-den@valinux.co.jp
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pci/endpoint/functions/pci-epf-vntb.c | 19 +------------------
+ 1 file changed, 1 insertion(+), 18 deletions(-)
+
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -645,19 +645,6 @@ static void epf_ntb_mw_bar_clear(struct
+ }
+
+ /**
+- * epf_ntb_epc_destroy() - Cleanup NTB EPC interface
+- * @ntb: NTB device that facilitates communication between HOST and VHOST
+- *
+- * Wrapper for epf_ntb_epc_destroy_interface() to cleanup all the NTB interfaces
+- */
+-static void epf_ntb_epc_destroy(struct epf_ntb *ntb)
+-{
+- pci_epc_remove_epf(ntb->epf->epc, ntb->epf, 0);
+- pci_epc_put(ntb->epf->epc);
+-}
+-
+-
+-/**
+ * epf_ntb_is_bar_used() - Check if a bar is used in the ntb configuration
+ * @ntb: NTB device that facilitates communication between HOST and VHOST
+ * @barno: Checked bar number
+@@ -1407,7 +1394,7 @@ static int epf_ntb_bind(struct pci_epf *
+ ret = epf_ntb_init_epc_bar(ntb);
+ if (ret) {
+ dev_err(dev, "Failed to create NTB EPC\n");
+- goto err_bar_init;
++ return ret;
+ }
+
+ ret = epf_ntb_config_spad_bar_alloc(ntb);
+@@ -1447,9 +1434,6 @@ err_epc_cleanup:
+ err_bar_alloc:
+ epf_ntb_config_spad_bar_free(ntb);
+
+-err_bar_init:
+- epf_ntb_epc_destroy(ntb);
+-
+ return ret;
+ }
+
+@@ -1465,7 +1449,6 @@ static void epf_ntb_unbind(struct pci_ep
+
+ epf_ntb_epc_cleanup(ntb);
+ epf_ntb_config_spad_bar_free(ntb);
+- epf_ntb_epc_destroy(ntb);
+
+ pci_unregister_driver(&vntb_pci_driver);
+ }
--- /dev/null
+From d799984233a50abd2667a7d17a9a710a3f10ebe2 Mon Sep 17 00:00:00 2001
+From: Koichiro Den <den@valinux.co.jp>
+Date: Thu, 26 Feb 2026 17:41:40 +0900
+Subject: PCI: endpoint: pci-epf-vntb: Stop cmd_handler work in epf_ntb_epc_cleanup
+
+From: Koichiro Den <den@valinux.co.jp>
+
+commit d799984233a50abd2667a7d17a9a710a3f10ebe2 upstream.
+
+Disable the delayed work before clearing BAR mappings and doorbells to
+avoid running the handler after resources have been torn down.
+
+ Unable to handle kernel paging request at virtual address ffff800083f46004
+ [...]
+ Internal error: Oops: 0000000096000007 [#1] SMP
+ [...]
+ Call trace:
+ epf_ntb_cmd_handler+0x54/0x200 [pci_epf_vntb] (P)
+ process_one_work+0x154/0x3b0
+ worker_thread+0x2c8/0x400
+ kthread+0x148/0x210
+ ret_from_fork+0x10/0x20
+
+Fixes: e35f56bb0330 ("PCI: endpoint: Support NTB transfer between RC and EP")
+Signed-off-by: Koichiro Den <den@valinux.co.jp>
+Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
+Reviewed-by: Frank Li <Frank.Li@nxp.com>
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260226084142.2226875-4-den@valinux.co.jp
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pci/endpoint/functions/pci-epf-vntb.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/pci/endpoint/functions/pci-epf-vntb.c
++++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c
+@@ -836,6 +836,7 @@ err_config_interrupt:
+ */
+ static void epf_ntb_epc_cleanup(struct epf_ntb *ntb)
+ {
++ disable_delayed_work_sync(&ntb->cmd_handler);
+ epf_ntb_mw_bar_clear(ntb, ntb->num_mws);
+ epf_ntb_db_bar_clear(ntb);
+ epf_ntb_config_sspad_bar_clear(ntb);
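The ordering requirement here, sketched with pthreads (a stop flag plus join standing in for `disable_delayed_work_sync()`; the names are illustrative): the worker must be stopped and waited for *before* the state it touches is freed, so the handler can never run against torn-down resources.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct ctx {
	atomic_int stop;
	atomic_int ticks;
	int *resource;		/* state that cleanup will free */
};

/* Stand-in for epf_ntb_cmd_handler(): runs until told to stop. */
static void *cmd_handler(void *arg)
{
	struct ctx *c = arg;

	while (!atomic_load(&c->stop)) {
		*c->resource += 1;	/* touches soon-to-be-freed state */
		atomic_fetch_add(&c->ticks, 1);
	}
	return NULL;
}

/* Correct teardown order: stop the handler, then free what it used. */
static int cleanup(struct ctx *c, pthread_t worker)
{
	atomic_store(&c->stop, 1);	/* disable_delayed_work_sync() */
	pthread_join(worker, NULL);	/* ...including the _sync part */
	free(c->resource);		/* now safe: no concurrent user */
	c->resource = NULL;
	return atomic_load(&c->ticks);
}
```

Reversing the two halves of `cleanup()` reproduces the bug class in the oops above: the worker would dereference freed memory.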
--- /dev/null
+From 8545d9bc4bd0801e0bdfbfdfdc2532ff31236ddf Mon Sep 17 00:00:00 2001
+From: Harry Wentland <harry.wentland@amd.com>
+Date: Fri, 27 Mar 2026 11:41:57 -0400
+Subject: scripts/checkpatch: add Assisted-by: tag validation
+
+From: Harry Wentland <harry.wentland@amd.com>
+
+commit 8545d9bc4bd0801e0bdfbfdfdc2532ff31236ddf upstream.
+
+The coding-assistants.rst documentation defines the Assisted-by: tag
+format for AI-assisted contributions as:
+
+ Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]
+
+This format does not use an email address, so checkpatch currently
+reports a false positive about an invalid email when encountering this
+tag.
+
+Add Assisted-by: to the recognized signature tags and standard signature
+list. When an Assisted-by: tag is found, validate it instead of checking
+for an email address.
+
+Examples of passing tags:
+- Claude:claude-3-opus coccinelle sparse
+- FOO:BAR.baz
+- Copilot Github:claude-3-opus
+- GitHub Copilot:Claude Opus 4.6
+- My Cool Agent:v1.2.3 coccinelle sparse
+
+Examples of tags triggering the new warning:
+- Claude coccinelle sparse
+- JustAName
+- :missing-agent
+
+Cc: Jani Nikula <jani.nikula@linux.intel.com>
+Assisted-by: Claude:claude-opus-4.6
+Co-developed-by: Alex Hung <alex.hung@amd.com>
+Signed-off-by: Alex Hung <alex.hung@amd.com>
+Signed-off-by: Harry Wentland <harry.wentland@amd.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Jonathan Corbet <corbet@lwn.net>
+Message-ID: <20260327154157.162962-1-harry.wentland@amd.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ scripts/checkpatch.pl | 12 +++++++++++-
+ 1 file changed, 11 insertions(+), 1 deletion(-)
+
+--- a/scripts/checkpatch.pl
++++ b/scripts/checkpatch.pl
+@@ -641,6 +641,7 @@ our $signature_tags = qr{(?xi:
+ Reviewed-by:|
+ Reported-by:|
+ Suggested-by:|
++ Assisted-by:|
+ To:|
+ Cc:
+ )};
+@@ -737,7 +738,7 @@ sub find_standard_signature {
+ my ($sign_off) = @_;
+ my @standard_signature_tags = (
+ 'Signed-off-by:', 'Co-developed-by:', 'Acked-by:', 'Tested-by:',
+- 'Reviewed-by:', 'Reported-by:', 'Suggested-by:'
++ 'Reviewed-by:', 'Reported-by:', 'Suggested-by:', 'Assisted-by:'
+ );
+ foreach my $signature (@standard_signature_tags) {
+ return $signature if (get_edit_distance($sign_off, $signature) <= 2);
+@@ -3087,6 +3088,15 @@ sub process {
+ }
+ }
+
++# Assisted-by: uses format AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2] instead of email
++ if ($sign_off =~ /^assisted-by:$/i) {
++ if ($email !~ /^[^:]+:\S+(\s+\S+)*$/) {
++ WARN("BAD_ASSISTED_BY",
++ "Assisted-by: should use format: 'Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]'\n" . $herecurr);
++ }
++ next;
++ }
++
+ my ($email_name, $name_comment, $email_address, $comment) = parse_email($email);
+ my $suggested_email = format_email(($email_name, $name_comment, $email_address, $comment));
+ if ($suggested_email eq "") {
--- /dev/null
+From 9b4744d8eda2824041064a5639ccbb079850914d Mon Sep 17 00:00:00 2001
+From: Tamir Duberstein <tamird@kernel.org>
+Date: Tue, 27 Jan 2026 11:35:43 -0500
+Subject: scripts: generate_rust_analyzer.py: avoid FD leak
+
+From: Tamir Duberstein <tamird@kernel.org>
+
+commit 9b4744d8eda2824041064a5639ccbb079850914d upstream.
+
+Use `pathlib.Path.read_text()` to avoid leaking file descriptors.
+
+Fixes: 8c4555ccc55c ("scripts: add `generate_rust_analyzer.py`")
+Cc: stable@vger.kernel.org
+Reviewed-by: Daniel Almeida <daniel.almeida@collabora.com>
+Reviewed-by: Fiona Behrens <me@kloenk.dev>
+Reviewed-by: Trevor Gross <tmgross@umich.edu>
+Link: https://patch.msgid.link/20260127-rust-analyzer-fd-leak-v2-1-1bb55b9b6822@kernel.org
+Signed-off-by: Tamir Duberstein <tamird@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ scripts/generate_rust_analyzer.py | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -168,9 +168,10 @@ def generate_crates(srctree, objtree, sy
+
+ def is_root_crate(build_file, target):
+ try:
+- return f"{target}.o" in open(build_file).read()
++ contents = build_file.read_text()
+ except FileNotFoundError:
+ return False
++ return f"{target}.o" in contents
+
+ # Then, the rest outside of `rust/`.
+ #
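The fix matters because `open(build_file).read()` never explicitly closes the file; the descriptor lingers until the garbage collector finalizes the file object. `Path.read_text()` opens, reads, and closes in one call. A small sketch of the fixed helper (the kbuild file contents here are invented for illustration):

```python
import tempfile
from pathlib import Path

def is_root_crate(build_file: Path, target: str) -> bool:
    try:
        # read_text() closes the descriptor before returning,
        # unlike a bare open(...).read().
        contents = build_file.read_text()
    except FileNotFoundError:
        return False
    return f"{target}.o" in contents

# Illustrative use with a throwaway Makefile:
with tempfile.TemporaryDirectory() as d:
    makefile = Path(d) / "Makefile"
    makefile.write_text("obj-y += mymod.o\n")
    print(is_root_crate(makefile, "mymod"))             # True
    print(is_root_crate(Path(d) / "missing", "mymod"))  # False
```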
--- /dev/null
+From e6ad477d1bf8829973cddd9accbafa9d1a6cd15a Mon Sep 17 00:00:00 2001
+From: Paul Chaignon <paul.chaignon@gmail.com>
+Date: Fri, 27 Feb 2026 22:36:30 +0100
+Subject: selftests/bpf: Test refinement of single-value tnum
+
+From: Paul Chaignon <paul.chaignon@gmail.com>
+
+commit e6ad477d1bf8829973cddd9accbafa9d1a6cd15a upstream.
+
+This patch introduces selftests to cover the new bounds refinement
+logic introduced in the previous patch. Without the previous patch,
+the first two tests fail because of the invariant violation they
+trigger. The last test fails because the R10 access is not detected as
+dead code. In addition, all three tests fail because R0 has a
+non-constant value in the verifier logs.
+
+Finally, the last two tests cover the negative case: when we
+shouldn't refine the bounds because the u64 range and the tnum
+overlap in at least two values.
+
+Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
+Link: https://lore.kernel.org/r/90d880c8cf587b9f7dc715d8961cd1b8111d01a8.1772225741.git.paul.chaignon@gmail.com
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+[shung-hsi.yu: test for backported upstream commit efc11a667878 ("bpf: Improve
+bounds when tnum has a single possible value")]
+Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/progs/verifier_bounds.c | 137 ++++++++++++++++++++
+ 1 file changed, 137 insertions(+)
+
+--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
++++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
+@@ -1709,4 +1709,141 @@ __naked void jeq_disagreeing_tnums(void
+ : __clobber_all);
+ }
+
++/* This test covers the bounds deduction when the u64 range and the tnum
++ * overlap only at umax. After instruction 3, the ranges look as follows:
++ *
++ * 0 umin=0xe1 umax=0xf0 U64_MAX
++ * | [xxxxxxxxxxxxxx] |
++ * |----------------------------|------------------------------|
++ * | x x | tnum values
++ *
++ * The verifier can therefore deduce that R0=0xf0=240.
++ */
++SEC("socket")
++__description("bounds refinement with single-value tnum on umax")
++__msg("3: (15) if r0 == 0xe0 {{.*}} R0=240")
++__success __log_level(2)
++__flag(BPF_F_TEST_REG_INVARIANTS)
++__naked void bounds_refinement_tnum_umax(void *ctx)
++{
++ asm volatile(" \
++ call %[bpf_get_prandom_u32]; \
++ r0 |= 0xe0; \
++ r0 &= 0xf0; \
++ if r0 == 0xe0 goto +2; \
++ if r0 == 0xf0 goto +1; \
++ r10 = 0; \
++ exit; \
++" :
++ : __imm(bpf_get_prandom_u32)
++ : __clobber_all);
++}
++
++/* This test covers the bounds deduction when the u64 range and the tnum
++ * overlap only at umin. After instruction 3, the ranges look as follows:
++ *
++ * 0 umin=0xe0 umax=0xef U64_MAX
++ * | [xxxxxxxxxxxxxx] |
++ * |----------------------------|------------------------------|
++ * | x x | tnum values
++ *
++ * The verifier can therefore deduce that R0=0xe0=224.
++ */
++SEC("socket")
++__description("bounds refinement with single-value tnum on umin")
++__msg("3: (15) if r0 == 0xf0 {{.*}} R0=224")
++__success __log_level(2)
++__flag(BPF_F_TEST_REG_INVARIANTS)
++__naked void bounds_refinement_tnum_umin(void *ctx)
++{
++ asm volatile(" \
++ call %[bpf_get_prandom_u32]; \
++ r0 |= 0xe0; \
++ r0 &= 0xf0; \
++ if r0 == 0xf0 goto +2; \
++ if r0 == 0xe0 goto +1; \
++ r10 = 0; \
++ exit; \
++" :
++ : __imm(bpf_get_prandom_u32)
++ : __clobber_all);
++}
++
++/* This test covers the bounds deduction when the only possible tnum value is
++ * in the middle of the u64 range. After instruction 3, the ranges look as
++ * follows:
++ *
++ * 0 umin=0x7cf umax=0x7df U64_MAX
++ * | [xxxxxxxxxxxx] |
++ * |----------------------------|------------------------------|
++ * | x x x x x | tnum values
++ * | +--- 0x7e0
++ * +--- 0x7d0
++ *
++ * Since the lower four bits are zero, the tnum and the u64 range only overlap
++ * in R0=0x7d0=2000. Instruction 5 is therefore dead code.
++ */
++SEC("socket")
++__description("bounds refinement with single-value tnum in middle of range")
++__msg("3: (a5) if r0 < 0x7cf {{.*}} R0=2000")
++__success __log_level(2)
++__naked void bounds_refinement_tnum_middle(void *ctx)
++{
++ asm volatile(" \
++ call %[bpf_get_prandom_u32]; \
++ if r0 & 0x0f goto +4; \
++ if r0 > 0x7df goto +3; \
++ if r0 < 0x7cf goto +2; \
++ if r0 == 0x7d0 goto +1; \
++ r10 = 0; \
++ exit; \
++" :
++ : __imm(bpf_get_prandom_u32)
++ : __clobber_all);
++}
++
++/* This test covers the negative case for the tnum/u64 overlap. Since
++ * they contain the same two values (i.e., {0, 1}), we can't deduce
++ * anything more.
++ */
++SEC("socket")
++__description("bounds refinement: several overlaps between tnum and u64")
++__msg("2: (25) if r0 > 0x1 {{.*}} R0=scalar(smin=smin32=0,smax=umax=smax32=umax32=1,var_off=(0x0; 0x1))")
++__failure __log_level(2)
++__naked void bounds_refinement_several_overlaps(void *ctx)
++{
++ asm volatile(" \
++ call %[bpf_get_prandom_u32]; \
++ if r0 < 0 goto +3; \
++ if r0 > 1 goto +2; \
++ if r0 == 1 goto +1; \
++ r10 = 0; \
++ exit; \
++" :
++ : __imm(bpf_get_prandom_u32)
++ : __clobber_all);
++}
++
++/* This test covers the negative case for the tnum/u64 overlap. Since
++ * they overlap in the two values contained by the u64 range (i.e.,
++ * {0xf, 0x10}), we can't deduce anything more.
++ */
++SEC("socket")
++__description("bounds refinement: multiple overlaps between tnum and u64")
++__msg("2: (25) if r0 > 0x10 {{.*}} R0=scalar(smin=umin=smin32=umin32=15,smax=umax=smax32=umax32=16,var_off=(0x0; 0x1f))")
++__failure __log_level(2)
++__naked void bounds_refinement_multiple_overlaps(void *ctx)
++{
++ asm volatile(" \
++ call %[bpf_get_prandom_u32]; \
++ if r0 < 0xf goto +3; \
++ if r0 > 0x10 goto +2; \
++ if r0 == 0x10 goto +1; \
++ r10 = 0; \
++ exit; \
++" :
++ : __imm(bpf_get_prandom_u32)
++ : __clobber_all);
++}
++
+ char _license[] SEC("license") = "GPL";
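The refinement these tests exercise can be modeled outside the verifier. A tnum is a (value, mask) pair: bits set in mask are unknown, the rest are fixed by value. When exactly one of the tnum's possible values lands inside [umin, umax], the register is provably constant. A toy Python model (enumeration only works for small masks; the verifier refines algebraically rather than enumerating):

```python
def tnum_values(value: int, mask: int):
    """Yield every concrete value a tnum (value, mask) can take:
    fixed bits come from value, mask bits range over all combinations."""
    sub = mask
    while True:
        yield value | sub
        if sub == 0:
            break
        sub = (sub - 1) & mask  # next submask of mask

def refine_constant(value: int, mask: int, umin: int, umax: int):
    """Return the single possible value if the tnum and the u64 range
    [umin, umax] overlap at exactly one point, else None."""
    hits = [v for v in tnum_values(value, mask) if umin <= v <= umax]
    return hits[0] if len(hits) == 1 else None

# After "r0 |= 0xe0; r0 &= 0xf0" the tnum is (value=0xe0, mask=0x10),
# i.e. possible values {0xe0, 0xf0}. Once "r0 == 0xe0" is ruled out,
# umin becomes 0xe1 and only 0xf0 survives, so R0 is the constant 240:
print(refine_constant(0xe0, 0x10, 0xe1, 2**64 - 1))  # 240
# Both tnum values inside the range: the negative case, no refinement:
print(refine_constant(0xe0, 0x10, 0, 2**64 - 1))     # None
```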
usb-cdc-acm-add-quirks-for-yoga-book-9-14iah10-ingenic-touchscreen.patch
usb-gadget-f_hid-don-t-call-cdev_init-while-cdev-in-use.patch
usb-port-add-delay-after-usb_hub_set_port_power.patch
+fbdev-udlfb-avoid-divide-by-zero-on-fbioput_vscreeninfo.patch
+scripts-checkpatch-add-assisted-by-tag-validation.patch
+scripts-generate_rust_analyzer.py-avoid-fd-leak.patch
+wifi-rtw88-fix-device-leak-on-probe-failure.patch
+staging-sm750fb-fix-division-by-zero-in-ps_to_hz.patch
+usb-serial-option-add-telit-cinterion-fn990a-mbim-composition.patch
+docs-admin-guide-mm-damon-reclaim-warn-commit_inputs-vs-param-updates-race.patch
+alsa-ctxfi-limit-ptp-to-a-single-page.patch
+dcache-limit-the-minimal-number-of-bucket-to-two.patch
+arm64-mm-handle-invalid-large-leaf-mappings-correctly.patch
+media-vidtv-fix-null-pointer-dereference-in-vidtv_channel_pmt_match_sections.patch
+ocfs2-fix-possible-deadlock-between-unlink-and-dio_end_io_write.patch
+ocfs2-fix-use-after-free-in-ocfs2_fault-when-vm_fault_retry.patch
+ocfs2-handle-invalid-dinode-in-ocfs2_group_extend.patch
+pci-endpoint-pci-epf-vntb-stop-cmd_handler-work-in-epf_ntb_epc_cleanup.patch
+pci-endpoint-pci-epf-vntb-remove-duplicate-resource-teardown.patch
+kvm-selftests-remove-duplicate-launch_update_vmsa-call-in-sev-es-migrate-test.patch
+kvm-sev-reject-attempts-to-sync-vmsa-of-an-already-launched-encrypted-vcpu.patch
+kvm-sev-protect-all-of-sev_mem_enc_register_region-with-kvm-lock.patch
+kvm-sev-disallow-launch_finish-if-vcpus-are-actively-being-created.patch
+kvm-sev-lock-all-vcpus-when-synchronzing-vmsas-for-snp-launch-finish.patch
+kvm-sev-drop-warn-on-large-size-for-kvm_memory_encrypt_reg_region.patch
+selftests-bpf-test-refinement-of-single-value-tnum.patch
+kvm-remove-subtle-struct-kvm_stats_desc-pseudo-overlay.patch
+kvm-x86-use-__declare_flex_array-for-uapi-structures-with-vlas.patch
+ocfs2-add-inline-inode-consistency-check-to-ocfs2_validate_inode_block.patch
+ocfs2-validate-inline-data-i_size-during-inode-read.patch
+ocfs2-fix-out-of-bounds-write-in-ocfs2_write_end_inline.patch
+netfilter-conntrack-add-missing-netlink-policy-validations.patch
--- /dev/null
+From 75a1621e4f91310673c9acbcbb25c2a7ff821cd3 Mon Sep 17 00:00:00 2001
+From: Junrui Luo <moonafterrain@outlook.com>
+Date: Mon, 23 Mar 2026 15:31:56 +0800
+Subject: staging: sm750fb: fix division by zero in ps_to_hz()
+
+From: Junrui Luo <moonafterrain@outlook.com>
+
+commit 75a1621e4f91310673c9acbcbb25c2a7ff821cd3 upstream.
+
+ps_to_hz() is called from hw_sm750_crtc_set_mode() without validating
+that pixclock is non-zero. A zero pixclock passed via FBIOPUT_VSCREENINFO
+causes a division by zero.
+
+Fix by rejecting zero pixclock in lynxfb_ops_check_var(), consistent
+with other framebuffer drivers.
+
+Fixes: 81dee67e215b ("staging: sm750fb: add sm750 to staging")
+Reported-by: Yuhao Jiang <danisjiang@gmail.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Junrui Luo <moonafterrain@outlook.com>
+Link: https://patch.msgid.link/SYBPR01MB7881AFBFCE28CCF528B35D0CAF4BA@SYBPR01MB7881.ausprd01.prod.outlook.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/staging/sm750fb/sm750.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/drivers/staging/sm750fb/sm750.c
++++ b/drivers/staging/sm750fb/sm750.c
+@@ -481,6 +481,9 @@ static int lynxfb_ops_check_var(struct f
+ struct lynxfb_crtc *crtc;
+ resource_size_t request;
+
++ if (!var->pixclock)
++ return -EINVAL;
++
+ ret = 0;
+ par = info->par;
+ crtc = &par->crtc;
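The underlying arithmetic: `pixclock` in `fb_var_screeninfo` is a pixel period in picoseconds, and the driver's `ps_to_hz()` divides by it, so a zero value faults. Rejecting it in `check_var` is enough because userspace settings pass through `check_var` before the mode is programmed. A minimal Python sketch of the guarded path (mirroring the driver's names, not its exact code):

```python
PS_PER_SECOND = 10**12  # pixclock is a period in picoseconds

def ps_to_hz(ps: int) -> int:
    # Mirrors the driver's ps_to_hz(): period (ps) -> frequency (Hz).
    return PS_PER_SECOND // ps

def check_var(pixclock: int) -> int:
    # Like lynxfb_ops_check_var() after the fix: reject a zero
    # pixclock before it can ever reach ps_to_hz().
    EINVAL = 22
    if not pixclock:
        return -EINVAL
    return 0

print(ps_to_hz(39722))  # about 25.17 MHz, a typical 800x600@60 pixclock
print(check_var(0))     # -22 (-EINVAL), as the fixed driver now returns
```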
--- /dev/null
+From f8cc59ecc22841be5deb07b549c0c6a2657cd5f9 Mon Sep 17 00:00:00 2001
+From: Fabio Porcedda <fabio.porcedda@gmail.com>
+Date: Thu, 2 Apr 2026 11:57:27 +0200
+Subject: USB: serial: option: add Telit Cinterion FN990A MBIM composition
+
+From: Fabio Porcedda <fabio.porcedda@gmail.com>
+
+commit f8cc59ecc22841be5deb07b549c0c6a2657cd5f9 upstream.
+
+Add the following Telit Cinterion FN990A MBIM composition:
+
+0x1074: MBIM + tty (AT/NMEA) + tty (AT) + tty (AT) + tty (diag) +
+ DPL (Data Packet Logging) + adb
+
+T: Bus=01 Lev=01 Prnt=04 Port=06 Cnt=01 Dev#= 7 Spd=480 MxCh= 0
+D: Ver= 2.10 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
+P: Vendor=1bc7 ProdID=1074 Rev=05.04
+S: Manufacturer=Telit Wireless Solutions
+S: Product=FN990
+S: SerialNumber=70628d0c
+C: #Ifs= 8 Cfg#= 1 Atr=e0 MxPwr=500mA
+I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=0e Prot=00 Driver=cdc_mbim
+E: Ad=81(I) Atr=03(Int.) MxPS= 64 Ivl=32ms
+I: If#= 1 Alt= 1 #EPs= 2 Cls=0a(data ) Sub=00 Prot=02 Driver=cdc_mbim
+E: Ad=0f(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=8e(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+I: If#= 2 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=60 Driver=option
+E: Ad=01(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=82(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=83(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
+I: If#= 3 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option
+E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=84(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=85(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
+I: If#= 4 Alt= 0 #EPs= 3 Cls=ff(vend.) Sub=ff Prot=40 Driver=option
+E: Ad=03(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=86(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=87(I) Atr=03(Int.) MxPS= 10 Ivl=32ms
+I: If#= 5 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=ff Prot=30 Driver=option
+E: Ad=04(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=88(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+I: If#= 6 Alt= 0 #EPs= 1 Cls=ff(vend.) Sub=ff Prot=80 Driver=(none)
+E: Ad=8f(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+I: If#= 7 Alt= 0 #EPs= 2 Cls=ff(vend.) Sub=42 Prot=01 Driver=(none)
+E: Ad=05(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+E: Ad=89(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Fabio Porcedda <fabio.porcedda@gmail.com>
+Signed-off-by: Johan Hovold <johan@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/serial/option.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1383,6 +1383,8 @@ static const struct usb_device_id option
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990A (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1074, 0xff), /* Telit FN990A (MBIM) */
++ .driver_info = NCTRL(5) | RSVD(6) | RSVD(7) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */
+ .driver_info = RSVD(0) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1077, 0xff), /* Telit FN990A (rmnet + audio) */
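In option.c, `RSVD(n)` and `NCTRL(n)` are per-interface bitmasks: `RSVD` tells the driver to skip binding interface n, and `NCTRL` marks interface n as lacking modem-control lines. A sketch of that bitmask logic in Python (the macros' exact encoding in the driver differs; only the per-interface-bit idea is shown):

```python
def BIT(n: int) -> int:
    return 1 << n

def mask(*ifnums: int) -> int:
    # Combine per-interface flags the way RSVD(6) | RSVD(7) does.
    m = 0
    for n in ifnums:
        m |= BIT(n)
    return m

# New 0x1074 entry: NCTRL(5) | RSVD(6) | RSVD(7)
reserved = mask(6, 7)   # DPL (if 6) and adb (if 7): option must not bind
no_mctrl = mask(5)      # diag tty (if 5): no modem-control signalling

def option_binds(ifnum: int) -> bool:
    # The match itself is by vendor class 0xff, so the MBIM pair
    # (interfaces 0 and 1, class 02h/0Ah) never reaches this check.
    return not (reserved & BIT(ifnum))

print([i for i in range(2, 8) if option_binds(i)])  # [2, 3, 4, 5]
```

That leaves interfaces 2-5 as ttys, matching the four tty functions listed in the composition above.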
--- /dev/null
+From bbb15e71156cd9f5e1869eee7207a06ea8e96c39 Mon Sep 17 00:00:00 2001
+From: Johan Hovold <johan@kernel.org>
+Date: Fri, 6 Mar 2026 09:51:44 +0100
+Subject: wifi: rtw88: fix device leak on probe failure
+
+From: Johan Hovold <johan@kernel.org>
+
+commit bbb15e71156cd9f5e1869eee7207a06ea8e96c39 upstream.
+
+Driver core holds a reference to the USB interface and its parent USB
+device while the interface is bound to a driver and there is no need to
+take additional references unless the structures are needed after
+disconnect.
+
+This driver takes a reference to the USB device during probe but does
+not release it on all probe error paths (e.g. when descriptor parsing
+fails).
+
+Drop the redundant device reference to fix the leak, reduce cargo
+culting, make it easier to spot drivers where an extra reference is
+needed, and reduce the risk of further memory leaks.
+
+Fixes: a82dfd33d123 ("wifi: rtw88: Add common USB chip support")
+Reported-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Link: https://lore.kernel.org/netdev/2026022319-turbofan-darkened-206d@gregkh/
+Cc: stable@vger.kernel.org # 6.2
+Cc: Sascha Hauer <s.hauer@pengutronix.de>
+Signed-off-by: Johan Hovold <johan@kernel.org>
+Signed-off-by: Ping-Ke Shih <pkshih@realtek.com>
+Link: https://patch.msgid.link/20260306085144.12064-19-johan@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/realtek/rtw88/usb.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -1040,7 +1040,7 @@ static int rtw_usb_intf_init(struct rtw_
+ struct usb_interface *intf)
+ {
+ struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+- struct usb_device *udev = usb_get_dev(interface_to_usbdev(intf));
++ struct usb_device *udev = interface_to_usbdev(intf);
+ int ret;
+
+ rtwusb->udev = udev;
+@@ -1066,7 +1066,6 @@ static void rtw_usb_intf_deinit(struct r
+ {
+ struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+
+- usb_put_dev(rtwusb->udev);
+ kfree(rtwusb->usb_data);
+ usb_set_intfdata(intf, NULL);
+ }