--- /dev/null
+From stable+bounces-225616-greg=kroah.com@vger.kernel.org Mon Mar 16 17:43:27 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 12:38:08 -0400
+Subject: ALSA: pcm: fix use-after-free on linked stream runtime in snd_pcm_drain()
+To: stable@vger.kernel.org
+Cc: Mehul Rao <mehulrao@gmail.com>, Takashi Iwai <tiwai@suse.de>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316163808.925386-2-sashal@kernel.org>
+
+From: Mehul Rao <mehulrao@gmail.com>
+
+[ Upstream commit 9b1dbd69ba6f8f8c69bc7b77c2ce3b9c6ed05ba6 ]
+
+In the drain loop, the local variable 'runtime' is reassigned to a
+linked stream's runtime (runtime = s->runtime at line 2157). After
+releasing the stream lock at line 2169, the code accesses
+runtime->no_period_wakeup, runtime->rate, and runtime->buffer_size
+(lines 2170-2178) — all referencing the linked stream's runtime without
+any lock or refcount protecting its lifetime.
+
+A concurrent close() on the linked stream's fd triggers
+snd_pcm_release_substream() → snd_pcm_drop() → pcm_release_private()
+→ snd_pcm_unlink() → snd_pcm_detach_substream() → kfree(runtime).
+No synchronization prevents kfree(runtime) from completing while the
+drain path dereferences the stale pointer.
+
+Fix by caching the needed runtime fields (no_period_wakeup, rate,
+buffer_size) into local variables while still holding the stream lock,
+and using the cached values after the lock is released.
+
+Fixes: f2b3614cefb6 ("ALSA: PCM - Don't check DMA time-out too shortly")
+Cc: stable@vger.kernel.org
+Signed-off-by: Mehul Rao <mehulrao@gmail.com>
+Link: https://patch.msgid.link/20260305193508.311096-1-mehulrao@gmail.com
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ sound/core/pcm_native.c | 19 ++++++++++++++++---
+ 1 file changed, 16 insertions(+), 3 deletions(-)
+
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -2146,6 +2146,10 @@ static int snd_pcm_drain(struct snd_pcm_
+ for (;;) {
+ long tout;
+ struct snd_pcm_runtime *to_check;
++ unsigned int drain_rate;
++ snd_pcm_uframes_t drain_bufsz;
++ bool drain_no_period_wakeup;
++
+ if (signal_pending(current)) {
+ result = -ERESTARTSYS;
+ break;
+@@ -2165,16 +2169,25 @@ static int snd_pcm_drain(struct snd_pcm_
+ snd_pcm_group_unref(group, substream);
+ if (!to_check)
+ break; /* all drained */
++ /*
++ * Cache the runtime fields needed after unlock.
++ * A concurrent close() on the linked stream may free
++ * its runtime via snd_pcm_detach_substream() once we
++ * release the stream lock below.
++ */
++ drain_no_period_wakeup = to_check->no_period_wakeup;
++ drain_rate = to_check->rate;
++ drain_bufsz = to_check->buffer_size;
+ init_waitqueue_entry(&wait, current);
+ set_current_state(TASK_INTERRUPTIBLE);
+ add_wait_queue(&to_check->sleep, &wait);
+ snd_pcm_stream_unlock_irq(substream);
+- if (runtime->no_period_wakeup)
++ if (drain_no_period_wakeup)
+ tout = MAX_SCHEDULE_TIMEOUT;
+ else {
+ tout = 100;
+- if (runtime->rate) {
+- long t = runtime->buffer_size * 1100 / runtime->rate;
++ if (drain_rate) {
++ long t = drain_bufsz * 1100 / drain_rate;
+ tout = max(t, tout);
+ }
+ tout = msecs_to_jiffies(tout);
--- /dev/null
+From stable+bounces-225615-greg=kroah.com@vger.kernel.org Mon Mar 16 17:43:22 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 12:38:07 -0400
+Subject: ALSA: pcm: fix wait_time calculations
+To: stable@vger.kernel.org
+Cc: Oswald Buddenhagen <oswald.buddenhagen@gmx.de>, Takashi Iwai <tiwai@suse.de>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316163808.925386-1-sashal@kernel.org>
+
+From: Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
+
+[ Upstream commit 3ed2b549b39f57239aad50a255ece353997183fd ]
+
+... in wait_for_avail() and snd_pcm_drain().
+
+t was calculated in seconds, so it would be pretty much always zero, to
+be subsequently de-facto ignored due to being max(t, 10)'d. And then it
+(i.e., 10) would be treated as secs, which doesn't seem right.
+
+However, fixing it to properly calculate msecs would potentially cause
+timeouts when using twice the period size for the default timeout (which
+seems reasonable to me), so instead use the buffer size plus 10 percent
+to be on the safe side ... but that still seems insufficient, presumably
+because the hardware typically needs a moment to fire up. To compensate
+for this, we up the minimal timeout to 100ms, which is still two orders
+of magnitude less than the bogus minimum.
+
+substream->wait_time was also misinterpreted as jiffies, despite being
+documented as being in msecs. Only the soc/sof driver sets it - to 500,
+which looks very much like msecs were intended.
+
+Speaking of which, shouldn't snd_pcm_drain() also use
+substream->wait_time?
+
+As a drive-by, make the debug messages on timeout less confusing.
+
+Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de>
+Link: https://lore.kernel.org/r/20230405201219.2197774-1-oswald.buddenhagen@gmx.de
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Stable-dep-of: 9b1dbd69ba6f ("ALSA: pcm: fix use-after-free on linked stream runtime in snd_pcm_drain()")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ sound/core/pcm_lib.c | 11 +++++------
+ sound/core/pcm_native.c | 8 ++++----
+ 2 files changed, 9 insertions(+), 10 deletions(-)
+
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -1878,15 +1878,14 @@ static int wait_for_avail(struct snd_pcm
+ if (substream->wait_time) {
+ wait_time = substream->wait_time;
+ } else {
+- wait_time = 10;
++ wait_time = 100;
+
+ if (runtime->rate) {
+- long t = runtime->period_size * 2 /
+- runtime->rate;
++ long t = runtime->buffer_size * 1100 / runtime->rate;
+ wait_time = max(t, wait_time);
+ }
+- wait_time = msecs_to_jiffies(wait_time * 1000);
+ }
++ wait_time = msecs_to_jiffies(wait_time);
+ }
+
+ for (;;) {
+@@ -1934,8 +1933,8 @@ static int wait_for_avail(struct snd_pcm
+ }
+ if (!tout) {
+ pcm_dbg(substream->pcm,
+- "%s write error (DMA or IRQ trouble?)\n",
+- is_playback ? "playback" : "capture");
++ "%s timeout (DMA or IRQ trouble?)\n",
++ is_playback ? "playback write" : "capture read");
+ err = -EIO;
+ break;
+ }
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -2172,12 +2172,12 @@ static int snd_pcm_drain(struct snd_pcm_
+ if (runtime->no_period_wakeup)
+ tout = MAX_SCHEDULE_TIMEOUT;
+ else {
+- tout = 10;
++ tout = 100;
+ if (runtime->rate) {
+- long t = runtime->period_size * 2 / runtime->rate;
++ long t = runtime->buffer_size * 1100 / runtime->rate;
+ tout = max(t, tout);
+ }
+- tout = msecs_to_jiffies(tout * 1000);
++ tout = msecs_to_jiffies(tout);
+ }
+ tout = schedule_timeout(tout);
+
+@@ -2200,7 +2200,7 @@ static int snd_pcm_drain(struct snd_pcm_
+ result = -ESTRPIPE;
+ else {
+ dev_dbg(substream->pcm->card->dev,
+- "playback drain error (DMA or IRQ trouble?)\n");
++ "playback drain timeout (DMA or IRQ trouble?)\n");
+ snd_pcm_stop(substream, SNDRV_PCM_STATE_SETUP);
+ result = -EIO;
+ }
--- /dev/null
+From stable+bounces-227036-greg=kroah.com@vger.kernel.org Wed Mar 18 12:52:33 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 07:51:46 -0400
+Subject: arm64: mm: Add PTE_DIRTY back to PAGE_KERNEL* to fix kexec/hibernation
+To: stable@vger.kernel.org
+Cc: Catalin Marinas <catalin.marinas@arm.com>, Jianpeng Chang <jianpeng.chang.cn@windriver.com>, Will Deacon <will@kernel.org>, "Huang, Ying" <ying.huang@linux.alibaba.com>, Guenter Roeck <linux@roeck-us.net>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318115146.638253-2-sashal@kernel.org>
+
+From: Catalin Marinas <catalin.marinas@arm.com>
+
+[ Upstream commit c25c4aa3f79a488cc270507935a29c07dc6bddfc ]
+
+Commit 143937ca51cc ("arm64, mm: avoid always making PTE dirty in
+pte_mkwrite()") changed pte_mkwrite_novma() to only clear PTE_RDONLY
+when PTE_DIRTY is set. This was to allow writable-clean PTEs for swap
+pages that haven't actually been written.
+
+However, this broke kexec and hibernation for some platforms. Both go
+through trans_pgd_create_copy() -> _copy_pte(), which calls
+pte_mkwrite_novma() to make the temporary linear-map copy fully
+writable. With the updated pte_mkwrite_novma(), read-only kernel pages
+(without PTE_DIRTY) remain read-only in the temporary mapping.
+While such behaviour is fine for user pages where hardware DBM or
+trapping will make them writeable, subsequent in-kernel writes by the
+kexec relocation code will fault.
+
+Add PTE_DIRTY back to all _PAGE_KERNEL* protection definitions. This was
+the case prior to 5.4, commit aa57157be69f ("arm64: Ensure
+VM_WRITE|VM_SHARED ptes are clean by default"). With the kernel
+linear-map PTEs always having PTE_DIRTY set, pte_mkwrite_novma()
+correctly clears PTE_RDONLY.
+
+Fixes: 143937ca51cc ("arm64, mm: avoid always making PTE dirty in pte_mkwrite()")
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Cc: stable@vger.kernel.org
+Reported-by: Jianpeng Chang <jianpeng.chang.cn@windriver.com>
+Link: https://lore.kernel.org/r/20251204062722.3367201-1-jianpeng.chang.cn@windriver.com
+Cc: Will Deacon <will@kernel.org>
+Cc: Huang, Ying <ying.huang@linux.alibaba.com>
+Cc: Guenter Roeck <linux@roeck-us.net>
+Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/include/asm/pgtable-prot.h | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -45,11 +45,11 @@
+
+ #define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+
+-#define _PAGE_KERNEL (PROT_NORMAL)
+-#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
+-#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
+-#define _PAGE_KERNEL_EXEC (PROT_NORMAL & ~PTE_PXN)
+-#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
++#define _PAGE_KERNEL (PROT_NORMAL | PTE_DIRTY)
++#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY | PTE_DIRTY)
++#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY | PTE_DIRTY)
++#define _PAGE_KERNEL_EXEC ((PROT_NORMAL & ~PTE_PXN) | PTE_DIRTY)
++#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT | PTE_DIRTY)
+
+ #define _PAGE_SHARED (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
+ #define _PAGE_SHARED_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
--- /dev/null
+From stable+bounces-227035-greg=kroah.com@vger.kernel.org Wed Mar 18 12:51:52 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 07:51:45 -0400
+Subject: arm64: reorganise PAGE_/PROT_ macros
+To: stable@vger.kernel.org
+Cc: Joey Gouly <joey.gouly@arm.com>, Will Deacon <will@kernel.org>, Mark Rutland <mark.rutland@arm.com>, Catalin Marinas <catalin.marinas@arm.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318115146.638253-1-sashal@kernel.org>
+
+From: Joey Gouly <joey.gouly@arm.com>
+
+[ Upstream commit fa4cdccaa58224a12591f2c045c24abc5251bb9d ]
+
+Make these macros available to assembly code, so they can be re-used by the
+PIE initialisation code.
+
+This involves adding some extra macros, prepended with _ that are the raw
+values not `pgprot` values.
+
+A dummy value for PTE_MAYBE_NG is also provided, for use in assembly.
+
+Signed-off-by: Joey Gouly <joey.gouly@arm.com>
+Cc: Will Deacon <will@kernel.org>
+Cc: Mark Rutland <mark.rutland@arm.com>
+Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
+Link: https://lore.kernel.org/r/20230606145859.697944-14-joey.gouly@arm.com
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Stable-dep-of: c25c4aa3f79a ("arm64: mm: Add PTE_DIRTY back to PAGE_KERNEL* to fix kexec/hibernation")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/include/asm/pgtable-prot.h | 72 ++++++++++++++++++++--------------
+ 1 file changed, 44 insertions(+), 28 deletions(-)
+
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -27,6 +27,40 @@
+ */
+ #define PMD_PRESENT_INVALID (_AT(pteval_t, 1) << 59) /* only when !PMD_SECT_VALID */
+
++#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
++#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
++
++#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
++#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
++
++#define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
++#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
++#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
++#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
++#define PROT_NORMAL_TAGGED (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_TAGGED))
++
++#define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
++#define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
++#define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
++
++#define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
++
++#define _PAGE_KERNEL (PROT_NORMAL)
++#define _PAGE_KERNEL_RO ((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
++#define _PAGE_KERNEL_ROX ((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
++#define _PAGE_KERNEL_EXEC (PROT_NORMAL & ~PTE_PXN)
++#define _PAGE_KERNEL_EXEC_CONT ((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
++
++#define _PAGE_SHARED (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
++#define _PAGE_SHARED_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
++#define _PAGE_READONLY (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
++#define _PAGE_READONLY_EXEC (_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
++#define _PAGE_EXECONLY (_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
++
++#ifdef __ASSEMBLY__
++#define PTE_MAYBE_NG 0
++#endif
++
+ #ifndef __ASSEMBLY__
+
+ #include <asm/cpufeature.h>
+@@ -34,9 +68,6 @@
+
+ extern bool arm64_use_ng_mappings;
+
+-#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+-#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+-
+ #define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0)
+ #define PMD_MAYBE_NG (arm64_use_ng_mappings ? PMD_SECT_NG : 0)
+
+@@ -50,26 +81,11 @@ extern bool arm64_use_ng_mappings;
+ #define PTE_MAYBE_GP 0
+ #endif
+
+-#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
+-#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+-
+-#define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
+-#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
+-#define PROT_NORMAL_NC (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_NC))
+-#define PROT_NORMAL (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL))
+-#define PROT_NORMAL_TAGGED (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_WRITE | PTE_ATTRINDX(MT_NORMAL_TAGGED))
+-
+-#define PROT_SECT_DEVICE_nGnRE (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_DEVICE_nGnRE))
+-#define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
+-#define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
+-
+-#define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
+-
+-#define PAGE_KERNEL __pgprot(PROT_NORMAL)
+-#define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
+-#define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
+-#define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
+-#define PAGE_KERNEL_EXEC_CONT __pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
++#define PAGE_KERNEL __pgprot(_PAGE_KERNEL)
++#define PAGE_KERNEL_RO __pgprot(_PAGE_KERNEL_RO)
++#define PAGE_KERNEL_ROX __pgprot(_PAGE_KERNEL_ROX)
++#define PAGE_KERNEL_EXEC __pgprot(_PAGE_KERNEL_EXEC)
++#define PAGE_KERNEL_EXEC_CONT __pgprot(_PAGE_KERNEL_EXEC_CONT)
+
+ #define PAGE_S2_MEMATTR(attr, has_fwb) \
+ ({ \
+@@ -83,11 +99,11 @@ extern bool arm64_use_ng_mappings;
+
+ #define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
+ /* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */
+-#define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
+-#define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_WRITE)
+-#define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
+-#define PAGE_READONLY_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN)
+-#define PAGE_EXECONLY __pgprot(_PAGE_DEFAULT | PTE_RDONLY | PTE_NG | PTE_PXN)
++#define PAGE_SHARED __pgprot(_PAGE_SHARED)
++#define PAGE_SHARED_EXEC __pgprot(_PAGE_SHARED_EXEC)
++#define PAGE_READONLY __pgprot(_PAGE_READONLY)
++#define PAGE_READONLY_EXEC __pgprot(_PAGE_READONLY_EXEC)
++#define PAGE_EXECONLY __pgprot(_PAGE_EXECONLY)
+
+ #endif /* __ASSEMBLY__ */
+
--- /dev/null
+From stable+bounces-226033-greg=kroah.com@vger.kernel.org Tue Mar 17 15:44:13 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 17 Mar 2026 10:29:24 -0400
+Subject: ASoC: qcom: qdsp6: Fix q6apm remove ordering during ADSP stop and start
+To: stable@vger.kernel.org
+Cc: Ravi Hothi <ravi.hothi@oss.qualcomm.com>, Srinivas Kandagatla <srinivas.kandagatla@oss.qualcomm.com>, Mark Brown <broonie@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260317142924.166668-1-sashal@kernel.org>
+
+From: Ravi Hothi <ravi.hothi@oss.qualcomm.com>
+
+[ Upstream commit d6db827b430bdcca3976cebca7bd69cca03cde2c ]
+
+During ADSP stop and start, the kernel crashes due to the order in which
+ASoC components are removed.
+
+On ADSP stop, the q6apm-audio .remove callback unloads topology and removes
+PCM runtimes during ASoC teardown. This deletes the RTDs that contain the
+q6apm DAI components before their removal pass runs, leaving those
+components still linked to the card and causing crashes on the next rebind.
+
+Fix this by ensuring that all dependent (child) components are removed
+first, and the q6apm component is removed last.
+
+[ 48.105720] Unable to handle kernel NULL pointer dereference at virtual address 00000000000000d0
+[ 48.114763] Mem abort info:
+[ 48.117650] ESR = 0x0000000096000004
+[ 48.121526] EC = 0x25: DABT (current EL), IL = 32 bits
+[ 48.127010] SET = 0, FnV = 0
+[ 48.130172] EA = 0, S1PTW = 0
+[ 48.133415] FSC = 0x04: level 0 translation fault
+[ 48.138446] Data abort info:
+[ 48.141422] ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
+[ 48.147079] CM = 0, WnR = 0, TnD = 0, TagAccess = 0
+[ 48.152354] GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
+[ 48.157859] user pgtable: 4k pages, 48-bit VAs, pgdp=00000001173cf000
+[ 48.164517] [00000000000000d0] pgd=0000000000000000, p4d=0000000000000000
+[ 48.171530] Internal error: Oops: 0000000096000004 [#1] SMP
+[ 48.177348] Modules linked in: q6prm_clocks q6apm_lpass_dais q6apm_dai snd_q6dsp_common q6prm snd_q6apm 8021q garp mrp stp llc snd_soc_hdmi_codec apr pdr_interface phy_qcom_edp fastrpc qcom_pd_mapper rpmsg_ctrl qrtr_smd rpmsg_char qcom_pdr_msg qcom_iris v4l2_mem2mem videobuf2_dma_contig ath11k_pci msm ubwc_config at24 ath11k videobuf2_memops mac80211 ocmem videobuf2_v4l2 libarc4 drm_gpuvm mhi qrtr videodev drm_exec snd_soc_sc8280xp gpu_sched videobuf2_common nvmem_qcom_spmi_sdam snd_soc_qcom_sdw drm_dp_aux_bus qcom_q6v5_pas qcom_spmi_temp_alarm snd_soc_qcom_common rtc_pm8xxx qcom_pon drm_display_helper cec qcom_pil_info qcom_stats soundwire_bus drm_client_lib mc dispcc0_sa8775p videocc_sa8775p qcom_q6v5 camcc_sa8775p snd_soc_dmic phy_qcom_sgmii_eth snd_soc_max98357a i2c_qcom_geni snd_soc_core dwmac_qcom_ethqos llcc_qcom icc_bwmon qcom_sysmon snd_compress qcom_refgen_regulator coresight_stm stmmac_platform snd_pcm_dmaengine qcom_common coresight_tmc stmmac coresight_replicator qcom_glink_smem coresight_cti stm_core
+[ 48.177444] coresight_funnel snd_pcm ufs_qcom phy_qcom_qmp_usb gpi phy_qcom_snps_femto_v2 coresight phy_qcom_qmp_ufs qcom_wdt gpucc_sa8775p pcs_xpcs mdt_loader qcom_ice icc_osm_l3 qmi_helpers snd_timer snd soundcore display_connector qcom_rng nvmem_reboot_mode drm_kms_helper phy_qcom_qmp_pcie sha256 cfg80211 rfkill socinfo fuse drm backlight ipv6
+[ 48.301059] CPU: 2 UID: 0 PID: 293 Comm: kworker/u32:2 Not tainted 6.19.0-rc6-dirty #10 PREEMPT
+[ 48.310081] Hardware name: Qualcomm Technologies, Inc. Lemans EVK (DT)
+[ 48.316782] Workqueue: pdr_notifier_wq pdr_notifier_work [pdr_interface]
+[ 48.323672] pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
+[ 48.330825] pc : mutex_lock+0xc/0x54
+[ 48.334514] lr : soc_dapm_shutdown_dapm+0x44/0x174 [snd_soc_core]
+[ 48.340794] sp : ffff800084ddb7b0
+[ 48.344207] x29: ffff800084ddb7b0 x28: ffff00009cd9cf30 x27: ffff00009cd9cc00
+[ 48.351544] x26: ffff000099610190 x25: ffffa31d2f19c810 x24: ffffa31d2f185098
+[ 48.358869] x23: ffff800084ddb7f8 x22: 0000000000000000 x21: 00000000000000d0
+[ 48.366198] x20: ffff00009ba6c338 x19: ffff00009ba6c338 x18: 00000000ffffffff
+[ 48.373528] x17: 000000040044ffff x16: ffffa31d4ae6dca8 x15: 072007740775076f
+[ 48.380853] x14: 0765076d07690774 x13: 00313a323a656369 x12: 767265733a637673
+[ 48.388182] x11: 00000000000003f9 x10: ffffa31d4c7dea98 x9 : 0000000000000001
+[ 48.395519] x8 : ffff00009a2aadc0 x7 : 0000000000000003 x6 : 0000000000000000
+[ 48.402854] x5 : 0000000000000000 x4 : 0000000000000028 x3 : ffff000ef397a698
+[ 48.410180] x2 : ffff00009a2aadc0 x1 : 0000000000000000 x0 : 00000000000000d0
+[ 48.417506] Call trace:
+[ 48.420025] mutex_lock+0xc/0x54 (P)
+[ 48.423712] snd_soc_dapm_shutdown+0x44/0xbc [snd_soc_core]
+[ 48.429447] soc_cleanup_card_resources+0x30/0x2c0 [snd_soc_core]
+[ 48.435719] snd_soc_bind_card+0x4dc/0xcc0 [snd_soc_core]
+[ 48.441278] snd_soc_add_component+0x27c/0x2c8 [snd_soc_core]
+[ 48.447192] snd_soc_register_component+0x9c/0xf4 [snd_soc_core]
+[ 48.453371] devm_snd_soc_register_component+0x64/0xc4 [snd_soc_core]
+[ 48.459994] apm_probe+0xb4/0x110 [snd_q6apm]
+[ 48.464479] apr_device_probe+0x24/0x40 [apr]
+[ 48.468964] really_probe+0xbc/0x298
+[ 48.472651] __driver_probe_device+0x78/0x12c
+[ 48.477132] driver_probe_device+0x40/0x160
+[ 48.481435] __device_attach_driver+0xb8/0x134
+[ 48.486011] bus_for_each_drv+0x80/0xdc
+[ 48.489964] __device_attach+0xa8/0x1b0
+[ 48.493916] device_initial_probe+0x50/0x54
+[ 48.498219] bus_probe_device+0x38/0xa0
+[ 48.502170] device_add+0x590/0x760
+[ 48.505761] device_register+0x20/0x30
+[ 48.509623] of_register_apr_devices+0x1d8/0x318 [apr]
+[ 48.514905] apr_pd_status+0x2c/0x54 [apr]
+[ 48.519114] pdr_notifier_work+0x8c/0xe0 [pdr_interface]
+[ 48.524570] process_one_work+0x150/0x294
+[ 48.528692] worker_thread+0x2d8/0x3d8
+[ 48.532551] kthread+0x130/0x204
+[ 48.535874] ret_from_fork+0x10/0x20
+[ 48.539559] Code: d65f03c0 d5384102 d503201f d2800001 (c8e17c02)
+[ 48.545823] ---[ end trace 0000000000000000 ]---
+
+Fixes: 5477518b8a0e ("ASoC: qdsp6: audioreach: add q6apm support")
+Cc: stable@vger.kernel.org
+Signed-off-by: Ravi Hothi <ravi.hothi@oss.qualcomm.com>
+Reviewed-by: Srinivas Kandagatla <srinivas.kandagatla@oss.qualcomm.com>
+Link: https://patch.msgid.link/20260227144534.278568-1-ravi.hothi@oss.qualcomm.com
+Signed-off-by: Mark Brown <broonie@kernel.org>
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ sound/soc/qcom/qdsp6/q6apm-dai.c | 1 +
+ sound/soc/qcom/qdsp6/q6apm-lpass-dais.c | 1 +
+ sound/soc/qcom/qdsp6/q6apm.c | 1 +
+ 3 files changed, 3 insertions(+)
+
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -416,6 +416,7 @@ static const struct snd_soc_component_dr
+ .pointer = q6apm_dai_pointer,
+ .trigger = q6apm_dai_trigger,
+ .ack = q6apm_dai_ack,
++ .remove_order = SND_SOC_COMP_ORDER_EARLY,
+ };
+
+ static int q6apm_dai_probe(struct platform_device *pdev)
+--- a/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
++++ b/sound/soc/qcom/qdsp6/q6apm-lpass-dais.c
+@@ -234,6 +234,7 @@ static const struct snd_soc_component_dr
+ .of_xlate_dai_name = q6dsp_audio_ports_of_xlate_dai_name,
+ .be_pcm_base = AUDIOREACH_BE_PCM_BASE,
+ .use_dai_pcm_id = true,
++ .remove_order = SND_SOC_COMP_ORDER_FIRST,
+ };
+
+ static int q6apm_lpass_dai_dev_probe(struct platform_device *pdev)
+--- a/sound/soc/qcom/qdsp6/q6apm.c
++++ b/sound/soc/qcom/qdsp6/q6apm.c
+@@ -717,6 +717,7 @@ static const struct snd_soc_component_dr
+ .name = APM_AUDIO_DRV_NAME,
+ .probe = q6apm_audio_probe,
+ .remove = q6apm_audio_remove,
++ .remove_order = SND_SOC_COMP_ORDER_LAST,
+ };
+
+ static int apm_probe(gpr_device_t *gdev)
--- /dev/null
+From stable+bounces-227508-greg=kroah.com@vger.kernel.org Fri Mar 20 11:17:30 2026
+From: Sven Eckelmann <sven@narfation.org>
+Date: Fri, 20 Mar 2026 11:17:16 +0100
+Subject: batman-adv: avoid OGM aggregation when skb tailroom is insufficient
+To: stable@vger.kernel.org
+Cc: Yang Yang <n05ec@lzu.edu.cn>, Yifan Wu <yifanwucs@gmail.com>, Juefei Pu <tomapufckgml@gmail.com>, Yuan Tan <tanyuan98@outlook.com>, Xin Liu <bird@lzu.edu.cn>, Sven Eckelmann <sven@narfation.org>, Simon Wunderlich <sw@simonwunderlich.de>
+Message-ID: <20260320101716.1612386-1-sven@narfation.org>
+
+From: Yang Yang <n05ec@lzu.edu.cn>
+
+commit 0d4aef630be9d5f9c1227d07669c26c4383b5ad0 upstream.
+
+When OGM aggregation state is toggled at runtime, an existing forwarded
+packet may have been allocated with only packet_len bytes, while a later
+packet can still be selected for aggregation. Appending in this case can
+hit skb_put overflow conditions.
+
+Reject aggregation when the target skb tailroom cannot accommodate the new
+packet. The caller then falls back to creating a new forward packet
+instead of appending.
+
+Fixes: c6c8fea29769 ("net: Add batman-adv meshing protocol")
+Cc: stable@vger.kernel.org
+Reported-by: Yifan Wu <yifanwucs@gmail.com>
+Reported-by: Juefei Pu <tomapufckgml@gmail.com>
+Signed-off-by: Yuan Tan <tanyuan98@outlook.com>
+Signed-off-by: Xin Liu <bird@lzu.edu.cn>
+Signed-off-by: Ao Zhou <n05ec@lzu.edu.cn>
+Signed-off-by: Yang Yang <n05ec@lzu.edu.cn>
+Signed-off-by: Sven Eckelmann <sven@narfation.org>
+Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
+[ Adjust context ]
+Signed-off-by: Sven Eckelmann <sven@narfation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/batman-adv/bat_iv_ogm.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -465,6 +465,9 @@ batadv_iv_ogm_can_aggregate(const struct
+ !time_after_eq(aggregation_end_time, forw_packet->send_time))
+ return false;
+
++ if (skb_tailroom(forw_packet->skb) < packet_len)
++ return false;
++
+ if (aggregated_bytes > BATADV_MAX_AGGREGATION_BYTES)
+ return false;
+
--- /dev/null
+From stable+bounces-227375-greg=kroah.com@vger.kernel.org Thu Mar 19 20:38:06 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 15:36:05 -0400
+Subject: btrfs: fix transaction abort on set received ioctl due to item overflow
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, Anand Jain <asj@kernel.org>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319193605.3026586-1-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 87f2c46003fce4d739138aab4af1942b1afdadac ]
+
+If the set received ioctl fails due to an item overflow when attempting to
+add the BTRFS_UUID_KEY_RECEIVED_SUBVOL we have to abort the transaction
+since we did some metadata updates before.
+
+This means that if a user calls this ioctl with the same received UUID
+field for a lot of subvolumes, we will hit the overflow, trigger the
+transaction abort and turn the filesystem into RO mode. A malicious user
+could exploit this, and this ioctl does not even require that a user
+has admin privileges (CAP_SYS_ADMIN), only that he/she owns the subvolume.
+
+Fix this by doing an early check for item overflow before starting a
+transaction. This is also race safe because we are holding the subvol_sem
+semaphore in exclusive (write) mode.
+
+A test case for fstests will follow soon.
+
+Fixes: dd5f9615fc5c ("Btrfs: maintain subvolume items in the UUID tree")
+CC: stable@vger.kernel.org # 3.12+
+Reviewed-by: Anand Jain <asj@kernel.org>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+[ A whole bunch of small things :) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/ctree.h | 2 ++
+ fs/btrfs/ioctl.c | 21 +++++++++++++++++++--
+ fs/btrfs/uuid-tree.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 67 insertions(+), 2 deletions(-)
+
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -3210,6 +3210,8 @@ int btrfs_uuid_tree_add(struct btrfs_tra
+ u64 subid);
+ int btrfs_uuid_tree_remove(struct btrfs_trans_handle *trans, u8 *uuid, u8 type,
+ u64 subid);
++int btrfs_uuid_tree_check_overflow(struct btrfs_fs_info *fs_info,
++ u8 *uuid, u8 type);
+ int btrfs_uuid_tree_iterate(struct btrfs_fs_info *fs_info);
+
+ /* dir-item.c */
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4883,6 +4883,25 @@ static long _btrfs_ioctl_set_received_su
+ goto out;
+ }
+
++ received_uuid_changed = memcmp(root_item->received_uuid, sa->uuid,
++ BTRFS_UUID_SIZE);
++
++ /*
++ * Before we attempt to add the new received uuid, check if we have room
++ * for it in case there's already an item. If the size of the existing
++ * item plus this root's ID (u64) exceeds the maximum item size, we can
++ * return here without the need to abort a transaction. If we don't do
++ * this check, the btrfs_uuid_tree_add() call below would fail with
++ * -EOVERFLOW and result in a transaction abort. Malicious users could
++ * exploit this to turn the fs into RO mode.
++ */
++ if (received_uuid_changed && !btrfs_is_empty_uuid(sa->uuid)) {
++ ret = btrfs_uuid_tree_check_overflow(fs_info, sa->uuid,
++ BTRFS_UUID_KEY_RECEIVED_SUBVOL);
++ if (ret < 0)
++ goto out;
++ }
++
+ /*
+ * 1 - root item
+ * 2 - uuid items (received uuid + subvol uuid)
+@@ -4898,8 +4917,6 @@ static long _btrfs_ioctl_set_received_su
+ sa->rtime.sec = ct.tv_sec;
+ sa->rtime.nsec = ct.tv_nsec;
+
+- received_uuid_changed = memcmp(root_item->received_uuid, sa->uuid,
+- BTRFS_UUID_SIZE);
+ if (received_uuid_changed &&
+ !btrfs_is_empty_uuid(root_item->received_uuid)) {
+ ret = btrfs_uuid_tree_remove(trans, root_item->received_uuid,
+--- a/fs/btrfs/uuid-tree.c
++++ b/fs/btrfs/uuid-tree.c
+@@ -225,6 +225,52 @@ out:
+ return ret;
+ }
+
++/*
++ * Check if we can add one root ID to a UUID key.
++ * If the key does not yet exist, we can; otherwise only if the extended item
++ * does not exceed the maximum item size permitted by the leaf size.
++ *
++ * Returns 0 on success, negative value on error.
++ */
++int btrfs_uuid_tree_check_overflow(struct btrfs_fs_info *fs_info,
++ u8 *uuid, u8 type)
++{
++ struct btrfs_path *path = NULL;
++ int ret;
++ u32 item_size;
++ struct btrfs_key key;
++
++ if (WARN_ON_ONCE(!fs_info->uuid_root)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ path = btrfs_alloc_path();
++ if (!path) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ btrfs_uuid_to_key(uuid, type, &key);
++ ret = btrfs_search_slot(NULL, fs_info->uuid_root, &key, path, 0, 0);
++ if (ret < 0)
++ goto out;
++ if (ret > 0) {
++ ret = 0;
++ goto out;
++ }
++
++ item_size = btrfs_item_size(path->nodes[0], path->slots[0]);
++
++ if (sizeof(struct btrfs_item) + item_size + sizeof(u64) >
++ BTRFS_LEAF_DATA_SIZE(fs_info))
++ ret = -EOVERFLOW;
++
++out:
++ btrfs_free_path(path);
++ return ret;
++}
++
+ static int btrfs_uuid_iter_rem(struct btrfs_root *uuid_root, u8 *uuid, u8 type,
+ u64 subid)
+ {
--- /dev/null
+From stable+bounces-227354-greg=kroah.com@vger.kernel.org Thu Mar 19 18:37:29 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 13:27:02 -0400
+Subject: btrfs: fix transaction abort when snapshotting received subvolumes
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, Boris Burkov <boris@bur.io>, Qu Wenruo <wqu@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319172702.2818866-1-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit e1b18b959025e6b5dbad668f391f65d34b39595a ]
+
+Currently a user can trigger a transaction abort by snapshotting a
+previously received snapshot a bunch of times until we reach a
+BTRFS_UUID_KEY_RECEIVED_SUBVOL item overflow (the maximum item size we
+can store in a leaf). This is very likely not common in practice, but
+if it happens, it turns the filesystem into RO mode. The snapshot, send
+and set_received_subvol and subvol_setflags (used by receive) don't
+require CAP_SYS_ADMIN, just inode_owner_or_capable(). A malicious user
+could use this to turn a filesystem into RO mode and disrupt a system.
+
+Reproducer script:
+
+ $ cat test.sh
+ #!/bin/bash
+
+ DEV=/dev/sdi
+ MNT=/mnt/sdi
+
+ # Use smallest node size to make the test faster.
+ mkfs.btrfs -f --nodesize 4K $DEV
+ mount $DEV $MNT
+
+ # Create a subvolume and set it to RO so that it can be used for send.
+ btrfs subvolume create $MNT/sv
+ touch $MNT/sv/foo
+ btrfs property set $MNT/sv ro true
+
+ # Send and receive the subvolume into snaps/sv.
+ mkdir $MNT/snaps
+ btrfs send $MNT/sv | btrfs receive $MNT/snaps
+
+ # Now snapshot the received subvolume, which has a received_uuid, a
+ # lot of times to trigger the leaf overflow.
+ total=500
+ for ((i = 1; i <= $total; i++)); do
+ echo -ne "\rCreating snapshot $i/$total"
+ btrfs subvolume snapshot -r $MNT/snaps/sv $MNT/snaps/sv_$i > /dev/null
+ done
+ echo
+
+ umount $MNT
+
+When running the test:
+
+ $ ./test.sh
+ (...)
+ Create subvolume '/mnt/sdi/sv'
+ At subvol /mnt/sdi/sv
+ At subvol sv
+ Creating snapshot 496/500ERROR: Could not create subvolume: Value too large for defined data type
+ Creating snapshot 497/500ERROR: Could not create subvolume: Read-only file system
+ Creating snapshot 498/500ERROR: Could not create subvolume: Read-only file system
+ Creating snapshot 499/500ERROR: Could not create subvolume: Read-only file system
+ Creating snapshot 500/500ERROR: Could not create subvolume: Read-only file system
+
+And in dmesg/syslog:
+
+ $ dmesg
+ (...)
+ [251067.627338] BTRFS warning (device sdi): insert uuid item failed -75 (0x4628b21c4ac8d898, 0x2598bee2b1515c91) type 252!
+ [251067.629212] ------------[ cut here ]------------
+ [251067.630033] BTRFS: Transaction aborted (error -75)
+ [251067.630871] WARNING: fs/btrfs/transaction.c:1907 at create_pending_snapshot.cold+0x52/0x465 [btrfs], CPU#10: btrfs/615235
+ [251067.632851] Modules linked in: btrfs dm_zero (...)
+ [251067.644071] CPU: 10 UID: 0 PID: 615235 Comm: btrfs Tainted: G W 6.19.0-rc8-btrfs-next-225+ #1 PREEMPT(full)
+ [251067.646165] Tainted: [W]=WARN
+ [251067.646733] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org 04/01/2014
+ [251067.648735] RIP: 0010:create_pending_snapshot.cold+0x55/0x465 [btrfs]
+ [251067.649984] Code: f0 48 0f (...)
+ [251067.653313] RSP: 0018:ffffce644908fae8 EFLAGS: 00010292
+ [251067.653987] RAX: 00000000ffffff01 RBX: ffff8e5639e63a80 RCX: 00000000ffffffd3
+ [251067.655042] RDX: ffff8e53faa76b00 RSI: 00000000ffffffb5 RDI: ffffffffc0919750
+ [251067.656077] RBP: ffffce644908fbd8 R08: 0000000000000000 R09: ffffce644908f820
+ [251067.657068] R10: ffff8e5adc1fffa8 R11: 0000000000000003 R12: ffff8e53c0431bd0
+ [251067.658050] R13: ffff8e5414593600 R14: ffff8e55efafd000 R15: 00000000ffffffb5
+ [251067.659019] FS: 00007f2a4944b3c0(0000) GS:ffff8e5b27dae000(0000) knlGS:0000000000000000
+ [251067.660115] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ [251067.660943] CR2: 00007ffc5aa57898 CR3: 00000005813a2003 CR4: 0000000000370ef0
+ [251067.661972] Call Trace:
+ [251067.662292] <TASK>
+ [251067.662653] create_pending_snapshots+0x97/0xc0 [btrfs]
+ [251067.663413] btrfs_commit_transaction+0x26e/0xc00 [btrfs]
+ [251067.664257] ? btrfs_qgroup_convert_reserved_meta+0x35/0x390 [btrfs]
+ [251067.665238] ? _raw_spin_unlock+0x15/0x30
+ [251067.665837] ? record_root_in_trans+0xa2/0xd0 [btrfs]
+ [251067.666531] btrfs_mksubvol+0x330/0x580 [btrfs]
+ [251067.667145] btrfs_mksnapshot+0x74/0xa0 [btrfs]
+ [251067.667827] __btrfs_ioctl_snap_create+0x194/0x1d0 [btrfs]
+ [251067.668595] btrfs_ioctl_snap_create_v2+0x107/0x130 [btrfs]
+ [251067.669479] btrfs_ioctl+0x1580/0x2690 [btrfs]
+ [251067.670093] ? count_memcg_events+0x6d/0x180
+ [251067.670849] ? handle_mm_fault+0x1a0/0x2a0
+ [251067.671652] __x64_sys_ioctl+0x92/0xe0
+ [251067.672406] do_syscall_64+0x50/0xf20
+ [251067.673129] entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ [251067.674096] RIP: 0033:0x7f2a495648db
+ [251067.674812] Code: 00 48 89 (...)
+ [251067.678227] RSP: 002b:00007ffc5aa57840 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+ [251067.679691] RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f2a495648db
+ [251067.681145] RDX: 00007ffc5aa588b0 RSI: 0000000050009417 RDI: 0000000000000004
+ [251067.682511] RBP: 0000000000000002 R08: 0000000000000000 R09: 0000000000000000
+ [251067.683842] R10: 000000000000000a R11: 0000000000000246 R12: 00007ffc5aa59910
+ [251067.685176] R13: 00007ffc5aa588b0 R14: 0000000000000004 R15: 0000000000000006
+ [251067.686524] </TASK>
+ [251067.686972] ---[ end trace 0000000000000000 ]---
+ [251067.687890] BTRFS: error (device sdi state A) in create_pending_snapshot:1907: errno=-75 unknown
+ [251067.689049] BTRFS info (device sdi state EA): forced readonly
+ [251067.689054] BTRFS warning (device sdi state EA): Skipping commit of aborted transaction.
+ [251067.690119] BTRFS: error (device sdi state EA) in cleanup_transaction:2043: errno=-75 unknown
+ [251067.702028] BTRFS info (device sdi state EA): last unmount of filesystem 46dc3975-30a2-4a69-a18f-418b859cccda
+
+Fix this by ignoring -EOVERFLOW errors from btrfs_uuid_tree_add() in the
+snapshot creation code when attempting to add the
+BTRFS_UUID_KEY_RECEIVED_SUBVOL item. This is OK because it's not critical
+and we are still able to delete the snapshot, as snapshot/subvolume
+deletion ignores if a BTRFS_UUID_KEY_RECEIVED_SUBVOL item is missing (see
+inode.c:btrfs_delete_subvolume()). As for send/receive, we can still do
+send/receive operations since it always peeks the first root ID in the
+existing BTRFS_UUID_KEY_RECEIVED_SUBVOL (it could peek any since all
+snapshots have the same content), and even if the key is missing, it
+falls back to searching by BTRFS_UUID_KEY_SUBVOL key.
+
+A test case for fstests will be sent soon.
+
+Fixes: dd5f9615fc5c ("Btrfs: maintain subvolume items in the UUID tree")
+CC: stable@vger.kernel.org # 3.12+
+Reviewed-by: Boris Burkov <boris@bur.io>
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+[ adapted error check condition to omit unlikely() wrapper ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/transaction.c | 16 ++++++++++++++++
+ 1 file changed, 16 insertions(+)
+
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1869,6 +1869,22 @@ static noinline int create_pending_snaps
+ ret = btrfs_uuid_tree_add(trans, new_root_item->received_uuid,
+ BTRFS_UUID_KEY_RECEIVED_SUBVOL,
+ objectid);
++ /*
++ * We are creating a lot of snapshots of the same root that was
++ * received (has a received UUID) and reached a leaf's limit for
++ * an item. We can safely ignore this and avoid a transaction
++ * abort. A deletion of this snapshot will still work since we
++ * ignore if an item with a BTRFS_UUID_KEY_RECEIVED_SUBVOL key
++ * is missing (see btrfs_delete_subvolume()). Send/receive will
++ * work too since it peeks the first root id from the existing
++ * item (it could peek any), and in case it's missing it
++ * falls back to searching by BTRFS_UUID_KEY_SUBVOL keys.
++ * Creation of a snapshot does not require CAP_SYS_ADMIN, so
++ * we don't want users triggering transaction aborts, either
++ * intentionally or not.
++ */
++ if (ret == -EOVERFLOW)
++ ret = 0;
+ if (ret && ret != -EEXIST) {
+ btrfs_abort_transaction(trans, ret);
+ goto fail;
--- /dev/null
+From stable+bounces-225657-greg=kroah.com@vger.kernel.org Mon Mar 16 19:36:48 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 14:34:53 -0400
+Subject: can: gs_usb: gs_can_open(): always configure bitrates before starting device
+To: stable@vger.kernel.org
+Cc: Marc Kleine-Budde <mkl@pengutronix.de>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316183453.1075555-1-sashal@kernel.org>
+
+From: Marc Kleine-Budde <mkl@pengutronix.de>
+
+[ Upstream commit 2df6162785f31f1bbb598cfc3b08e4efc88f80b6 ]
+
+So far the driver populated the struct can_priv::do_set_bittiming() and
+struct can_priv::fd::do_set_data_bittiming() callbacks.
+
+Before bringing up the interface, user space has to configure the bitrates.
+With these callbacks the configuration is directly forwarded into the CAN
+hardware. Then the interface can be brought up.
+
+An ifdown-ifup cycle (without changing the bit rates) doesn't re-configure
+the bitrates in the CAN hardware. This leads to a problem with the
+CANable-2.5 [1] firmware, which resets the configured bit rates during
+ifdown.
+
+To fix the problem remove both bit timing callbacks and always configure
+the bitrates in the struct net_device_ops::ndo_open() callback.
+
+[1] https://github.com/Elmue/CANable-2.5-firmware-Slcan-and-Candlelight
+
+Cc: stable@vger.kernel.org
+Fixes: d08e973a77d1 ("can: gs_usb: Added support for the GS_USB CAN devices")
+Link: https://patch.msgid.link/20260219-gs_usb-always-configure-bitrates-v2-1-671f8ba5b0a5@pengutronix.de
+Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
+[ adapted to different structure of the struct ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/can/usb/gs_usb.c | 22 ++++++++++++++++------
+ 1 file changed, 16 insertions(+), 6 deletions(-)
+
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -678,9 +678,8 @@ device_detach:
+ }
+ }
+
+-static int gs_usb_set_bittiming(struct net_device *netdev)
++static int gs_usb_set_bittiming(struct gs_can *dev)
+ {
+- struct gs_can *dev = netdev_priv(netdev);
+ struct can_bittiming *bt = &dev->can.bittiming;
+ struct gs_device_bittiming dbt = {
+ .prop_seg = cpu_to_le32(bt->prop_seg),
+@@ -698,9 +697,8 @@ static int gs_usb_set_bittiming(struct n
+ GFP_KERNEL);
+ }
+
+-static int gs_usb_set_data_bittiming(struct net_device *netdev)
++static int gs_usb_set_data_bittiming(struct gs_can *dev)
+ {
+- struct gs_can *dev = netdev_priv(netdev);
+ struct can_bittiming *bt = &dev->can.data_bittiming;
+ struct gs_device_bittiming dbt = {
+ .prop_seg = cpu_to_le32(bt->prop_seg),
+@@ -961,6 +959,20 @@ static int gs_can_open(struct net_device
+ if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
+ flags |= GS_CAN_MODE_HW_TIMESTAMP;
+
++ rc = gs_usb_set_bittiming(dev);
++ if (rc) {
++ netdev_err(netdev, "failed to set bittiming: %pe\n", ERR_PTR(rc));
++ goto out_usb_kill_anchored_urbs;
++ }
++
++ if (ctrlmode & CAN_CTRLMODE_FD) {
++ rc = gs_usb_set_data_bittiming(dev);
++ if (rc) {
++ netdev_err(netdev, "failed to set data bittiming: %pe\n", ERR_PTR(rc));
++ goto out_usb_kill_anchored_urbs;
++ }
++ }
++
+ /* start polling timestamp */
+ if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
+ gs_usb_timestamp_init(dev);
+@@ -1231,7 +1243,6 @@ static struct gs_can *gs_make_candev(uns
+ dev->can.state = CAN_STATE_STOPPED;
+ dev->can.clock.freq = le32_to_cpu(bt_const.fclk_can);
+ dev->can.bittiming_const = &dev->bt_const;
+- dev->can.do_set_bittiming = gs_usb_set_bittiming;
+
+ dev->can.ctrlmode_supported = CAN_CTRLMODE_CC_LEN8_DLC;
+
+@@ -1255,7 +1266,6 @@ static struct gs_can *gs_make_candev(uns
+ * GS_CAN_FEATURE_BT_CONST_EXT is set.
+ */
+ dev->can.data_bittiming_const = &dev->bt_const;
+- dev->can.do_set_data_bittiming = gs_usb_set_data_bittiming;
+ }
+
+ if (feature & GS_CAN_FEATURE_TERMINATION) {
--- /dev/null
+From stable+bounces-227194-greg=kroah.com@vger.kernel.org Thu Mar 19 02:00:12 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 21:00:06 -0400
+Subject: cifs: open files should not hold ref on superblock
+To: stable@vger.kernel.org
+Cc: Shyam Prasad N <sprasad@microsoft.com>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319010006.1861233-1-sashal@kernel.org>
+
+From: Shyam Prasad N <sprasad@microsoft.com>
+
+[ Upstream commit 340cea84f691c5206561bb2e0147158fe02070be ]
+
+Today whenever we deal with a file, in addition to holding
+a reference on the dentry, we also get a reference on the
+superblock. This happens in two cases:
+1. when a new cinode is allocated
+2. when an oplock break is being processed
+
+The reasoning for holding the superblock ref was to make sure
+that when umount happens, if there are users of inodes and
+dentries, it does not try to clean them up and wait for the
+last ref to the superblock to be dropped by the last of such users.
+
+But the side effect of doing that is that umount silently drops
+a ref on the superblock and we could have deferred closes and
+lease breaks still holding these refs.
+
+Ideally, we should ensure that all of these users of inodes and
+dentries are cleaned up at the time of umount, which is what this
+code is doing.
+
+This code change allows these code paths to use a ref on the
+dentry (and hence the inode). That way, umount is
+ensured to clean up SMB client resources when it's the last
+ref on the superblock (for example, when the same objects are shared).
+
+The code change also moves the call to close all the files in
+deferred close list to the umount code path. It also waits for
+oplock_break workers to be flushed before calling
+kill_anon_super (which eventually frees up those objects).
+
+Fixes: 24261fc23db9 ("cifs: delay super block destruction until all cifsFileInfo objects are gone")
+Fixes: 705c79101ccf ("smb: client: fix use-after-free in cifs_oplock_break")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ kmalloc_obj() => kmalloc(), remove trace_smb3_tcon_ref() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/cifsfs.c | 9 ++++++---
+ fs/smb/client/cifsproto.h | 1 +
+ fs/smb/client/file.c | 11 -----------
+ fs/smb/client/misc.c | 41 +++++++++++++++++++++++++++++++++++++++++
+ 4 files changed, 48 insertions(+), 14 deletions(-)
+
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -287,11 +287,15 @@ static void cifs_kill_sb(struct super_bl
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+
+ /*
+- * We ned to release all dentries for the cached directories
+- * before we kill the sb.
++ * We need to release all dentries for the cached directories
++ * and close all deferred file handles before we kill the sb.
+ */
+ if (cifs_sb->root) {
+ close_all_cached_dirs(cifs_sb);
++ cifs_close_all_deferred_files_sb(cifs_sb);
++
++ /* Wait for all pending oplock breaks to complete */
++ flush_workqueue(cifsoplockd_wq);
+
+ /* finally release root dentry */
+ dput(cifs_sb->root);
+@@ -756,7 +760,6 @@ static void cifs_umount_begin(struct sup
+ spin_unlock(&tcon->tc_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+
+- cifs_close_all_deferred_files(tcon);
+ /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */
+ /* cancel_notify_requests(tcon); */
+ if (tcon->ses && tcon->ses->server) {
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -297,6 +297,7 @@ extern void cifs_close_deferred_file(str
+
+ extern void cifs_close_all_deferred_files(struct cifs_tcon *cifs_tcon);
+
++void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb);
+ extern void cifs_close_deferred_file_under_dentry(struct cifs_tcon *cifs_tcon,
+ const char *path);
+ extern struct TCP_Server_Info *
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -375,8 +375,6 @@ struct cifsFileInfo *cifs_new_fileinfo(s
+ mutex_init(&cfile->fh_mutex);
+ spin_lock_init(&cfile->file_info_lock);
+
+- cifs_sb_active(inode->i_sb);
+-
+ /*
+ * If the server returned a read oplock and we have mandatory brlocks,
+ * set oplock level to None.
+@@ -431,7 +429,6 @@ static void cifsFileInfo_put_final(struc
+ struct inode *inode = d_inode(cifs_file->dentry);
+ struct cifsInodeInfo *cifsi = CIFS_I(inode);
+ struct cifsLockInfo *li, *tmp;
+- struct super_block *sb = inode->i_sb;
+
+ /*
+ * Delete any outstanding lock records. We'll lose them when the file
+@@ -449,7 +446,6 @@ static void cifsFileInfo_put_final(struc
+
+ cifs_put_tlink(cifs_file->tlink);
+ dput(cifs_file->dentry);
+- cifs_sb_deactive(sb);
+ kfree(cifs_file->symlink_target);
+ kfree(cifs_file);
+ }
+@@ -5188,12 +5184,6 @@ void cifs_oplock_break(struct work_struc
+ __u64 persistent_fid, volatile_fid;
+ __u16 net_fid;
+
+- /*
+- * Hold a reference to the superblock to prevent it and its inodes from
+- * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()
+- * may release the last reference to the sb and trigger inode eviction.
+- */
+- cifs_sb_active(sb);
+ wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+ TASK_UNINTERRUPTIBLE);
+
+@@ -5266,7 +5256,6 @@ oplock_break_ack:
+ cifs_put_tlink(tlink);
+ out:
+ cifs_done_oplock_break(cinode);
+- cifs_sb_deactive(sb);
+ }
+
+ /*
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -29,6 +29,11 @@
+ extern mempool_t *cifs_sm_req_poolp;
+ extern mempool_t *cifs_req_poolp;
+
++struct tcon_list {
++ struct list_head entry;
++ struct cifs_tcon *tcon;
++};
++
+ /* The xid serves as a useful identifier for each incoming vfs request,
+ in a similar way to the mid which is useful to track each sent smb,
+ and CurrentXid can also provide a running counter (although it
+@@ -809,6 +814,42 @@ cifs_close_all_deferred_files(struct cif
+ kfree(tmp_list);
+ }
+ }
++
++void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb)
++{
++ struct rb_root *root = &cifs_sb->tlink_tree;
++ struct rb_node *node;
++ struct cifs_tcon *tcon;
++ struct tcon_link *tlink;
++ struct tcon_list *tmp_list, *q;
++ LIST_HEAD(tcon_head);
++
++ spin_lock(&cifs_sb->tlink_tree_lock);
++ for (node = rb_first(root); node; node = rb_next(node)) {
++ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
++ tcon = tlink_tcon(tlink);
++ if (IS_ERR(tcon))
++ continue;
++ tmp_list = kmalloc(sizeof(struct tcon_list), GFP_ATOMIC);
++ if (tmp_list == NULL)
++ break;
++ tmp_list->tcon = tcon;
++ /* Take a reference on tcon to prevent it from being freed */
++ spin_lock(&tcon->tc_lock);
++ ++tcon->tc_count;
++ spin_unlock(&tcon->tc_lock);
++ list_add_tail(&tmp_list->entry, &tcon_head);
++ }
++ spin_unlock(&cifs_sb->tlink_tree_lock);
++
++ list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) {
++ cifs_close_all_deferred_files(tmp_list->tcon);
++ list_del(&tmp_list->entry);
++ cifs_put_tcon(tmp_list->tcon);
++ kfree(tmp_list);
++ }
++}
++
+ void
+ cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+ {
--- /dev/null
+From stable+bounces-227197-greg=kroah.com@vger.kernel.org Thu Mar 19 02:21:35 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 21:17:24 -0400
+Subject: crypto: atmel-sha204a - Fix OOM ->tfm_count leak
+To: stable@vger.kernel.org
+Cc: Thorsten Blum <thorsten.blum@linux.dev>, Herbert Xu <herbert@gondor.apana.org.au>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319011724.1873323-1-sashal@kernel.org>
+
+From: Thorsten Blum <thorsten.blum@linux.dev>
+
+[ Upstream commit d240b079a37e90af03fd7dfec94930eb6c83936e ]
+
+If memory allocation fails, decrement ->tfm_count to avoid blocking
+future reads.
+
+Cc: stable@vger.kernel.org
+Fixes: da001fb651b0 ("crypto: atmel-i2c - add support for SHA204A random number generator")
+Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+[ adapted kmalloc_obj() macro to kmalloc(sizeof()) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/crypto/atmel-sha204a.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/crypto/atmel-sha204a.c
++++ b/drivers/crypto/atmel-sha204a.c
+@@ -52,9 +52,10 @@ static int atmel_sha204a_rng_read_nonblo
+ rng->priv = 0;
+ } else {
+ work_data = kmalloc(sizeof(*work_data), GFP_ATOMIC);
+- if (!work_data)
++ if (!work_data) {
++ atomic_dec(&i2c_priv->tfm_count);
+ return -ENOMEM;
+-
++ }
+ work_data->ctx = i2c_priv;
+ work_data->client = i2c_priv->client;
+
--- /dev/null
+From stable+bounces-223672-greg=kroah.com@vger.kernel.org Mon Mar 9 15:35:08 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 10:30:49 -0400
+Subject: drm/amd/display: Use GFP_ATOMIC in dc_create_stream_for_sink
+To: stable@vger.kernel.org
+Cc: Natalie Vock <natalie.vock@gmx.de>, Alex Deucher <alexander.deucher@amd.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309143049.1160472-1-sashal@kernel.org>
+
+From: Natalie Vock <natalie.vock@gmx.de>
+
+[ Upstream commit 28dfe4317541e57fe52f9a290394cd29c348228b ]
+
+This can be called while preemption is disabled, for example by
+dcn32_internal_validate_bw which is called with the FPU active.
+
+Fixes "BUG: scheduling while atomic" messages I encounter on my Navi31
+machine.
+
+Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+(cherry picked from commit b42dae2ebc5c84a68de63ec4ffdfec49362d53f1)
+Cc: stable@vger.kernel.org
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/display/dc/core/dc_stream.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -165,7 +165,7 @@ struct dc_stream_state *dc_create_stream
+ if (sink == NULL)
+ return NULL;
+
+- stream = kzalloc(sizeof(struct dc_stream_state), GFP_KERNEL);
++ stream = kzalloc(sizeof(struct dc_stream_state), GFP_ATOMIC);
+ if (stream == NULL)
+ goto alloc_fail;
+
--- /dev/null
+From stable+bounces-227117-greg=kroah.com@vger.kernel.org Wed Mar 18 17:52:38 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 12:14:22 -0400
+Subject: drm/bridge: ti-sn65dsi83: halve horizontal syncs for dual LVDS output
+To: stable@vger.kernel.org
+Cc: Luca Ceresoli <luca.ceresoli@bootlin.com>, Marek Vasut <marek.vasut@mailbox.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318161422.911810-1-sashal@kernel.org>
+
+From: Luca Ceresoli <luca.ceresoli@bootlin.com>
+
+[ Upstream commit d0d727746944096a6681dc6adb5f123fc5aa018d ]
+
+Dual LVDS output (available on the SN65DSI84) requires HSYNC_PULSE_WIDTH
+and HORIZONTAL_BACK_PORCH to be divided by two with respect to the values
+used for single LVDS output.
+
+While not clearly stated in the datasheet, this is needed according to the
+DSI Tuner [0] output. It also makes sense intuitively because in dual LVDS
+output two pixels at a time are output and so the output clock is half of
+the pixel clock.
+
+Some dual-LVDS panels refuse to show any picture without this fix.
+
+Divide by two HORIZONTAL_FRONT_PORCH too, even though this register is used
+only for test pattern generation which is not currently implemented by this
+driver.
+
+[0] https://www.ti.com/tool/DSI-TUNER
+
+Fixes: ceb515ba29ba ("drm/bridge: ti-sn65dsi83: Add TI SN65DSI83 and SN65DSI84 driver")
+Cc: stable@vger.kernel.org
+Reviewed-by: Marek Vasut <marek.vasut@mailbox.org>
+Link: https://patch.msgid.link/20260226-ti-sn65dsi83-dual-lvds-fixes-and-test-pattern-v1-2-2e15f5a9a6a0@bootlin.com
+Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
+[ adapted variable declaration placement ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/bridge/ti-sn65dsi83.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -325,6 +325,7 @@ static void sn65dsi83_atomic_pre_enable(
+ struct drm_bridge_state *old_bridge_state)
+ {
+ struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
++ const unsigned int dual_factor = ctx->lvds_dual_link ? 2 : 1;
+ struct drm_atomic_state *state = old_bridge_state->base.state;
+ const struct drm_bridge_state *bridge_state;
+ const struct drm_crtc_state *crtc_state;
+@@ -452,18 +453,18 @@ static void sn65dsi83_atomic_pre_enable(
+ /* 32 + 1 pixel clock to ensure proper operation */
+ le16val = cpu_to_le16(32 + 1);
+ regmap_bulk_write(ctx->regmap, REG_VID_CHA_SYNC_DELAY_LOW, &le16val, 2);
+- le16val = cpu_to_le16(mode->hsync_end - mode->hsync_start);
++ le16val = cpu_to_le16((mode->hsync_end - mode->hsync_start) / dual_factor);
+ regmap_bulk_write(ctx->regmap, REG_VID_CHA_HSYNC_PULSE_WIDTH_LOW,
+ &le16val, 2);
+ le16val = cpu_to_le16(mode->vsync_end - mode->vsync_start);
+ regmap_bulk_write(ctx->regmap, REG_VID_CHA_VSYNC_PULSE_WIDTH_LOW,
+ &le16val, 2);
+ regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_BACK_PORCH,
+- mode->htotal - mode->hsync_end);
++ (mode->htotal - mode->hsync_end) / dual_factor);
+ regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_BACK_PORCH,
+ mode->vtotal - mode->vsync_end);
+ regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_FRONT_PORCH,
+- mode->hsync_start - mode->hdisplay);
++ (mode->hsync_start - mode->hdisplay) / dual_factor);
+ regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_FRONT_PORCH,
+ mode->vsync_start - mode->vdisplay);
+ regmap_write(ctx->regmap, REG_VID_CHA_TEST_PATTERN, 0x00);
--- /dev/null
+From stable+bounces-227108-greg=kroah.com@vger.kernel.org Wed Mar 18 17:36:02 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 11:56:29 -0400
+Subject: drm/msm: Fix dma_free_attrs() buffer size
+To: stable@vger.kernel.org
+Cc: Thomas Fourier <fourier.thomas@gmail.com>, Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>, Rob Clark <robin.clark@oss.qualcomm.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318155629.874664-1-sashal@kernel.org>
+
+From: Thomas Fourier <fourier.thomas@gmail.com>
+
+[ Upstream commit e4eb6e4dd6348dd00e19c2275e3fbaed304ca3bd ]
+
+The gpummu->table buffer is alloc'd with size TABLE_SIZE + 32 in
+a2xx_gpummu_new() but freed with size TABLE_SIZE in
+a2xx_gpummu_destroy().
+
+Change the free size to match the allocation.
+
+Fixes: c2052a4e5c99 ("drm/msm: implement a2xx mmu")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com>
+Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
+Patchwork: https://patchwork.freedesktop.org/patch/707340/
+Message-ID: <20260226095714.12126-2-fourier.thomas@gmail.com>
+Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/msm/msm_gpummu.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/msm/msm_gpummu.c
++++ b/drivers/gpu/drm/msm/msm_gpummu.c
+@@ -76,7 +76,7 @@ static void msm_gpummu_destroy(struct ms
+ {
+ struct msm_gpummu *gpummu = to_msm_gpummu(mmu);
+
+- dma_free_attrs(mmu->dev, TABLE_SIZE, gpummu->table, gpummu->pt_base,
++ dma_free_attrs(mmu->dev, TABLE_SIZE + 32, gpummu->table, gpummu->pt_base,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+
+ kfree(gpummu);
--- /dev/null
+From stable+bounces-219696-greg=kroah.com@vger.kernel.org Wed Feb 25 20:40:43 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 25 Feb 2026 14:40:32 -0500
+Subject: ext4: always allocate blocks only from groups inode can use
+To: stable@vger.kernel.org
+Cc: Jan Kara <jack@suse.cz>, Baokun Li <libaokun1@huawei.com>, Zhang Yi <yi.zhang@huawei.com>, Pedro Falcato <pfalcato@suse.de>, stable@kernel.org, Theodore Ts'o <tytso@mit.edu>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260225194032.1016421-1-sashal@kernel.org>
+
+From: Jan Kara <jack@suse.cz>
+
+[ Upstream commit 4865c768b563deff1b6a6384e74a62f143427b42 ]
+
+For filesystems with more than 2^32 blocks, inodes using the indirect
+block based format cannot use blocks beyond the 32-bit limit.
+ext4_mb_scan_groups_linear() takes care to not select these unsupported
+groups for such inodes; however, other functions selecting groups for
+allocation don't. So far this is harmless because the other selection
+functions are used only with mb_optimize_scan, and this is currently
+disabled for inodes with indirect blocks; however, in the following patch
+we want to enable mb_optimize_scan regardless of inode format.
+
+Reviewed-by: Baokun Li <libaokun1@huawei.com>
+Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
+Signed-off-by: Jan Kara <jack@suse.cz>
+Acked-by: Pedro Falcato <pfalcato@suse.de>
+Cc: stable@kernel.org
+Link: https://patch.msgid.link/20260114182836.14120-3-jack@suse.cz
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+[ Drop a few hunks not needed in older trees ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ext4/mballoc.c | 20 ++++++++++++++++----
+ 1 file changed, 16 insertions(+), 4 deletions(-)
+
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -871,6 +871,21 @@ mb_update_avg_fragment_size(struct super
+ }
+ }
+
++static ext4_group_t ext4_get_allocation_groups_count(
++ struct ext4_allocation_context *ac)
++{
++ ext4_group_t ngroups = ext4_get_groups_count(ac->ac_sb);
++
++ /* non-extent files are limited to low blocks/groups */
++ if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)))
++ ngroups = EXT4_SB(ac->ac_sb)->s_blockfile_groups;
++
++ /* Pairs with smp_wmb() in ext4_update_super() */
++ smp_rmb();
++
++ return ngroups;
++}
++
+ /*
+ * Choose next group by traversing largest_free_order lists. Updates *new_cr if
+ * cr level needs an update.
+@@ -2672,10 +2687,7 @@ ext4_mb_regular_allocator(struct ext4_al
+
+ sb = ac->ac_sb;
+ sbi = EXT4_SB(sb);
+- ngroups = ext4_get_groups_count(sb);
+- /* non-extent files are limited to low blocks/groups */
+- if (!(ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS)))
+- ngroups = sbi->s_blockfile_groups;
++ ngroups = ext4_get_allocation_groups_count(ac);
+
+ BUG_ON(ac->ac_status == AC_STATUS_FOUND);
+
--- /dev/null
+From stable+bounces-219633-greg=kroah.com@vger.kernel.org Wed Feb 25 15:37:55 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 25 Feb 2026 09:33:05 -0500
+Subject: ext4: fix dirtyclusters double decrement on fs shutdown
+To: stable@vger.kernel.org
+Cc: Brian Foster <bfoster@redhat.com>, Baokun Li <libaokun1@huawei.com>, Theodore Ts'o <tytso@mit.edu>, stable@kernel.org, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260225143305.469167-1-sashal@kernel.org>
+
+From: Brian Foster <bfoster@redhat.com>
+
+[ Upstream commit 94a8cea54cd935c54fa2fba70354757c0fc245e3 ]
+
+fstests test generic/388 occasionally reproduces a warning in
+ext4_put_super() associated with the dirty clusters count:
+
+ WARNING: CPU: 7 PID: 76064 at fs/ext4/super.c:1324 ext4_put_super+0x48c/0x590 [ext4]
+
+Tracing the failure shows that the warning fires due to an
+s_dirtyclusters_counter value of -1. IOW, this appears to be a
+spurious decrement as opposed to some sort of leak. Further tracing
+of the dirty cluster count deltas and an LLM scan of the resulting
+output identified the cause as a double decrement in the error path
+between ext4_mb_mark_diskspace_used() and the caller
+ext4_mb_new_blocks().
+
+First, note that generic/388 is a shutdown vs. fsstress test and so
+produces a random set of operations and shutdown injections. In the
+problematic case, the shutdown triggers an error return from the
+ext4_handle_dirty_metadata() call(s) made from
+ext4_mb_mark_context(). The changed value is non-zero at this point,
+so ext4_mb_mark_diskspace_used() does not exit after the error
+bubbles up from ext4_mb_mark_context(). Instead, the former
+decrements both cluster counters and returns the error up to
+ext4_mb_new_blocks(). The latter falls into the !ar->len out path
+which decrements the dirty clusters counter a second time, creating
+the inconsistency.
+
+To avoid this problem and simplify ownership of the cluster
+reservation in this codepath, lift the counter reduction to a single
+place in the caller. This makes it more clear that
+ext4_mb_new_blocks() is responsible for acquiring cluster
+reservation (via ext4_claim_free_clusters()) in the !delalloc case
+as well as releasing it, regardless of whether it ends up consumed
+or returned due to failure.
+
+Fixes: 0087d9fb3f29 ("ext4: Fix s_dirty_blocks_counter if block allocation failed with nodelalloc")
+Signed-off-by: Brian Foster <bfoster@redhat.com>
+Reviewed-by: Baokun Li <libaokun1@huawei.com>
+Link: https://patch.msgid.link/20260113171905.118284-1-bfoster@redhat.com
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+[ Drop mballoc-test changes ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ext4/mballoc.c | 21 +++++----------------
+ 1 file changed, 5 insertions(+), 16 deletions(-)
+
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3815,8 +3815,7 @@ void ext4_exit_mballoc(void)
+ * Returns 0 if success or error code
+ */
+ static noinline_for_stack int
+-ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
+- handle_t *handle, unsigned int reserv_clstrs)
++ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac, handle_t *handle)
+ {
+ struct buffer_head *bitmap_bh = NULL;
+ struct ext4_group_desc *gdp;
+@@ -3904,13 +3903,6 @@ ext4_mb_mark_diskspace_used(struct ext4_
+
+ ext4_unlock_group(sb, ac->ac_b_ex.fe_group);
+ percpu_counter_sub(&sbi->s_freeclusters_counter, ac->ac_b_ex.fe_len);
+- /*
+- * Now reduce the dirty block count also. Should not go negative
+- */
+- if (!(ac->ac_flags & EXT4_MB_DELALLOC_RESERVED))
+- /* release all the reserved blocks if non delalloc */
+- percpu_counter_sub(&sbi->s_dirtyclusters_counter,
+- reserv_clstrs);
+
+ if (sbi->s_log_groups_per_flex) {
+ ext4_group_t flex_group = ext4_flex_group(sbi,
+@@ -5804,7 +5796,7 @@ repeat:
+ ext4_mb_pa_free(ac);
+ }
+ if (likely(ac->ac_status == AC_STATUS_FOUND)) {
+- *errp = ext4_mb_mark_diskspace_used(ac, handle, reserv_clstrs);
++ *errp = ext4_mb_mark_diskspace_used(ac, handle);
+ if (*errp) {
+ ext4_discard_allocated_blocks(ac);
+ goto errout;
+@@ -5836,12 +5828,9 @@ out:
+ kmem_cache_free(ext4_ac_cachep, ac);
+ if (inquota && ar->len < inquota)
+ dquot_free_block(ar->inode, EXT4_C2B(sbi, inquota - ar->len));
+- if (!ar->len) {
+- if ((ar->flags & EXT4_MB_DELALLOC_RESERVED) == 0)
+- /* release all the reserved blocks if non delalloc */
+- percpu_counter_sub(&sbi->s_dirtyclusters_counter,
+- reserv_clstrs);
+- }
++ /* release any reserved blocks */
++ if (reserv_clstrs)
++ percpu_counter_sub(&sbi->s_dirtyclusters_counter, reserv_clstrs);
+
+ trace_ext4_allocate_blocks(ar, (unsigned long long)block);
+
--- /dev/null
+From stable+bounces-227350-greg=kroah.com@vger.kernel.org Thu Mar 19 18:18:29 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 13:18:13 -0400
+Subject: iio: buffer: fix coding style warnings
+To: stable@vger.kernel.org
+Cc: "Nuno Sá" <nuno.sa@analog.com>, "Lars-Peter Clausen" <lars@metafoo.de>, "Jonathan Cameron" <Jonathan.Cameron@huawei.com>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260319171814.2756731-1-sashal@kernel.org>
+
+From: Nuno Sá <nuno.sa@analog.com>
+
+[ Upstream commit 26e46ef7758922e983a9a2f688369f649cc1a635 ]
+
+Just cosmetics. No functional change intended...
+
+Signed-off-by: Nuno Sá <nuno.sa@analog.com>
+Reviewed-by: Lars-Peter Clausen <lars@metafoo.de>
+Link: https://lore.kernel.org/r/20230216101452.591805-4-nuno.sa@analog.com
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Stable-dep-of: 064234044056 ("iio: buffer: Fix wait_queue not being removed")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/industrialio-buffer.c | 98 +++++++++++++++++++-------------------
+ 1 file changed, 49 insertions(+), 49 deletions(-)
+
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -194,7 +194,7 @@ static ssize_t iio_buffer_write(struct f
+ written = 0;
+ add_wait_queue(&rb->pollq, &wait);
+ do {
+- if (indio_dev->info == NULL)
++ if (!indio_dev->info)
+ return -ENODEV;
+
+ if (!iio_buffer_space_available(rb)) {
+@@ -210,7 +210,7 @@ static ssize_t iio_buffer_write(struct f
+ }
+
+ wait_woken(&wait, TASK_INTERRUPTIBLE,
+- MAX_SCHEDULE_TIMEOUT);
++ MAX_SCHEDULE_TIMEOUT);
+ continue;
+ }
+
+@@ -242,7 +242,7 @@ static __poll_t iio_buffer_poll(struct f
+ struct iio_buffer *rb = ib->buffer;
+ struct iio_dev *indio_dev = ib->indio_dev;
+
+- if (!indio_dev->info || rb == NULL)
++ if (!indio_dev->info || !rb)
+ return 0;
+
+ poll_wait(filp, &rb->pollq, wait);
+@@ -407,9 +407,9 @@ static ssize_t iio_scan_el_show(struct d
+
+ /* Note NULL used as error indicator as it doesn't make sense. */
+ static const unsigned long *iio_scan_mask_match(const unsigned long *av_masks,
+- unsigned int masklength,
+- const unsigned long *mask,
+- bool strict)
++ unsigned int masklength,
++ const unsigned long *mask,
++ bool strict)
+ {
+ if (bitmap_empty(mask, masklength))
+ return NULL;
+@@ -427,7 +427,7 @@ static const unsigned long *iio_scan_mas
+ }
+
+ static bool iio_validate_scan_mask(struct iio_dev *indio_dev,
+- const unsigned long *mask)
++ const unsigned long *mask)
+ {
+ if (!indio_dev->setup_ops->validate_scan_mask)
+ return true;
+@@ -446,7 +446,7 @@ static bool iio_validate_scan_mask(struc
+ * individual buffers request is plausible.
+ */
+ static int iio_scan_mask_set(struct iio_dev *indio_dev,
+- struct iio_buffer *buffer, int bit)
++ struct iio_buffer *buffer, int bit)
+ {
+ const unsigned long *mask;
+ unsigned long *trialmask;
+@@ -538,7 +538,6 @@ error_ret:
+ mutex_unlock(&indio_dev->mlock);
+
+ return ret < 0 ? ret : len;
+-
+ }
+
+ static ssize_t iio_scan_el_ts_show(struct device *dev,
+@@ -703,7 +702,7 @@ static unsigned int iio_storage_bytes_fo
+ }
+
+ static int iio_compute_scan_bytes(struct iio_dev *indio_dev,
+- const unsigned long *mask, bool timestamp)
++ const unsigned long *mask, bool timestamp)
+ {
+ unsigned int bytes = 0;
+ int length, i, largest = 0;
+@@ -729,7 +728,7 @@ static int iio_compute_scan_bytes(struct
+ }
+
+ static void iio_buffer_activate(struct iio_dev *indio_dev,
+- struct iio_buffer *buffer)
++ struct iio_buffer *buffer)
+ {
+ struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
+
+@@ -750,12 +749,12 @@ static void iio_buffer_deactivate_all(st
+ struct iio_buffer *buffer, *_buffer;
+
+ list_for_each_entry_safe(buffer, _buffer,
+- &iio_dev_opaque->buffer_list, buffer_list)
++ &iio_dev_opaque->buffer_list, buffer_list)
+ iio_buffer_deactivate(buffer);
+ }
+
+ static int iio_buffer_enable(struct iio_buffer *buffer,
+- struct iio_dev *indio_dev)
++ struct iio_dev *indio_dev)
+ {
+ if (!buffer->access->enable)
+ return 0;
+@@ -763,7 +762,7 @@ static int iio_buffer_enable(struct iio_
+ }
+
+ static int iio_buffer_disable(struct iio_buffer *buffer,
+- struct iio_dev *indio_dev)
++ struct iio_dev *indio_dev)
+ {
+ if (!buffer->access->disable)
+ return 0;
+@@ -771,7 +770,7 @@ static int iio_buffer_disable(struct iio
+ }
+
+ static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev,
+- struct iio_buffer *buffer)
++ struct iio_buffer *buffer)
+ {
+ unsigned int bytes;
+
+@@ -779,13 +778,13 @@ static void iio_buffer_update_bytes_per_
+ return;
+
+ bytes = iio_compute_scan_bytes(indio_dev, buffer->scan_mask,
+- buffer->scan_timestamp);
++ buffer->scan_timestamp);
+
+ buffer->access->set_bytes_per_datum(buffer, bytes);
+ }
+
+ static int iio_buffer_request_update(struct iio_dev *indio_dev,
+- struct iio_buffer *buffer)
++ struct iio_buffer *buffer)
+ {
+ int ret;
+
+@@ -794,7 +793,7 @@ static int iio_buffer_request_update(str
+ ret = buffer->access->request_update(buffer);
+ if (ret) {
+ dev_dbg(&indio_dev->dev,
+- "Buffer not started: buffer parameter update failed (%d)\n",
++ "Buffer not started: buffer parameter update failed (%d)\n",
+ ret);
+ return ret;
+ }
+@@ -804,7 +803,7 @@ static int iio_buffer_request_update(str
+ }
+
+ static void iio_free_scan_mask(struct iio_dev *indio_dev,
+- const unsigned long *mask)
++ const unsigned long *mask)
+ {
+ /* If the mask is dynamically allocated free it, otherwise do nothing */
+ if (!indio_dev->available_scan_masks)
+@@ -820,8 +819,9 @@ struct iio_device_config {
+ };
+
+ static int iio_verify_update(struct iio_dev *indio_dev,
+- struct iio_buffer *insert_buffer, struct iio_buffer *remove_buffer,
+- struct iio_device_config *config)
++ struct iio_buffer *insert_buffer,
++ struct iio_buffer *remove_buffer,
++ struct iio_device_config *config)
+ {
+ struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
+ unsigned long *compound_mask;
+@@ -861,7 +861,7 @@ static int iio_verify_update(struct iio_
+ if (insert_buffer) {
+ modes &= insert_buffer->access->modes;
+ config->watermark = min(config->watermark,
+- insert_buffer->watermark);
++ insert_buffer->watermark);
+ }
+
+ /* Definitely possible for devices to support both of these. */
+@@ -887,7 +887,7 @@ static int iio_verify_update(struct iio_
+
+ /* What scan mask do we actually have? */
+ compound_mask = bitmap_zalloc(indio_dev->masklength, GFP_KERNEL);
+- if (compound_mask == NULL)
++ if (!compound_mask)
+ return -ENOMEM;
+
+ scan_timestamp = false;
+@@ -908,18 +908,18 @@ static int iio_verify_update(struct iio_
+
+ if (indio_dev->available_scan_masks) {
+ scan_mask = iio_scan_mask_match(indio_dev->available_scan_masks,
+- indio_dev->masklength,
+- compound_mask,
+- strict_scanmask);
++ indio_dev->masklength,
++ compound_mask,
++ strict_scanmask);
+ bitmap_free(compound_mask);
+- if (scan_mask == NULL)
++ if (!scan_mask)
+ return -EINVAL;
+ } else {
+ scan_mask = compound_mask;
+ }
+
+ config->scan_bytes = iio_compute_scan_bytes(indio_dev,
+- scan_mask, scan_timestamp);
++ scan_mask, scan_timestamp);
+ config->scan_mask = scan_mask;
+ config->scan_timestamp = scan_timestamp;
+
+@@ -951,16 +951,16 @@ static void iio_buffer_demux_free(struct
+ }
+
+ static int iio_buffer_add_demux(struct iio_buffer *buffer,
+- struct iio_demux_table **p, unsigned int in_loc, unsigned int out_loc,
+- unsigned int length)
++ struct iio_demux_table **p, unsigned int in_loc,
++ unsigned int out_loc,
++ unsigned int length)
+ {
+-
+ if (*p && (*p)->from + (*p)->length == in_loc &&
+- (*p)->to + (*p)->length == out_loc) {
++ (*p)->to + (*p)->length == out_loc) {
+ (*p)->length += length;
+ } else {
+ *p = kmalloc(sizeof(**p), GFP_KERNEL);
+- if (*p == NULL)
++ if (!(*p))
+ return -ENOMEM;
+ (*p)->from = in_loc;
+ (*p)->to = out_loc;
+@@ -1024,7 +1024,7 @@ static int iio_buffer_update_demux(struc
+ out_loc += length;
+ }
+ buffer->demux_bounce = kzalloc(out_loc, GFP_KERNEL);
+- if (buffer->demux_bounce == NULL) {
++ if (!buffer->demux_bounce) {
+ ret = -ENOMEM;
+ goto error_clear_mux_table;
+ }
+@@ -1057,7 +1057,7 @@ error_clear_mux_table:
+ }
+
+ static int iio_enable_buffers(struct iio_dev *indio_dev,
+- struct iio_device_config *config)
++ struct iio_device_config *config)
+ {
+ struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
+ struct iio_buffer *buffer, *tmp = NULL;
+@@ -1075,7 +1075,7 @@ static int iio_enable_buffers(struct iio
+ ret = indio_dev->setup_ops->preenable(indio_dev);
+ if (ret) {
+ dev_dbg(&indio_dev->dev,
+- "Buffer not started: buffer preenable failed (%d)\n", ret);
++ "Buffer not started: buffer preenable failed (%d)\n", ret);
+ goto err_undo_config;
+ }
+ }
+@@ -1115,7 +1115,7 @@ static int iio_enable_buffers(struct iio
+ ret = indio_dev->setup_ops->postenable(indio_dev);
+ if (ret) {
+ dev_dbg(&indio_dev->dev,
+- "Buffer not started: postenable failed (%d)\n", ret);
++ "Buffer not started: postenable failed (%d)\n", ret);
+ goto err_detach_pollfunc;
+ }
+ }
+@@ -1191,15 +1191,15 @@ static int iio_disable_buffers(struct ii
+ }
+
+ static int __iio_update_buffers(struct iio_dev *indio_dev,
+- struct iio_buffer *insert_buffer,
+- struct iio_buffer *remove_buffer)
++ struct iio_buffer *insert_buffer,
++ struct iio_buffer *remove_buffer)
+ {
+ struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev);
+ struct iio_device_config new_config;
+ int ret;
+
+ ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer,
+- &new_config);
++ &new_config);
+ if (ret)
+ return ret;
+
+@@ -1255,7 +1255,7 @@ int iio_update_buffers(struct iio_dev *i
+ return 0;
+
+ if (insert_buffer &&
+- (insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT))
++ insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT)
+ return -EINVAL;
+
+ mutex_lock(&iio_dev_opaque->info_exist_lock);
+@@ -1272,7 +1272,7 @@ int iio_update_buffers(struct iio_dev *i
+ goto out_unlock;
+ }
+
+- if (indio_dev->info == NULL) {
++ if (!indio_dev->info) {
+ ret = -ENODEV;
+ goto out_unlock;
+ }
+@@ -1609,7 +1609,7 @@ static int __iio_buffer_alloc_sysfs_and_
+
+ buffer_attrcount = 0;
+ if (buffer->attrs) {
+- while (buffer->attrs[buffer_attrcount] != NULL)
++ while (buffer->attrs[buffer_attrcount])
+ buffer_attrcount++;
+ }
+
+@@ -1636,7 +1636,7 @@ static int __iio_buffer_alloc_sysfs_and_
+ }
+
+ ret = iio_buffer_add_channel_sysfs(indio_dev, buffer,
+- &channels[i]);
++ &channels[i]);
+ if (ret < 0)
+ goto error_cleanup_dynamic;
+ scan_el_attrcount += ret;
+@@ -1644,10 +1644,10 @@ static int __iio_buffer_alloc_sysfs_and_
+ iio_dev_opaque->scan_index_timestamp =
+ channels[i].scan_index;
+ }
+- if (indio_dev->masklength && buffer->scan_mask == NULL) {
++ if (indio_dev->masklength && !buffer->scan_mask) {
+ buffer->scan_mask = bitmap_zalloc(indio_dev->masklength,
+ GFP_KERNEL);
+- if (buffer->scan_mask == NULL) {
++ if (!buffer->scan_mask) {
+ ret = -ENOMEM;
+ goto error_cleanup_dynamic;
+ }
+@@ -1763,7 +1763,7 @@ int iio_buffers_alloc_sysfs_and_mask(str
+ goto error_unwind_sysfs_and_mask;
+ }
+
+- sz = sizeof(*(iio_dev_opaque->buffer_ioctl_handler));
++ sz = sizeof(*iio_dev_opaque->buffer_ioctl_handler);
+ iio_dev_opaque->buffer_ioctl_handler = kzalloc(sz, GFP_KERNEL);
+ if (!iio_dev_opaque->buffer_ioctl_handler) {
+ ret = -ENOMEM;
+@@ -1812,14 +1812,14 @@ void iio_buffers_free_sysfs_and_mask(str
+ * a time.
+ */
+ bool iio_validate_scan_mask_onehot(struct iio_dev *indio_dev,
+- const unsigned long *mask)
++ const unsigned long *mask)
+ {
+ return bitmap_weight(mask, indio_dev->masklength) == 1;
+ }
+ EXPORT_SYMBOL_GPL(iio_validate_scan_mask_onehot);
+
+ static const void *iio_demux(struct iio_buffer *buffer,
+- const void *datain)
++ const void *datain)
+ {
+ struct iio_demux_table *t;
+
--- /dev/null
+From stable+bounces-227351-greg=kroah.com@vger.kernel.org Thu Mar 19 18:26:54 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 13:18:14 -0400
+Subject: iio: buffer: Fix wait_queue not being removed
+To: stable@vger.kernel.org
+Cc: "Nuno Sá" <nuno.sa@analog.com>, "David Lechner" <dlechner@baylibre.com>, Stable@vger.kernel.org, "Jonathan Cameron" <Jonathan.Cameron@huawei.com>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260319171814.2756731-2-sashal@kernel.org>
+
+From: Nuno Sá <nuno.sa@analog.com>
+
+[ Upstream commit 064234044056c93a3719d6893e6e5a26a94a61b6 ]
+
+In the edge case where the IIO device is unregistered while we're
+buffering, we were directly returning an error without removing the wait
+queue. Instead, set 'ret' and break out of the loop.
+
+Fixes: 9eeee3b0bf19 ("iio: Add output buffer support")
+Signed-off-by: Nuno Sá <nuno.sa@analog.com>
+Reviewed-by: David Lechner <dlechner@baylibre.com>
+Cc: <Stable@vger.kernel.org>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/industrialio-buffer.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -194,8 +194,10 @@ static ssize_t iio_buffer_write(struct f
+ written = 0;
+ add_wait_queue(&rb->pollq, &wait);
+ do {
+- if (!indio_dev->info)
+- return -ENODEV;
++ if (!indio_dev->info) {
++ ret = -ENODEV;
++ break;
++ }
+
+ if (!iio_buffer_space_available(rb)) {
+ if (signal_pending(current)) {
--- /dev/null
+From stable+bounces-227397-greg=kroah.com@vger.kernel.org Fri Mar 20 00:01:19 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 19:01:11 -0400
+Subject: iio: light: bh1780: fix PM runtime leak on error path
+To: stable@vger.kernel.org
+Cc: Antoniu Miclaus <antoniu.miclaus@analog.com>, Linus Walleij <linusw@kernel.org>, Stable@vger.kernel.org, Jonathan Cameron <Jonathan.Cameron@huawei.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319230111.3146058-1-sashal@kernel.org>
+
+From: Antoniu Miclaus <antoniu.miclaus@analog.com>
+
+[ Upstream commit dd72e6c3cdea05cad24e99710939086f7a113fb5 ]
+
+Move pm_runtime_put_autosuspend() before the error check to ensure
+the PM runtime reference count is always decremented after
+pm_runtime_get_sync(), regardless of whether the read operation
+succeeds or fails.
+
+Fixes: 1f0477f18306 ("iio: light: new driver for the ROHM BH1780")
+Signed-off-by: Antoniu Miclaus <antoniu.miclaus@analog.com>
+Reviewed-by: Linus Walleij <linusw@kernel.org>
+Cc: <Stable@vger.kernel.org>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+[ moved both pm_runtime_mark_last_busy() and pm_runtime_put_autosuspend() before the error check instead of just pm_runtime_put_autosuspend() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/light/bh1780.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/iio/light/bh1780.c
++++ b/drivers/iio/light/bh1780.c
+@@ -109,10 +109,10 @@ static int bh1780_read_raw(struct iio_de
+ case IIO_LIGHT:
+ pm_runtime_get_sync(&bh1780->client->dev);
+ value = bh1780_read_word(bh1780, BH1780_REG_DLOW);
+- if (value < 0)
+- return value;
+ pm_runtime_mark_last_busy(&bh1780->client->dev);
+ pm_runtime_put_autosuspend(&bh1780->client->dev);
++ if (value < 0)
++ return value;
+ *val = value;
+
+ return IIO_VAL_INT;
--- /dev/null
+From stable+bounces-226945-greg=kroah.com@vger.kernel.org Wed Mar 18 02:07:23 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 17 Mar 2026 21:06:50 -0400
+Subject: iomap: reject delalloc mappings during writeback
+To: stable@vger.kernel.org
+Cc: "Darrick J. Wong" <djwong@kernel.org>, Christoph Hellwig <hch@lst.de>, Carlos Maiolino <cmaiolino@redhat.com>, Christian Brauner <brauner@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318010650.420596-1-sashal@kernel.org>
+
+From: "Darrick J. Wong" <djwong@kernel.org>
+
+[ Upstream commit d320f160aa5ff36cdf83c645cca52b615e866e32 ]
+
+Filesystems should never provide a delayed allocation mapping to
+writeback; they're supposed to allocate the space before replying.
+This can lead to weird IO errors and crashes in the block layer if the
+filesystem is being malicious, or if it hadn't set iomap->dev because
+it's a delalloc mapping.
+
+Fix this by failing writeback on delalloc mappings. Currently no
+filesystems actually misbehave in this manner, but we ought to be
+stricter about things like that.
+
+Cc: stable@vger.kernel.org # v5.5
+Fixes: 598ecfbaa742ac ("iomap: lift the xfs writeback code to iomap")
+Signed-off-by: Darrick J. Wong <djwong@kernel.org>
+Link: https://patch.msgid.link/20260302173002.GL13829@frogsfrogsfrogs
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
+Signed-off-by: Christian Brauner <brauner@kernel.org>
+[ switch -> if ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/iomap/buffered-io.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1620,10 +1620,13 @@ iomap_writepage_map(struct iomap_writepa
+ if (error)
+ break;
+ trace_iomap_writepage_map(inode, &wpc->iomap);
+- if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
+- continue;
+ if (wpc->iomap.type == IOMAP_HOLE)
+ continue;
++ if (WARN_ON_ONCE(wpc->iomap.type != IOMAP_MAPPED &&
++ wpc->iomap.type != IOMAP_UNWRITTEN)) {
++ error = -EIO;
++ break;
++ }
+ iomap_add_to_ioend(inode, pos, folio, iop, wpc, wbc,
+ &submit_list);
+ count++;
--- /dev/null
+From stable+bounces-227308-greg=kroah.com@vger.kernel.org Thu Mar 19 15:07:25 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 10:06:47 -0400
+Subject: kprobes: Remove unneeded goto
+To: stable@vger.kernel.org
+Cc: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319140648.2491064-1-sashal@kernel.org>
+
+From: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>
+
+[ Upstream commit 5e5b8b49335971b68b54afeb0e7ded004945af07 ]
+
+Remove unneeded gotos. Since each label referred to by these gotos has
+only a single reference, we can replace those gotos with the
+referred-to code.
+
+Link: https://lore.kernel.org/all/173371211203.480397.13988907319659165160.stgit@devnote2/
+
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Stable-dep-of: 5ef268cb7a0a ("kprobes: Remove unneeded warnings from __arm_kprobe_ftrace()")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/kprobes.c | 45 +++++++++++++++++++++------------------------
+ 1 file changed, 21 insertions(+), 24 deletions(-)
+
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1082,20 +1082,18 @@ static int __arm_kprobe_ftrace(struct kp
+
+ if (*cnt == 0) {
+ ret = register_ftrace_function(ops);
+- if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret))
+- goto err_ftrace;
++ if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) {
++ /*
++ * At this point, sinec ops is not registered, we should be sefe from
++ * registering empty filter.
++ */
++ ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
++ return ret;
++ }
+ }
+
+ (*cnt)++;
+ return ret;
+-
+-err_ftrace:
+- /*
+- * At this point, sinec ops is not registered, we should be sefe from
+- * registering empty filter.
+- */
+- ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
+- return ret;
+ }
+
+ static int arm_kprobe_ftrace(struct kprobe *p)
+@@ -1447,7 +1445,7 @@ _kprobe_addr(kprobe_opcode_t *addr, cons
+ unsigned long offset, bool *on_func_entry)
+ {
+ if ((symbol_name && addr) || (!symbol_name && !addr))
+- goto invalid;
++ return ERR_PTR(-EINVAL);
+
+ if (symbol_name) {
+ /*
+@@ -1477,11 +1475,10 @@ _kprobe_addr(kprobe_opcode_t *addr, cons
+ * at the start of the function.
+ */
+ addr = arch_adjust_kprobe_addr((unsigned long)addr, offset, on_func_entry);
+- if (addr)
+- return addr;
++ if (!addr)
++ return ERR_PTR(-EINVAL);
+
+-invalid:
+- return ERR_PTR(-EINVAL);
++ return addr;
+ }
+
+ static kprobe_opcode_t *kprobe_addr(struct kprobe *p)
+@@ -1504,15 +1501,15 @@ static struct kprobe *__get_valid_kprobe
+ if (unlikely(!ap))
+ return NULL;
+
+- if (p != ap) {
+- list_for_each_entry(list_p, &ap->list, list)
+- if (list_p == p)
+- /* kprobe p is a valid probe */
+- goto valid;
+- return NULL;
+- }
+-valid:
+- return ap;
++ if (p == ap)
++ return ap;
++
++ list_for_each_entry(list_p, &ap->list, list)
++ if (list_p == p)
++ /* kprobe p is a valid probe */
++ return ap;
++
++ return NULL;
+ }
+
+ /*
--- /dev/null
+From stable+bounces-227309-greg=kroah.com@vger.kernel.org Thu Mar 19 15:09:33 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 10:06:48 -0400
+Subject: kprobes: Remove unneeded warnings from __arm_kprobe_ftrace()
+To: stable@vger.kernel.org
+Cc: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>, Zw Tang <shicenci@gmail.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319140648.2491064-2-sashal@kernel.org>
+
+From: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>
+
+[ Upstream commit 5ef268cb7a0aac55521fd9881f1939fa94a8988e ]
+
+Remove unneeded warnings for handled errors from __arm_kprobe_ftrace()
+because all callers handle the error correctly.
+
+Link: https://lore.kernel.org/all/177261531182.1312989.8737778408503961141.stgit@mhiramat.tok.corp.google.com/
+
+Reported-by: Zw Tang <shicenci@gmail.com>
+Closes: https://lore.kernel.org/all/CAPHJ_V+J6YDb_wX2nhXU6kh466Dt_nyDSas-1i_Y8s7tqY-Mzw@mail.gmail.com/
+Fixes: 9c89bb8e3272 ("kprobes: treewide: Cleanup the error messages for kprobes")
+Cc: stable@vger.kernel.org
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/kprobes.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1077,12 +1077,12 @@ static int __arm_kprobe_ftrace(struct kp
+ lockdep_assert_held(&kprobe_mutex);
+
+ ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 0, 0);
+- if (WARN_ONCE(ret < 0, "Failed to arm kprobe-ftrace at %pS (error %d)\n", p->addr, ret))
++ if (ret < 0)
+ return ret;
+
+ if (*cnt == 0) {
+ ret = register_ftrace_function(ops);
+- if (WARN(ret < 0, "Failed to register kprobe-ftrace (error %d)\n", ret)) {
++ if (ret < 0) {
+ /*
+ * At this point, sinec ops is not registered, we should be sefe from
+ * registering empty filter.
--- /dev/null
+From stable+bounces-219132-greg=kroah.com@vger.kernel.org Wed Feb 25 03:24:56 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 24 Feb 2026 21:24:47 -0500
+Subject: ksmbd: call ksmbd_vfs_kern_path_end_removing() on some error paths
+To: stable@vger.kernel.org
+Cc: Fedor Pchelkin <pchelkin@ispras.ru>, Namjae Jeon <linkinjeon@kernel.org>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260225022447.3806589-1-sashal@kernel.org>
+
+From: Fedor Pchelkin <pchelkin@ispras.ru>
+
+[ Upstream commit a09dc10d1353f0e92c21eae2a79af1c2b1ddcde8 ]
+
+There are two places where ksmbd_vfs_kern_path_end_removing() needs to be
+called in order to balance what the corresponding successful call to
+ksmbd_vfs_kern_path_start_removing() has done, i.e. drop inode locks and
+put the taken references. Otherwise there might be potential deadlocks
+and unbalanced locks which are caught like:
+
+BUG: workqueue leaked lock or atomic: kworker/5:21/0x00000000/7596
+ last function: handle_ksmbd_work
+2 locks held by kworker/5:21/7596:
+ #0: ffff8881051ae448 (sb_writers#3){.+.+}-{0:0}, at: ksmbd_vfs_kern_path_locked+0x142/0x660
+ #1: ffff888130e966c0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: ksmbd_vfs_kern_path_locked+0x17d/0x660
+CPU: 5 PID: 7596 Comm: kworker/5:21 Not tainted 6.1.162-00456-gc29b353f383b #138
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-debian-1.17.0-1 04/01/2014
+Workqueue: ksmbd-io handle_ksmbd_work
+Call Trace:
+ <TASK>
+ dump_stack_lvl+0x44/0x5b
+ process_one_work.cold+0x57/0x5c
+ worker_thread+0x82/0x600
+ kthread+0x153/0x190
+ ret_from_fork+0x22/0x30
+ </TASK>
+
+Found by Linux Verification Center (linuxtesting.org).
+
+Fixes: d5fc1400a34b ("smb/server: avoid deadlock when linking with ReplaceIfExists")
+Cc: stable@vger.kernel.org
+Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
+Acked-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ ksmbd_vfs_kern_path_end_removing() call -> ksmbd_vfs_kern_path_unlock() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/smb2pdu.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -5693,14 +5693,14 @@ static int smb2_create_link(struct ksmbd
+ rc = -EINVAL;
+ ksmbd_debug(SMB, "cannot delete %s\n",
+ link_name);
+- goto out;
+ }
+ } else {
+ rc = -EEXIST;
+ ksmbd_debug(SMB, "link already exists\n");
+- goto out;
+ }
+ ksmbd_vfs_kern_path_unlock(&parent_path, &path);
++ if (rc)
++ goto out;
+ }
+ rc = ksmbd_vfs_link(work, target_name, link_name);
+ if (rc)
--- /dev/null
+From stable+bounces-227086-greg=kroah.com@vger.kernel.org Wed Mar 18 15:53:37 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 10:46:27 -0400
+Subject: ksmbd: Don't log keys in SMB3 signing and encryption key generation
+To: stable@vger.kernel.org
+Cc: Thorsten Blum <thorsten.blum@linux.dev>, Namjae Jeon <linkinjeon@kernel.org>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318144627.850113-1-sashal@kernel.org>
+
+From: Thorsten Blum <thorsten.blum@linux.dev>
+
+[ Upstream commit 441336115df26b966575de56daf7107ed474faed ]
+
+When KSMBD_DEBUG_AUTH logging is enabled, generate_smb3signingkey() and
+generate_smb3encryptionkey() log the session, signing, encryption, and
+decryption key bytes. Remove the logs to avoid exposing credentials.
+
+Fixes: e2f34481b24d ("cifsd: add server-side procedures for SMB3")
+Cc: stable@vger.kernel.org
+Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
+Acked-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/auth.c | 22 ++--------------------
+ 1 file changed, 2 insertions(+), 20 deletions(-)
+
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -795,12 +795,8 @@ static int generate_smb3signingkey(struc
+ if (!(conn->dialect >= SMB30_PROT_ID && signing->binding))
+ memcpy(chann->smb3signingkey, key, SMB3_SIGN_KEY_SIZE);
+
+- ksmbd_debug(AUTH, "dumping generated AES signing keys\n");
++ ksmbd_debug(AUTH, "generated SMB3 signing key\n");
+ ksmbd_debug(AUTH, "Session Id %llu\n", sess->id);
+- ksmbd_debug(AUTH, "Session Key %*ph\n",
+- SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key);
+- ksmbd_debug(AUTH, "Signing Key %*ph\n",
+- SMB3_SIGN_KEY_SIZE, key);
+ return 0;
+ }
+
+@@ -864,23 +860,9 @@ static int generate_smb3encryptionkey(st
+ if (rc)
+ return rc;
+
+- ksmbd_debug(AUTH, "dumping generated AES encryption keys\n");
++ ksmbd_debug(AUTH, "generated SMB3 encryption/decryption keys\n");
+ ksmbd_debug(AUTH, "Cipher type %d\n", conn->cipher_type);
+ ksmbd_debug(AUTH, "Session Id %llu\n", sess->id);
+- ksmbd_debug(AUTH, "Session Key %*ph\n",
+- SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key);
+- if (conn->cipher_type == SMB2_ENCRYPTION_AES256_CCM ||
+- conn->cipher_type == SMB2_ENCRYPTION_AES256_GCM) {
+- ksmbd_debug(AUTH, "ServerIn Key %*ph\n",
+- SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3encryptionkey);
+- ksmbd_debug(AUTH, "ServerOut Key %*ph\n",
+- SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3decryptionkey);
+- } else {
+- ksmbd_debug(AUTH, "ServerIn Key %*ph\n",
+- SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3encryptionkey);
+- ksmbd_debug(AUTH, "ServerOut Key %*ph\n",
+- SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3decryptionkey);
+- }
+ return 0;
+ }
+
--- /dev/null
+From stable+bounces-225697-greg=kroah.com@vger.kernel.org Mon Mar 16 21:18:34 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 16:18:01 -0400
+Subject: KVM: SVM: Set/clear CR8 write interception when AVIC is (de)activated
+To: stable@vger.kernel.org
+Cc: Sean Christopherson <seanjc@google.com>, Jim Mattson <jmattson@google.com>, "Naveen N Rao (AMD)" <naveen@kernel.org>, "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>, Paolo Bonzini <pbonzini@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316201801.1376275-1-sashal@kernel.org>
+
+From: Sean Christopherson <seanjc@google.com>
+
+[ Upstream commit 87d0f901a9bd8ae6be57249c737f20ac0cace93d ]
+
+Explicitly set/clear CR8 write interception when AVIC is (de)activated to
+fix a bug where KVM leaves the interception enabled after AVIC is
+activated. E.g. if KVM emulates INIT=>WFS while AVIC is deactivated, CR8
+will remain intercepted in perpetuity.
+
+On its own, the dangling CR8 intercept is "just" a performance issue, but
+combined with the TPR sync bug fixed by commit d02e48830e3f ("KVM: SVM:
+Sync TPR from LAPIC into VMCB::V_TPR even if AVIC is active"), the dangling
+intercept is fatal to Windows guests as the TPR seen by hardware gets
+wildly out of sync with reality.
+
+Note, VMX isn't affected by the bug as TPR_THRESHOLD is explicitly ignored
+when Virtual Interrupt Delivery is enabled, i.e. when APICv is active in
+KVM's world. I.e. there's no need to trigger update_cr8_intercept(), this
+is firmly an SVM implementation flaw/detail.
+
+WARN if KVM gets a CR8 write #VMEXIT while AVIC is active, as KVM should
+never enter the guest with AVIC enabled and CR8 writes intercepted.
+
+Fixes: 3bbf3565f48c ("svm: Do not intercept CR8 when enable AVIC")
+Cc: stable@vger.kernel.org
+Cc: Jim Mattson <jmattson@google.com>
+Cc: Naveen N Rao (AMD) <naveen@kernel.org>
+Cc: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
+Reviewed-by: Naveen N Rao (AMD) <naveen@kernel.org>
+Reviewed-by: Jim Mattson <jmattson@google.com>
+Link: https://patch.msgid.link/20260203190711.458413-3-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+[Squash fix to avic_deactivate_vmcb. - Paolo]
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+[ adjusted context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/avic.c | 6 +++++-
+ arch/x86/kvm/svm/svm.c | 7 ++++---
+ 2 files changed, 9 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -79,9 +79,10 @@ static void avic_activate_vmcb(struct vc
+
+ vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
+ vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
+-
+ vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
+
++ svm_clr_intercept(svm, INTERCEPT_CR8_WRITE);
++
+ /* Note:
+ * KVM can support hybrid-AVIC mode, where KVM emulates x2APIC
+ * MSR accesses, while interrupt injection to a running vCPU
+@@ -116,6 +117,9 @@ static void avic_deactivate_vmcb(struct
+ vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
+ vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
+
++ if (!sev_es_guest(svm->vcpu.kvm))
++ svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
++
+ /*
+ * If running nested and the guest uses its own MSR bitmap, there
+ * is no need to update L0's msr bitmap
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1192,8 +1192,7 @@ static void init_vmcb(struct kvm_vcpu *v
+ svm_set_intercept(svm, INTERCEPT_CR0_WRITE);
+ svm_set_intercept(svm, INTERCEPT_CR3_WRITE);
+ svm_set_intercept(svm, INTERCEPT_CR4_WRITE);
+- if (!kvm_vcpu_apicv_active(vcpu))
+- svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
++ svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
+
+ set_dr_intercepts(svm);
+
+@@ -2690,9 +2689,11 @@ static int dr_interception(struct kvm_vc
+
+ static int cr8_write_interception(struct kvm_vcpu *vcpu)
+ {
++ u8 cr8_prev = kvm_get_cr8(vcpu);
+ int r;
+
+- u8 cr8_prev = kvm_get_cr8(vcpu);
++ WARN_ON_ONCE(kvm_vcpu_apicv_active(vcpu));
++
+ /* instruction emulation calls kvm_set_cr8() */
+ r = cr_interception(vcpu);
+ if (lapic_in_kernel(vcpu))
--- /dev/null
+From sashal@kernel.org Tue Mar 17 17:25:06 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 17 Mar 2026 12:25:01 -0400
+Subject: mm/kfence: disable KFENCE upon KASAN HW tags enablement
+To: stable@vger.kernel.org
+Cc: Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>, Andrey Konovalov <andreyknvl@gmail.com>, Andrey Ryabinin <ryabinin.a.a@gmail.com>, Dmitry Vyukov <dvyukov@google.com>, Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at>, Greg KH <gregkh@linuxfoundation.org>, Kees Cook <kees@kernel.org>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260317162502.213232-1-sashal@kernel.org>
+
+From: Alexander Potapenko <glider@google.com>
+
+[ Upstream commit 09833d99db36d74456a4d13eb29c32d56ff8f2b6 ]
+
+KFENCE does not currently support KASAN hardware tags. As a result, the
+two features are incompatible when enabled simultaneously.
+
+Given that MTE provides deterministic protection and KFENCE is a
+sampling-based debugging tool, prioritize the stronger hardware
+protections. Disable KFENCE initialization and free the pre-allocated
+pool if KASAN hardware tags are detected to ensure the system maintains
+the security guarantees provided by MTE.
+
+Link: https://lkml.kernel.org/r/20260213095410.1862978-1-glider@google.com
+Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
+Signed-off-by: Alexander Potapenko <glider@google.com>
+Suggested-by: Marco Elver <elver@google.com>
+Reviewed-by: Marco Elver <elver@google.com>
+Cc: Andrey Konovalov <andreyknvl@gmail.com>
+Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+Cc: Dmitry Vyukov <dvyukov@google.com>
+Cc: Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at>
+Cc: Greg KH <gregkh@linuxfoundation.org>
+Cc: Kees Cook <kees@kernel.org>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/kfence/core.c | 15 +++++++++++++++
+ 1 file changed, 15 insertions(+)
+
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -13,6 +13,7 @@
+ #include <linux/hash.h>
+ #include <linux/irq_work.h>
+ #include <linux/jhash.h>
++#include <linux/kasan-enabled.h>
+ #include <linux/kcsan-checks.h>
+ #include <linux/kfence.h>
+ #include <linux/kmemleak.h>
+@@ -844,6 +845,20 @@ void __init kfence_alloc_pool(void)
+ if (!kfence_sample_interval)
+ return;
+
++ /*
++ * If KASAN hardware tags are enabled, disable KFENCE, because it
++ * does not support MTE yet.
++ */
++ if (kasan_hw_tags_enabled()) {
++ pr_info("disabled as KASAN HW tags are enabled\n");
++ if (__kfence_pool) {
++ memblock_free(__kfence_pool, KFENCE_POOL_SIZE);
++ __kfence_pool = NULL;
++ }
++ kfence_sample_interval = 0;
++ return;
++ }
++
+ /* if the pool has already been initialized by arch, skip the below. */
+ if (__kfence_pool)
+ return;
--- /dev/null
+From sashal@kernel.org Tue Mar 17 16:12:38 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 17 Mar 2026 11:12:34 -0400
+Subject: mm/kfence: fix KASAN hardware tag faults during late enablement
+To: stable@vger.kernel.org
+Cc: Alexander Potapenko <glider@google.com>, Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at>, Andrey Konovalov <andreyknvl@gmail.com>, Andrey Ryabinin <ryabinin.a.a@gmail.com>, Dmitry Vyukov <dvyukov@google.com>, Greg KH <gregkh@linuxfoundation.org>, Kees Cook <kees@kernel.org>, Marco Elver <elver@google.com>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260317151234.185462-1-sashal@kernel.org>
+
+From: Alexander Potapenko <glider@google.com>
+
+[ Upstream commit d155aab90fffa00f93cea1f107aef0a3d548b2ff ]
+
+When KASAN hardware tags are enabled, re-enabling KFENCE late (via
+/sys/module/kfence/parameters/sample_interval) causes KASAN faults.
+
+This happens because the KFENCE pool and metadata are allocated via the
+page allocator, which tags the memory, while KFENCE continues to access it
+using untagged pointers during initialization.
+
+Use __GFP_SKIP_KASAN for late KFENCE pool and metadata allocations to
+ensure the memory remains untagged, consistent with early allocations from
+memblock. To support this, add __GFP_SKIP_KASAN to the allowlist in
+__alloc_contig_verify_gfp_mask().
+
+Link: https://lkml.kernel.org/r/20260220144940.2779209-1-glider@google.com
+Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
+Signed-off-by: Alexander Potapenko <glider@google.com>
+Suggested-by: Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at>
+Cc: Andrey Konovalov <andreyknvl@gmail.com>
+Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+Cc: Dmitry Vyukov <dvyukov@google.com>
+Cc: Greg KH <gregkh@linuxfoundation.org>
+Cc: Kees Cook <kees@kernel.org>
+Cc: Marco Elver <elver@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+[ expand __GFP_SKIP_KASAN + nr_pages_pool => nr_pages ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/kfence/core.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -897,7 +897,8 @@ static int kfence_init_late(void)
+ #ifdef CONFIG_CONTIG_ALLOC
+ struct page *pages;
+
+- pages = alloc_contig_pages(nr_pages, GFP_KERNEL, first_online_node, NULL);
++ pages = alloc_contig_pages(nr_pages, GFP_KERNEL | __GFP_SKIP_KASAN_UNPOISON |
++ __GFP_SKIP_KASAN_POISON, first_online_node, NULL);
+ if (!pages)
+ return -ENOMEM;
+ __kfence_pool = page_to_virt(pages);
+@@ -906,7 +907,9 @@ static int kfence_init_late(void)
+ pr_warn("KFENCE_NUM_OBJECTS too large for buddy allocator\n");
+ return -EINVAL;
+ }
+- __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
++ __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL |
++ __GFP_SKIP_KASAN_UNPOISON |
++ __GFP_SKIP_KASAN_POISON);
+ if (!__kfence_pool)
+ return -ENOMEM;
+ #endif
--- /dev/null
+From stable+bounces-223688-greg=kroah.com@vger.kernel.org Mon Mar 9 16:24:40 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 11:24:20 -0400
+Subject: mptcp: pm: avoid sending RM_ADDR over same subflow
+To: stable@vger.kernel.org
+Cc: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Frank Lorenz <lorenz-frank@web.de>, Mat Martineau <martineau@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309152420.1280295-1-sashal@kernel.org>
+
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+
+[ Upstream commit fb8d0bccb221080630efcd9660c9f9349e53cc9e ]
+
+An RM_ADDR is sent over an active subflow, the first one in the subflows
+list. There is then a high chance the initial subflow is picked. With
+the in-kernel PM, when an endpoint is removed, a RM_ADDR is sent, then
+linked subflows are closed. This is done for each active MPTCP
+connection.
+
+MPTCP endpoints are likely removed because the attached network is no
+longer available or usable. In this case, it is better to avoid sending
+this RM_ADDR over the subflow that is going to be removed, but prefer
+sending it over another active and non-stale subflow, if any.
+
+This modification avoids situations where the other end is not notified
+when a subflow is no longer usable: typically when the endpoint linked
+to the initial subflow is removed, especially on the server side.
+
+Fixes: 8dd5efb1f91b ("mptcp: send ack for rm_addr")
+Cc: stable@vger.kernel.org
+Reported-by: Frank Lorenz <lorenz-frank@web.de>
+Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/612
+Reviewed-by: Mat Martineau <martineau@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-2-4b5462b6f016@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ adapted to _nl-prefixed function names in pm_netlink.c and omitted stale subflow fallback ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/pm.c | 2 +-
+ net/mptcp/pm_netlink.c | 43 ++++++++++++++++++++++++++++++++++++++-----
+ net/mptcp/protocol.h | 2 ++
+ 3 files changed, 41 insertions(+), 6 deletions(-)
+
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -55,7 +55,7 @@ int mptcp_pm_remove_addr(struct mptcp_so
+ msk->pm.rm_list_tx = *rm_list;
+ rm_addr |= BIT(MPTCP_RM_ADDR_SIGNAL);
+ WRITE_ONCE(msk->pm.addr_signal, rm_addr);
+- mptcp_pm_nl_addr_send_ack(msk);
++ mptcp_pm_nl_addr_send_ack_avoid_list(msk, rm_list);
+ return 0;
+ }
+
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -850,9 +850,23 @@ bool mptcp_pm_nl_is_init_remote_addr(str
+ return mptcp_addresses_equal(&mpc_remote, remote, remote->port);
+ }
+
+-void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
++static bool subflow_in_rm_list(const struct mptcp_subflow_context *subflow,
++ const struct mptcp_rm_list *rm_list)
++{
++ u8 i, id = subflow_get_local_id(subflow);
++
++ for (i = 0; i < rm_list->nr; i++) {
++ if (rm_list->ids[i] == id)
++ return true;
++ }
++
++ return false;
++}
++
++void mptcp_pm_nl_addr_send_ack_avoid_list(struct mptcp_sock *msk,
++ const struct mptcp_rm_list *rm_list)
+ {
+- struct mptcp_subflow_context *subflow;
++ struct mptcp_subflow_context *subflow, *same_id = NULL;
+
+ msk_owned_by_me(msk);
+ lockdep_assert_held(&msk->pm.lock);
+@@ -862,11 +876,30 @@ void mptcp_pm_nl_addr_send_ack(struct mp
+ return;
+
+ mptcp_for_each_subflow(msk, subflow) {
+- if (__mptcp_subflow_active(subflow)) {
+- mptcp_pm_send_ack(msk, subflow, false, false);
+- break;
++ if (!__mptcp_subflow_active(subflow))
++ continue;
++
++ if (unlikely(rm_list &&
++ subflow_in_rm_list(subflow, rm_list))) {
++ if (!same_id)
++ same_id = subflow;
++ } else {
++ goto send_ack;
+ }
+ }
++
++ if (same_id)
++ subflow = same_id;
++ else
++ return;
++
++send_ack:
++ mptcp_pm_send_ack(msk, subflow, false, false);
++}
++
++void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
++{
++ mptcp_pm_nl_addr_send_ack_avoid_list(msk, NULL);
+ }
+
+ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -818,6 +818,8 @@ void mptcp_pm_add_addr_send_ack(struct m
+ bool mptcp_pm_nl_is_init_remote_addr(struct mptcp_sock *msk,
+ const struct mptcp_addr_info *remote);
+ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk);
++void mptcp_pm_nl_addr_send_ack_avoid_list(struct mptcp_sock *msk,
++ const struct mptcp_rm_list *rm_list);
+ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ const struct mptcp_rm_list *rm_list);
+ void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup);
--- /dev/null
+From stable+bounces-223698-greg=kroah.com@vger.kernel.org Mon Mar 9 17:15:16 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 12:11:38 -0400
+Subject: mptcp: pm: in-kernel: always mark signal+subflow endp as used
+To: stable@vger.kernel.org
+Cc: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Mat Martineau <martineau@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309161138.1300644-1-sashal@kernel.org>
+
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+
+[ Upstream commit 579a752464a64cb5f9139102f0e6b90a1f595ceb ]
+
+Syzkaller managed to find a combination of actions that was generating
+this warning:
+
+ msk->pm.local_addr_used == 0
+ WARNING: net/mptcp/pm_kernel.c:1071 at __mark_subflow_endp_available net/mptcp/pm_kernel.c:1071 [inline], CPU#1: syz.2.17/961
+ WARNING: net/mptcp/pm_kernel.c:1071 at mptcp_nl_remove_subflow_and_signal_addr net/mptcp/pm_kernel.c:1103 [inline], CPU#1: syz.2.17/961
+ WARNING: net/mptcp/pm_kernel.c:1071 at mptcp_pm_nl_del_addr_doit+0x81d/0x8f0 net/mptcp/pm_kernel.c:1210, CPU#1: syz.2.17/961
+ Modules linked in:
+ CPU: 1 UID: 0 PID: 961 Comm: syz.2.17 Not tainted 6.19.0-08368-gfafda3b4b06b #22 PREEMPT(full)
+ Hardware name: QEMU Ubuntu 25.10 PC v2 (i440FX + PIIX, + 10.1 machine, 1996), BIOS 1.17.0-debian-1.17.0-1build1 04/01/2014
+ RIP: 0010:__mark_subflow_endp_available net/mptcp/pm_kernel.c:1071 [inline]
+ RIP: 0010:mptcp_nl_remove_subflow_and_signal_addr net/mptcp/pm_kernel.c:1103 [inline]
+ RIP: 0010:mptcp_pm_nl_del_addr_doit+0x81d/0x8f0 net/mptcp/pm_kernel.c:1210
+ Code: 89 c5 e8 46 30 6f fe e9 21 fd ff ff 49 83 ed 80 e8 38 30 6f fe 4c 89 ef be 03 00 00 00 e8 db 49 df fe eb ac e8 24 30 6f fe 90 <0f> 0b 90 e9 1d ff ff ff e8 16 30 6f fe eb 05 e8 0f 30 6f fe e8 9a
+ RSP: 0018:ffffc90001663880 EFLAGS: 00010293
+ RAX: ffffffff82de1a6c RBX: 0000000000000000 RCX: ffff88800722b500
+ RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
+ RBP: ffff8880158b22d0 R08: 0000000000010425 R09: ffffffffffffffff
+ R10: ffffffff82de18ba R11: 0000000000000000 R12: ffff88800641a640
+ R13: ffff8880158b1880 R14: ffff88801ec3c900 R15: ffff88800641a650
+ FS: 00005555722c3500(0000) GS:ffff8880f909d000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 00007f66346e0f60 CR3: 000000001607c000 CR4: 0000000000350ef0
+ Call Trace:
+ <TASK>
+ genl_family_rcv_msg_doit+0x117/0x180 net/netlink/genetlink.c:1115
+ genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
+ genl_rcv_msg+0x3a8/0x3f0 net/netlink/genetlink.c:1210
+ netlink_rcv_skb+0x16d/0x240 net/netlink/af_netlink.c:2550
+ genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
+ netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
+ netlink_unicast+0x3e9/0x4c0 net/netlink/af_netlink.c:1344
+ netlink_sendmsg+0x4aa/0x5b0 net/netlink/af_netlink.c:1894
+ sock_sendmsg_nosec net/socket.c:727 [inline]
+ __sock_sendmsg+0xc9/0xf0 net/socket.c:742
+ ____sys_sendmsg+0x272/0x3b0 net/socket.c:2592
+ ___sys_sendmsg+0x2de/0x320 net/socket.c:2646
+ __sys_sendmsg net/socket.c:2678 [inline]
+ __do_sys_sendmsg net/socket.c:2683 [inline]
+ __se_sys_sendmsg net/socket.c:2681 [inline]
+ __x64_sys_sendmsg+0x110/0x1a0 net/socket.c:2681
+ do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
+ do_syscall_64+0x143/0x440 arch/x86/entry/syscall_64.c:94
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+ RIP: 0033:0x7f66346f826d
+ Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
+ RSP: 002b:00007ffc83d8bdc8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
+ RAX: ffffffffffffffda RBX: 00007f6634985fa0 RCX: 00007f66346f826d
+ RDX: 00000000040000b0 RSI: 0000200000000740 RDI: 0000000000000007
+ RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
+ R10: 0000000000000000 R11: 0000000000000246 R12: 00007f6634985fa8
+ R13: 00007f6634985fac R14: 0000000000000000 R15: 0000000000001770
+ </TASK>
+
+The actions that caused that seem to be:
+
+ - Set the MPTCP subflows limit to 0
+ - Create an MPTCP endpoint with both the 'signal' and 'subflow' flags
+ - Create a new MPTCP connection from a different address: an ADD_ADDR
+ linked to the MPTCP endpoint will be sent ('signal' flag), but no
+   subflow is initiated ('subflow' flag)
+ - Remove the MPTCP endpoint
+
+In this case, msk->pm.local_addr_used has been kept to 0 -- because no
+subflows have been created -- but the corresponding bit in
+msk->pm.id_avail_bitmap has been cleared when the ADD_ADDR has been
+sent. This later causes a splat when removing the MPTCP endpoint because
+msk->pm.local_addr_used has been kept to 0.
+
+Now, if an endpoint has both the signal and subflow flags, but it is not
+possible to create subflows because of the limits or the c-flag case,
+then the local endpoint counter is still incremented: the endpoint is
+used at the end. This avoids issues later when removing the endpoint and
+calling __mark_subflow_endp_available(), which expects
+msk->pm.local_addr_used to have been previously incremented if the
+endpoint was marked as used according to msk->pm.id_avail_bitmap.
+
+Note that the signal_and_subflow variable is reset to false when the
+limits and the c-flag case allow subflow creation. Also, local_addr_used
+is only incremented for non-ID0 subflows.
+
+Fixes: 85df533a787b ("mptcp: pm: do not ignore 'subflow' if 'signal' flag is also set")
+Cc: stable@vger.kernel.org
+Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/613
+Reviewed-by: Mat Martineau <martineau@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-4-4b5462b6f016@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ pm_kernel.c => pm_netlink.c ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/pm_netlink.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -666,6 +666,15 @@ subflow:
+ }
+
+ exit:
++ /* If an endpoint has both the signal and subflow flags, but it is not
++ * possible to create subflows -- the 'while' loop body above never
++ * executed -- then still mark the endp as used, which is somehow the
++ * case. This avoids issues later when removing the endpoint and calling
++ * __mark_subflow_endp_available(), which expects the increment here.
++ */
++ if (signal_and_subflow && local.addr.id != msk->mpc_endpoint_id)
++ msk->pm.local_addr_used++;
++
+ mptcp_pm_nl_check_work_pending(msk);
+ }
+
--- /dev/null
+From stable+bounces-227561-greg=kroah.com@vger.kernel.org Fri Mar 20 16:25:13 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 11:08:51 -0400
+Subject: net: macb: Introduce gem_init_rx_ring()
+To: stable@vger.kernel.org
+Cc: Kevin Hao <haokexin@gmail.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260320150852.4191566-2-sashal@kernel.org>
+
+From: Kevin Hao <haokexin@gmail.com>
+
+[ Upstream commit 1a7124ecd655bcaf1845197fe416aa25cff4c3ea ]
+
+Extract the initialization code for the GEM RX ring into a new function.
+This change will be utilized in a subsequent patch. No functional changes
+are introduced.
+
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Reviewed-by: Simon Horman <horms@kernel.org>
+Link: https://patch.msgid.link/20260312-macb-versal-v1-1-467647173fa4@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Stable-dep-of: 718d0766ce4c ("net: macb: Reinitialize tx/rx queue pointer registers and rx ring during resume")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c | 13 +++++++++----
+ 1 file changed, 9 insertions(+), 4 deletions(-)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2638,6 +2638,14 @@ static void macb_init_tieoff(struct macb
+ desc->ctrl = 0;
+ }
+
++static void gem_init_rx_ring(struct macb_queue *queue)
++{
++ queue->rx_tail = 0;
++ queue->rx_prepared_head = 0;
++
++ gem_rx_refill(queue);
++}
++
+ static void gem_init_rings(struct macb *bp)
+ {
+ struct macb_queue *queue;
+@@ -2655,10 +2663,7 @@ static void gem_init_rings(struct macb *
+ queue->tx_head = 0;
+ queue->tx_tail = 0;
+
+- queue->rx_tail = 0;
+- queue->rx_prepared_head = 0;
+-
+- gem_rx_refill(queue);
++ gem_init_rx_ring(queue);
+ }
+
+ macb_init_tieoff(bp);
--- /dev/null
+From stable+bounces-227560-greg=kroah.com@vger.kernel.org Fri Mar 20 16:14:18 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 11:08:50 -0400
+Subject: net: macb: queue tie-off or disable during WOL suspend
+To: stable@vger.kernel.org
+Cc: Vineeth Karumanchi <vineeth.karumanchi@amd.com>, Harini Katakam <harini.katakam@amd.com>, Andrew Lunn <andrew@lunn.ch>, Claudiu Beznea <claudiu.beznea@tuxon.dev>, Paolo Abeni <pabeni@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260320150852.4191566-1-sashal@kernel.org>
+
+From: Vineeth Karumanchi <vineeth.karumanchi@amd.com>
+
+[ Upstream commit 759cc793ebfc2d1a02f357ae97e5dcdcd63f758f ]
+
+When GEM is used as a wake device, it is not mandatory for the RX DMA
+to be active. The RX engine in IP only needs to receive and identify
+a wake packet through an interrupt. The wake packet is of no further
+significance; hence, it is not required to be copied into memory.
+By disabling RX DMA during suspend, we can avoid unnecessary DMA
+processing of any incoming traffic.
+
+During suspend, perform either of the below operations:
+
+- tie-off/dummy descriptor: Disable unused queues by connecting
+ them to a looped descriptor chain without free slots.
+
+- queue disable: The newer IP version allows disabling individual queues.
+
+Co-developed-by: Harini Katakam <harini.katakam@amd.com>
+Signed-off-by: Harini Katakam <harini.katakam@amd.com>
+Signed-off-by: Vineeth Karumanchi <vineeth.karumanchi@amd.com>
+Reviewed-by: Andrew Lunn <andrew@lunn.ch>
+Reviewed-by: Claudiu Beznea <claudiu.beznea@tuxon.dev>
+Tested-by: Claudiu Beznea <claudiu.beznea@tuxon.dev> # on SAMA7G5
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Stable-dep-of: 718d0766ce4c ("net: macb: Reinitialize tx/rx queue pointer registers and rx ring during resume")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb.h | 7 +++
+ drivers/net/ethernet/cadence/macb_main.c | 60 +++++++++++++++++++++++++++++--
+ 2 files changed, 64 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/cadence/macb.h
++++ b/drivers/net/ethernet/cadence/macb.h
+@@ -636,6 +636,10 @@
+ #define GEM_T2OFST_OFFSET 0 /* offset value */
+ #define GEM_T2OFST_SIZE 7
+
++/* Bitfields in queue pointer registers */
++#define MACB_QUEUE_DISABLE_OFFSET 0 /* disable queue */
++#define MACB_QUEUE_DISABLE_SIZE 1
++
+ /* Offset for screener type 2 compare values (T2CMPOFST).
+ * Note the offset is applied after the specified point,
+ * e.g. GEM_T2COMPOFST_ETYPE denotes the EtherType field, so an offset
+@@ -722,6 +726,7 @@
+ #define MACB_CAPS_NEEDS_RSTONUBR 0x00000100
+ #define MACB_CAPS_MIIONRGMII 0x00000200
+ #define MACB_CAPS_NEED_TSUCLK 0x00000400
++#define MACB_CAPS_QUEUE_DISABLE 0x00000800
+ #define MACB_CAPS_PCS 0x01000000
+ #define MACB_CAPS_HIGH_SPEED 0x02000000
+ #define MACB_CAPS_CLK_HW_CHG 0x04000000
+@@ -1254,6 +1259,8 @@ struct macb {
+ u32 (*macb_reg_readl)(struct macb *bp, int offset);
+ void (*macb_reg_writel)(struct macb *bp, int offset, u32 value);
+
++ struct macb_dma_desc *rx_ring_tieoff;
++ dma_addr_t rx_ring_tieoff_dma;
+ size_t rx_buffer_size;
+
+ unsigned int rx_ring_size;
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2511,6 +2511,12 @@ static void macb_free_consistent(struct
+ unsigned int q;
+ int size;
+
++ if (bp->rx_ring_tieoff) {
++ dma_free_coherent(&bp->pdev->dev, macb_dma_desc_get_size(bp),
++ bp->rx_ring_tieoff, bp->rx_ring_tieoff_dma);
++ bp->rx_ring_tieoff = NULL;
++ }
++
+ bp->macbgem_ops.mog_free_rx_buffers(bp);
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+@@ -2602,6 +2608,16 @@ static int macb_alloc_consistent(struct
+ if (bp->macbgem_ops.mog_alloc_rx_buffers(bp))
+ goto out_err;
+
++ /* Required for tie off descriptor for PM cases */
++ if (!(bp->caps & MACB_CAPS_QUEUE_DISABLE)) {
++ bp->rx_ring_tieoff = dma_alloc_coherent(&bp->pdev->dev,
++ macb_dma_desc_get_size(bp),
++ &bp->rx_ring_tieoff_dma,
++ GFP_KERNEL);
++ if (!bp->rx_ring_tieoff)
++ goto out_err;
++ }
++
+ return 0;
+
+ out_err:
+@@ -2609,6 +2625,19 @@ out_err:
+ return -ENOMEM;
+ }
+
++static void macb_init_tieoff(struct macb *bp)
++{
++ struct macb_dma_desc *desc = bp->rx_ring_tieoff;
++
++ if (bp->caps & MACB_CAPS_QUEUE_DISABLE)
++ return;
++ /* Setup a wrapping descriptor with no free slots
++ * (WRAP and USED) to tie off/disable unused RX queues.
++ */
++ macb_set_addr(bp, desc, MACB_BIT(RX_WRAP) | MACB_BIT(RX_USED));
++ desc->ctrl = 0;
++}
++
+ static void gem_init_rings(struct macb *bp)
+ {
+ struct macb_queue *queue;
+@@ -2632,6 +2661,7 @@ static void gem_init_rings(struct macb *
+ gem_rx_refill(queue);
+ }
+
++ macb_init_tieoff(bp);
+ }
+
+ static void macb_init_rings(struct macb *bp)
+@@ -2649,6 +2679,8 @@ static void macb_init_rings(struct macb
+ bp->queues[0].tx_head = 0;
+ bp->queues[0].tx_tail = 0;
+ desc->ctrl |= MACB_BIT(TX_WRAP);
++
++ macb_init_tieoff(bp);
+ }
+
+ static void macb_reset_hw(struct macb *bp)
+@@ -5188,6 +5220,7 @@ static int __maybe_unused macb_suspend(s
+ unsigned long flags;
+ unsigned int q;
+ int err;
++ u32 tmp;
+
+ if (!device_may_wakeup(&bp->dev->dev))
+ phy_exit(bp->sgmii_phy);
+@@ -5197,17 +5230,38 @@ static int __maybe_unused macb_suspend(s
+
+ if (bp->wol & MACB_WOL_ENABLED) {
+ spin_lock_irqsave(&bp->lock, flags);
+- /* Flush all status bits */
+- macb_writel(bp, TSR, -1);
+- macb_writel(bp, RSR, -1);
++
++ /* Disable Tx and Rx engines before disabling the queues,
++ * this is mandatory as per the IP spec sheet
++ */
++ tmp = macb_readl(bp, NCR);
++ macb_writel(bp, NCR, tmp & ~(MACB_BIT(TE) | MACB_BIT(RE)));
+ for (q = 0, queue = bp->queues; q < bp->num_queues;
+ ++q, ++queue) {
++ /* Disable RX queues */
++ if (bp->caps & MACB_CAPS_QUEUE_DISABLE) {
++ queue_writel(queue, RBQP, MACB_BIT(QUEUE_DISABLE));
++ } else {
++ /* Tie off RX queues */
++ queue_writel(queue, RBQP,
++ lower_32_bits(bp->rx_ring_tieoff_dma));
++#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
++ queue_writel(queue, RBQPH,
++ upper_32_bits(bp->rx_ring_tieoff_dma));
++#endif
++ }
+ /* Disable all interrupts */
+ queue_writel(queue, IDR, -1);
+ queue_readl(queue, ISR);
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, -1);
+ }
++ /* Enable Receive engine */
++ macb_writel(bp, NCR, tmp | MACB_BIT(RE));
++ /* Flush all status bits */
++ macb_writel(bp, TSR, -1);
++ macb_writel(bp, RSR, -1);
++
+ /* Change interrupt handler and
+ * Enable WoL IRQ on queue 0
+ */
--- /dev/null
+From stable+bounces-227562-greg=kroah.com@vger.kernel.org Fri Mar 20 16:14:22 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 11:08:52 -0400
+Subject: net: macb: Reinitialize tx/rx queue pointer registers and rx ring during resume
+To: stable@vger.kernel.org
+Cc: Kevin Hao <haokexin@gmail.com>, Quanyang Wang <quanyang.wang@windriver.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260320150852.4191566-3-sashal@kernel.org>
+
+From: Kevin Hao <haokexin@gmail.com>
+
+[ Upstream commit 718d0766ce4c7634ce62fa78b526ea7263487edd ]
+
+On certain platforms, such as AMD Versal boards, the tx/rx queue pointer
+registers are cleared after suspend, and the rx queue pointer register
+is also disabled during suspend if WOL is enabled. Previously, we assumed
+that these registers would be restored by macb_mac_link_up(). However,
+in commit bf9cf80cab81, macb_init_buffers() was moved from
+macb_mac_link_up() to macb_open(). Therefore, we should call
+macb_init_buffers() to reinitialize the tx/rx queue pointer registers
+during resume.
+
+Due to the reset of these two registers, we also need to adjust the
+tx/rx rings accordingly. The tx ring will be handled by
+gem_shuffle_tx_rings() in macb_mac_link_up(), so we only need to
+initialize the rx ring here.
+
+Fixes: bf9cf80cab81 ("net: macb: Fix tx/rx malfunction after phy link down and up")
+Reported-by: Quanyang Wang <quanyang.wang@windriver.com>
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Tested-by: Quanyang Wang <quanyang.wang@windriver.com>
+Cc: stable@vger.kernel.org
+Reviewed-by: Simon Horman <horms@kernel.org>
+Link: https://patch.msgid.link/20260312-macb-versal-v1-2-467647173fa4@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -5386,8 +5386,18 @@ static int __maybe_unused macb_resume(st
+ rtnl_unlock();
+ }
+
++ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC))
++ macb_init_buffers(bp);
++
+ for (q = 0, queue = bp->queues; q < bp->num_queues;
+ ++q, ++queue) {
++ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
++ if (macb_is_gem(bp))
++ gem_init_rx_ring(queue);
++ else
++ macb_init_rx_ring(queue);
++ }
++
+ napi_enable(&queue->napi_rx);
+ napi_enable(&queue->napi_tx);
+ }
--- /dev/null
+From stable+bounces-227153-greg=kroah.com@vger.kernel.org Wed Mar 18 21:31:29 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 16:31:20 -0400
+Subject: net: macb: Shuffle the tx ring before enabling tx
+To: stable@vger.kernel.org
+Cc: Kevin Hao <haokexin@gmail.com>, Quanyang Wang <quanyang.wang@windriver.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318203120.1133362-1-sashal@kernel.org>
+
+From: Kevin Hao <haokexin@gmail.com>
+
+[ Upstream commit 881a0263d502e1a93ebc13a78254e9ad19520232 ]
+
+Quanyang observed that when using an NFS rootfs on an AMD ZynqMp board,
+the rootfs may take an extended time to recover after a suspend.
+Upon investigation, it was determined that the issue originates from a
+problem in the macb driver.
+
+According to the Zynq UltraScale TRM [1], when transmit is disabled,
+the transmit buffer queue pointer resets to point to the address
+specified by the transmit buffer queue base address register.
+
+In the current implementation, the code merely resets `queue->tx_head`
+and `queue->tx_tail` to '0'. This approach presents several issues:
+
+- Packets already queued in the tx ring are silently lost,
+ leading to memory leaks since the associated skbs cannot be released.
+
+- Concurrent write access to `queue->tx_head` and `queue->tx_tail` may
+ occur from `macb_tx_poll()` or `macb_start_xmit()` when these values
+ are reset to '0'.
+
+- The transmission may become stuck on a packet that has already been sent
+ out, with its 'TX_USED' bit set, but has not yet been processed. However,
+ due to the manipulation of 'queue->tx_head' and 'queue->tx_tail',
+ `macb_tx_poll()` incorrectly assumes there are no packets to handle
+ because `queue->tx_head == queue->tx_tail`. This issue is only resolved
+ when a new packet is placed at this position. This is the root cause of
+ the prolonged recovery time observed for the NFS root filesystem.
+
+To resolve this issue, shuffle the tx ring and tx skb array so that
+the first unsent packet is positioned at the start of the tx ring.
+Additionally, ensure that updates to `queue->tx_head` and
+`queue->tx_tail` are properly protected with the appropriate lock.
+
+[1] https://docs.amd.com/v/u/en-US/ug1085-zynq-ultrascale-trm
+
+Fixes: bf9cf80cab81 ("net: macb: Fix tx/rx malfunction after phy link down and up")
+Reported-by: Quanyang Wang <quanyang.wang@windriver.com>
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Cc: stable@vger.kernel.org
+Reviewed-by: Simon Horman <horms@kernel.org>
+Link: https://patch.msgid.link/20260307-zynqmp-v2-1-6ef98a70e1d0@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ #include context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c | 98 ++++++++++++++++++++++++++++++-
+ 1 file changed, 95 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -39,6 +39,7 @@
+ #include <linux/ptp_classify.h>
+ #include <linux/reset.h>
+ #include <linux/firmware/xlnx-zynqmp.h>
++#include <linux/gcd.h>
+ #include "macb.h"
+
+ /* This structure is only used for MACB on SiFive FU540 devices */
+@@ -668,6 +669,97 @@ static void macb_mac_link_down(struct ph
+ netif_tx_stop_all_queues(ndev);
+ }
+
++/* Use juggling algorithm to left rotate tx ring and tx skb array */
++static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
++{
++ unsigned int head, tail, count, ring_size, desc_size;
++ struct macb_tx_skb tx_skb, *skb_curr, *skb_next;
++ struct macb_dma_desc *desc_curr, *desc_next;
++ unsigned int i, cycles, shift, curr, next;
++ struct macb *bp = queue->bp;
++ unsigned char desc[24];
++ unsigned long flags;
++
++ desc_size = macb_dma_desc_get_size(bp);
++
++ if (WARN_ON_ONCE(desc_size > ARRAY_SIZE(desc)))
++ return;
++
++ spin_lock_irqsave(&queue->tx_ptr_lock, flags);
++ head = queue->tx_head;
++ tail = queue->tx_tail;
++ ring_size = bp->tx_ring_size;
++ count = CIRC_CNT(head, tail, ring_size);
++
++ if (!(tail % ring_size))
++ goto unlock;
++
++ if (!count) {
++ queue->tx_head = 0;
++ queue->tx_tail = 0;
++ goto unlock;
++ }
++
++ shift = tail % ring_size;
++ cycles = gcd(ring_size, shift);
++
++ for (i = 0; i < cycles; i++) {
++ memcpy(&desc, macb_tx_desc(queue, i), desc_size);
++ memcpy(&tx_skb, macb_tx_skb(queue, i),
++ sizeof(struct macb_tx_skb));
++
++ curr = i;
++ next = (curr + shift) % ring_size;
++
++ while (next != i) {
++ desc_curr = macb_tx_desc(queue, curr);
++ desc_next = macb_tx_desc(queue, next);
++
++ memcpy(desc_curr, desc_next, desc_size);
++
++ if (next == ring_size - 1)
++ desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
++ if (curr == ring_size - 1)
++ desc_curr->ctrl |= MACB_BIT(TX_WRAP);
++
++ skb_curr = macb_tx_skb(queue, curr);
++ skb_next = macb_tx_skb(queue, next);
++ memcpy(skb_curr, skb_next, sizeof(struct macb_tx_skb));
++
++ curr = next;
++ next = (curr + shift) % ring_size;
++ }
++
++ desc_curr = macb_tx_desc(queue, curr);
++ memcpy(desc_curr, &desc, desc_size);
++ if (i == ring_size - 1)
++ desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
++ if (curr == ring_size - 1)
++ desc_curr->ctrl |= MACB_BIT(TX_WRAP);
++ memcpy(macb_tx_skb(queue, curr), &tx_skb,
++ sizeof(struct macb_tx_skb));
++ }
++
++ queue->tx_head = count;
++ queue->tx_tail = 0;
++
++ /* Make descriptor updates visible to hardware */
++ wmb();
++
++unlock:
++ spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
++}
++
++/* Rotate the queue so that the tail is at index 0 */
++static void gem_shuffle_tx_rings(struct macb *bp)
++{
++ struct macb_queue *queue;
++ int q;
++
++ for (q = 0, queue = bp->queues; q < bp->num_queues; q++, queue++)
++ gem_shuffle_tx_one_ring(queue);
++}
++
+ static void macb_mac_link_up(struct phylink_config *config,
+ struct phy_device *phy,
+ unsigned int mode, phy_interface_t interface,
+@@ -706,8 +798,6 @@ static void macb_mac_link_up(struct phyl
+ ctrl |= MACB_BIT(PAE);
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+- queue->tx_head = 0;
+- queue->tx_tail = 0;
+ queue_writel(queue, IER,
+ bp->rx_intr_mask | MACB_TX_INT_FLAGS | MACB_BIT(HRESP));
+ }
+@@ -721,8 +811,10 @@ static void macb_mac_link_up(struct phyl
+
+ spin_unlock_irqrestore(&bp->lock, flags);
+
+- if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC))
++ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+ macb_set_tx_clk(bp, speed);
++ gem_shuffle_tx_rings(bp);
++ }
+
+ /* Enable Rx and Tx; Enable PTP unicast */
+ ctrl = macb_readl(bp, NCR);
--- /dev/null
+From stable+bounces-223659-greg=kroah.com@vger.kernel.org Mon Mar 9 14:51:56 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 09:50:33 -0400
+Subject: net: phy: register phy led_triggers during probe to avoid AB-BA deadlock
+To: stable@vger.kernel.org
+Cc: Andrew Lunn <andrew@lunn.ch>, Shiji Yang <yangshiji66@outlook.com>, Paolo Abeni <pabeni@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309135033.1025776-1-sashal@kernel.org>
+
+From: Andrew Lunn <andrew@lunn.ch>
+
+[ Upstream commit c8dbdc6e380e7e96a51706db3e4b7870d8a9402d ]
+
+There is an AB-BA deadlock when both LEDS_TRIGGER_NETDEV and
+LED_TRIGGER_PHY are enabled:
+
+[ 1362.049207] [<8054e4b8>] led_trigger_register+0x5c/0x1fc <-- Trying to get lock "triggers_list_lock" via down_write(&triggers_list_lock);
+[ 1362.054536] [<80662830>] phy_led_triggers_register+0xd0/0x234
+[ 1362.060329] [<8065e200>] phy_attach_direct+0x33c/0x40c
+[ 1362.065489] [<80651fc4>] phylink_fwnode_phy_connect+0x15c/0x23c
+[ 1362.071480] [<8066ee18>] mtk_open+0x7c/0xba0
+[ 1362.075849] [<806d714c>] __dev_open+0x280/0x2b0
+[ 1362.080384] [<806d7668>] __dev_change_flags+0x244/0x24c
+[ 1362.085598] [<806d7698>] dev_change_flags+0x28/0x78
+[ 1362.090528] [<807150e4>] dev_ioctl+0x4c0/0x654 <-- Hold lock "rtnl_mutex" by calling rtnl_lock();
+[ 1362.094985] [<80694360>] sock_ioctl+0x2f4/0x4e0
+[ 1362.099567] [<802e9c4c>] sys_ioctl+0x32c/0xd8c
+[ 1362.104022] [<80014504>] syscall_common+0x34/0x58
+
+Here LED_TRIGGER_PHY is registering LED triggers during phy_attach
+while holding RTNL and then taking triggers_list_lock.
+
+[ 1362.191101] [<806c2640>] register_netdevice_notifier+0x60/0x168 <-- Trying to get lock "rtnl_mutex" via rtnl_lock();
+[ 1362.197073] [<805504ac>] netdev_trig_activate+0x194/0x1e4
+[ 1362.202490] [<8054e28c>] led_trigger_set+0x1d4/0x360 <-- Hold lock "triggers_list_lock" by down_read(&triggers_list_lock);
+[ 1362.207511] [<8054eb38>] led_trigger_write+0xd8/0x14c
+[ 1362.212566] [<80381d98>] sysfs_kf_bin_write+0x80/0xbc
+[ 1362.217688] [<8037fcd8>] kernfs_fop_write_iter+0x17c/0x28c
+[ 1362.223174] [<802cbd70>] vfs_write+0x21c/0x3c4
+[ 1362.227712] [<802cc0c4>] ksys_write+0x78/0x12c
+[ 1362.232164] [<80014504>] syscall_common+0x34/0x58
+
+Here LEDS_TRIGGER_NETDEV is being enabled on an LED. It first takes
+triggers_list_lock and then RTNL. A classical AB-BA deadlock.
+
+phy_led_triggers_register() does not require the RTNL; it does not
+make any calls into the network stack which require protection. There
+is also no requirement that the PHY has been attached to a MAC, as the
+triggers only make use of phydev state. This allows the call to
+phy_led_triggers_register() to be placed elsewhere. PHY probe() and
+release() don't hold RTNL, thus solving the AB-BA deadlock.
+
+Reported-by: Shiji Yang <yangshiji66@outlook.com>
+Closes: https://lore.kernel.org/all/OS7PR01MB13602B128BA1AD3FA38B6D1FFBC69A@OS7PR01MB13602.jpnprd01.prod.outlook.com/
+Fixes: 06f502f57d0d ("leds: trigger: Introduce a NETDEV trigger")
+Cc: stable@vger.kernel.org
+Signed-off-by: Andrew Lunn <andrew@lunn.ch>
+Tested-by: Shiji Yang <yangshiji66@outlook.com>
+Link: https://patch.msgid.link/20260222152601.1978655-1-andrew@lunn.ch
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+[ dropped `is_on_sfp_module` guards and `CONFIG_PHYLIB_LEDS`/`of_phy_leds` logic ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/phy/phy_device.c | 13 ++++++++-----
+ 1 file changed, 8 insertions(+), 5 deletions(-)
+
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1510,7 +1510,6 @@ int phy_attach_direct(struct net_device
+ goto error;
+
+ phy_resume(phydev);
+- phy_led_triggers_register(phydev);
+
+ return err;
+
+@@ -1767,8 +1766,6 @@ void phy_detach(struct phy_device *phyde
+ }
+ phydev->phylink = NULL;
+
+- phy_led_triggers_unregister(phydev);
+-
+ if (phydev->mdio.dev.driver)
+ module_put(phydev->mdio.dev.driver->owner);
+
+@@ -3109,10 +3106,14 @@ static int phy_probe(struct device *dev)
+ /* Set the state to READY by default */
+ phydev->state = PHY_READY;
+
++ /* Register the PHY LED triggers */
++ phy_led_triggers_register(phydev);
++
++ return 0;
++
+ out:
+ /* Re-assert the reset signal on error */
+- if (err)
+- phy_device_reset(phydev, 1);
++ phy_device_reset(phydev, 1);
+
+ return err;
+ }
+@@ -3123,6 +3124,8 @@ static int phy_remove(struct device *dev
+
+ cancel_delayed_work_sync(&phydev->state_queue);
+
++ phy_led_triggers_unregister(phydev);
++
+ phydev->state = PHY_DOWN;
+
+ sfp_bus_del_upstream(phydev->sfp_bus);
--- /dev/null
+From stable+bounces-224906-greg=kroah.com@vger.kernel.org Thu Mar 12 19:35:29 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 12 Mar 2026 14:35:21 -0400
+Subject: net/sched: act_gate: snapshot parameters with RCU on replace
+To: stable@vger.kernel.org
+Cc: Paul Moses <p@1g4.org>, Vladimir Oltean <vladimir.oltean@nxp.com>, Jamal Hadi Salim <jhs@mojatatu.com>, Victor Nogueira <victor@mojatatu.com>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260312183521.1822147-1-sashal@kernel.org>
+
+From: Paul Moses <p@1g4.org>
+
+[ Upstream commit 62413a9c3cb183afb9bb6e94dd68caf4e4145f4c ]
+
+The gate action can be replaced while the hrtimer callback or dump path is
+walking the schedule list.
+
+Convert the parameters to an RCU-protected snapshot and swap updates under
+tcf_lock, freeing the previous snapshot via call_rcu(). When REPLACE omits
+the entry list, preserve the existing schedule so the effective state is
+unchanged.
+
+Fixes: a51c328df310 ("net: qos: introduce a gate control flow action")
+Cc: stable@vger.kernel.org
+Signed-off-by: Paul Moses <p@1g4.org>
+Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
+Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
+Reviewed-by: Victor Nogueira <victor@mojatatu.com>
+Link: https://patch.msgid.link/20260223150512.2251594-2-p@1g4.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ hrtimer_setup() => hrtimer_init() + keep is_tcf_gate() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/net/tc_act/tc_gate.h | 33 ++++-
+ net/sched/act_gate.c | 266 ++++++++++++++++++++++++++++++-------------
+ 2 files changed, 212 insertions(+), 87 deletions(-)
+
+--- a/include/net/tc_act/tc_gate.h
++++ b/include/net/tc_act/tc_gate.h
+@@ -32,6 +32,7 @@ struct tcf_gate_params {
+ s32 tcfg_clockid;
+ size_t num_entries;
+ struct list_head entries;
++ struct rcu_head rcu;
+ };
+
+ #define GATE_ACT_GATE_OPEN BIT(0)
+@@ -39,7 +40,7 @@ struct tcf_gate_params {
+
+ struct tcf_gate {
+ struct tc_action common;
+- struct tcf_gate_params param;
++ struct tcf_gate_params __rcu *param;
+ u8 current_gate_status;
+ ktime_t current_close_time;
+ u32 current_entry_octets;
+@@ -60,47 +61,65 @@ static inline bool is_tcf_gate(const str
+ return false;
+ }
+
++static inline struct tcf_gate_params *tcf_gate_params_locked(const struct tc_action *a)
++{
++ struct tcf_gate *gact = to_gate(a);
++
++ return rcu_dereference_protected(gact->param,
++ lockdep_is_held(&gact->tcf_lock));
++}
++
+ static inline s32 tcf_gate_prio(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ s32 tcfg_prio;
+
+- tcfg_prio = to_gate(a)->param.tcfg_priority;
++ p = tcf_gate_params_locked(a);
++ tcfg_prio = p->tcfg_priority;
+
+ return tcfg_prio;
+ }
+
+ static inline u64 tcf_gate_basetime(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u64 tcfg_basetime;
+
+- tcfg_basetime = to_gate(a)->param.tcfg_basetime;
++ p = tcf_gate_params_locked(a);
++ tcfg_basetime = p->tcfg_basetime;
+
+ return tcfg_basetime;
+ }
+
+ static inline u64 tcf_gate_cycletime(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u64 tcfg_cycletime;
+
+- tcfg_cycletime = to_gate(a)->param.tcfg_cycletime;
++ p = tcf_gate_params_locked(a);
++ tcfg_cycletime = p->tcfg_cycletime;
+
+ return tcfg_cycletime;
+ }
+
+ static inline u64 tcf_gate_cycletimeext(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u64 tcfg_cycletimeext;
+
+- tcfg_cycletimeext = to_gate(a)->param.tcfg_cycletime_ext;
++ p = tcf_gate_params_locked(a);
++ tcfg_cycletimeext = p->tcfg_cycletime_ext;
+
+ return tcfg_cycletimeext;
+ }
+
+ static inline u32 tcf_gate_num_entries(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u32 num_entries;
+
+- num_entries = to_gate(a)->param.num_entries;
++ p = tcf_gate_params_locked(a);
++ num_entries = p->num_entries;
+
+ return num_entries;
+ }
+@@ -114,7 +133,7 @@ static inline struct action_gate_entry
+ u32 num_entries;
+ int i = 0;
+
+- p = &to_gate(a)->param;
++ p = tcf_gate_params_locked(a);
+ num_entries = p->num_entries;
+
+ list_for_each_entry(entry, &p->entries, list)
+--- a/net/sched/act_gate.c
++++ b/net/sched/act_gate.c
+@@ -31,9 +31,12 @@ static ktime_t gate_get_time(struct tcf_
+ return KTIME_MAX;
+ }
+
+-static void gate_get_start_time(struct tcf_gate *gact, ktime_t *start)
++static void tcf_gate_params_free_rcu(struct rcu_head *head);
++
++static void gate_get_start_time(struct tcf_gate *gact,
++ const struct tcf_gate_params *param,
++ ktime_t *start)
+ {
+- struct tcf_gate_params *param = &gact->param;
+ ktime_t now, base, cycle;
+ u64 n;
+
+@@ -68,12 +71,14 @@ static enum hrtimer_restart gate_timer_f
+ {
+ struct tcf_gate *gact = container_of(timer, struct tcf_gate,
+ hitimer);
+- struct tcf_gate_params *p = &gact->param;
+ struct tcfg_gate_entry *next;
++ struct tcf_gate_params *p;
+ ktime_t close_time, now;
+
+ spin_lock(&gact->tcf_lock);
+
++ p = rcu_dereference_protected(gact->param,
++ lockdep_is_held(&gact->tcf_lock));
+ next = gact->next_entry;
+
+ /* cycle start, clear pending bit, clear total octets */
+@@ -226,6 +231,35 @@ static void release_entry_list(struct li
+ }
+ }
+
++static int tcf_gate_copy_entries(struct tcf_gate_params *dst,
++ const struct tcf_gate_params *src,
++ struct netlink_ext_ack *extack)
++{
++ struct tcfg_gate_entry *entry;
++ int i = 0;
++
++ list_for_each_entry(entry, &src->entries, list) {
++ struct tcfg_gate_entry *new;
++
++ new = kzalloc(sizeof(*new), GFP_ATOMIC);
++ if (!new) {
++ NL_SET_ERR_MSG(extack, "Not enough memory for entry");
++ return -ENOMEM;
++ }
++
++ new->index = entry->index;
++ new->gate_state = entry->gate_state;
++ new->interval = entry->interval;
++ new->ipv = entry->ipv;
++ new->maxoctets = entry->maxoctets;
++ list_add_tail(&new->list, &dst->entries);
++ i++;
++ }
++
++ dst->num_entries = i;
++ return 0;
++}
++
+ static int parse_gate_list(struct nlattr *list_attr,
+ struct tcf_gate_params *sched,
+ struct netlink_ext_ack *extack)
+@@ -271,23 +305,42 @@ release_list:
+ return err;
+ }
+
+-static void gate_setup_timer(struct tcf_gate *gact, u64 basetime,
+- enum tk_offsets tko, s32 clockid,
+- bool do_init)
+-{
+- if (!do_init) {
+- if (basetime == gact->param.tcfg_basetime &&
+- tko == gact->tk_offset &&
+- clockid == gact->param.tcfg_clockid)
+- return;
+-
+- spin_unlock_bh(&gact->tcf_lock);
+- hrtimer_cancel(&gact->hitimer);
+- spin_lock_bh(&gact->tcf_lock);
++static bool gate_timer_needs_cancel(u64 basetime, u64 old_basetime,
++ enum tk_offsets tko,
++ enum tk_offsets old_tko,
++ s32 clockid, s32 old_clockid)
++{
++ return basetime != old_basetime ||
++ clockid != old_clockid ||
++ tko != old_tko;
++}
++
++static int gate_clock_resolve(s32 clockid, enum tk_offsets *tko,
++ struct netlink_ext_ack *extack)
++{
++ switch (clockid) {
++ case CLOCK_REALTIME:
++ *tko = TK_OFFS_REAL;
++ return 0;
++ case CLOCK_MONOTONIC:
++ *tko = TK_OFFS_MAX;
++ return 0;
++ case CLOCK_BOOTTIME:
++ *tko = TK_OFFS_BOOT;
++ return 0;
++ case CLOCK_TAI:
++ *tko = TK_OFFS_TAI;
++ return 0;
++ default:
++ NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
++ return -EINVAL;
+ }
+- gact->param.tcfg_basetime = basetime;
+- gact->param.tcfg_clockid = clockid;
+- gact->tk_offset = tko;
++}
++
++static void gate_setup_timer(struct tcf_gate *gact, s32 clockid,
++ enum tk_offsets tko)
++{
++ WRITE_ONCE(gact->tk_offset, tko);
+ hrtimer_init(&gact->hitimer, clockid, HRTIMER_MODE_ABS_SOFT);
+ gact->hitimer.function = gate_timer_func;
+ }
+@@ -298,15 +351,22 @@ static int tcf_gate_init(struct net *net
+ struct netlink_ext_ack *extack)
+ {
+ struct tc_action_net *tn = net_generic(net, act_gate_ops.net_id);
+- enum tk_offsets tk_offset = TK_OFFS_TAI;
++ u64 cycletime = 0, basetime = 0, cycletime_ext = 0;
++ struct tcf_gate_params *p = NULL, *old_p = NULL;
++ enum tk_offsets old_tk_offset = TK_OFFS_TAI;
++ const struct tcf_gate_params *cur_p = NULL;
+ bool bind = flags & TCA_ACT_FLAGS_BIND;
+ struct nlattr *tb[TCA_GATE_MAX + 1];
++ enum tk_offsets tko = TK_OFFS_TAI;
+ struct tcf_chain *goto_ch = NULL;
+- u64 cycletime = 0, basetime = 0;
+- struct tcf_gate_params *p;
++ s32 timer_clockid = CLOCK_TAI;
++ bool use_old_entries = false;
++ s32 old_clockid = CLOCK_TAI;
++ bool need_cancel = false;
+ s32 clockid = CLOCK_TAI;
+ struct tcf_gate *gact;
+ struct tc_gate *parm;
++ u64 old_basetime = 0;
+ int ret = 0, err;
+ u32 gflags = 0;
+ s32 prio = -1;
+@@ -323,26 +383,8 @@ static int tcf_gate_init(struct net *net
+ if (!tb[TCA_GATE_PARMS])
+ return -EINVAL;
+
+- if (tb[TCA_GATE_CLOCKID]) {
++ if (tb[TCA_GATE_CLOCKID])
+ clockid = nla_get_s32(tb[TCA_GATE_CLOCKID]);
+- switch (clockid) {
+- case CLOCK_REALTIME:
+- tk_offset = TK_OFFS_REAL;
+- break;
+- case CLOCK_MONOTONIC:
+- tk_offset = TK_OFFS_MAX;
+- break;
+- case CLOCK_BOOTTIME:
+- tk_offset = TK_OFFS_BOOT;
+- break;
+- case CLOCK_TAI:
+- tk_offset = TK_OFFS_TAI;
+- break;
+- default:
+- NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
+- return -EINVAL;
+- }
+- }
+
+ parm = nla_data(tb[TCA_GATE_PARMS]);
+ index = parm->index;
+@@ -368,6 +410,60 @@ static int tcf_gate_init(struct net *net
+ return -EEXIST;
+ }
+
++ gact = to_gate(*a);
++
++ err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
++ if (err < 0)
++ goto release_idr;
++
++ p = kzalloc(sizeof(*p), GFP_KERNEL);
++ if (!p) {
++ err = -ENOMEM;
++ goto chain_put;
++ }
++ INIT_LIST_HEAD(&p->entries);
++
++ use_old_entries = !tb[TCA_GATE_ENTRY_LIST];
++ if (!use_old_entries) {
++ err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack);
++ if (err < 0)
++ goto err_free;
++ use_old_entries = !err;
++ }
++
++ if (ret == ACT_P_CREATED && use_old_entries) {
++ NL_SET_ERR_MSG(extack, "The entry list is empty");
++ err = -EINVAL;
++ goto err_free;
++ }
++
++ if (ret != ACT_P_CREATED) {
++ rcu_read_lock();
++ cur_p = rcu_dereference(gact->param);
++
++ old_basetime = cur_p->tcfg_basetime;
++ old_clockid = cur_p->tcfg_clockid;
++ old_tk_offset = READ_ONCE(gact->tk_offset);
++
++ basetime = old_basetime;
++ cycletime_ext = cur_p->tcfg_cycletime_ext;
++ prio = cur_p->tcfg_priority;
++ gflags = cur_p->tcfg_flags;
++
++ if (!tb[TCA_GATE_CLOCKID])
++ clockid = old_clockid;
++
++ err = 0;
++ if (use_old_entries) {
++ err = tcf_gate_copy_entries(p, cur_p, extack);
++ if (!err && !tb[TCA_GATE_CYCLE_TIME])
++ cycletime = cur_p->tcfg_cycletime;
++ }
++ rcu_read_unlock();
++ if (err)
++ goto err_free;
++ }
++
+ if (tb[TCA_GATE_PRIORITY])
+ prio = nla_get_s32(tb[TCA_GATE_PRIORITY]);
+
+@@ -377,25 +473,26 @@ static int tcf_gate_init(struct net *net
+ if (tb[TCA_GATE_FLAGS])
+ gflags = nla_get_u32(tb[TCA_GATE_FLAGS]);
+
+- gact = to_gate(*a);
+- if (ret == ACT_P_CREATED)
+- INIT_LIST_HEAD(&gact->param.entries);
++ if (tb[TCA_GATE_CYCLE_TIME])
++ cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]);
+
+- err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
+- if (err < 0)
+- goto release_idr;
++ if (tb[TCA_GATE_CYCLE_TIME_EXT])
++ cycletime_ext = nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]);
+
+- spin_lock_bh(&gact->tcf_lock);
+- p = &gact->param;
++ err = gate_clock_resolve(clockid, &tko, extack);
++ if (err)
++ goto err_free;
++ timer_clockid = clockid;
++
++ need_cancel = ret != ACT_P_CREATED &&
++ gate_timer_needs_cancel(basetime, old_basetime,
++ tko, old_tk_offset,
++ timer_clockid, old_clockid);
+
+- if (tb[TCA_GATE_CYCLE_TIME])
+- cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]);
++ if (need_cancel)
++ hrtimer_cancel(&gact->hitimer);
+
+- if (tb[TCA_GATE_ENTRY_LIST]) {
+- err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack);
+- if (err < 0)
+- goto chain_put;
+- }
++ spin_lock_bh(&gact->tcf_lock);
+
+ if (!cycletime) {
+ struct tcfg_gate_entry *entry;
+@@ -404,22 +501,20 @@ static int tcf_gate_init(struct net *net
+ list_for_each_entry(entry, &p->entries, list)
+ cycle = ktime_add_ns(cycle, entry->interval);
+ cycletime = cycle;
+- if (!cycletime) {
+- err = -EINVAL;
+- goto chain_put;
+- }
+ }
+ p->tcfg_cycletime = cycletime;
++ p->tcfg_cycletime_ext = cycletime_ext;
+
+- if (tb[TCA_GATE_CYCLE_TIME_EXT])
+- p->tcfg_cycletime_ext =
+- nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]);
+-
+- gate_setup_timer(gact, basetime, tk_offset, clockid,
+- ret == ACT_P_CREATED);
++ if (need_cancel || ret == ACT_P_CREATED)
++ gate_setup_timer(gact, timer_clockid, tko);
+ p->tcfg_priority = prio;
+ p->tcfg_flags = gflags;
+- gate_get_start_time(gact, &start);
++ p->tcfg_basetime = basetime;
++ p->tcfg_clockid = timer_clockid;
++ gate_get_start_time(gact, p, &start);
++
++ old_p = rcu_replace_pointer(gact->param, p,
++ lockdep_is_held(&gact->tcf_lock));
+
+ gact->current_close_time = start;
+ gact->current_gate_status = GATE_ACT_GATE_OPEN | GATE_ACT_PENDING;
+@@ -436,11 +531,15 @@ static int tcf_gate_init(struct net *net
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
++ if (old_p)
++ call_rcu(&old_p->rcu, tcf_gate_params_free_rcu);
++
+ return ret;
+
++err_free:
++ release_entry_list(&p->entries);
++ kfree(p);
+ chain_put:
+- spin_unlock_bh(&gact->tcf_lock);
+-
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+ release_idr:
+@@ -448,21 +547,29 @@ release_idr:
+ * without taking tcf_lock.
+ */
+ if (ret == ACT_P_CREATED)
+- gate_setup_timer(gact, gact->param.tcfg_basetime,
+- gact->tk_offset, gact->param.tcfg_clockid,
+- true);
++ gate_setup_timer(gact, timer_clockid, tko);
++
+ tcf_idr_release(*a, bind);
+ return err;
+ }
+
++static void tcf_gate_params_free_rcu(struct rcu_head *head)
++{
++ struct tcf_gate_params *p = container_of(head, struct tcf_gate_params, rcu);
++
++ release_entry_list(&p->entries);
++ kfree(p);
++}
++
+ static void tcf_gate_cleanup(struct tc_action *a)
+ {
+ struct tcf_gate *gact = to_gate(a);
+ struct tcf_gate_params *p;
+
+- p = &gact->param;
+ hrtimer_cancel(&gact->hitimer);
+- release_entry_list(&p->entries);
++ p = rcu_dereference_protected(gact->param, 1);
++ if (p)
++ call_rcu(&p->rcu, tcf_gate_params_free_rcu);
+ }
+
+ static int dumping_entry(struct sk_buff *skb,
+@@ -511,10 +618,9 @@ static int tcf_gate_dump(struct sk_buff
+ struct nlattr *entry_list;
+ struct tcf_t t;
+
+- spin_lock_bh(&gact->tcf_lock);
+- opt.action = gact->tcf_action;
+-
+- p = &gact->param;
++ rcu_read_lock();
++ opt.action = READ_ONCE(gact->tcf_action);
++ p = rcu_dereference(gact->param);
+
+ if (nla_put(skb, TCA_GATE_PARMS, sizeof(opt), &opt))
+ goto nla_put_failure;
+@@ -554,12 +660,12 @@ static int tcf_gate_dump(struct sk_buff
+ tcf_tm_dump(&t, &gact->tcf_tm);
+ if (nla_put_64bit(skb, TCA_GATE_TM, sizeof(t), &t, TCA_GATE_PAD))
+ goto nla_put_failure;
+- spin_unlock_bh(&gact->tcf_lock);
++ rcu_read_unlock();
+
+ return skb->len;
+
+ nla_put_failure:
+- spin_unlock_bh(&gact->tcf_lock);
++ rcu_read_unlock();
+ nlmsg_trim(skb, b);
+ return -1;
+ }
--- /dev/null
+From stable+bounces-227517-greg=kroah.com@vger.kernel.org Fri Mar 20 12:24:48 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 07:21:05 -0400
+Subject: nfsd: define exports_proc_ops with CONFIG_PROC_FS
+To: stable@vger.kernel.org
+Cc: Tom Rix <trix@redhat.com>, Jeff Layton <jlayton@kernel.org>, Chuck Lever <chuck.lever@oracle.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260320112106.3879597-1-sashal@kernel.org>
+
+From: Tom Rix <trix@redhat.com>
+
+[ Upstream commit 340086da9a87820b40601141a0e9e87c954ac006 ]
+
+gcc with W=1 and ! CONFIG_PROC_FS
+fs/nfsd/nfsctl.c:161:30: error: ‘exports_proc_ops’
+ defined but not used [-Werror=unused-const-variable=]
+ 161 | static const struct proc_ops exports_proc_ops = {
+ | ^~~~~~~~~~~~~~~~
+
+The only use of exports_proc_ops is when CONFIG_PROC_FS
+is defined, so its definition should be likewise conditional.
+
+Signed-off-by: Tom Rix <trix@redhat.com>
+Reviewed-by: Jeff Layton <jlayton@kernel.org>
+Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
+Stable-dep-of: e7fcf179b82d ("NFSD: Hold net reference for the lifetime of /proc/fs/nfs/exports fd")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfsd/nfsctl.c | 25 +++++++++++++------------
+ 1 file changed, 13 insertions(+), 12 deletions(-)
+
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -155,18 +155,6 @@ static int exports_net_open(struct net *
+ return 0;
+ }
+
+-static int exports_proc_open(struct inode *inode, struct file *file)
+-{
+- return exports_net_open(current->nsproxy->net_ns, file);
+-}
+-
+-static const struct proc_ops exports_proc_ops = {
+- .proc_open = exports_proc_open,
+- .proc_read = seq_read,
+- .proc_lseek = seq_lseek,
+- .proc_release = seq_release,
+-};
+-
+ static int exports_nfsd_open(struct inode *inode, struct file *file)
+ {
+ return exports_net_open(inode->i_sb->s_fs_info, file);
+@@ -1423,6 +1411,19 @@ static struct file_system_type nfsd_fs_t
+ MODULE_ALIAS_FS("nfsd");
+
+ #ifdef CONFIG_PROC_FS
++
++static int exports_proc_open(struct inode *inode, struct file *file)
++{
++ return exports_net_open(current->nsproxy->net_ns, file);
++}
++
++static const struct proc_ops exports_proc_ops = {
++ .proc_open = exports_proc_open,
++ .proc_read = seq_read,
++ .proc_lseek = seq_lseek,
++ .proc_release = seq_release,
++};
++
+ static int create_proc_exports_entry(void)
+ {
+ struct proc_dir_entry *entry;
--- /dev/null
+From stable+bounces-227520-greg=kroah.com@vger.kernel.org Fri Mar 20 12:30:20 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 07:29:33 -0400
+Subject: nfsd: fix heap overflow in NFSv4.0 LOCK replay cache
+To: stable@vger.kernel.org
+Cc: Jeff Layton <jlayton@kernel.org>, stable@kernel.org, Nicholas Carlini <npc@anthropic.com>, Chuck Lever <chuck.lever@oracle.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260320112933.3960093-1-sashal@kernel.org>
+
+From: Jeff Layton <jlayton@kernel.org>
+
+[ Upstream commit 5133b61aaf437e5f25b1b396b14242a6bb0508e2 ]
+
+The NFSv4.0 replay cache uses a fixed 112-byte inline buffer
+(rp_ibuf[NFSD4_REPLAY_ISIZE]) to store encoded operation responses.
+This size was calculated based on OPEN responses and does not account
+for LOCK denied responses, which include the conflicting lock owner as
+a variable-length field up to 1024 bytes (NFS4_OPAQUE_LIMIT).
+
+When a LOCK operation is denied due to a conflict with an existing lock
+that has a large owner, nfsd4_encode_operation() copies the full encoded
+response into the undersized replay buffer via read_bytes_from_xdr_buf()
+with no bounds check. This results in a slab-out-of-bounds write of up
+to 944 bytes past the end of the buffer, corrupting adjacent heap memory.
+
+This can be triggered remotely by an unauthenticated attacker with two
+cooperating NFSv4.0 clients: one sets a lock with a large owner string,
+then the other requests a conflicting lock to provoke the denial.
+
+We could fix this by increasing NFSD4_REPLAY_ISIZE to allow for a full
+opaque, but that would increase the size of every stateowner, when most
+lockowners are not that large.
+
+Instead, fix this by checking the encoded response length against
+NFSD4_REPLAY_ISIZE before copying into the replay buffer. If the
+response is too large, set rp_buflen to 0 to skip caching the replay
+payload. The status is still cached, and the client already received the
+correct response on the original request.
+
+Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
+Cc: stable@kernel.org
+Reported-by: Nicholas Carlini <npc@anthropic.com>
+Tested-by: Nicholas Carlini <npc@anthropic.com>
+Signed-off-by: Jeff Layton <jlayton@kernel.org>
+Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
+[ replaced `op_status_offset + XDR_UNIT` with existing `post_err_offset` variable ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfsd/nfs4xdr.c | 9 +++++++--
+ fs/nfsd/state.h | 17 ++++++++++++-----
+ 2 files changed, 19 insertions(+), 7 deletions(-)
+
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -5438,9 +5438,14 @@ nfsd4_encode_operation(struct nfsd4_comp
+ int len = xdr->buf->len - post_err_offset;
+
+ so->so_replay.rp_status = op->status;
+- so->so_replay.rp_buflen = len;
+- read_bytes_from_xdr_buf(xdr->buf, post_err_offset,
++ if (len <= NFSD4_REPLAY_ISIZE) {
++ so->so_replay.rp_buflen = len;
++ read_bytes_from_xdr_buf(xdr->buf,
++ post_err_offset,
+ so->so_replay.rp_buf, len);
++ } else {
++ so->so_replay.rp_buflen = 0;
++ }
+ }
+ status:
+ *p = op->status;
+--- a/fs/nfsd/state.h
++++ b/fs/nfsd/state.h
+@@ -430,11 +430,18 @@ struct nfs4_client_reclaim {
+ struct xdr_netobj cr_princhash;
+ };
+
+-/* A reasonable value for REPLAY_ISIZE was estimated as follows:
+- * The OPEN response, typically the largest, requires
+- * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) + 8(verifier) +
+- * 4(deleg. type) + 8(deleg. stateid) + 4(deleg. recall flag) +
+- * 20(deleg. space limit) + ~32(deleg. ace) = 112 bytes
++/*
++ * REPLAY_ISIZE is sized for an OPEN response with delegation:
++ * 4(status) + 8(stateid) + 20(changeinfo) + 4(rflags) +
++ * 8(verifier) + 4(deleg. type) + 8(deleg. stateid) +
++ * 4(deleg. recall flag) + 20(deleg. space limit) +
++ * ~32(deleg. ace) = 112 bytes
++ *
++ * Some responses can exceed this. A LOCK denial includes the conflicting
++ * lock owner, which can be up to 1024 bytes (NFS4_OPAQUE_LIMIT). Responses
++ * larger than REPLAY_ISIZE are not cached in rp_ibuf; only rp_status is
++ * saved. Enlarging this constant increases the size of every
++ * nfs4_stateowner.
+ */
+
+ #define NFSD4_REPLAY_ISIZE 112
--- /dev/null
+From stable+bounces-227518-greg=kroah.com@vger.kernel.org Fri Mar 20 12:24:52 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 07:21:06 -0400
+Subject: NFSD: Hold net reference for the lifetime of /proc/fs/nfs/exports fd
+To: stable@vger.kernel.org
+Cc: Chuck Lever <chuck.lever@oracle.com>, Misbah Anjum N <misanjum@linux.ibm.com>, Jeff Layton <jlayton@kernel.org>, NeilBrown <neil@brown.name>, Olga Kornievskaia <okorniev@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260320112106.3879597-2-sashal@kernel.org>
+
+From: Chuck Lever <chuck.lever@oracle.com>
+
+[ Upstream commit e7fcf179b82d3a3730fd8615da01b087cc654d0b ]
+
+The /proc/fs/nfs/exports proc entry is created at module init
+and persists for the module's lifetime. exports_proc_open()
+captures the caller's current network namespace and stores
+its svc_export_cache in seq->private, but takes no reference
+on the namespace. If the namespace is subsequently torn down
+(e.g. container destruction after the opener does setns() to a
+different namespace), nfsd_net_exit() calls nfsd_export_shutdown()
+which frees the cache. Subsequent reads on the still-open fd
+dereference the freed cache_detail, walking a freed hash table.
+
+Hold a reference on the struct net for the lifetime of the open
+file descriptor. This prevents nfsd_net_exit() from running --
+and thus prevents nfsd_export_shutdown() from freeing the cache
+-- while any exports fd is open. cache_detail already stores
+its net pointer (cd->net, set by cache_create_net()), so
+exports_release() can retrieve it without additional per-file
+storage.
+
+Reported-by: Misbah Anjum N <misanjum@linux.ibm.com>
+Closes: https://lore.kernel.org/linux-nfs/dcd371d3a95815a84ba7de52cef447b8@linux.ibm.com/
+Fixes: 96d851c4d28d ("nfsd: use proper net while reading "exports" file")
+Cc: stable@vger.kernel.org
+Reviewed-by: Jeff Layton <jlayton@kernel.org>
+Reviewed-by: NeilBrown <neil@brown.name>
+Tested-by: Olga Kornievskaia <okorniev@redhat.com>
+Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfsd/nfsctl.c | 14 ++++++++++++--
+ 1 file changed, 12 insertions(+), 2 deletions(-)
+
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -152,9 +152,19 @@ static int exports_net_open(struct net *
+
+ seq = file->private_data;
+ seq->private = nn->svc_export_cache;
++ get_net(net);
+ return 0;
+ }
+
++static int exports_release(struct inode *inode, struct file *file)
++{
++ struct seq_file *seq = file->private_data;
++ struct cache_detail *cd = seq->private;
++
++ put_net(cd->net);
++ return seq_release(inode, file);
++}
++
+ static int exports_nfsd_open(struct inode *inode, struct file *file)
+ {
+ return exports_net_open(inode->i_sb->s_fs_info, file);
+@@ -164,7 +174,7 @@ static const struct file_operations expo
+ .open = exports_nfsd_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+- .release = seq_release,
++ .release = exports_release,
+ };
+
+ static int export_features_show(struct seq_file *m, void *v)
+@@ -1421,7 +1431,7 @@ static const struct proc_ops exports_pro
+ .proc_open = exports_proc_open,
+ .proc_read = seq_read,
+ .proc_lseek = seq_lseek,
+- .proc_release = seq_release,
++ .proc_release = exports_release,
+ };
+
+ static int create_proc_exports_entry(void)
--- /dev/null
+From stable+bounces-227034-greg=kroah.com@vger.kernel.org Wed Mar 18 12:51:41 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 07:51:33 -0400
+Subject: pmdomain: bcm: bcm2835-power: Fix broken reset status read
+To: stable@vger.kernel.org
+Cc: "Maíra Canal" <mcanal@igalia.com>, "Florian Fainelli" <florian.fainelli@broadcom.com>, "Stefan Wahren" <wahrenst@gmx.net>, "Ulf Hansson" <ulf.hansson@linaro.org>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260318115133.637923-1-sashal@kernel.org>
+
+From: Maíra Canal <mcanal@igalia.com>
+
+[ Upstream commit 550bae2c0931dbb664a61b08c21cf156f0a5362a ]
+
+bcm2835_reset_status() has a misplaced parenthesis on every PM_READ()
+call. Since PM_READ(reg) expands to readl(power->base + (reg)), the
+expression:
+
+ PM_READ(PM_GRAFX & PM_V3DRSTN)
+
+computes the bitwise AND of the register offset PM_GRAFX with the
+bitmask PM_V3DRSTN before using the result as a register offset, reading
+from the wrong MMIO address instead of the intended PM_GRAFX register.
+The same issue affects the PM_IMAGE cases.
+
+Fix by moving the closing parenthesis so PM_READ() receives only the
+register offset, and the bitmask is applied to the value returned by
+the read.
+
+Fixes: 670c672608a1 ("soc: bcm: bcm2835-pm: Add support for power domains under a new binding.")
+Signed-off-by: Maíra Canal <mcanal@igalia.com>
+Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
+Reviewed-by: Stefan Wahren <wahrenst@gmx.net>
+Cc: stable@vger.kernel.org
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/soc/bcm/bcm2835-power.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/drivers/soc/bcm/bcm2835-power.c
++++ b/drivers/soc/bcm/bcm2835-power.c
+@@ -580,11 +580,11 @@ static int bcm2835_reset_status(struct r
+
+ switch (id) {
+ case BCM2835_RESET_V3D:
+- return !PM_READ(PM_GRAFX & PM_V3DRSTN);
++ return !(PM_READ(PM_GRAFX) & PM_V3DRSTN);
+ case BCM2835_RESET_H264:
+- return !PM_READ(PM_IMAGE & PM_H264RSTN);
++ return !(PM_READ(PM_IMAGE) & PM_H264RSTN);
+ case BCM2835_RESET_ISP:
+- return !PM_READ(PM_IMAGE & PM_ISPRSTN);
++ return !(PM_READ(PM_IMAGE) & PM_ISPRSTN);
+ default:
+ return -EINVAL;
+ }
--- /dev/null
+From stable+bounces-227632-greg=kroah.com@vger.kernel.org Fri Mar 20 22:55:21 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 20 Mar 2026 17:55:16 -0400
+Subject: pmdomain: bcm: bcm2835-power: Increase ASB control timeout
+To: stable@vger.kernel.org
+Cc: "Maíra Canal" <mcanal@igalia.com>, "Stefan Wahren" <wahrenst@gmx.net>, "Ulf Hansson" <ulf.hansson@linaro.org>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260320215516.133026-1-sashal@kernel.org>
+
+From: Maíra Canal <mcanal@igalia.com>
+
+[ Upstream commit b826d2c0b0ecb844c84431ba6b502e744f5d919a ]
+
+The bcm2835_asb_control() function uses a tight polling loop to wait
+for the ASB bridge to acknowledge a request. During intensive workloads,
+this handshake intermittently fails for V3D's master ASB on BCM2711,
+resulting in "Failed to disable ASB master for v3d" errors during
+runtime PM suspend. As a consequence, the failed power-off leaves V3D in
+a broken state, leading to bus faults or system hangs on later accesses.
+
+As the timeout is insufficient in some scenarios, increase the polling
+timeout from 1us to 5us, which is still negligible in the context of a
+power domain transition. Also, replace the open-coded ktime_get_ns()/
+cpu_relax() polling loop with readl_poll_timeout_atomic().
+
+Cc: stable@vger.kernel.org
+Fixes: 670c672608a1 ("soc: bcm: bcm2835-pm: Add support for power domains under a new binding.")
+Signed-off-by: Maíra Canal <mcanal@igalia.com>
+Reviewed-by: Stefan Wahren <wahrenst@gmx.net>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/soc/bcm/bcm2835-power.c | 12 ++++--------
+ 1 file changed, 4 insertions(+), 8 deletions(-)
+
+--- a/drivers/soc/bcm/bcm2835-power.c
++++ b/drivers/soc/bcm/bcm2835-power.c
+@@ -9,6 +9,7 @@
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/mfd/bcm2835-pm.h>
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
+@@ -152,7 +153,6 @@ struct bcm2835_power {
+ static int bcm2835_asb_control(struct bcm2835_power *power, u32 reg, bool enable)
+ {
+ void __iomem *base = power->asb;
+- u64 start;
+ u32 val;
+
+ switch (reg) {
+@@ -165,8 +165,6 @@ static int bcm2835_asb_control(struct bc
+ break;
+ }
+
+- start = ktime_get_ns();
+-
+ /* Enable the module's async AXI bridges. */
+ if (enable) {
+ val = readl(base + reg) & ~ASB_REQ_STOP;
+@@ -175,11 +173,9 @@ static int bcm2835_asb_control(struct bc
+ }
+ writel(PM_PASSWORD | val, base + reg);
+
+- while (!!(readl(base + reg) & ASB_ACK) == enable) {
+- cpu_relax();
+- if (ktime_get_ns() - start >= 1000)
+- return -ETIMEDOUT;
+- }
++ if (readl_poll_timeout_atomic(base + reg, val,
++ !!(val & ASB_ACK) != enable, 0, 5))
++ return -ETIMEDOUT;
+
+ return 0;
+ }
--- /dev/null
+From stable+bounces-227270-greg=kroah.com@vger.kernel.org Thu Mar 19 12:39:28 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 07:32:58 -0400
+Subject: s390/zcrypt: Enable AUTOSEL_DOM for CCA serialnr sysfs attribute
+To: stable@vger.kernel.org
+Cc: Harald Freudenberger <freude@linux.ibm.com>, Ingo Franzki <ifranzki@linux.ibm.com>, Vasily Gorbik <gor@linux.ibm.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319113258.2340305-1-sashal@kernel.org>
+
+From: Harald Freudenberger <freude@linux.ibm.com>
+
+[ Upstream commit 598bbefa8032cc58b564a81d1ad68bd815c8dc0f ]
+
+The serialnr sysfs attribute for CCA cards, when queried, always
+used the default domain for sending the request down to the card.
+If for any reason exactly this default domain is disabled, the
+attribute code fails to retrieve the CCA info and the sysfs
+entry shows an empty string. This works as designed, but the serial
+number is a card attribute, so it does not matter which
+domain is used for the query. If other domains are available
+on this card, they could be used instead.
+
+Extend the code to use AUTOSEL_DOM as the domain value so that
+any online domain within the card is addressed when querying the
+CCA info. The serialnr is then shown as long as at least one
+domain is usable, regardless of the default domain setting.
+
+Fixes: 8f291ebf3270 ("s390/zcrypt: enable card/domain autoselect on ep11 cprbs")
+Suggested-by: Ingo Franzki <ifranzki@linux.ibm.com>
+Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
+Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+[ preserved zc->online as the fourth argument to cca_get_info() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/s390/crypto/zcrypt_ccamisc.c | 12 +++++++-----
+ drivers/s390/crypto/zcrypt_cex4.c | 3 +--
+ 2 files changed, 8 insertions(+), 7 deletions(-)
+
+--- a/drivers/s390/crypto/zcrypt_ccamisc.c
++++ b/drivers/s390/crypto/zcrypt_ccamisc.c
+@@ -1689,11 +1689,13 @@ static int fetch_cca_info(u16 cardnr, u1
+
+ memset(ci, 0, sizeof(*ci));
+
+- /* get first info from zcrypt device driver about this apqn */
+- rc = zcrypt_device_status_ext(cardnr, domain, &devstat);
+- if (rc)
+- return rc;
+- ci->hwtype = devstat.hwtype;
++ /* if specific domain given, fetch status and hw info for this apqn */
++ if (domain != AUTOSEL_DOM) {
++ rc = zcrypt_device_status_ext(cardnr, domain, &devstat);
++ if (rc)
++ return rc;
++ ci->hwtype = devstat.hwtype;
++ }
+
+ /* prep page for rule array and var array use */
+ pg = (u8 *)__get_free_page(GFP_KERNEL);
+--- a/drivers/s390/crypto/zcrypt_cex4.c
++++ b/drivers/s390/crypto/zcrypt_cex4.c
+@@ -85,8 +85,7 @@ static ssize_t cca_serialnr_show(struct
+
+ memset(&ci, 0, sizeof(ci));
+
+- if (ap_domain_index >= 0)
+- cca_get_info(ac->id, ap_domain_index, &ci, zc->online);
++ cca_get_info(ac->id, AUTOSEL_DOM, &ci, zc->online);
+
+ return scnprintf(buf, PAGE_SIZE, "%s\n", ci.serial);
+ }
--- /dev/null
+From stable+bounces-223715-greg=kroah.com@vger.kernel.org Mon Mar 9 18:44:17 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 13:44:10 -0400
+Subject: selftests: mptcp: join: check RM_ADDR not sent over same subflow
+To: stable@vger.kernel.org
+Cc: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Mat Martineau <martineau@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309174410.1333230-1-sashal@kernel.org>
+
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+
+[ Upstream commit 560edd99b5f58b2d4bbe3c8e51e1eed68d887b0e ]
+
+This validates the previous commit: RM_ADDR were sent over the first
+found active subflow which could be the same as the one being removed.
+It is more likely to lose this notification.
+
+For this check, RM_ADDR are explicitly dropped when trying to send them
+over the initial subflow, when removing the endpoint attached to it. If
+it is dropped, the test will complain because some RM_ADDR have not been
+received.
+
+Note that only the RM_ADDR are dropped, to allow the linked subflow to
+be quickly and cleanly closed. To only drop those RM_ADDR, a cBPF byte
+code is used. If the IPTables commands fail, that's OK, the tests will
+continue to pass, but not validate this part. This can be ignored:
+another subtest fully depends on such a command, and will be marked as
+skipped.
+
+The 'Fixes' tag below is the same as the one from the previous
+commit: this patch here is not fixing anything wrong in the selftests,
+but it validates the previous fix for an issue introduced by this commit
+ID.
+
+Fixes: 8dd5efb1f91b ("mptcp: send ack for rm_addr")
+Cc: stable@vger.kernel.org
+Reviewed-by: Mat Martineau <martineau@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-3-4b5462b6f016@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ adapted chk_subflow_nr calls to include extra empty first argument ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/net/mptcp/mptcp_join.sh | 36 ++++++++++++++++++++++++
+ 1 file changed, 36 insertions(+)
+
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -64,6 +64,24 @@ CBPF_MPTCP_SUBOPTION_ADD_ADDR="14,
+ 6 0 0 65535,
+ 6 0 0 0"
+
++# IPv4: TCP hdr of 48B, a first suboption of 12B (DACK8), the RM_ADDR suboption
++# generated using "nfbpf_compile '(ip[32] & 0xf0) == 0xc0 && ip[53] == 0x0c &&
++# (ip[66] & 0xf0) == 0x40'"
++CBPF_MPTCP_SUBOPTION_RM_ADDR="13,
++ 48 0 0 0,
++ 84 0 0 240,
++ 21 0 9 64,
++ 48 0 0 32,
++ 84 0 0 240,
++ 21 0 6 192,
++ 48 0 0 53,
++ 21 0 4 12,
++ 48 0 0 66,
++ 84 0 0 240,
++ 21 0 1 64,
++ 6 0 0 65535,
++ 6 0 0 0"
++
+ init_partial()
+ {
+ capout=$(mktemp)
+@@ -3468,6 +3486,14 @@ endpoint_tests()
+ wait_mpj $ns2
+ chk_subflow_nr "" "after no reject" 3
+
++ # To make sure RM_ADDR are sent over a different subflow, but
++ # allow the rest to quickly and cleanly close the subflow
++ local ipt=1
++ ip netns exec "${ns2}" ${iptables} -I OUTPUT -s "10.0.1.2" \
++ -p tcp -m tcp --tcp-option 30 \
++ -m bpf --bytecode \
++ "$CBPF_MPTCP_SUBOPTION_RM_ADDR" \
++ -j DROP || ipt=0
+ local i
+ for i in $(seq 3); do
+ pm_nl_del_endpoint $ns2 1 10.0.1.2
+@@ -3478,6 +3504,7 @@ endpoint_tests()
+ wait_mpj $ns2
+ chk_subflow_nr "" "after re-add id 0 ($i)" 3
+ done
++ [ ${ipt} = 1 ] && ip netns exec "${ns2}" ${iptables} -D OUTPUT 1
+
+ kill_wait "${tests_pid}"
+ kill_events_pids
+@@ -3527,9 +3554,18 @@ endpoint_tests()
+ wait_mpj $ns2
+ chk_subflow_nr "" "after re-add" 3
+
++ # To make sure RM_ADDR are sent over a different subflow, but
++ # allow the rest to quickly and cleanly close the subflow
++ local ipt=1
++ ip netns exec "${ns1}" ${iptables} -I OUTPUT -s "10.0.1.1" \
++ -p tcp -m tcp --tcp-option 30 \
++ -m bpf --bytecode \
++ "$CBPF_MPTCP_SUBOPTION_RM_ADDR" \
++ -j DROP || ipt=0
+ pm_nl_del_endpoint $ns1 42 10.0.1.1
+ sleep 0.5
+ chk_subflow_nr "" "after delete ID 0" 2
++ [ ${ipt} = 1 ] && ip netns exec "${ns1}" ${iptables} -D OUTPUT 1
+
+ pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal
+ wait_mpj $ns2
mm-hugetlb-fix-hugetlb_pmd_shared.patch
mm-hugetlb-fix-two-comments-related-to-huge_pmd_unshare.patch
mm-hugetlb-fix-excessive-ipi-broadcasts-when-unsharing-pmd-tables-using-mmu_gather.patch
+ksmbd-call-ksmbd_vfs_kern_path_end_removing-on-some-error-paths.patch
+ext4-fix-dirtyclusters-double-decrement-on-fs-shutdown.patch
+ext4-always-allocate-blocks-only-from-groups-inode-can-use.patch
+wifi-libertas-fix-use-after-free-in-lbs_free_adapter.patch
+wifi-cfg80211-move-scan-done-work-to-wiphy-work.patch
+wifi-cfg80211-cancel-rfkill_block-work-in-wiphy_unregister.patch
+x86-sev-allow-ibpb-on-entry-feature-for-snp-guests.patch
+net-phy-register-phy-led_triggers-during-probe-to-avoid-ab-ba-deadlock.patch
+drm-amd-display-use-gfp_atomic-in-dc_create_stream_for_sink.patch
+mptcp-pm-avoid-sending-rm_addr-over-same-subflow.patch
+mptcp-pm-in-kernel-always-mark-signal-subflow-endp-as-used.patch
+selftests-mptcp-join-check-rm_addr-not-sent-over-same-subflow.patch
+net-sched-act_gate-snapshot-parameters-with-rcu-on-replace.patch
+alsa-pcm-fix-wait_time-calculations.patch
+alsa-pcm-fix-use-after-free-on-linked-stream-runtime-in-snd_pcm_drain.patch
+can-gs_usb-gs_can_open-always-configure-bitrates-before-starting-device.patch
+kvm-svm-set-clear-cr8-write-interception-when-avic-is-de-activated.patch
+usb-gadget-f_tcm-fix-null-pointer-dereferences-in-nexus-handling.patch
+usb-roles-get-usb-role-switch-from-parent-only-for-usb-b-connector.patch
+asoc-qcom-qdsp6-fix-q6apm-remove-ordering-during-adsp-stop-and-start.patch
+mm-kfence-fix-kasan-hardware-tag-faults-during-late-enablement.patch
+mm-kfence-disable-kfence-upon-kasan-hw-tags-enablement.patch
+iomap-reject-delalloc-mappings-during-writeback.patch
+tracing-fix-syscall-events-activation-by-ensuring-refcount-hits-zero.patch
+pmdomain-bcm-bcm2835-power-fix-broken-reset-status-read.patch
+arm64-reorganise-page_-prot_-macros.patch
+arm64-mm-add-pte_dirty-back-to-page_kernel-to-fix-kexec-hibernation.patch
+ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch
+drm-msm-fix-dma_free_attrs-buffer-size.patch
+drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch
+net-macb-shuffle-the-tx-ring-before-enabling-tx.patch
+s390-zcrypt-enable-autosel_dom-for-cca-serialnr-sysfs-attribute.patch
+xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch
+xfs-ensure-dquot-item-is-deleted-from-ail-only-after-log-shutdown.patch
+crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch
+cifs-open-files-should-not-hold-ref-on-superblock.patch
+kprobes-remove-unneeded-goto.patch
+kprobes-remove-unneeded-warnings-from-__arm_kprobe_ftrace.patch
+iio-buffer-fix-coding-style-warnings.patch
+iio-buffer-fix-wait_queue-not-being-removed.patch
+btrfs-fix-transaction-abort-when-snapshotting-received-subvolumes.patch
+btrfs-fix-transaction-abort-on-set-received-ioctl-due-to-item-overflow.patch
+iio-light-bh1780-fix-pm-runtime-leak-on-error-path.patch
+batman-adv-avoid-ogm-aggregation-when-skb-tailroom-is-insufficient.patch
+nfsd-define-exports_proc_ops-with-config_proc_fs.patch
+nfsd-hold-net-reference-for-the-lifetime-of-proc-fs-nfs-exports-fd.patch
+nfsd-fix-heap-overflow-in-nfsv4.0-lock-replay-cache.patch
+net-macb-queue-tie-off-or-disable-during-wol-suspend.patch
+net-macb-introduce-gem_init_rx_ring.patch
+net-macb-reinitialize-tx-rx-queue-pointer-registers-and-rx-ring-during-resume.patch
+pmdomain-bcm-bcm2835-power-increase-asb-control-timeout.patch
--- /dev/null
+From stable+bounces-227024-greg=kroah.com@vger.kernel.org Wed Mar 18 12:35:52 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 07:31:08 -0400
+Subject: tracing: Fix syscall events activation by ensuring refcount hits zero
+To: stable@vger.kernel.org
+Cc: Huiwen He <hehuiwen@kylinos.cn>, Masami Hiramatsu <mhiramat@kernel.org>, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>, "Steven Rostedt (Google)" <rostedt@goodmis.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318113108.626781-1-sashal@kernel.org>
+
+From: Huiwen He <hehuiwen@kylinos.cn>
+
+[ Upstream commit 0a663b764dbdf135a126284f454c9f01f95a87d4 ]
+
+When multiple syscall events are specified in the kernel command line
+(e.g., trace_event=syscalls:sys_enter_openat,syscalls:sys_enter_close),
+they are often not captured after boot, even though they appear enabled
+in the tracing/set_event file.
+
+The issue stems from how syscall events are initialized. Syscall
+tracepoints require the global reference count (sys_tracepoint_refcount)
+to transition from 0 to 1 to trigger the registration of the syscall
+work (TIF_SYSCALL_TRACEPOINT) for tasks, including the init process (pid 1).
+
+The current implementation of early_enable_events() with disable_first=true
+uses an interleaved sequence of "Disable A -> Enable A -> Disable B -> Enable B".
+If multiple syscalls are enabled, the refcount never drops to zero,
+preventing the 0->1 transition that triggers actual registration.
+
+Fix this by splitting early_enable_events() into two distinct phases:
+1. Disable all events specified in the buffer.
+2. Enable all events specified in the buffer.
+
+This ensures the refcount hits zero before re-enabling, allowing syscall
+events to be properly activated during early boot.
+
+The code is also refactored to use a helper function to avoid logic
+duplication between the disable and enable phases.
+
+Cc: stable@vger.kernel.org
+Cc: Masami Hiramatsu <mhiramat@kernel.org>
+Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Link: https://patch.msgid.link/20260224023544.1250787-1-hehuiwen@kylinos.cn
+Fixes: ce1039bd3a89 ("tracing: Fix enabling of syscall events on the command line")
+Signed-off-by: Huiwen He <hehuiwen@kylinos.cn>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_events.c | 51 +++++++++++++++++++++++++++++++-------------
+ 1 file changed, 36 insertions(+), 15 deletions(-)
+
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -3862,27 +3862,23 @@ static __init int event_trace_memsetup(v
+ return 0;
+ }
+
+-static __init void
+-early_enable_events(struct trace_array *tr, bool disable_first)
++/*
++ * Helper function to enable or disable a comma-separated list of events
++ * from the bootup buffer.
++ */
++static __init void __early_set_events(struct trace_array *tr, bool enable)
+ {
+ char *buf = bootup_event_buf;
+ char *token;
+- int ret;
+-
+- while (true) {
+- token = strsep(&buf, ",");
+-
+- if (!token)
+- break;
+
++ while ((token = strsep(&buf, ","))) {
+ if (*token) {
+- /* Restarting syscalls requires that we stop them first */
+- if (disable_first)
++ if (enable) {
++ if (ftrace_set_clr_event(tr, token, 1))
++ pr_warn("Failed to enable trace event: %s\n", token);
++ } else {
+ ftrace_set_clr_event(tr, token, 0);
+-
+- ret = ftrace_set_clr_event(tr, token, 1);
+- if (ret)
+- pr_warn("Failed to enable trace event: %s\n", token);
++ }
+ }
+
+ /* Put back the comma to allow this to be called again */
+@@ -3891,6 +3887,31 @@ early_enable_events(struct trace_array *
+ }
+ }
+
++/**
++ * early_enable_events - enable events from the bootup buffer
++ * @tr: The trace array to enable the events in
++ * @disable_first: If true, disable all events before enabling them
++ *
++ * This function enables events from the bootup buffer. If @disable_first
++ * is true, it will first disable all events in the buffer before enabling
++ * them.
++ *
++ * For syscall events, which rely on a global refcount to register the
++ * SYSCALL_WORK_SYSCALL_TRACEPOINT flag (especially for pid 1), we must
++ * ensure the refcount hits zero before re-enabling them. A simple
++ * "disable then enable" per-event is not enough if multiple syscalls are
++ * used, as the refcount will stay above zero. Thus, we need a two-phase
++ * approach: disable all, then enable all.
++ */
++static __init void
++early_enable_events(struct trace_array *tr, bool disable_first)
++{
++ if (disable_first)
++ __early_set_events(tr, false);
++
++ __early_set_events(tr, true);
++}
++
+ static __init int event_trace_enable(void)
+ {
+ struct trace_array *tr = top_trace_array();
--- /dev/null
+From stable+bounces-225704-greg=kroah.com@vger.kernel.org Mon Mar 16 21:53:03 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 16:50:57 -0400
+Subject: usb: gadget: f_tcm: Fix NULL pointer dereferences in nexus handling
+To: stable@vger.kernel.org
+Cc: Jiasheng Jiang <jiashengjiangcool@gmail.com>, stable <stable@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316205057.1402393-1-sashal@kernel.org>
+
+From: Jiasheng Jiang <jiashengjiangcool@gmail.com>
+
+[ Upstream commit b9fde507355342a2d64225d582dc8b98ff5ecb19 ]
+
+The `tpg->tpg_nexus` pointer in the USB Target driver is dynamically
+managed and tied to userspace configuration via ConfigFS. It can be
+NULL if the USB host sends requests before the nexus is fully
+established or immediately after it is dropped.
+
+Currently, functions like `bot_submit_command()` and the data
+transfer paths retrieve `tv_nexus = tpg->tpg_nexus` and immediately
+dereference `tv_nexus->tvn_se_sess` without any validation. If a
+malicious or misconfigured USB host sends a BOT (Bulk-Only Transport)
+command during this race window, it triggers a NULL pointer
+dereference, leading to a kernel panic (local DoS).
+
+This exposes an inconsistent API usage within the module, as peer
+functions like `usbg_submit_command()` and `bot_send_bad_response()`
+correctly implement a NULL check for `tv_nexus` before proceeding.
+
+Fix this by bringing consistency to the nexus handling. Add the
+missing `if (!tv_nexus)` checks to the vulnerable BOT command and
+request processing paths, aborting the command gracefully with an
+error instead of crashing the system.
+
+Fixes: c52661d60f63 ("usb-gadget: Initial merge of target module for UASP + BOT")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Jiasheng Jiang <jiashengjiangcool@gmail.com>
+Reviewed-by: Thinh Nguyen <Thinh.Nguyen@synopsys.com>
+Link: https://patch.msgid.link/20260219023834.17976-1-jiashengjiangcool@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/gadget/function/f_tcm.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1032,6 +1032,13 @@ static void usbg_cmd_work(struct work_st
+ se_cmd = &cmd->se_cmd;
+ tpg = cmd->fu->tpg;
+ tv_nexus = tpg->tpg_nexus;
++ if (!tv_nexus) {
++ struct usb_gadget *gadget = fuas_to_gadget(cmd->fu);
++
++ dev_err(&gadget->dev, "Missing nexus, ignoring command\n");
++ return;
++ }
++
+ dir = get_cmd_dir(cmd->cmd_buf);
+ if (dir < 0) {
+ __target_init_cmd(se_cmd,
+@@ -1160,6 +1167,13 @@ static void bot_cmd_work(struct work_str
+ se_cmd = &cmd->se_cmd;
+ tpg = cmd->fu->tpg;
+ tv_nexus = tpg->tpg_nexus;
++ if (!tv_nexus) {
++ struct usb_gadget *gadget = fuas_to_gadget(cmd->fu);
++
++ dev_err(&gadget->dev, "Missing nexus, ignoring command\n");
++ return;
++ }
++
+ dir = get_cmd_dir(cmd->cmd_buf);
+ if (dir < 0) {
+ __target_init_cmd(se_cmd,
--- /dev/null
+From sashal@kernel.org Mon Mar 16 22:23:54 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 17:23:51 -0400
+Subject: usb: roles: get usb role switch from parent only for usb-b-connector
+To: stable@vger.kernel.org
+Cc: Xu Yang <xu.yang_2@nxp.com>, stable <stable@kernel.org>, Arnaud Ferraris <arnaud.ferraris@collabora.com>, Heikki Krogerus <heikki.krogerus@linux.intel.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316212351.1415785-1-sashal@kernel.org>
+
+From: Xu Yang <xu.yang_2@nxp.com>
+
+[ Upstream commit 8345b1539faa49fcf9c9439c3cbd97dac6eca171 ]
+
+usb_role_switch_is_parent() was walking up to the parent node and checking
+for the "usb-role-switch" property regardless of the type of the passed
+fwnode. This could cause unrelated device nodes to be probed as potential
+role switch parent, leading to spurious matches and "-EPROBE_DEFER" being
+returned infinitely.
+
+Until now, only a Type-B connector node will have a parent node that may
+present the "usb-role-switch" property and register the role switch device.
+For Type-C connector node, its parent node will always be a Type-C chip
+device which will never register the role switch device. However, it may
+still present a non-boolean "usb-role-switch = <&usb_controller>" property
+for historical compatibility.
+
+So restrict the helper to only operate on a Type-B connector node when
+attempting to get the role switch from the parent node.
+
+Fixes: 6fadd72943b8 ("usb: roles: get usb-role-switch from parent")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
+Tested-by: Arnaud Ferraris <arnaud.ferraris@collabora.com>
+Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
+Link: https://patch.msgid.link/20260309074313.2809867-3-xu.yang_2@nxp.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+[ replace fwnode_device_is_compatible() call with its expansion ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/roles/class.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -108,9 +108,14 @@ static void *usb_role_switch_match(struc
+ static struct usb_role_switch *
+ usb_role_switch_is_parent(struct fwnode_handle *fwnode)
+ {
+- struct fwnode_handle *parent = fwnode_get_parent(fwnode);
++ struct fwnode_handle *parent;
+ struct device *dev;
+
++ if (fwnode_property_match_string(fwnode, "compatible", "usb-b-connector") < 0)
++ return NULL;
++
++ parent = fwnode_get_parent(fwnode);
++
+ if (!fwnode_property_present(parent, "usb-role-switch")) {
+ fwnode_handle_put(parent);
+ return NULL;
--- /dev/null
+From stable+bounces-223616-greg=kroah.com@vger.kernel.org Mon Mar 9 12:38:33 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 07:38:23 -0400
+Subject: wifi: cfg80211: cancel rfkill_block work in wiphy_unregister()
+To: stable@vger.kernel.org
+Cc: Daniil Dulov <d.dulov@aladdin.ru>, Johannes Berg <johannes.berg@intel.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309113823.823525-2-sashal@kernel.org>
+
+From: Daniil Dulov <d.dulov@aladdin.ru>
+
+[ Upstream commit 767d23ade706d5fa51c36168e92a9c5533c351a1 ]
+
+There is a use-after-free error in cfg80211_shutdown_all_interfaces found
+by syzkaller:
+
+BUG: KASAN: use-after-free in cfg80211_shutdown_all_interfaces+0x213/0x220
+Read of size 8 at addr ffff888112a78d98 by task kworker/0:5/5326
+CPU: 0 UID: 0 PID: 5326 Comm: kworker/0:5 Not tainted 6.19.0-rc2 #2 PREEMPT(voluntary)
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
+Workqueue: events cfg80211_rfkill_block_work
+Call Trace:
+ <TASK>
+ dump_stack_lvl+0x116/0x1f0
+ print_report+0xcd/0x630
+ kasan_report+0xe0/0x110
+ cfg80211_shutdown_all_interfaces+0x213/0x220
+ cfg80211_rfkill_block_work+0x1e/0x30
+ process_one_work+0x9cf/0x1b70
+ worker_thread+0x6c8/0xf10
+ kthread+0x3c5/0x780
+ ret_from_fork+0x56d/0x700
+ ret_from_fork_asm+0x1a/0x30
+ </TASK>
+
+The problem arises because the rfkill_block work is not cancelled when the
+wiphy is being unregistered. Fix the issue by cancelling the
+corresponding work in wiphy_unregister().
+
+Found by Linux Verification Center (linuxtesting.org) with Syzkaller.
+
+Fixes: 1f87f7d3a3b4 ("cfg80211: add rfkill support")
+Cc: stable@vger.kernel.org
+Signed-off-by: Daniil Dulov <d.dulov@aladdin.ru>
+Link: https://patch.msgid.link/20260211082024.1967588-1-d.dulov@aladdin.ru
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/core.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -1125,6 +1125,7 @@ void wiphy_unregister(struct wiphy *wiph
+ /* this has nothing to do now but make sure it's gone */
+ cancel_work_sync(&rdev->wiphy_work);
+
++ cancel_work_sync(&rdev->rfkill_block);
+ cancel_work_sync(&rdev->conn_work);
+ flush_work(&rdev->event_work);
+ cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);
--- /dev/null
+From stable+bounces-223615-greg=kroah.com@vger.kernel.org Mon Mar 9 12:38:32 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 07:38:22 -0400
+Subject: wifi: cfg80211: move scan done work to wiphy work
+To: stable@vger.kernel.org
+Cc: Johannes Berg <johannes.berg@intel.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309113823.823525-1-sashal@kernel.org>
+
+From: Johannes Berg <johannes.berg@intel.com>
+
+[ Upstream commit fe0af9fe54d0ff53aa49eef390c8962355b274e2 ]
+
+Move the scan done work to the new wiphy work to
+simplify the code a bit.
+
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Stable-dep-of: 767d23ade706 ("wifi: cfg80211: cancel rfkill_block work in wiphy_unregister()")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/core.c | 3 +--
+ net/wireless/core.h | 4 ++--
+ net/wireless/scan.c | 14 ++++----------
+ 3 files changed, 7 insertions(+), 14 deletions(-)
+
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -525,7 +525,7 @@ use_default_name:
+ spin_lock_init(&rdev->bss_lock);
+ INIT_LIST_HEAD(&rdev->bss_list);
+ INIT_LIST_HEAD(&rdev->sched_scan_req_list);
+- INIT_WORK(&rdev->scan_done_wk, __cfg80211_scan_done);
++ wiphy_work_init(&rdev->scan_done_wk, __cfg80211_scan_done);
+ INIT_DELAYED_WORK(&rdev->dfs_update_channels_wk,
+ cfg80211_dfs_channels_update_work);
+ #ifdef CONFIG_CFG80211_WEXT
+@@ -1125,7 +1125,6 @@ void wiphy_unregister(struct wiphy *wiph
+ /* this has nothing to do now but make sure it's gone */
+ cancel_work_sync(&rdev->wiphy_work);
+
+- flush_work(&rdev->scan_done_wk);
+ cancel_work_sync(&rdev->conn_work);
+ flush_work(&rdev->event_work);
+ cancel_delayed_work_sync(&rdev->dfs_update_channels_wk);
+--- a/net/wireless/core.h
++++ b/net/wireless/core.h
+@@ -75,7 +75,7 @@ struct cfg80211_registered_device {
+ struct sk_buff *scan_msg;
+ struct list_head sched_scan_req_list;
+ time64_t suspend_at;
+- struct work_struct scan_done_wk;
++ struct wiphy_work scan_done_wk;
+
+ struct genl_info *cur_cmd_info;
+
+@@ -447,7 +447,7 @@ bool cfg80211_valid_key_idx(struct cfg80
+ int cfg80211_validate_key_settings(struct cfg80211_registered_device *rdev,
+ struct key_params *params, int key_idx,
+ bool pairwise, const u8 *mac_addr);
+-void __cfg80211_scan_done(struct work_struct *wk);
++void __cfg80211_scan_done(struct wiphy *wiphy, struct wiphy_work *wk);
+ void ___cfg80211_scan_done(struct cfg80211_registered_device *rdev,
+ bool send_message);
+ void cfg80211_add_sched_scan_req(struct cfg80211_registered_device *rdev,
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1096,16 +1096,9 @@ void ___cfg80211_scan_done(struct cfg802
+ nl80211_send_scan_msg(rdev, msg);
+ }
+
+-void __cfg80211_scan_done(struct work_struct *wk)
++void __cfg80211_scan_done(struct wiphy *wiphy, struct wiphy_work *wk)
+ {
+- struct cfg80211_registered_device *rdev;
+-
+- rdev = container_of(wk, struct cfg80211_registered_device,
+- scan_done_wk);
+-
+- wiphy_lock(&rdev->wiphy);
+- ___cfg80211_scan_done(rdev, true);
+- wiphy_unlock(&rdev->wiphy);
++ ___cfg80211_scan_done(wiphy_to_rdev(wiphy), true);
+ }
+
+ void cfg80211_scan_done(struct cfg80211_scan_request *request,
+@@ -1131,7 +1124,8 @@ void cfg80211_scan_done(struct cfg80211_
+ }
+
+ request->notified = true;
+- queue_work(cfg80211_wq, &wiphy_to_rdev(request->wiphy)->scan_done_wk);
++ wiphy_work_queue(request->wiphy,
++ &wiphy_to_rdev(request->wiphy)->scan_done_wk);
+ }
+ EXPORT_SYMBOL(cfg80211_scan_done);
+
--- /dev/null
+From stable+bounces-223609-greg=kroah.com@vger.kernel.org Mon Mar 9 12:19:25 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 07:14:37 -0400
+Subject: wifi: libertas: fix use-after-free in lbs_free_adapter()
+To: stable@vger.kernel.org
+Cc: Daniel Hodges <git@danielhodges.dev>, Johannes Berg <johannes.berg@intel.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309111437.811502-1-sashal@kernel.org>
+
+From: Daniel Hodges <git@danielhodges.dev>
+
+[ Upstream commit 03cc8f90d0537fcd4985c3319b4fafbf2e3fb1f0 ]
+
+The lbs_free_adapter() function uses timer_delete() (non-synchronous)
+for both command_timer and tx_lockup_timer before the structure is
+freed. This is incorrect because timer_delete() does not wait for
+any running timer callback to complete.
+
+If a timer callback is executing when lbs_free_adapter() is called,
+the callback will access freed memory since lbs_cfg_free() frees the
+containing structure immediately after lbs_free_adapter() returns.
+
+Both timer callbacks (lbs_cmd_timeout_handler and lbs_tx_lockup_handler)
+access priv->driver_lock, priv->cur_cmd, priv->dev, and other fields,
+which would all be use-after-free violations.
+
+Use timer_delete_sync() instead to ensure any running timer callback
+has completed before returning.
+
+This bug was introduced in commit 8f641d93c38a ("libertas: detect TX
+lockups and reset hardware") where del_timer() was used instead of
+del_timer_sync() in the cleanup path. The command_timer has had the
+same issue since the driver was first written.
+
+Fixes: 8f641d93c38a ("libertas: detect TX lockups and reset hardware")
+Fixes: 954ee164f4f4 ("[PATCH] libertas: reorganize and simplify init sequence")
+Cc: stable@vger.kernel.org
+Signed-off-by: Daniel Hodges <git@danielhodges.dev>
+Link: https://patch.msgid.link/20260206195356.15647-1-git@danielhodges.dev
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+[ del_timer() => timer_delete_sync() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/marvell/libertas/main.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/wireless/marvell/libertas/main.c
++++ b/drivers/net/wireless/marvell/libertas/main.c
+@@ -881,8 +881,8 @@ static void lbs_free_adapter(struct lbs_
+ {
+ lbs_free_cmd_buffer(priv);
+ kfifo_free(&priv->event_fifo);
+- del_timer(&priv->command_timer);
+- del_timer(&priv->tx_lockup_timer);
++ timer_delete_sync(&priv->command_timer);
++ timer_delete_sync(&priv->tx_lockup_timer);
+ del_timer(&priv->auto_deepsleep_timer);
+ }
+
--- /dev/null
+From stable+bounces-223637-greg=kroah.com@vger.kernel.org Mon Mar 9 14:08:30 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 09:06:48 -0400
+Subject: x86/sev: Allow IBPB-on-Entry feature for SNP guests
+To: stable@vger.kernel.org
+Cc: Kim Phillips <kim.phillips@amd.com>, "Borislav Petkov (AMD)" <bp@alien8.de>, Nikunj A Dadhania <nikunj@amd.com>, Tom Lendacky <thomas.lendacky@amd.com>, stable@kernel.org, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309130648.871470-1-sashal@kernel.org>
+
+From: Kim Phillips <kim.phillips@amd.com>
+
+[ Upstream commit 9073428bb204d921ae15326bb7d4558d9d269aab ]
+
+The SEV-SNP IBPB-on-Entry feature does not require a guest-side
+implementation. It was added in Zen5 h/w, after the first SNP Zen
+implementation, and thus was not accounted for when the initial set of SNP
+features were added to the kernel.
+
+Out of an abundance of caution, commit
+
+ 8c29f0165405 ("x86/sev: Add SEV-SNP guest feature negotiation support")
+
+included SEV_STATUS' IBPB-on-Entry bit as a reserved bit, thereby masking
+guests from using the feature.
+
+Allow guests to make use of IBPB-on-Entry when supported by the hypervisor, as
+the bit is now architecturally defined and safe to expose.
+
+Fixes: 8c29f0165405 ("x86/sev: Add SEV-SNP guest feature negotiation support")
+Signed-off-by: Kim Phillips <kim.phillips@amd.com>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>
+Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
+Cc: stable@kernel.org
+Link: https://patch.msgid.link/20260203222405.4065706-2-kim.phillips@amd.com
+[ No SECURE_AVIC ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/boot/compressed/sev.c | 1 +
+ arch/x86/include/asm/msr-index.h | 5 ++++-
+ 2 files changed, 5 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -328,6 +328,7 @@ static void enforce_vmpl0(void)
+ MSR_AMD64_SNP_VMSA_REG_PROTECTION | \
+ MSR_AMD64_SNP_RESERVED_BIT13 | \
+ MSR_AMD64_SNP_RESERVED_BIT15 | \
++ MSR_AMD64_SNP_RESERVED_BITS18_22 | \
+ MSR_AMD64_SNP_RESERVED_MASK)
+
+ /*
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -630,11 +630,14 @@
+ #define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(14)
+ #define MSR_AMD64_SNP_VMSA_REG_PROTECTION BIT_ULL(16)
+ #define MSR_AMD64_SNP_SMT_PROTECTION BIT_ULL(17)
++#define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT 23
++#define MSR_AMD64_SNP_IBPB_ON_ENTRY BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT)
+
+ /* SNP feature bits reserved for future use. */
+ #define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13)
+ #define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15)
+-#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 18)
++#define MSR_AMD64_SNP_RESERVED_BITS18_22 GENMASK_ULL(22, 18)
++#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 24)
+
+ #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f
+
--- /dev/null
+From stable+bounces-227265-greg=kroah.com@vger.kernel.org Thu Mar 19 12:07:35 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 07:07:26 -0400
+Subject: xfs: ensure dquot item is deleted from AIL only after log shutdown
+To: stable@vger.kernel.org
+Cc: Long Li <leo.lilong@huawei.com>, Carlos Maiolino <cmaiolino@redhat.com>, Christoph Hellwig <hch@lst.de>, Carlos Maiolino <cem@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319110726.2314927-1-sashal@kernel.org>
+
+From: Long Li <leo.lilong@huawei.com>
+
+[ Upstream commit 186ac39b8a7d3ec7ce9c5dd45e5c2730177f375c ]
+
+In xfs_qm_dqflush(), when a dquot flush fails due to corruption
+(the out_abort error path), the original code removed the dquot log
+item from the AIL before calling xfs_force_shutdown(). This ordering
+introduces a subtle race condition that can lead to data loss after
+a crash.
+
+The AIL tracks the oldest dirty metadata in the journal. The position
+of the tail item in the AIL determines the log tail LSN, which is the
+oldest LSN that must be preserved for crash recovery. When an item is
+removed from the AIL, the log tail can advance past the LSN of that item.
+
+The race window is as follows: if the dquot item happens to be at
+the tail of the log, removing it from the AIL allows the log tail
+to advance. If a concurrent log write is sampling the tail LSN at
+the same time and subsequently writes a complete checkpoint (i.e.,
+one containing a commit record) to disk before the shutdown takes
+effect, the journal will no longer protect the dquot's last
+modification. On the next mount, log recovery will not replay the
+dquot changes, even though they were never written back to disk,
+resulting in silent data loss.
+
+Fix this by calling xfs_force_shutdown() before xfs_trans_ail_delete()
+in the out_abort path. Once the log is shut down, no new log writes
+can complete with an updated tail LSN, making it safe to remove the
+dquot item from the AIL.
+
+Cc: stable@vger.kernel.org
+Fixes: b707fffda6a3 ("xfs: abort consistently on dquot flush failure")
+Signed-off-by: Long Li <leo.lilong@huawei.com>
+Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+[ adapted error path to preserve existing out_unlock label between xfs_trans_ail_delete and xfs_dqfunlock ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_dquot.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_dquot.c
++++ b/fs/xfs/xfs_dquot.c
+@@ -1297,9 +1297,15 @@ xfs_qm_dqflush(
+ return 0;
+
+ out_abort:
++ /*
++ * Shut down the log before removing the dquot item from the AIL.
++ * Otherwise, the log tail may advance past this item's LSN while
++ * log writes are still in progress, making these unflushed changes
++ * unrecoverable on the next mount.
++ */
++ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
+ xfs_trans_ail_delete(lip, 0);
+- xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ out_unlock:
+ xfs_dqfunlock(dqp);
+ return error;
--- /dev/null
+From stable+bounces-227264-greg=kroah.com@vger.kernel.org Thu Mar 19 12:12:16 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 07:07:17 -0400
+Subject: xfs: fix integer overflow in bmap intent sort comparator
+To: stable@vger.kernel.org
+Cc: Long Li <leo.lilong@huawei.com>, "Darrick J. Wong" <djwong@kernel.org>, Carlos Maiolino <cem@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319110717.2314489-1-sashal@kernel.org>
+
+From: Long Li <leo.lilong@huawei.com>
+
+[ Upstream commit 362c490980867930a098b99f421268fbd7ca05fd ]
+
+xfs_bmap_update_diff_items() sorts bmap intents by inode number using
+a subtraction of two xfs_ino_t (uint64_t) values, with the result
+truncated to int. This is incorrect when two inode numbers differ by
+more than INT_MAX (2^31 - 1), which is entirely possible on large XFS
+filesystems.
+
+Fix this by replacing the subtraction with cmp_int().
+
+Cc: <stable@vger.kernel.org> # v4.9
+Fixes: 9f3afb57d5f1 ("xfs: implement deferred bmbt map/unmap operations")
+Signed-off-by: Long Li <leo.lilong@huawei.com>
+Reviewed-by: Darrick J. Wong <djwong@kernel.org>
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+[ replaced `bi_entry()` macro with `container_of()` and inlined `cmp_int()` as a manual three-way comparison expression ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_bmap_item.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_bmap_item.c
++++ b/fs/xfs/xfs_bmap_item.c
+@@ -277,7 +277,8 @@ xfs_bmap_update_diff_items(
+
+ ba = container_of(a, struct xfs_bmap_intent, bi_list);
+ bb = container_of(b, struct xfs_bmap_intent, bi_list);
+- return ba->bi_owner->i_ino - bb->bi_owner->i_ino;
++ return (ba->bi_owner->i_ino > bb->bi_owner->i_ino) -
++ (ba->bi_owner->i_ino < bb->bi_owner->i_ino);
+ }
+
+ /* Set the map extent flags for this mapping. */