--- /dev/null
+From stable+bounces-222508-greg=kroah.com@vger.kernel.org Mon Mar 2 05:57:20 2026
+From: Rahul Sharma <black.hawk@163.com>
+Date: Mon, 2 Mar 2026 12:56:01 +0800
+Subject: btrfs: do not strictly require dirty metadata threshold for metadata writepages
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org
+Cc: linux-kernel@vger.kernel.org, Qu Wenruo <wqu@suse.com>, Jan Kara <jack@suse.cz>, Boris Burkov <boris@bur.io>, David Sterba <dsterba@suse.com>, Rahul Sharma <black.hawk@163.com>
+Message-ID: <20260302045601.2101230-1-black.hawk@163.com>
+
+From: Qu Wenruo <wqu@suse.com>
+
+[ Upstream commit 4e159150a9a56d66d247f4b5510bed46fe58aa1c ]
+
+[BUG]
+There is an internal report that over 1000 processes are waiting at the
+io_schedule_timeout() of balance_dirty_pages(), causing a system hang
+and triggering a kernel coredump.
+
+The kernel is based on v6.4, but the root problem still applies to any
+upstream kernel before v6.18.
+
+[CAUSE]
+First, from Jan Kara, with his wisdom on the dirty page balance behavior:
+
+ This cgroup dirty limit was what was actually playing the role here
+ because the cgroup had only a small amount of memory and so the dirty
+ limit for it was something like 16MB.
+
+ Dirty throttling is responsible for enforcing that nobody can dirty
+ (significantly) more dirty memory than there's dirty limit. Thus when
+ a task is dirtying pages it periodically enters into balance_dirty_pages()
+ and we let it sleep there to slow down the dirtying.
+
+ When the system is over dirty limit already (either globally or within
+ a cgroup of the running task), we will not let the task exit from
+ balance_dirty_pages() until the number of dirty pages drops below the
+ limit.
+
+ So in this particular case, as I already mentioned, there was a cgroup
+ with relatively small amount of memory and as a result with dirty limit
+ set at 16MB. A task from that cgroup has dirtied about 28MB worth of
+ pages in btrfs btree inode and these were practically the only dirty
+ pages in that cgroup.
+
+So that means the only way to reduce the dirty pages of that cgroup is
+to write back the dirty pages of the btrfs btree inode, and only after
+that can those processes exit balance_dirty_pages().
+
+Now back to the btrfs part, btree_writepages() is responsible for
+writing back dirty btree inode pages.
+
+The problem here is a btrfs internal threshold: if the btree inode's
+dirty bytes are below the 32MiB threshold, no writeback will be done
+at all.
+
+This behavior is to batch as much metadata as possible so we won't write
+back those tree blocks and then later re-COW them again for another
+modification.
+
+This internal 32MiB threshold is higher than the existing amount of
+dirty pages (28MiB), meaning no writeback will happen, causing a
+deadlock between btrfs and cgroup:
+
+- Btrfs doesn't want to write back the btree inode until there are
+  more dirty pages
+
+- Cgroup/MM doesn't want more dirty pages for btrfs btree inode
+ Thus any process touching that btree inode is put into sleep until
+ the number of dirty pages is reduced.
+
+Thanks a lot to Jan Kara for the analysis of the root cause.
+
+[ENHANCEMENT]
+Since kernel commit b55102826d7d ("btrfs: set AS_KERNEL_FILE on the
+btree_inode"), btrfs btree inode pages will only be charged to the root
+cgroup, which should have a much larger limit than btrfs' 32MiB
+threshold.
+So it should not affect newer kernels.
+
+But for all current LTS kernels, they are all affected by this problem,
+and backporting the whole AS_KERNEL_FILE may not be a good idea.
+
+Even for newer kernels I still think it's a good idea to get
+rid of the internal threshold at btree_writepages(), since in most cases
+cgroup/MM has a better view of full system memory usage than btrfs' fixed
+threshold.
+
+Internal callers go through btrfs_btree_balance_dirty(), which is
+already doing its own threshold check, so we don't need to bother
+them.
+
+But for external callers of btree_writepages(), just respect their
+requests and write back whatever they want, ignoring the internal
+btrfs threshold to avoid such deadlock on btree inode dirty page
+balancing.
+
+CC: stable@vger.kernel.org
+CC: Jan Kara <jack@suse.cz>
+Reviewed-by: Boris Burkov <boris@bur.io>
+Signed-off-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+[ The context change is due to the commit 41044b41ad2c
+("btrfs: add helper to get fs_info from struct inode pointer")
+in v6.9 and the commit c66f2afc7148
+("btrfs: remove pointless writepages callback wrapper")
+in v6.10 which are irrelevant to the logic of this patch. ]
+Signed-off-by: Rahul Sharma <black.hawk@163.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/disk-io.c | 22 ----------------------
+ fs/btrfs/extent_io.c | 3 +--
+ fs/btrfs/extent_io.h | 3 +--
+ 3 files changed, 2 insertions(+), 26 deletions(-)
+
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -800,28 +800,6 @@ static int btree_migrate_folio(struct ad
+ #define btree_migrate_folio NULL
+ #endif
+
+-static int btree_writepages(struct address_space *mapping,
+- struct writeback_control *wbc)
+-{
+- struct btrfs_fs_info *fs_info;
+- int ret;
+-
+- if (wbc->sync_mode == WB_SYNC_NONE) {
+-
+- if (wbc->for_kupdate)
+- return 0;
+-
+- fs_info = BTRFS_I(mapping->host)->root->fs_info;
+- /* this is a bit racy, but that's ok */
+- ret = __percpu_counter_compare(&fs_info->dirty_metadata_bytes,
+- BTRFS_DIRTY_METADATA_THRESH,
+- fs_info->dirty_metadata_batch);
+- if (ret < 0)
+- return 0;
+- }
+- return btree_write_cache_pages(mapping, wbc);
+-}
+-
+ static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags)
+ {
+ if (folio_test_writeback(folio) || folio_test_dirty(folio))
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -2959,8 +2959,7 @@ static int submit_eb_page(struct page *p
+ return 1;
+ }
+
+-int btree_write_cache_pages(struct address_space *mapping,
+- struct writeback_control *wbc)
++int btree_writepages(struct address_space *mapping, struct writeback_control *wbc)
+ {
+ struct extent_buffer *eb_context = NULL;
+ struct extent_page_data epd = {
+--- a/fs/btrfs/extent_io.h
++++ b/fs/btrfs/extent_io.h
+@@ -152,8 +152,7 @@ int btrfs_read_folio(struct file *file,
+ int extent_write_locked_range(struct inode *inode, u64 start, u64 end);
+ int extent_writepages(struct address_space *mapping,
+ struct writeback_control *wbc);
+-int btree_write_cache_pages(struct address_space *mapping,
+- struct writeback_control *wbc);
++int btree_writepages(struct address_space *mapping, struct writeback_control *wbc);
+ void extent_readahead(struct readahead_control *rac);
+ int extent_fiemap(struct btrfs_inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len);
--- /dev/null
+From alvalan9@foxmail.com Mon Mar 2 13:37:32 2026
+From: Alva Lan <alvalan9@foxmail.com>
+Date: Mon, 2 Mar 2026 20:37:12 +0800
+Subject: btrfs: send: check for inline extents in range_is_hole_in_parent()
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org
+Cc: linux-btrfs@vger.kernel.org, Qu Wenruo <wqu@suse.com>, Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Alva Lan <alvalan9@foxmail.com>
+Message-ID: <tencent_1F20921BD691389E308E0D31B4FF35831F08@qq.com>
+
+From: Qu Wenruo <wqu@suse.com>
+
+[ Upstream commit 08b096c1372cd69627f4f559fb47c9fb67a52b39 ]
+
+Before accessing the disk_bytenr field of a file extent item we need
+to check if we are dealing with an inline extent, because for inline
+extents the data starts at the offset of the disk_bytenr field. So
+accessing disk_bytenr means we are reading inline data, and in case
+the inline data is less than 8 bytes we can actually cause an invalid
+memory access (if this inline extent item is the first item in the
+leaf) or read metadata from other items.
+
+Fixes: 82bfb2e7b645 ("Btrfs: incremental send, fix unnecessary hole writes for sparse files")
+Reviewed-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: Qu Wenruo <wqu@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+[ Avoid leaking the path by using { ret = 0; goto out; } instead of
+returning directly. ]
+Signed-off-by: Alva Lan <alvalan9@foxmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -6289,6 +6289,10 @@ static int range_is_hole_in_parent(struc
+ extent_end = btrfs_file_extent_end(path);
+ if (extent_end <= start)
+ goto next;
++ if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE) {
++ ret = 0;
++ goto out;
++ }
+ if (btrfs_file_extent_disk_bytenr(leaf, fi) == 0) {
+ search_start = extent_end;
+ goto next;
--- /dev/null
+From alvalan9@foxmail.com Tue Mar 3 12:56:39 2026
+From: Alva Lan <alvalan9@foxmail.com>
+Date: Tue, 3 Mar 2026 19:55:58 +0800
+Subject: drm/amdgpu: drop redundant sched job cleanup when cs is aborted
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org
+Cc: "Guchun Chen" <guchun.chen@amd.com>, "Christian König" <christian.koenig@amd.com>, "Alex Deucher" <alexander.deucher@amd.com>, "Alva Lan" <alvalan9@foxmail.com>
+Message-ID: <tencent_7BA7352F1F4B920BD6F63845300D0DE7B408@qq.com>
+
+From: Guchun Chen <guchun.chen@amd.com>
+
+[ Upstream commit 1253685f0d3eb3eab0bfc4bf15ab341a5f3da0c8 ]
+
+Once command submission fails due to userptr invalidation in
+amdgpu_cs_submit, the legacy code performs a cleanup of the scheduler
+job. However, this is not needed at all, as a former commit integrated
+the job cleanup into amdgpu_job_free. Because of the resulting double
+free, a NULL pointer dereference will occur in such a scenario.
+
+Bug: https://gitlab.freedesktop.org/drm/amd/-/issues/2457
+Fixes: f7d66fb2ea43 ("drm/amdgpu: cleanup scheduler job initialization v2")
+Signed-off-by: Guchun Chen <guchun.chen@amd.com>
+Reviewed-by: Christian König <christian.koenig@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Cc: stable@vger.kernel.org
+[ Adjust context. The context change is irrelevant
+to the current patch logic. ]
+Signed-off-by: Alva Lan <alvalan9@foxmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 13 +++----------
+ 1 file changed, 3 insertions(+), 10 deletions(-)
+
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1244,7 +1244,7 @@ static int amdgpu_cs_submit(struct amdgp
+ fence = &p->jobs[i]->base.s_fence->scheduled;
+ r = amdgpu_sync_fence(&leader->sync, fence);
+ if (r)
+- goto error_cleanup;
++ return r;
+ }
+
+ if (p->gang_size > 1) {
+@@ -1270,7 +1270,8 @@ static int amdgpu_cs_submit(struct amdgp
+ }
+ if (r) {
+ r = -EAGAIN;
+- goto error_unlock;
++ mutex_unlock(&p->adev->notifier_lock);
++ return r;
+ }
+
+ p->fence = dma_fence_get(&leader->base.s_fence->finished);
+@@ -1317,14 +1318,6 @@ static int amdgpu_cs_submit(struct amdgp
+ mutex_unlock(&p->adev->notifier_lock);
+ mutex_unlock(&p->bo_list->bo_list_mutex);
+ return 0;
+-
+-error_unlock:
+- mutex_unlock(&p->adev->notifier_lock);
+-
+-error_cleanup:
+- for (i = 0; i < p->gang_size; ++i)
+- drm_sched_job_cleanup(&p->jobs[i]->base);
+- return r;
+ }
+
+ /* Cleanup the parser structure */
--- /dev/null
+From stable+bounces-222506-greg=kroah.com@vger.kernel.org Mon Mar 2 04:20:45 2026
+From: Rahul Sharma <black.hawk@163.com>
+Date: Mon, 2 Mar 2026 11:19:50 +0800
+Subject: eth: bnxt: always recalculate features after XDP clearing, fix null-deref
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org
+Cc: linux-kernel@vger.kernel.org, Jakub Kicinski <kuba@kernel.org>, Michael Chan <michael.chan@broadcom.com>, Somnath Kotur <somnath.kotur@broadcom.com>, Rahul Sharma <black.hawk@163.com>
+Message-ID: <20260302031950.938519-1-black.hawk@163.com>
+
+From: Jakub Kicinski <kuba@kernel.org>
+
+[ Upstream commit f0aa6a37a3dbb40b272df5fc6db93c114688adcd ]
+
+Recalculate features when XDP is detached.
+
+Before:
+ # ip li set dev eth0 xdp obj xdp_dummy.bpf.o sec xdp
+ # ip li set dev eth0 xdp off
+ # ethtool -k eth0 | grep gro
+ rx-gro-hw: off [requested on]
+
+After:
+ # ip li set dev eth0 xdp obj xdp_dummy.bpf.o sec xdp
+ # ip li set dev eth0 xdp off
+ # ethtool -k eth0 | grep gro
+ rx-gro-hw: on
+
+The fact that HW-GRO doesn't get re-enabled automatically is just
+a minor annoyance. The real issue is that the features will randomly
+come back during another reconfiguration which just happens to invoke
+netdev_update_features(). The driver doesn't handle reconfiguring
+two things at a time very robustly.
+
+Starting with commit 98ba1d931f61 ("bnxt_en: Fix RSS logic in
+__bnxt_reserve_rings()") we only reconfigure the RSS hash table
+if the "effective" number of Rx rings has changed. If HW-GRO is
+enabled "effective" number of rings is 2x what user sees.
+So if we are in the bad state, with HW-GRO re-enablement "pending"
+after XDP off, and we lower the rings by / 2 - the HW-GRO rings
+doing 2x and the ethtool -L doing / 2 may cancel each other out,
+and the:
+
+ if (old_rx_rings != bp->hw_resc.resv_rx_rings &&
+
+condition in __bnxt_reserve_rings() will be false.
+The RSS map won't get updated, and we'll crash with:
+
+ BUG: kernel NULL pointer dereference, address: 0000000000000168
+ RIP: 0010:__bnxt_hwrm_vnic_set_rss+0x13a/0x1a0
+ bnxt_hwrm_vnic_rss_cfg_p5+0x47/0x180
+ __bnxt_setup_vnic_p5+0x58/0x110
+ bnxt_init_nic+0xb72/0xf50
+ __bnxt_open_nic+0x40d/0xab0
+ bnxt_open_nic+0x2b/0x60
+ ethtool_set_channels+0x18c/0x1d0
+
+As we try to access a freed ring.
+
+The issue is present since XDP support was added, really, but
+prior to commit 98ba1d931f61 ("bnxt_en: Fix RSS logic in
+__bnxt_reserve_rings()") it wasn't causing major issues.
+
+Fixes: 1054aee82321 ("bnxt_en: Use NETIF_F_GRO_HW.")
+Fixes: 98ba1d931f61 ("bnxt_en: Fix RSS logic in __bnxt_reserve_rings()")
+Reviewed-by: Michael Chan <michael.chan@broadcom.com>
+Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
+Link: https://patch.msgid.link/20250109043057.2888953-1-kuba@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ The context change is due to the commit 1f6e77cb9b32
+("bnxt_en: Add bnxt_l2_filter hash table.") in v6.8 and the commit
+8336a974f37d ("bnxt_en: Save user configured filters in a lookup list")
+in v6.9 which are irrelevant to the logic of this patch. ]
+Signed-off-by: Rahul Sharma <black.hawk@163.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/broadcom/bnxt/bnxt.c | 25 ++++++++++++++++++++-----
+ drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 +-
+ drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 7 -------
+ 3 files changed, 21 insertions(+), 13 deletions(-)
+
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4045,7 +4045,7 @@ void bnxt_set_ring_params(struct bnxt *b
+ /* Changing allocation mode of RX rings.
+ * TODO: Update when extending xdp_rxq_info to support allocation modes.
+ */
+-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
++static void __bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ {
+ struct net_device *dev = bp->dev;
+
+@@ -4066,15 +4066,30 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp
+ bp->rx_skb_func = bnxt_rx_page_skb;
+ }
+ bp->rx_dir = DMA_BIDIRECTIONAL;
+- /* Disable LRO or GRO_HW */
+- netdev_update_features(dev);
+ } else {
+ dev->max_mtu = bp->max_mtu;
+ bp->flags &= ~BNXT_FLAG_RX_PAGE_MODE;
+ bp->rx_dir = DMA_FROM_DEVICE;
+ bp->rx_skb_func = bnxt_rx_skb;
+ }
+- return 0;
++}
++
++void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
++{
++ __bnxt_set_rx_skb_mode(bp, page_mode);
++
++ if (!page_mode) {
++ int rx, tx;
++
++ bnxt_get_max_rings(bp, &rx, &tx, true);
++ if (rx > 1) {
++ bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
++ bp->dev->hw_features |= NETIF_F_LRO;
++ }
++ }
++
++ /* Update LRO and GRO_HW availability */
++ netdev_update_features(bp->dev);
+ }
+
+ static void bnxt_free_vnic_attributes(struct bnxt *bp)
+@@ -13778,7 +13793,7 @@ static int bnxt_init_one(struct pci_dev
+ if (rc)
+ goto init_err_pci_clean;
+
+- bnxt_set_rx_skb_mode(bp, false);
++ __bnxt_set_rx_skb_mode(bp, false);
+ bnxt_set_tpa_flags(bp);
+ bnxt_set_ring_params(bp);
+ rc = bnxt_set_dflt_rings(bp, true);
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -2315,7 +2315,7 @@ void bnxt_reuse_rx_data(struct bnxt_rx_r
+ u32 bnxt_fw_health_readl(struct bnxt *bp, int reg_idx);
+ void bnxt_set_tpa_flags(struct bnxt *bp);
+ void bnxt_set_ring_params(struct bnxt *);
+-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
++void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
+ int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap,
+ int bmap_size, bool async_only);
+ int bnxt_hwrm_func_drv_unrgtr(struct bnxt *bp);
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -423,14 +423,7 @@ static int bnxt_xdp_set(struct bnxt *bp,
+ if (prog) {
+ bnxt_set_rx_skb_mode(bp, true);
+ } else {
+- int rx, tx;
+-
+ bnxt_set_rx_skb_mode(bp, false);
+- bnxt_get_max_rings(bp, &rx, &tx, true);
+- if (rx > 1) {
+- bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
+- bp->dev->hw_features |= NETIF_F_LRO;
+- }
+ }
+ bp->tx_nr_rings_xdp = tx_xdp;
+ bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tc + tx_xdp;
--- /dev/null
+From stable+bounces-223357-greg=kroah.com@vger.kernel.org Fri Mar 6 16:12:44 2026
+From: Ovidiu Panait <ovidiu.panait.rb@renesas.com>
+Date: Fri, 6 Mar 2026 15:07:18 +0000
+Subject: net: stmmac: remove support for lpi_intr_o
+To: stable@vger.kernel.org
+Cc: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>, Ovidiu Panait <ovidiu.panait.rb@renesas.com>, Jakub Kicinski <kuba@kernel.org>
+Message-ID: <20260306150718.23811-2-ovidiu.panait.rb@renesas.com>
+
+From: "Russell King (Oracle)" <rmk+kernel@armlinux.org.uk>
+
+commit 14eb64db8ff07b58a35b98375f446d9e20765674 upstream.
+
+The dwmac databook for v3.74a states that lpi_intr_o is a sideband
+signal which should be used to ungate the application clock, and this
+signal is synchronous to the receive clock. The receive clock can run
+at 2.5, 25 or 125MHz depending on the media speed, and can stop under
+the control of the link partner. This means that the time it takes to
+clear is dependent on the negotiated media speed, and thus can be 8,
+40, or 400ns after reading the LPI control and status register.
+
+It has been observed with some aggressive link partners, this clock
+can stop while lpi_intr_o is still asserted, meaning that the signal
+remains asserted for an indefinite period that the local system has
+no direct control over.
+
+The LPI interrupts will still be signalled through the main interrupt
+path in any case, and this path is not dependent on the receive clock.
+
+Thus, since we do not gate the application clock, and the chances of
+adding clock gating in the future are slim due to the clocks being
+ill-defined, lpi_intr_o serves no useful purpose. Remove the code which
+requests the interrupt, and all associated code.
+
+Reported-by: Ovidiu Panait <ovidiu.panait.rb@renesas.com>
+Tested-by: Ovidiu Panait <ovidiu.panait.rb@renesas.com> # Renesas RZ/V2H board
+Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
+Link: https://patch.msgid.link/E1vnJbt-00000007YYN-28nm@rmk-PC.armlinux.org.uk
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Ovidiu Panait <ovidiu.panait.rb@renesas.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/stmicro/stmmac/common.h | 1
+ drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c | 4 --
+ drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c | 7 ---
+ drivers/net/ethernet/stmicro/stmmac/stmmac.h | 2 -
+ drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 36 ------------------
+ drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c | 8 ----
+ include/linux/stmmac.h | 1
+ 7 files changed, 59 deletions(-)
+
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -346,7 +346,6 @@ enum request_irq_err {
+ REQ_IRQ_ERR_RX,
+ REQ_IRQ_ERR_SFTY_UE,
+ REQ_IRQ_ERR_SFTY_CE,
+- REQ_IRQ_ERR_LPI,
+ REQ_IRQ_ERR_WOL,
+ REQ_IRQ_ERR_MAC,
+ REQ_IRQ_ERR_NO,
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+@@ -614,7 +614,6 @@ static int intel_mgbe_common_data(struct
+
+ /* Setup MSI vector offset specific to Intel mGbE controller */
+ plat->msi_mac_vec = 29;
+- plat->msi_lpi_vec = 28;
+ plat->msi_sfty_ce_vec = 27;
+ plat->msi_sfty_ue_vec = 26;
+ plat->msi_rx_base_vec = 0;
+@@ -999,8 +998,6 @@ static int stmmac_config_multi_msi(struc
+ res->irq = pci_irq_vector(pdev, plat->msi_mac_vec);
+ if (plat->msi_wol_vec < STMMAC_MSI_VEC_MAX)
+ res->wol_irq = pci_irq_vector(pdev, plat->msi_wol_vec);
+- if (plat->msi_lpi_vec < STMMAC_MSI_VEC_MAX)
+- res->lpi_irq = pci_irq_vector(pdev, plat->msi_lpi_vec);
+ if (plat->msi_sfty_ce_vec < STMMAC_MSI_VEC_MAX)
+ res->sfty_ce_irq = pci_irq_vector(pdev, plat->msi_sfty_ce_vec);
+ if (plat->msi_sfty_ue_vec < STMMAC_MSI_VEC_MAX)
+@@ -1082,7 +1079,6 @@ static int intel_eth_pci_probe(struct pc
+ */
+ plat->msi_mac_vec = STMMAC_MSI_VEC_MAX;
+ plat->msi_wol_vec = STMMAC_MSI_VEC_MAX;
+- plat->msi_lpi_vec = STMMAC_MSI_VEC_MAX;
+ plat->msi_sfty_ce_vec = STMMAC_MSI_VEC_MAX;
+ plat->msi_sfty_ue_vec = STMMAC_MSI_VEC_MAX;
+ plat->msi_rx_base_vec = STMMAC_MSI_VEC_MAX;
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -135,13 +135,6 @@ static int loongson_dwmac_probe(struct p
+ res.wol_irq = res.irq;
+ }
+
+- res.lpi_irq = of_irq_get_byname(np, "eth_lpi");
+- if (res.lpi_irq < 0) {
+- dev_err(&pdev->dev, "IRQ eth_lpi not found\n");
+- ret = -ENODEV;
+- goto err_disable_msi;
+- }
+-
+ ret = stmmac_dvr_probe(&pdev->dev, plat, &res);
+ if (ret)
+ goto err_disable_msi;
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -28,7 +28,6 @@ struct stmmac_resources {
+ void __iomem *addr;
+ u8 mac[ETH_ALEN];
+ int wol_irq;
+- int lpi_irq;
+ int irq;
+ int sfty_ce_irq;
+ int sfty_ue_irq;
+@@ -254,7 +253,6 @@ struct stmmac_priv {
+ bool wol_irq_disabled;
+ int clk_csr;
+ struct timer_list eee_ctrl_timer;
+- int lpi_irq;
+ int eee_enabled;
+ int eee_active;
+ int tx_lpi_timer;
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3497,10 +3497,6 @@ static void stmmac_free_irq(struct net_d
+ free_irq(priv->sfty_ce_irq, dev);
+ fallthrough;
+ case REQ_IRQ_ERR_SFTY_CE:
+- if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq)
+- free_irq(priv->lpi_irq, dev);
+- fallthrough;
+- case REQ_IRQ_ERR_LPI:
+ if (priv->wol_irq > 0 && priv->wol_irq != dev->irq)
+ free_irq(priv->wol_irq, dev);
+ fallthrough;
+@@ -3555,24 +3551,6 @@ static int stmmac_request_irq_multi_msi(
+ }
+ }
+
+- /* Request the LPI IRQ in case of another line
+- * is used for LPI
+- */
+- if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
+- int_name = priv->int_name_lpi;
+- sprintf(int_name, "%s:%s", dev->name, "lpi");
+- ret = request_irq(priv->lpi_irq,
+- stmmac_mac_interrupt,
+- 0, int_name, dev);
+- if (unlikely(ret < 0)) {
+- netdev_err(priv->dev,
+- "%s: alloc lpi MSI %d (error: %d)\n",
+- __func__, priv->lpi_irq, ret);
+- irq_err = REQ_IRQ_ERR_LPI;
+- goto irq_error;
+- }
+- }
+-
+ /* Request the Safety Feature Correctible Error line in
+ * case of another line is used
+ */
+@@ -3696,19 +3674,6 @@ static int stmmac_request_irq_single(str
+ }
+ }
+
+- /* Request the IRQ lines */
+- if (priv->lpi_irq > 0 && priv->lpi_irq != dev->irq) {
+- ret = request_irq(priv->lpi_irq, stmmac_interrupt,
+- IRQF_SHARED, dev->name, dev);
+- if (unlikely(ret < 0)) {
+- netdev_err(priv->dev,
+- "%s: ERROR: allocating the LPI IRQ %d (%d)\n",
+- __func__, priv->lpi_irq, ret);
+- irq_err = REQ_IRQ_ERR_LPI;
+- goto irq_error;
+- }
+- }
+-
+ return 0;
+
+ irq_error:
+@@ -7259,7 +7224,6 @@ int stmmac_dvr_probe(struct device *devi
+
+ priv->dev->irq = res->irq;
+ priv->wol_irq = res->wol_irq;
+- priv->lpi_irq = res->lpi_irq;
+ priv->sfty_ce_irq = res->sfty_ce_irq;
+ priv->sfty_ue_irq = res->sfty_ue_irq;
+ for (i = 0; i < MTL_MAX_RX_QUEUES; i++)
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -758,14 +758,6 @@ int stmmac_get_platform_resources(struct
+ stmmac_res->wol_irq = stmmac_res->irq;
+ }
+
+- stmmac_res->lpi_irq =
+- platform_get_irq_byname_optional(pdev, "eth_lpi");
+- if (stmmac_res->lpi_irq < 0) {
+- if (stmmac_res->lpi_irq == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
+- dev_info(&pdev->dev, "IRQ eth_lpi not found\n");
+- }
+-
+ stmmac_res->addr = devm_platform_ioremap_resource(pdev, 0);
+
+ return PTR_ERR_OR_ZERO(stmmac_res->addr);
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -284,7 +284,6 @@ struct plat_stmmacenet_data {
+ bool multi_msi_en;
+ int msi_mac_vec;
+ int msi_wol_vec;
+- int msi_lpi_vec;
+ int msi_sfty_ce_vec;
+ int msi_sfty_ue_vec;
+ int msi_rx_base_vec;
drm-exynos-vidi-use-priv-vidi_dev-for-ctx-lookup-in-vidi_connection_ioctl.patch
drm-exynos-vidi-fix-to-avoid-directly-dereferencing-user-pointer.patch
drm-exynos-vidi-use-ctx-lock-to-protect-struct-vidi_context-member-variables-related-to-memory-alloc-free.patch
+x86-uprobes-fix-xol-allocation-failure-for-32-bit-tasks.patch
+btrfs-send-check-for-inline-extents-in-range_is_hole_in_parent.patch
+btrfs-do-not-strictly-require-dirty-metadata-threshold-for-metadata-writepages.patch
+eth-bnxt-always-recalculate-features-after-xdp-clearing-fix-null-deref.patch
+spi-cadence-quadspi-implement-refcount-to-handle-unbind-during-busy.patch
+drm-amdgpu-drop-redundant-sched-job-cleanup-when-cs-is-aborted.patch
+net-stmmac-remove-support-for-lpi_intr_o.patch
--- /dev/null
+From stable+bounces-222505-greg=kroah.com@vger.kernel.org Mon Mar 2 04:11:04 2026
+From: Robert Garcia <rob_garcia@163.com>
+Date: Mon, 2 Mar 2026 11:10:15 +0800
+Subject: spi: cadence-quadspi: Implement refcount to handle unbind during busy
+To: stable@vger.kernel.org, Khairul Anuar Romli <khairul.anuar.romli@altera.com>
+Cc: Mark Brown <broonie@kernel.org>, Niravkumar L Rabara <nirav.rabara@altera.com>, Robert Garcia <rob_garcia@163.com>, linux-spi@vger.kernel.org, linux-kernel@vger.kernel.org
+Message-ID: <20260302031015.1319477-1-rob_garcia@163.com>
+
+From: Khairul Anuar Romli <khairul.anuar.romli@altera.com>
+
+[ Upstream commit 7446284023e8ef694fb392348185349c773eefb3 ]
+
+driver support indirect read and indirect write operation with
+assumption no force device removal(unbind) operation. However
+force device removal(removal) is still available to root superuser.
+
+Unbinding driver during operation causes kernel crash. This changes
+ensure driver able to handle such operation for indirect read and
+indirect write by implementing refcount to track attached devices
+to the controller and gracefully wait and until attached devices
+remove operation completed before proceed with removal operation.
+
+Signed-off-by: Khairul Anuar Romli <khairul.anuar.romli@altera.com>
+Reviewed-by: Matthew Gerlach <matthew.gerlach@altera.com>
+Reviewed-by: Niravkumar L Rabara <nirav.rabara@altera.com>
+Link: https://patch.msgid.link/8704fd6bd2ff4d37bba4a0eacf5eba3ba001079e.1756168074.git.khairul.anuar.romli@altera.com
+Signed-off-by: Mark Brown <broonie@kernel.org>
+[ Add cqspi definition in cqspi_exec_mem_op and fix minor context changes. ]
+Signed-off-by: Robert Garcia <rob_garcia@163.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/spi/spi-cadence-quadspi.c | 34 ++++++++++++++++++++++++++++++++++
+ 1 file changed, 34 insertions(+)
+
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -100,6 +100,8 @@ struct cqspi_st {
+ bool apb_ahb_hazard;
+
+ bool is_jh7110; /* Flag for StarFive JH7110 SoC */
++ refcount_t refcount;
++ refcount_t inflight_ops;
+ };
+
+ struct cqspi_driver_platdata {
+@@ -686,6 +688,9 @@ static int cqspi_indirect_read_execute(s
+ u8 *rxbuf_end = rxbuf + n_rx;
+ int ret = 0;
+
++ if (!refcount_read(&cqspi->refcount))
++ return -ENODEV;
++
+ writel(from_addr, reg_base + CQSPI_REG_INDIRECTRDSTARTADDR);
+ writel(remaining, reg_base + CQSPI_REG_INDIRECTRDBYTES);
+
+@@ -973,6 +978,9 @@ static int cqspi_indirect_write_execute(
+ unsigned int write_bytes;
+ int ret;
+
++ if (!refcount_read(&cqspi->refcount))
++ return -ENODEV;
++
+ writel(to_addr, reg_base + CQSPI_REG_INDIRECTWRSTARTADDR);
+ writel(remaining, reg_base + CQSPI_REG_INDIRECTWRBYTES);
+
+@@ -1365,11 +1373,29 @@ static int cqspi_mem_process(struct spi_
+ static int cqspi_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ {
+ int ret;
++ struct cqspi_st *cqspi = spi_controller_get_devdata(mem->spi->controller);
++
++ if (refcount_read(&cqspi->inflight_ops) == 0)
++ return -ENODEV;
++
++ if (!refcount_read(&cqspi->refcount))
++ return -EBUSY;
++
++ refcount_inc(&cqspi->inflight_ops);
++
++ if (!refcount_read(&cqspi->refcount)) {
++ if (refcount_read(&cqspi->inflight_ops))
++ refcount_dec(&cqspi->inflight_ops);
++ return -EBUSY;
++ }
+
+ ret = cqspi_mem_process(mem, op);
+ if (ret)
+ dev_err(&mem->spi->dev, "operation failed with %d\n", ret);
+
++ if (refcount_read(&cqspi->inflight_ops) > 1)
++ refcount_dec(&cqspi->inflight_ops);
++
+ return ret;
+ }
+
+@@ -1800,6 +1826,9 @@ static int cqspi_probe(struct platform_d
+ }
+ }
+
++ refcount_set(&cqspi->refcount, 1);
++ refcount_set(&cqspi->inflight_ops, 1);
++
+ ret = devm_request_irq(dev, irq, cqspi_irq_handler, 0,
+ pdev->name, cqspi);
+ if (ret) {
+@@ -1852,6 +1881,11 @@ static int cqspi_remove(struct platform_
+ {
+ struct cqspi_st *cqspi = platform_get_drvdata(pdev);
+
++ refcount_set(&cqspi->refcount, 0);
++
++ if (!refcount_dec_and_test(&cqspi->inflight_ops))
++ cqspi_wait_idle(cqspi);
++
+ spi_unregister_master(cqspi->master);
+ cqspi_controller_enable(cqspi, 0);
+
--- /dev/null
+From stable+bounces-222630-greg=kroah.com@vger.kernel.org Mon Mar 2 16:43:44 2026
+From: Oleg Nesterov <oleg@redhat.com>
+Date: Mon, 2 Mar 2026 16:36:12 +0100
+Subject: x86/uprobes: Fix XOL allocation failure for 32-bit tasks
+To: Sasha Levin <sashal@kernel.org>
+Cc: stable@vger.kernel.org, Paulo Andrade <pandrade@redhat.com>, "Peter Zijlstra (Intel)" <peterz@infradead.org>, linux-trace-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
+Message-ID: <aaWubGOJwB8CsLVy@redhat.com>
+Content-Disposition: inline
+
+From: Oleg Nesterov <oleg@redhat.com>
+
+[ Upstream commit d55c571e4333fac71826e8db3b9753fadfbead6a ]
+
+This script
+
+ #!/usr/bin/bash
+
+ echo 0 > /proc/sys/kernel/randomize_va_space
+
+ echo 'void main(void) {}' > TEST.c
+
+ # -fcf-protection to ensure that the 1st endbr32 insn can't be emulated
+ gcc -m32 -fcf-protection=branch TEST.c -o test
+
+ bpftrace -e 'uprobe:./test:main {}' -c ./test
+
+"hangs", the probed ./test task enters an endless loop.
+
+The problem is that with randomize_va_space == 0
+get_unmapped_area(TASK_SIZE - PAGE_SIZE) called by xol_add_vma() can not
+just return the "addr == TASK_SIZE - PAGE_SIZE" hint, this addr is used
+by the stack vma.
+
+arch_get_unmapped_area_topdown() doesn't take TIF_ADDR32 into account and
+in_32bit_syscall() is false, this leads to info.high_limit > TASK_SIZE.
+vm_unmapped_area() happily returns the high address > TASK_SIZE and then
+get_unmapped_area() returns -ENOMEM after the "if (addr > TASK_SIZE - len)"
+check.
+
+handle_swbp() doesn't report this failure (probably it should) and silently
+restarts the probed insn. Endless loop.
+
+I think that the right fix should change the x86 get_unmapped_area() paths
+to rely on TIF_ADDR32 rather than in_32bit_syscall(). Note also that if
+CONFIG_X86_X32_ABI=y, in_x32_syscall() falsely returns true in this case
+because ->orig_ax == -1.
+
+But we need a simple fix for -stable, so this patch just sets TS_COMPAT if
+the probed task is 32-bit to make in_ia32_syscall() true.
+
+Fixes: 1b028f784e8c ("x86/mm: Introduce mmap_compat_base() for 32-bit mmap()")
+Reported-by: Paulo Andrade <pandrade@redhat.com>
+Signed-off-by: Oleg Nesterov <oleg@redhat.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://lore.kernel.org/all/aV5uldEvV7pb4RA8@redhat.com/
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/aWO7Fdxn39piQnxu@redhat.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/uprobes.c | 24 ++++++++++++++++++++++++
+ include/linux/uprobes.h | 1 +
+ kernel/events/uprobes.c | 10 +++++++---
+ 3 files changed, 32 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -1097,3 +1097,27 @@ bool arch_uretprobe_is_alive(struct retu
+ else
+ return regs->sp <= ret->stack;
+ }
++
++#ifdef CONFIG_IA32_EMULATION
++unsigned long arch_uprobe_get_xol_area(void)
++{
++ struct thread_info *ti = current_thread_info();
++ unsigned long vaddr;
++
++ /*
++ * HACK: we are not in a syscall, but x86 get_unmapped_area() paths
++ * ignore TIF_ADDR32 and rely on in_32bit_syscall() to calculate
++ * vm_unmapped_area_info.high_limit.
++ *
++ * The #ifdef above doesn't cover the CONFIG_X86_X32_ABI=y case,
++ * but in this case in_32bit_syscall() -> in_x32_syscall() always
++ * (falsely) returns true because ->orig_ax == -1.
++ */
++ if (test_thread_flag(TIF_ADDR32))
++ ti->status |= TS_COMPAT;
++ vaddr = get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE, PAGE_SIZE, 0, 0);
++ ti->status &= ~TS_COMPAT;
++
++ return vaddr;
++}
++#endif
+--- a/include/linux/uprobes.h
++++ b/include/linux/uprobes.h
+@@ -140,6 +140,7 @@ extern bool arch_uretprobe_is_alive(stru
+ extern bool arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
+ extern void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+ void *src, unsigned long len);
++extern unsigned long arch_uprobe_get_xol_area(void);
+ #else /* !CONFIG_UPROBES */
+ struct uprobes_state {
+ };
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1441,6 +1441,12 @@ void uprobe_munmap(struct vm_area_struct
+ set_bit(MMF_RECALC_UPROBES, &vma->vm_mm->flags);
+ }
+
++unsigned long __weak arch_uprobe_get_xol_area(void)
++{
++ /* Try to map as high as possible, this is only a hint. */
++ return get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE, PAGE_SIZE, 0, 0);
++}
++
+ /* Slot allocation for XOL */
+ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
+ {
+@@ -1456,9 +1462,7 @@ static int xol_add_vma(struct mm_struct
+ }
+
+ if (!area->vaddr) {
+- /* Try to map as high as possible, this is only a hint. */
+- area->vaddr = get_unmapped_area(NULL, TASK_SIZE - PAGE_SIZE,
+- PAGE_SIZE, 0, 0);
++ area->vaddr = arch_uprobe_get_xol_area();
+ if (IS_ERR_VALUE(area->vaddr)) {
+ ret = area->vaddr;
+ goto fail;