--- /dev/null
+From stable+bounces-211358-greg=kroah.com@vger.kernel.org Fri Jan 23 08:34:49 2026
+From: Rahul Sharma <black.hawk@163.com>
+Date: Fri, 23 Jan 2026 15:33:58 +0800
+Subject: accel/ivpu: Fix race condition when unbinding BOs
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org
+Cc: linux-kernel@vger.kernel.org, Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>, Jeff Hugo <jeff.hugo@oss.qualcomm.com>, Karol Wachowski <karol.wachowski@linux.intel.com>, Rahul Sharma <black.hawk@163.com>
+Message-ID: <20260123073358.1530418-1-black.hawk@163.com>
+
+From: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>
+
+[ Upstream commit 00812636df370bedf4e44a0c81b86ea96bca8628 ]
+
+Fix 'Memory manager not clean during takedown' warning that occurs
+when ivpu_gem_bo_free() removes the BO from the BOs list before it
+gets unmapped. Then file_priv_unbind() triggers a warning in
+drm_mm_takedown() during context teardown.
+
+Protect the unmapping sequence with bo_list_lock to ensure the BO is
+always fully unmapped when removed from the list. This ensures the BO
+is either fully unmapped at context teardown time or present on the
+list and unmapped by file_priv_unbind().
+
+Fixes: 48aea7f2a2ef ("accel/ivpu: Fix locking in ivpu_bo_remove_all_bos_from_context()")
+Signed-off-by: Tomasz Rusinowicz <tomasz.rusinowicz@intel.com>
+Reviewed-by: Jeff Hugo <jeff.hugo@oss.qualcomm.com>
+Signed-off-by: Karol Wachowski <karol.wachowski@linux.intel.com>
+Link: https://patch.msgid.link/20251029071451.184243-1-karol.wachowski@linux.intel.com
+[ The context change is due to the commit e0c0891cd63b
+("accel/ivpu: Rework bind/unbind of imported buffers")
+and the proper adaptation has been done. ]
+Signed-off-by: Rahul Sharma <black.hawk@163.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/accel/ivpu/ivpu_gem.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/accel/ivpu/ivpu_gem.c
++++ b/drivers/accel/ivpu/ivpu_gem.c
+@@ -240,7 +240,6 @@ static void ivpu_gem_bo_free(struct drm_
+
+ mutex_lock(&vdev->bo_list_lock);
+ list_del(&bo->bo_list_node);
+- mutex_unlock(&vdev->bo_list_lock);
+
+ drm_WARN_ON(&vdev->drm, !drm_gem_is_imported(&bo->base.base) &&
+ !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ));
+@@ -248,6 +247,8 @@ static void ivpu_gem_bo_free(struct drm_
+ drm_WARN_ON(&vdev->drm, bo->base.vaddr);
+
+ ivpu_bo_unbind_locked(bo);
++ mutex_unlock(&vdev->bo_list_lock);
++
+ drm_WARN_ON(&vdev->drm, bo->mmu_mapped);
+ drm_WARN_ON(&vdev->drm, bo->ctx);
+
--- /dev/null
+From stable+bounces-211647-greg=kroah.com@vger.kernel.org Mon Jan 26 17:04:04 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 26 Jan 2026 11:03:52 -0500
+Subject: arm64: dts: rockchip: remove redundant max-link-speed from nanopi-r4s
+To: stable@vger.kernel.org
+Cc: Geraldo Nascimento <geraldogabriel@gmail.com>, Dragan Simic <dsimic@manjaro.org>, Shawn Lin <shawn.lin@rock-chips.com>, Heiko Stuebner <heiko@sntech.de>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260126160352.3337022-1-sashal@kernel.org>
+
+From: Geraldo Nascimento <geraldogabriel@gmail.com>
+
+[ Upstream commit ce652c98a7bfa0b7c675ef5cd85c44c186db96af ]
+
+This is already the default in rk3399-base.dtsi; remove the redundant
+declaration from rk3399-nanopi-r4s.dtsi.
+
+Fixes: db792e9adbf8 ("rockchip: rk3399: Add support for FriendlyARM NanoPi R4S")
+Cc: stable@vger.kernel.org
+Reported-by: Dragan Simic <dsimic@manjaro.org>
+Reviewed-by: Dragan Simic <dsimic@manjaro.org>
+Signed-off-by: Geraldo Nascimento <geraldogabriel@gmail.com>
+Acked-by: Shawn Lin <shawn.lin@rock-chips.com>
+Link: https://patch.msgid.link/6694456a735844177c897581f785cc00c064c7d1.1763415706.git.geraldogabriel@gmail.com
+Signed-off-by: Heiko Stuebner <heiko@sntech.de>
+[ adapted file path from rk3399-nanopi-r4s.dtsi to rk3399-nanopi-r4s.dts ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
+@@ -73,7 +73,6 @@
+ };
+
+ &pcie0 {
+- max-link-speed = <1>;
+ num-lanes = <1>;
+ vpcie3v3-supply = <&vcc3v3_sys>;
+ };
--- /dev/null
+From 04a899573fb87273a656f178b5f920c505f68875 Mon Sep 17 00:00:00 2001
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Mon, 20 Oct 2025 09:54:41 +0200
+Subject: bpf: Do not let BPF test infra emit invalid GSO types to stack
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+commit 04a899573fb87273a656f178b5f920c505f68875 upstream.
+
+Yinhao et al. reported that their fuzzer tool was able to trigger a
+skb_warn_bad_offload() from netif_skb_features() -> gso_features_check().
+When a BPF program - triggered via BPF test infra - pushes the packet
+to the loopback device via bpf_clone_redirect() then mentioned offload
+warning can be seen. GSO-related features are then rightfully disabled.
+
+We get into this situation due to convert___skb_to_skb() setting
+gso_segs and gso_size but not gso_type. Technically, it makes sense
+that this warning triggers since the GSO properties are malformed due
+to the gso_type. Potentially, the gso_type could be marked non-trustworthy
+through setting it at least to SKB_GSO_DODGY without any other specific
+assumptions, but that also feels wrong given we should not go further
+into the GSO engine in the first place.
+
+The checks were added in 121d57af308d ("gso: validate gso_type in GSO
+handlers") because there were malicious (syzbot) senders that combine
+a protocol with a non-matching gso_type. If we would want to drop such
+packets, gso_features_check() currently only returns feature flags via
+netif_skb_features(), so one location for potentially dropping such skbs
+could be validate_xmit_unreadable_skb(), but then otoh it would be
+an additional check in the fast-path for a very corner case. Given
+bpf_clone_redirect() is the only place where BPF test infra could emit
+such packets, let's reject them right there.
+
+Fixes: 850a88cc4096 ("bpf: Expose __sk_buff wire_len/gso_segs to BPF_PROG_TEST_RUN")
+Fixes: cf62089b0edd ("bpf: Add gso_size to __sk_buff")
+Reported-by: Yinhao Hu <dddddd@hust.edu.cn>
+Reported-by: Kaiyan Mei <M202472210@hust.edu.cn>
+Reported-by: Dongliang Mu <dzm91@hust.edu.cn>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
+Link: https://patch.msgid.link/20251020075441.127980-1-daniel@iogearbox.net
+Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/bpf/test_run.c | 5 +++++
+ net/core/filter.c | 7 +++++++
+ 2 files changed, 12 insertions(+)
+
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -944,6 +944,11 @@ static int convert___skb_to_skb(struct s
+
+ if (__skb->gso_segs > GSO_MAX_SEGS)
+ return -EINVAL;
++
++ /* Currently GSO type is zero/unset. If this gets extended with
++ * a small list of accepted GSO types in future, the filter for
++ * an unset GSO type in bpf_clone_redirect() can be lifted.
++ */
+ skb_shinfo(skb)->gso_segs = __skb->gso_segs;
+ skb_shinfo(skb)->gso_size = __skb->gso_size;
+ skb_shinfo(skb)->hwtstamps.hwtstamp = __skb->hwtstamp;
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2466,6 +2466,13 @@ BPF_CALL_3(bpf_clone_redirect, struct sk
+ if (unlikely(flags & (~(BPF_F_INGRESS) | BPF_F_REDIRECT_INTERNAL)))
+ return -EINVAL;
+
++ /* BPF test infra's convert___skb_to_skb() can create type-less
++ * GSO packets. gso_features_check() will detect this as a bad
++ * offload. However, lets not leak them out in the first place.
++ */
++ if (unlikely(skb_is_gso(skb) && !skb_shinfo(skb)->gso_type))
++ return -EBADMSG;
++
+ dev = dev_get_by_index_rcu(dev_net(skb->dev), ifindex);
+ if (unlikely(!dev))
+ return -EINVAL;
--- /dev/null
+From stable+bounces-211707-greg=kroah.com@vger.kernel.org Tue Jan 27 03:53:11 2026
+From: Rahul Sharma <black.hawk@163.com>
+Date: Tue, 27 Jan 2026 10:52:40 +0800
+Subject: btrfs: fix racy bitfield write in btrfs_clear_space_info_full()
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org
+Cc: linux-kernel@vger.kernel.org, Boris Burkov <boris@bur.io>, Qu Wenruo <wqu@suse.com>, David Sterba <dsterba@suse.com>, Rahul Sharma <black.hawk@163.com>
+Message-ID: <20260127025240.1991664-1-black.hawk@163.com>
+
+From: Boris Burkov <boris@bur.io>
+
+[ Upstream commit 38e818718c5e04961eea0fa8feff3f100ce40408 ]
+
+From the memory-barriers.txt document regarding memory barrier ordering
+guarantees:
+
+ (*) These guarantees do not apply to bitfields, because compilers often
+ generate code to modify these using non-atomic read-modify-write
+ sequences. Do not attempt to use bitfields to synchronize parallel
+ algorithms.
+
+ (*) Even in cases where bitfields are protected by locks, all fields
+ in a given bitfield must be protected by one lock. If two fields
+ in a given bitfield are protected by different locks, the compiler's
+ non-atomic read-modify-write sequences can cause an update to one
+ field to corrupt the value of an adjacent field.
+
+btrfs_space_info has a bitfield sharing an underlying word consisting of
+the fields full, chunk_alloc, and flush:
+
+struct btrfs_space_info {
+ struct btrfs_fs_info * fs_info; /* 0 8 */
+ struct btrfs_space_info * parent; /* 8 8 */
+ ...
+ int clamp; /* 172 4 */
+ unsigned int full:1; /* 176: 0 4 */
+ unsigned int chunk_alloc:1; /* 176: 1 4 */
+ unsigned int flush:1; /* 176: 2 4 */
+ ...
+
+Therefore, to be safe from parallel read-modify-writes losing a write to
+one of the bitfield members protected by a lock, all writes to all the
+bitfields must use the lock. They almost universally do, except for
+btrfs_clear_space_info_full() which iterates over the space_infos and
+writes out found->full = 0 without a lock.
+
+Imagine that we have one thread completing a transaction in which we
+finished deleting a block_group and are thus calling
+btrfs_clear_space_info_full() while simultaneously the data reclaim
+ticket infrastructure is running do_async_reclaim_data_space():
+
+ T1 T2
+btrfs_commit_transaction
+ btrfs_clear_space_info_full
+ data_sinfo->full = 0
+ READ: full:0, chunk_alloc:0, flush:1
+ do_async_reclaim_data_space(data_sinfo)
+ spin_lock(&space_info->lock);
+ if(list_empty(tickets))
+ space_info->flush = 0;
+ READ: full: 0, chunk_alloc:0, flush:1
+ MOD/WRITE: full: 0, chunk_alloc:0, flush:0
+ spin_unlock(&space_info->lock);
+ return;
+ MOD/WRITE: full:0, chunk_alloc:0, flush:1
+
+and now data_sinfo->flush is 1 but the reclaim worker has exited. This
+breaks the invariant that flush is 0 iff there is no work queued or
+running. Once this invariant is violated, future allocations that go
+into __reserve_bytes() will add tickets to space_info->tickets but will
+see space_info->flush is set to 1 and not queue the work. After this,
+they will block forever on the resulting ticket, as it is now impossible
+to kick the worker again.
+
+I also confirmed by looking at the assembly of the affected kernel that
+it is doing RMW operations. For example, to set the flush (3rd) bit to 0,
+the assembly is:
+ andb $0xfb,0x60(%rbx)
+and similarly for setting the full (1st) bit to 0:
+ andb $0xfe,-0x20(%rax)
+
+So I think this is really a bug on practical systems. I have observed
+a number of systems in this exact state, but am currently unable to
+reproduce it.
+
+Rather than leaving this footgun lying around for the future, take
+advantage of the fact that there is room in the struct anyway, and that
+it is already quite large and simply change the three bitfield members to
+bools. This avoids writes to space_info->full having any effect on
+writes to space_info->flush, regardless of locking.
+
+Fixes: 957780eb2788 ("Btrfs: introduce ticketed enospc infrastructure")
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Boris Burkov <boris@bur.io>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+[ The context change is due to the commit cc0517fe779f
+("btrfs: tweak extent/chunk allocation for space_info sub-space")
+in v6.16 which is irrelevant to the logic of this patch. ]
+Signed-off-by: Rahul Sharma <black.hawk@163.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/block-group.c | 6 +++---
+ fs/btrfs/space-info.c | 22 +++++++++++-----------
+ fs/btrfs/space-info.h | 6 +++---
+ 3 files changed, 17 insertions(+), 17 deletions(-)
+
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -4195,7 +4195,7 @@ int btrfs_chunk_alloc(struct btrfs_trans
+ mutex_unlock(&fs_info->chunk_mutex);
+ } else {
+ /* Proceed with allocation */
+- space_info->chunk_alloc = 1;
++ space_info->chunk_alloc = true;
+ wait_for_alloc = false;
+ spin_unlock(&space_info->lock);
+ }
+@@ -4244,7 +4244,7 @@ int btrfs_chunk_alloc(struct btrfs_trans
+ spin_lock(&space_info->lock);
+ if (ret < 0) {
+ if (ret == -ENOSPC)
+- space_info->full = 1;
++ space_info->full = true;
+ else
+ goto out;
+ } else {
+@@ -4254,7 +4254,7 @@ int btrfs_chunk_alloc(struct btrfs_trans
+
+ space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
+ out:
+- space_info->chunk_alloc = 0;
++ space_info->chunk_alloc = false;
+ spin_unlock(&space_info->lock);
+ mutex_unlock(&fs_info->chunk_mutex);
+
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -183,7 +183,7 @@ void btrfs_clear_space_info_full(struct
+ struct btrfs_space_info *found;
+
+ list_for_each_entry(found, head, list)
+- found->full = 0;
++ found->full = false;
+ }
+
+ /*
+@@ -364,7 +364,7 @@ void btrfs_add_bg_to_space_info(struct b
+ found->bytes_readonly += block_group->bytes_super;
+ btrfs_space_info_update_bytes_zone_unusable(info, found, block_group->zone_unusable);
+ if (block_group->length > 0)
+- found->full = 0;
++ found->full = false;
+ btrfs_try_granting_tickets(info, found);
+ spin_unlock(&found->lock);
+
+@@ -1140,7 +1140,7 @@ static void btrfs_async_reclaim_metadata
+ spin_lock(&space_info->lock);
+ to_reclaim = btrfs_calc_reclaim_metadata_size(fs_info, space_info);
+ if (!to_reclaim) {
+- space_info->flush = 0;
++ space_info->flush = false;
+ spin_unlock(&space_info->lock);
+ return;
+ }
+@@ -1152,7 +1152,7 @@ static void btrfs_async_reclaim_metadata
+ flush_space(fs_info, space_info, to_reclaim, flush_state, false);
+ spin_lock(&space_info->lock);
+ if (list_empty(&space_info->tickets)) {
+- space_info->flush = 0;
++ space_info->flush = false;
+ spin_unlock(&space_info->lock);
+ return;
+ }
+@@ -1195,7 +1195,7 @@ static void btrfs_async_reclaim_metadata
+ flush_state = FLUSH_DELAYED_ITEMS_NR;
+ commit_cycles--;
+ } else {
+- space_info->flush = 0;
++ space_info->flush = false;
+ }
+ } else {
+ flush_state = FLUSH_DELAYED_ITEMS_NR;
+@@ -1357,7 +1357,7 @@ static void btrfs_async_reclaim_data_spa
+
+ spin_lock(&space_info->lock);
+ if (list_empty(&space_info->tickets)) {
+- space_info->flush = 0;
++ space_info->flush = false;
+ spin_unlock(&space_info->lock);
+ return;
+ }
+@@ -1368,7 +1368,7 @@ static void btrfs_async_reclaim_data_spa
+ flush_space(fs_info, space_info, U64_MAX, ALLOC_CHUNK_FORCE, false);
+ spin_lock(&space_info->lock);
+ if (list_empty(&space_info->tickets)) {
+- space_info->flush = 0;
++ space_info->flush = false;
+ spin_unlock(&space_info->lock);
+ return;
+ }
+@@ -1385,7 +1385,7 @@ static void btrfs_async_reclaim_data_spa
+ data_flush_states[flush_state], false);
+ spin_lock(&space_info->lock);
+ if (list_empty(&space_info->tickets)) {
+- space_info->flush = 0;
++ space_info->flush = false;
+ spin_unlock(&space_info->lock);
+ return;
+ }
+@@ -1402,7 +1402,7 @@ static void btrfs_async_reclaim_data_spa
+ if (maybe_fail_all_tickets(fs_info, space_info))
+ flush_state = 0;
+ else
+- space_info->flush = 0;
++ space_info->flush = false;
+ } else {
+ flush_state = 0;
+ }
+@@ -1418,7 +1418,7 @@ static void btrfs_async_reclaim_data_spa
+
+ aborted_fs:
+ maybe_fail_all_tickets(fs_info, space_info);
+- space_info->flush = 0;
++ space_info->flush = false;
+ spin_unlock(&space_info->lock);
+ }
+
+@@ -1787,7 +1787,7 @@ static int __reserve_bytes(struct btrfs_
+ */
+ maybe_clamp_preempt(fs_info, space_info);
+
+- space_info->flush = 1;
++ space_info->flush = true;
+ trace_btrfs_trigger_flush(fs_info,
+ space_info->flags,
+ orig_bytes, flush,
+--- a/fs/btrfs/space-info.h
++++ b/fs/btrfs/space-info.h
+@@ -136,11 +136,11 @@ struct btrfs_space_info {
+ flushing. The value is >> clamp, so turns
+ out to be a 2^clamp divisor. */
+
+- unsigned int full:1; /* indicates that we cannot allocate any more
++ bool full; /* indicates that we cannot allocate any more
+ chunks for this space */
+- unsigned int chunk_alloc:1; /* set if we are allocating a chunk */
++ bool chunk_alloc; /* set if we are allocating a chunk */
+
+- unsigned int flush:1; /* set if we are trying to make space */
++ bool flush; /* set if we are trying to make space */
+
+ unsigned int force_alloc; /* set if we need to force a chunk
+ alloc for this space */
--- /dev/null
+From 5a4391bdc6c8357242f62f22069c865b792406b3 Mon Sep 17 00:00:00 2001
+From: Marc Kleine-Budde <mkl@pengutronix.de>
+Date: Sat, 10 Jan 2026 12:52:27 +0100
+Subject: can: esd_usb: esd_usb_read_bulk_callback(): fix URB memory leak
+
+From: Marc Kleine-Budde <mkl@pengutronix.de>
+
+commit 5a4391bdc6c8357242f62f22069c865b792406b3 upstream.
+
+Fix a memory leak similar to the one fixed in commit 7352e1d5932a ("can:
+gs_usb: gs_usb_receive_bulk_callback(): fix URB memory leak").
+
+In esd_usb_open(), the URBs for USB-in transfers are allocated, added to
+the dev->rx_submitted anchor and submitted. In the complete callback
+esd_usb_read_bulk_callback(), the URBs are processed and resubmitted. In
+esd_usb_close() the URBs are freed by calling
+usb_kill_anchored_urbs(&dev->rx_submitted).
+
+However, this does not take into account that the USB framework unanchors
+the URB before the complete function is called. This means that once an
+in-URB has been completed, it is no longer anchored and is ultimately not
+released in esd_usb_close().
+
+Fix the memory leak by anchoring the URB in the
+esd_usb_read_bulk_callback() to the dev->rx_submitted anchor.
+
+Fixes: 96d8e90382dc ("can: Add driver for esd CAN-USB/2 device")
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260116-can_usb-fix-memory-leak-v2-2-4b8cb2915571@pengutronix.de
+Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/can/usb/esd_usb.c | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/can/usb/esd_usb.c
++++ b/drivers/net/can/usb/esd_usb.c
+@@ -539,13 +539,20 @@ resubmit_urb:
+ urb->transfer_buffer, ESD_USB_RX_BUFFER_SIZE,
+ esd_usb_read_bulk_callback, dev);
+
++ usb_anchor_urb(urb, &dev->rx_submitted);
++
+ retval = usb_submit_urb(urb, GFP_ATOMIC);
++ if (!retval)
++ return;
++
++ usb_unanchor_urb(urb);
++
+ if (retval == -ENODEV) {
+ for (i = 0; i < dev->net_count; i++) {
+ if (dev->nets[i])
+ netif_device_detach(dev->nets[i]->netdev);
+ }
+- } else if (retval) {
++ } else {
+ dev_err(dev->udev->dev.parent,
+ "failed resubmitting read bulk urb: %d\n", retval);
+ }
--- /dev/null
+From 566beb347eded7a860511164a7a163bc882dc4d0 Mon Sep 17 00:00:00 2001
+From: Siddharth Vadapalli <s-vadapalli@ti.com>
+Date: Wed, 5 Feb 2025 17:48:01 +0530
+Subject: dmaengine: ti: k3-udma: Enable second resource range for BCDMA and PKTDMA
+
+From: Siddharth Vadapalli <s-vadapalli@ti.com>
+
+commit 566beb347eded7a860511164a7a163bc882dc4d0 upstream.
+
+The SoC DMA resources for UDMA, BCDMA and PKTDMA can be described via a
+combination of up to two resource ranges. The first resource range handles
+the default partitioning wherein all resources belonging to that range are
+allocated to a single entity and form a continuous range. For use-cases
+where the resources are shared across multiple entities and require to be
+described via discontinuous ranges, a second resource range is required.
+
+Currently, udma_setup_resources() supports handling resources that belong
+to the second range. Extend bcdma_setup_resources() and
+pktdma_setup_resources() to support the same.
+
+Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
+Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
+Link: https://lore.kernel.org/r/20250205121805.316792-1-s-vadapalli@ti.com
+Signed-off-by: Vinod Koul <vkoul@kernel.org>
+Tested-by: Sai Sree Kartheek Adivi <s-adivi@ti.com>
+Signed-off-by: Sai Sree Kartheek Adivi <s-adivi@ti.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/dma/ti/k3-udma.c | 36 ++++++++++++++++++++++++++++++++++++
+ 1 file changed, 36 insertions(+)
+
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -4876,6 +4876,12 @@ static int bcdma_setup_resources(struct
+ irq_res.desc[i].start = rm_res->desc[i].start +
+ oes->bcdma_bchan_ring;
+ irq_res.desc[i].num = rm_res->desc[i].num;
++
++ if (rm_res->desc[i].num_sec) {
++ irq_res.desc[i].start_sec = rm_res->desc[i].start_sec +
++ oes->bcdma_bchan_ring;
++ irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
++ }
+ }
+ }
+ } else {
+@@ -4899,6 +4905,15 @@ static int bcdma_setup_resources(struct
+ irq_res.desc[i + 1].start = rm_res->desc[j].start +
+ oes->bcdma_tchan_ring;
+ irq_res.desc[i + 1].num = rm_res->desc[j].num;
++
++ if (rm_res->desc[j].num_sec) {
++ irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
++ oes->bcdma_tchan_data;
++ irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
++ irq_res.desc[i + 1].start_sec = rm_res->desc[j].start_sec +
++ oes->bcdma_tchan_ring;
++ irq_res.desc[i + 1].num_sec = rm_res->desc[j].num_sec;
++ }
+ }
+ }
+ }
+@@ -4919,6 +4934,15 @@ static int bcdma_setup_resources(struct
+ irq_res.desc[i + 1].start = rm_res->desc[j].start +
+ oes->bcdma_rchan_ring;
+ irq_res.desc[i + 1].num = rm_res->desc[j].num;
++
++ if (rm_res->desc[j].num_sec) {
++ irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
++ oes->bcdma_rchan_data;
++ irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
++ irq_res.desc[i + 1].start_sec = rm_res->desc[j].start_sec +
++ oes->bcdma_rchan_ring;
++ irq_res.desc[i + 1].num_sec = rm_res->desc[j].num_sec;
++ }
+ }
+ }
+ }
+@@ -5053,6 +5077,12 @@ static int pktdma_setup_resources(struct
+ irq_res.desc[i].start = rm_res->desc[i].start +
+ oes->pktdma_tchan_flow;
+ irq_res.desc[i].num = rm_res->desc[i].num;
++
++ if (rm_res->desc[i].num_sec) {
++ irq_res.desc[i].start_sec = rm_res->desc[i].start_sec +
++ oes->pktdma_tchan_flow;
++ irq_res.desc[i].num_sec = rm_res->desc[i].num_sec;
++ }
+ }
+ }
+ rm_res = tisci_rm->rm_ranges[RM_RANGE_RFLOW];
+@@ -5064,6 +5094,12 @@ static int pktdma_setup_resources(struct
+ irq_res.desc[i].start = rm_res->desc[j].start +
+ oes->pktdma_rchan_flow;
+ irq_res.desc[i].num = rm_res->desc[j].num;
++
++ if (rm_res->desc[j].num_sec) {
++ irq_res.desc[i].start_sec = rm_res->desc[j].start_sec +
++ oes->pktdma_rchan_flow;
++ irq_res.desc[i].num_sec = rm_res->desc[j].num_sec;
++ }
+ }
+ }
+ ret = ti_sci_inta_msi_domain_alloc_irqs(ud->dev, &irq_res);
--- /dev/null
+From 1468888505@139.com Fri Jan 23 03:37:25 2026
+From: Li hongliang <1468888505@139.com>
+Date: Fri, 23 Jan 2026 10:37:21 +0800
+Subject: exfat: fix refcount leak in exfat_find
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org, sfual@cse.ust.hk
+Cc: patches@lists.linux.dev, linux-kernel@vger.kernel.org, Yuezhang.Mo@sony.com, linkinjeon@kernel.org, sj1557.seo@samsung.com, p22gone@gmail.com, kkamagui@gmail.com, jimmyxyz010315@gmail.com, linux-fsdevel@vger.kernel.org
+Message-ID: <20260123023721.3779125-1-1468888505@139.com>
+
+From: Shuhao Fu <sfual@cse.ust.hk>
+
+[ Upstream commit 9aee8de970f18c2aaaa348e3de86c38e2d956c1d ]
+
+Fix refcount leaks in `exfat_find` related to `exfat_get_dentry_set`.
+
+Function `exfat_get_dentry_set` would increase the reference counter of
+`es->bh` on success. Therefore, `exfat_put_dentry_set` must be called
+after `exfat_get_dentry_set` to ensure refcount consistency. This patch
+relocates the two checks to avoid possible leaks.
+
+Fixes: 82ebecdc74ff ("exfat: fix improper check of dentry.stream.valid_size")
+Fixes: 13940cef9549 ("exfat: add a check for invalid data size")
+Signed-off-by: Shuhao Fu <sfual@cse.ust.hk>
+Reviewed-by: Yuezhang Mo <Yuezhang.Mo@sony.com>
+Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Li hongliang <1468888505@139.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/exfat/namei.c | 20 ++++++++++----------
+ 1 file changed, 10 insertions(+), 10 deletions(-)
+
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -638,16 +638,6 @@ static int exfat_find(struct inode *dir,
+ info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->size = le64_to_cpu(ep2->dentry.stream.size);
+
+- if (info->valid_size < 0) {
+- exfat_fs_error(sb, "data valid size is invalid(%lld)", info->valid_size);
+- return -EIO;
+- }
+-
+- if (unlikely(EXFAT_B_TO_CLU_ROUND_UP(info->size, sbi) > sbi->used_clusters)) {
+- exfat_fs_error(sb, "data size is invalid(%lld)", info->size);
+- return -EIO;
+- }
+-
+ info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu);
+ if (!is_valid_cluster(sbi, info->start_clu) && info->size) {
+ exfat_warn(sb, "start_clu is invalid cluster(0x%x)",
+@@ -685,6 +675,16 @@ static int exfat_find(struct inode *dir,
+ 0);
+ exfat_put_dentry_set(&es, false);
+
++ if (info->valid_size < 0) {
++ exfat_fs_error(sb, "data valid size is invalid(%lld)", info->valid_size);
++ return -EIO;
++ }
++
++ if (unlikely(EXFAT_B_TO_CLU_ROUND_UP(info->size, sbi) > sbi->used_clusters)) {
++ exfat_fs_error(sb, "data size is invalid(%lld)", info->size);
++ return -EIO;
++ }
++
+ if (ei->start_clu == EXFAT_FREE_CLUSTER) {
+ exfat_fs_error(sb,
+ "non-zero size file starts with zero cluster (size : %llu, p_dir : %u, entry : 0x%08x)",
--- /dev/null
+From stable+bounces-211330-greg=kroah.com@vger.kernel.org Fri Jan 23 04:43:03 2026
+From: Li hongliang <1468888505@139.com>
+Date: Fri, 23 Jan 2026 11:42:31 +0800
+Subject: fs/ntfs3: Initialize allocated memory before use
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org, kubik.bartlomiej@gmail.com
+Cc: patches@lists.linux.dev, linux-kernel@vger.kernel.org, khalid@kernel.org, almaz.alexandrovich@paragon-software.com, ntfs3@lists.linux.dev
+Message-ID: <20260123034231.3793689-1-1468888505@139.com>
+
+From: Bartlomiej Kubik <kubik.bartlomiej@gmail.com>
+
+[ Upstream commit a8a3ca23bbd9d849308a7921a049330dc6c91398 ]
+
+KMSAN reports: Multiple uninitialized values detected:
+
+- KMSAN: uninit-value in ntfs_read_hdr (3)
+- KMSAN: uninit-value in bcmp (3)
+
+Memory is allocated by __getname(), which is a wrapper for
+kmem_cache_alloc(). This memory is used before being properly
+cleared. Change kmem_cache_alloc() to kmem_cache_zalloc() to
+properly allocate and clear memory before use.
+
+Fixes: 82cae269cfa9 ("fs/ntfs3: Add initialization of super block")
+Fixes: 78ab59fee07f ("fs/ntfs3: Rework file operations")
+Tested-by: syzbot+332bd4e9d148f11a87dc@syzkaller.appspotmail.com
+Reported-by: syzbot+332bd4e9d148f11a87dc@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=332bd4e9d148f11a87dc
+
+Fixes: 82cae269cfa9 ("fs/ntfs3: Add initialization of super block")
+Fixes: 78ab59fee07f ("fs/ntfs3: Rework file operations")
+Tested-by: syzbot+0399100e525dd9696764@syzkaller.appspotmail.com
+Reported-by: syzbot+0399100e525dd9696764@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=0399100e525dd9696764
+
+Reviewed-by: Khalid Aziz <khalid@kernel.org>
+Signed-off-by: Bartlomiej Kubik <kubik.bartlomiej@gmail.com>
+Signed-off-by: Konstantin Komarov <almaz.alexandrovich@paragon-software.com>
+Signed-off-by: Li hongliang <1468888505@139.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ntfs3/inode.c | 7 +++----
+ 1 file changed, 3 insertions(+), 4 deletions(-)
+
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -1301,7 +1301,7 @@ int ntfs_create_inode(struct mnt_idmap *
+ fa |= FILE_ATTRIBUTE_READONLY;
+
+ /* Allocate PATH_MAX bytes. */
+- new_de = __getname();
++ new_de = kmem_cache_zalloc(names_cachep, GFP_KERNEL);
+ if (!new_de) {
+ err = -ENOMEM;
+ goto out1;
+@@ -1734,10 +1734,9 @@ int ntfs_link_inode(struct inode *inode,
+ struct NTFS_DE *de;
+
+ /* Allocate PATH_MAX bytes. */
+- de = __getname();
++ de = kmem_cache_zalloc(names_cachep, GFP_KERNEL);
+ if (!de)
+ return -ENOMEM;
+- memset(de, 0, PATH_MAX);
+
+ /* Mark rw ntfs as dirty. It will be cleared at umount. */
+ ntfs_set_state(sbi, NTFS_DIRTY_DIRTY);
+@@ -1773,7 +1772,7 @@ int ntfs_unlink_inode(struct inode *dir,
+ return -EINVAL;
+
+ /* Allocate PATH_MAX bytes. */
+- de = __getname();
++ de = kmem_cache_zalloc(names_cachep, GFP_KERNEL);
+ if (!de)
+ return -ENOMEM;
+
--- /dev/null
+From stable+bounces-211877-greg=kroah.com@vger.kernel.org Tue Jan 27 18:51:40 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 27 Jan 2026 12:51:29 -0500
+Subject: iio: adc: exynos_adc: fix OF populate on driver rebind
+To: stable@vger.kernel.org
+Cc: Johan Hovold <johan@kernel.org>, Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>, Jonathan Cameron <Jonathan.Cameron@huawei.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260127175129.2025249-1-sashal@kernel.org>
+
+From: Johan Hovold <johan@kernel.org>
+
+[ Upstream commit ea6b4feba85e996e840e0b661bc42793df6eb701 ]
+
+Since commit c6e126de43e7 ("of: Keep track of populated platform
+devices") child devices will not be created by of_platform_populate()
+if the devices had previously been deregistered individually so that the
+OF_POPULATED flag is still set in the corresponding OF nodes.
+
+Switch to using of_platform_depopulate() instead of open coding so that
+the child devices are created if the driver is rebound.
+
+Fixes: c6e126de43e7 ("of: Keep track of populated platform devices")
+Cc: stable@vger.kernel.org # 3.16
+Signed-off-by: Johan Hovold <johan@kernel.org>
+Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+[ Adjust context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/adc/exynos_adc.c | 13 ++-----------
+ 1 file changed, 2 insertions(+), 11 deletions(-)
+
+--- a/drivers/iio/adc/exynos_adc.c
++++ b/drivers/iio/adc/exynos_adc.c
+@@ -721,14 +721,7 @@ static const struct iio_chan_spec exynos
+ ADC_CHANNEL(9, "adc9"),
+ };
+
+-static int exynos_adc_remove_devices(struct device *dev, void *c)
+-{
+- struct platform_device *pdev = to_platform_device(dev);
+-
+- platform_device_unregister(pdev);
+
+- return 0;
+-}
+
+ static int exynos_adc_ts_open(struct input_dev *dev)
+ {
+@@ -929,8 +922,7 @@ static int exynos_adc_probe(struct platf
+ return 0;
+
+ err_of_populate:
+- device_for_each_child(&indio_dev->dev, NULL,
+- exynos_adc_remove_devices);
++ of_platform_depopulate(&indio_dev->dev);
+ if (has_ts) {
+ input_unregister_device(info->input);
+ free_irq(info->tsirq, info);
+@@ -959,8 +951,7 @@ static void exynos_adc_remove(struct pla
+ free_irq(info->tsirq, info);
+ input_unregister_device(info->input);
+ }
+- device_for_each_child(&indio_dev->dev, NULL,
+- exynos_adc_remove_devices);
++ of_platform_depopulate(&indio_dev->dev);
+ iio_device_unregister(indio_dev);
+ free_irq(info->irq, info);
+ if (info->data->exit_hw)
--- /dev/null
+From stable+bounces-211657-greg=kroah.com@vger.kernel.org Mon Jan 26 18:06:47 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 26 Jan 2026 12:04:11 -0500
+Subject: iio: core: add missing mutex_destroy in iio_dev_release()
+To: stable@vger.kernel.org
+Cc: "Andy Shevchenko" <andriy.shevchenko@linux.intel.com>, "Nuno Sá" <nuno.sa@analog.com>, "Jonathan Cameron" <Jonathan.Cameron@huawei.com>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260126170413.3418184-1-sashal@kernel.org>
+
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+
+[ Upstream commit f5d203467a31798191365efeb16cd619d2c8f23a ]
+
+Add missing mutex_destroy() call in iio_dev_release() to properly
+clean up the mutex initialized in iio_device_alloc(). This ensures
+proper resource cleanup and follows kernel practices.
+
+Found by code review.
+
+While at it, create a lockdep key before mutex initialisation.
+This will help with converting it to the better API in the future.
+
+Fixes: 847ec80bbaa7 ("Staging: IIO: core support for device registration and management")
+Fixes: ac917a81117c ("staging:iio:core set the iio_dev.info pointer to null on unregister under lock.")
+Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Reviewed-by: Nuno Sá <nuno.sa@analog.com>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Stable-dep-of: 9910159f0659 ("iio: core: add separate lockdep class for info_exist_lock")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/industrialio-core.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1628,6 +1628,9 @@ static void iio_dev_release(struct devic
+
+ iio_device_detach_buffers(indio_dev);
+
++ mutex_destroy(&iio_dev_opaque->info_exist_lock);
++ mutex_destroy(&iio_dev_opaque->mlock);
++
+ lockdep_unregister_key(&iio_dev_opaque->mlock_key);
+
+ ida_free(&iio_ida, iio_dev_opaque->id);
+@@ -1672,8 +1675,7 @@ struct iio_dev *iio_device_alloc(struct
+ indio_dev->dev.type = &iio_device_type;
+ indio_dev->dev.bus = &iio_bus_type;
+ device_initialize(&indio_dev->dev);
+- mutex_init(&iio_dev_opaque->mlock);
+- mutex_init(&iio_dev_opaque->info_exist_lock);
++
+ INIT_LIST_HEAD(&iio_dev_opaque->channel_attr_list);
+
+ iio_dev_opaque->id = ida_alloc(&iio_ida, GFP_KERNEL);
+@@ -1696,6 +1698,9 @@ struct iio_dev *iio_device_alloc(struct
+ lockdep_register_key(&iio_dev_opaque->mlock_key);
+ lockdep_set_class(&iio_dev_opaque->mlock, &iio_dev_opaque->mlock_key);
+
++ mutex_init(&iio_dev_opaque->mlock);
++ mutex_init(&iio_dev_opaque->info_exist_lock);
++
+ return indio_dev;
+ }
+ EXPORT_SYMBOL(iio_device_alloc);
--- /dev/null
+From stable+bounces-211659-greg=kroah.com@vger.kernel.org Mon Jan 26 18:06:55 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 26 Jan 2026 12:04:13 -0500
+Subject: iio: core: add separate lockdep class for info_exist_lock
+To: stable@vger.kernel.org
+Cc: Rasmus Villemoes <ravi@prevas.dk>, Peter Rosin <peda@axentia.se>, Jonathan Cameron <Jonathan.Cameron@huawei.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260126170413.3418184-3-sashal@kernel.org>
+
+From: Rasmus Villemoes <ravi@prevas.dk>
+
+[ Upstream commit 9910159f06590c17df4fbddedaabb4c0201cc4cb ]
+
+When one iio device is a consumer of another, it is possible that
+the ->info_exist_lock of both ends up being taken when reading the
+value of the consumer device.
+
+Since they currently belong to the same lockdep class (being
+initialized in a single location with mutex_init()), that results in a
+lockdep warning
+
+ CPU0
+ ----
+ lock(&iio_dev_opaque->info_exist_lock);
+ lock(&iio_dev_opaque->info_exist_lock);
+
+ *** DEADLOCK ***
+
+ May be due to missing lock nesting notation
+
+ 4 locks held by sensors/414:
+ #0: c31fd6dc (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0x44/0x4e4
+ #1: c4f5a1c4 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x1c/0xac
+ #2: c2827548 (kn->active#34){.+.+}-{0:0}, at: kernfs_seq_start+0x30/0xac
+ #3: c1dd2b68 (&iio_dev_opaque->info_exist_lock){+.+.}-{3:3}, at: iio_read_channel_processed_scale+0x24/0xd8
+
+ stack backtrace:
+ CPU: 0 UID: 0 PID: 414 Comm: sensors Not tainted 6.17.11 #5 NONE
+ Hardware name: Generic AM33XX (Flattened Device Tree)
+ Call trace:
+ unwind_backtrace from show_stack+0x10/0x14
+ show_stack from dump_stack_lvl+0x44/0x60
+ dump_stack_lvl from print_deadlock_bug+0x2b8/0x334
+ print_deadlock_bug from __lock_acquire+0x13a4/0x2ab0
+ __lock_acquire from lock_acquire+0xd0/0x2c0
+ lock_acquire from __mutex_lock+0xa0/0xe8c
+ __mutex_lock from mutex_lock_nested+0x1c/0x24
+ mutex_lock_nested from iio_read_channel_raw+0x20/0x6c
+ iio_read_channel_raw from rescale_read_raw+0x128/0x1c4
+ rescale_read_raw from iio_channel_read+0xe4/0xf4
+ iio_channel_read from iio_read_channel_processed_scale+0x6c/0xd8
+ iio_read_channel_processed_scale from iio_hwmon_read_val+0x68/0xbc
+ iio_hwmon_read_val from dev_attr_show+0x18/0x48
+ dev_attr_show from sysfs_kf_seq_show+0x80/0x110
+ sysfs_kf_seq_show from seq_read_iter+0xdc/0x4e4
+ seq_read_iter from vfs_read+0x238/0x2e4
+ vfs_read from ksys_read+0x6c/0xec
+ ksys_read from ret_fast_syscall+0x0/0x1c
+
+Just as the mlock_key already has its own lockdep class, add a
+lock_class_key for the info_exist mutex.
+
+Note that this has in theory been a problem since before IIO first
+left staging, but it only occurs when a chain of consumers is in use
+and that is not often done.
+
+Fixes: ac917a81117c ("staging:iio:core set the iio_dev.info pointer to null on unregister under lock.")
+Signed-off-by: Rasmus Villemoes <ravi@prevas.dk>
+Reviewed-by: Peter Rosin <peda@axentia.se>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/industrialio-core.c | 4 +++-
+ include/linux/iio/iio-opaque.h | 2 ++
+ 2 files changed, 5 insertions(+), 1 deletion(-)
+
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1631,6 +1631,7 @@ static void iio_dev_release(struct devic
+ mutex_destroy(&iio_dev_opaque->info_exist_lock);
+ mutex_destroy(&iio_dev_opaque->mlock);
+
++ lockdep_unregister_key(&iio_dev_opaque->info_exist_key);
+ lockdep_unregister_key(&iio_dev_opaque->mlock_key);
+
+ ida_free(&iio_ida, iio_dev_opaque->id);
+@@ -1696,9 +1697,10 @@ struct iio_dev *iio_device_alloc(struct
+ INIT_LIST_HEAD(&iio_dev_opaque->ioctl_handlers);
+
+ lockdep_register_key(&iio_dev_opaque->mlock_key);
++ lockdep_register_key(&iio_dev_opaque->info_exist_key);
+
+ mutex_init_with_key(&iio_dev_opaque->mlock, &iio_dev_opaque->mlock_key);
+- mutex_init(&iio_dev_opaque->info_exist_lock);
++ mutex_init_with_key(&iio_dev_opaque->info_exist_lock, &iio_dev_opaque->info_exist_key);
+
+ return indio_dev;
+ }
+--- a/include/linux/iio/iio-opaque.h
++++ b/include/linux/iio/iio-opaque.h
+@@ -14,6 +14,7 @@
+ * @mlock: lock used to prevent simultaneous device state changes
+ * @mlock_key: lockdep class for iio_dev lock
+ * @info_exist_lock: lock to prevent use during removal
++ * @info_exist_key: lockdep class for info_exist lock
+ * @trig_readonly: mark the current trigger immutable
+ * @event_interface: event chrdevs associated with interrupt lines
+ * @attached_buffers: array of buffers statically attached by the driver
+@@ -47,6 +48,7 @@ struct iio_dev_opaque {
+ struct mutex mlock;
+ struct lock_class_key mlock_key;
+ struct mutex info_exist_lock;
++ struct lock_class_key info_exist_key;
+ bool trig_readonly;
+ struct iio_event_interface *event_interface;
+ struct iio_buffer **attached_buffers;
--- /dev/null
+From stable+bounces-211658-greg=kroah.com@vger.kernel.org Mon Jan 26 18:06:51 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 26 Jan 2026 12:04:12 -0500
+Subject: iio: core: Replace lockdep_set_class() + mutex_init() by combined call
+To: stable@vger.kernel.org
+Cc: "Andy Shevchenko" <andriy.shevchenko@linux.intel.com>, "Nuno Sá" <nuno.sa@analog.com>, "Jonathan Cameron" <Jonathan.Cameron@huawei.com>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260126170413.3418184-2-sashal@kernel.org>
+
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+
+[ Upstream commit c76ba4b2644424b8dbacee80bb40991eac29d39e ]
+
+Replace lockdep_set_class() + mutex_init() by combined call
+mutex_init_with_key().
+
+Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Reviewed-by: Nuno Sá <nuno.sa@analog.com>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Stable-dep-of: 9910159f0659 ("iio: core: add separate lockdep class for info_exist_lock")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/industrialio-core.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -1696,9 +1696,8 @@ struct iio_dev *iio_device_alloc(struct
+ INIT_LIST_HEAD(&iio_dev_opaque->ioctl_handlers);
+
+ lockdep_register_key(&iio_dev_opaque->mlock_key);
+- lockdep_set_class(&iio_dev_opaque->mlock, &iio_dev_opaque->mlock_key);
+
+- mutex_init(&iio_dev_opaque->mlock);
++ mutex_init_with_key(&iio_dev_opaque->mlock, &iio_dev_opaque->mlock_key);
+ mutex_init(&iio_dev_opaque->info_exist_lock);
+
+ return indio_dev;
--- /dev/null
+From b7880cb166ab62c2409046b2347261abf701530e Mon Sep 17 00:00:00 2001
+From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
+Date: Fri, 9 Jan 2026 04:13:42 +0000
+Subject: migrate: correct lock ordering for hugetlb file folios
+
+From: Matthew Wilcox (Oracle) <willy@infradead.org>
+
+commit b7880cb166ab62c2409046b2347261abf701530e upstream.
+
+Syzbot has found a deadlock (analyzed by Lance Yang):
+
+1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem(read lock).
+2) Task (5754): Holds i_mmap_rwsem(write lock), then tries to acquire
+folio_lock.
+
+migrate_pages()
+ -> migrate_hugetlbs()
+ -> unmap_and_move_huge_page() <- Takes folio_lock!
+ -> remove_migration_ptes()
+ -> __rmap_walk_file()
+ -> i_mmap_lock_read() <- Waits for i_mmap_rwsem(read lock)!
+
+hugetlbfs_fallocate()
+ -> hugetlbfs_punch_hole() <- Takes i_mmap_rwsem(write lock)!
+ -> hugetlbfs_zero_partial_page()
+ -> filemap_lock_hugetlb_folio()
+ -> filemap_lock_folio()
+ -> __filemap_get_folio <- Waits for folio_lock!
+
+The migration path is the one taking locks in the wrong order according to
+the documentation at the top of mm/rmap.c. So expand the scope of the
+existing i_mmap_lock to cover the calls to remove_migration_ptes() too.
+
+This is (mostly) how it used to be after commit c0d0381ade79. That was
+removed by 336bf30eb765 for both file & anon hugetlb pages when it should
+only have been removed for anon hugetlb pages.
+
+Link: https://lkml.kernel.org/r/20260109041345.3863089-2-willy@infradead.org
+Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
+Fixes: 336bf30eb765 ("hugetlbfs: fix anon huge page migration race")
+Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
+Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
+Debugged-by: Lance Yang <lance.yang@linux.dev>
+Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
+Acked-by: Zi Yan <ziy@nvidia.com>
+Cc: Alistair Popple <apopple@nvidia.com>
+Cc: Byungchul Park <byungchul@sk.com>
+Cc: Gregory Price <gourry@gourry.net>
+Cc: Jann Horn <jannh@google.com>
+Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
+Cc: Liam Howlett <liam.howlett@oracle.com>
+Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+Cc: Matthew Brost <matthew.brost@intel.com>
+Cc: Rakie Kim <rakie.kim@sk.com>
+Cc: Rik van Riel <riel@surriel.com>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: Ying Huang <ying.huang@linux.alibaba.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/migrate.c | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -1439,6 +1439,7 @@ static int unmap_and_move_huge_page(new_
+ int page_was_mapped = 0;
+ struct anon_vma *anon_vma = NULL;
+ struct address_space *mapping = NULL;
++ enum ttu_flags ttu = 0;
+
+ if (folio_ref_count(src) == 1) {
+ /* page was freed from under us. So we are done. */
+@@ -1479,8 +1480,6 @@ static int unmap_and_move_huge_page(new_
+ goto put_anon;
+
+ if (folio_mapped(src)) {
+- enum ttu_flags ttu = 0;
+-
+ if (!folio_test_anon(src)) {
+ /*
+ * In shared mappings, try_to_unmap could potentially
+@@ -1497,9 +1496,6 @@ static int unmap_and_move_huge_page(new_
+
+ try_to_migrate(src, ttu);
+ page_was_mapped = 1;
+-
+- if (ttu & TTU_RMAP_LOCKED)
+- i_mmap_unlock_write(mapping);
+ }
+
+ if (!folio_mapped(src))
+@@ -1507,7 +1503,11 @@ static int unmap_and_move_huge_page(new_
+
+ if (page_was_mapped)
+ remove_migration_ptes(src,
+- rc == MIGRATEPAGE_SUCCESS ? dst : src, 0);
++ rc == MIGRATEPAGE_SUCCESS ? dst : src,
++ ttu ? RMP_LOCKED : 0);
++
++ if (ttu & TTU_RMAP_LOCKED)
++ i_mmap_unlock_write(mapping);
+
+ unlock_put_anon:
+ folio_unlock(dst);
--- /dev/null
+From stable+bounces-211327-greg=kroah.com@vger.kernel.org Fri Jan 23 03:55:45 2026
+From: Chen Yu <xnguchen@sina.cn>
+Date: Fri, 23 Jan 2026 10:52:25 +0800
+Subject: sched_ext: Fix possible deadlock in the deferred_irq_workfn()
+To: qiang.zhang@linux.dev, tj@kernel.org
+Cc: stable@vger.kernel.org
+Message-ID: <20260123025225.2132-1-xnguchen@sina.cn>
+
+From: Zqiang <qiang.zhang@linux.dev>
+
+[ Upstream commit a257e974210320ede524f340ffe16bf4bf0dda1e ]
+
+For PREEMPT_RT=y kernels, deferred_irq_workfn() is executed in the
+per-cpu irq_work/* task context with interrupts enabled. If the rq
+returned by container_of() is the current CPU's rq, the following
+scenario may occur:
+
+lock(&rq->__lock);
+<Interrupt>
+  lock(&rq->__lock);
+
+This commit uses IRQ_WORK_INIT_HARD() instead of init_irq_work() to
+initialize rq->scx.deferred_irq_work, so that deferred_irq_workfn()
+is always invoked in hard-irq context.
+
+Signed-off-by: Zqiang <qiang.zhang@linux.dev>
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Chen Yu <xnguchen@sina.cn>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/sched/ext.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -6044,7 +6044,7 @@ void __init init_sched_ext_class(void)
+ BUG_ON(!zalloc_cpumask_var(&rq->scx.cpus_to_kick_if_idle, GFP_KERNEL));
+ BUG_ON(!zalloc_cpumask_var(&rq->scx.cpus_to_preempt, GFP_KERNEL));
+ BUG_ON(!zalloc_cpumask_var(&rq->scx.cpus_to_wait, GFP_KERNEL));
+- init_irq_work(&rq->scx.deferred_irq_work, deferred_irq_workfn);
++ rq->scx.deferred_irq_work = IRQ_WORK_INIT_HARD(deferred_irq_workfn);
+ init_irq_work(&rq->scx.kick_cpus_irq_work, kick_cpus_irq_workfn);
+
+ if (cpu_online(cpu))
--- /dev/null
+From e6c209da7e0e9aaf955a7b59e91ed78c2b6c96fb Mon Sep 17 00:00:00 2001
+From: Ihor Solodrai <ihor.solodrai@pm.me>
+Date: Fri, 11 Oct 2024 15:31:07 +0000
+Subject: selftests/bpf: Check for timeout in perf_link test
+
+From: Ihor Solodrai <ihor.solodrai@pm.me>
+
+commit e6c209da7e0e9aaf955a7b59e91ed78c2b6c96fb upstream.
+
+Recently perf_link test started unreliably failing on libbpf CI:
+ * https://github.com/libbpf/libbpf/actions/runs/11260672407/job/31312405473
+ * https://github.com/libbpf/libbpf/actions/runs/11260992334/job/31315514626
+ * https://github.com/libbpf/libbpf/actions/runs/11263162459/job/31320458251
+
+Part of the test is running a dummy loop for a while and then checking
+for a counter incremented by the test program.
+
+Instead of waiting for an arbitrary number of loop iterations once,
+check for the test counter in a loop and use get_time_ns() helper to
+enforce a 100ms timeout.
+
+v1: https://lore.kernel.org/bpf/zuRd072x9tumn2iN4wDNs5av0nu5nekMNV4PkR-YwCT10eFFTrUtZBRkLWFbrcCe7guvLStGQlhibo8qWojCO7i2-NGajes5GYIyynexD-w=@pm.me/
+
+Signed-off-by: Ihor Solodrai <ihor.solodrai@pm.me>
+Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
+Link: https://lore.kernel.org/bpf/20241011153104.249800-1-ihor.solodrai@pm.me
+Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/prog_tests/perf_link.c | 15 +++++++++++++--
+ 1 file changed, 13 insertions(+), 2 deletions(-)
+
+--- a/tools/testing/selftests/bpf/prog_tests/perf_link.c
++++ b/tools/testing/selftests/bpf/prog_tests/perf_link.c
+@@ -4,8 +4,12 @@
+ #include <pthread.h>
+ #include <sched.h>
+ #include <test_progs.h>
++#include "testing_helpers.h"
+ #include "test_perf_link.skel.h"
+
++#define BURN_TIMEOUT_MS 100
++#define BURN_TIMEOUT_NS BURN_TIMEOUT_MS * 1000000
++
+ static void burn_cpu(void)
+ {
+ volatile int j = 0;
+@@ -32,6 +36,7 @@ void serial_test_perf_link(void)
+ int run_cnt_before, run_cnt_after;
+ struct bpf_link_info info;
+ __u32 info_len = sizeof(info);
++ __u64 timeout_time_ns;
+
+ /* create perf event */
+ memset(&attr, 0, sizeof(attr));
+@@ -63,8 +68,14 @@ void serial_test_perf_link(void)
+ ASSERT_GT(info.prog_id, 0, "link_prog_id");
+
+ /* ensure we get at least one perf_event prog execution */
+- burn_cpu();
+- ASSERT_GT(skel->bss->run_cnt, 0, "run_cnt");
++ timeout_time_ns = get_time_ns() + BURN_TIMEOUT_NS;
++ while (true) {
++ burn_cpu();
++ if (skel->bss->run_cnt > 0)
++ break;
++ if (!ASSERT_LT(get_time_ns(), timeout_time_ns, "run_cnt_timeout"))
++ break;
++ }
+
+ /* perf_event is still active, but we close link and BPF program
+ * shouldn't be executed anymore
can-usb_8dev-usb_8dev_read_bulk_callback-fix-urb-memory-leak.patch
drm-amdgpu-remove-frame-cntl-for-gfx-v12.patch
gpio-cdev-correct-return-code-on-memory-allocation-failure.patch
+migrate-correct-lock-ordering-for-hugetlb-file-folios.patch
+dmaengine-ti-k3-udma-enable-second-resource-range-for-bcdma-and-pktdma.patch
+can-esd_usb-esd_usb_read_bulk_callback-fix-urb-memory-leak.patch
+selftests-bpf-check-for-timeout-in-perf_link-test.patch
+bpf-do-not-let-bpf-test-infra-emit-invalid-gso-types-to-stack.patch
+arm64-dts-rockchip-remove-redundant-max-link-speed-from-nanopi-r4s.patch
+iio-core-add-missing-mutex_destroy-in-iio_dev_release.patch
+iio-core-replace-lockdep_set_class-mutex_init-by-combined-call.patch
+iio-core-add-separate-lockdep-class-for-info_exist_lock.patch
+iio-adc-exynos_adc-fix-of-populate-on-driver-rebind.patch
+exfat-fix-refcount-leak-in-exfat_find.patch
+sched_ext-fix-possible-deadlock-in-the-deferred_irq_workfn.patch
+fs-ntfs3-initialize-allocated-memory-before-use.patch
+accel-ivpu-fix-race-condition-when-unbinding-bos.patch
+btrfs-fix-racy-bitfield-write-in-btrfs_clear_space_info_full.patch
+wifi-ath11k-fix-rcu-stall-while-reaping-monitor-destination-ring.patch
--- /dev/null
+From stable+bounces-211920-greg=kroah.com@vger.kernel.org Wed Jan 28 04:27:28 2026
+From: Li hongliang <1468888505@139.com>
+Date: Wed, 28 Jan 2026 11:26:57 +0800
+Subject: wifi: ath11k: fix RCU stall while reaping monitor destination ring
+To: gregkh@linuxfoundation.org, stable@vger.kernel.org, quic_ppranees@quicinc.com
+Cc: patches@lists.linux.dev, linux-kernel@vger.kernel.org, quic_kangyang@quicinc.com, kvalo@kernel.org, quic_jjohnson@quicinc.com, jeff.johnson@oss.qualcomm.com, jjohnson@kernel.org, quic_msinada@quicinc.com, rmanohar@codeaurora.org, julia.lawall@lip6.fr, quic_pradeepc@quicinc.com, linux-wireless@vger.kernel.org, ath11k@lists.infradead.org
+Message-ID: <20260128032657.1183323-1-1468888505@139.com>
+
+From: P Praneesh <quic_ppranees@quicinc.com>
+
+[ Upstream commit 16c6c35c03ea73054a1f6d3302a4ce4a331b427d ]
+
+While processing the monitor destination ring, MSDUs are reaped from the
+link descriptor based on the corresponding buf_id.
+
+However, sometimes the driver cannot obtain a valid buffer corresponding
+to the buf_id received from the hardware. This causes an infinite loop
+in the destination processing, resulting in a kernel crash.
+
+kernel log:
+ath11k_pci 0000:58:00.0: data msdu_pop: invalid buf_id 309
+ath11k_pci 0000:58:00.0: data dp_rx_monitor_link_desc_return failed
+ath11k_pci 0000:58:00.0: data msdu_pop: invalid buf_id 309
+ath11k_pci 0000:58:00.0: data dp_rx_monitor_link_desc_return failed
+
+Fix this by skipping the problematic buf_id and reaping the next entry,
+replacing the break with the next MSDU processing.
+
+Tested-on: WCN6855 hw2.0 PCI WLAN.HSP.1.1-03125-QCAHSPSWPL_V1_V2_SILICONZ_LITE-3.6510.30
+Tested-on: QCN9074 hw1.0 PCI WLAN.HK.2.7.0.1-01744-QCAHKSWPL_SILICONZ-1
+
+Fixes: d5c65159f289 ("ath11k: driver for Qualcomm IEEE 802.11ax devices")
+Signed-off-by: P Praneesh <quic_ppranees@quicinc.com>
+Signed-off-by: Kang Yang <quic_kangyang@quicinc.com>
+Acked-by: Kalle Valo <kvalo@kernel.org>
+Acked-by: Jeff Johnson <quic_jjohnson@quicinc.com>
+Link: https://patch.msgid.link/20241219110531.2096-2-quic_kangyang@quicinc.com
+Signed-off-by: Jeff Johnson <jeff.johnson@oss.qualcomm.com>
+Signed-off-by: Li hongliang <1468888505@139.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/ath/ath11k/dp_rx.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -4777,7 +4777,7 @@ ath11k_dp_rx_mon_mpdu_pop(struct ath11k
+ if (!msdu) {
+ ath11k_dbg(ar->ab, ATH11K_DBG_DATA,
+ "msdu_pop: invalid buf_id %d\n", buf_id);
+- break;
++ goto next_msdu;
+ }
+ rxcb = ATH11K_SKB_RXCB(msdu);
+ if (!rxcb->unmapped) {
+@@ -5404,7 +5404,7 @@ ath11k_dp_rx_full_mon_mpdu_pop(struct at
+ "full mon msdu_pop: invalid buf_id %d\n",
+ buf_id);
+ spin_unlock_bh(&rx_ring->idr_lock);
+- break;
++ goto next_msdu;
+ }
+ idr_remove(&rx_ring->bufs_idr, buf_id);
+ spin_unlock_bh(&rx_ring->idr_lock);