--- /dev/null
+From stable+bounces-164967-greg=kroah.com@vger.kernel.org Mon Jul 28 16:37:03 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 28 Jul 2025 10:35:52 -0400
+Subject: ARM: 9448/1: Use an absolute path to unified.h in KBUILD_AFLAGS
+To: stable@vger.kernel.org
+Cc: Nathan Chancellor <nathan@kernel.org>, KernelCI bot <bot@kernelci.org>, Masahiro Yamada <masahiroy@kernel.org>, Russell King <rmk+kernel@armlinux.org.uk>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250728143552.2337910-1-sashal@kernel.org>
+
+From: Nathan Chancellor <nathan@kernel.org>
+
+[ Upstream commit 87c4e1459e80bf65066f864c762ef4dc932fad4b ]
+
+After commit d5c8d6e0fa61 ("kbuild: Update assembler calls to use proper
+flags and language target"), which updated as-instr to use the
+'assembler-with-cpp' language option, the Kbuild version of as-instr
+always fails internally for arch/arm with
+
+ <command-line>: fatal error: asm/unified.h: No such file or directory
+ compilation terminated.
+
+because '-include' flags are now taken into account by the compiler
+driver and as-instr does not have '$(LINUXINCLUDE)', so unified.h is not
+found.
+
+This went unnoticed at the time of the Kbuild change because the last
+use of as-instr in Kbuild that arch/arm could reach was removed in 5.7
+by commit 541ad0150ca4 ("arm: Remove 32bit KVM host support"), but a
+stable backport of the Kbuild change to before that point exposed this
+potential issue, should such a use ever be reintroduced.
+
+Follow the general pattern of '-include' paths throughout the tree and
+make unified.h absolute using '$(srctree)' to ensure KBUILD_AFLAGS can
+be used independently.
+
+Closes: https://lore.kernel.org/CACo-S-1qbCX4WAVFA63dWfHtrRHZBTyyr2js8Lx=Az03XHTTHg@mail.gmail.com/
+
+Cc: stable@vger.kernel.org
+Fixes: d5c8d6e0fa61 ("kbuild: Update assembler calls to use proper flags and language target")
+Reported-by: KernelCI bot <bot@kernelci.org>
+Reviewed-by: Masahiro Yamada <masahiroy@kernel.org>
+Signed-off-by: Nathan Chancellor <nathan@kernel.org>
+Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
+[ No KBUILD_RUSTFLAGS in <=6.12 ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arm/Makefile
++++ b/arch/arm/Makefile
+@@ -142,7 +142,7 @@ endif
+ # Need -Uarm for gcc < 3.x
+ KBUILD_CPPFLAGS +=$(cpp-y)
+ KBUILD_CFLAGS +=$(CFLAGS_ABI) $(CFLAGS_ISA) $(arch-y) $(tune-y) $(call cc-option,-mshort-load-bytes,$(call cc-option,-malignment-traps,)) -msoft-float -Uarm
+-KBUILD_AFLAGS +=$(CFLAGS_ABI) $(AFLAGS_ISA) -Wa,$(arch-y) $(tune-y) -include asm/unified.h -msoft-float
++KBUILD_AFLAGS +=$(CFLAGS_ABI) $(AFLAGS_ISA) -Wa,$(arch-y) $(tune-y) -include $(srctree)/arch/arm/include/asm/unified.h -msoft-float
+
+ CHECKFLAGS += -D__arm__
+
--- /dev/null
+From e8cde32f111f7f5681a7bad3ec747e9e697569a9 Mon Sep 17 00:00:00 2001
+From: Nianyao Tang <tangnianyao@huawei.com>
+Date: Tue, 11 Jun 2024 12:20:49 +0000
+Subject: arm64/cpufeatures/kvm: Add ARMv8.9 FEAT_ECBHB bits in ID_AA64MMFR1 register
+
+From: Nianyao Tang <tangnianyao@huawei.com>
+
+commit e8cde32f111f7f5681a7bad3ec747e9e697569a9 upstream.
+
+Enable the ECBHB bits in the ID_AA64MMFR1 register, as per the ARM DDI
+0487K.a specification.
+
+When a guest OS reads ID_AA64MMFR1_EL1, KVM emulates the register using
+ftr_id_aa64mmfr1 and always returns ID_AA64MMFR1_EL1.ECBHB=0 to the
+guest. As a result, guest syscalls jump through the tramp ventry, which
+is not needed on implementations with ID_AA64MMFR1_EL1.ECBHB=1.
+Make the guest syscall path behave the same as the host's.
+
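+As an illustrative sketch (not part of this patch), this is how the
+now-exposed field can be read from the sanitised register, using the
+existing cpufeature helpers read_sanitised_ftr_reg() and
+cpuid_feature_extract_unsigned_field():
+
+	u64 mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+	/* non-zero once ftr_id_aa64mmfr1 stops masking ECBHB to 0 */
+	unsigned int ecbhb = cpuid_feature_extract_unsigned_field(mmfr1,
+					ID_AA64MMFR1_EL1_ECBHB_SHIFT);
+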
+Signed-off-by: Nianyao Tang <tangnianyao@huawei.com>
+Link: https://lore.kernel.org/r/20240611122049.2758600-1-tangnianyao@huawei.com
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+[ This fixes performance regressions introduced by commit 4117975672c4
+ ("arm64: errata: Add newer ARM cores to the
+ spectre_bhb_loop_affected() lists") for guests running on neoverse v2
+ hardware, which supports ECBHB. ]
+Signed-off-by: Patrick Roy <roypat@amazon.co.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/kernel/cpufeature.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -364,6 +364,7 @@ static const struct arm64_ftr_bits ftr_i
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
++ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_ECBHB_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_TIDCP1_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_AFP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_EL1_HCX_SHIFT, 4, 0),
--- /dev/null
+From stable+bounces-165093-greg=kroah.com@vger.kernel.org Tue Jul 29 17:42:37 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 29 Jul 2025 11:40:45 -0400
+Subject: drm/sched: Remove optimization that causes hang when killing dependent jobs
+To: stable@vger.kernel.org
+Cc: "Lin.Cao" <lincao12@amd.com>, "Christian König" <christian.koenig@amd.com>, "Philipp Stanner" <phasta@kernel.org>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20250729154045.2736146-1-sashal@kernel.org>
+
+From: "Lin.Cao" <lincao12@amd.com>
+
+[ Upstream commit 15f77764e90a713ee3916ca424757688e4f565b9 ]
+
+When application A submits jobs and application B submits a job with a
+dependency on A's fence, the normal flow wakes up the scheduler after
+processing each job. However, the optimization in
+drm_sched_entity_add_dependency_cb() uses a callback that only clears
+dependencies without waking up the scheduler.
+
+When application A is killed before its jobs can run, the callback gets
+triggered but only clears the dependency without waking up the scheduler,
+causing the scheduler to enter sleep state and application B to hang.
+
+Remove the optimization by deleting drm_sched_entity_clear_dep() and its
+usage, ensuring the scheduler is always woken up when dependencies are
+cleared.
+
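+For reference, a minimal sketch of the callback arming logic this patch
+converges on (the return-value convention is that of
+dma_fence_add_callback(), which returns 0 once the callback is armed):
+
+	if (!dma_fence_add_callback(entity->dependency, &entity->cb,
+				    drm_sched_entity_wakeup))
+		return true;	/* armed: the callback will wake the scheduler */
+
+	/* fence already signalled: drop the reference, no sleep needed */
+	dma_fence_put(entity->dependency);
+	return false;
+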
+Fixes: 777dbd458c89 ("drm/amdgpu: drop a dummy wakeup scheduler")
+Cc: stable@vger.kernel.org # v4.6+
+Signed-off-by: Lin.Cao <lincao12@amd.com>
+Reviewed-by: Christian König <christian.koenig@amd.com>
+Signed-off-by: Philipp Stanner <phasta@kernel.org>
+Link: https://lore.kernel.org/r/20250717084453.921097-1-lincao12@amd.com
+[ replaced drm_sched_wakeup() calls with drm_sched_wakeup_if_can_queue() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/scheduler/sched_entity.c | 25 ++++---------------------
+ 1 file changed, 4 insertions(+), 21 deletions(-)
+
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -346,20 +346,9 @@ void drm_sched_entity_destroy(struct drm
+ }
+ EXPORT_SYMBOL(drm_sched_entity_destroy);
+
+-/* drm_sched_entity_clear_dep - callback to clear the entities dependency */
+-static void drm_sched_entity_clear_dep(struct dma_fence *f,
+- struct dma_fence_cb *cb)
+-{
+- struct drm_sched_entity *entity =
+- container_of(cb, struct drm_sched_entity, cb);
+-
+- entity->dependency = NULL;
+- dma_fence_put(f);
+-}
+-
+ /*
+- * drm_sched_entity_clear_dep - callback to clear the entities dependency and
+- * wake up scheduler
++ * drm_sched_entity_wakeup - callback to clear the entity's dependency and
++ * wake up the scheduler
+ */
+ static void drm_sched_entity_wakeup(struct dma_fence *f,
+ struct dma_fence_cb *cb)
+@@ -367,7 +356,8 @@ static void drm_sched_entity_wakeup(stru
+ struct drm_sched_entity *entity =
+ container_of(cb, struct drm_sched_entity, cb);
+
+- drm_sched_entity_clear_dep(f, cb);
++ entity->dependency = NULL;
++ dma_fence_put(f);
+ drm_sched_wakeup_if_can_queue(entity->rq->sched);
+ }
+
+@@ -420,13 +410,6 @@ static bool drm_sched_entity_add_depende
+ fence = dma_fence_get(&s_fence->scheduled);
+ dma_fence_put(entity->dependency);
+ entity->dependency = fence;
+- if (!dma_fence_add_callback(fence, &entity->cb,
+- drm_sched_entity_clear_dep))
+- return true;
+-
+- /* Ignore it when it is already scheduled */
+- dma_fence_put(fence);
+- return false;
+ }
+
+ if (!dma_fence_add_callback(entity->dependency, &entity->cb,
--- /dev/null
+From a89f5fae998bdc4d0505306f93844c9ae059d50c Mon Sep 17 00:00:00 2001
+From: Namjae Jeon <linkinjeon@kernel.org>
+Date: Tue, 10 Jun 2025 18:52:56 +0900
+Subject: ksmbd: add free_transport ops in ksmbd connection
+
+From: Namjae Jeon <linkinjeon@kernel.org>
+
+commit a89f5fae998bdc4d0505306f93844c9ae059d50c upstream.
+
+The free_transport function for a TCP connection can be called from
+smbdirect, which causes a kernel oops. This patch adds a free_transport
+operation to the ksmbd transport ops and provides separate
+free_transport implementations for TCP and smbdirect.
+
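+The idea, sketched minimally here, is that each transport recovers its
+own containing object via container_of() and frees it, rather than the
+connection layer assuming the TCP transport layout:
+
+	static void smb_direct_free_transport(struct ksmbd_transport *kt)
+	{
+		kfree(container_of(kt, struct smb_direct_transport, transport));
+	}
+
+	/* connection teardown dispatches through the ops table */
+	conn->transport->ops->free_transport(conn->transport);
+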
+Fixes: 21a4e47578d4 ("ksmbd: fix use-after-free in __smb2_lease_break_noti()")
+Reviewed-by: Stefan Metzmacher <metze@samba.org>
+Signed-off-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/connection.c | 2 +-
+ fs/smb/server/connection.h | 1 +
+ fs/smb/server/transport_rdma.c | 10 ++++++++--
+ fs/smb/server/transport_tcp.c | 3 ++-
+ 4 files changed, 12 insertions(+), 4 deletions(-)
+
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -40,7 +40,7 @@ void ksmbd_conn_free(struct ksmbd_conn *
+ kvfree(conn->request_buf);
+ kfree(conn->preauth_info);
+ if (atomic_dec_and_test(&conn->refcnt)) {
+- ksmbd_free_transport(conn->transport);
++ conn->transport->ops->free_transport(conn->transport);
+ kfree(conn);
+ }
+ }
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -132,6 +132,7 @@ struct ksmbd_transport_ops {
+ void *buf, unsigned int len,
+ struct smb2_buffer_desc_v1 *desc,
+ unsigned int desc_len);
++ void (*free_transport)(struct ksmbd_transport *kt);
+ };
+
+ struct ksmbd_transport {
+--- a/fs/smb/server/transport_rdma.c
++++ b/fs/smb/server/transport_rdma.c
+@@ -158,7 +158,8 @@ struct smb_direct_transport {
+ };
+
+ #define KSMBD_TRANS(t) ((struct ksmbd_transport *)&((t)->transport))
+-
++#define SMBD_TRANS(t) ((struct smb_direct_transport *)container_of(t, \
++ struct smb_direct_transport, transport))
+ enum {
+ SMB_DIRECT_MSG_NEGOTIATE_REQ = 0,
+ SMB_DIRECT_MSG_DATA_TRANSFER
+@@ -409,6 +410,11 @@ err:
+ return NULL;
+ }
+
++static void smb_direct_free_transport(struct ksmbd_transport *kt)
++{
++ kfree(SMBD_TRANS(kt));
++}
++
+ static void free_transport(struct smb_direct_transport *t)
+ {
+ struct smb_direct_recvmsg *recvmsg;
+@@ -455,7 +461,6 @@ static void free_transport(struct smb_di
+
+ smb_direct_destroy_pools(t);
+ ksmbd_conn_free(KSMBD_TRANS(t)->conn);
+- kfree(t);
+ }
+
+ static struct smb_direct_sendmsg
+@@ -2301,4 +2306,5 @@ static struct ksmbd_transport_ops ksmbd_
+ .read = smb_direct_read,
+ .rdma_read = smb_direct_rdma_read,
+ .rdma_write = smb_direct_rdma_write,
++ .free_transport = smb_direct_free_transport,
+ };
+--- a/fs/smb/server/transport_tcp.c
++++ b/fs/smb/server/transport_tcp.c
+@@ -93,7 +93,7 @@ static struct tcp_transport *alloc_trans
+ return t;
+ }
+
+-void ksmbd_free_transport(struct ksmbd_transport *kt)
++static void ksmbd_tcp_free_transport(struct ksmbd_transport *kt)
+ {
+ struct tcp_transport *t = TCP_TRANS(kt);
+
+@@ -659,4 +659,5 @@ static struct ksmbd_transport_ops ksmbd_
+ .read = ksmbd_tcp_read,
+ .writev = ksmbd_tcp_writev,
+ .disconnect = ksmbd_tcp_disconnect,
++ .free_transport = ksmbd_tcp_free_transport,
+ };
--- /dev/null
+From stable+bounces-164896-greg=kroah.com@vger.kernel.org Mon Jul 28 11:15:30 2025
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+Date: Mon, 28 Jul 2025 11:14:49 +0200
+Subject: mptcp: make fallback action and fallback decision atomic
+To: mptcp@lists.linux.dev, stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: Paolo Abeni <pabeni@redhat.com>, sashal@kernel.org, Matthieu Baerts <matttbe@kernel.org>, syzbot+5cf807c20386d699b524@syzkaller.appspotmail.com, Jakub Kicinski <kuba@kernel.org>
+Message-ID: <20250728091448.3494479-6-matttbe@kernel.org>
+
+From: Paolo Abeni <pabeni@redhat.com>
+
+commit f8a1d9b18c5efc76784f5a326e905f641f839894 upstream.
+
+Syzkaller reported the following splat:
+
+ WARNING: CPU: 1 PID: 7704 at net/mptcp/protocol.h:1223 __mptcp_do_fallback net/mptcp/protocol.h:1223 [inline]
+ WARNING: CPU: 1 PID: 7704 at net/mptcp/protocol.h:1223 mptcp_do_fallback net/mptcp/protocol.h:1244 [inline]
+ WARNING: CPU: 1 PID: 7704 at net/mptcp/protocol.h:1223 check_fully_established net/mptcp/options.c:982 [inline]
+ WARNING: CPU: 1 PID: 7704 at net/mptcp/protocol.h:1223 mptcp_incoming_options+0x21a8/0x2510 net/mptcp/options.c:1153
+ Modules linked in:
+ CPU: 1 UID: 0 PID: 7704 Comm: syz.3.1419 Not tainted 6.16.0-rc3-gbd5ce2324dba #20 PREEMPT(voluntary)
+ Hardware name: QEMU Ubuntu 24.04 PC (i440FX + PIIX, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
+ RIP: 0010:__mptcp_do_fallback net/mptcp/protocol.h:1223 [inline]
+ RIP: 0010:mptcp_do_fallback net/mptcp/protocol.h:1244 [inline]
+ RIP: 0010:check_fully_established net/mptcp/options.c:982 [inline]
+ RIP: 0010:mptcp_incoming_options+0x21a8/0x2510 net/mptcp/options.c:1153
+ Code: 24 18 e8 bb 2a 00 fd e9 1b df ff ff e8 b1 21 0f 00 e8 ec 5f c4 fc 44 0f b7 ac 24 b0 00 00 00 e9 54 f1 ff ff e8 d9 5f c4 fc 90 <0f> 0b 90 e9 b8 f4 ff ff e8 8b 2a 00 fd e9 8d e6 ff ff e8 81 2a 00
+ RSP: 0018:ffff8880a3f08448 EFLAGS: 00010246
+ RAX: 0000000000000000 RBX: ffff8880180a8000 RCX: ffffffff84afcf45
+ RDX: ffff888090223700 RSI: ffffffff84afdaa7 RDI: 0000000000000001
+ RBP: ffff888017955780 R08: 0000000000000001 R09: 0000000000000000
+ R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
+ R13: ffff8880180a8910 R14: ffff8880a3e9d058 R15: 0000000000000000
+ FS: 00005555791b8500(0000) GS:ffff88811c495000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 000000110c2800b7 CR3: 0000000058e44000 CR4: 0000000000350ef0
+ Call Trace:
+ <IRQ>
+ tcp_reset+0x26f/0x2b0 net/ipv4/tcp_input.c:4432
+ tcp_validate_incoming+0x1057/0x1b60 net/ipv4/tcp_input.c:5975
+ tcp_rcv_established+0x5b5/0x21f0 net/ipv4/tcp_input.c:6166
+ tcp_v4_do_rcv+0x5dc/0xa70 net/ipv4/tcp_ipv4.c:1925
+ tcp_v4_rcv+0x3473/0x44a0 net/ipv4/tcp_ipv4.c:2363
+ ip_protocol_deliver_rcu+0xba/0x480 net/ipv4/ip_input.c:205
+ ip_local_deliver_finish+0x2f1/0x500 net/ipv4/ip_input.c:233
+ NF_HOOK include/linux/netfilter.h:317 [inline]
+ NF_HOOK include/linux/netfilter.h:311 [inline]
+ ip_local_deliver+0x1be/0x560 net/ipv4/ip_input.c:254
+ dst_input include/net/dst.h:469 [inline]
+ ip_rcv_finish net/ipv4/ip_input.c:447 [inline]
+ NF_HOOK include/linux/netfilter.h:317 [inline]
+ NF_HOOK include/linux/netfilter.h:311 [inline]
+ ip_rcv+0x514/0x810 net/ipv4/ip_input.c:567
+ __netif_receive_skb_one_core+0x197/0x1e0 net/core/dev.c:5975
+ __netif_receive_skb+0x1f/0x120 net/core/dev.c:6088
+ process_backlog+0x301/0x1360 net/core/dev.c:6440
+ __napi_poll.constprop.0+0xba/0x550 net/core/dev.c:7453
+ napi_poll net/core/dev.c:7517 [inline]
+ net_rx_action+0xb44/0x1010 net/core/dev.c:7644
+ handle_softirqs+0x1d0/0x770 kernel/softirq.c:579
+ do_softirq+0x3f/0x90 kernel/softirq.c:480
+ </IRQ>
+ <TASK>
+ __local_bh_enable_ip+0xed/0x110 kernel/softirq.c:407
+ local_bh_enable include/linux/bottom_half.h:33 [inline]
+ inet_csk_listen_stop+0x2c5/0x1070 net/ipv4/inet_connection_sock.c:1524
+ mptcp_check_listen_stop.part.0+0x1cc/0x220 net/mptcp/protocol.c:2985
+ mptcp_check_listen_stop net/mptcp/mib.h:118 [inline]
+ __mptcp_close+0x9b9/0xbd0 net/mptcp/protocol.c:3000
+ mptcp_close+0x2f/0x140 net/mptcp/protocol.c:3066
+ inet_release+0xed/0x200 net/ipv4/af_inet.c:435
+ inet6_release+0x4f/0x70 net/ipv6/af_inet6.c:487
+ __sock_release+0xb3/0x270 net/socket.c:649
+ sock_close+0x1c/0x30 net/socket.c:1439
+ __fput+0x402/0xb70 fs/file_table.c:465
+ task_work_run+0x150/0x240 kernel/task_work.c:227
+ resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
+ exit_to_user_mode_loop+0xd4/0xe0 kernel/entry/common.c:114
+ exit_to_user_mode_prepare include/linux/entry-common.h:330 [inline]
+ syscall_exit_to_user_mode_work include/linux/entry-common.h:414 [inline]
+ syscall_exit_to_user_mode include/linux/entry-common.h:449 [inline]
+ do_syscall_64+0x245/0x360 arch/x86/entry/syscall_64.c:100
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+ RIP: 0033:0x7fc92f8a36ad
+ Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
+ RSP: 002b:00007ffcf52802d8 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
+ RAX: 0000000000000000 RBX: 00007ffcf52803a8 RCX: 00007fc92f8a36ad
+ RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
+ RBP: 00007fc92fae7ba0 R08: 0000000000000001 R09: 0000002800000000
+ R10: 00007fc92f700000 R11: 0000000000000246 R12: 00007fc92fae5fac
+ R13: 00007fc92fae5fa0 R14: 0000000000026d00 R15: 0000000000026c51
+ </TASK>
+ irq event stamp: 4068
+ hardirqs last enabled at (4076): [<ffffffff81544816>] __up_console_sem+0x76/0x80 kernel/printk/printk.c:344
+ hardirqs last disabled at (4085): [<ffffffff815447fb>] __up_console_sem+0x5b/0x80 kernel/printk/printk.c:342
+ softirqs last enabled at (3096): [<ffffffff840e1be0>] local_bh_enable include/linux/bottom_half.h:33 [inline]
+ softirqs last enabled at (3096): [<ffffffff840e1be0>] inet_csk_listen_stop+0x2c0/0x1070 net/ipv4/inet_connection_sock.c:1524
+ softirqs last disabled at (3097): [<ffffffff813b6b9f>] do_softirq+0x3f/0x90 kernel/softirq.c:480
+
+Since we need to track the 'fallback is possible' condition and the
+fallback status separately, there are a few possible races open between
+the check and the actual fallback action.
+
+Add a spinlock to protect the fallback-related information and use it
+to close all the possible related races. While at it, also remove the
+too-early clearing of allow_infinite_fallback in __mptcp_subflow_connect():
+the field will be correctly cleared by subflow_finish_connect() if/when
+the connection completes successfully.
+
+If fallback is not possible, as per RFC, reset the current subflow.
+
+Since the fallback operation can now fail and its return value should
+be checked, rename the helper accordingly.
+
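+The core of the fix is the check-then-act pattern below, a condensed
+sketch of the renamed __mptcp_try_fallback() helper:
+
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_infinite_fallback) {
+		/* decision: fallback no longer possible */
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;	/* caller resets the subflow */
+	}
+	/* action: performed under the same lock as the decision */
+	set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
+	spin_unlock_bh(&msk->fallback_lock);
+	return true;
+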
+Fixes: 0530020a7c8f ("mptcp: track and update contiguous data status")
+Cc: stable@vger.kernel.org
+Reported-by: Matthieu Baerts <matttbe@kernel.org>
+Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/570
+Reported-by: syzbot+5cf807c20386d699b524@syzkaller.appspotmail.com
+Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/555
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20250714-net-mptcp-fallback-races-v1-1-391aff963322@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ Conflicts in protocol.h, because commit 6ebf6f90ab4a ("mptcp: add
+ mptcpi_subflows_total counter") is not in this version, and this
+ causes conflicts in the context. Commit 65b02260a0e0 ("mptcp: export
+ mptcp_subflow_early_fallback()") is also not in this version, and
+ moves code from protocol.c to protocol.h, but the modification can
+ still apply there. ]
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/options.c | 3 ++-
+ net/mptcp/protocol.c | 42 ++++++++++++++++++++++++++++++++++++------
+ net/mptcp/protocol.h | 24 ++++++++++++++++++------
+ net/mptcp/subflow.c | 11 +++++------
+ 4 files changed, 61 insertions(+), 19 deletions(-)
+
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -979,8 +979,9 @@ static bool check_fully_established(stru
+ if (subflow->mp_join)
+ goto reset;
+ subflow->mp_capable = 0;
++ if (!mptcp_try_fallback(ssk))
++ goto reset;
+ pr_fallback(msk);
+- mptcp_do_fallback(ssk);
+ return false;
+ }
+
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -623,10 +623,9 @@ static bool mptcp_check_data_fin(struct
+
+ static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
+ {
+- if (READ_ONCE(msk->allow_infinite_fallback)) {
++ if (mptcp_try_fallback(ssk)) {
+ MPTCP_INC_STATS(sock_net(ssk),
+ MPTCP_MIB_DSSCORRUPTIONFALLBACK);
+- mptcp_do_fallback(ssk);
+ } else {
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DSSCORRUPTIONRESET);
+ mptcp_subflow_reset(ssk);
+@@ -887,6 +886,14 @@ static bool __mptcp_finish_join(struct m
+ if (sk->sk_state != TCP_ESTABLISHED)
+ return false;
+
++ spin_lock_bh(&msk->fallback_lock);
++ if (__mptcp_check_fallback(msk)) {
++ spin_unlock_bh(&msk->fallback_lock);
++ return false;
++ }
++ mptcp_subflow_joined(msk, ssk);
++ spin_unlock_bh(&msk->fallback_lock);
++
+ /* attach to msk socket only after we are sure we will deal with it
+ * at close time
+ */
+@@ -895,7 +902,6 @@ static bool __mptcp_finish_join(struct m
+
+ mptcp_subflow_ctx(ssk)->subflow_id = msk->subflow_id++;
+ mptcp_sockopt_sync_locked(msk, ssk);
+- mptcp_subflow_joined(msk, ssk);
+ mptcp_stop_tout_timer(sk);
+ __mptcp_propagate_sndbuf(sk, ssk);
+ return true;
+@@ -1231,10 +1237,14 @@ static void mptcp_update_infinite_map(st
+ mpext->infinite_map = 1;
+ mpext->data_len = 0;
+
++ if (!mptcp_try_fallback(ssk)) {
++ mptcp_subflow_reset(ssk);
++ return;
++ }
++
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPTX);
+ mptcp_subflow_ctx(ssk)->send_infinite_map = 0;
+ pr_fallback(msk);
+- mptcp_do_fallback(ssk);
+ }
+
+ #define MPTCP_MAX_GSO_SIZE (GSO_LEGACY_MAX_SIZE - (MAX_TCP_HEADER + 1))
+@@ -2606,9 +2616,9 @@ static void mptcp_check_fastclose(struct
+
+ static void __mptcp_retrans(struct sock *sk)
+ {
++ struct mptcp_sendmsg_info info = { .data_lock_held = true, };
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct mptcp_subflow_context *subflow;
+- struct mptcp_sendmsg_info info = {};
+ struct mptcp_data_frag *dfrag;
+ struct sock *ssk;
+ int ret, err;
+@@ -2653,6 +2663,18 @@ static void __mptcp_retrans(struct sock
+ info.sent = 0;
+ info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len :
+ dfrag->already_sent;
++
++ /*
++ * make the whole retrans decision, xmit, disallow
++ * fallback atomic
++ */
++ spin_lock_bh(&msk->fallback_lock);
++ if (__mptcp_check_fallback(msk)) {
++ spin_unlock_bh(&msk->fallback_lock);
++ release_sock(ssk);
++ return;
++ }
++
+ while (info.sent < info.limit) {
+ ret = mptcp_sendmsg_frag(sk, ssk, dfrag, &info);
+ if (ret <= 0)
+@@ -2668,6 +2690,7 @@ static void __mptcp_retrans(struct sock
+ info.size_goal);
+ WRITE_ONCE(msk->allow_infinite_fallback, false);
+ }
++ spin_unlock_bh(&msk->fallback_lock);
+
+ release_sock(ssk);
+ }
+@@ -2801,6 +2824,7 @@ static void __mptcp_init_sock(struct soc
+ msk->subflow_id = 1;
+
+ mptcp_pm_data_init(msk);
++ spin_lock_init(&msk->fallback_lock);
+
+ /* re-use the csk retrans timer for MPTCP-level retrans */
+ timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0);
+@@ -3599,7 +3623,13 @@ bool mptcp_finish_join(struct sock *ssk)
+
+ /* active subflow, already present inside the conn_list */
+ if (!list_empty(&subflow->node)) {
++ spin_lock_bh(&msk->fallback_lock);
++ if (__mptcp_check_fallback(msk)) {
++ spin_unlock_bh(&msk->fallback_lock);
++ return false;
++ }
+ mptcp_subflow_joined(msk, ssk);
++ spin_unlock_bh(&msk->fallback_lock);
+ mptcp_propagate_sndbuf(parent, ssk);
+ return true;
+ }
+@@ -3712,7 +3742,7 @@ static void mptcp_subflow_early_fallback
+ struct mptcp_subflow_context *subflow)
+ {
+ subflow->request_mptcp = 0;
+- __mptcp_do_fallback(msk);
++ WARN_ON_ONCE(!__mptcp_try_fallback(msk));
+ }
+
+ static int mptcp_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -334,6 +334,10 @@ struct mptcp_sock {
+ u32 subflow_id;
+ u32 setsockopt_seq;
+ char ca_name[TCP_CA_NAME_MAX];
++
++ spinlock_t fallback_lock; /* protects fallback and
++ * allow_infinite_fallback
++ */
+ };
+
+ #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
+@@ -1097,25 +1101,32 @@ static inline bool mptcp_check_fallback(
+ return __mptcp_check_fallback(msk);
+ }
+
+-static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
++static inline bool __mptcp_try_fallback(struct mptcp_sock *msk)
+ {
+ if (test_bit(MPTCP_FALLBACK_DONE, &msk->flags)) {
+ pr_debug("TCP fallback already done (msk=%p)\n", msk);
+- return;
++ return true;
+ }
+- if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback)))
+- return;
++ spin_lock_bh(&msk->fallback_lock);
++ if (!msk->allow_infinite_fallback) {
++ spin_unlock_bh(&msk->fallback_lock);
++ return false;
++ }
++
+ set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
++ spin_unlock_bh(&msk->fallback_lock);
++ return true;
+ }
+
+-static inline void mptcp_do_fallback(struct sock *ssk)
++static inline bool mptcp_try_fallback(struct sock *ssk)
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ struct sock *sk = subflow->conn;
+ struct mptcp_sock *msk;
+
+ msk = mptcp_sk(sk);
+- __mptcp_do_fallback(msk);
++ if (!__mptcp_try_fallback(msk))
++ return false;
+ if (READ_ONCE(msk->snd_data_fin_enable) && !(ssk->sk_shutdown & SEND_SHUTDOWN)) {
+ gfp_t saved_allocation = ssk->sk_allocation;
+
+@@ -1127,6 +1138,7 @@ static inline void mptcp_do_fallback(str
+ tcp_shutdown(ssk, SEND_SHUTDOWN);
+ ssk->sk_allocation = saved_allocation;
+ }
++ return true;
+ }
+
+ #define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)\n", __func__, a)
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -524,9 +524,11 @@ static void subflow_finish_connect(struc
+ mptcp_get_options(skb, &mp_opt);
+ if (subflow->request_mptcp) {
+ if (!(mp_opt.suboptions & OPTION_MPTCP_MPC_SYNACK)) {
++ if (!mptcp_try_fallback(sk))
++ goto do_reset;
++
+ MPTCP_INC_STATS(sock_net(sk),
+ MPTCP_MIB_MPCAPABLEACTIVEFALLBACK);
+- mptcp_do_fallback(sk);
+ pr_fallback(msk);
+ goto fallback;
+ }
+@@ -1350,7 +1352,7 @@ fallback:
+ return true;
+ }
+
+- if (!READ_ONCE(msk->allow_infinite_fallback)) {
++ if (!mptcp_try_fallback(ssk)) {
+ /* fatal protocol error, close the socket.
+ * subflow_error_report() will introduce the appropriate barriers
+ */
+@@ -1366,8 +1368,6 @@ reset:
+ WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
+ return false;
+ }
+-
+- mptcp_do_fallback(ssk);
+ }
+
+ skb = skb_peek(&ssk->sk_receive_queue);
+@@ -1612,7 +1612,6 @@ int __mptcp_subflow_connect(struct sock
+ /* discard the subflow socket */
+ mptcp_sock_graft(ssk, sk->sk_socket);
+ iput(SOCK_INODE(sf));
+- WRITE_ONCE(msk->allow_infinite_fallback, false);
+ mptcp_stop_tout_timer(sk);
+ return 0;
+
+@@ -1790,7 +1789,7 @@ static void subflow_state_change(struct
+
+ msk = mptcp_sk(parent);
+ if (subflow_simultaneous_connect(sk)) {
+- mptcp_do_fallback(sk);
++ WARN_ON_ONCE(!mptcp_try_fallback(sk));
+ pr_fallback(msk);
+ subflow->conn_finished = 1;
+ mptcp_propagate_state(parent, sk, subflow, NULL);
--- /dev/null
+From matttbe@kernel.org Mon Jul 28 11:15:30 2025
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+Date: Mon, 28 Jul 2025 11:14:50 +0200
+Subject: mptcp: plug races between subflow fail and subflow creation
+To: mptcp@lists.linux.dev, stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: Paolo Abeni <pabeni@redhat.com>, sashal@kernel.org, "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Jakub Kicinski <kuba@kernel.org>
+Message-ID: <20250728091448.3494479-7-matttbe@kernel.org>
+
+From: Paolo Abeni <pabeni@redhat.com>
+
+commit def5b7b2643ebba696fc60ddf675dca13f073486 upstream.
+
+We have races similar to the one addressed by the previous patch between
+subflow failing and additional subflow creation. They are just harder to
+trigger.
+
+The solution is similar. Use a separate flag to track the condition
+'the socket state prevents any additional subflow creation', protected
+by the fallback lock.
+
+The socket fallback sets this flag, as does receiving or sending an
+MP_FAIL option.
+
+The field 'allow_infinite_fallback' is now always touched under the
+relevant lock, so we can drop the ONCE annotation on write.
+
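+On the subflow-join side this becomes the following guard, a condensed
+sketch of the hunks below:
+
+	spin_lock_bh(&msk->fallback_lock);
+	if (!msk->allow_subflows) {
+		/* a fallback or MP_FAIL already forbade new subflows */
+		spin_unlock_bh(&msk->fallback_lock);
+		return false;
+	}
+	mptcp_subflow_joined(msk, ssk);
+	spin_unlock_bh(&msk->fallback_lock);
+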
+Fixes: 478d770008b0 ("mptcp: send out MP_FAIL when data checksum fails")
+Cc: stable@vger.kernel.org
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20250714-net-mptcp-fallback-races-v1-2-391aff963322@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ Conflicts in subflow.c, because commit f1f26512a9bf ("mptcp: use plain
+ bool instead of custom binary enum") and commit 46a5d3abedbe
+ ("mptcp: fix typos in comments") are not in this version. Both are
+ causing conflicts in the context, and the same modifications can still
+ be applied. ]
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/pm.c | 8 +++++++-
+ net/mptcp/protocol.c | 11 ++++++-----
+ net/mptcp/protocol.h | 7 +++++--
+ net/mptcp/subflow.c | 19 ++++++++++++++-----
+ 4 files changed, 32 insertions(+), 13 deletions(-)
+
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -304,8 +304,14 @@ void mptcp_pm_mp_fail_received(struct so
+
+ pr_debug("fail_seq=%llu\n", fail_seq);
+
+- if (!READ_ONCE(msk->allow_infinite_fallback))
++ /* After accepting the fail, we can't create any other subflows */
++ spin_lock_bh(&msk->fallback_lock);
++ if (!msk->allow_infinite_fallback) {
++ spin_unlock_bh(&msk->fallback_lock);
+ return;
++ }
++ msk->allow_subflows = false;
++ spin_unlock_bh(&msk->fallback_lock);
+
+ if (!subflow->fail_tout) {
+ pr_debug("send MP_FAIL response and infinite map\n");
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -875,7 +875,7 @@ void mptcp_data_ready(struct sock *sk, s
+ static void mptcp_subflow_joined(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ mptcp_subflow_ctx(ssk)->map_seq = READ_ONCE(msk->ack_seq);
+- WRITE_ONCE(msk->allow_infinite_fallback, false);
++ msk->allow_infinite_fallback = false;
+ mptcp_event(MPTCP_EVENT_SUB_ESTABLISHED, msk, ssk, GFP_ATOMIC);
+ }
+
+@@ -887,7 +887,7 @@ static bool __mptcp_finish_join(struct m
+ return false;
+
+ spin_lock_bh(&msk->fallback_lock);
+- if (__mptcp_check_fallback(msk)) {
++ if (!msk->allow_subflows) {
+ spin_unlock_bh(&msk->fallback_lock);
+ return false;
+ }
+@@ -2688,7 +2688,7 @@ static void __mptcp_retrans(struct sock
+ len = max(copied, len);
+ tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
+ info.size_goal);
+- WRITE_ONCE(msk->allow_infinite_fallback, false);
++ msk->allow_infinite_fallback = false;
+ }
+ spin_unlock_bh(&msk->fallback_lock);
+
+@@ -2819,7 +2819,8 @@ static void __mptcp_init_sock(struct soc
+ WRITE_ONCE(msk->first, NULL);
+ inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
+ WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
+- WRITE_ONCE(msk->allow_infinite_fallback, true);
++ msk->allow_infinite_fallback = true;
++ msk->allow_subflows = true;
+ msk->recovery = false;
+ msk->subflow_id = 1;
+
+@@ -3624,7 +3625,7 @@ bool mptcp_finish_join(struct sock *ssk)
+ /* active subflow, already present inside the conn_list */
+ if (!list_empty(&subflow->node)) {
+ spin_lock_bh(&msk->fallback_lock);
+- if (__mptcp_check_fallback(msk)) {
++ if (!msk->allow_subflows) {
+ spin_unlock_bh(&msk->fallback_lock);
+ return false;
+ }
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -330,13 +330,15 @@ struct mptcp_sock {
+ u64 rtt_us; /* last maximum rtt of subflows */
+ } rcvq_space;
+ u8 scaling_ratio;
++ bool allow_subflows;
+
+ u32 subflow_id;
+ u32 setsockopt_seq;
+ char ca_name[TCP_CA_NAME_MAX];
+
+- spinlock_t fallback_lock; /* protects fallback and
+- * allow_infinite_fallback
++ spinlock_t fallback_lock; /* protects fallback,
++ * allow_infinite_fallback and
++ * allow_join
+ */
+ };
+
+@@ -1113,6 +1115,7 @@ static inline bool __mptcp_try_fallback(
+ return false;
+ }
+
++ msk->allow_subflows = false;
+ set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
+ spin_unlock_bh(&msk->fallback_lock);
+ return true;
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1257,20 +1257,29 @@ static void subflow_sched_work_if_closed
+ mptcp_schedule_work(sk);
+ }
+
+-static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
++static bool mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ unsigned long fail_tout;
+
++ /* we are really failing, prevent any later subflow join */
++ spin_lock_bh(&msk->fallback_lock);
++ if (!msk->allow_infinite_fallback) {
++ spin_unlock_bh(&msk->fallback_lock);
++ return false;
++ }
++ msk->allow_subflows = false;
++ spin_unlock_bh(&msk->fallback_lock);
++
+ /* greceful failure can happen only on the MPC subflow */
+ if (WARN_ON_ONCE(ssk != READ_ONCE(msk->first)))
+- return;
++ return false;
+
+ /* since the close timeout take precedence on the fail one,
+ * no need to start the latter when the first is already set
+ */
+ if (sock_flag((struct sock *)msk, SOCK_DEAD))
+- return;
++ return true;
+
+ /* we don't need extreme accuracy here, use a zero fail_tout as special
+ * value meaning no fail timeout at all;
+@@ -1282,6 +1291,7 @@ static void mptcp_subflow_fail(struct mp
+ tcp_send_ack(ssk);
+
+ mptcp_reset_tout_timer(msk, subflow->fail_tout);
++ return true;
+ }
+
+ static bool subflow_check_data_avail(struct sock *ssk)
+@@ -1342,12 +1352,11 @@ fallback:
+ (subflow->mp_join || subflow->valid_csum_seen)) {
+ subflow->send_mp_fail = 1;
+
+- if (!READ_ONCE(msk->allow_infinite_fallback)) {
++ if (!mptcp_subflow_fail(msk, ssk)) {
+ subflow->reset_transient = 0;
+ subflow->reset_reason = MPTCP_RST_EMIDDLEBOX;
+ goto reset;
+ }
+- mptcp_subflow_fail(msk, ssk);
+ WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_DATA_AVAIL);
+ return true;
+ }
--- /dev/null
+From stable+bounces-164898-greg=kroah.com@vger.kernel.org Mon Jul 28 11:15:34 2025
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+Date: Mon, 28 Jul 2025 11:14:51 +0200
+Subject: mptcp: reset fallback status gracefully at disconnect() time
+To: mptcp@lists.linux.dev, stable@vger.kernel.org, gregkh@linuxfoundation.org
+Cc: Paolo Abeni <pabeni@redhat.com>, sashal@kernel.org, "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Jakub Kicinski <kuba@kernel.org>
+Message-ID: <20250728091448.3494479-8-matttbe@kernel.org>
+
+From: Paolo Abeni <pabeni@redhat.com>
+
+commit da9b2fc7b73d147d88abe1922de5ab72d72d7756 upstream.
+
+mptcp_disconnect() clears the fallback bit unconditionally, without
+touching the associated flags.
+
+The bit clear is safe, as no fallback operation can race with that --
+all subflows are already in TCP_CLOSE status thanks to the previous
+FASTCLOSE -- but we need to consistently reset all the fallback related
+status.
+
+Also acquire the relevant lock, to avoid fouling static analyzers.
+
+Fixes: b29fcfb54cd7 ("mptcp: full disconnect implementation")
+Cc: stable@vger.kernel.org
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20250714-net-mptcp-fallback-races-v1-3-391aff963322@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/protocol.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -3208,7 +3208,16 @@ static int mptcp_disconnect(struct sock
+ * subflow
+ */
+ mptcp_destroy_common(msk, MPTCP_CF_FASTCLOSE);
++
++ /* The first subflow is already in TCP_CLOSE status, the following
++ * can't overlap with a fallback anymore
++ */
++ spin_lock_bh(&msk->fallback_lock);
++ msk->allow_subflows = true;
++ msk->allow_infinite_fallback = true;
+ WRITE_ONCE(msk->flags, 0);
++ spin_unlock_bh(&msk->fallback_lock);
++
+ msk->cb_flags = 0;
+ msk->recovery = false;
+ msk->can_ack = false;
--- /dev/null
+From stable+bounces-165034-greg=kroah.com@vger.kernel.org Tue Jul 29 07:37:08 2025
+From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Date: Tue, 29 Jul 2025 13:36:51 +0800
+Subject: Revert "selftests/bpf: Add a cgroup prog bpf_get_ns_current_pid_tgid() test"
+To: stable@vger.kernel.org
+Cc: Sasha Levin <sashal@kernel.org>, Andrii Nakryiko <andrii@kernel.org>, Yonghong Song <yonghong.song@linux.dev>, Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Message-ID: <20250729053652.73667-1-shung-hsi.yu@suse.com>
+
+From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+
+This reverts commit 4730b07ef7745d7cd48c6aa9f72d75ac136d436f.
+
+The test depends on commit eb166e522c77 ("bpf: Allow helper
+bpf_get_[ns_]current_pid_tgid() for all prog types"), which is not part
+of the stable 6.6 code base, so the test will fail. Revert it, since
+the failure is a false positive.
+
+Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c | 73 -----------
+ tools/testing/selftests/bpf/progs/test_ns_current_pid_tgid.c | 7 -
+ 2 files changed, 80 deletions(-)
+
+--- a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
++++ b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+@@ -12,7 +12,6 @@
+ #include <sys/wait.h>
+ #include <sys/mount.h>
+ #include <fcntl.h>
+-#include "network_helpers.h"
+
+ #define STACK_SIZE (1024 * 1024)
+ static char child_stack[STACK_SIZE];
+@@ -75,50 +74,6 @@ cleanup:
+ return ret;
+ }
+
+-static int test_current_pid_tgid_cgrp(void *args)
+-{
+- struct test_ns_current_pid_tgid__bss *bss;
+- struct test_ns_current_pid_tgid *skel;
+- int server_fd = -1, ret = -1, err;
+- int cgroup_fd = *(int *)args;
+- pid_t tgid, pid;
+-
+- skel = test_ns_current_pid_tgid__open();
+- if (!ASSERT_OK_PTR(skel, "test_ns_current_pid_tgid__open"))
+- return ret;
+-
+- bpf_program__set_autoload(skel->progs.cgroup_bind4, true);
+-
+- err = test_ns_current_pid_tgid__load(skel);
+- if (!ASSERT_OK(err, "test_ns_current_pid_tgid__load"))
+- goto cleanup;
+-
+- bss = skel->bss;
+- if (get_pid_tgid(&pid, &tgid, bss))
+- goto cleanup;
+-
+- skel->links.cgroup_bind4 = bpf_program__attach_cgroup(
+- skel->progs.cgroup_bind4, cgroup_fd);
+- if (!ASSERT_OK_PTR(skel->links.cgroup_bind4, "bpf_program__attach_cgroup"))
+- goto cleanup;
+-
+- server_fd = start_server(AF_INET, SOCK_STREAM, NULL, 0, 0);
+- if (!ASSERT_GE(server_fd, 0, "start_server"))
+- goto cleanup;
+-
+- if (!ASSERT_EQ(bss->user_pid, pid, "pid"))
+- goto cleanup;
+- if (!ASSERT_EQ(bss->user_tgid, tgid, "tgid"))
+- goto cleanup;
+- ret = 0;
+-
+-cleanup:
+- if (server_fd >= 0)
+- close(server_fd);
+- test_ns_current_pid_tgid__destroy(skel);
+- return ret;
+-}
+-
+ static void test_ns_current_pid_tgid_new_ns(int (*fn)(void *), void *arg)
+ {
+ int wstatus;
+@@ -140,25 +95,6 @@ static void test_ns_current_pid_tgid_new
+ return;
+ }
+
+-static void test_in_netns(int (*fn)(void *), void *arg)
+-{
+- struct nstoken *nstoken = NULL;
+-
+- SYS(cleanup, "ip netns add ns_current_pid_tgid");
+- SYS(cleanup, "ip -net ns_current_pid_tgid link set dev lo up");
+-
+- nstoken = open_netns("ns_current_pid_tgid");
+- if (!ASSERT_OK_PTR(nstoken, "open_netns"))
+- goto cleanup;
+-
+- test_ns_current_pid_tgid_new_ns(fn, arg);
+-
+-cleanup:
+- if (nstoken)
+- close_netns(nstoken);
+- SYS_NOFAIL("ip netns del ns_current_pid_tgid");
+-}
+-
+ /* TODO: use a different tracepoint */
+ void serial_test_ns_current_pid_tgid(void)
+ {
+@@ -166,13 +102,4 @@ void serial_test_ns_current_pid_tgid(voi
+ test_current_pid_tgid_tp(NULL);
+ if (test__start_subtest("new_ns_tp"))
+ test_ns_current_pid_tgid_new_ns(test_current_pid_tgid_tp, NULL);
+- if (test__start_subtest("new_ns_cgrp")) {
+- int cgroup_fd = -1;
+-
+- cgroup_fd = test__join_cgroup("/sock_addr");
+- if (ASSERT_GE(cgroup_fd, 0, "join_cgroup")) {
+- test_in_netns(test_current_pid_tgid_cgrp, &cgroup_fd);
+- close(cgroup_fd);
+- }
+- }
+ }
+--- a/tools/testing/selftests/bpf/progs/test_ns_current_pid_tgid.c
++++ b/tools/testing/selftests/bpf/progs/test_ns_current_pid_tgid.c
+@@ -28,11 +28,4 @@ int tp_handler(const void *ctx)
+ return 0;
+ }
+
+-SEC("?cgroup/bind4")
+-int cgroup_bind4(struct bpf_sock_addr *ctx)
+-{
+- get_pid_tgid();
+- return 1;
+-}
+-
+ char _license[] SEC("license") = "GPL";
mtd-rawnand-qcom-fix-last-codeword-read-in-qcom_param_page_type_exec.patch
perf-x86-intel-fix-crash-in-icl_update_topdown_event.patch
wifi-mt76-mt7921-prevent-decap-offload-config-before-sta-initialization.patch
+ksmbd-add-free_transport-ops-in-ksmbd-connection.patch
+arm64-cpufeatures-kvm-add-armv8.9-feat_ecbhb-bits-in-id_aa64mmfr1-register.patch
+mptcp-make-fallback-action-and-fallback-decision-atomic.patch
+mptcp-plug-races-between-subflow-fail-and-subflow-creation.patch
+mptcp-reset-fallback-status-gracefully-at-disconnect-time.patch
+arm-9448-1-use-an-absolute-path-to-unified.h-in-kbuild_aflags.patch
+drm-sched-remove-optimization-that-causes-hang-when-killing-dependent-jobs.patch
+spi-cadence-quadspi-fix-cleanup-of-rx_chan-on-failure-paths.patch
+revert-selftests-bpf-add-a-cgroup-prog-bpf_get_ns_current_pid_tgid-test.patch
--- /dev/null
+From stable+bounces-165105-greg=kroah.com@vger.kernel.org Tue Jul 29 19:11:43 2025
+From: Ronald Wahl <rwahl@gmx.de>
+Date: Tue, 29 Jul 2025 19:11:14 +0200
+Subject: spi: cadence-quadspi: fix cleanup of rx_chan on failure paths
+To: stable@vger.kernel.org
+Cc: ronald.wahl@legrand.com, Khairul Anuar Romli <khairul.anuar.romli@altera.com>, Dan Carpenter <dan.carpenter@linaro.org>, Mark Brown <broonie@kernel.org>
+Message-ID: <20250729171114.3982809-1-rwahl@gmx.de>
+
+From: Khairul Anuar Romli <khairul.anuar.romli@altera.com>
+
+commit 04a8ff1bc3514808481ddebd454342ad902a3f60 upstream.
+
+Remove incorrect checks on cqspi->rx_chan that cause driver breakage
+during failure cleanup. Ensure proper resource freeing on the success
+path when operating in cqspi->use_direct_mode, preventing leaks and
+improving stability.
+
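+As a hedged sketch of the intended teardown (exact placement in the
+probe/remove paths is assumed here), the DMA channel is released only
+when direct mode actually acquired one:
+
+	if (cqspi->use_direct_mode && cqspi->rx_chan)
+		dma_release_channel(cqspi->rx_chan);
+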
+Signed-off-by: Khairul Anuar Romli <khairul.anuar.romli@altera.com>
+Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org>
+Link: https://patch.msgid.link/89765a2b94f047ded4f14babaefb7ef92ba07cb2.1751274389.git.khairul.anuar.romli@altera.com
+Signed-off-by: Mark Brown <broonie@kernel.org>
+[Minor conflict resolved due to code context change.]
+Signed-off-by: Ronald Wahl <ronald.wahl@legrand.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/spi/spi-cadence-quadspi.c | 5 -----
+ 1 file changed, 5 deletions(-)
+
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1870,11 +1870,6 @@ static int cqspi_probe(struct platform_d
+
+ pm_runtime_enable(dev);
+
+- if (cqspi->rx_chan) {
+- dma_release_channel(cqspi->rx_chan);
+- goto probe_setup_failed;
+- }
+-
+ ret = spi_register_controller(host);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register SPI ctlr %d\n", ret);