--- /dev/null
+From stable+bounces-225630-greg=kroah.com@vger.kernel.org Mon Mar 16 18:20:23 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:19:47 -0400
+Subject: can: gs_usb: gs_can_open(): always configure bitrates before starting device
+To: stable@vger.kernel.org
+Cc: Marc Kleine-Budde <mkl@pengutronix.de>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316171947.1022973-1-sashal@kernel.org>
+
+From: Marc Kleine-Budde <mkl@pengutronix.de>
+
+[ Upstream commit 2df6162785f31f1bbb598cfc3b08e4efc88f80b6 ]
+
+So far the driver populated the struct can_priv::do_set_bittiming() and
+struct can_priv::fd::do_set_data_bittiming() callbacks.
+
+Before bringing up the interface, user space has to configure the bitrates.
+With these callbacks the configuration is directly forwarded into the CAN
+hardware. Then the interface can be brought up.
+
+An ifdown-ifup cycle (without changing the bit rates) doesn't re-configure
+the bitrates in the CAN hardware. This leads to a problem with the
+CANable-2.5 [1] firmware, which resets the configured bit rates during
+ifdown.
+
+To fix the problem, remove both bit timing callbacks and always configure
+the bitrates in the struct net_device_ops::ndo_open() callback.
+
+[1] https://github.com/Elmue/CANable-2.5-firmware-Slcan-and-Candlelight
+
+Cc: stable@vger.kernel.org
+Fixes: d08e973a77d1 ("can: gs_usb: Added support for the GS_USB CAN devices")
+Link: https://patch.msgid.link/20260219-gs_usb-always-configure-bitrates-v2-1-671f8ba5b0a5@pengutronix.de
+Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
+[ adapted the `.fd` sub-struct ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/can/usb/gs_usb.c | 22 ++++++++++++++++------
+ 1 file changed, 16 insertions(+), 6 deletions(-)
+
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -769,9 +769,8 @@ device_detach:
+ }
+ }
+
+-static int gs_usb_set_bittiming(struct net_device *netdev)
++static int gs_usb_set_bittiming(struct gs_can *dev)
+ {
+- struct gs_can *dev = netdev_priv(netdev);
+ struct can_bittiming *bt = &dev->can.bittiming;
+ struct gs_device_bittiming dbt = {
+ .prop_seg = cpu_to_le32(bt->prop_seg),
+@@ -788,9 +787,8 @@ static int gs_usb_set_bittiming(struct n
+ GFP_KERNEL);
+ }
+
+-static int gs_usb_set_data_bittiming(struct net_device *netdev)
++static int gs_usb_set_data_bittiming(struct gs_can *dev)
+ {
+- struct gs_can *dev = netdev_priv(netdev);
+ struct can_bittiming *bt = &dev->can.data_bittiming;
+ struct gs_device_bittiming dbt = {
+ .prop_seg = cpu_to_le32(bt->prop_seg),
+@@ -1054,6 +1052,20 @@ static int gs_can_open(struct net_device
+ if (dev->feature & GS_CAN_FEATURE_HW_TIMESTAMP)
+ flags |= GS_CAN_MODE_HW_TIMESTAMP;
+
++ rc = gs_usb_set_bittiming(dev);
++ if (rc) {
++ netdev_err(netdev, "failed to set bittiming: %pe\n", ERR_PTR(rc));
++ goto out_usb_kill_anchored_urbs;
++ }
++
++ if (ctrlmode & CAN_CTRLMODE_FD) {
++ rc = gs_usb_set_data_bittiming(dev);
++ if (rc) {
++ netdev_err(netdev, "failed to set data bittiming: %pe\n", ERR_PTR(rc));
++ goto out_usb_kill_anchored_urbs;
++ }
++ }
++
+ /* finally start device */
+ dev->can.state = CAN_STATE_ERROR_ACTIVE;
+ dm.flags = cpu_to_le32(flags);
+@@ -1354,7 +1366,6 @@ static struct gs_can *gs_make_candev(uns
+ dev->can.state = CAN_STATE_STOPPED;
+ dev->can.clock.freq = le32_to_cpu(bt_const.fclk_can);
+ dev->can.bittiming_const = &dev->bt_const;
+- dev->can.do_set_bittiming = gs_usb_set_bittiming;
+
+ dev->can.ctrlmode_supported = CAN_CTRLMODE_CC_LEN8_DLC;
+
+@@ -1378,7 +1389,6 @@ static struct gs_can *gs_make_candev(uns
+ * GS_CAN_FEATURE_BT_CONST_EXT is set.
+ */
+ dev->can.data_bittiming_const = &dev->bt_const;
+- dev->can.do_set_data_bittiming = gs_usb_set_data_bittiming;
+ }
+
+ if (feature & GS_CAN_FEATURE_TERMINATION) {
--- /dev/null
+From stable+bounces-227185-greg=kroah.com@vger.kernel.org Thu Mar 19 01:36:00 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 20:35:52 -0400
+Subject: cifs: open files should not hold ref on superblock
+To: stable@vger.kernel.org
+Cc: Shyam Prasad N <sprasad@microsoft.com>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319003552.1847058-1-sashal@kernel.org>
+
+From: Shyam Prasad N <sprasad@microsoft.com>
+
+[ Upstream commit 340cea84f691c5206561bb2e0147158fe02070be ]
+
+Today whenever we deal with a file, in addition to holding
+a reference on the dentry, we also get a reference on the
+superblock. This happens in two cases:
+1. when a new cinode is allocated
+2. when an oplock break is being processed
+
+The reasoning for holding the superblock ref was to make sure
+that when umount happens, if there are still users of inodes and
+dentries, it does not try to clean them up, but instead waits for
+the last ref to the superblock to be dropped by the last of such users.
+
+But the side effect of doing that is that umount silently drops
+a ref on the superblock and we could have deferred closes and
+lease breaks still holding these refs.
+
+Ideally, we should ensure that all of these users of inodes and
+dentries are cleaned up at the time of umount, which is what this
+change does.
+
+This change makes these code paths hold a ref on the
+dentry (and hence the inode) instead. That way, umount is
+guaranteed to clean up SMB client resources when it drops the last
+ref on the superblock (for example, when the same objects are shared).
+
+The code change also moves the call to close all the files in
+deferred close list to the umount code path. It also waits for
+oplock_break workers to be flushed before calling
+kill_anon_super (which eventually frees up those objects).
+
+Fixes: 24261fc23db9 ("cifs: delay super block destruction until all cifsFileInfo objects are gone")
+Fixes: 705c79101ccf ("smb: client: fix use-after-free in cifs_oplock_break")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ adapted kmalloc_obj() macro to kmalloc(sizeof()) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/cifsfs.c | 7 +++++--
+ fs/smb/client/cifsproto.h | 1 +
+ fs/smb/client/file.c | 11 -----------
+ fs/smb/client/misc.c | 43 +++++++++++++++++++++++++++++++++++++++++++
+ fs/smb/client/trace.h | 2 ++
+ 5 files changed, 51 insertions(+), 13 deletions(-)
+
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -290,10 +290,14 @@ static void cifs_kill_sb(struct super_bl
+
+ /*
+ * We need to release all dentries for the cached directories
+- * before we kill the sb.
++ * and close all deferred file handles before we kill the sb.
+ */
+ if (cifs_sb->root) {
+ close_all_cached_dirs(cifs_sb);
++ cifs_close_all_deferred_files_sb(cifs_sb);
++
++ /* Wait for all pending oplock breaks to complete */
++ flush_workqueue(cifsoplockd_wq);
+
+ /* finally release root dentry */
+ dput(cifs_sb->root);
+@@ -768,7 +772,6 @@ static void cifs_umount_begin(struct sup
+ spin_unlock(&tcon->tc_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+
+- cifs_close_all_deferred_files(tcon);
+ /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */
+ /* cancel_notify_requests(tcon); */
+ if (tcon->ses && tcon->ses->server) {
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -300,6 +300,7 @@ extern void cifs_close_deferred_file(str
+
+ extern void cifs_close_all_deferred_files(struct cifs_tcon *cifs_tcon);
+
++void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb);
+ extern void cifs_close_deferred_file_under_dentry(struct cifs_tcon *cifs_tcon,
+ const char *path);
+
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -579,8 +579,6 @@ struct cifsFileInfo *cifs_new_fileinfo(s
+ mutex_init(&cfile->fh_mutex);
+ spin_lock_init(&cfile->file_info_lock);
+
+- cifs_sb_active(inode->i_sb);
+-
+ /*
+ * If the server returned a read oplock and we have mandatory brlocks,
+ * set oplock level to None.
+@@ -635,7 +633,6 @@ static void cifsFileInfo_put_final(struc
+ struct inode *inode = d_inode(cifs_file->dentry);
+ struct cifsInodeInfo *cifsi = CIFS_I(inode);
+ struct cifsLockInfo *li, *tmp;
+- struct super_block *sb = inode->i_sb;
+
+ /*
+ * Delete any outstanding lock records. We'll lose them when the file
+@@ -653,7 +650,6 @@ static void cifsFileInfo_put_final(struc
+
+ cifs_put_tlink(cifs_file->tlink);
+ dput(cifs_file->dentry);
+- cifs_sb_deactive(sb);
+ kfree(cifs_file->symlink_target);
+ kfree(cifs_file);
+ }
+@@ -5154,12 +5150,6 @@ void cifs_oplock_break(struct work_struc
+ __u64 persistent_fid, volatile_fid;
+ __u16 net_fid;
+
+- /*
+- * Hold a reference to the superblock to prevent it and its inodes from
+- * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()
+- * may release the last reference to the sb and trigger inode eviction.
+- */
+- cifs_sb_active(sb);
+ wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+ TASK_UNINTERRUPTIBLE);
+
+@@ -5232,7 +5222,6 @@ oplock_break_ack:
+ cifs_put_tlink(tlink);
+ out:
+ cifs_done_oplock_break(cinode);
+- cifs_sb_deactive(sb);
+ }
+
+ /*
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -27,6 +27,11 @@
+ #include "fs_context.h"
+ #include "cached_dir.h"
+
++struct tcon_list {
++ struct list_head entry;
++ struct cifs_tcon *tcon;
++};
++
+ /* The xid serves as a useful identifier for each incoming vfs request,
+ in a similar way to the mid which is useful to track each sent smb,
+ and CurrentXid can also provide a running counter (although it
+@@ -831,6 +836,44 @@ cifs_close_all_deferred_files(struct cif
+ kfree(tmp_list);
+ }
+ }
++
++void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb)
++{
++ struct rb_root *root = &cifs_sb->tlink_tree;
++ struct rb_node *node;
++ struct cifs_tcon *tcon;
++ struct tcon_link *tlink;
++ struct tcon_list *tmp_list, *q;
++ LIST_HEAD(tcon_head);
++
++ spin_lock(&cifs_sb->tlink_tree_lock);
++ for (node = rb_first(root); node; node = rb_next(node)) {
++ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
++ tcon = tlink_tcon(tlink);
++ if (IS_ERR(tcon))
++ continue;
++ tmp_list = kmalloc(sizeof(struct tcon_list), GFP_ATOMIC);
++ if (tmp_list == NULL)
++ break;
++ tmp_list->tcon = tcon;
++ /* Take a reference on tcon to prevent it from being freed */
++ spin_lock(&tcon->tc_lock);
++ ++tcon->tc_count;
++ trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
++ netfs_trace_tcon_ref_get_close_defer_files);
++ spin_unlock(&tcon->tc_lock);
++ list_add_tail(&tmp_list->entry, &tcon_head);
++ }
++ spin_unlock(&cifs_sb->tlink_tree_lock);
++
++ list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) {
++ cifs_close_all_deferred_files(tmp_list->tcon);
++ list_del(&tmp_list->entry);
++ cifs_put_tcon(tmp_list->tcon, netfs_trace_tcon_ref_put_close_defer_files);
++ kfree(tmp_list);
++ }
++}
++
+ void
+ cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon, const char *path)
+ {
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -30,6 +30,7 @@
+ EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \
+ EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \
+ EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \
++ EM(netfs_trace_tcon_ref_get_close_defer_files, "GET Cl-Def") \
+ EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \
+ EM(netfs_trace_tcon_ref_get_find, "GET Find ") \
+ EM(netfs_trace_tcon_ref_get_find_sess_tcon, "GET FndSes") \
+@@ -41,6 +42,7 @@
+ EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
+ EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \
++ EM(netfs_trace_tcon_ref_put_close_defer_files, "PUT Cl-Def") \
+ EM(netfs_trace_tcon_ref_put_mnt_ctx, "PUT MntCtx") \
+ EM(netfs_trace_tcon_ref_put_dfs_refer, "PUT DfsRfr") \
+ EM(netfs_trace_tcon_ref_put_reconnect_server, "PUT Reconn") \
--- /dev/null
+From stable+bounces-227196-greg=kroah.com@vger.kernel.org Thu Mar 19 02:07:41 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 21:07:36 -0400
+Subject: crypto: atmel-sha204a - Fix OOM ->tfm_count leak
+To: stable@vger.kernel.org
+Cc: Thorsten Blum <thorsten.blum@linux.dev>, Herbert Xu <herbert@gondor.apana.org.au>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319010736.1868348-1-sashal@kernel.org>
+
+From: Thorsten Blum <thorsten.blum@linux.dev>
+
+[ Upstream commit d240b079a37e90af03fd7dfec94930eb6c83936e ]
+
+If memory allocation fails, decrement ->tfm_count to avoid blocking
+future reads.
+
+Cc: stable@vger.kernel.org
+Fixes: da001fb651b0 ("crypto: atmel-i2c - add support for SHA204A random number generator")
+Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+[ adapted kmalloc_obj() macro to kmalloc(sizeof()) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/crypto/atmel-sha204a.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/crypto/atmel-sha204a.c
++++ b/drivers/crypto/atmel-sha204a.c
+@@ -52,9 +52,10 @@ static int atmel_sha204a_rng_read_nonblo
+ rng->priv = 0;
+ } else {
+ work_data = kmalloc(sizeof(*work_data), GFP_ATOMIC);
+- if (!work_data)
++ if (!work_data) {
++ atomic_dec(&i2c_priv->tfm_count);
+ return -ENOMEM;
+-
++ }
+ work_data->ctx = i2c_priv;
+ work_data->client = i2c_priv->client;
+
--- /dev/null
+From stable+bounces-223670-greg=kroah.com@vger.kernel.org Mon Mar 9 15:11:54 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 10:11:36 -0400
+Subject: drm/amd/display: Use GFP_ATOMIC in dc_create_stream_for_sink
+To: stable@vger.kernel.org
+Cc: Natalie Vock <natalie.vock@gmx.de>, Alex Deucher <alexander.deucher@amd.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309141136.1105798-1-sashal@kernel.org>
+
+From: Natalie Vock <natalie.vock@gmx.de>
+
+[ Upstream commit 28dfe4317541e57fe52f9a290394cd29c348228b ]
+
+This can be called while preemption is disabled, for example by
+dcn32_internal_validate_bw which is called with the FPU active.
+
+Fixes "BUG: scheduling while atomic" messages I encounter on my Navi31
+machine.
+
+Signed-off-by: Natalie Vock <natalie.vock@gmx.de>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+(cherry picked from commit b42dae2ebc5c84a68de63ec4ffdfec49362d53f1)
+Cc: stable@vger.kernel.org
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/display/dc/core/dc_stream.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -164,7 +164,7 @@ struct dc_stream_state *dc_create_stream
+ if (sink == NULL)
+ return NULL;
+
+- stream = kzalloc(sizeof(struct dc_stream_state), GFP_KERNEL);
++ stream = kzalloc(sizeof(struct dc_stream_state), GFP_ATOMIC);
+ if (stream == NULL)
+ goto alloc_fail;
+
--- /dev/null
+From stable+bounces-227115-greg=kroah.com@vger.kernel.org Wed Mar 18 17:36:59 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 12:10:34 -0400
+Subject: drm/bridge: ti-sn65dsi83: halve horizontal syncs for dual LVDS output
+To: stable@vger.kernel.org
+Cc: Luca Ceresoli <luca.ceresoli@bootlin.com>, Marek Vasut <marek.vasut@mailbox.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318161034.907691-1-sashal@kernel.org>
+
+From: Luca Ceresoli <luca.ceresoli@bootlin.com>
+
+[ Upstream commit d0d727746944096a6681dc6adb5f123fc5aa018d ]
+
+Dual LVDS output (available on the SN65DSI84) requires HSYNC_PULSE_WIDTH
+and HORIZONTAL_BACK_PORCH to be divided by two with respect to the values
+used for single LVDS output.
+
+While not clearly stated in the datasheet, this is needed according to the
+DSI Tuner [0] output. It also makes sense intuitively because in dual LVDS
+output two pixels at a time are output and so the output clock is half of
+the pixel clock.
+
+Some dual-LVDS panels refuse to show any picture without this fix.
+
+Also divide HORIZONTAL_FRONT_PORCH by two, even though this register is used
+only for test pattern generation, which is not currently implemented by this
+driver.
+
+[0] https://www.ti.com/tool/DSI-TUNER
+
+Fixes: ceb515ba29ba ("drm/bridge: ti-sn65dsi83: Add TI SN65DSI83 and SN65DSI84 driver")
+Cc: stable@vger.kernel.org
+Reviewed-by: Marek Vasut <marek.vasut@mailbox.org>
+Link: https://patch.msgid.link/20260226-ti-sn65dsi83-dual-lvds-fixes-and-test-pattern-v1-2-2e15f5a9a6a0@bootlin.com
+Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
+[ adapted variable declaration placement ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/bridge/ti-sn65dsi83.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -325,6 +325,7 @@ static void sn65dsi83_atomic_pre_enable(
+ struct drm_bridge_state *old_bridge_state)
+ {
+ struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
++ const unsigned int dual_factor = ctx->lvds_dual_link ? 2 : 1;
+ struct drm_atomic_state *state = old_bridge_state->base.state;
+ const struct drm_bridge_state *bridge_state;
+ const struct drm_crtc_state *crtc_state;
+@@ -452,18 +453,18 @@ static void sn65dsi83_atomic_pre_enable(
+ /* 32 + 1 pixel clock to ensure proper operation */
+ le16val = cpu_to_le16(32 + 1);
+ regmap_bulk_write(ctx->regmap, REG_VID_CHA_SYNC_DELAY_LOW, &le16val, 2);
+- le16val = cpu_to_le16(mode->hsync_end - mode->hsync_start);
++ le16val = cpu_to_le16((mode->hsync_end - mode->hsync_start) / dual_factor);
+ regmap_bulk_write(ctx->regmap, REG_VID_CHA_HSYNC_PULSE_WIDTH_LOW,
+ &le16val, 2);
+ le16val = cpu_to_le16(mode->vsync_end - mode->vsync_start);
+ regmap_bulk_write(ctx->regmap, REG_VID_CHA_VSYNC_PULSE_WIDTH_LOW,
+ &le16val, 2);
+ regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_BACK_PORCH,
+- mode->htotal - mode->hsync_end);
++ (mode->htotal - mode->hsync_end) / dual_factor);
+ regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_BACK_PORCH,
+ mode->vtotal - mode->vsync_end);
+ regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_FRONT_PORCH,
+- mode->hsync_start - mode->hdisplay);
++ (mode->hsync_start - mode->hdisplay) / dual_factor);
+ regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_FRONT_PORCH,
+ mode->vsync_start - mode->vdisplay);
+ regmap_write(ctx->regmap, REG_VID_CHA_TEST_PATTERN, 0x00);
--- /dev/null
+From stable+bounces-227106-greg=kroah.com@vger.kernel.org Wed Mar 18 16:58:25 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 11:48:37 -0400
+Subject: drm/msm: Fix dma_free_attrs() buffer size
+To: stable@vger.kernel.org
+Cc: Thomas Fourier <fourier.thomas@gmail.com>, Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>, Rob Clark <robin.clark@oss.qualcomm.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318154837.868095-1-sashal@kernel.org>
+
+From: Thomas Fourier <fourier.thomas@gmail.com>
+
+[ Upstream commit e4eb6e4dd6348dd00e19c2275e3fbaed304ca3bd ]
+
+The gpummu->table buffer is alloc'd with size TABLE_SIZE + 32 in
+a2xx_gpummu_new() but freed with size TABLE_SIZE in
+a2xx_gpummu_destroy().
+
+Change the free size to match the allocation.
+
+Fixes: c2052a4e5c99 ("drm/msm: implement a2xx mmu")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com>
+Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
+Patchwork: https://patchwork.freedesktop.org/patch/707340/
+Message-ID: <20260226095714.12126-2-fourier.thomas@gmail.com>
+Signed-off-by: Rob Clark <robin.clark@oss.qualcomm.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/msm/msm_gpummu.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/msm/msm_gpummu.c
++++ b/drivers/gpu/drm/msm/msm_gpummu.c
+@@ -76,7 +76,7 @@ static void msm_gpummu_destroy(struct ms
+ {
+ struct msm_gpummu *gpummu = to_msm_gpummu(mmu);
+
+- dma_free_attrs(mmu->dev, TABLE_SIZE, gpummu->table, gpummu->pt_base,
++ dma_free_attrs(mmu->dev, TABLE_SIZE + 32, gpummu->table, gpummu->pt_base,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+
+ kfree(gpummu);
--- /dev/null
+From stable+bounces-223511-greg=kroah.com@vger.kernel.org Mon Mar 9 09:33:17 2026
+From: Robert Garcia <rob_garcia@163.com>
+Date: Mon, 9 Mar 2026 16:32:27 +0800
+Subject: f2fs: fix to avoid migrating empty section
+To: stable@vger.kernel.org, Chao Yu <chao@kernel.org>
+Cc: Jaegeuk Kim <jaegeuk@kernel.org>, Daeho Jeong <daehojeong@google.com>, Robert Garcia <rob_garcia@163.com>, linux-f2fs-devel@lists.sourceforge.net, linux-kernel@vger.kernel.org
+Message-ID: <20260309083227.3241109-1-rob_garcia@163.com>
+
+From: Chao Yu <chao@kernel.org>
+
+[ Upstream commit d625a2b08c089397d3a03bff13fa8645e4ec7a01 ]
+
+It reports a bug from device w/ zufs:
+
+F2FS-fs (dm-64): Inconsistent segment (173822) type [1, 0] in SSA and SIT
+F2FS-fs (dm-64): Stopped filesystem due to reason: 4
+
+Thread A Thread B
+- f2fs_expand_inode_data
+ - f2fs_allocate_pinning_section
+ - f2fs_gc_range
+ - do_garbage_collect w/ segno #x
+ - writepage
+ - f2fs_allocate_data_block
+ - new_curseg
+ - allocate segno #x
+
+The root cause: fallocate() on a pinning file may race w/ block allocation
+as above, so do_garbage_collect() called from fallocate() may migrate a
+segment which was just allocated by a log. The log updates the segment type
+in its in-memory structure, but GC reads the segment type from the on-disk
+SSA block; once the segment type is changed by the log, we detect the
+inconsistency and shut down the filesystem.
+
+In this case, on-disk SSA shows type of segno #173822 is 1 (SUM_TYPE_NODE),
+however segno #173822 was just allocated as data type segment, so in-memory
+SIT shows type of segno #173822 is 0 (SUM_TYPE_DATA).
+
+Fix this issue as below:
+- check whether the current section is empty before GC
+- add sanity checks in do_garbage_collect() to avoid any race case that
+results in migrating a segment used by a log
+- also fix a misc issue in the printed log: "SSA and SIT" -> "SIT and SSA".
+
+Fixes: 9703d69d9d15 ("f2fs: support file pinning for zoned devices")
+Cc: Daeho Jeong <daehojeong@google.com>
+Signed-off-by: Chao Yu <chao@kernel.org>
+Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
+[ Use IS_CURSEC instead of is_cursec according to
+commit c1cfc87e49525 ("f2fs: introduce is_cur{seg,sec}()"). ]
+Signed-off-by: Robert Garcia <rob_garcia@163.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/f2fs/gc.c | 16 +++++++++++++++-
+ 1 file changed, 15 insertions(+), 1 deletion(-)
+
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1742,6 +1742,13 @@ static int do_garbage_collect(struct f2f
+ GET_SUM_BLOCK(sbi, segno));
+ f2fs_put_page(sum_page, 0);
+
++ if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno))) {
++ f2fs_err(sbi, "%s: segment %u is used by log",
++ __func__, segno);
++ f2fs_bug_on(sbi, 1);
++ goto skip;
++ }
++
+ if (get_valid_blocks(sbi, segno, false) == 0)
+ goto freed;
+ if (gc_type == BG_GC && __is_large_section(sbi) &&
+@@ -1752,7 +1759,7 @@ static int do_garbage_collect(struct f2f
+
+ sum = page_address(sum_page);
+ if (type != GET_SUM_TYPE((&sum->footer))) {
+- f2fs_err(sbi, "Inconsistent segment (%u) type [%d, %d] in SSA and SIT",
++ f2fs_err(sbi, "Inconsistent segment (%u) type [%d, %d] in SIT and SSA",
+ segno, type, GET_SUM_TYPE((&sum->footer)));
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+ f2fs_stop_checkpoint(sbi, false,
+@@ -2005,6 +2012,13 @@ int f2fs_gc_range(struct f2fs_sb_info *s
+ .iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
+ };
+
++ /*
++ * avoid migrating empty section, as it can be allocated by
++ * log in parallel.
++ */
++ if (!get_valid_blocks(sbi, segno, true))
++ continue;
++
+ do_garbage_collect(sbi, segno, &gc_list, FG_GC,
+ dry_run_sections == 0);
+ put_gc_inode(&gc_list);
--- /dev/null
+From stable+bounces-216898-greg=kroah.com@vger.kernel.org Tue Feb 17 20:52:23 2026
+From: Joshua Washington <joshwash@google.com>
+Date: Tue, 17 Feb 2026 11:52:07 -0800
+Subject: gve: defer interrupt enabling until NAPI registration
+To: stable@vger.kernel.org
+Cc: Ankit Garg <nktgrg@google.com>, Jordan Rhee <jordanrhee@google.com>, Harshitha Ramamurthy <hramamurthy@google.com>, Paolo Abeni <pabeni@redhat.com>, Joshua Washington <joshwash@google.com>
+Message-ID: <20260217195207.1449764-4-joshwash@google.com>
+
+From: Ankit Garg <nktgrg@google.com>
+
+[ Upstream commit 3d970eda003441f66551a91fda16478ac0711617 ]
+
+Currently, interrupts are automatically enabled immediately upon
+request. This allows an interrupt to fire before the associated NAPI
+context is fully initialized and cause failures like the one below:
+
+[ 0.946369] Call Trace:
+[ 0.946369] <IRQ>
+[ 0.946369] __napi_poll+0x2a/0x1e0
+[ 0.946369] net_rx_action+0x2f9/0x3f0
+[ 0.946369] handle_softirqs+0xd6/0x2c0
+[ 0.946369] ? handle_edge_irq+0xc1/0x1b0
+[ 0.946369] __irq_exit_rcu+0xc3/0xe0
+[ 0.946369] common_interrupt+0x81/0xa0
+[ 0.946369] </IRQ>
+[ 0.946369] <TASK>
+[ 0.946369] asm_common_interrupt+0x22/0x40
+[ 0.946369] RIP: 0010:pv_native_safe_halt+0xb/0x10
+
+Use the `IRQF_NO_AUTOEN` flag when requesting interrupts to prevent auto
+enablement and explicitly enable the interrupt in NAPI initialization
+path (and disable it during NAPI teardown).
+
+This ensures that interrupt lifecycle is strictly coupled with
+readiness of NAPI context.
+
+Cc: stable@vger.kernel.org
+Fixes: 893ce44df565 ("gve: Add basic driver framework for Compute Engine Virtual NIC")
+Signed-off-by: Ankit Garg <nktgrg@google.com>
+Reviewed-by: Jordan Rhee <jordanrhee@google.com>
+Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
+Link: https://patch.msgid.link/20251219102945.2193617-1-hramamurthy@google.com
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+[ modified to re-introduce the irq member to struct gve_notify_block,
+ which was introduced in commit 9a5e0776d11f ("gve: Avoid rescheduling
+ napi if on wrong cpu"). ]
+Signed-off-by: Joshua Washington <joshwash@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/google/gve/gve.h | 1 +
+ drivers/net/ethernet/google/gve/gve_main.c | 5 ++++-
+ 2 files changed, 5 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -585,6 +585,7 @@ struct gve_notify_block {
+ struct gve_priv *priv;
+ struct gve_tx_ring *tx; /* tx rings on this block */
+ struct gve_rx_ring *rx; /* rx rings on this block */
++ u32 irq;
+ };
+
+ /* Tracks allowed and current queue settings */
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -407,9 +407,10 @@ static int gve_alloc_notify_blocks(struc
+ snprintf(block->name, sizeof(block->name), "gve-ntfy-blk%d@pci:%s",
+ i, pci_name(priv->pdev));
+ block->priv = priv;
++ block->irq = priv->msix_vectors[msix_idx].vector;
+ err = request_irq(priv->msix_vectors[msix_idx].vector,
+ gve_is_gqi(priv) ? gve_intr : gve_intr_dqo,
+- 0, block->name, block);
++ IRQF_NO_AUTOEN, block->name, block);
+ if (err) {
+ dev_err(&priv->pdev->dev,
+ "Failed to receive msix vector %d\n", i);
+@@ -575,6 +576,7 @@ static void gve_add_napi(struct gve_priv
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ netif_napi_add(priv->dev, &block->napi, gve_poll);
++ enable_irq(block->irq);
+ }
+
+ static void gve_remove_napi(struct gve_priv *priv, int ntfy_idx)
+@@ -582,6 +584,7 @@ static void gve_remove_napi(struct gve_p
+ struct gve_notify_block *block = &priv->ntfy_blocks[ntfy_idx];
+
+ netif_napi_del(&block->napi);
++ disable_irq(block->irq);
+ }
+
+ static int gve_register_xdp_qpls(struct gve_priv *priv)
--- /dev/null
+From stable+bounces-223636-greg=kroah.com@vger.kernel.org Mon Mar 9 14:05:20 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 09:00:29 -0400
+Subject: gve: fix incorrect buffer cleanup in gve_tx_clean_pending_packets for QPL
+To: stable@vger.kernel.org
+Cc: Ankit Garg <nktgrg@google.com>, Jordan Rhee <jordanrhee@google.com>, Harshitha Ramamurthy <hramamurthy@google.com>, Joshua Washington <joshwash@google.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309130029.867834-1-sashal@kernel.org>
+
+From: Ankit Garg <nktgrg@google.com>
+
+[ Upstream commit fb868db5f4bccd7a78219313ab2917429f715cea ]
+
+In DQO-QPL mode, gve_tx_clean_pending_packets() incorrectly uses the RDA
+buffer cleanup path. It iterates num_bufs times and attempts to unmap
+entries in the dma array.
+
+This leads to two issues:
+1. The dma array shares storage with tx_qpl_buf_ids (union).
+ Interpreting buffer IDs as DMA addresses results in attempting to
+ unmap incorrect memory locations.
+2. num_bufs in QPL mode (counting 2K chunks) can significantly exceed
+ the size of the dma array, causing out-of-bounds access warnings
+(trace below is how we noticed this issue).
+
+UBSAN: array-index-out-of-bounds in
+drivers/net/ethernet/google/gve/gve_tx_dqo.c:178:5 index 18 is out of
+range for type 'dma_addr_t[18]' (aka 'unsigned long long[18]')
+Workqueue: gve gve_service_task [gve]
+Call Trace:
+<TASK>
+dump_stack_lvl+0x33/0xa0
+__ubsan_handle_out_of_bounds+0xdc/0x110
+gve_tx_stop_ring_dqo+0x182/0x200 [gve]
+gve_close+0x1be/0x450 [gve]
+gve_reset+0x99/0x120 [gve]
+gve_service_task+0x61/0x100 [gve]
+process_scheduled_works+0x1e9/0x380
+
+Fix this by properly checking for QPL mode and delegating to
+gve_free_tx_qpl_bufs() to reclaim the buffers.
+
+Cc: stable@vger.kernel.org
+Fixes: a6fb8d5a8b69 ("gve: Tx path for DQO-QPL")
+Signed-off-by: Ankit Garg <nktgrg@google.com>
+Reviewed-by: Jordan Rhee <jordanrhee@google.com>
+Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com>
+Signed-off-by: Joshua Washington <joshwash@google.com>
+Reviewed-by: Simon Horman <horms@kernel.org>
+Link: https://patch.msgid.link/20260220215324.1631350-1-joshwash@google.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ netmem_dma_unmap_page_attrs() => dma_unmap_page() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/google/gve/gve_tx_dqo.c | 52 +++++++++++----------------
+ 1 file changed, 23 insertions(+), 29 deletions(-)
+
+--- a/drivers/net/ethernet/google/gve/gve_tx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_tx_dqo.c
+@@ -157,6 +157,24 @@ gve_free_pending_packet(struct gve_tx_ri
+ }
+ }
+
++static void gve_unmap_packet(struct device *dev,
++ struct gve_tx_pending_packet_dqo *pkt)
++{
++ int i;
++
++ if (!pkt->num_bufs)
++ return;
++
++ /* SKB linear portion is guaranteed to be mapped */
++ dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
++ dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
++ for (i = 1; i < pkt->num_bufs; i++) {
++ dma_unmap_page(dev, dma_unmap_addr(pkt, dma[i]),
++ dma_unmap_len(pkt, len[i]), DMA_TO_DEVICE);
++ }
++ pkt->num_bufs = 0;
++}
++
+ /* gve_tx_free_desc - Cleans up all pending tx requests and buffers.
+ */
+ static void gve_tx_clean_pending_packets(struct gve_tx_ring *tx)
+@@ -166,21 +184,12 @@ static void gve_tx_clean_pending_packets
+ for (i = 0; i < tx->dqo.num_pending_packets; i++) {
+ struct gve_tx_pending_packet_dqo *cur_state =
+ &tx->dqo.pending_packets[i];
+- int j;
+
+- for (j = 0; j < cur_state->num_bufs; j++) {
+- if (j == 0) {
+- dma_unmap_single(tx->dev,
+- dma_unmap_addr(cur_state, dma[j]),
+- dma_unmap_len(cur_state, len[j]),
+- DMA_TO_DEVICE);
+- } else {
+- dma_unmap_page(tx->dev,
+- dma_unmap_addr(cur_state, dma[j]),
+- dma_unmap_len(cur_state, len[j]),
+- DMA_TO_DEVICE);
+- }
+- }
++ if (tx->dqo.qpl)
++ gve_free_tx_qpl_bufs(tx, cur_state);
++ else
++ gve_unmap_packet(tx->dev, cur_state);
++
+ if (cur_state->skb) {
+ dev_consume_skb_any(cur_state->skb);
+ cur_state->skb = NULL;
+@@ -992,21 +1001,6 @@ static void remove_from_list(struct gve_
+ }
+ }
+
+-static void gve_unmap_packet(struct device *dev,
+- struct gve_tx_pending_packet_dqo *pkt)
+-{
+- int i;
+-
+- /* SKB linear portion is guaranteed to be mapped */
+- dma_unmap_single(dev, dma_unmap_addr(pkt, dma[0]),
+- dma_unmap_len(pkt, len[0]), DMA_TO_DEVICE);
+- for (i = 1; i < pkt->num_bufs; i++) {
+- dma_unmap_page(dev, dma_unmap_addr(pkt, dma[i]),
+- dma_unmap_len(pkt, len[i]), DMA_TO_DEVICE);
+- }
+- pkt->num_bufs = 0;
+-}
+-
+ /* Completion types and expected behavior:
+ * No Miss compl + Packet compl = Packet completed normally.
+ * Miss compl + Re-inject compl = Packet completed normally.
--- /dev/null
+From stable+bounces-226935-greg=kroah.com@vger.kernel.org Wed Mar 18 01:46:09 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 17 Mar 2026 20:46:03 -0400
+Subject: iomap: reject delalloc mappings during writeback
+To: stable@vger.kernel.org
+Cc: "Darrick J. Wong" <djwong@kernel.org>, Christoph Hellwig <hch@lst.de>, Carlos Maiolino <cmaiolino@redhat.com>, Christian Brauner <brauner@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318004603.406498-1-sashal@kernel.org>
+
+From: "Darrick J. Wong" <djwong@kernel.org>
+
+[ Upstream commit d320f160aa5ff36cdf83c645cca52b615e866e32 ]
+
+Filesystems should never provide a delayed allocation mapping to
+writeback; they're supposed to allocate the space before replying.
+This can lead to weird IO errors and crashes in the block layer if the
+filesystem is being malicious, or if it hadn't set iomap->dev because
+it's a delalloc mapping.
+
+Fix this by failing writeback on delalloc mappings. Currently no
+filesystems actually misbehave in this manner, but we ought to be
+stricter about things like that.
+
+Cc: stable@vger.kernel.org # v5.5
+Fixes: 598ecfbaa742ac ("iomap: lift the xfs writeback code to iomap")
+Signed-off-by: Darrick J. Wong <djwong@kernel.org>
+Link: https://patch.msgid.link/20260302173002.GL13829@frogsfrogsfrogs
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
+Signed-off-by: Christian Brauner <brauner@kernel.org>
+[ no ioend.c ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/iomap/buffered-io.c | 15 ++++++++++++---
+ 1 file changed, 12 insertions(+), 3 deletions(-)
+
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1838,10 +1838,19 @@ iomap_writepage_map(struct iomap_writepa
+ if (error)
+ break;
+ trace_iomap_writepage_map(inode, &wpc->iomap);
+- if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
+- continue;
+- if (wpc->iomap.type == IOMAP_HOLE)
++ switch (wpc->iomap.type) {
++ case IOMAP_UNWRITTEN:
++ case IOMAP_MAPPED:
++ break;
++ case IOMAP_HOLE:
+ continue;
++ default:
++ WARN_ON_ONCE(1);
++ error = -EIO;
++ break;
++ }
++ if (error)
++ break;
+ iomap_add_to_ioend(inode, pos, folio, ifs, wpc, wbc,
+ &submit_list);
+ count++;
--- /dev/null
+From stable+bounces-223719-greg=kroah.com@vger.kernel.org Mon Mar 9 19:56:45 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 14:55:35 -0400
+Subject: kbuild: Leave objtool binary around with 'make clean'
+To: stable@vger.kernel.org
+Cc: Nathan Chancellor <nathan@kernel.org>, Michal Suchanek <msuchanek@suse.de>, Rainer Fiebig <jrf@mailbox.org>, Josh Poimboeuf <jpoimboe@kernel.org>, "Peter Zijlstra (Intel)" <peterz@infradead.org>, Nicolas Schier <nsc@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309185535.1355869-1-sashal@kernel.org>
+
+From: Nathan Chancellor <nathan@kernel.org>
+
+[ Upstream commit fdb12c8a24a453bdd6759979b6ef1e04ebd4beb4 ]
+
+The difference between 'make clean' and 'make mrproper' is documented in
+'make help' as:
+
+ clean - Remove most generated files but keep the config and
+ enough build support to build external modules
+ mrproper - Remove all generated files + config + various backup files
+
+After commit 68b4fe32d737 ("kbuild: Add objtool to top-level clean
+target"), running 'make clean' then attempting to build an external
+module with the resulting build directory fails with
+
+ $ make ARCH=x86_64 O=build clean
+
+ $ make -C build M=... MO=...
+ ...
+ /bin/sh: line 1: .../build/tools/objtool/objtool: No such file or directory
+
+as 'make clean' removes the objtool binary.
+
+Split the objtool clean target into mrproper and clean like Kbuild does
+and remove all generated artifacts with 'make clean' except for the
+objtool binary, which is removed with 'make mrproper'. To avoid a small
+race when running the objtool clean target through both objtool_mrproper
+and objtool_clean when running 'make mrproper', modify objtool's clean
+up find command to avoid using find's '-delete' command by piping the
+files into 'xargs rm -f' like the rest of Kbuild does.
+
+Cc: stable@vger.kernel.org
+Fixes: 68b4fe32d737 ("kbuild: Add objtool to top-level clean target")
+Reported-by: Michal Suchanek <msuchanek@suse.de>
+Closes: https://lore.kernel.org/20260225112633.6123-1-msuchanek@suse.de/
+Reported-by: Rainer Fiebig <jrf@mailbox.org>
+Closes: https://lore.kernel.org/62d12399-76e5-3d40-126a-7490b4795b17@mailbox.org/
+Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
+Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Reviewed-by: Nicolas Schier <nsc@kernel.org>
+Tested-by: Nicolas Schier <nsc@kernel.org>
+Link: https://patch.msgid.link/20260227-avoid-objtool-binary-removal-clean-v1-1-122f3e55eae9@kernel.org
+Signed-off-by: Nathan Chancellor <nathan@kernel.org>
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Makefile | 8 ++++----
+ tools/objtool/Makefile | 8 +++++---
+ 2 files changed, 9 insertions(+), 7 deletions(-)
+
+--- a/Makefile
++++ b/Makefile
+@@ -1356,13 +1356,13 @@ ifneq ($(wildcard $(resolve_btfids_O)),)
+ $(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
+ endif
+
+-PHONY += objtool_clean
++PHONY += objtool_clean objtool_mrproper
+
+ objtool_O = $(abspath $(objtree))/tools/objtool
+
+-objtool_clean:
++objtool_clean objtool_mrproper:
+ ifneq ($(wildcard $(objtool_O)),)
+- $(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) clean
++ $(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) $(patsubst objtool_%,%,$@)
+ endif
+
+ tools/: FORCE
+@@ -1529,7 +1529,7 @@ PHONY += $(mrproper-dirs) mrproper
+ $(mrproper-dirs):
+ $(Q)$(MAKE) $(clean)=$(patsubst _mrproper_%,%,$@)
+
+-mrproper: clean $(mrproper-dirs)
++mrproper: clean objtool_mrproper $(mrproper-dirs)
+ $(call cmd,rmfiles)
+ @find . $(RCS_FIND_IGNORE) \
+ \( -name '*.rmeta' \) \
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -87,10 +87,12 @@ $(LIBSUBCMD)-clean:
+ $(Q)$(RM) -r -- $(LIBSUBCMD_OUTPUT)
+
+ clean: $(LIBSUBCMD)-clean
+- $(call QUIET_CLEAN, objtool) $(RM) $(OBJTOOL)
+- $(Q)find $(OUTPUT) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
++ $(Q)find $(OUTPUT) \( -name '*.o' -o -name '\.*.cmd' -o -name '\.*.d' \) -type f -print | xargs $(RM)
+ $(Q)$(RM) $(OUTPUT)arch/x86/lib/inat-tables.c $(OUTPUT)fixdep
+
++mrproper: clean
++ $(call QUIET_CLEAN, objtool) $(RM) $(OBJTOOL)
++
+ FORCE:
+
+-.PHONY: clean FORCE
++.PHONY: clean mrproper FORCE
--- /dev/null
+From stable+bounces-219131-greg=kroah.com@vger.kernel.org Wed Feb 25 03:20:58 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 24 Feb 2026 21:20:17 -0500
+Subject: ksmbd: call ksmbd_vfs_kern_path_end_removing() on some error paths
+To: stable@vger.kernel.org
+Cc: Fedor Pchelkin <pchelkin@ispras.ru>, Namjae Jeon <linkinjeon@kernel.org>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260225022017.3800187-1-sashal@kernel.org>
+
+From: Fedor Pchelkin <pchelkin@ispras.ru>
+
+[ Upstream commit a09dc10d1353f0e92c21eae2a79af1c2b1ddcde8 ]
+
+There are two places where ksmbd_vfs_kern_path_end_removing() needs to be
+called in order to balance what the corresponding successful call to
+ksmbd_vfs_kern_path_start_removing() has done, i.e. drop inode locks and
+put the taken references. Otherwise there might be potential deadlocks
+and unbalanced locks which are caught like:
+
+BUG: workqueue leaked lock or atomic: kworker/5:21/0x00000000/7596
+ last function: handle_ksmbd_work
+2 locks held by kworker/5:21/7596:
+ #0: ffff8881051ae448 (sb_writers#3){.+.+}-{0:0}, at: ksmbd_vfs_kern_path_locked+0x142/0x660
+ #1: ffff888130e966c0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: ksmbd_vfs_kern_path_locked+0x17d/0x660
+CPU: 5 PID: 7596 Comm: kworker/5:21 Not tainted 6.1.162-00456-gc29b353f383b #138
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.17.0-debian-1.17.0-1 04/01/2014
+Workqueue: ksmbd-io handle_ksmbd_work
+Call Trace:
+ <TASK>
+ dump_stack_lvl+0x44/0x5b
+ process_one_work.cold+0x57/0x5c
+ worker_thread+0x82/0x600
+ kthread+0x153/0x190
+ ret_from_fork+0x22/0x30
+ </TASK>
+
+Found by Linux Verification Center (linuxtesting.org).
+
+Fixes: d5fc1400a34b ("smb/server: avoid deadlock when linking with ReplaceIfExists")
+Cc: stable@vger.kernel.org
+Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
+Acked-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ ksmbd_vfs_kern_path_end_removing() call -> ksmbd_vfs_kern_path_unlock() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/smb2pdu.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -6067,14 +6067,14 @@ static int smb2_create_link(struct ksmbd
+ rc = -EINVAL;
+ ksmbd_debug(SMB, "cannot delete %s\n",
+ link_name);
+- goto out;
+ }
+ } else {
+ rc = -EEXIST;
+ ksmbd_debug(SMB, "link already exists\n");
+- goto out;
+ }
+ ksmbd_vfs_kern_path_unlock(&parent_path, &path);
++ if (rc)
++ goto out;
+ }
+ rc = ksmbd_vfs_link(work, target_name, link_name);
+ if (rc)
--- /dev/null
+From stable+bounces-224561-greg=kroah.com@vger.kernel.org Tue Mar 10 20:53:04 2026
+From: Eric Biggers <ebiggers@kernel.org>
+Date: Tue, 10 Mar 2026 12:52:53 -0700
+Subject: ksmbd: Compare MACs in constant time
+To: stable@vger.kernel.org
+Cc: linux-crypto@vger.kernel.org, linux-cifs@vger.kernel.org, Eric Biggers <ebiggers@kernel.org>, Namjae Jeon <linkinjeon@kernel.org>, Steve French <stfrench@microsoft.com>
+Message-ID: <20260310195253.70903-1-ebiggers@kernel.org>
+
+From: Eric Biggers <ebiggers@kernel.org>
+
+commit c5794709bc9105935dbedef8b9cf9c06f2b559fa upstream.
+
+To prevent timing attacks, MAC comparisons need to be constant-time.
+Replace the memcmp() with the correct function, crypto_memneq().
+
+Fixes: e2f34481b24d ("cifsd: add server-side procedures for SMB3")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers <ebiggers@kernel.org>
+Acked-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/Kconfig | 1 +
+ fs/smb/server/auth.c | 4 +++-
+ fs/smb/server/smb2pdu.c | 5 +++--
+ 3 files changed, 7 insertions(+), 3 deletions(-)
+
+--- a/fs/smb/server/Kconfig
++++ b/fs/smb/server/Kconfig
+@@ -11,6 +11,7 @@ config SMB_SERVER
+ select CRYPTO_HMAC
+ select CRYPTO_ECB
+ select CRYPTO_LIB_DES
++ select CRYPTO_LIB_UTILS
+ select CRYPTO_SHA256
+ select CRYPTO_CMAC
+ select CRYPTO_SHA512
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -13,6 +13,7 @@
+ #include <linux/xattr.h>
+ #include <crypto/hash.h>
+ #include <crypto/aead.h>
++#include <crypto/utils.h>
+ #include <linux/random.h>
+ #include <linux/scatterlist.h>
+
+@@ -283,7 +284,8 @@ int ksmbd_auth_ntlmv2(struct ksmbd_conn
+ goto out;
+ }
+
+- if (memcmp(ntlmv2->ntlmv2_hash, ntlmv2_rsp, CIFS_HMAC_MD5_HASH_SIZE) != 0)
++ if (crypto_memneq(ntlmv2->ntlmv2_hash, ntlmv2_rsp,
++ CIFS_HMAC_MD5_HASH_SIZE))
+ rc = -EINVAL;
+ out:
+ if (ctx)
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -4,6 +4,7 @@
+ * Copyright (C) 2018 Samsung Electronics Co., Ltd.
+ */
+
++#include <crypto/utils.h>
+ #include <linux/inetdevice.h>
+ #include <net/addrconf.h>
+ #include <linux/syscalls.h>
+@@ -8804,7 +8805,7 @@ int smb2_check_sign_req(struct ksmbd_wor
+ signature))
+ return 0;
+
+- if (memcmp(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
++ if (crypto_memneq(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
+ pr_err("bad smb2 signature\n");
+ return 0;
+ }
+@@ -8892,7 +8893,7 @@ int smb3_check_sign_req(struct ksmbd_wor
+ if (ksmbd_sign_smb3_pdu(conn, signing_key, iov, 1, signature))
+ return 0;
+
+- if (memcmp(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
++ if (crypto_memneq(signature, signature_req, SMB2_SIGNATURE_SIZE)) {
+ pr_err("bad smb2 signature\n");
+ return 0;
+ }
--- /dev/null
+From stable+bounces-227083-greg=kroah.com@vger.kernel.org Wed Mar 18 16:01:30 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 10:41:50 -0400
+Subject: ksmbd: Don't log keys in SMB3 signing and encryption key generation
+To: stable@vger.kernel.org
+Cc: Thorsten Blum <thorsten.blum@linux.dev>, Namjae Jeon <linkinjeon@kernel.org>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318144150.848070-1-sashal@kernel.org>
+
+From: Thorsten Blum <thorsten.blum@linux.dev>
+
+[ Upstream commit 441336115df26b966575de56daf7107ed474faed ]
+
+When KSMBD_DEBUG_AUTH logging is enabled, generate_smb3signingkey() and
+generate_smb3encryptionkey() log the session, signing, encryption, and
+decryption key bytes. Remove the logs to avoid exposing credentials.
+
+Fixes: e2f34481b24d ("cifsd: add server-side procedures for SMB3")
+Cc: stable@vger.kernel.org
+Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
+Acked-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/auth.c | 22 ++--------------------
+ 1 file changed, 2 insertions(+), 20 deletions(-)
+
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -797,12 +797,8 @@ static int generate_smb3signingkey(struc
+ if (!(conn->dialect >= SMB30_PROT_ID && signing->binding))
+ memcpy(chann->smb3signingkey, key, SMB3_SIGN_KEY_SIZE);
+
+- ksmbd_debug(AUTH, "dumping generated AES signing keys\n");
++ ksmbd_debug(AUTH, "generated SMB3 signing key\n");
+ ksmbd_debug(AUTH, "Session Id %llu\n", sess->id);
+- ksmbd_debug(AUTH, "Session Key %*ph\n",
+- SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key);
+- ksmbd_debug(AUTH, "Signing Key %*ph\n",
+- SMB3_SIGN_KEY_SIZE, key);
+ return 0;
+ }
+
+@@ -866,23 +862,9 @@ static int generate_smb3encryptionkey(st
+ if (rc)
+ return rc;
+
+- ksmbd_debug(AUTH, "dumping generated AES encryption keys\n");
++ ksmbd_debug(AUTH, "generated SMB3 encryption/decryption keys\n");
+ ksmbd_debug(AUTH, "Cipher type %d\n", conn->cipher_type);
+ ksmbd_debug(AUTH, "Session Id %llu\n", sess->id);
+- ksmbd_debug(AUTH, "Session Key %*ph\n",
+- SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key);
+- if (conn->cipher_type == SMB2_ENCRYPTION_AES256_CCM ||
+- conn->cipher_type == SMB2_ENCRYPTION_AES256_GCM) {
+- ksmbd_debug(AUTH, "ServerIn Key %*ph\n",
+- SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3encryptionkey);
+- ksmbd_debug(AUTH, "ServerOut Key %*ph\n",
+- SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3decryptionkey);
+- } else {
+- ksmbd_debug(AUTH, "ServerIn Key %*ph\n",
+- SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3encryptionkey);
+- ksmbd_debug(AUTH, "ServerOut Key %*ph\n",
+- SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3decryptionkey);
+- }
+ return 0;
+ }
+
--- /dev/null
+From stable+bounces-225688-greg=kroah.com@vger.kernel.org Mon Mar 16 20:36:50 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 15:36:42 -0400
+Subject: KVM: SVM: Add a helper to look up the max physical ID for AVIC
+To: stable@vger.kernel.org
+Cc: Naveen N Rao <naveen@kernel.org>, Sean Christopherson <seanjc@google.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316193643.1358734-2-sashal@kernel.org>
+
+From: Naveen N Rao <naveen@kernel.org>
+
+[ Upstream commit f2f6e67a56dc88fea7e9b10c4e79bb01d97386b7 ]
+
+To help with a future change, add a helper to look up the maximum
+physical ID depending on the vCPU AVIC mode. No functional change
+intended.
+
+Suggested-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Naveen N Rao (AMD) <naveen@kernel.org>
+Link: https://lore.kernel.org/r/0ab9bf5e20a3463a4aa3a5ea9bbbac66beedf1d1.1757009416.git.naveen@kernel.org
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Stable-dep-of: 87d0f901a9bd ("KVM: SVM: Set/clear CR8 write interception when AVIC is (de)activated")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/avic.c | 26 ++++++++++++++++++++------
+ 1 file changed, 20 insertions(+), 6 deletions(-)
+
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -82,13 +82,31 @@ struct amd_svm_iommu_ir {
+ void *data; /* Storing pointer to struct amd_ir_data */
+ };
+
++static u32 avic_get_max_physical_id(struct kvm_vcpu *vcpu)
++{
++ u32 arch_max;
++
++ if (x2avic_enabled && apic_x2apic_mode(vcpu->arch.apic))
++ arch_max = X2AVIC_MAX_PHYSICAL_ID;
++ else
++ arch_max = AVIC_MAX_PHYSICAL_ID;
++
++ /*
++ * Despite its name, KVM_CAP_MAX_VCPU_ID represents the maximum APIC ID
++ * plus one, so the max possible APIC ID is one less than that.
++ */
++ return min(vcpu->kvm->arch.max_vcpu_ids - 1, arch_max);
++}
++
+ static void avic_activate_vmcb(struct vcpu_svm *svm)
+ {
+ struct vmcb *vmcb = svm->vmcb01.ptr;
+- struct kvm *kvm = svm->vcpu.kvm;
++ struct kvm_vcpu *vcpu = &svm->vcpu;
+
+ vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
++
+ vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
++ vmcb->control.avic_physical_id |= avic_get_max_physical_id(vcpu);
+
+ vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
+
+@@ -101,8 +119,7 @@ static void avic_activate_vmcb(struct vc
+ */
+ if (x2avic_enabled && apic_x2apic_mode(svm->vcpu.arch.apic)) {
+ vmcb->control.int_ctl |= X2APIC_MODE_MASK;
+- vmcb->control.avic_physical_id |= min(kvm->arch.max_vcpu_ids - 1,
+- X2AVIC_MAX_PHYSICAL_ID);
++
+ /* Disabling MSR intercept for x2APIC registers */
+ svm_set_x2apic_msr_interception(svm, false);
+ } else {
+@@ -112,9 +129,6 @@ static void avic_activate_vmcb(struct vc
+ */
+ kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, &svm->vcpu);
+
+- /* For xAVIC and hybrid-xAVIC modes */
+- vmcb->control.avic_physical_id |= min(kvm->arch.max_vcpu_ids - 1,
+- AVIC_MAX_PHYSICAL_ID);
+ /* Enabling MSR intercept for x2APIC registers */
+ svm_set_x2apic_msr_interception(svm, true);
+ }
--- /dev/null
+From stable+bounces-225687-greg=kroah.com@vger.kernel.org Mon Mar 16 20:36:52 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 15:36:41 -0400
+Subject: KVM: SVM: Limit AVIC physical max index based on configured max_vcpu_ids
+To: stable@vger.kernel.org
+Cc: Naveen N Rao <naveen@kernel.org>, Sean Christopherson <seanjc@google.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316193643.1358734-1-sashal@kernel.org>
+
+From: Naveen N Rao <naveen@kernel.org>
+
+[ Upstream commit 574ef752d4aea04134bc121294d717f4422c2755 ]
+
+KVM allows VMMs to specify the maximum possible APIC ID for a virtual
+machine through KVM_CAP_MAX_VCPU_ID capability so as to limit data
+structures related to APIC/x2APIC. Utilize the same to set the AVIC
+physical max index in the VMCB, similar to VMX. This helps hardware
+limit the number of entries to be scanned in the physical APIC ID table
+speeding up IPI broadcasts for virtual machines with smaller number of
+vCPUs.
+
+Unlike VMX, SVM AVIC requires a single page to be allocated for the
+Physical APIC ID table and the Logical APIC ID table, so retain the
+existing approach of allocating those during VM init.
+
+Signed-off-by: Naveen N Rao (AMD) <naveen@kernel.org>
+Link: https://lore.kernel.org/r/adb07ccdb3394cd79cb372ba6bcc69a4e4d4ef54.1757009416.git.naveen@kernel.org
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Stable-dep-of: 87d0f901a9bd ("KVM: SVM: Set/clear CR8 write interception when AVIC is (de)activated")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/avic.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -85,6 +85,7 @@ struct amd_svm_iommu_ir {
+ static void avic_activate_vmcb(struct vcpu_svm *svm)
+ {
+ struct vmcb *vmcb = svm->vmcb01.ptr;
++ struct kvm *kvm = svm->vcpu.kvm;
+
+ vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
+ vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
+@@ -100,7 +101,8 @@ static void avic_activate_vmcb(struct vc
+ */
+ if (x2avic_enabled && apic_x2apic_mode(svm->vcpu.arch.apic)) {
+ vmcb->control.int_ctl |= X2APIC_MODE_MASK;
+- vmcb->control.avic_physical_id |= X2AVIC_MAX_PHYSICAL_ID;
++ vmcb->control.avic_physical_id |= min(kvm->arch.max_vcpu_ids - 1,
++ X2AVIC_MAX_PHYSICAL_ID);
+ /* Disabling MSR intercept for x2APIC registers */
+ svm_set_x2apic_msr_interception(svm, false);
+ } else {
+@@ -111,7 +113,8 @@ static void avic_activate_vmcb(struct vc
+ kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, &svm->vcpu);
+
+ /* For xAVIC and hybrid-xAVIC modes */
+- vmcb->control.avic_physical_id |= AVIC_MAX_PHYSICAL_ID;
++ vmcb->control.avic_physical_id |= min(kvm->arch.max_vcpu_ids - 1,
++ AVIC_MAX_PHYSICAL_ID);
+ /* Enabling MSR intercept for x2APIC registers */
+ svm_set_x2apic_msr_interception(svm, true);
+ }
--- /dev/null
+From stable+bounces-225689-greg=kroah.com@vger.kernel.org Mon Mar 16 20:36:59 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 15:36:43 -0400
+Subject: KVM: SVM: Set/clear CR8 write interception when AVIC is (de)activated
+To: stable@vger.kernel.org
+Cc: Sean Christopherson <seanjc@google.com>, Jim Mattson <jmattson@google.com>, "Naveen N Rao (AMD)" <naveen@kernel.org>, "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>, Paolo Bonzini <pbonzini@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316193643.1358734-3-sashal@kernel.org>
+
+From: Sean Christopherson <seanjc@google.com>
+
+[ Upstream commit 87d0f901a9bd8ae6be57249c737f20ac0cace93d ]
+
+Explicitly set/clear CR8 write interception when AVIC is (de)activated to
+fix a bug where KVM leaves the interception enabled after AVIC is
+activated. E.g. if KVM emulates INIT=>WFS while AVIC is deactivated, CR8
+will remain intercepted in perpetuity.
+
+On its own, the dangling CR8 intercept is "just" a performance issue, but
+combined with the TPR sync bug fixed by commit d02e48830e3f ("KVM: SVM:
+Sync TPR from LAPIC into VMCB::V_TPR even if AVIC is active"), the dangling
+intercept is fatal to Windows guests as the TPR seen by hardware gets
+wildly out of sync with reality.
+
+Note, VMX isn't affected by the bug as TPR_THRESHOLD is explicitly ignored
+when Virtual Interrupt Delivery is enabled, i.e. when APICv is active in
+KVM's world. I.e. there's no need to trigger update_cr8_intercept(), this
+is firmly an SVM implementation flaw/detail.
+
+WARN if KVM gets a CR8 write #VMEXIT while AVIC is active, as KVM should
+never enter the guest with AVIC enabled and CR8 writes intercepted.
+
+Fixes: 3bbf3565f48c ("svm: Do not intercept CR8 when enable AVIC")
+Cc: stable@vger.kernel.org
+Cc: Jim Mattson <jmattson@google.com>
+Cc: Naveen N Rao (AMD) <naveen@kernel.org>
+Cc: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
+Reviewed-by: Naveen N Rao (AMD) <naveen@kernel.org>
+Reviewed-by: Jim Mattson <jmattson@google.com>
+Link: https://patch.msgid.link/20260203190711.458413-3-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+[Squash fix to avic_deactivate_vmcb. - Paolo]
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/avic.c | 7 +++++--
+ arch/x86/kvm/svm/svm.c | 7 ++++---
+ 2 files changed, 9 insertions(+), 5 deletions(-)
+
+--- a/arch/x86/kvm/svm/avic.c
++++ b/arch/x86/kvm/svm/avic.c
+@@ -104,12 +104,12 @@ static void avic_activate_vmcb(struct vc
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+
+ vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
+-
+ vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
+ vmcb->control.avic_physical_id |= avic_get_max_physical_id(vcpu);
+-
+ vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
+
++ svm_clr_intercept(svm, INTERCEPT_CR8_WRITE);
++
+ /*
+ * Note: KVM supports hybrid-AVIC mode, where KVM emulates x2APIC MSR
+ * accesses, while interrupt injection to a running vCPU can be
+@@ -141,6 +141,9 @@ static void avic_deactivate_vmcb(struct
+ vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
+ vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
+
++ if (!sev_es_guest(svm->vcpu.kvm))
++ svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
++
+ /*
+ * If running nested and the guest uses its own MSR bitmap, there
+ * is no need to update L0's msr bitmap
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1261,8 +1261,7 @@ static void init_vmcb(struct kvm_vcpu *v
+ svm_set_intercept(svm, INTERCEPT_CR0_WRITE);
+ svm_set_intercept(svm, INTERCEPT_CR3_WRITE);
+ svm_set_intercept(svm, INTERCEPT_CR4_WRITE);
+- if (!kvm_vcpu_apicv_active(vcpu))
+- svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
++ svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
+
+ set_dr_intercepts(svm);
+
+@@ -2806,9 +2805,11 @@ static int dr_interception(struct kvm_vc
+
+ static int cr8_write_interception(struct kvm_vcpu *vcpu)
+ {
++ u8 cr8_prev = kvm_get_cr8(vcpu);
+ int r;
+
+- u8 cr8_prev = kvm_get_cr8(vcpu);
++ WARN_ON_ONCE(kvm_vcpu_apicv_active(vcpu));
++
+ /* instruction emulation calls kvm_set_cr8() */
+ r = cr_interception(vcpu);
+ if (lapic_in_kernel(vcpu))
--- /dev/null
+From stable+bounces-226034-greg=kroah.com@vger.kernel.org Tue Mar 17 15:43:48 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 17 Mar 2026 10:30:32 -0400
+Subject: mm/kfence: fix KASAN hardware tag faults during late enablement
+To: stable@vger.kernel.org
+Cc: Alexander Potapenko <glider@google.com>, Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at>, Andrey Konovalov <andreyknvl@gmail.com>, Andrey Ryabinin <ryabinin.a.a@gmail.com>, Dmitry Vyukov <dvyukov@google.com>, Greg KH <gregkh@linuxfoundation.org>, Kees Cook <kees@kernel.org>, Marco Elver <elver@google.com>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260317143032.168309-1-sashal@kernel.org>
+
+From: Alexander Potapenko <glider@google.com>
+
+[ Upstream commit d155aab90fffa00f93cea1f107aef0a3d548b2ff ]
+
+When KASAN hardware tags are enabled, re-enabling KFENCE late (via
+/sys/module/kfence/parameters/sample_interval) causes KASAN faults.
+
+This happens because the KFENCE pool and metadata are allocated via the
+page allocator, which tags the memory, while KFENCE continues to access it
+using untagged pointers during initialization.
+
+Use __GFP_SKIP_KASAN for late KFENCE pool and metadata allocations to
+ensure the memory remains untagged, consistent with early allocations from
+memblock. To support this, add __GFP_SKIP_KASAN to the allowlist in
+__alloc_contig_verify_gfp_mask().
+
+Link: https://lkml.kernel.org/r/20260220144940.2779209-1-glider@google.com
+Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
+Signed-off-by: Alexander Potapenko <glider@google.com>
+Suggested-by: Ernesto Martinez Garcia <ernesto.martinezgarcia@tugraz.at>
+Cc: Andrey Konovalov <andreyknvl@gmail.com>
+Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
+Cc: Dmitry Vyukov <dvyukov@google.com>
+Cc: Greg KH <gregkh@linuxfoundation.org>
+Cc: Kees Cook <kees@kernel.org>
+Cc: Marco Elver <elver@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+[ dropped page_alloc.c hunk adding __GFP_SKIP_KASAN ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/kfence/core.c | 14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -945,14 +945,14 @@ static int kfence_init_late(void)
+ #ifdef CONFIG_CONTIG_ALLOC
+ struct page *pages;
+
+- pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL, first_online_node,
+- NULL);
++ pages = alloc_contig_pages(nr_pages_pool, GFP_KERNEL | __GFP_SKIP_KASAN,
++ first_online_node, NULL);
+ if (!pages)
+ return -ENOMEM;
+
+ __kfence_pool = page_to_virt(pages);
+- pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL, first_online_node,
+- NULL);
++ pages = alloc_contig_pages(nr_pages_meta, GFP_KERNEL | __GFP_SKIP_KASAN,
++ first_online_node, NULL);
+ if (pages)
+ kfence_metadata_init = page_to_virt(pages);
+ #else
+@@ -962,11 +962,13 @@ static int kfence_init_late(void)
+ return -EINVAL;
+ }
+
+- __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE, GFP_KERNEL);
++ __kfence_pool = alloc_pages_exact(KFENCE_POOL_SIZE,
++ GFP_KERNEL | __GFP_SKIP_KASAN);
+ if (!__kfence_pool)
+ return -ENOMEM;
+
+- kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE, GFP_KERNEL);
++ kfence_metadata_init = alloc_pages_exact(KFENCE_METADATA_SIZE,
++ GFP_KERNEL | __GFP_SKIP_KASAN);
+ #endif
+
+ if (!kfence_metadata_init)
--- /dev/null
+From stable+bounces-223686-greg=kroah.com@vger.kernel.org Mon Mar 9 16:18:54 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 11:18:43 -0400
+Subject: mptcp: pm: avoid sending RM_ADDR over same subflow
+To: stable@vger.kernel.org
+Cc: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Frank Lorenz <lorenz-frank@web.de>, Mat Martineau <martineau@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309151843.1264861-1-sashal@kernel.org>
+
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+
+[ Upstream commit fb8d0bccb221080630efcd9660c9f9349e53cc9e ]
+
+RM_ADDRs are sent over an active subflow, the first one in the subflows
+list. There is then a high chance the initial subflow is picked. With
+the in-kernel PM, when an endpoint is removed, a RM_ADDR is sent, then
+linked subflows are closed. This is done for each active MPTCP
+connection.
+
+MPTCP endpoints are likely removed because the attached network is no
+longer available or usable. In this case, it is better to avoid sending
+this RM_ADDR over the subflow that is going to be removed, but prefer
+sending it over another active and non-stale subflow, if any.
+
+This modification avoids situations where the other end is not notified
+when a subflow is no longer usable: typically when the endpoint linked
+to the initial subflow is removed, especially on the server side.
+
+Fixes: 8dd5efb1f91b ("mptcp: send ack for rm_addr")
+Cc: stable@vger.kernel.org
+Reported-by: Frank Lorenz <lorenz-frank@web.de>
+Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/612
+Reviewed-by: Mat Martineau <martineau@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-2-4b5462b6f016@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ adapted to _nl-prefixed function names in pm_netlink.c and omitted stale subflow fallback ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/pm.c | 2 +-
+ net/mptcp/pm_netlink.c | 43 ++++++++++++++++++++++++++++++++++++++-----
+ net/mptcp/protocol.h | 2 ++
+ 3 files changed, 41 insertions(+), 6 deletions(-)
+
+--- a/net/mptcp/pm.c
++++ b/net/mptcp/pm.c
+@@ -57,7 +57,7 @@ int mptcp_pm_remove_addr(struct mptcp_so
+ msk->pm.rm_list_tx = *rm_list;
+ rm_addr |= BIT(MPTCP_RM_ADDR_SIGNAL);
+ WRITE_ONCE(msk->pm.addr_signal, rm_addr);
+- mptcp_pm_nl_addr_send_ack(msk);
++ mptcp_pm_nl_addr_send_ack_avoid_list(msk, rm_list);
+ return 0;
+ }
+
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -849,9 +849,23 @@ bool mptcp_pm_nl_is_init_remote_addr(str
+ return mptcp_addresses_equal(&mpc_remote, remote, remote->port);
+ }
+
+-void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
++static bool subflow_in_rm_list(const struct mptcp_subflow_context *subflow,
++ const struct mptcp_rm_list *rm_list)
++{
++ u8 i, id = subflow_get_local_id(subflow);
++
++ for (i = 0; i < rm_list->nr; i++) {
++ if (rm_list->ids[i] == id)
++ return true;
++ }
++
++ return false;
++}
++
++void mptcp_pm_nl_addr_send_ack_avoid_list(struct mptcp_sock *msk,
++ const struct mptcp_rm_list *rm_list)
+ {
+- struct mptcp_subflow_context *subflow;
++ struct mptcp_subflow_context *subflow, *same_id = NULL;
+
+ msk_owned_by_me(msk);
+ lockdep_assert_held(&msk->pm.lock);
+@@ -861,11 +875,30 @@ void mptcp_pm_nl_addr_send_ack(struct mp
+ return;
+
+ mptcp_for_each_subflow(msk, subflow) {
+- if (__mptcp_subflow_active(subflow)) {
+- mptcp_pm_send_ack(msk, subflow, false, false);
+- break;
++ if (!__mptcp_subflow_active(subflow))
++ continue;
++
++ if (unlikely(rm_list &&
++ subflow_in_rm_list(subflow, rm_list))) {
++ if (!same_id)
++ same_id = subflow;
++ } else {
++ goto send_ack;
+ }
+ }
++
++ if (same_id)
++ subflow = same_id;
++ else
++ return;
++
++send_ack:
++ mptcp_pm_send_ack(msk, subflow, false, false);
++}
++
++void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk)
++{
++ mptcp_pm_nl_addr_send_ack_avoid_list(msk, NULL);
+ }
+
+ int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk,
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -932,6 +932,8 @@ void mptcp_pm_add_addr_send_ack(struct m
+ bool mptcp_pm_nl_is_init_remote_addr(struct mptcp_sock *msk,
+ const struct mptcp_addr_info *remote);
+ void mptcp_pm_nl_addr_send_ack(struct mptcp_sock *msk);
++void mptcp_pm_nl_addr_send_ack_avoid_list(struct mptcp_sock *msk,
++ const struct mptcp_rm_list *rm_list);
+ void mptcp_pm_rm_addr_received(struct mptcp_sock *msk,
+ const struct mptcp_rm_list *rm_list);
+ void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup);
--- /dev/null
+From stable+bounces-223695-greg=kroah.com@vger.kernel.org Mon Mar 9 17:02:21 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 11:59:43 -0400
+Subject: mptcp: pm: in-kernel: always mark signal+subflow endp as used
+To: stable@vger.kernel.org
+Cc: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Mat Martineau <martineau@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309155943.1295514-1-sashal@kernel.org>
+
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+
+[ Upstream commit 579a752464a64cb5f9139102f0e6b90a1f595ceb ]
+
+Syzkaller managed to find a combination of actions that was generating
+this warning:
+
+ msk->pm.local_addr_used == 0
+ WARNING: net/mptcp/pm_kernel.c:1071 at __mark_subflow_endp_available net/mptcp/pm_kernel.c:1071 [inline], CPU#1: syz.2.17/961
+ WARNING: net/mptcp/pm_kernel.c:1071 at mptcp_nl_remove_subflow_and_signal_addr net/mptcp/pm_kernel.c:1103 [inline], CPU#1: syz.2.17/961
+ WARNING: net/mptcp/pm_kernel.c:1071 at mptcp_pm_nl_del_addr_doit+0x81d/0x8f0 net/mptcp/pm_kernel.c:1210, CPU#1: syz.2.17/961
+ Modules linked in:
+ CPU: 1 UID: 0 PID: 961 Comm: syz.2.17 Not tainted 6.19.0-08368-gfafda3b4b06b #22 PREEMPT(full)
+ Hardware name: QEMU Ubuntu 25.10 PC v2 (i440FX + PIIX, + 10.1 machine, 1996), BIOS 1.17.0-debian-1.17.0-1build1 04/01/2014
+ RIP: 0010:__mark_subflow_endp_available net/mptcp/pm_kernel.c:1071 [inline]
+ RIP: 0010:mptcp_nl_remove_subflow_and_signal_addr net/mptcp/pm_kernel.c:1103 [inline]
+ RIP: 0010:mptcp_pm_nl_del_addr_doit+0x81d/0x8f0 net/mptcp/pm_kernel.c:1210
+ Code: 89 c5 e8 46 30 6f fe e9 21 fd ff ff 49 83 ed 80 e8 38 30 6f fe 4c 89 ef be 03 00 00 00 e8 db 49 df fe eb ac e8 24 30 6f fe 90 <0f> 0b 90 e9 1d ff ff ff e8 16 30 6f fe eb 05 e8 0f 30 6f fe e8 9a
+ RSP: 0018:ffffc90001663880 EFLAGS: 00010293
+ RAX: ffffffff82de1a6c RBX: 0000000000000000 RCX: ffff88800722b500
+ RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
+ RBP: ffff8880158b22d0 R08: 0000000000010425 R09: ffffffffffffffff
+ R10: ffffffff82de18ba R11: 0000000000000000 R12: ffff88800641a640
+ R13: ffff8880158b1880 R14: ffff88801ec3c900 R15: ffff88800641a650
+ FS: 00005555722c3500(0000) GS:ffff8880f909d000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 00007f66346e0f60 CR3: 000000001607c000 CR4: 0000000000350ef0
+ Call Trace:
+ <TASK>
+ genl_family_rcv_msg_doit+0x117/0x180 net/netlink/genetlink.c:1115
+ genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
+ genl_rcv_msg+0x3a8/0x3f0 net/netlink/genetlink.c:1210
+ netlink_rcv_skb+0x16d/0x240 net/netlink/af_netlink.c:2550
+ genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
+ netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
+ netlink_unicast+0x3e9/0x4c0 net/netlink/af_netlink.c:1344
+ netlink_sendmsg+0x4aa/0x5b0 net/netlink/af_netlink.c:1894
+ sock_sendmsg_nosec net/socket.c:727 [inline]
+ __sock_sendmsg+0xc9/0xf0 net/socket.c:742
+ ____sys_sendmsg+0x272/0x3b0 net/socket.c:2592
+ ___sys_sendmsg+0x2de/0x320 net/socket.c:2646
+ __sys_sendmsg net/socket.c:2678 [inline]
+ __do_sys_sendmsg net/socket.c:2683 [inline]
+ __se_sys_sendmsg net/socket.c:2681 [inline]
+ __x64_sys_sendmsg+0x110/0x1a0 net/socket.c:2681
+ do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
+ do_syscall_64+0x143/0x440 arch/x86/entry/syscall_64.c:94
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+ RIP: 0033:0x7f66346f826d
+ Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
+ RSP: 002b:00007ffc83d8bdc8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
+ RAX: ffffffffffffffda RBX: 00007f6634985fa0 RCX: 00007f66346f826d
+ RDX: 00000000040000b0 RSI: 0000200000000740 RDI: 0000000000000007
+ RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
+ R10: 0000000000000000 R11: 0000000000000246 R12: 00007f6634985fa8
+ R13: 00007f6634985fac R14: 0000000000000000 R15: 0000000000001770
+ </TASK>
+
+The actions that caused that seem to be:
+
+ - Set the MPTCP subflows limit to 0
+ - Create an MPTCP endpoint with both the 'signal' and 'subflow' flags
+ - Create a new MPTCP connection from a different address: an ADD_ADDR
+ linked to the MPTCP endpoint will be sent ('signal' flag), but no
+   subflow is initiated ('subflow' flag)
+ - Remove the MPTCP endpoint
+
+In this case, msk->pm.local_addr_used has been kept to 0 -- because no
+subflows have been created -- but the corresponding bit in
+msk->pm.id_avail_bitmap has been cleared when the ADD_ADDR has been
+sent. This inconsistency later causes a splat when removing the MPTCP
+endpoint.
+
+Now, if an endpoint has both the signal and subflow flags, but it is not
+possible to create subflows because of the limits or the c-flag case,
+then the local endpoint counter is still incremented: the endpoint is
+used at the end. This avoids issues later when removing the endpoint and
+calling __mark_subflow_endp_available(), which expects
+msk->pm.local_addr_used to have been previously incremented if the
+endpoint was marked as used according to msk->pm.id_avail_bitmap.
+
+Note that the signal_and_subflow variable is reset to false when the
+limits and the c-flag case allow subflow creation. Also, local_addr_used
+is only incremented for non-ID0 subflows.
+
+Fixes: 85df533a787b ("mptcp: pm: do not ignore 'subflow' if 'signal' flag is also set")
+Cc: stable@vger.kernel.org
+Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/613
+Reviewed-by: Mat Martineau <martineau@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-4-4b5462b6f016@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ pm_kernel.c => pm_netlink.c ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/pm_netlink.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -662,6 +662,15 @@ subflow:
+ }
+
+ exit:
++ /* If an endpoint has both the signal and subflow flags, but it is not
++ * possible to create subflows -- the 'while' loop body above never
++ * executed -- then still mark the endp as used, which is somehow the
++ * case. This avoids issues later when removing the endpoint and calling
++ * __mark_subflow_endp_available(), which expects the increment here.
++ */
++ if (signal_and_subflow && local.addr.id != msk->mpc_endpoint_id)
++ msk->pm.local_addr_used++;
++
+ mptcp_pm_nl_check_work_pending(msk);
+ }
+
--- /dev/null
+From stable+bounces-227127-greg=kroah.com@vger.kernel.org Wed Mar 18 17:56:12 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 12:47:17 -0400
+Subject: net: macb: Shuffle the tx ring before enabling tx
+To: stable@vger.kernel.org
+Cc: Kevin Hao <haokexin@gmail.com>, Quanyang Wang <quanyang.wang@windriver.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318164717.1118974-1-sashal@kernel.org>
+
+From: Kevin Hao <haokexin@gmail.com>
+
+[ Upstream commit 881a0263d502e1a93ebc13a78254e9ad19520232 ]
+
+Quanyang observed that when using an NFS rootfs on an AMD ZynqMp board,
+the rootfs may take an extended time to recover after a suspend.
+Upon investigation, it was determined that the issue originates from a
+problem in the macb driver.
+
+According to the Zynq UltraScale TRM [1], when transmit is disabled,
+the transmit buffer queue pointer resets to point to the address
+specified by the transmit buffer queue base address register.
+
+In the current implementation, the code merely resets `queue->tx_head`
+and `queue->tx_tail` to '0'. This approach presents several issues:
+
+- Packets already queued in the tx ring are silently lost,
+ leading to memory leaks since the associated skbs cannot be released.
+
+- Concurrent write access to `queue->tx_head` and `queue->tx_tail` may
+ occur from `macb_tx_poll()` or `macb_start_xmit()` when these values
+ are reset to '0'.
+
+- The transmission may become stuck on a packet that has already been sent
+ out, with its 'TX_USED' bit set, but has not yet been processed. However,
+ due to the manipulation of 'queue->tx_head' and 'queue->tx_tail',
+ `macb_tx_poll()` incorrectly assumes there are no packets to handle
+ because `queue->tx_head == queue->tx_tail`. This issue is only resolved
+ when a new packet is placed at this position. This is the root cause of
+ the prolonged recovery time observed for the NFS root filesystem.
+
+To resolve this issue, shuffle the tx ring and tx skb array so that
+the first unsent packet is positioned at the start of the tx ring.
+Additionally, ensure that updates to `queue->tx_head` and
+`queue->tx_tail` are properly protected with the appropriate lock.
+
+[1] https://docs.amd.com/v/u/en-US/ug1085-zynq-ultrascale-trm
+
+Fixes: bf9cf80cab81 ("net: macb: Fix tx/rx malfunction after phy link down and up")
+Reported-by: Quanyang Wang <quanyang.wang@windriver.com>
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Cc: stable@vger.kernel.org
+Reviewed-by: Simon Horman <horms@kernel.org>
+Link: https://patch.msgid.link/20260307-zynqmp-v2-1-6ef98a70e1d0@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ #include context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c | 98 ++++++++++++++++++++++++++++++-
+ 1 file changed, 95 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -38,6 +38,7 @@
+ #include <linux/ptp_classify.h>
+ #include <linux/reset.h>
+ #include <linux/firmware/xlnx-zynqmp.h>
++#include <linux/gcd.h>
+ #include "macb.h"
+
+ /* This structure is only used for MACB on SiFive FU540 devices */
+@@ -719,6 +720,97 @@ static void macb_mac_link_down(struct ph
+ netif_tx_stop_all_queues(ndev);
+ }
+
++/* Use juggling algorithm to left rotate tx ring and tx skb array */
++static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
++{
++ unsigned int head, tail, count, ring_size, desc_size;
++ struct macb_tx_skb tx_skb, *skb_curr, *skb_next;
++ struct macb_dma_desc *desc_curr, *desc_next;
++ unsigned int i, cycles, shift, curr, next;
++ struct macb *bp = queue->bp;
++ unsigned char desc[24];
++ unsigned long flags;
++
++ desc_size = macb_dma_desc_get_size(bp);
++
++ if (WARN_ON_ONCE(desc_size > ARRAY_SIZE(desc)))
++ return;
++
++ spin_lock_irqsave(&queue->tx_ptr_lock, flags);
++ head = queue->tx_head;
++ tail = queue->tx_tail;
++ ring_size = bp->tx_ring_size;
++ count = CIRC_CNT(head, tail, ring_size);
++
++ if (!(tail % ring_size))
++ goto unlock;
++
++ if (!count) {
++ queue->tx_head = 0;
++ queue->tx_tail = 0;
++ goto unlock;
++ }
++
++ shift = tail % ring_size;
++ cycles = gcd(ring_size, shift);
++
++ for (i = 0; i < cycles; i++) {
++ memcpy(&desc, macb_tx_desc(queue, i), desc_size);
++ memcpy(&tx_skb, macb_tx_skb(queue, i),
++ sizeof(struct macb_tx_skb));
++
++ curr = i;
++ next = (curr + shift) % ring_size;
++
++ while (next != i) {
++ desc_curr = macb_tx_desc(queue, curr);
++ desc_next = macb_tx_desc(queue, next);
++
++ memcpy(desc_curr, desc_next, desc_size);
++
++ if (next == ring_size - 1)
++ desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
++ if (curr == ring_size - 1)
++ desc_curr->ctrl |= MACB_BIT(TX_WRAP);
++
++ skb_curr = macb_tx_skb(queue, curr);
++ skb_next = macb_tx_skb(queue, next);
++ memcpy(skb_curr, skb_next, sizeof(struct macb_tx_skb));
++
++ curr = next;
++ next = (curr + shift) % ring_size;
++ }
++
++ desc_curr = macb_tx_desc(queue, curr);
++ memcpy(desc_curr, &desc, desc_size);
++ if (i == ring_size - 1)
++ desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
++ if (curr == ring_size - 1)
++ desc_curr->ctrl |= MACB_BIT(TX_WRAP);
++ memcpy(macb_tx_skb(queue, curr), &tx_skb,
++ sizeof(struct macb_tx_skb));
++ }
++
++ queue->tx_head = count;
++ queue->tx_tail = 0;
++
++ /* Make descriptor updates visible to hardware */
++ wmb();
++
++unlock:
++ spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
++}
++
++/* Rotate the queue so that the tail is at index 0 */
++static void gem_shuffle_tx_rings(struct macb *bp)
++{
++ struct macb_queue *queue;
++ int q;
++
++ for (q = 0, queue = bp->queues; q < bp->num_queues; q++, queue++)
++ gem_shuffle_tx_one_ring(queue);
++}
++
+ static void macb_mac_link_up(struct phylink_config *config,
+ struct phy_device *phy,
+ unsigned int mode, phy_interface_t interface,
+@@ -757,8 +849,6 @@ static void macb_mac_link_up(struct phyl
+ ctrl |= MACB_BIT(PAE);
+
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+- queue->tx_head = 0;
+- queue->tx_tail = 0;
+ queue_writel(queue, IER,
+ bp->rx_intr_mask | MACB_TX_INT_FLAGS | MACB_BIT(HRESP));
+ }
+@@ -772,8 +862,10 @@ static void macb_mac_link_up(struct phyl
+
+ spin_unlock_irqrestore(&bp->lock, flags);
+
+- if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC))
++ if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+ macb_set_tx_clk(bp, speed);
++ gem_shuffle_tx_rings(bp);
++ }
+
+ /* Enable Rx and Tx; Enable PTP unicast */
+ ctrl = macb_readl(bp, NCR);
--- /dev/null
+From stable+bounces-223647-greg=kroah.com@vger.kernel.org Mon Mar 9 14:29:27 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 09:27:08 -0400
+Subject: net: phy: register phy led_triggers during probe to avoid AB-BA deadlock
+To: stable@vger.kernel.org
+Cc: Andrew Lunn <andrew@lunn.ch>, Shiji Yang <yangshiji66@outlook.com>, Paolo Abeni <pabeni@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309132708.943315-1-sashal@kernel.org>
+
+From: Andrew Lunn <andrew@lunn.ch>
+
+[ Upstream commit c8dbdc6e380e7e96a51706db3e4b7870d8a9402d ]
+
+There is an AB-BA deadlock when both LEDS_TRIGGER_NETDEV and
+LED_TRIGGER_PHY are enabled:
+
+[ 1362.049207] [<8054e4b8>] led_trigger_register+0x5c/0x1fc <-- Trying to get lock "triggers_list_lock" via down_write(&triggers_list_lock);
+[ 1362.054536] [<80662830>] phy_led_triggers_register+0xd0/0x234
+[ 1362.060329] [<8065e200>] phy_attach_direct+0x33c/0x40c
+[ 1362.065489] [<80651fc4>] phylink_fwnode_phy_connect+0x15c/0x23c
+[ 1362.071480] [<8066ee18>] mtk_open+0x7c/0xba0
+[ 1362.075849] [<806d714c>] __dev_open+0x280/0x2b0
+[ 1362.080384] [<806d7668>] __dev_change_flags+0x244/0x24c
+[ 1362.085598] [<806d7698>] dev_change_flags+0x28/0x78
+[ 1362.090528] [<807150e4>] dev_ioctl+0x4c0/0x654 <-- Hold lock "rtnl_mutex" by calling rtnl_lock();
+[ 1362.094985] [<80694360>] sock_ioctl+0x2f4/0x4e0
+[ 1362.099567] [<802e9c4c>] sys_ioctl+0x32c/0xd8c
+[ 1362.104022] [<80014504>] syscall_common+0x34/0x58
+
+Here LED_TRIGGER_PHY is registering LED triggers during phy_attach
+while holding RTNL and then taking triggers_list_lock.
+
+[ 1362.191101] [<806c2640>] register_netdevice_notifier+0x60/0x168 <-- Trying to get lock "rtnl_mutex" via rtnl_lock();
+[ 1362.197073] [<805504ac>] netdev_trig_activate+0x194/0x1e4
+[ 1362.202490] [<8054e28c>] led_trigger_set+0x1d4/0x360 <-- Hold lock "triggers_list_lock" by down_read(&triggers_list_lock);
+[ 1362.207511] [<8054eb38>] led_trigger_write+0xd8/0x14c
+[ 1362.212566] [<80381d98>] sysfs_kf_bin_write+0x80/0xbc
+[ 1362.217688] [<8037fcd8>] kernfs_fop_write_iter+0x17c/0x28c
+[ 1362.223174] [<802cbd70>] vfs_write+0x21c/0x3c4
+[ 1362.227712] [<802cc0c4>] ksys_write+0x78/0x12c
+[ 1362.232164] [<80014504>] syscall_common+0x34/0x58
+
+Here LEDS_TRIGGER_NETDEV is being enabled on an LED. It first takes
+triggers_list_lock and then RTNL. A classical AB-BA deadlock.
+
+phy_led_triggers_register() does not require the RTNL: it does not
+make any calls into the network stack which require protection. There
+is also no requirement that the PHY has been attached to a MAC; the
+triggers only make use of phydev state. This allows the call to
+phy_led_triggers_register() to be moved elsewhere. PHY probe() and
+remove() don't hold RTNL, thus solving the AB-BA deadlock.
+
+Reported-by: Shiji Yang <yangshiji66@outlook.com>
+Closes: https://lore.kernel.org/all/OS7PR01MB13602B128BA1AD3FA38B6D1FFBC69A@OS7PR01MB13602.jpnprd01.prod.outlook.com/
+Fixes: 06f502f57d0d ("leds: trigger: Introduce a NETDEV trigger")
+Cc: stable@vger.kernel.org
+Signed-off-by: Andrew Lunn <andrew@lunn.ch>
+Tested-by: Shiji Yang <yangshiji66@outlook.com>
+Link: https://patch.msgid.link/20260222152601.1978655-1-andrew@lunn.ch
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+[ adapted condition to preserve existing `!phy_driver_is_genphy_10g(phydev)` guard ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/phy/phy_device.c | 25 +++++++++++++++++--------
+ 1 file changed, 17 insertions(+), 8 deletions(-)
+
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1582,8 +1582,6 @@ int phy_attach_direct(struct net_device
+ goto error;
+
+ phy_resume(phydev);
+- if (!phydev->is_on_sfp_module)
+- phy_led_triggers_register(phydev);
+
+ /**
+ * If the external phy used by current mac interface is managed by
+@@ -1856,9 +1854,6 @@ void phy_detach(struct phy_device *phyde
+ phydev->phy_link_change = NULL;
+ phydev->phylink = NULL;
+
+- if (!phydev->is_on_sfp_module)
+- phy_led_triggers_unregister(phydev);
+-
+ if (phydev->mdio.dev.driver)
+ module_put(phydev->mdio.dev.driver->owner);
+
+@@ -3402,17 +3397,28 @@ static int phy_probe(struct device *dev)
+ /* Set the state to READY by default */
+ phydev->state = PHY_READY;
+
++ /* Register the PHY LED triggers */
++ if (!phydev->is_on_sfp_module)
++ phy_led_triggers_register(phydev);
++
+ /* Get the LEDs from the device tree, and instantiate standard
+ * LEDs for them.
+ */
+ if (IS_ENABLED(CONFIG_PHYLIB_LEDS) && !phy_driver_is_genphy(phydev) &&
+- !phy_driver_is_genphy_10g(phydev))
++ !phy_driver_is_genphy_10g(phydev)) {
+ err = of_phy_leds(phydev);
++ if (err)
++ goto out;
++ }
++
++ return 0;
+
+ out:
++ if (!phydev->is_on_sfp_module)
++ phy_led_triggers_unregister(phydev);
++
+ /* Re-assert the reset signal on error */
+- if (err)
+- phy_device_reset(phydev, 1);
++ phy_device_reset(phydev, 1);
+
+ return err;
+ }
+@@ -3427,6 +3433,9 @@ static int phy_remove(struct device *dev
+ !phy_driver_is_genphy_10g(phydev))
+ phy_leds_unregister(phydev);
+
++ if (!phydev->is_on_sfp_module)
++ phy_led_triggers_unregister(phydev);
++
+ phydev->state = PHY_DOWN;
+
+ sfp_bus_del_upstream(phydev->sfp_bus);
--- /dev/null
+From stable+bounces-224902-greg=kroah.com@vger.kernel.org Thu Mar 12 18:44:36 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 12 Mar 2026 13:41:16 -0400
+Subject: net/sched: act_gate: snapshot parameters with RCU on replace
+To: stable@vger.kernel.org
+Cc: Paul Moses <p@1g4.org>, Vladimir Oltean <vladimir.oltean@nxp.com>, Jamal Hadi Salim <jhs@mojatatu.com>, Victor Nogueira <victor@mojatatu.com>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260312174116.1809568-1-sashal@kernel.org>
+
+From: Paul Moses <p@1g4.org>
+
+[ Upstream commit 62413a9c3cb183afb9bb6e94dd68caf4e4145f4c ]
+
+The gate action can be replaced while the hrtimer callback or dump path is
+walking the schedule list.
+
+Convert the parameters to an RCU-protected snapshot and swap updates under
+tcf_lock, freeing the previous snapshot via call_rcu(). When REPLACE omits
+the entry list, preserve the existing schedule so the effective state is
+unchanged.
+
+Fixes: a51c328df310 ("net: qos: introduce a gate control flow action")
+Cc: stable@vger.kernel.org
+Signed-off-by: Paul Moses <p@1g4.org>
+Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com>
+Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
+Reviewed-by: Victor Nogueira <victor@mojatatu.com>
+Link: https://patch.msgid.link/20260223150512.2251594-2-p@1g4.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ hrtimer_setup() => hrtimer_init() + keep is_tcf_gate() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/net/tc_act/tc_gate.h | 33 ++++-
+ net/sched/act_gate.c | 266 ++++++++++++++++++++++++++++++-------------
+ 2 files changed, 212 insertions(+), 87 deletions(-)
+
+--- a/include/net/tc_act/tc_gate.h
++++ b/include/net/tc_act/tc_gate.h
+@@ -32,6 +32,7 @@ struct tcf_gate_params {
+ s32 tcfg_clockid;
+ size_t num_entries;
+ struct list_head entries;
++ struct rcu_head rcu;
+ };
+
+ #define GATE_ACT_GATE_OPEN BIT(0)
+@@ -39,7 +40,7 @@ struct tcf_gate_params {
+
+ struct tcf_gate {
+ struct tc_action common;
+- struct tcf_gate_params param;
++ struct tcf_gate_params __rcu *param;
+ u8 current_gate_status;
+ ktime_t current_close_time;
+ u32 current_entry_octets;
+@@ -60,47 +61,65 @@ static inline bool is_tcf_gate(const str
+ return false;
+ }
+
++static inline struct tcf_gate_params *tcf_gate_params_locked(const struct tc_action *a)
++{
++ struct tcf_gate *gact = to_gate(a);
++
++ return rcu_dereference_protected(gact->param,
++ lockdep_is_held(&gact->tcf_lock));
++}
++
+ static inline s32 tcf_gate_prio(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ s32 tcfg_prio;
+
+- tcfg_prio = to_gate(a)->param.tcfg_priority;
++ p = tcf_gate_params_locked(a);
++ tcfg_prio = p->tcfg_priority;
+
+ return tcfg_prio;
+ }
+
+ static inline u64 tcf_gate_basetime(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u64 tcfg_basetime;
+
+- tcfg_basetime = to_gate(a)->param.tcfg_basetime;
++ p = tcf_gate_params_locked(a);
++ tcfg_basetime = p->tcfg_basetime;
+
+ return tcfg_basetime;
+ }
+
+ static inline u64 tcf_gate_cycletime(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u64 tcfg_cycletime;
+
+- tcfg_cycletime = to_gate(a)->param.tcfg_cycletime;
++ p = tcf_gate_params_locked(a);
++ tcfg_cycletime = p->tcfg_cycletime;
+
+ return tcfg_cycletime;
+ }
+
+ static inline u64 tcf_gate_cycletimeext(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u64 tcfg_cycletimeext;
+
+- tcfg_cycletimeext = to_gate(a)->param.tcfg_cycletime_ext;
++ p = tcf_gate_params_locked(a);
++ tcfg_cycletimeext = p->tcfg_cycletime_ext;
+
+ return tcfg_cycletimeext;
+ }
+
+ static inline u32 tcf_gate_num_entries(const struct tc_action *a)
+ {
++ struct tcf_gate_params *p;
+ u32 num_entries;
+
+- num_entries = to_gate(a)->param.num_entries;
++ p = tcf_gate_params_locked(a);
++ num_entries = p->num_entries;
+
+ return num_entries;
+ }
+@@ -114,7 +133,7 @@ static inline struct action_gate_entry
+ u32 num_entries;
+ int i = 0;
+
+- p = &to_gate(a)->param;
++ p = tcf_gate_params_locked(a);
+ num_entries = p->num_entries;
+
+ list_for_each_entry(entry, &p->entries, list)
+--- a/net/sched/act_gate.c
++++ b/net/sched/act_gate.c
+@@ -32,9 +32,12 @@ static ktime_t gate_get_time(struct tcf_
+ return KTIME_MAX;
+ }
+
+-static void gate_get_start_time(struct tcf_gate *gact, ktime_t *start)
++static void tcf_gate_params_free_rcu(struct rcu_head *head);
++
++static void gate_get_start_time(struct tcf_gate *gact,
++ const struct tcf_gate_params *param,
++ ktime_t *start)
+ {
+- struct tcf_gate_params *param = &gact->param;
+ ktime_t now, base, cycle;
+ u64 n;
+
+@@ -69,12 +72,14 @@ static enum hrtimer_restart gate_timer_f
+ {
+ struct tcf_gate *gact = container_of(timer, struct tcf_gate,
+ hitimer);
+- struct tcf_gate_params *p = &gact->param;
+ struct tcfg_gate_entry *next;
++ struct tcf_gate_params *p;
+ ktime_t close_time, now;
+
+ spin_lock(&gact->tcf_lock);
+
++ p = rcu_dereference_protected(gact->param,
++ lockdep_is_held(&gact->tcf_lock));
+ next = gact->next_entry;
+
+ /* cycle start, clear pending bit, clear total octets */
+@@ -230,6 +235,35 @@ static void release_entry_list(struct li
+ }
+ }
+
++static int tcf_gate_copy_entries(struct tcf_gate_params *dst,
++ const struct tcf_gate_params *src,
++ struct netlink_ext_ack *extack)
++{
++ struct tcfg_gate_entry *entry;
++ int i = 0;
++
++ list_for_each_entry(entry, &src->entries, list) {
++ struct tcfg_gate_entry *new;
++
++ new = kzalloc(sizeof(*new), GFP_ATOMIC);
++ if (!new) {
++ NL_SET_ERR_MSG(extack, "Not enough memory for entry");
++ return -ENOMEM;
++ }
++
++ new->index = entry->index;
++ new->gate_state = entry->gate_state;
++ new->interval = entry->interval;
++ new->ipv = entry->ipv;
++ new->maxoctets = entry->maxoctets;
++ list_add_tail(&new->list, &dst->entries);
++ i++;
++ }
++
++ dst->num_entries = i;
++ return 0;
++}
++
+ static int parse_gate_list(struct nlattr *list_attr,
+ struct tcf_gate_params *sched,
+ struct netlink_ext_ack *extack)
+@@ -275,23 +309,42 @@ release_list:
+ return err;
+ }
+
+-static void gate_setup_timer(struct tcf_gate *gact, u64 basetime,
+- enum tk_offsets tko, s32 clockid,
+- bool do_init)
+-{
+- if (!do_init) {
+- if (basetime == gact->param.tcfg_basetime &&
+- tko == gact->tk_offset &&
+- clockid == gact->param.tcfg_clockid)
+- return;
+-
+- spin_unlock_bh(&gact->tcf_lock);
+- hrtimer_cancel(&gact->hitimer);
+- spin_lock_bh(&gact->tcf_lock);
++static bool gate_timer_needs_cancel(u64 basetime, u64 old_basetime,
++ enum tk_offsets tko,
++ enum tk_offsets old_tko,
++ s32 clockid, s32 old_clockid)
++{
++ return basetime != old_basetime ||
++ clockid != old_clockid ||
++ tko != old_tko;
++}
++
++static int gate_clock_resolve(s32 clockid, enum tk_offsets *tko,
++ struct netlink_ext_ack *extack)
++{
++ switch (clockid) {
++ case CLOCK_REALTIME:
++ *tko = TK_OFFS_REAL;
++ return 0;
++ case CLOCK_MONOTONIC:
++ *tko = TK_OFFS_MAX;
++ return 0;
++ case CLOCK_BOOTTIME:
++ *tko = TK_OFFS_BOOT;
++ return 0;
++ case CLOCK_TAI:
++ *tko = TK_OFFS_TAI;
++ return 0;
++ default:
++ NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
++ return -EINVAL;
+ }
+- gact->param.tcfg_basetime = basetime;
+- gact->param.tcfg_clockid = clockid;
+- gact->tk_offset = tko;
++}
++
++static void gate_setup_timer(struct tcf_gate *gact, s32 clockid,
++ enum tk_offsets tko)
++{
++ WRITE_ONCE(gact->tk_offset, tko);
+ hrtimer_init(&gact->hitimer, clockid, HRTIMER_MODE_ABS_SOFT);
+ gact->hitimer.function = gate_timer_func;
+ }
+@@ -302,15 +355,22 @@ static int tcf_gate_init(struct net *net
+ struct netlink_ext_ack *extack)
+ {
+ struct tc_action_net *tn = net_generic(net, act_gate_ops.net_id);
+- enum tk_offsets tk_offset = TK_OFFS_TAI;
++ u64 cycletime = 0, basetime = 0, cycletime_ext = 0;
++ struct tcf_gate_params *p = NULL, *old_p = NULL;
++ enum tk_offsets old_tk_offset = TK_OFFS_TAI;
++ const struct tcf_gate_params *cur_p = NULL;
+ bool bind = flags & TCA_ACT_FLAGS_BIND;
+ struct nlattr *tb[TCA_GATE_MAX + 1];
++ enum tk_offsets tko = TK_OFFS_TAI;
+ struct tcf_chain *goto_ch = NULL;
+- u64 cycletime = 0, basetime = 0;
+- struct tcf_gate_params *p;
++ s32 timer_clockid = CLOCK_TAI;
++ bool use_old_entries = false;
++ s32 old_clockid = CLOCK_TAI;
++ bool need_cancel = false;
+ s32 clockid = CLOCK_TAI;
+ struct tcf_gate *gact;
+ struct tc_gate *parm;
++ u64 old_basetime = 0;
+ int ret = 0, err;
+ u32 gflags = 0;
+ s32 prio = -1;
+@@ -327,26 +387,8 @@ static int tcf_gate_init(struct net *net
+ if (!tb[TCA_GATE_PARMS])
+ return -EINVAL;
+
+- if (tb[TCA_GATE_CLOCKID]) {
++ if (tb[TCA_GATE_CLOCKID])
+ clockid = nla_get_s32(tb[TCA_GATE_CLOCKID]);
+- switch (clockid) {
+- case CLOCK_REALTIME:
+- tk_offset = TK_OFFS_REAL;
+- break;
+- case CLOCK_MONOTONIC:
+- tk_offset = TK_OFFS_MAX;
+- break;
+- case CLOCK_BOOTTIME:
+- tk_offset = TK_OFFS_BOOT;
+- break;
+- case CLOCK_TAI:
+- tk_offset = TK_OFFS_TAI;
+- break;
+- default:
+- NL_SET_ERR_MSG(extack, "Invalid 'clockid'");
+- return -EINVAL;
+- }
+- }
+
+ parm = nla_data(tb[TCA_GATE_PARMS]);
+ index = parm->index;
+@@ -372,6 +414,60 @@ static int tcf_gate_init(struct net *net
+ return -EEXIST;
+ }
+
++ gact = to_gate(*a);
++
++ err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
++ if (err < 0)
++ goto release_idr;
++
++ p = kzalloc(sizeof(*p), GFP_KERNEL);
++ if (!p) {
++ err = -ENOMEM;
++ goto chain_put;
++ }
++ INIT_LIST_HEAD(&p->entries);
++
++ use_old_entries = !tb[TCA_GATE_ENTRY_LIST];
++ if (!use_old_entries) {
++ err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack);
++ if (err < 0)
++ goto err_free;
++ use_old_entries = !err;
++ }
++
++ if (ret == ACT_P_CREATED && use_old_entries) {
++ NL_SET_ERR_MSG(extack, "The entry list is empty");
++ err = -EINVAL;
++ goto err_free;
++ }
++
++ if (ret != ACT_P_CREATED) {
++ rcu_read_lock();
++ cur_p = rcu_dereference(gact->param);
++
++ old_basetime = cur_p->tcfg_basetime;
++ old_clockid = cur_p->tcfg_clockid;
++ old_tk_offset = READ_ONCE(gact->tk_offset);
++
++ basetime = old_basetime;
++ cycletime_ext = cur_p->tcfg_cycletime_ext;
++ prio = cur_p->tcfg_priority;
++ gflags = cur_p->tcfg_flags;
++
++ if (!tb[TCA_GATE_CLOCKID])
++ clockid = old_clockid;
++
++ err = 0;
++ if (use_old_entries) {
++ err = tcf_gate_copy_entries(p, cur_p, extack);
++ if (!err && !tb[TCA_GATE_CYCLE_TIME])
++ cycletime = cur_p->tcfg_cycletime;
++ }
++ rcu_read_unlock();
++ if (err)
++ goto err_free;
++ }
++
+ if (tb[TCA_GATE_PRIORITY])
+ prio = nla_get_s32(tb[TCA_GATE_PRIORITY]);
+
+@@ -381,25 +477,26 @@ static int tcf_gate_init(struct net *net
+ if (tb[TCA_GATE_FLAGS])
+ gflags = nla_get_u32(tb[TCA_GATE_FLAGS]);
+
+- gact = to_gate(*a);
+- if (ret == ACT_P_CREATED)
+- INIT_LIST_HEAD(&gact->param.entries);
++ if (tb[TCA_GATE_CYCLE_TIME])
++ cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]);
+
+- err = tcf_action_check_ctrlact(parm->action, tp, &goto_ch, extack);
+- if (err < 0)
+- goto release_idr;
++ if (tb[TCA_GATE_CYCLE_TIME_EXT])
++ cycletime_ext = nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]);
+
+- spin_lock_bh(&gact->tcf_lock);
+- p = &gact->param;
++ err = gate_clock_resolve(clockid, &tko, extack);
++ if (err)
++ goto err_free;
++ timer_clockid = clockid;
++
++ need_cancel = ret != ACT_P_CREATED &&
++ gate_timer_needs_cancel(basetime, old_basetime,
++ tko, old_tk_offset,
++ timer_clockid, old_clockid);
+
+- if (tb[TCA_GATE_CYCLE_TIME])
+- cycletime = nla_get_u64(tb[TCA_GATE_CYCLE_TIME]);
++ if (need_cancel)
++ hrtimer_cancel(&gact->hitimer);
+
+- if (tb[TCA_GATE_ENTRY_LIST]) {
+- err = parse_gate_list(tb[TCA_GATE_ENTRY_LIST], p, extack);
+- if (err < 0)
+- goto chain_put;
+- }
++ spin_lock_bh(&gact->tcf_lock);
+
+ if (!cycletime) {
+ struct tcfg_gate_entry *entry;
+@@ -408,22 +505,20 @@ static int tcf_gate_init(struct net *net
+ list_for_each_entry(entry, &p->entries, list)
+ cycle = ktime_add_ns(cycle, entry->interval);
+ cycletime = cycle;
+- if (!cycletime) {
+- err = -EINVAL;
+- goto chain_put;
+- }
+ }
+ p->tcfg_cycletime = cycletime;
++ p->tcfg_cycletime_ext = cycletime_ext;
+
+- if (tb[TCA_GATE_CYCLE_TIME_EXT])
+- p->tcfg_cycletime_ext =
+- nla_get_u64(tb[TCA_GATE_CYCLE_TIME_EXT]);
+-
+- gate_setup_timer(gact, basetime, tk_offset, clockid,
+- ret == ACT_P_CREATED);
++ if (need_cancel || ret == ACT_P_CREATED)
++ gate_setup_timer(gact, timer_clockid, tko);
+ p->tcfg_priority = prio;
+ p->tcfg_flags = gflags;
+- gate_get_start_time(gact, &start);
++ p->tcfg_basetime = basetime;
++ p->tcfg_clockid = timer_clockid;
++ gate_get_start_time(gact, p, &start);
++
++ old_p = rcu_replace_pointer(gact->param, p,
++ lockdep_is_held(&gact->tcf_lock));
+
+ gact->current_close_time = start;
+ gact->current_gate_status = GATE_ACT_GATE_OPEN | GATE_ACT_PENDING;
+@@ -440,11 +535,15 @@ static int tcf_gate_init(struct net *net
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+
++ if (old_p)
++ call_rcu(&old_p->rcu, tcf_gate_params_free_rcu);
++
+ return ret;
+
++err_free:
++ release_entry_list(&p->entries);
++ kfree(p);
+ chain_put:
+- spin_unlock_bh(&gact->tcf_lock);
+-
+ if (goto_ch)
+ tcf_chain_put_by_act(goto_ch);
+ release_idr:
+@@ -452,21 +551,29 @@ release_idr:
+ * without taking tcf_lock.
+ */
+ if (ret == ACT_P_CREATED)
+- gate_setup_timer(gact, gact->param.tcfg_basetime,
+- gact->tk_offset, gact->param.tcfg_clockid,
+- true);
++ gate_setup_timer(gact, timer_clockid, tko);
++
+ tcf_idr_release(*a, bind);
+ return err;
+ }
+
++static void tcf_gate_params_free_rcu(struct rcu_head *head)
++{
++ struct tcf_gate_params *p = container_of(head, struct tcf_gate_params, rcu);
++
++ release_entry_list(&p->entries);
++ kfree(p);
++}
++
+ static void tcf_gate_cleanup(struct tc_action *a)
+ {
+ struct tcf_gate *gact = to_gate(a);
+ struct tcf_gate_params *p;
+
+- p = &gact->param;
+ hrtimer_cancel(&gact->hitimer);
+- release_entry_list(&p->entries);
++ p = rcu_dereference_protected(gact->param, 1);
++ if (p)
++ call_rcu(&p->rcu, tcf_gate_params_free_rcu);
+ }
+
+ static int dumping_entry(struct sk_buff *skb,
+@@ -515,10 +622,9 @@ static int tcf_gate_dump(struct sk_buff
+ struct nlattr *entry_list;
+ struct tcf_t t;
+
+- spin_lock_bh(&gact->tcf_lock);
+- opt.action = gact->tcf_action;
+-
+- p = &gact->param;
++ rcu_read_lock();
++ opt.action = READ_ONCE(gact->tcf_action);
++ p = rcu_dereference(gact->param);
+
+ if (nla_put(skb, TCA_GATE_PARMS, sizeof(opt), &opt))
+ goto nla_put_failure;
+@@ -558,12 +664,12 @@ static int tcf_gate_dump(struct sk_buff
+ tcf_tm_dump(&t, &gact->tcf_tm);
+ if (nla_put_64bit(skb, TCA_GATE_TM, sizeof(t), &t, TCA_GATE_PAD))
+ goto nla_put_failure;
+- spin_unlock_bh(&gact->tcf_lock);
++ rcu_read_unlock();
+
+ return skb->len;
+
+ nla_put_failure:
+- spin_unlock_bh(&gact->tcf_lock);
++ rcu_read_unlock();
+ nlmsg_trim(skb, b);
+ return -1;
+ }
--- /dev/null
+From stable+bounces-224567-greg=kroah.com@vger.kernel.org Tue Mar 10 21:18:05 2026
+From: Eric Biggers <ebiggers@kernel.org>
+Date: Tue, 10 Mar 2026 13:17:01 -0700
+Subject: net/tcp-md5: Fix MAC comparison to be constant-time
+To: stable@vger.kernel.org
+Cc: linux-crypto@vger.kernel.org, netdev@vger.kernel.org, Dmitry Safonov <0x7f454c46@gmail.com>, Eric Biggers <ebiggers@kernel.org>, Jakub Kicinski <kuba@kernel.org>
+Message-ID: <20260310201701.120016-1-ebiggers@kernel.org>
+
+From: Eric Biggers <ebiggers@kernel.org>
+
+commit 46d0d6f50dab706637f4c18a470aac20a21900d3 upstream.
+
+To prevent timing attacks, MACs need to be compared in constant
+time. Use the appropriate helper function for this.
+
+Fixes: cfb6eeb4c860 ("[TCP]: MD5 Signature Option (RFC2385) support.")
+Fixes: 658ddaaf6694 ("tcp: md5: RST: getting md5 key from listener")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers <ebiggers@kernel.org>
+Link: https://patch.msgid.link/20260302203409.13388-1-ebiggers@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv4/tcp.c | 3 ++-
+ net/ipv4/tcp_ipv4.c | 3 ++-
+ net/ipv6/tcp_ipv6.c | 3 ++-
+ 3 files changed, 6 insertions(+), 3 deletions(-)
+
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -244,6 +244,7 @@
+ #define pr_fmt(fmt) "TCP: " fmt
+
+ #include <crypto/hash.h>
++#include <crypto/utils.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
+@@ -4556,7 +4557,7 @@ tcp_inbound_md5_hash(const struct sock *
+ hash_expected,
+ NULL, skb);
+
+- if (genhash || memcmp(hash_location, newhash, 16) != 0) {
++ if (genhash || crypto_memneq(hash_location, newhash, 16)) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE);
+ if (family == AF_INET) {
+ net_info_ratelimited("MD5 Hash failed for (%pI4, %d)->(%pI4, %d)%s L3 index %d\n",
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -80,6 +80,7 @@
+ #include <linux/btf_ids.h>
+
+ #include <crypto/hash.h>
++#include <crypto/utils.h>
+ #include <linux/scatterlist.h>
+
+ #include <trace/events/tcp.h>
+@@ -776,7 +777,7 @@ static void tcp_v4_send_reset(const stru
+
+
+ genhash = tcp_v4_md5_hash_skb(newhash, key, NULL, skb);
+- if (genhash || memcmp(hash_location, newhash, 16) != 0)
++ if (genhash || crypto_memneq(hash_location, newhash, 16))
+ goto out;
+
+ }
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -64,6 +64,7 @@
+ #include <linux/seq_file.h>
+
+ #include <crypto/hash.h>
++#include <crypto/utils.h>
+ #include <linux/scatterlist.h>
+
+ #include <trace/events/tcp.h>
+@@ -1035,7 +1036,7 @@ static void tcp_v6_send_reset(const stru
+ goto out;
+
+ genhash = tcp_v6_md5_hash_skb(newhash, key, NULL, skb);
+- if (genhash || memcmp(hash_location, newhash, 16) != 0)
++ if (genhash || crypto_memneq(hash_location, newhash, 16))
+ goto out;
+ }
+ #endif
--- /dev/null
+From stable+bounces-223617-greg=kroah.com@vger.kernel.org Mon Mar 9 12:38:36 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 07:38:27 -0400
+Subject: platform/x86: hp-bioscfg: Support allocations of larger data
+To: stable@vger.kernel.org
+Cc: "Mario Limonciello" <mario.limonciello@amd.com>, "Paul Kerry" <p.kerry@sheffield.ac.uk>, "Ilpo Järvinen" <ilpo.jarvinen@linux.intel.com>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260309113827.823581-1-sashal@kernel.org>
+
+From: Mario Limonciello <mario.limonciello@amd.com>
+
+[ Upstream commit 916727cfdb72cd01fef3fa6746e648f8cb70e713 ]
+
+Some systems have much larger numbers of enumeration attributes
+than have previously been encountered. This can lead to page allocation
+failures when using kcalloc(). Switch over to using kvcalloc() to
+allow larger allocations.
+
+Fixes: 6b2770bfd6f92 ("platform/x86: hp-bioscfg: enum-attributes")
+Cc: stable@vger.kernel.org
+Reported-by: Paul Kerry <p.kerry@sheffield.ac.uk>
+Tested-by: Paul Kerry <p.kerry@sheffield.ac.uk>
+Closes: https://bugs.debian.org/1127612
+Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
+Link: https://patch.msgid.link/20260225210646.59381-1-mario.limonciello@amd.com
+Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+[ kcalloc() => kvcalloc() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/platform/x86/hp/hp-bioscfg/enum-attributes.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+--- a/drivers/platform/x86/hp/hp-bioscfg/enum-attributes.c
++++ b/drivers/platform/x86/hp/hp-bioscfg/enum-attributes.c
+@@ -96,8 +96,11 @@ int hp_alloc_enumeration_data(void)
+ bioscfg_drv.enumeration_instances_count =
+ hp_get_instance_count(HP_WMI_BIOS_ENUMERATION_GUID);
+
+- bioscfg_drv.enumeration_data = kcalloc(bioscfg_drv.enumeration_instances_count,
+- sizeof(*bioscfg_drv.enumeration_data), GFP_KERNEL);
++ if (!bioscfg_drv.enumeration_instances_count)
++ return -EINVAL;
++ bioscfg_drv.enumeration_data = kvcalloc(bioscfg_drv.enumeration_instances_count,
++ sizeof(*bioscfg_drv.enumeration_data), GFP_KERNEL);
++
+ if (!bioscfg_drv.enumeration_data) {
+ bioscfg_drv.enumeration_instances_count = 0;
+ return -ENOMEM;
+@@ -452,6 +455,6 @@ void hp_exit_enumeration_attributes(void
+ }
+ bioscfg_drv.enumeration_instances_count = 0;
+
+- kfree(bioscfg_drv.enumeration_data);
++ kvfree(bioscfg_drv.enumeration_data);
+ bioscfg_drv.enumeration_data = NULL;
+ }
--- /dev/null
+From stable+bounces-223703-greg=kroah.com@vger.kernel.org Mon Mar 9 17:38:10 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 12:28:42 -0400
+Subject: selftests: mptcp: add a check for 'add_addr_accepted'
+To: stable@vger.kernel.org
+Cc: Gang Yan <yangang@kylinos.cn>, Geliang Tang <geliang@kernel.org>, "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309162844.1306091-1-sashal@kernel.org>
+
+From: Gang Yan <yangang@kylinos.cn>
+
+[ Upstream commit 0eee0fdf9b7b0baf698f9b426384aa9714d76a51 ]
+
+The previous patch fixed an issue with the 'add_addr_accepted' counter.
+This was not spotted by the test suite.
+
+Check this counter and 'add_addr_signal' in MPTCP Join 'delete re-add
+signal' test. This should help spot similar regressions later on.
+These counters are crucial for ensuring the MPTCP path manager correctly
+handles the subflow creation via 'ADD_ADDR'.
+
+Signed-off-by: Gang Yan <yangang@kylinos.cn>
+Reviewed-by: Geliang Tang <geliang@kernel.org>
+Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20251118-net-mptcp-misc-fixes-6-18-rc6-v1-11-806d3781c95f@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Stable-dep-of: 560edd99b5f5 ("selftests: mptcp: join: check RM_ADDR not sent over same subflow")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/net/mptcp/mptcp_join.sh | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -3934,38 +3934,45 @@ endpoint_tests()
+ $ns1 10.0.2.1 id 1 flags signal
+ chk_subflow_nr "before delete" 2
+ chk_mptcp_info subflows 1 subflows 1
++ chk_mptcp_info add_addr_signal 2 add_addr_accepted 1
+
+ pm_nl_del_endpoint $ns1 1 10.0.2.1
+ pm_nl_del_endpoint $ns1 2 224.0.0.1
+ sleep 0.5
+ chk_subflow_nr "after delete" 1
+ chk_mptcp_info subflows 0 subflows 0
++ chk_mptcp_info add_addr_signal 0 add_addr_accepted 0
+
+ pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
+ pm_nl_add_endpoint $ns1 10.0.3.1 id 2 flags signal
+ wait_mpj $ns2
+ chk_subflow_nr "after re-add" 3
+ chk_mptcp_info subflows 2 subflows 2
++ chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
+
+ pm_nl_del_endpoint $ns1 42 10.0.1.1
+ sleep 0.5
+ chk_subflow_nr "after delete ID 0" 2
+ chk_mptcp_info subflows 2 subflows 2
++ chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
+
+ pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal
+ wait_mpj $ns2
+ chk_subflow_nr "after re-add ID 0" 3
+ chk_mptcp_info subflows 3 subflows 3
++ chk_mptcp_info add_addr_signal 3 add_addr_accepted 2
+
+ pm_nl_del_endpoint $ns1 99 10.0.1.1
+ sleep 0.5
+ chk_subflow_nr "after re-delete ID 0" 2
+ chk_mptcp_info subflows 2 subflows 2
++ chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
+
+ pm_nl_add_endpoint $ns1 10.0.1.1 id 88 flags signal
+ wait_mpj $ns2
+ chk_subflow_nr "after re-re-add ID 0" 3
+ chk_mptcp_info subflows 3 subflows 3
++ chk_mptcp_info add_addr_signal 3 add_addr_accepted 2
+ mptcp_lib_kill_group_wait $tests_pid
+
+ kill_events_pids
--- /dev/null
+From stable+bounces-223704-greg=kroah.com@vger.kernel.org Mon Mar 9 17:38:10 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 12:28:43 -0400
+Subject: selftests: mptcp: join: check RM_ADDR not sent over same subflow
+To: stable@vger.kernel.org
+Cc: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>, Mat Martineau <martineau@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309162844.1306091-2-sashal@kernel.org>
+
+From: "Matthieu Baerts (NGI0)" <matttbe@kernel.org>
+
+[ Upstream commit 560edd99b5f58b2d4bbe3c8e51e1eed68d887b0e ]
+
+This validates the previous commit: RM_ADDR were sent over the first
+found active subflow which could be the same as the one being removed.
+It is more likely to lose this notification.
+
+For this check, RM_ADDR are explicitly dropped when trying to send them
+over the initial subflow, when removing the endpoint attached to it. If
+it is dropped, the test will complain because some RM_ADDR have not been
+received.
+
+Note that only the RM_ADDR are dropped, to allow the linked subflow to
+be quickly and cleanly closed. To only drop those RM_ADDR, a cBPF byte
+code is used. If the IPTables commands fail, that's OK, the tests will
+continue to pass, but not validate this part. This can be ignored:
+another subtest fully depends on such command, and will be marked as
+skipped.
+
+The 'Fixes' tag here below is the same as the one from the previous
+commit: this patch here is not fixing anything wrong in the selftests,
+but it validates the previous fix for an issue introduced by this commit
+ID.
+
+Fixes: 8dd5efb1f91b ("mptcp: send ack for rm_addr")
+Cc: stable@vger.kernel.org
+Reviewed-by: Mat Martineau <martineau@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20260303-net-mptcp-misc-fixes-7-0-rc2-v1-3-4b5462b6f016@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/net/mptcp/mptcp_join.sh | 36 ++++++++++++++++++++++++
+ 1 file changed, 36 insertions(+)
+
+--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
++++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
+@@ -81,6 +81,24 @@ CBPF_MPTCP_SUBOPTION_ADD_ADDR="14,
+ 6 0 0 65535,
+ 6 0 0 0"
+
++# IPv4: TCP hdr of 48B, a first suboption of 12B (DACK8), the RM_ADDR suboption
++# generated using "nfbpf_compile '(ip[32] & 0xf0) == 0xc0 && ip[53] == 0x0c &&
++# (ip[66] & 0xf0) == 0x40'"
++CBPF_MPTCP_SUBOPTION_RM_ADDR="13,
++ 48 0 0 0,
++ 84 0 0 240,
++ 21 0 9 64,
++ 48 0 0 32,
++ 84 0 0 240,
++ 21 0 6 192,
++ 48 0 0 53,
++ 21 0 4 12,
++ 48 0 0 66,
++ 84 0 0 240,
++ 21 0 1 64,
++ 6 0 0 65535,
++ 6 0 0 0"
++
+ init_partial()
+ {
+ capout=$(mktemp)
+@@ -3880,6 +3898,14 @@ endpoint_tests()
+ chk_subflow_nr "after no reject" 3
+ chk_mptcp_info subflows 2 subflows 2
+
++ # To make sure RM_ADDR are sent over a different subflow, but
++ # allow the rest to quickly and cleanly close the subflow
++ local ipt=1
++ ip netns exec "${ns2}" ${iptables} -I OUTPUT -s "10.0.1.2" \
++ -p tcp -m tcp --tcp-option 30 \
++ -m bpf --bytecode \
++ "$CBPF_MPTCP_SUBOPTION_RM_ADDR" \
++ -j DROP || ipt=0
+ local i
+ for i in $(seq 3); do
+ pm_nl_del_endpoint $ns2 1 10.0.1.2
+@@ -3892,6 +3918,7 @@ endpoint_tests()
+ chk_subflow_nr "after re-add id 0 ($i)" 3
+ chk_mptcp_info subflows 3 subflows 3
+ done
++ [ ${ipt} = 1 ] && ip netns exec "${ns2}" ${iptables} -D OUTPUT 1
+
+ mptcp_lib_kill_group_wait $tests_pid
+
+@@ -3950,11 +3977,20 @@ endpoint_tests()
+ chk_mptcp_info subflows 2 subflows 2
+ chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
+
++ # To make sure RM_ADDR are sent over a different subflow, but
++ # allow the rest to quickly and cleanly close the subflow
++ local ipt=1
++ ip netns exec "${ns1}" ${iptables} -I OUTPUT -s "10.0.1.1" \
++ -p tcp -m tcp --tcp-option 30 \
++ -m bpf --bytecode \
++ "$CBPF_MPTCP_SUBOPTION_RM_ADDR" \
++ -j DROP || ipt=0
+ pm_nl_del_endpoint $ns1 42 10.0.1.1
+ sleep 0.5
+ chk_subflow_nr "after delete ID 0" 2
+ chk_mptcp_info subflows 2 subflows 2
+ chk_mptcp_info add_addr_signal 2 add_addr_accepted 2
++ [ ${ipt} = 1 ] && ip netns exec "${ns1}" ${iptables} -D OUTPUT 1
+
+ pm_nl_add_endpoint $ns1 10.0.1.1 id 99 flags signal
+ wait_mpj $ns2
i3c-mipi-i3c-hci-restart-dma-ring-correctly-after-dequeue-abort.patch
i3c-mipi-i3c-hci-add-missing-tid-field-to-no-op-command-descriptor.patch
drm-bridge-ti-sn65dsi86-add-support-for-displayport-mode-with-hpd.patch
+gve-defer-interrupt-enabling-until-napi-registration.patch
+ksmbd-call-ksmbd_vfs_kern_path_end_removing-on-some-error-paths.patch
+wifi-libertas-fix-use-after-free-in-lbs_free_adapter.patch
+platform-x86-hp-bioscfg-support-allocations-of-larger-data.patch
+x86-sev-allow-ibpb-on-entry-feature-for-snp-guests.patch
+gve-fix-incorrect-buffer-cleanup-in-gve_tx_clean_pending_packets-for-qpl.patch
+net-phy-register-phy-led_triggers-during-probe-to-avoid-ab-ba-deadlock.patch
+drm-amd-display-use-gfp_atomic-in-dc_create_stream_for_sink.patch
+mptcp-pm-avoid-sending-rm_addr-over-same-subflow.patch
+mptcp-pm-in-kernel-always-mark-signal-subflow-endp-as-used.patch
+selftests-mptcp-add-a-check-for-add_addr_accepted.patch
+selftests-mptcp-join-check-rm_addr-not-sent-over-same-subflow.patch
+kbuild-leave-objtool-binary-around-with-make-clean.patch
+net-sched-act_gate-snapshot-parameters-with-rcu-on-replace.patch
+can-gs_usb-gs_can_open-always-configure-bitrates-before-starting-device.patch
+usb-gadget-f_tcm-fix-null-pointer-dereferences-in-nexus-handling.patch
+kvm-svm-limit-avic-physical-max-index-based-on-configured-max_vcpu_ids.patch
+kvm-svm-add-a-helper-to-look-up-the-max-physical-id-for-avic.patch
+kvm-svm-set-clear-cr8-write-interception-when-avic-is-de-activated.patch
+mm-kfence-fix-kasan-hardware-tag-faults-during-late-enablement.patch
+iomap-reject-delalloc-mappings-during-writeback.patch
+ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch
+drm-msm-fix-dma_free_attrs-buffer-size.patch
+drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch
+net-macb-shuffle-the-tx-ring-before-enabling-tx.patch
+cifs-open-files-should-not-hold-ref-on-superblock.patch
+crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch
+xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch
+xfs-ensure-dquot-item-is-deleted-from-ail-only-after-log-shutdown.patch
+smb-client-compare-macs-in-constant-time.patch
+ksmbd-compare-macs-in-constant-time.patch
+net-tcp-md5-fix-mac-comparison-to-be-constant-time.patch
+f2fs-fix-to-avoid-migrating-empty-section.patch
--- /dev/null
+From stable+bounces-224555-greg=kroah.com@vger.kernel.org Tue Mar 10 20:51:12 2026
+From: Eric Biggers <ebiggers@kernel.org>
+Date: Tue, 10 Mar 2026 12:50:58 -0700
+Subject: smb: client: Compare MACs in constant time
+To: stable@vger.kernel.org
+Cc: linux-crypto@vger.kernel.org, linux-cifs@vger.kernel.org, Eric Biggers <ebiggers@kernel.org>, "Paulo Alcantara (Red Hat)" <pc@manguebit.org>, Steve French <stfrench@microsoft.com>
+Message-ID: <20260310195058.70682-1-ebiggers@kernel.org>
+
+From: Eric Biggers <ebiggers@kernel.org>
+
+commit 26bc83b88bbbf054f0980a4a42047a8d1e210e4c upstream.
+
+To prevent timing attacks, MAC comparisons need to be constant-time.
+Replace the memcmp() with the correct function, crypto_memneq().
+
+Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
+Cc: stable@vger.kernel.org
+Acked-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
+Signed-off-by: Eric Biggers <ebiggers@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/cifsencrypt.c | 3 ++-
+ fs/smb/client/smb2transport.c | 4 +++-
+ 2 files changed, 5 insertions(+), 2 deletions(-)
+
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -23,6 +23,7 @@
+ #include <linux/fips.h>
+ #include "../common/arc4.h"
+ #include <crypto/aead.h>
++#include <crypto/utils.h>
+
+ /*
+ * Hash data from a BVEC-type iterator.
+@@ -371,7 +372,7 @@ int cifs_verify_signature(struct smb_rqs
+ /* cifs_dump_mem("what we think it should be: ",
+ what_we_think_sig_should_be, 16); */
+
+- if (memcmp(server_response_sig, what_we_think_sig_should_be, 8))
++ if (crypto_memneq(server_response_sig, what_we_think_sig_should_be, 8))
+ return -EACCES;
+ else
+ return 0;
+--- a/fs/smb/client/smb2transport.c
++++ b/fs/smb/client/smb2transport.c
+@@ -19,6 +19,7 @@
+ #include <linux/mempool.h>
+ #include <linux/highmem.h>
+ #include <crypto/aead.h>
++#include <crypto/utils.h>
+ #include "cifsglob.h"
+ #include "cifsproto.h"
+ #include "smb2proto.h"
+@@ -732,7 +733,8 @@ smb2_verify_signature(struct smb_rqst *r
+ if (rc)
+ return rc;
+
+- if (memcmp(server_response_sig, shdr->Signature, SMB2_SIGNATURE_SIZE)) {
++ if (crypto_memneq(server_response_sig, shdr->Signature,
++ SMB2_SIGNATURE_SIZE)) {
+ cifs_dbg(VFS, "sign fail cmd 0x%x message id 0x%llx\n",
+ shdr->Command, shdr->MessageId);
+ return -EACCES;
--- /dev/null
+From stable+bounces-225695-greg=kroah.com@vger.kernel.org Mon Mar 16 21:17:26 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 16:17:19 -0400
+Subject: usb: gadget: f_tcm: Fix NULL pointer dereferences in nexus handling
+To: stable@vger.kernel.org
+Cc: Jiasheng Jiang <jiashengjiangcool@gmail.com>, stable <stable@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316201719.1375493-1-sashal@kernel.org>
+
+From: Jiasheng Jiang <jiashengjiangcool@gmail.com>
+
+[ Upstream commit b9fde507355342a2d64225d582dc8b98ff5ecb19 ]
+
+The `tpg->tpg_nexus` pointer in the USB Target driver is dynamically
+managed and tied to userspace configuration via ConfigFS. It can be
+NULL if the USB host sends requests before the nexus is fully
+established or immediately after it is dropped.
+
+Currently, functions like `bot_submit_command()` and the data
+transfer paths retrieve `tv_nexus = tpg->tpg_nexus` and immediately
+dereference `tv_nexus->tvn_se_sess` without any validation. If a
+malicious or misconfigured USB host sends a BOT (Bulk-Only Transport)
+command during this race window, it triggers a NULL pointer
+dereference, leading to a kernel panic (local DoS).
+
+This exposes an inconsistent API usage within the module, as peer
+functions like `usbg_submit_command()` and `bot_send_bad_response()`
+correctly implement a NULL check for `tv_nexus` before proceeding.
+
+Fix this by bringing consistency to the nexus handling. Add the
+missing `if (!tv_nexus)` checks to the vulnerable BOT command and
+request processing paths, aborting the command gracefully with an
+error instead of crashing the system.
+
+Fixes: c52661d60f63 ("usb-gadget: Initial merge of target module for UASP + BOT")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Jiasheng Jiang <jiashengjiangcool@gmail.com>
+Reviewed-by: Thinh Nguyen <Thinh.Nguyen@synopsys.com>
+Link: https://patch.msgid.link/20260219023834.17976-1-jiashengjiangcool@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/gadget/function/f_tcm.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1032,6 +1032,13 @@ static void usbg_cmd_work(struct work_st
+ se_cmd = &cmd->se_cmd;
+ tpg = cmd->fu->tpg;
+ tv_nexus = tpg->tpg_nexus;
++ if (!tv_nexus) {
++ struct usb_gadget *gadget = fuas_to_gadget(cmd->fu);
++
++ dev_err(&gadget->dev, "Missing nexus, ignoring command\n");
++ return;
++ }
++
+ dir = get_cmd_dir(cmd->cmd_buf);
+ if (dir < 0) {
+ __target_init_cmd(se_cmd,
+@@ -1160,6 +1167,13 @@ static void bot_cmd_work(struct work_str
+ se_cmd = &cmd->se_cmd;
+ tpg = cmd->fu->tpg;
+ tv_nexus = tpg->tpg_nexus;
++ if (!tv_nexus) {
++ struct usb_gadget *gadget = fuas_to_gadget(cmd->fu);
++
++ dev_err(&gadget->dev, "Missing nexus, ignoring command\n");
++ return;
++ }
++
+ dir = get_cmd_dir(cmd->cmd_buf);
+ if (dir < 0) {
+ __target_init_cmd(se_cmd,
--- /dev/null
+From stable+bounces-223605-greg=kroah.com@vger.kernel.org Mon Mar 9 12:14:25 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 07:09:31 -0400
+Subject: wifi: libertas: fix use-after-free in lbs_free_adapter()
+To: stable@vger.kernel.org
+Cc: Daniel Hodges <git@danielhodges.dev>, Johannes Berg <johannes.berg@intel.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309110931.808152-1-sashal@kernel.org>
+
+From: Daniel Hodges <git@danielhodges.dev>
+
+[ Upstream commit 03cc8f90d0537fcd4985c3319b4fafbf2e3fb1f0 ]
+
+The lbs_free_adapter() function uses timer_delete() (non-synchronous)
+for both command_timer and tx_lockup_timer before the structure is
+freed. This is incorrect because timer_delete() does not wait for
+any running timer callback to complete.
+
+If a timer callback is executing when lbs_free_adapter() is called,
+the callback will access freed memory since lbs_cfg_free() frees the
+containing structure immediately after lbs_free_adapter() returns.
+
+Both timer callbacks (lbs_cmd_timeout_handler and lbs_tx_lockup_handler)
+access priv->driver_lock, priv->cur_cmd, priv->dev, and other fields,
+which would all be use-after-free violations.
+
+Use timer_delete_sync() instead to ensure any running timer callback
+has completed before returning.
+
+This bug was introduced in commit 8f641d93c38a ("libertas: detect TX
+lockups and reset hardware") where del_timer() was used instead of
+del_timer_sync() in the cleanup path. The command_timer has had the
+same issue since the driver was first written.
+
+Fixes: 8f641d93c38a ("libertas: detect TX lockups and reset hardware")
+Fixes: 954ee164f4f4 ("[PATCH] libertas: reorganize and simplify init sequence")
+Cc: stable@vger.kernel.org
+Signed-off-by: Daniel Hodges <git@danielhodges.dev>
+Link: https://patch.msgid.link/20260206195356.15647-1-git@danielhodges.dev
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+[ del_timer() => timer_delete_sync() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/marvell/libertas/main.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/wireless/marvell/libertas/main.c
++++ b/drivers/net/wireless/marvell/libertas/main.c
+@@ -881,8 +881,8 @@ static void lbs_free_adapter(struct lbs_
+ {
+ lbs_free_cmd_buffer(priv);
+ kfifo_free(&priv->event_fifo);
+- del_timer(&priv->command_timer);
+- del_timer(&priv->tx_lockup_timer);
++ timer_delete_sync(&priv->command_timer);
++ timer_delete_sync(&priv->tx_lockup_timer);
+ del_timer(&priv->auto_deepsleep_timer);
+ }
+
--- /dev/null
+From stable+bounces-223632-greg=kroah.com@vger.kernel.org Mon Mar 9 13:52:58 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 9 Mar 2026 08:48:15 -0400
+Subject: x86/sev: Allow IBPB-on-Entry feature for SNP guests
+To: stable@vger.kernel.org
+Cc: Kim Phillips <kim.phillips@amd.com>, "Borislav Petkov (AMD)" <bp@alien8.de>, Nikunj A Dadhania <nikunj@amd.com>, Tom Lendacky <thomas.lendacky@amd.com>, stable@kernel.org, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260309124815.862405-1-sashal@kernel.org>
+
+From: Kim Phillips <kim.phillips@amd.com>
+
+[ Upstream commit 9073428bb204d921ae15326bb7d4558d9d269aab ]
+
+The SEV-SNP IBPB-on-Entry feature does not require a guest-side
+implementation. It was added in Zen5 h/w, after the first SNP Zen
+implementation, and thus was not accounted for when the initial set of SNP
+features were added to the kernel.
+
+Out of an abundance of caution, commit
+
+ 8c29f0165405 ("x86/sev: Add SEV-SNP guest feature negotiation support")
+
+included SEV_STATUS' IBPB-on-Entry bit as a reserved bit, thereby masking
+guests from using the feature.
+
+Allow guests to make use of IBPB-on-Entry when supported by the hypervisor, as
+the bit is now architecturally defined and safe to expose.
+
+Fixes: 8c29f0165405 ("x86/sev: Add SEV-SNP guest feature negotiation support")
+Signed-off-by: Kim Phillips <kim.phillips@amd.com>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>
+Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
+Cc: stable@kernel.org
+Link: https://patch.msgid.link/20260203222405.4065706-2-kim.phillips@amd.com
+[ No SECURE_AVIC ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/boot/compressed/sev.c | 1 +
+ arch/x86/include/asm/msr-index.h | 5 ++++-
+ 2 files changed, 5 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -341,6 +341,7 @@ static void enforce_vmpl0(void)
+ MSR_AMD64_SNP_VMSA_REG_PROTECTION | \
+ MSR_AMD64_SNP_RESERVED_BIT13 | \
+ MSR_AMD64_SNP_RESERVED_BIT15 | \
++ MSR_AMD64_SNP_RESERVED_BITS18_22 | \
+ MSR_AMD64_SNP_RESERVED_MASK)
+
+ /*
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -632,11 +632,14 @@
+ #define MSR_AMD64_SNP_IBS_VIRT BIT_ULL(14)
+ #define MSR_AMD64_SNP_VMSA_REG_PROTECTION BIT_ULL(16)
+ #define MSR_AMD64_SNP_SMT_PROTECTION BIT_ULL(17)
++#define MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT 23
++#define MSR_AMD64_SNP_IBPB_ON_ENTRY BIT_ULL(MSR_AMD64_SNP_IBPB_ON_ENTRY_BIT)
+
+ /* SNP feature bits reserved for future use. */
+ #define MSR_AMD64_SNP_RESERVED_BIT13 BIT_ULL(13)
+ #define MSR_AMD64_SNP_RESERVED_BIT15 BIT_ULL(15)
+-#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 18)
++#define MSR_AMD64_SNP_RESERVED_BITS18_22 GENMASK_ULL(22, 18)
++#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, 24)
+
+ #define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f
+
--- /dev/null
+From stable+bounces-227259-greg=kroah.com@vger.kernel.org Thu Mar 19 11:54:59 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 06:50:05 -0400
+Subject: xfs: ensure dquot item is deleted from AIL only after log shutdown
+To: stable@vger.kernel.org
+Cc: Long Li <leo.lilong@huawei.com>, Carlos Maiolino <cmaiolino@redhat.com>, Christoph Hellwig <hch@lst.de>, Carlos Maiolino <cem@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319105005.2298220-1-sashal@kernel.org>
+
+From: Long Li <leo.lilong@huawei.com>
+
+[ Upstream commit 186ac39b8a7d3ec7ce9c5dd45e5c2730177f375c ]
+
+In xfs_qm_dqflush(), when a dquot flush fails due to corruption
+(the out_abort error path), the original code removed the dquot log
+item from the AIL before calling xfs_force_shutdown(). This ordering
+introduces a subtle race condition that can lead to data loss after
+a crash.
+
+The AIL tracks the oldest dirty metadata in the journal. The position
+of the tail item in the AIL determines the log tail LSN, which is the
+oldest LSN that must be preserved for crash recovery. When an item is
+removed from the AIL, the log tail can advance past the LSN of that item.
+
+The race window is as follows: if the dquot item happens to be at
+the tail of the log, removing it from the AIL allows the log tail
+to advance. If a concurrent log write is sampling the tail LSN at
+the same time and subsequently writes a complete checkpoint (i.e.,
+one containing a commit record) to disk before the shutdown takes
+effect, the journal will no longer protect the dquot's last
+modification. On the next mount, log recovery will not replay the
+dquot changes, even though they were never written back to disk,
+resulting in silent data loss.
+
+Fix this by calling xfs_force_shutdown() before xfs_trans_ail_delete()
+in the out_abort path. Once the log is shut down, no new log writes
+can complete with an updated tail LSN, making it safe to remove the
+dquot item from the AIL.
+
+Cc: stable@vger.kernel.org
+Fixes: b707fffda6a3 ("xfs: abort consistently on dquot flush failure")
+Signed-off-by: Long Li <leo.lilong@huawei.com>
+Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+[ adapted error path to preserve existing out_unlock label between xfs_trans_ail_delete and xfs_dqfunlock ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_dquot.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_dquot.c
++++ b/fs/xfs/xfs_dquot.c
+@@ -1297,9 +1297,15 @@ xfs_qm_dqflush(
+ return 0;
+
+ out_abort:
++ /*
++ * Shut down the log before removing the dquot item from the AIL.
++ * Otherwise, the log tail may advance past this item's LSN while
++ * log writes are still in progress, making these unflushed changes
++ * unrecoverable on the next mount.
++ */
++ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
+ xfs_trans_ail_delete(lip, 0);
+- xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ out_unlock:
+ xfs_dqfunlock(dqp);
+ return error;
--- /dev/null
+From stable+bounces-227258-greg=kroah.com@vger.kernel.org Thu Mar 19 11:54:56 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 19 Mar 2026 06:49:47 -0400
+Subject: xfs: fix integer overflow in bmap intent sort comparator
+To: stable@vger.kernel.org
+Cc: Long Li <leo.lilong@huawei.com>, "Darrick J. Wong" <djwong@kernel.org>, Carlos Maiolino <cem@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319104947.2288756-1-sashal@kernel.org>
+
+From: Long Li <leo.lilong@huawei.com>
+
+[ Upstream commit 362c490980867930a098b99f421268fbd7ca05fd ]
+
+xfs_bmap_update_diff_items() sorts bmap intents by inode number using
+a subtraction of two xfs_ino_t (uint64_t) values, with the result
+truncated to int. This is incorrect when two inode numbers differ by
+more than INT_MAX (2^31 - 1), which is entirely possible on large XFS
+filesystems.
+
+Fix this by replacing the subtraction with cmp_int().
+
+Cc: <stable@vger.kernel.org> # v4.9
+Fixes: 9f3afb57d5f1 ("xfs: implement deferred bmbt map/unmap operations")
+Signed-off-by: Long Li <leo.lilong@huawei.com>
+Reviewed-by: Darrick J. Wong <djwong@kernel.org>
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+[ replaced `bi_entry()` macro with `container_of()` and inlined `cmp_int()` as a manual three-way comparison expression ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_bmap_item.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_bmap_item.c
++++ b/fs/xfs/xfs_bmap_item.c
+@@ -278,7 +278,8 @@ xfs_bmap_update_diff_items(
+
+ ba = container_of(a, struct xfs_bmap_intent, bi_list);
+ bb = container_of(b, struct xfs_bmap_intent, bi_list);
+- return ba->bi_owner->i_ino - bb->bi_owner->i_ino;
++ return (ba->bi_owner->i_ino > bb->bi_owner->i_ino) -
++ (ba->bi_owner->i_ino < bb->bi_owner->i_ino);
+ }
+
+ /* Set the map extent flags for this mapping. */