git.ipfire.org Git - thirdparty/kernel/stable-queue.git/commitdiff
6.12-stable patches
author Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 19 Mar 2026 09:45:45 +0000 (10:45 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 19 Mar 2026 09:45:45 +0000 (10:45 +0100)
added patches:
cifs-open-files-should-not-hold-ref-on-superblock.patch
crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch
drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch
fgraph-fix-thresh_return-clear-per-task-notrace.patch
ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch
kvm-nvmx-add-consistency-checks-for-cr0.wp-and-cr4.cet.patch
kvm-x86-allow-vendor-code-to-disable-quirks.patch
kvm-x86-co-locate-initialization-of-feature-msrs-in-kvm_arch_vcpu_create.patch
kvm-x86-do-not-allow-re-enabling-quirks.patch
kvm-x86-introduce-intel-specific-quirk-kvm_x86_quirk_ignore_guest_pat.patch
kvm-x86-introduce-kvm_x86_quirk_vmcs12_allow_freeze_in_smm.patch
kvm-x86-introduce-supported_quirks-to-block-disabling-quirks.patch
kvm-x86-quirk-initialization-of-feature-msrs-to-kvm-s-max-configuration.patch
net-macb-shuffle-the-tx-ring-before-enabling-tx.patch
xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch

16 files changed:
queue-6.12/cifs-open-files-should-not-hold-ref-on-superblock.patch [new file with mode: 0644]
queue-6.12/crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch [new file with mode: 0644]
queue-6.12/drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch [new file with mode: 0644]
queue-6.12/fgraph-fix-thresh_return-clear-per-task-notrace.patch [new file with mode: 0644]
queue-6.12/ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch [new file with mode: 0644]
queue-6.12/kvm-nvmx-add-consistency-checks-for-cr0.wp-and-cr4.cet.patch [new file with mode: 0644]
queue-6.12/kvm-x86-allow-vendor-code-to-disable-quirks.patch [new file with mode: 0644]
queue-6.12/kvm-x86-co-locate-initialization-of-feature-msrs-in-kvm_arch_vcpu_create.patch [new file with mode: 0644]
queue-6.12/kvm-x86-do-not-allow-re-enabling-quirks.patch [new file with mode: 0644]
queue-6.12/kvm-x86-introduce-intel-specific-quirk-kvm_x86_quirk_ignore_guest_pat.patch [new file with mode: 0644]
queue-6.12/kvm-x86-introduce-kvm_x86_quirk_vmcs12_allow_freeze_in_smm.patch [new file with mode: 0644]
queue-6.12/kvm-x86-introduce-supported_quirks-to-block-disabling-quirks.patch [new file with mode: 0644]
queue-6.12/kvm-x86-quirk-initialization-of-feature-msrs-to-kvm-s-max-configuration.patch [new file with mode: 0644]
queue-6.12/net-macb-shuffle-the-tx-ring-before-enabling-tx.patch [new file with mode: 0644]
queue-6.12/series
queue-6.12/xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch [new file with mode: 0644]

diff --git a/queue-6.12/cifs-open-files-should-not-hold-ref-on-superblock.patch b/queue-6.12/cifs-open-files-should-not-hold-ref-on-superblock.patch
new file mode 100644 (file)
index 0000000..c9cd5db
--- /dev/null
@@ -0,0 +1,217 @@
+From stable+bounces-227173-greg=kroah.com@vger.kernel.org Wed Mar 18 22:35:56 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 17:31:29 -0400
+Subject: cifs: open files should not hold ref on superblock
+To: stable@vger.kernel.org
+Cc: Shyam Prasad N <sprasad@microsoft.com>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318213129.1363807-1-sashal@kernel.org>
+
+From: Shyam Prasad N <sprasad@microsoft.com>
+
+[ Upstream commit 340cea84f691c5206561bb2e0147158fe02070be ]
+
+Today whenever we deal with a file, in addition to holding
+a reference on the dentry, we also get a reference on the
+superblock. This happens in two cases:
+1. when a new cinode is allocated
+2. when an oplock break is being processed
+
+The reasoning for holding the superblock ref was to make sure
+that when umount happens, if there are users of inodes and
+dentries, it does not try to clean them up and wait for the
+last ref to superblock to be dropped by last of such users.
+
+But the side effect of doing that is that umount silently drops
+a ref on the superblock and we could have deferred closes and
+lease breaks still holding these refs.
+
+Ideally, we should ensure that all of these users of inodes and
+dentries are cleaned up at the time of umount, which is what this
+code is doing.
+
+This code change allows these code paths to use a ref on the
+dentry (and hence the inode). That way, umount is
+ensured to clean up SMB client resources when it's the last
+ref on the superblock (For ex: when same objects are shared).
+
+The code change also moves the call to close all the files in
+deferred close list to the umount code path. It also waits for
+oplock_break workers to be flushed before calling
+kill_anon_super (which eventually frees up those objects).
+
+Fixes: 24261fc23db9 ("cifs: delay super block destruction until all cifsFileInfo objects are gone")
+Fixes: 705c79101ccf ("smb: client: fix use-after-free in cifs_oplock_break")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Shyam Prasad N <sprasad@microsoft.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ replaced kmalloc_obj() with kmalloc(sizeof(...)) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/cifsfs.c    |    7 +++++--
+ fs/smb/client/cifsproto.h |    1 +
+ fs/smb/client/file.c      |   11 -----------
+ fs/smb/client/misc.c      |   42 ++++++++++++++++++++++++++++++++++++++++++
+ fs/smb/client/trace.h     |    2 ++
+ 5 files changed, 50 insertions(+), 13 deletions(-)
+
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -291,10 +291,14 @@ static void cifs_kill_sb(struct super_bl
+       /*
+        * We need to release all dentries for the cached directories
+-       * before we kill the sb.
++       * and close all deferred file handles before we kill the sb.
+        */
+       if (cifs_sb->root) {
+               close_all_cached_dirs(cifs_sb);
++              cifs_close_all_deferred_files_sb(cifs_sb);
++
++              /* Wait for all pending oplock breaks to complete */
++              flush_workqueue(cifsoplockd_wq);
+               /* finally release root dentry */
+               dput(cifs_sb->root);
+@@ -799,7 +803,6 @@ static void cifs_umount_begin(struct sup
+       spin_unlock(&tcon->tc_lock);
+       spin_unlock(&cifs_tcp_ses_lock);
+-      cifs_close_all_deferred_files(tcon);
+       /* cancel_brl_requests(tcon); */ /* BB mark all brl mids as exiting */
+       /* cancel_notify_requests(tcon); */
+       if (tcon->ses && tcon->ses->server) {
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -298,6 +298,7 @@ extern void cifs_close_deferred_file(str
+ extern void cifs_close_all_deferred_files(struct cifs_tcon *cifs_tcon);
++void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb);
+ void cifs_close_deferred_file_under_dentry(struct cifs_tcon *cifs_tcon,
+                                          struct dentry *dentry);
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -690,8 +690,6 @@ struct cifsFileInfo *cifs_new_fileinfo(s
+       mutex_init(&cfile->fh_mutex);
+       spin_lock_init(&cfile->file_info_lock);
+-      cifs_sb_active(inode->i_sb);
+-
+       /*
+        * If the server returned a read oplock and we have mandatory brlocks,
+        * set oplock level to None.
+@@ -746,7 +744,6 @@ static void cifsFileInfo_put_final(struc
+       struct inode *inode = d_inode(cifs_file->dentry);
+       struct cifsInodeInfo *cifsi = CIFS_I(inode);
+       struct cifsLockInfo *li, *tmp;
+-      struct super_block *sb = inode->i_sb;
+       /*
+        * Delete any outstanding lock records. We'll lose them when the file
+@@ -764,7 +761,6 @@ static void cifsFileInfo_put_final(struc
+       cifs_put_tlink(cifs_file->tlink);
+       dput(cifs_file->dentry);
+-      cifs_sb_deactive(sb);
+       kfree(cifs_file->symlink_target);
+       kfree(cifs_file);
+ }
+@@ -3075,12 +3071,6 @@ void cifs_oplock_break(struct work_struc
+       __u64 persistent_fid, volatile_fid;
+       __u16 net_fid;
+-      /*
+-       * Hold a reference to the superblock to prevent it and its inodes from
+-       * being freed while we are accessing cinode. Otherwise, _cifsFileInfo_put()
+-       * may release the last reference to the sb and trigger inode eviction.
+-       */
+-      cifs_sb_active(sb);
+       wait_on_bit(&cinode->flags, CIFS_INODE_PENDING_WRITERS,
+                       TASK_UNINTERRUPTIBLE);
+@@ -3153,7 +3143,6 @@ oplock_break_ack:
+       cifs_put_tlink(tlink);
+ out:
+       cifs_done_oplock_break(cinode);
+-      cifs_sb_deactive(sb);
+ }
+ static int cifs_swap_activate(struct swap_info_struct *sis,
+--- a/fs/smb/client/misc.c
++++ b/fs/smb/client/misc.c
+@@ -27,6 +27,11 @@
+ #include "fs_context.h"
+ #include "cached_dir.h"
++struct tcon_list {
++      struct list_head entry;
++      struct cifs_tcon *tcon;
++};
++
+ /* The xid serves as a useful identifier for each incoming vfs request,
+    in a similar way to the mid which is useful to track each sent smb,
+    and CurrentXid can also provide a running counter (although it
+@@ -829,6 +834,43 @@ cifs_close_all_deferred_files(struct cif
+               kfree(tmp_list);
+       }
+ }
++
++void cifs_close_all_deferred_files_sb(struct cifs_sb_info *cifs_sb)
++{
++      struct rb_root *root = &cifs_sb->tlink_tree;
++      struct rb_node *node;
++      struct cifs_tcon *tcon;
++      struct tcon_link *tlink;
++      struct tcon_list *tmp_list, *q;
++      LIST_HEAD(tcon_head);
++
++      spin_lock(&cifs_sb->tlink_tree_lock);
++      for (node = rb_first(root); node; node = rb_next(node)) {
++              tlink = rb_entry(node, struct tcon_link, tl_rbnode);
++              tcon = tlink_tcon(tlink);
++              if (IS_ERR(tcon))
++                      continue;
++              tmp_list = kmalloc(sizeof(struct tcon_list), GFP_ATOMIC);
++              if (tmp_list == NULL)
++                      break;
++              tmp_list->tcon = tcon;
++              /* Take a reference on tcon to prevent it from being freed */
++              spin_lock(&tcon->tc_lock);
++              ++tcon->tc_count;
++              trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
++                                  netfs_trace_tcon_ref_get_close_defer_files);
++              spin_unlock(&tcon->tc_lock);
++              list_add_tail(&tmp_list->entry, &tcon_head);
++      }
++      spin_unlock(&cifs_sb->tlink_tree_lock);
++
++      list_for_each_entry_safe(tmp_list, q, &tcon_head, entry) {
++              cifs_close_all_deferred_files(tmp_list->tcon);
++              list_del(&tmp_list->entry);
++              cifs_put_tcon(tmp_list->tcon, netfs_trace_tcon_ref_put_close_defer_files);
++              kfree(tmp_list);
++      }
++}
+ void cifs_close_deferred_file_under_dentry(struct cifs_tcon *tcon,
+                                          struct dentry *dentry)
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -47,6 +47,7 @@
+       EM(netfs_trace_tcon_ref_get_cached_laundromat,  "GET Ch-Lau") \
+       EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \
+       EM(netfs_trace_tcon_ref_get_cancelled_close,    "GET Cn-Cls") \
++      EM(netfs_trace_tcon_ref_get_close_defer_files,  "GET Cl-Def") \
+       EM(netfs_trace_tcon_ref_get_dfs_refer,          "GET DfsRef") \
+       EM(netfs_trace_tcon_ref_get_find,               "GET Find  ") \
+       EM(netfs_trace_tcon_ref_get_find_sess_tcon,     "GET FndSes") \
+@@ -58,6 +59,7 @@
+       EM(netfs_trace_tcon_ref_put_cancelled_close,    "PUT Cn-Cls") \
+       EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
+       EM(netfs_trace_tcon_ref_put_cancelled_mid,      "PUT Cn-Mid") \
++      EM(netfs_trace_tcon_ref_put_close_defer_files,  "PUT Cl-Def") \
+       EM(netfs_trace_tcon_ref_put_mnt_ctx,            "PUT MntCtx") \
+       EM(netfs_trace_tcon_ref_put_dfs_refer,          "PUT DfsRfr") \
+       EM(netfs_trace_tcon_ref_put_reconnect_server,   "PUT Reconn") \
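The new cifs_close_all_deferred_files_sb() above uses a common kernel idiom: take references and build a private list while holding the spinlock (where sleeping, and hence closing files or allocating with GFP_KERNEL, is forbidden), then do the sleeping work after the lock is dropped. A standalone userspace sketch of the two phases, with simplified hypothetical types and the locking itself elided:

```c
#include <assert.h>
#include <stdlib.h>

struct tcon { int refcount; int closed; };

struct node { struct tcon *tcon; struct node *next; };

/* Phase 1 (under the lock in the kernel code): walk the tcons, take a
 * reference on each, and stash it on a private list. The kernel uses
 * GFP_ATOMIC here because sleeping is not allowed under the spinlock. */
static struct node *collect(struct tcon **tcons, int n)
{
    struct node *head = NULL;
    for (int i = 0; i < n; i++) {
        struct node *e = malloc(sizeof(*e));
        if (!e)
            break;                 /* best effort, mirroring the patch */
        tcons[i]->refcount++;      /* pin the tcon so it cannot go away */
        e->tcon = tcons[i];
        e->next = head;
        head = e;
    }
    return head;
}

/* Phase 2 (after dropping the lock): do the work that may sleep, then
 * release the reference taken in phase 1. */
static void process(struct node *head)
{
    while (head) {
        struct node *next = head->next;
        head->tcon->closed = 1;    /* stands in for closing deferred files */
        head->tcon->refcount--;
        free(head);
        head = next;
    }
}
```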
diff --git a/queue-6.12/crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch b/queue-6.12/crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch
new file mode 100644 (file)
index 0000000..1c72bd5
--- /dev/null
@@ -0,0 +1,41 @@
+From stable+bounces-227193-greg=kroah.com@vger.kernel.org Thu Mar 19 01:59:48 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 20:59:41 -0400
+Subject: crypto: atmel-sha204a - Fix OOM ->tfm_count leak
+To: stable@vger.kernel.org
+Cc: Thorsten Blum <thorsten.blum@linux.dev>, Herbert Xu <herbert@gondor.apana.org.au>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319005941.1860779-1-sashal@kernel.org>
+
+From: Thorsten Blum <thorsten.blum@linux.dev>
+
+[ Upstream commit d240b079a37e90af03fd7dfec94930eb6c83936e ]
+
+If memory allocation fails, decrement ->tfm_count to avoid blocking
+future reads.
+
+Cc: stable@vger.kernel.org
+Fixes: da001fb651b0 ("crypto: atmel-i2c - add support for SHA204A random number generator")
+Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+[ adapted kmalloc_obj() macro to kmalloc(sizeof()) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/crypto/atmel-sha204a.c |    5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/crypto/atmel-sha204a.c
++++ b/drivers/crypto/atmel-sha204a.c
+@@ -52,9 +52,10 @@ static int atmel_sha204a_rng_read_nonblo
+               rng->priv = 0;
+       } else {
+               work_data = kmalloc(sizeof(*work_data), GFP_ATOMIC);
+-              if (!work_data)
++              if (!work_data) {
++                      atomic_dec(&i2c_priv->tfm_count);
+                       return -ENOMEM;
+-
++              }
+               work_data->ctx = i2c_priv;
+               work_data->client = i2c_priv->client;
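The fix above is an instance of a general rule: a counter bumped before a fallible allocation must be unwound on the failure path, or the stale in-flight count blocks later callers. A standalone sketch of the idiom, with hypothetical names and the kernel's atomic_t modeled by C11 stdatomic:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

static atomic_int tfm_count;

/* Simulated read path: bump the in-flight counter, then try to
 * allocate work data. On allocation failure the counter must be
 * decremented again, or future reads would see a count that never
 * drains and stall. */
static int rng_read(int simulate_oom)
{
    atomic_fetch_add(&tfm_count, 1);

    void *work_data = simulate_oom ? NULL : malloc(16);
    if (!work_data) {
        atomic_fetch_sub(&tfm_count, 1);  /* undo the bump: the fix */
        return -1;                        /* stands in for -ENOMEM */
    }
    /* ... queue the work; the worker drops the count when done ... */
    free(work_data);
    atomic_fetch_sub(&tfm_count, 1);
    return 0;
}
```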
diff --git a/queue-6.12/drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch b/queue-6.12/drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch
new file mode 100644 (file)
index 0000000..94a0213
--- /dev/null
@@ -0,0 +1,73 @@
+From stable+bounces-227112-greg=kroah.com@vger.kernel.org Wed Mar 18 17:31:42 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 12:02:18 -0400
+Subject: drm/bridge: ti-sn65dsi83: halve horizontal syncs for dual LVDS output
+To: stable@vger.kernel.org
+Cc: Luca Ceresoli <luca.ceresoli@bootlin.com>, Marek Vasut <marek.vasut@mailbox.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318160218.901785-1-sashal@kernel.org>
+
+From: Luca Ceresoli <luca.ceresoli@bootlin.com>
+
+[ Upstream commit d0d727746944096a6681dc6adb5f123fc5aa018d ]
+
+Dual LVDS output (available on the SN65DSI84) requires HSYNC_PULSE_WIDTH
+and HORIZONTAL_BACK_PORCH to be divided by two with respect to the values
+used for single LVDS output.
+
+While not clearly stated in the datasheet, this is needed according to the
+DSI Tuner [0] output. It also makes sense intuitively because in dual LVDS
+output two pixels at a time are output and so the output clock is half of
+the pixel clock.
+
+Some dual-LVDS panels refuse to show any picture without this fix.
+
+Divide by two HORIZONTAL_FRONT_PORCH too, even though this register is used
+only for test pattern generation which is not currently implemented by this
+driver.
+
+[0] https://www.ti.com/tool/DSI-TUNER
+
+Fixes: ceb515ba29ba ("drm/bridge: ti-sn65dsi83: Add TI SN65DSI83 and SN65DSI84 driver")
+Cc: stable@vger.kernel.org
+Reviewed-by: Marek Vasut <marek.vasut@mailbox.org>
+Link: https://patch.msgid.link/20260226-ti-sn65dsi83-dual-lvds-fixes-and-test-pattern-v1-2-2e15f5a9a6a0@bootlin.com
+Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
+[ adapted variable declaration placement ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/bridge/ti-sn65dsi83.c |    7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c
+@@ -325,6 +325,7 @@ static void sn65dsi83_atomic_pre_enable(
+                                       struct drm_bridge_state *old_bridge_state)
+ {
+       struct sn65dsi83 *ctx = bridge_to_sn65dsi83(bridge);
++      const unsigned int dual_factor = ctx->lvds_dual_link ? 2 : 1;
+       struct drm_atomic_state *state = old_bridge_state->base.state;
+       const struct drm_bridge_state *bridge_state;
+       const struct drm_crtc_state *crtc_state;
+@@ -452,18 +453,18 @@ static void sn65dsi83_atomic_pre_enable(
+       /* 32 + 1 pixel clock to ensure proper operation */
+       le16val = cpu_to_le16(32 + 1);
+       regmap_bulk_write(ctx->regmap, REG_VID_CHA_SYNC_DELAY_LOW, &le16val, 2);
+-      le16val = cpu_to_le16(mode->hsync_end - mode->hsync_start);
++      le16val = cpu_to_le16((mode->hsync_end - mode->hsync_start) / dual_factor);
+       regmap_bulk_write(ctx->regmap, REG_VID_CHA_HSYNC_PULSE_WIDTH_LOW,
+                         &le16val, 2);
+       le16val = cpu_to_le16(mode->vsync_end - mode->vsync_start);
+       regmap_bulk_write(ctx->regmap, REG_VID_CHA_VSYNC_PULSE_WIDTH_LOW,
+                         &le16val, 2);
+       regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_BACK_PORCH,
+-                   mode->htotal - mode->hsync_end);
++                   (mode->htotal - mode->hsync_end) / dual_factor);
+       regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_BACK_PORCH,
+                    mode->vtotal - mode->vsync_end);
+       regmap_write(ctx->regmap, REG_VID_CHA_HORIZONTAL_FRONT_PORCH,
+-                   mode->hsync_start - mode->hdisplay);
++                   (mode->hsync_start - mode->hdisplay) / dual_factor);
+       regmap_write(ctx->regmap, REG_VID_CHA_VERTICAL_FRONT_PORCH,
+                    mode->vsync_start - mode->vdisplay);
+       regmap_write(ctx->regmap, REG_VID_CHA_TEST_PATTERN, 0x00);
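The reason for the halving: in dual-link LVDS the bridge emits two pixels per LVDS clock, so horizontal timing registers, which count output clocks, take half of the pixel-clock values, while vertical timings are unaffected. A standalone sketch of the arithmetic, using a hypothetical mode struct rather than the real DRM types:

```c
#include <assert.h>

/* Hypothetical mode struct mirroring the few DRM fields used here. */
struct mode {
    int hdisplay, hsync_start, hsync_end, htotal;
};

/* Sync pulse width in output clocks: halved for dual-link output. */
static int hsync_width_reg(const struct mode *m, int dual_link)
{
    int dual_factor = dual_link ? 2 : 1;
    return (m->hsync_end - m->hsync_start) / dual_factor;
}

/* Horizontal back porch in output clocks, halved the same way. */
static int hback_porch_reg(const struct mode *m, int dual_link)
{
    int dual_factor = dual_link ? 2 : 1;
    return (m->htotal - m->hsync_end) / dual_factor;
}
```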
diff --git a/queue-6.12/fgraph-fix-thresh_return-clear-per-task-notrace.patch b/queue-6.12/fgraph-fix-thresh_return-clear-per-task-notrace.patch
new file mode 100644 (file)
index 0000000..6469347
--- /dev/null
@@ -0,0 +1,51 @@
+From stable+bounces-227028-greg=kroah.com@vger.kernel.org Wed Mar 18 12:39:27 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 07:39:12 -0400
+Subject: fgraph: Fix thresh_return clear per-task notrace
+To: stable@vger.kernel.org
+Cc: Shengming Hu <hu.shengming@zte.com.cn>, "Masami Hiramatsu (Google)" <mhiramat@kernel.org>, "Steven Rostedt (Google)" <rostedt@goodmis.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318113912.631601-1-sashal@kernel.org>
+
+From: Shengming Hu <hu.shengming@zte.com.cn>
+
+[ Upstream commit 6ca8379b5d36e22b04e6315c3e49a6083377c862 ]
+
+When tracing_thresh is enabled, function graph tracing uses
+trace_graph_thresh_return() as the return handler. Unlike
+trace_graph_return(), it did not clear the per-task TRACE_GRAPH_NOTRACE
+flag set by the entry handler for set_graph_notrace addresses. This could
+leave the task permanently in "notrace" state and effectively disable
+function graph tracing for that task.
+
+Mirror trace_graph_return()'s per-task notrace handling by clearing
+TRACE_GRAPH_NOTRACE and returning early when set.
+
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260221113007819YgrZsMGABff4Rc-O_fZxL@zte.com.cn
+Fixes: b84214890a9bc ("function_graph: Move graph notrace bit to shadow stack global var")
+Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Shengming Hu <hu.shengming@zte.com.cn>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_functions_graph.c |    6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -271,10 +271,12 @@ void trace_graph_return(struct ftrace_gr
+ static void trace_graph_thresh_return(struct ftrace_graph_ret *trace,
+                                     struct fgraph_ops *gops)
+ {
++      unsigned long *task_var = fgraph_get_task_var(gops);
++
+       ftrace_graph_addr_finish(gops, trace);
+-      if (trace_recursion_test(TRACE_GRAPH_NOTRACE_BIT)) {
+-              trace_recursion_clear(TRACE_GRAPH_NOTRACE_BIT);
++      if (*task_var & TRACE_GRAPH_NOTRACE) {
++              *task_var &= ~TRACE_GRAPH_NOTRACE;
+               return;
+       }
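The fix is a test-and-clear: the mark that the entry handler set on the task must be consumed by the return handler, otherwise it persists and the task stays in "notrace" state forever. A standalone sketch of the flag handling, with a hypothetical bit value standing in for the real TRACE_GRAPH_NOTRACE:

```c
#include <assert.h>

#define TRACE_GRAPH_NOTRACE (1UL << 0)  /* hypothetical bit value */

/* Return handler: mirror trace_graph_return() by clearing the
 * per-task notrace bit and suppressing the event while it was set.
 * Returns 1 if the event is suppressed, 0 if it would be traced. */
static int thresh_return(unsigned long *task_var)
{
    if (*task_var & TRACE_GRAPH_NOTRACE) {
        *task_var &= ~TRACE_GRAPH_NOTRACE;  /* consume the mark; without
                                             * this clear the task would
                                             * remain notrace forever */
        return 1;
    }
    return 0;
}
```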
diff --git a/queue-6.12/ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch b/queue-6.12/ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch
new file mode 100644 (file)
index 0000000..e6b9f0a
--- /dev/null
@@ -0,0 +1,69 @@
+From stable+bounces-227082-greg=kroah.com@vger.kernel.org Wed Mar 18 15:45:55 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 10:36:20 -0400
+Subject: ksmbd: Don't log keys in SMB3 signing and encryption key generation
+To: stable@vger.kernel.org
+Cc: Thorsten Blum <thorsten.blum@linux.dev>, Namjae Jeon <linkinjeon@kernel.org>, Steve French <stfrench@microsoft.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318143620.844352-1-sashal@kernel.org>
+
+From: Thorsten Blum <thorsten.blum@linux.dev>
+
+[ Upstream commit 441336115df26b966575de56daf7107ed474faed ]
+
+When KSMBD_DEBUG_AUTH logging is enabled, generate_smb3signingkey() and
+generate_smb3encryptionkey() log the session, signing, encryption, and
+decryption key bytes. Remove the logs to avoid exposing credentials.
+
+Fixes: e2f34481b24d ("cifsd: add server-side procedures for SMB3")
+Cc: stable@vger.kernel.org
+Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
+Acked-by: Namjae Jeon <linkinjeon@kernel.org>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+[ Context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/server/auth.c |   22 ++--------------------
+ 1 file changed, 2 insertions(+), 20 deletions(-)
+
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -803,12 +803,8 @@ static int generate_smb3signingkey(struc
+       if (!(conn->dialect >= SMB30_PROT_ID && signing->binding))
+               memcpy(chann->smb3signingkey, key, SMB3_SIGN_KEY_SIZE);
+-      ksmbd_debug(AUTH, "dumping generated AES signing keys\n");
++      ksmbd_debug(AUTH, "generated SMB3 signing key\n");
+       ksmbd_debug(AUTH, "Session Id    %llu\n", sess->id);
+-      ksmbd_debug(AUTH, "Session Key   %*ph\n",
+-                  SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key);
+-      ksmbd_debug(AUTH, "Signing Key   %*ph\n",
+-                  SMB3_SIGN_KEY_SIZE, key);
+       return 0;
+ }
+@@ -872,23 +868,9 @@ static int generate_smb3encryptionkey(st
+       if (rc)
+               return rc;
+-      ksmbd_debug(AUTH, "dumping generated AES encryption keys\n");
++      ksmbd_debug(AUTH, "generated SMB3 encryption/decryption keys\n");
+       ksmbd_debug(AUTH, "Cipher type   %d\n", conn->cipher_type);
+       ksmbd_debug(AUTH, "Session Id    %llu\n", sess->id);
+-      ksmbd_debug(AUTH, "Session Key   %*ph\n",
+-                  SMB2_NTLMV2_SESSKEY_SIZE, sess->sess_key);
+-      if (conn->cipher_type == SMB2_ENCRYPTION_AES256_CCM ||
+-          conn->cipher_type == SMB2_ENCRYPTION_AES256_GCM) {
+-              ksmbd_debug(AUTH, "ServerIn Key  %*ph\n",
+-                          SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3encryptionkey);
+-              ksmbd_debug(AUTH, "ServerOut Key %*ph\n",
+-                          SMB3_GCM256_CRYPTKEY_SIZE, sess->smb3decryptionkey);
+-      } else {
+-              ksmbd_debug(AUTH, "ServerIn Key  %*ph\n",
+-                          SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3encryptionkey);
+-              ksmbd_debug(AUTH, "ServerOut Key %*ph\n",
+-                          SMB3_GCM128_CRYPTKEY_SIZE, sess->smb3decryptionkey);
+-      }
+       return 0;
+ }
diff --git a/queue-6.12/kvm-nvmx-add-consistency-checks-for-cr0.wp-and-cr4.cet.patch b/queue-6.12/kvm-nvmx-add-consistency-checks-for-cr0.wp-and-cr4.cet.patch
new file mode 100644 (file)
index 0000000..cb46389
--- /dev/null
@@ -0,0 +1,53 @@
+From stable+bounces-225637-greg=kroah.com@vger.kernel.org Mon Mar 16 18:23:37 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:20:02 -0400
+Subject: KVM: nVMX: Add consistency checks for CR0.WP and CR4.CET
+To: stable@vger.kernel.org
+Cc: Chao Gao <chao.gao@intel.com>, Mathias Krause <minipli@grsecurity.net>, John Allen <john.allen@amd.com>, Rick Edgecombe <rick.p.edgecombe@intel.com>, Binbin Wu <binbin.wu@linux.intel.com>, Sean Christopherson <seanjc@google.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-7-sashal@kernel.org>
+
+From: Chao Gao <chao.gao@intel.com>
+
+[ Upstream commit 8060b2bd2dd05a19ad7ec248489d374f2bd2b057 ]
+
+Add consistency checks for CR4.CET and CR0.WP in guest-state or host-state
+area in the VMCS12. This ensures that configurations with CR4.CET set and
+CR0.WP not set result in VM-entry failure, aligning with architectural
+behavior.
+
+Tested-by: Mathias Krause <minipli@grsecurity.net>
+Tested-by: John Allen <john.allen@amd.com>
+Tested-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
+Signed-off-by: Chao Gao <chao.gao@intel.com>
+Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
+Link: https://lore.kernel.org/r/20250919223258.1604852-33-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/vmx/nested.c |    6 ++++++
+ 1 file changed, 6 insertions(+)
+
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3022,6 +3022,9 @@ static int nested_vmx_check_host_state(s
+           CC(!kvm_vcpu_is_legal_cr3(vcpu, vmcs12->host_cr3)))
+               return -EINVAL;
++      if (CC(vmcs12->host_cr4 & X86_CR4_CET && !(vmcs12->host_cr0 & X86_CR0_WP)))
++              return -EINVAL;
++
+       if (CC(is_noncanonical_msr_address(vmcs12->host_ia32_sysenter_esp, vcpu)) ||
+           CC(is_noncanonical_msr_address(vmcs12->host_ia32_sysenter_eip, vcpu)))
+               return -EINVAL;
+@@ -3136,6 +3139,9 @@ static int nested_vmx_check_guest_state(
+           CC(!nested_guest_cr4_valid(vcpu, vmcs12->guest_cr4)))
+               return -EINVAL;
++      if (CC(vmcs12->guest_cr4 & X86_CR4_CET && !(vmcs12->guest_cr0 & X86_CR0_WP)))
++              return -EINVAL;
++
+       if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
+           (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
+            CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
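Both hunks encode the same architectural rule: CR4.CET = 1 with CR0.WP = 0 is an illegal combination, so loading such state at VM entry must fail, for both the guest-state and the host-state areas of the VMCS12. Expressed as a standalone predicate, using the architectural bit positions (WP is bit 16 of CR0, CET is bit 23 of CR4):

```c
#include <assert.h>

#define X86_CR0_WP  (1UL << 16)
#define X86_CR4_CET (1UL << 23)

/* True when the CR0/CR4 pair is architecturally consistent:
 * CR4.CET may only be set while CR0.WP is also set. */
static int cet_wp_consistent(unsigned long cr0, unsigned long cr4)
{
    return !(cr4 & X86_CR4_CET) || (cr0 & X86_CR0_WP);
}
```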
diff --git a/queue-6.12/kvm-x86-allow-vendor-code-to-disable-quirks.patch b/queue-6.12/kvm-x86-allow-vendor-code-to-disable-quirks.patch
new file mode 100644 (file)
index 0000000..4203d84
--- /dev/null
@@ -0,0 +1,89 @@
+From stable+bounces-225634-greg=kroah.com@vger.kernel.org Mon Mar 16 18:27:43 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:19:59 -0400
+Subject: KVM: x86: Allow vendor code to disable quirks
+To: stable@vger.kernel.org
+Cc: Paolo Bonzini <pbonzini@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-4-sashal@kernel.org>
+
+From: Paolo Bonzini <pbonzini@redhat.com>
+
+[ Upstream commit a4dae7c7a41d803a05192015b2d47aca8aca4abf ]
+
+In some cases, the handling of quirks is split between platform-specific
+code and generic code, or it is done entirely in generic code, but the
+relevant bug does not trigger on some platforms; for example,
+this will be the case for "ignore guest PAT".  Allow unaffected vendor
+modules to disable handling of a quirk for all VMs via a new entry in
+kvm_caps.
+
+Such quirks remain available in KVM_CAP_DISABLE_QUIRKS2, because that API
+tells userspace that KVM *knows* that some of its past behavior was bogus
+or just undesirable.  In other words, it's plausible for userspace to
+refuse to run if a quirk is not listed by KVM_CAP_DISABLE_QUIRKS2, so
+preserve that and make it part of the API.
+
+As an example, mark KVM_X86_QUIRK_CD_NW_CLEARED as auto-disabled on
+Intel systems.
+
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/kvm_host.h |    3 +++
+ arch/x86/kvm/svm/svm.c          |    1 +
+ arch/x86/kvm/x86.c              |    2 ++
+ arch/x86/kvm/x86.h              |    1 +
+ 4 files changed, 7 insertions(+)
+
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -2388,6 +2388,9 @@ int memslot_rmap_alloc(struct kvm_memory
+        KVM_X86_QUIRK_SLOT_ZAP_ALL |           \
+        KVM_X86_QUIRK_STUFF_FEATURE_MSRS)
++#define KVM_X86_CONDITIONAL_QUIRKS            \
++       KVM_X86_QUIRK_CD_NW_CLEARED
++
+ /*
+  * KVM previously used a u32 field in kvm_run to indicate the hypercall was
+  * initiated from long mode. KVM now sets bit 0 to indicate long mode, but the
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -5563,6 +5563,7 @@ static __init int svm_hardware_setup(voi
+        */
+       allow_smaller_maxphyaddr = !npt_enabled;
++      kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_CD_NW_CLEARED;
+       return 0;
+ err:
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9782,6 +9782,7 @@ int kvm_x86_vendor_init(struct kvm_x86_i
+               kvm_host.xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
+               kvm_caps.supported_xcr0 = kvm_host.xcr0 & KVM_SUPPORTED_XCR0;
+       }
++      kvm_caps.inapplicable_quirks = KVM_X86_CONDITIONAL_QUIRKS;
+       rdmsrl_safe(MSR_EFER, &kvm_host.efer);
+@@ -12780,6 +12781,7 @@ int kvm_arch_init_vm(struct kvm *kvm, un
+       /* Decided by the vendor code for other VM types.  */
+       kvm->arch.pre_fault_allowed =
+               type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
++      kvm->arch.disabled_quirks = kvm_caps.inapplicable_quirks;
+       ret = kvm_page_track_init(kvm);
+       if (ret)
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -32,6 +32,7 @@ struct kvm_caps {
+       u64 supported_xcr0;
+       u64 supported_xss;
+       u64 supported_perf_cap;
++      u64 inapplicable_quirks;
+ };
+ struct kvm_host_values {
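The flow across the four files: generic init seeds kvm_caps.inapplicable_quirks with the conditional quirks, vendor setup clears the bit where the quirk does apply (SVM keeps CD_NW_CLEARED active), and each new VM inherits the remainder as pre-disabled quirks. A standalone sketch of that masking, with a hypothetical bit assignment for the quirk:

```c
#include <assert.h>

#define QUIRK_CD_NW_CLEARED (1ULL << 1)  /* hypothetical bit value */

/* Mask of quirks that do not apply on the current platform. */
static unsigned long long inapplicable_quirks;

/* Generic code marks conditional quirks inapplicable by default;
 * vendor setup clears the bit on platforms where the quirk is real
 * (the SVM case in the patch). */
static void vendor_init(int is_svm)
{
    inapplicable_quirks = QUIRK_CD_NW_CLEARED;
    if (is_svm)
        inapplicable_quirks &= ~QUIRK_CD_NW_CLEARED;
}

/* Each new VM starts with the inapplicable quirks already disabled. */
static unsigned long long vm_disabled_quirks(void)
{
    return inapplicable_quirks;
}
```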
diff --git a/queue-6.12/kvm-x86-co-locate-initialization-of-feature-msrs-in-kvm_arch_vcpu_create.patch b/queue-6.12/kvm-x86-co-locate-initialization-of-feature-msrs-in-kvm_arch_vcpu_create.patch
new file mode 100644 (file)
index 0000000..37f92d4
--- /dev/null
@@ -0,0 +1,46 @@
+From stable+bounces-225631-greg=kroah.com@vger.kernel.org Mon Mar 16 18:23:31 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:19:56 -0400
+Subject: KVM: x86: Co-locate initialization of feature MSRs in kvm_arch_vcpu_create()
+To: stable@vger.kernel.org
+Cc: Sean Christopherson <seanjc@google.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-1-sashal@kernel.org>
+
+From: Sean Christopherson <seanjc@google.com>
+
+[ Upstream commit 2142ac663a6a72ac868d0768681b1355e3a703eb ]
+
+Bunch all of the feature MSR initialization in kvm_arch_vcpu_create() so
+that it can be easily quirked in a future patch.
+
+No functional change intended.
+
+Link: https://lore.kernel.org/r/20240802185511.305849-2-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/x86.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -12383,6 +12383,8 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
+       kvm_async_pf_hash_reset(vcpu);
++      vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
++      vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+       vcpu->arch.perf_capabilities = kvm_caps.supported_perf_cap;
+       kvm_pmu_init(vcpu);
+@@ -12397,8 +12399,6 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
+       if (r)
+               goto free_guest_fpu;
+-      vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+-      vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+       kvm_xen_init_vcpu(vcpu);
+       vcpu_load(vcpu);
+       kvm_set_tsc_khz(vcpu, vcpu->kvm->arch.default_tsc_khz);
diff --git a/queue-6.12/kvm-x86-do-not-allow-re-enabling-quirks.patch b/queue-6.12/kvm-x86-do-not-allow-re-enabling-quirks.patch
new file mode 100644 (file)
index 0000000..7aeb1c1
--- /dev/null
@@ -0,0 +1,37 @@
+From stable+bounces-225633-greg=kroah.com@vger.kernel.org Mon Mar 16 18:28:56 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:19:58 -0400
+Subject: KVM: x86: do not allow re-enabling quirks
+To: stable@vger.kernel.org
+Cc: Paolo Bonzini <pbonzini@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-3-sashal@kernel.org>
+
+From: Paolo Bonzini <pbonzini@redhat.com>
+
+[ Upstream commit 9966b7822b3f49b3aea5d926ece4bc92f1f0a700 ]
+
+Allowing arbitrary re-enabling of quirks puts a limit on what the
+quirks themselves can do, since you cannot assume that the quirk
+prevents a particular state.  More importantly, it also prevents
+KVM from disabling a quirk at VM creation time, because userspace
+could always go back and re-enable it.
+
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/x86.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6538,7 +6538,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *
+                       break;
+               fallthrough;
+       case KVM_CAP_DISABLE_QUIRKS:
+-              kvm->arch.disabled_quirks = cap->args[0];
++              kvm->arch.disabled_quirks |= cap->args[0];
+               r = 0;
+               break;
+       case KVM_CAP_SPLIT_IRQCHIP: {
diff --git a/queue-6.12/kvm-x86-introduce-intel-specific-quirk-kvm_x86_quirk_ignore_guest_pat.patch b/queue-6.12/kvm-x86-introduce-intel-specific-quirk-kvm_x86_quirk_ignore_guest_pat.patch
new file mode 100644 (file)
index 0000000..bd5ac39
--- /dev/null
@@ -0,0 +1,236 @@
+From stable+bounces-225636-greg=kroah.com@vger.kernel.org Mon Mar 16 18:30:59 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:20:01 -0400
+Subject: KVM: x86: Introduce Intel specific quirk KVM_X86_QUIRK_IGNORE_GUEST_PAT
+To: stable@vger.kernel.org
+Cc: Yan Zhao <yan.y.zhao@intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson <seanjc@google.com>, Kevin Tian <kevin.tian@intel.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-6-sashal@kernel.org>
+
+From: Yan Zhao <yan.y.zhao@intel.com>
+
+[ Upstream commit c9c1e20b4c7d60fa084b3257525d21a49fe651a1 ]
+
+Introduce an Intel specific quirk KVM_X86_QUIRK_IGNORE_GUEST_PAT to have
+KVM ignore guest PAT when this quirk is enabled.
+
+On AMD platforms, KVM always honors guest PAT.  On Intel, however, there
+are two issues.  First, KVM *cannot* honor guest PAT if the CPU feature
+self-snoop is not supported.  Second, UC access on certain Intel platforms
+can be very slow[1], and honoring guest PAT on those platforms may break
+some old guests that accidentally specify video RAM as UC.  Those old
+guests never expected the slowness, since KVM previously always forced WB.
+See [2].
+
+So, introduce a quirk that KVM can enable by default on all Intel platforms
+to avoid breaking old unmodifiable guests. Newer userspace can disable this
+quirk if it wishes KVM to honor guest PAT; disabling the quirk will fail
+if self-snoop is not supported, i.e. if KVM cannot obey the wish.
+
+The quirk is a no-op on AMD and also if any assigned devices have
+non-coherent DMA.  This is not an issue, as KVM_X86_QUIRK_CD_NW_CLEARED is
+another example of a quirk that is sometimes automatically disabled.
+
+Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
+Suggested-by: Sean Christopherson <seanjc@google.com>
+Cc: Kevin Tian <kevin.tian@intel.com>
+Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
+Link: https://lore.kernel.org/all/Ztl9NWCOupNfVaCA@yzhao56-desk.sh.intel.com # [1]
+Link: https://lore.kernel.org/all/87jzfutmfc.fsf@redhat.com # [2]
+Message-ID: <20250224070946.31482-1-yan.y.zhao@intel.com>
+[Use supported_quirks/inapplicable_quirks to support both AMD and
+ no-self-snoop cases, as well as to remove the shadow_memtype_mask check
+ from kvm_mmu_may_ignore_guest_pat(). - Paolo]
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/virt/kvm/api.rst  |   22 +++++++++++++++++++++
+ arch/x86/include/asm/kvm_host.h |    6 +++--
+ arch/x86/include/uapi/asm/kvm.h |    1 
+ arch/x86/kvm/mmu.h              |    2 -
+ arch/x86/kvm/mmu/mmu.c          |   10 +++++----
+ arch/x86/kvm/vmx/vmx.c          |   41 +++++++++++++++++++++++++++++++++-------
+ arch/x86/kvm/x86.c              |    6 ++++-
+ 7 files changed, 73 insertions(+), 15 deletions(-)
+
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -8129,6 +8129,28 @@ KVM_X86_QUIRK_STUFF_FEATURE_MSRS    By d
+                                     and 0x489), as KVM does not allow them to
+                                     be set by userspace (KVM sets them based on
+                                     guest CPUID, for safety purposes).
++
++KVM_X86_QUIRK_IGNORE_GUEST_PAT      By default, on Intel platforms, KVM ignores
++                                    guest PAT and forces the effective memory
++                                    type to WB in EPT.  The quirk is not available
++                                    on Intel platforms which are incapable of
++                                    safely honoring guest PAT (i.e., without CPU
++                                    self-snoop, KVM always ignores guest PAT and
++                                    forces effective memory type to WB).  It is
++                                    also ignored on AMD platforms or, on Intel,
++                                    when a VM has non-coherent DMA devices
++                                    assigned; KVM always honors guest PAT in
++                                    such case. The quirk is needed to avoid
++                                    slowdowns on certain Intel Xeon platforms
++                                    (e.g. ICX, SPR) where self-snoop feature is
++                                    supported but UC is slow enough to cause
++                                    issues with some older guests that use
++                                    UC instead of WC to map the video RAM.
++                                    Userspace can disable the quirk to honor
++                                    guest PAT if it knows that there is no such
++                                    guest software, for example if it does not
++                                    expose a bochs graphics device (which is
++                                    known to have had a buggy driver).
+ =================================== ============================================
+ 7.32 KVM_CAP_MAX_VCPU_ID
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -2386,10 +2386,12 @@ int memslot_rmap_alloc(struct kvm_memory
+        KVM_X86_QUIRK_FIX_HYPERCALL_INSN |     \
+        KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS |  \
+        KVM_X86_QUIRK_SLOT_ZAP_ALL |           \
+-       KVM_X86_QUIRK_STUFF_FEATURE_MSRS)
++       KVM_X86_QUIRK_STUFF_FEATURE_MSRS |     \
++       KVM_X86_QUIRK_IGNORE_GUEST_PAT)
+ #define KVM_X86_CONDITIONAL_QUIRKS            \
+-       KVM_X86_QUIRK_CD_NW_CLEARED
++      (KVM_X86_QUIRK_CD_NW_CLEARED |          \
++       KVM_X86_QUIRK_IGNORE_GUEST_PAT)
+ /*
+  * KVM previously used a u32 field in kvm_run to indicate the hypercall was
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -441,6 +441,7 @@ struct kvm_sync_regs {
+ #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS   (1 << 6)
+ #define KVM_X86_QUIRK_SLOT_ZAP_ALL            (1 << 7)
+ #define KVM_X86_QUIRK_STUFF_FEATURE_MSRS      (1 << 8)
++#define KVM_X86_QUIRK_IGNORE_GUEST_PAT                (1 << 9)
+ #define KVM_STATE_NESTED_FORMAT_VMX   0
+ #define KVM_STATE_NESTED_FORMAT_SVM   1
+--- a/arch/x86/kvm/mmu.h
++++ b/arch/x86/kvm/mmu.h
+@@ -222,7 +222,7 @@ static inline u8 permission_fault(struct
+       return -(u32)fault & errcode;
+ }
+-bool kvm_mmu_may_ignore_guest_pat(void);
++bool kvm_mmu_may_ignore_guest_pat(struct kvm *kvm);
+ int kvm_mmu_post_init_vm(struct kvm *kvm);
+ void kvm_mmu_pre_destroy_vm(struct kvm *kvm);
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4713,17 +4713,19 @@ out_unlock:
+ }
+ #endif
+-bool kvm_mmu_may_ignore_guest_pat(void)
++bool kvm_mmu_may_ignore_guest_pat(struct kvm *kvm)
+ {
+       /*
+        * When EPT is enabled (shadow_memtype_mask is non-zero), and the VM
+        * has non-coherent DMA (DMA doesn't snoop CPU caches), KVM's ABI is to
+        * honor the memtype from the guest's PAT so that guest accesses to
+        * memory that is DMA'd aren't cached against the guest's wishes.  As a
+-       * result, KVM _may_ ignore guest PAT, whereas without non-coherent DMA,
+-       * KVM _always_ ignores guest PAT (when EPT is enabled).
++       * result, KVM _may_ ignore guest PAT, whereas without non-coherent DMA,
++       * KVM _always_ ignores guest PAT, when EPT is enabled and when quirk
++       * KVM_X86_QUIRK_IGNORE_GUEST_PAT is enabled or the CPU lacks the
++       * ability to safely honor guest PAT.
+        */
+-      return shadow_memtype_mask;
++      return kvm_check_has_quirk(kvm, KVM_X86_QUIRK_IGNORE_GUEST_PAT);
+ }
+ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -7665,6 +7665,17 @@ int vmx_vm_init(struct kvm *kvm)
+       return 0;
+ }
++static inline bool vmx_ignore_guest_pat(struct kvm *kvm)
++{
++      /*
++       * Non-coherent DMA devices need the guest to flush CPU caches properly.
++       * In that case it is not possible to map all guest RAM as WB, so
++       * always trust guest PAT.
++       */
++      return !kvm_arch_has_noncoherent_dma(kvm) &&
++             kvm_check_has_quirk(kvm, KVM_X86_QUIRK_IGNORE_GUEST_PAT);
++}
++
+ u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
+ {
+       /*
+@@ -7674,13 +7685,8 @@ u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu
+       if (is_mmio)
+               return MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT;
+-      /*
+-       * Force WB and ignore guest PAT if the VM does NOT have a non-coherent
+-       * device attached.  Letting the guest control memory types on Intel
+-       * CPUs may result in unexpected behavior, and so KVM's ABI is to trust
+-       * the guest to behave only as a last resort.
+-       */
+-      if (!kvm_arch_has_noncoherent_dma(vcpu->kvm))
++      /* Force WB if ignoring guest PAT */
++      if (vmx_ignore_guest_pat(vcpu->kvm))
+               return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT;
+       return (MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT);
+@@ -8579,6 +8585,27 @@ __init int vmx_hardware_setup(void)
+       kvm_set_posted_intr_wakeup_handler(pi_wakeup_handler);
++      /*
++       * On Intel CPUs that lack self-snoop feature, letting the guest control
++       * memory types may result in unexpected behavior. So always ignore guest
++       * PAT on those CPUs and map VM as writeback, not allowing userspace to
++       * disable the quirk.
++       *
++       * On certain Intel CPUs (e.g. SPR, ICX), though self-snoop feature is
++       * supported, UC is slow enough to cause issues with some older guests (e.g.
++       * an old version of bochs driver uses ioremap() instead of ioremap_wc() to
++       * map the video RAM, causing wayland desktop to fail to get started
++       * correctly). To avoid breaking those older guests that rely on KVM to force
++       * memory type to WB, provide KVM_X86_QUIRK_IGNORE_GUEST_PAT to preserve the
++       * safer (for performance) default behavior.
++       *
++       * On top of this, non-coherent DMA devices need the guest to flush CPU
++       * caches properly.  This also requires honoring guest PAT, and is forced
++       * independent of the quirk in vmx_ignore_guest_pat().
++       */
++      if (!static_cpu_has(X86_FEATURE_SELFSNOOP))
++              kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
++       kvm_caps.inapplicable_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
+       return r;
+ }
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9828,6 +9828,10 @@ int kvm_x86_vendor_init(struct kvm_x86_i
+       if (IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) && tdp_mmu_enabled)
+               kvm_caps.supported_vm_types |= BIT(KVM_X86_SW_PROTECTED_VM);
++      /* KVM always ignores guest PAT for shadow paging.  */
++      if (!tdp_enabled)
++              kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
++
+       if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
+               kvm_caps.supported_xss = 0;
+@@ -13601,7 +13605,7 @@ static void kvm_noncoherent_dma_assignme
+        * (or last) non-coherent device is (un)registered to so that new SPTEs
+        * with the correct "ignore guest PAT" setting are created.
+        */
+-      if (kvm_mmu_may_ignore_guest_pat())
++      if (kvm_mmu_may_ignore_guest_pat(kvm))
+               kvm_zap_gfn_range(kvm, gpa_to_gfn(0), gpa_to_gfn(~0ULL));
+ }
diff --git a/queue-6.12/kvm-x86-introduce-kvm_x86_quirk_vmcs12_allow_freeze_in_smm.patch b/queue-6.12/kvm-x86-introduce-kvm_x86_quirk_vmcs12_allow_freeze_in_smm.patch
new file mode 100644 (file)
index 0000000..6ca8ea6
--- /dev/null
@@ -0,0 +1,112 @@
+From stable+bounces-225638-greg=kroah.com@vger.kernel.org Mon Mar 16 18:31:09 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:20:03 -0400
+Subject: KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM
+To: stable@vger.kernel.org
+Cc: Jim Mattson <jmattson@google.com>, Sean Christopherson <seanjc@google.com>, Paolo Bonzini <pbonzini@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-8-sashal@kernel.org>
+
+From: Jim Mattson <jmattson@google.com>
+
+[ Upstream commit e2ffe85b6d2bb7780174b87aa4468a39be17eb81 ]
+
+Add KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM to allow L1 to set
+FREEZE_IN_SMM in vmcs12's GUEST_IA32_DEBUGCTL field, as permitted
+prior to commit 6b1dd26544d0 ("KVM: VMX: Preserve host's
+DEBUGCTLMSR_FREEZE_IN_SMM while running the guest").  Enable the quirk
+by default for backwards compatibility (like all quirks); userspace
+can disable it via KVM_CAP_DISABLE_QUIRKS2 for consistency with the
+constraints on WRMSR(IA32_DEBUGCTL).
+
+Note that the quirk only bypasses the consistency check.  The vmcs02 bit is
+still owned by the host, and PMCs are not frozen during virtualized SMM.
+In particular, if a host administrator decides that PMCs should not be
+frozen during physical SMM, then L1 has no say in the matter.
+
+Fixes: 095686e6fcb4 ("KVM: nVMX: Check vmcs12->guest_ia32_debugctl on nested VM-Enter")
+Cc: stable@vger.kernel.org
+Signed-off-by: Jim Mattson <jmattson@google.com>
+Link: https://patch.msgid.link/20260205231537.1278753-1-jmattson@google.com
+[sean: tag for stable@, clean-up and fix goofs in the comment and docs]
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+[Rename quirk. - Paolo]
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/virt/kvm/api.rst  |    8 ++++++++
+ arch/x86/include/asm/kvm_host.h |    3 ++-
+ arch/x86/include/uapi/asm/kvm.h |    1 +
+ arch/x86/kvm/vmx/nested.c       |   22 ++++++++++++++++++----
+ 4 files changed, 29 insertions(+), 5 deletions(-)
+
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -8151,6 +8151,14 @@ KVM_X86_QUIRK_IGNORE_GUEST_PAT      By d
+                                     guest software, for example if it does not
+                                     expose a bochs graphics device (which is
+                                     known to have had a buggy driver).
++
++KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM   By default, KVM relaxes the consistency
++                                      check for GUEST_IA32_DEBUGCTL in vmcs12
++                                      to allow FREEZE_IN_SMM to be set.  When
++                                      this quirk is disabled, KVM requires this
++                                      bit to be cleared.  Note that the vmcs02
++                                      bit is still completely controlled by the
++                                      host, regardless of the quirk setting.
+ =================================== ============================================
+ 7.32 KVM_CAP_MAX_VCPU_ID
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -2387,7 +2387,8 @@ int memslot_rmap_alloc(struct kvm_memory
+        KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS |  \
+        KVM_X86_QUIRK_SLOT_ZAP_ALL |           \
+        KVM_X86_QUIRK_STUFF_FEATURE_MSRS |     \
+-       KVM_X86_QUIRK_IGNORE_GUEST_PAT)
++       KVM_X86_QUIRK_IGNORE_GUEST_PAT |       \
++       KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM)
+ #define KVM_X86_CONDITIONAL_QUIRKS            \
+       (KVM_X86_QUIRK_CD_NW_CLEARED |          \
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -442,6 +442,7 @@ struct kvm_sync_regs {
+ #define KVM_X86_QUIRK_SLOT_ZAP_ALL            (1 << 7)
+ #define KVM_X86_QUIRK_STUFF_FEATURE_MSRS      (1 << 8)
+ #define KVM_X86_QUIRK_IGNORE_GUEST_PAT                (1 << 9)
++#define KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM (1 << 10)
+ #define KVM_STATE_NESTED_FORMAT_VMX   0
+ #define KVM_STATE_NESTED_FORMAT_SVM   1
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3142,10 +3142,24 @@ static int nested_vmx_check_guest_state(
+       if (CC(vmcs12->guest_cr4 & X86_CR4_CET && !(vmcs12->guest_cr0 & X86_CR0_WP)))
+               return -EINVAL;
+-      if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
+-          (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
+-           CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
+-              return -EINVAL;
++      if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) {
++              u64 debugctl = vmcs12->guest_ia32_debugctl;
++
++              /*
++               * FREEZE_IN_SMM is not virtualized, but allow L1 to set it in
++               * vmcs12's DEBUGCTL under a quirk for backwards compatibility.
++               * Note that the quirk only relaxes the consistency check.  The
++               * vmcs02 bit is still under the control of the host.  In
++               * particular, if a host administrator decides to clear the bit,
++               * then L1 has no say in the matter.
++               */
++              if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM))
++                      debugctl &= ~DEBUGCTLMSR_FREEZE_IN_SMM;
++
++              if (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
++                  CC(!vmx_is_valid_debugctl(vcpu, debugctl, false)))
++                      return -EINVAL;
++      }
+       if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
+           CC(!kvm_pat_valid(vmcs12->guest_ia32_pat)))
diff --git a/queue-6.12/kvm-x86-introduce-supported_quirks-to-block-disabling-quirks.patch b/queue-6.12/kvm-x86-introduce-supported_quirks-to-block-disabling-quirks.patch
new file mode 100644 (file)
index 0000000..5ea9a24
--- /dev/null
@@ -0,0 +1,82 @@
+From stable+bounces-225635-greg=kroah.com@vger.kernel.org Mon Mar 16 18:29:00 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:20:00 -0400
+Subject: KVM: x86: Introduce supported_quirks to block disabling quirks
+To: stable@vger.kernel.org
+Cc: Yan Zhao <yan.y.zhao@intel.com>, Paolo Bonzini <pbonzini@redhat.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-5-sashal@kernel.org>
+
+From: Yan Zhao <yan.y.zhao@intel.com>
+
+[ Upstream commit bd7d5362b4c4ac8b951385867a0fadfae0ba3c07 ]
+
+Introduce supported_quirks in kvm_caps to track which quirks the platform
+allows userspace to disable; quirks outside this mask are force-enabled.
+
+No functional changes intended.
+
+Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
+Message-ID: <20250224070832.31394-1-yan.y.zhao@intel.com>
+[Remove unsupported quirks at KVM_ENABLE_CAP time. - Paolo]
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/x86.c |    9 +++++----
+ arch/x86/kvm/x86.h |    2 ++
+ 2 files changed, 7 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4801,7 +4801,7 @@ int kvm_vm_ioctl_check_extension(struct
+               r = enable_pmu ? KVM_CAP_PMU_VALID_MASK : 0;
+               break;
+       case KVM_CAP_DISABLE_QUIRKS2:
+-              r = KVM_X86_VALID_QUIRKS;
++              r = kvm_caps.supported_quirks;
+               break;
+       case KVM_CAP_X86_NOTIFY_VMEXIT:
+               r = kvm_caps.has_notify_vmexit;
+@@ -6534,11 +6534,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *
+       switch (cap->cap) {
+       case KVM_CAP_DISABLE_QUIRKS2:
+               r = -EINVAL;
+-              if (cap->args[0] & ~KVM_X86_VALID_QUIRKS)
++              if (cap->args[0] & ~kvm_caps.supported_quirks)
+                       break;
+               fallthrough;
+       case KVM_CAP_DISABLE_QUIRKS:
+-              kvm->arch.disabled_quirks |= cap->args[0];
++              kvm->arch.disabled_quirks |= cap->args[0] & kvm_caps.supported_quirks;
+               r = 0;
+               break;
+       case KVM_CAP_SPLIT_IRQCHIP: {
+@@ -9782,6 +9782,7 @@ int kvm_x86_vendor_init(struct kvm_x86_i
+               kvm_host.xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
+               kvm_caps.supported_xcr0 = kvm_host.xcr0 & KVM_SUPPORTED_XCR0;
+       }
++      kvm_caps.supported_quirks = KVM_X86_VALID_QUIRKS;
+       kvm_caps.inapplicable_quirks = KVM_X86_CONDITIONAL_QUIRKS;
+       rdmsrl_safe(MSR_EFER, &kvm_host.efer);
+@@ -12781,7 +12782,7 @@ int kvm_arch_init_vm(struct kvm *kvm, un
+       /* Decided by the vendor code for other VM types.  */
+       kvm->arch.pre_fault_allowed =
+               type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
+-      kvm->arch.disabled_quirks = kvm_caps.inapplicable_quirks;
++      kvm->arch.disabled_quirks = kvm_caps.inapplicable_quirks & kvm_caps.supported_quirks;
+       ret = kvm_page_track_init(kvm);
+       if (ret)
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -32,6 +32,8 @@ struct kvm_caps {
+       u64 supported_xcr0;
+       u64 supported_xss;
+       u64 supported_perf_cap;
++
++      u64 supported_quirks;
+       u64 inapplicable_quirks;
+ };
diff --git a/queue-6.12/kvm-x86-quirk-initialization-of-feature-msrs-to-kvm-s-max-configuration.patch b/queue-6.12/kvm-x86-quirk-initialization-of-feature-msrs-to-kvm-s-max-configuration.patch
new file mode 100644 (file)
index 0000000..ef3836f
--- /dev/null
@@ -0,0 +1,149 @@
+From stable+bounces-225632-greg=kroah.com@vger.kernel.org Mon Mar 16 18:30:54 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 16 Mar 2026 13:19:57 -0400
+Subject: KVM: x86: Quirk initialization of feature MSRs to KVM's max configuration
+To: stable@vger.kernel.org
+Cc: Sean Christopherson <seanjc@google.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260316172003.1024253-2-sashal@kernel.org>
+
+From: Sean Christopherson <seanjc@google.com>
+
+[ Upstream commit dcb988cdac85bad177de86fbf409524eda4f9467 ]
+
+Add a quirk to control KVM's misguided initialization of select feature
+MSRs to KVM's max configuration, as enabling features by default violates
+KVM's approach of letting userspace own the vCPU model, and is actively
+problematic for MSRs that are conditionally supported, as the vCPU will
+end up with an MSR value that userspace can't restore.  E.g. if the vCPU
+is configured with PDCM=0, userspace will save and attempt to restore a
+non-zero PERF_CAPABILITIES, thanks to KVM's meddling.
+
+Link: https://lore.kernel.org/r/20240802185511.305849-4-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Stable-dep-of: e2ffe85b6d2b ("KVM: x86: Introduce KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/virt/kvm/api.rst  |   22 ++++++++++++++++++++++
+ arch/x86/include/asm/kvm_host.h |    3 ++-
+ arch/x86/include/uapi/asm/kvm.h |    1 +
+ arch/x86/kvm/svm/svm.c          |    4 +++-
+ arch/x86/kvm/vmx/vmx.c          |    9 ++++++---
+ arch/x86/kvm/x86.c              |    8 +++++---
+ 6 files changed, 39 insertions(+), 8 deletions(-)
+
+--- a/Documentation/virt/kvm/api.rst
++++ b/Documentation/virt/kvm/api.rst
+@@ -8107,6 +8107,28 @@ KVM_X86_QUIRK_SLOT_ZAP_ALL          By d
+                                     or moved memslot isn't reachable, i.e KVM
+                                     _may_ invalidate only SPTEs related to the
+                                     memslot.
++
++KVM_X86_QUIRK_STUFF_FEATURE_MSRS    By default, at vCPU creation, KVM sets the
++                                    vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
++                                    MSR_IA32_ARCH_CAPABILITIES (0x10a),
++                                    MSR_PLATFORM_INFO (0xce), and all VMX MSRs
++                                    (0x480..0x492) to the maximal capabilities
++                                    supported by KVM.  KVM also sets
++                                    MSR_IA32_UCODE_REV (0x8b) to an arbitrary
++                                    value (which is different for Intel vs.
++                                    AMD).  Lastly, when guest CPUID is set (by
++                                    userspace), KVM modifies select VMX MSR
++                                    fields to force consistency between guest
++                                    CPUID and L2's effective ISA.  When this
++                                    quirk is disabled, KVM zeroes the vCPU's MSR
++                                    values (with two exceptions, see below),
++                                    i.e. treats the feature MSRs like CPUID
++                                    leaves and gives userspace full control of
++                                    the vCPU model definition.  This quirk does
++                                    not affect VMX MSRs CR0/CR4_FIXED1 (0x487
++                                    and 0x489), as KVM does now allow them to
++                                    and 0x489), as KVM does not allow them to
++                                    guest CPUID, for safety purposes).
+ =================================== ============================================
+ 7.32 KVM_CAP_MAX_VCPU_ID
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -2385,7 +2385,8 @@ int memslot_rmap_alloc(struct kvm_memory
+        KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT |   \
+        KVM_X86_QUIRK_FIX_HYPERCALL_INSN |     \
+        KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS |  \
+-       KVM_X86_QUIRK_SLOT_ZAP_ALL)
++       KVM_X86_QUIRK_SLOT_ZAP_ALL |           \
++       KVM_X86_QUIRK_STUFF_FEATURE_MSRS)
+ /*
+  * KVM previously used a u32 field in kvm_run to indicate the hypercall was
+--- a/arch/x86/include/uapi/asm/kvm.h
++++ b/arch/x86/include/uapi/asm/kvm.h
+@@ -440,6 +440,7 @@ struct kvm_sync_regs {
+ #define KVM_X86_QUIRK_FIX_HYPERCALL_INSN      (1 << 5)
+ #define KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS   (1 << 6)
+ #define KVM_X86_QUIRK_SLOT_ZAP_ALL            (1 << 7)
++#define KVM_X86_QUIRK_STUFF_FEATURE_MSRS      (1 << 8)
+ #define KVM_STATE_NESTED_FORMAT_VMX   0
+ #define KVM_STATE_NESTED_FORMAT_SVM   1
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1389,7 +1389,9 @@ static void __svm_vcpu_reset(struct kvm_
+       svm_vcpu_init_msrpm(vcpu, svm->msrpm);
+       svm_init_osvw(vcpu);
+-      vcpu->arch.microcode_version = 0x01000065;
++
++      if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_STUFF_FEATURE_MSRS))
++              vcpu->arch.microcode_version = 0x01000065;
+       svm->tsc_ratio_msr = kvm_caps.default_tsc_scaling_ratio;
+       svm->nmi_masked = false;
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4562,7 +4562,8 @@ vmx_adjust_secondary_exec_control(struct
+        * Update the nested MSR settings so that a nested VMM can/can't set
+        * controls for features that are/aren't exposed to the guest.
+        */
+-      if (nested) {
++      if (nested &&
++          kvm_check_has_quirk(vmx->vcpu.kvm, KVM_X86_QUIRK_STUFF_FEATURE_MSRS)) {
+               /*
+                * All features that can be added or removed to VMX MSRs must
+                * be supported in the first place for nested virtualization.
+@@ -4853,7 +4854,8 @@ static void __vmx_vcpu_reset(struct kvm_
+       init_vmcs(vmx);
+-      if (nested)
++      if (nested &&
++          kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_STUFF_FEATURE_MSRS))
+               memcpy(&vmx->nested.msrs, &vmcs_config.nested, sizeof(vmx->nested.msrs));
+       vcpu_setup_sgx_lepubkeyhash(vcpu);
+@@ -4866,7 +4868,8 @@ static void __vmx_vcpu_reset(struct kvm_
+       vmx->nested.hv_evmcs_vmptr = EVMPTR_INVALID;
+ #endif
+-      vcpu->arch.microcode_version = 0x100000000ULL;
++      if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_STUFF_FEATURE_MSRS))
++              vcpu->arch.microcode_version = 0x100000000ULL;
+       vmx->msr_ia32_feature_control_valid_bits = FEAT_CTL_LOCKED;
+       /*
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -12383,9 +12383,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu
+       kvm_async_pf_hash_reset(vcpu);
+-      vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+-      vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+-      vcpu->arch.perf_capabilities = kvm_caps.supported_perf_cap;
++      if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_STUFF_FEATURE_MSRS)) {
++              vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
++              vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
++              vcpu->arch.perf_capabilities = kvm_caps.supported_perf_cap;
++      }
+       kvm_pmu_init(vcpu);
+       vcpu->arch.pending_external_vector = -1;
diff --git a/queue-6.12/net-macb-shuffle-the-tx-ring-before-enabling-tx.patch b/queue-6.12/net-macb-shuffle-the-tx-ring-before-enabling-tx.patch
new file mode 100644 (file)
index 0000000..99e4332
--- /dev/null
@@ -0,0 +1,189 @@
+From stable+bounces-227116-greg=kroah.com@vger.kernel.org Wed Mar 18 17:37:04 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 12:11:53 -0400
+Subject: net: macb: Shuffle the tx ring before enabling tx
+To: stable@vger.kernel.org
+Cc: Kevin Hao <haokexin@gmail.com>, Quanyang Wang <quanyang.wang@windriver.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260318161153.909243-1-sashal@kernel.org>
+
+From: Kevin Hao <haokexin@gmail.com>
+
+[ Upstream commit 881a0263d502e1a93ebc13a78254e9ad19520232 ]
+
+Quanyang observed that when using an NFS rootfs on an AMD ZynqMp board,
+the rootfs may take an extended time to recover after a suspend.
+Upon investigation, it was determined that the issue originates from a
+problem in the macb driver.
+
+According to the Zynq UltraScale TRM [1], when transmit is disabled,
+the transmit buffer queue pointer resets to point to the address
+specified by the transmit buffer queue base address register.
+
+In the current implementation, the code merely resets `queue->tx_head`
+and `queue->tx_tail` to '0'. This approach presents several issues:
+
+- Packets already queued in the tx ring are silently lost,
+  leading to memory leaks since the associated skbs cannot be released.
+
+- Concurrent write access to `queue->tx_head` and `queue->tx_tail` may
+  occur from `macb_tx_poll()` or `macb_start_xmit()` when these values
+  are reset to '0'.
+
+- The transmission may become stuck on a packet that has already been sent
+  out, with its 'TX_USED' bit set, but has not yet been processed. However,
+  due to the manipulation of 'queue->tx_head' and 'queue->tx_tail',
+  `macb_tx_poll()` incorrectly assumes there are no packets to handle
+  because `queue->tx_head == queue->tx_tail`. This issue is only resolved
+  when a new packet is placed at this position. This is the root cause of
+  the prolonged recovery time observed for the NFS root filesystem.
+
+To resolve this issue, shuffle the tx ring and tx skb array so that
+the first unsent packet is positioned at the start of the tx ring.
+Additionally, ensure that updates to `queue->tx_head` and
+`queue->tx_tail` are properly protected with the appropriate lock.
+
+[1] https://docs.amd.com/v/u/en-US/ug1085-zynq-ultrascale-trm
+
+Fixes: bf9cf80cab81 ("net: macb: Fix tx/rx malfunction after phy link down and up")
+Reported-by: Quanyang Wang <quanyang.wang@windriver.com>
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Cc: stable@vger.kernel.org
+Reviewed-by: Simon Horman <horms@kernel.org>
+Link: https://patch.msgid.link/20260307-zynqmp-v2-1-6ef98a70e1d0@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ adapted include block context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c |   98 ++++++++++++++++++++++++++++++-
+ 1 file changed, 95 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -38,6 +38,7 @@
+ #include <linux/ptp_classify.h>
+ #include <linux/reset.h>
+ #include <linux/firmware/xlnx-zynqmp.h>
++#include <linux/gcd.h>
+ #include <linux/inetdevice.h>
+ #include "macb.h"
+@@ -719,6 +720,97 @@ static void macb_mac_link_down(struct ph
+       netif_tx_stop_all_queues(ndev);
+ }
++/* Use juggling algorithm to left rotate tx ring and tx skb array */
++static void gem_shuffle_tx_one_ring(struct macb_queue *queue)
++{
++      unsigned int head, tail, count, ring_size, desc_size;
++      struct macb_tx_skb tx_skb, *skb_curr, *skb_next;
++      struct macb_dma_desc *desc_curr, *desc_next;
++      unsigned int i, cycles, shift, curr, next;
++      struct macb *bp = queue->bp;
++      unsigned char desc[24];
++      unsigned long flags;
++
++      desc_size = macb_dma_desc_get_size(bp);
++
++      if (WARN_ON_ONCE(desc_size > ARRAY_SIZE(desc)))
++              return;
++
++      spin_lock_irqsave(&queue->tx_ptr_lock, flags);
++      head = queue->tx_head;
++      tail = queue->tx_tail;
++      ring_size = bp->tx_ring_size;
++      count = CIRC_CNT(head, tail, ring_size);
++
++      if (!(tail % ring_size))
++              goto unlock;
++
++      if (!count) {
++              queue->tx_head = 0;
++              queue->tx_tail = 0;
++              goto unlock;
++      }
++
++      shift = tail % ring_size;
++      cycles = gcd(ring_size, shift);
++
++      for (i = 0; i < cycles; i++) {
++              memcpy(&desc, macb_tx_desc(queue, i), desc_size);
++              memcpy(&tx_skb, macb_tx_skb(queue, i),
++                     sizeof(struct macb_tx_skb));
++
++              curr = i;
++              next = (curr + shift) % ring_size;
++
++              while (next != i) {
++                      desc_curr = macb_tx_desc(queue, curr);
++                      desc_next = macb_tx_desc(queue, next);
++
++                      memcpy(desc_curr, desc_next, desc_size);
++
++                      if (next == ring_size - 1)
++                              desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
++                      if (curr == ring_size - 1)
++                              desc_curr->ctrl |= MACB_BIT(TX_WRAP);
++
++                      skb_curr = macb_tx_skb(queue, curr);
++                      skb_next = macb_tx_skb(queue, next);
++                      memcpy(skb_curr, skb_next, sizeof(struct macb_tx_skb));
++
++                      curr = next;
++                      next = (curr + shift) % ring_size;
++              }
++
++              desc_curr = macb_tx_desc(queue, curr);
++              memcpy(desc_curr, &desc, desc_size);
++              if (i == ring_size - 1)
++                      desc_curr->ctrl &= ~MACB_BIT(TX_WRAP);
++              if (curr == ring_size - 1)
++                      desc_curr->ctrl |= MACB_BIT(TX_WRAP);
++              memcpy(macb_tx_skb(queue, curr), &tx_skb,
++                     sizeof(struct macb_tx_skb));
++      }
++
++      queue->tx_head = count;
++      queue->tx_tail = 0;
++
++      /* Make descriptor updates visible to hardware */
++      wmb();
++
++unlock:
++      spin_unlock_irqrestore(&queue->tx_ptr_lock, flags);
++}
++
++/* Rotate the queue so that the tail is at index 0 */
++static void gem_shuffle_tx_rings(struct macb *bp)
++{
++      struct macb_queue *queue;
++      int q;
++
++      for (q = 0, queue = bp->queues; q < bp->num_queues; q++, queue++)
++              gem_shuffle_tx_one_ring(queue);
++}
++
+ static void macb_mac_link_up(struct phylink_config *config,
+                            struct phy_device *phy,
+                            unsigned int mode, phy_interface_t interface,
+@@ -757,8 +849,6 @@ static void macb_mac_link_up(struct phyl
+                       ctrl |= MACB_BIT(PAE);
+               for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+-                      queue->tx_head = 0;
+-                      queue->tx_tail = 0;
+                       queue_writel(queue, IER,
+                                    bp->rx_intr_mask | MACB_TX_INT_FLAGS | MACB_BIT(HRESP));
+               }
+@@ -772,8 +862,10 @@ static void macb_mac_link_up(struct phyl
+       spin_unlock_irqrestore(&bp->lock, flags);
+-      if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC))
++      if (!(bp->caps & MACB_CAPS_MACB_IS_EMAC)) {
+               macb_set_tx_clk(bp, speed);
++              gem_shuffle_tx_rings(bp);
++      }
+       /* Enable Rx and Tx; Enable PTP unicast */
+       ctrl = macb_readl(bp, NCR);
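[Editorial note] gem_shuffle_tx_one_ring() above uses the classic "juggling" in-place rotation: gcd(ring_size, shift) independent cycles, with each element moved exactly once (O(n) moves, O(1) extra space). A standalone sketch of the same scheme outside the driver — hypothetical helper names, a plain int payload instead of DMA descriptors, and none of the TX_WRAP bookkeeping:

```c
/* greatest common divisor, Euclid's algorithm */
static unsigned int my_gcd(unsigned int a, unsigned int b)
{
	while (b) {
		unsigned int t = a % b;
		a = b;
		b = t;
	}
	return a;
}

/* Left-rotate buf[0..n-1] by shift positions in place, following
 * gcd(n, shift) cycles -- after the call, the element that was at
 * index `shift` sits at index 0, just as the patch moves the first
 * unsent descriptor (old tail) to slot 0 of the tx ring. */
static void rotate_left(int *buf, unsigned int n, unsigned int shift)
{
	unsigned int cycles = my_gcd(n, shift);
	unsigned int i, curr, next;

	for (i = 0; i < cycles; i++) {
		int saved = buf[i];

		curr = i;
		next = (curr + shift) % n;
		while (next != i) {
			buf[curr] = buf[next];
			curr = next;
			next = (curr + shift) % n;
		}
		buf[curr] = saved;
	}
}
```

With n = 8 and shift = 3 (old tail at index 3), gcd(8, 3) = 1, so a single cycle walks the whole ring and {0,1,2,3,4,5,6,7} becomes {3,4,5,6,7,0,1,2}.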
index 84919fc3c15865bc26b8d30521db06679d483e75..603a4f1ba9af0f10a7666379845599c4d3100a3a 100644 (file)
@@ -255,3 +255,18 @@ nsfs-tighten-permission-checks-for-ns-iteration-ioctls.patch
 sched_ext-disable-preemption-between-scx_claim_exit-and-kicking-helper-work.patch
 sched_ext-fix-starvation-of-scx_enable-under-fair-class-saturation.patch
 iomap-reject-delalloc-mappings-during-writeback.patch
+fgraph-fix-thresh_return-clear-per-task-notrace.patch
+kvm-x86-co-locate-initialization-of-feature-msrs-in-kvm_arch_vcpu_create.patch
+kvm-x86-quirk-initialization-of-feature-msrs-to-kvm-s-max-configuration.patch
+kvm-x86-do-not-allow-re-enabling-quirks.patch
+kvm-x86-allow-vendor-code-to-disable-quirks.patch
+kvm-x86-introduce-supported_quirks-to-block-disabling-quirks.patch
+kvm-x86-introduce-intel-specific-quirk-kvm_x86_quirk_ignore_guest_pat.patch
+kvm-nvmx-add-consistency-checks-for-cr0.wp-and-cr4.cet.patch
+kvm-x86-introduce-kvm_x86_quirk_vmcs12_allow_freeze_in_smm.patch
+ksmbd-don-t-log-keys-in-smb3-signing-and-encryption-key-generation.patch
+drm-bridge-ti-sn65dsi83-halve-horizontal-syncs-for-dual-lvds-output.patch
+net-macb-shuffle-the-tx-ring-before-enabling-tx.patch
+cifs-open-files-should-not-hold-ref-on-superblock.patch
+crypto-atmel-sha204a-fix-oom-tfm_count-leak.patch
+xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch
diff --git a/queue-6.12/xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch b/queue-6.12/xfs-fix-integer-overflow-in-bmap-intent-sort-comparator.patch
new file mode 100644 (file)
index 0000000..00fa44c
--- /dev/null
@@ -0,0 +1,44 @@
+From stable+bounces-227201-greg=kroah.com@vger.kernel.org Thu Mar 19 03:00:05 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Mar 2026 21:59:56 -0400
+Subject: xfs: fix integer overflow in bmap intent sort comparator
+To: stable@vger.kernel.org
+Cc: Long Li <leo.lilong@huawei.com>, "Darrick J. Wong" <djwong@kernel.org>, Carlos Maiolino <cem@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260319015956.1895520-1-sashal@kernel.org>
+
+From: Long Li <leo.lilong@huawei.com>
+
+[ Upstream commit 362c490980867930a098b99f421268fbd7ca05fd ]
+
+xfs_bmap_update_diff_items() sorts bmap intents by inode number using
+a subtraction of two xfs_ino_t (uint64_t) values, with the result
+truncated to int. This is incorrect when two inode numbers differ by
+more than INT_MAX (2^31 - 1), which is entirely possible on large XFS
+filesystems.
+
+Fix this by replacing the subtraction with cmp_int().
+
+Cc: <stable@vger.kernel.org> # v4.9
+Fixes: 9f3afb57d5f1 ("xfs: implement deferred bmbt map/unmap operations")
+Signed-off-by: Long Li <leo.lilong@huawei.com>
+Reviewed-by: Darrick J. Wong <djwong@kernel.org>
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+[ No cmp_int() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_bmap_item.c |    3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_bmap_item.c
++++ b/fs/xfs/xfs_bmap_item.c
+@@ -237,7 +237,8 @@ xfs_bmap_update_diff_items(
+       struct xfs_bmap_intent          *ba = bi_entry(a);
+       struct xfs_bmap_intent          *bb = bi_entry(b);
+-      return ba->bi_owner->i_ino - bb->bi_owner->i_ino;
++      return ((ba->bi_owner->i_ino > bb->bi_owner->i_ino) -
++              (ba->bi_owner->i_ino < bb->bi_owner->i_ino));
+ }
+ /* Log bmap updates in the intent item. */
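[Editorial note] The xfs fix above replaces a subtraction comparator with the branchless `(a > b) - (a < b)` idiom. A minimal standalone illustration of why — hypothetical function names, not the xfs code itself; note the broken variant's truncation behavior is implementation-defined in ISO C, though two's-complement wrapping is universal in practice:

```c
#include <stdint.h>

/* BROKEN: subtracting two uint64_t values and truncating to int keeps
 * only the low 32 bits, so the sign is wrong whenever the inode
 * numbers differ by more than INT_MAX (truncation of an out-of-range
 * value is implementation-defined; it wraps on common compilers) */
static int cmp_ino_broken(uint64_t a, uint64_t b)
{
	return (int)(a - b);
}

/* FIXED: always returns exactly -1, 0, or 1, with no overflow --
 * the same three-way comparison the patch open-codes in lieu of
 * cmp_int(), which 6.12 does not have */
static int cmp_ino_fixed(uint64_t a, uint64_t b)
{
	return (a > b) - (a < b);
}
```

For example, with a = 1 and b = 2^40 the broken variant's low 32 bits of `a - b` come out positive even though a < b, so a sort using it can mis-order the intents; the fixed variant returns -1.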