git.ipfire.org Git - thirdparty/kernel/stable-queue.git/commitdiff
6.18-stable patches
author    Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Sat, 7 Feb 2026 15:02:33 +0000 (16:02 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Sat, 7 Feb 2026 15:02:33 +0000 (16:02 +0100)
added patches:
alsa-aloop-fix-racy-access-at-pcm-trigger.patch
arm-9468-1-fix-memset64-on-big-endian.patch
ceph-fix-null-pointer-dereference-in-ceph_mds_auth_match.patch
ceph-fix-oops-due-to-invalid-pointer-for-kfree-in-parse_longname.patch
cgroup-dmem-avoid-pool-uaf.patch
cgroup-dmem-avoid-rcu-warning-when-unregister-region.patch
cgroup-dmem-fix-null-pointer-dereference-when-setting-max.patch
drm-amd-set-minimum-version-for-set_hw_resource_1-on-gfx11-to-0x52.patch
gve-correct-ethtool-rx_dropped-calculation.patch
gve-fix-stats-report-corruption-on-queue-count-change.patch
kvm-x86-explicitly-configure-supported-xss-from-svm-vmx-_set_cpu_caps.patch
mm-shmem-prevent-infinite-loop-on-truncate-race.patch
mm-slab-add-alloc_tagging_slab_free_hook-for-memcg_alloc_abort_single.patch
net-cpsw-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch
net-cpsw_new-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch
net-spacemit-k1-emac-fix-jumbo-frame-support.patch
nouveau-add-a-third-state-to-the-fini-handler.patch
nouveau-gsp-fix-suspend-resume-regression-on-r570-firmware.patch
nouveau-gsp-use-rpc-sequence-numbers-properly.patch
platform-x86-intel_telemetry-fix-swapped-arrays-in-pss-output.patch
pmdomain-imx-gpcv2-fix-the-imx8mm-gpu-hang-due-to-wrong-adb400-reset.patch
pmdomain-imx8m-blk-ctrl-fix-out-of-range-access-of-bc-domains.patch
pmdomain-imx8mp-blk-ctrl-keep-gpc-power-domain-on-for-system-wakeup.patch
pmdomain-imx8mp-blk-ctrl-keep-usb-phy-power-domain-on-for-system-wakeup.patch
pmdomain-qcom-rpmpd-fix-off-by-one-error-in-clamping-to-the-highest-state.patch
procfs-avoid-fetching-build-id-while-holding-vma-lock.patch
rbd-check-for-eod-after-exclusive-lock-is-ensured-to-be-held.patch
revert-drm-amd-check-if-aspm-is-enabled-from-pcie-subsystem.patch
x86-kfence-fix-booting-on-32bit-non-pae-systems.patch
x86-vmware-fix-hypercall-clobbers.patch

31 files changed:
queue-6.18/alsa-aloop-fix-racy-access-at-pcm-trigger.patch [new file with mode: 0644]
queue-6.18/arm-9468-1-fix-memset64-on-big-endian.patch [new file with mode: 0644]
queue-6.18/ceph-fix-null-pointer-dereference-in-ceph_mds_auth_match.patch [new file with mode: 0644]
queue-6.18/ceph-fix-oops-due-to-invalid-pointer-for-kfree-in-parse_longname.patch [new file with mode: 0644]
queue-6.18/cgroup-dmem-avoid-pool-uaf.patch [new file with mode: 0644]
queue-6.18/cgroup-dmem-avoid-rcu-warning-when-unregister-region.patch [new file with mode: 0644]
queue-6.18/cgroup-dmem-fix-null-pointer-dereference-when-setting-max.patch [new file with mode: 0644]
queue-6.18/drm-amd-set-minimum-version-for-set_hw_resource_1-on-gfx11-to-0x52.patch [new file with mode: 0644]
queue-6.18/gve-correct-ethtool-rx_dropped-calculation.patch [new file with mode: 0644]
queue-6.18/gve-fix-stats-report-corruption-on-queue-count-change.patch [new file with mode: 0644]
queue-6.18/kvm-x86-explicitly-configure-supported-xss-from-svm-vmx-_set_cpu_caps.patch [new file with mode: 0644]
queue-6.18/mm-shmem-prevent-infinite-loop-on-truncate-race.patch [new file with mode: 0644]
queue-6.18/mm-slab-add-alloc_tagging_slab_free_hook-for-memcg_alloc_abort_single.patch [new file with mode: 0644]
queue-6.18/net-cpsw-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch [new file with mode: 0644]
queue-6.18/net-cpsw_new-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch [new file with mode: 0644]
queue-6.18/net-spacemit-k1-emac-fix-jumbo-frame-support.patch [new file with mode: 0644]
queue-6.18/nouveau-add-a-third-state-to-the-fini-handler.patch [new file with mode: 0644]
queue-6.18/nouveau-gsp-fix-suspend-resume-regression-on-r570-firmware.patch [new file with mode: 0644]
queue-6.18/nouveau-gsp-use-rpc-sequence-numbers-properly.patch [new file with mode: 0644]
queue-6.18/platform-x86-intel_telemetry-fix-swapped-arrays-in-pss-output.patch [new file with mode: 0644]
queue-6.18/pmdomain-imx-gpcv2-fix-the-imx8mm-gpu-hang-due-to-wrong-adb400-reset.patch [new file with mode: 0644]
queue-6.18/pmdomain-imx8m-blk-ctrl-fix-out-of-range-access-of-bc-domains.patch [new file with mode: 0644]
queue-6.18/pmdomain-imx8mp-blk-ctrl-keep-gpc-power-domain-on-for-system-wakeup.patch [new file with mode: 0644]
queue-6.18/pmdomain-imx8mp-blk-ctrl-keep-usb-phy-power-domain-on-for-system-wakeup.patch [new file with mode: 0644]
queue-6.18/pmdomain-qcom-rpmpd-fix-off-by-one-error-in-clamping-to-the-highest-state.patch [new file with mode: 0644]
queue-6.18/procfs-avoid-fetching-build-id-while-holding-vma-lock.patch [new file with mode: 0644]
queue-6.18/rbd-check-for-eod-after-exclusive-lock-is-ensured-to-be-held.patch [new file with mode: 0644]
queue-6.18/revert-drm-amd-check-if-aspm-is-enabled-from-pcie-subsystem.patch [new file with mode: 0644]
queue-6.18/series
queue-6.18/x86-kfence-fix-booting-on-32bit-non-pae-systems.patch [new file with mode: 0644]
queue-6.18/x86-vmware-fix-hypercall-clobbers.patch [new file with mode: 0644]

diff --git a/queue-6.18/alsa-aloop-fix-racy-access-at-pcm-trigger.patch b/queue-6.18/alsa-aloop-fix-racy-access-at-pcm-trigger.patch
new file mode 100644 (file)
index 0000000..b5d483b
--- /dev/null
@@ -0,0 +1,117 @@
+From 826af7fa62e347464b1b4e0ba2fe19a92438084f Mon Sep 17 00:00:00 2001
+From: Takashi Iwai <tiwai@suse.de>
+Date: Tue, 3 Feb 2026 15:09:59 +0100
+Subject: ALSA: aloop: Fix racy access at PCM trigger
+
+From: Takashi Iwai <tiwai@suse.de>
+
+commit 826af7fa62e347464b1b4e0ba2fe19a92438084f upstream.
+
+The PCM trigger callback of aloop driver tries to check the PCM state
+and stop the stream of the tied substream in the corresponding cable.
+Since both check and stop operations are performed outside the cable
+lock, this may result in UAF when a program attempts to trigger
+frequently while opening/closing the tied stream, as spotted by
+fuzzers.
+
+For addressing the UAF, this patch changes two things:
+- It covers most of the code in loopback_check_format() with the
+  cable->lock spinlock, and adds the proper NULL checks.  This already
+  avoids some racy accesses.
+- In addition, now we try to check the state of the capture PCM stream
+  that may be stopped in this function, which was the major pain point
+  leading to UAF.
+
+Reported-by: syzbot+5f8f3acdee1ec7a7ef7b@syzkaller.appspotmail.com
+Closes: https://lore.kernel.org/69783ba1.050a0220.c9109.0011.GAE@google.com
+Cc: <stable@vger.kernel.org>
+Link: https://patch.msgid.link/20260203141003.116584-1-tiwai@suse.de
+Signed-off-by: Takashi Iwai <tiwai@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ sound/drivers/aloop.c |   62 +++++++++++++++++++++++++++++---------------------
+ 1 file changed, 36 insertions(+), 26 deletions(-)
+
+--- a/sound/drivers/aloop.c
++++ b/sound/drivers/aloop.c
+@@ -336,37 +336,43 @@ static bool is_access_interleaved(snd_pc
+ static int loopback_check_format(struct loopback_cable *cable, int stream)
+ {
++      struct loopback_pcm *dpcm_play, *dpcm_capt;
+       struct snd_pcm_runtime *runtime, *cruntime;
+       struct loopback_setup *setup;
+       struct snd_card *card;
++      bool stop_capture = false;
+       int check;
+-      if (cable->valid != CABLE_VALID_BOTH) {
+-              if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+-                      goto __notify;
+-              return 0;
+-      }
+-      runtime = cable->streams[SNDRV_PCM_STREAM_PLAYBACK]->
+-                                                      substream->runtime;
+-      cruntime = cable->streams[SNDRV_PCM_STREAM_CAPTURE]->
+-                                                      substream->runtime;
+-      check = runtime->format != cruntime->format ||
+-              runtime->rate != cruntime->rate ||
+-              runtime->channels != cruntime->channels ||
+-              is_access_interleaved(runtime->access) !=
+-              is_access_interleaved(cruntime->access);
+-      if (!check)
+-              return 0;
+-      if (stream == SNDRV_PCM_STREAM_CAPTURE) {
+-              return -EIO;
+-      } else {
+-              snd_pcm_stop(cable->streams[SNDRV_PCM_STREAM_CAPTURE]->
+-                                      substream, SNDRV_PCM_STATE_DRAINING);
+-            __notify:
+-              runtime = cable->streams[SNDRV_PCM_STREAM_PLAYBACK]->
+-                                                      substream->runtime;
+-              setup = get_setup(cable->streams[SNDRV_PCM_STREAM_PLAYBACK]);
+-              card = cable->streams[SNDRV_PCM_STREAM_PLAYBACK]->loopback->card;
++      scoped_guard(spinlock_irqsave, &cable->lock) {
++              dpcm_play = cable->streams[SNDRV_PCM_STREAM_PLAYBACK];
++              dpcm_capt = cable->streams[SNDRV_PCM_STREAM_CAPTURE];
++
++              if (cable->valid != CABLE_VALID_BOTH) {
++                      if (stream == SNDRV_PCM_STREAM_CAPTURE || !dpcm_play)
++                              return 0;
++              } else {
++                      if (!dpcm_play || !dpcm_capt)
++                              return -EIO;
++                      runtime = dpcm_play->substream->runtime;
++                      cruntime = dpcm_capt->substream->runtime;
++                      if (!runtime || !cruntime)
++                              return -EIO;
++                      check = runtime->format != cruntime->format ||
++                      runtime->rate != cruntime->rate ||
++                      runtime->channels != cruntime->channels ||
++                      is_access_interleaved(runtime->access) !=
++                      is_access_interleaved(cruntime->access);
++                      if (!check)
++                              return 0;
++                      if (stream == SNDRV_PCM_STREAM_CAPTURE)
++                              return -EIO;
++                      else if (cruntime->state == SNDRV_PCM_STATE_RUNNING)
++                              stop_capture = true;
++              }
++
++              setup = get_setup(dpcm_play);
++              card = dpcm_play->loopback->card;
++              runtime = dpcm_play->substream->runtime;
+               if (setup->format != runtime->format) {
+                       snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_VALUE,
+                                                       &setup->format_id);
+@@ -389,6 +395,10 @@ static int loopback_check_format(struct
+                       setup->access = runtime->access;
+               }
+       }
++
++      if (stop_capture)
++              snd_pcm_stop(dpcm_capt->substream, SNDRV_PCM_STATE_DRAINING);
++
+       return 0;
+ }
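The fix above makes its decision under `cable->lock` (recording it in a `stop_capture` flag) and only calls the potentially sleeping `snd_pcm_stop()` after the lock is dropped. A minimal user-space sketch of that "check under lock, act after unlock" pattern — `struct cable` and `check_should_stop()` are hypothetical stand-ins, not the ALSA types, and pthreads stands in for the kernel spinlock:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for the loopback cable state. */
struct cable {
	pthread_mutex_t lock;
	bool capture_running;
};

/* Decide entirely under the lock whether the capture stream must be
 * stopped; the stop action itself is performed by the caller after
 * the lock is released, mirroring the stop_capture flag in
 * loopback_check_format(). */
bool check_should_stop(struct cable *c)
{
	bool stop = false;

	pthread_mutex_lock(&c->lock);
	if (c->capture_running) {
		stop = true;
		c->capture_running = false;	/* claim the stop under the lock */
	}
	pthread_mutex_unlock(&c->lock);

	return stop;	/* caller stops the stream outside the lock */
}
```

The key property is that no other thread can observe a half-made decision: the state check and the claim happen atomically, while the slow action stays outside the critical section.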
diff --git a/queue-6.18/arm-9468-1-fix-memset64-on-big-endian.patch b/queue-6.18/arm-9468-1-fix-memset64-on-big-endian.patch
new file mode 100644 (file)
index 0000000..665d71a
--- /dev/null
@@ -0,0 +1,40 @@
+From 23ea2a4c72323feb6e3e025e8a6f18336513d5ad Mon Sep 17 00:00:00 2001
+From: Thomas Weissschuh <thomas.weissschuh@linutronix.de>
+Date: Wed, 7 Jan 2026 11:01:49 +0100
+Subject: ARM: 9468/1: fix memset64() on big-endian
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Thomas Weissschuh <thomas.weissschuh@linutronix.de>
+
+commit 23ea2a4c72323feb6e3e025e8a6f18336513d5ad upstream.
+
+On big-endian systems the 32-bit low and high halves need to be swapped
+for the underlying assembly implementation to work correctly.
+
+Fixes: fd1d362600e2 ("ARM: implement memset32 & memset64")
+Cc: stable@vger.kernel.org
+Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
+Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
+Reviewed-by: Arnd Bergmann <arnd@arndb.de>
+Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm/include/asm/string.h |    5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+--- a/arch/arm/include/asm/string.h
++++ b/arch/arm/include/asm/string.h
+@@ -42,7 +42,10 @@ static inline void *memset32(uint32_t *p
+ extern void *__memset64(uint64_t *, uint32_t low, __kernel_size_t, uint32_t hi);
+ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
+ {
+-      return __memset64(p, v, n * 8, v >> 32);
++      if (IS_ENABLED(CONFIG_CPU_LITTLE_ENDIAN))
++              return __memset64(p, v, n * 8, v >> 32);
++      else
++              return __memset64(p, v >> 32, n * 8, v);
+ }
+ /*
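The assembly behind `__memset64()` takes the two 32-bit halves of the value as separate arguments and stores them as consecutive words, so on big-endian the halves must be swapped to land in the right byte order. A user-space model of that split — `fake_memset64()` and `model_memset64()` are simplified stand-ins for the kernel symbols, not the real implementations:

```c
#include <stdint.h>
#include <stddef.h>

/* Model of the assembly helper: store 'first' and 'second' as
 * consecutive 32-bit words, 'bytes' bytes in total. */
static void fake_memset64(uint32_t *p, uint32_t first, size_t bytes,
			  uint32_t second)
{
	for (size_t i = 0; i < bytes / 8; i++) {
		p[2 * i]     = first;
		p[2 * i + 1] = second;
	}
}

/* Endian-aware wrapper: on little-endian the low half must come first
 * in memory; on big-endian the high half must. */
static void model_memset64(uint64_t *p, uint64_t v, size_t n)
{
	uint32_t lo = (uint32_t)v, hi = (uint32_t)(v >> 32);
	union { uint32_t w; uint8_t b[4]; } probe = { .w = 1 };

	if (probe.b[0] == 1)				/* little-endian */
		fake_memset64((uint32_t *)p, lo, n * 8, hi);
	else						/* big-endian */
		fake_memset64((uint32_t *)p, hi, n * 8, lo);
}
```

On either endianness the reassembled 64-bit values come back intact; the pre-fix kernel code effectively always took the little-endian branch, which is exactly what broke big-endian.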
diff --git a/queue-6.18/ceph-fix-null-pointer-dereference-in-ceph_mds_auth_match.patch b/queue-6.18/ceph-fix-null-pointer-dereference-in-ceph_mds_auth_match.patch
new file mode 100644 (file)
index 0000000..62d83bb
--- /dev/null
@@ -0,0 +1,182 @@
+From 7987cce375ac8ce98e170a77aa2399f2cf6eb99f Mon Sep 17 00:00:00 2001
+From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
+Date: Tue, 3 Feb 2026 14:54:46 -0800
+Subject: ceph: fix NULL pointer dereference in ceph_mds_auth_match()
+
+From: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
+
+commit 7987cce375ac8ce98e170a77aa2399f2cf6eb99f upstream.
+
+The CephFS kernel client has a regression starting from 6.18-rc1.
+We have an issue in ceph_mds_auth_match() if fs_name == NULL:
+
+    const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
+    ...
+    if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
+            /* fsname mismatch, try next one */
+            return 0;
+    }
+
+Patrick Donnelly suggested that: In summary, we should definitely start
+decoding `fs_name` from the MDSMap and do strict authorizations checks
+against it. Note that the `-o mds_namespace=foo` should only be used for
+selecting the file system to mount and nothing else. It's possible
+no mds_namespace is specified but the kernel will mount the only
+file system that exists which may have name "foo".
+
+This patch reworks ceph_mdsmap_decode() and namespace_equals() with
+the goal of supporting the suggested concept. Now struct ceph_mdsmap
+contains an m_fs_name field that receives a copy of the FS name
+extracted by ceph_extract_encoded_string(). For "old" CephFS file
+systems, the name "cephfs" is used.
+
+[ idryomov: replace redundant %*pE with %s in ceph_mdsmap_decode(),
+  get rid of a series of strlen() calls in ceph_namespace_match(),
+  drop changes to namespace_equals() body to avoid treating empty
+  mds_namespace as equal, drop changes to ceph_mdsc_handle_fsmap()
+  as namespace_equals() isn't an equivalent substitution there ]
+
+Cc: stable@vger.kernel.org
+Fixes: 22c73d52a6d0 ("ceph: fix multifs mds auth caps issue")
+Link: https://tracker.ceph.com/issues/73886
+Signed-off-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
+Reviewed-by: Patrick Donnelly <pdonnell@ibm.com>
+Tested-by: Patrick Donnelly <pdonnell@ibm.com>
+Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ceph/mds_client.c         |    5 +++--
+ fs/ceph/mdsmap.c             |   26 +++++++++++++++++++-------
+ fs/ceph/mdsmap.h             |    1 +
+ fs/ceph/super.h              |   16 ++++++++++++++--
+ include/linux/ceph/ceph_fs.h |    6 ++++++
+ 5 files changed, 43 insertions(+), 11 deletions(-)
+
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -5655,7 +5655,7 @@ static int ceph_mds_auth_match(struct ce
+       u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid);
+       u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid);
+       struct ceph_client *cl = mdsc->fsc->client;
+-      const char *fs_name = mdsc->fsc->mount_options->mds_namespace;
++      const char *fs_name = mdsc->mdsmap->m_fs_name;
+       const char *spath = mdsc->fsc->mount_options->server_path;
+       bool gid_matched = false;
+       u32 gid, tlen, len;
+@@ -5663,7 +5663,8 @@ static int ceph_mds_auth_match(struct ce
+       doutc(cl, "fsname check fs_name=%s  match.fs_name=%s\n",
+             fs_name, auth->match.fs_name ? auth->match.fs_name : "");
+-      if (auth->match.fs_name && strcmp(auth->match.fs_name, fs_name)) {
++
++      if (!ceph_namespace_match(auth->match.fs_name, fs_name)) {
+               /* fsname mismatch, try next one */
+               return 0;
+       }
+--- a/fs/ceph/mdsmap.c
++++ b/fs/ceph/mdsmap.c
+@@ -353,22 +353,33 @@ struct ceph_mdsmap *ceph_mdsmap_decode(s
+               __decode_and_drop_type(p, end, u8, bad_ext);
+       }
+       if (mdsmap_ev >= 8) {
+-              u32 fsname_len;
++              size_t fsname_len;
++
+               /* enabled */
+               ceph_decode_8_safe(p, end, m->m_enabled, bad_ext);
++
+               /* fs_name */
+-              ceph_decode_32_safe(p, end, fsname_len, bad_ext);
++              m->m_fs_name = ceph_extract_encoded_string(p, end,
++                                                         &fsname_len,
++                                                         GFP_NOFS);
++              if (IS_ERR(m->m_fs_name)) {
++                      m->m_fs_name = NULL;
++                      goto nomem;
++              }
+               /* validate fsname against mds_namespace */
+-              if (!namespace_equals(mdsc->fsc->mount_options, *p,
++              if (!namespace_equals(mdsc->fsc->mount_options, m->m_fs_name,
+                                     fsname_len)) {
+-                      pr_warn_client(cl, "fsname %*pE doesn't match mds_namespace %s\n",
+-                                     (int)fsname_len, (char *)*p,
++                      pr_warn_client(cl, "fsname %s doesn't match mds_namespace %s\n",
++                                     m->m_fs_name,
+                                      mdsc->fsc->mount_options->mds_namespace);
+                       goto bad;
+               }
+-              /* skip fsname after validation */
+-              ceph_decode_skip_n(p, end, fsname_len, bad);
++      } else {
++              m->m_enabled = false;
++              m->m_fs_name = kstrdup(CEPH_OLD_FS_NAME, GFP_NOFS);
++              if (!m->m_fs_name)
++                      goto nomem;
+       }
+       /* damaged */
+       if (mdsmap_ev >= 9) {
+@@ -430,6 +441,7 @@ void ceph_mdsmap_destroy(struct ceph_mds
+               kfree(m->m_info);
+       }
+       kfree(m->m_data_pg_pools);
++      kfree(m->m_fs_name);
+       kfree(m);
+ }
+--- a/fs/ceph/mdsmap.h
++++ b/fs/ceph/mdsmap.h
+@@ -45,6 +45,7 @@ struct ceph_mdsmap {
+       bool m_enabled;
+       bool m_damaged;
+       int m_num_laggy;
++      char *m_fs_name;
+ };
+ static inline struct ceph_entity_addr *
+--- a/fs/ceph/super.h
++++ b/fs/ceph/super.h
+@@ -104,14 +104,26 @@ struct ceph_mount_options {
+       struct fscrypt_dummy_policy dummy_enc_policy;
+ };
++#define CEPH_NAMESPACE_WILDCARD               "*"
++
++static inline bool ceph_namespace_match(const char *pattern,
++                                      const char *target)
++{
++      if (!pattern || !pattern[0] ||
++          !strcmp(pattern, CEPH_NAMESPACE_WILDCARD))
++              return true;
++
++      return !strcmp(pattern, target);
++}
++
+ /*
+  * Check if the mds namespace in ceph_mount_options matches
+  * the passed in namespace string. First time match (when
+  * ->mds_namespace is NULL) is treated specially, since
+  * ->mds_namespace needs to be initialized by the caller.
+  */
+-static inline int namespace_equals(struct ceph_mount_options *fsopt,
+-                                 const char *namespace, size_t len)
++static inline bool namespace_equals(struct ceph_mount_options *fsopt,
++                                  const char *namespace, size_t len)
+ {
+       return !(fsopt->mds_namespace &&
+                (strlen(fsopt->mds_namespace) != len ||
+--- a/include/linux/ceph/ceph_fs.h
++++ b/include/linux/ceph/ceph_fs.h
+@@ -31,6 +31,12 @@
+ #define CEPH_INO_CEPH   2            /* hidden .ceph dir */
+ #define CEPH_INO_GLOBAL_SNAPREALM  3 /* global dummy snaprealm */
++/*
++ * name for "old" CephFS file systems,
++ * see ceph.git e2b151d009640114b2565c901d6f41f6cd5ec652
++ */
++#define CEPH_OLD_FS_NAME      "cephfs"
++
+ /* arbitrary limit on max # of monitors (cluster of 3 is typical) */
+ #define CEPH_MAX_MON   31
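The `ceph_namespace_match()` helper added in `super.h` treats a NULL, empty, or `"*"` pattern as matching any file system name, and otherwise requires an exact match. The same rules restated as a standalone function, outside the kernel headers (`NAMESPACE_WILDCARD` here mirrors `CEPH_NAMESPACE_WILDCARD`):

```c
#include <stdbool.h>
#include <string.h>

#define NAMESPACE_WILDCARD "*"

/* Mirrors the match rules in the patch: an absent or wildcard pattern
 * matches any FS name; otherwise compare the names exactly. */
static bool namespace_match(const char *pattern, const char *target)
{
	if (!pattern || !pattern[0] ||
	    !strcmp(pattern, NAMESPACE_WILDCARD))
		return true;

	return !strcmp(pattern, target);
}
```

Note that only the pattern side may be absent; the target (the decoded `m_fs_name`) is always non-NULL after the patch, which is what removes the original NULL dereference.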
diff --git a/queue-6.18/ceph-fix-oops-due-to-invalid-pointer-for-kfree-in-parse_longname.patch b/queue-6.18/ceph-fix-oops-due-to-invalid-pointer-for-kfree-in-parse_longname.patch
new file mode 100644 (file)
index 0000000..88e8a2d
--- /dev/null
@@ -0,0 +1,67 @@
+From bc8dedae022ce3058659c3addef3ec4b41d15e00 Mon Sep 17 00:00:00 2001
+From: Daniel Vogelbacher <daniel@chaospixel.com>
+Date: Sun, 1 Feb 2026 09:34:01 +0100
+Subject: ceph: fix oops due to invalid pointer for kfree() in parse_longname()
+
+From: Daniel Vogelbacher <daniel@chaospixel.com>
+
+commit bc8dedae022ce3058659c3addef3ec4b41d15e00 upstream.
+
+This fixes a kernel oops when reading ceph snapshot directories (.snap),
+for example by simply running `ls /mnt/my_ceph/.snap`.
+
+The variable str is guarded by __free(kfree), but advanced by one for
+skipping the initial '_' in snapshot names. Thus, kfree() is called
+with an invalid pointer.  This patch removes the need for advancing the
+pointer so kfree() is called with correct memory pointer.
+
+Steps to reproduce:
+
+1. Create snapshots on a cephfs volume (I've 63 snaps in my testcase)
+
+2. Add cephfs mount to fstab
+$ echo "samba-fileserver@.files=/volumes/datapool/stuff/3461082b-ecc9-4e82-8549-3fd2590d3fb6      /mnt/test/stuff   ceph     acl,noatime,_netdev    0       0" >> /etc/fstab
+
+3. Reboot the system
+$ systemctl reboot
+
+4. Check if it's really mounted
+$ mount | grep stuff
+
+5. List snapshots (expected 63 snapshots on my system)
+$ ls /mnt/test/stuff/.snap
+
+Now ls hangs forever and the kernel log shows the oops.
+
+Cc: stable@vger.kernel.org
+Fixes: 101841c38346 ("[ceph] parse_longname(): strrchr() expects NUL-terminated string")
+Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220807
+Suggested-by: Helge Deller <deller@gmx.de>
+Signed-off-by: Daniel Vogelbacher <daniel@chaospixel.com>
+Reviewed-by: Viacheslav Dubeyko <Slava.Dubeyko@ibm.com>
+Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ceph/crypto.c |    9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+--- a/fs/ceph/crypto.c
++++ b/fs/ceph/crypto.c
+@@ -219,12 +219,13 @@ static struct inode *parse_longname(cons
+       struct ceph_vino vino = { .snap = CEPH_NOSNAP };
+       char *name_end, *inode_number;
+       int ret = -EIO;
+-      /* NUL-terminate */
+-      char *str __free(kfree) = kmemdup_nul(name, *name_len, GFP_KERNEL);
++      /* Snapshot name must start with an underscore */
++      if (*name_len <= 0 || name[0] != '_')
++              return ERR_PTR(-EIO);
++      /* Skip initial '_' and NUL-terminate */
++      char *str __free(kfree) = kmemdup_nul(name + 1, *name_len - 1, GFP_KERNEL);
+       if (!str)
+               return ERR_PTR(-ENOMEM);
+-      /* Skip initial '_' */
+-      str++;
+       name_end = strrchr(str, '_');
+       if (!name_end) {
+               doutc(cl, "failed to parse long snapshot name: %s\n", str);
diff --git a/queue-6.18/cgroup-dmem-avoid-pool-uaf.patch b/queue-6.18/cgroup-dmem-avoid-pool-uaf.patch
new file mode 100644 (file)
index 0000000..25e5c09
--- /dev/null
@@ -0,0 +1,224 @@
+From 99a2ef500906138ba58093b9893972a5c303c734 Mon Sep 17 00:00:00 2001
+From: Chen Ridong <chenridong@huawei.com>
+Date: Mon, 2 Feb 2026 12:27:18 +0000
+Subject: cgroup/dmem: avoid pool UAF
+
+From: Chen Ridong <chenridong@huawei.com>
+
+commit 99a2ef500906138ba58093b9893972a5c303c734 upstream.
+
+A UAF issue was observed:
+
+BUG: KASAN: slab-use-after-free in page_counter_uncharge+0x65/0x150
+Write of size 8 at addr ffff888106715440 by task insmod/527
+
+CPU: 4 UID: 0 PID: 527 Comm: insmod    6.19.0-rc7-next-20260129+ #11
+Tainted: [O]=OOT_MODULE
+Call Trace:
+<TASK>
+dump_stack_lvl+0x82/0xd0
+kasan_report+0xca/0x100
+kasan_check_range+0x39/0x1c0
+page_counter_uncharge+0x65/0x150
+dmem_cgroup_uncharge+0x1f/0x260
+
+Allocated by task 527:
+
+Freed by task 0:
+
+The buggy address belongs to the object at ffff888106715400
+which belongs to the cache kmalloc-512 of size 512
+The buggy address is located 64 bytes inside of
+freed 512-byte region [ffff888106715400, ffff888106715600)
+
+The buggy address belongs to the physical page:
+
+Memory state around the buggy address:
+ffff888106715300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+ffff888106715380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+>ffff888106715400: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+                                    ^
+ffff888106715480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+ffff888106715500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+
+The issue occurs because a pool can still be held by a caller after its
+associated memory region is unregistered. The current implementation frees
+the pool even if users still hold references to it (e.g., before uncharge
+operations complete).
+
+This patch adds a reference counter to each pool, ensuring that a pool is
+only freed when its reference count drops to zero.
+
+Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup")
+Cc: stable@vger.kernel.org # v6.14+
+Signed-off-by: Chen Ridong <chenridong@huawei.com>
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/cgroup/dmem.c | 60 ++++++++++++++++++++++++++++++++++++++++++--
+ 1 file changed, 58 insertions(+), 2 deletions(-)
+
+diff --git a/kernel/cgroup/dmem.c b/kernel/cgroup/dmem.c
+index 787b334e0f5d..1ea6afffa985 100644
+--- a/kernel/cgroup/dmem.c
++++ b/kernel/cgroup/dmem.c
+@@ -14,6 +14,7 @@
+ #include <linux/mutex.h>
+ #include <linux/page_counter.h>
+ #include <linux/parser.h>
++#include <linux/refcount.h>
+ #include <linux/rculist.h>
+ #include <linux/slab.h>
+@@ -71,7 +72,9 @@ struct dmem_cgroup_pool_state {
+       struct rcu_head rcu;
+       struct page_counter cnt;
++      struct dmem_cgroup_pool_state *parent;
++      refcount_t ref;
+       bool inited;
+ };
+@@ -88,6 +91,9 @@ struct dmem_cgroup_pool_state {
+ static DEFINE_SPINLOCK(dmemcg_lock);
+ static LIST_HEAD(dmem_cgroup_regions);
++static void dmemcg_free_region(struct kref *ref);
++static void dmemcg_pool_free_rcu(struct rcu_head *rcu);
++
+ static inline struct dmemcg_state *
+ css_to_dmemcs(struct cgroup_subsys_state *css)
+ {
+@@ -104,10 +110,38 @@ static struct dmemcg_state *parent_dmemcs(struct dmemcg_state *cg)
+       return cg->css.parent ? css_to_dmemcs(cg->css.parent) : NULL;
+ }
++static void dmemcg_pool_get(struct dmem_cgroup_pool_state *pool)
++{
++      refcount_inc(&pool->ref);
++}
++
++static bool dmemcg_pool_tryget(struct dmem_cgroup_pool_state *pool)
++{
++      return refcount_inc_not_zero(&pool->ref);
++}
++
++static void dmemcg_pool_put(struct dmem_cgroup_pool_state *pool)
++{
++      if (!refcount_dec_and_test(&pool->ref))
++              return;
++
++      call_rcu(&pool->rcu, dmemcg_pool_free_rcu);
++}
++
++static void dmemcg_pool_free_rcu(struct rcu_head *rcu)
++{
++      struct dmem_cgroup_pool_state *pool = container_of(rcu, typeof(*pool), rcu);
++
++      if (pool->parent)
++              dmemcg_pool_put(pool->parent);
++      kref_put(&pool->region->ref, dmemcg_free_region);
++      kfree(pool);
++}
++
+ static void free_cg_pool(struct dmem_cgroup_pool_state *pool)
+ {
+       list_del(&pool->region_node);
+-      kfree(pool);
++      dmemcg_pool_put(pool);
+ }
+ static void
+@@ -342,6 +376,12 @@ alloc_pool_single(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *region
+       page_counter_init(&pool->cnt,
+                         ppool ? &ppool->cnt : NULL, true);
+       reset_all_resource_limits(pool);
++      refcount_set(&pool->ref, 1);
++      kref_get(&region->ref);
++      if (ppool && !pool->parent) {
++              pool->parent = ppool;
++              dmemcg_pool_get(ppool);
++      }
+       list_add_tail_rcu(&pool->css_node, &dmemcs->pools);
+       list_add_tail(&pool->region_node, &region->pools);
+@@ -389,6 +429,10 @@ get_cg_pool_locked(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *regio
+               /* Fix up parent links, mark as inited. */
+               pool->cnt.parent = &ppool->cnt;
++              if (ppool && !pool->parent) {
++                      pool->parent = ppool;
++                      dmemcg_pool_get(ppool);
++              }
+               pool->inited = true;
+               pool = ppool;
+@@ -435,6 +479,8 @@ void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region)
+       list_for_each_entry_safe(pool, next, &region->pools, region_node) {
+               list_del_rcu(&pool->css_node);
++              list_del(&pool->region_node);
++              dmemcg_pool_put(pool);
+       }
+       /*
+@@ -515,8 +561,10 @@ static struct dmem_cgroup_region *dmemcg_get_region_by_name(const char *name)
+  */
+ void dmem_cgroup_pool_state_put(struct dmem_cgroup_pool_state *pool)
+ {
+-      if (pool)
++      if (pool) {
+               css_put(&pool->cs->css);
++              dmemcg_pool_put(pool);
++      }
+ }
+ EXPORT_SYMBOL_GPL(dmem_cgroup_pool_state_put);
+@@ -530,6 +578,8 @@ get_cg_pool_unlocked(struct dmemcg_state *cg, struct dmem_cgroup_region *region)
+       pool = find_cg_pool_locked(cg, region);
+       if (pool && !READ_ONCE(pool->inited))
+               pool = NULL;
++      if (pool && !dmemcg_pool_tryget(pool))
++              pool = NULL;
+       rcu_read_unlock();
+       while (!pool) {
+@@ -538,6 +588,8 @@ get_cg_pool_unlocked(struct dmemcg_state *cg, struct dmem_cgroup_region *region)
+                       pool = get_cg_pool_locked(cg, region, &allocpool);
+               else
+                       pool = ERR_PTR(-ENODEV);
++              if (!IS_ERR(pool))
++                      dmemcg_pool_get(pool);
+               spin_unlock(&dmemcg_lock);
+               if (pool == ERR_PTR(-ENOMEM)) {
+@@ -573,6 +625,7 @@ void dmem_cgroup_uncharge(struct dmem_cgroup_pool_state *pool, u64 size)
+       page_counter_uncharge(&pool->cnt, size);
+       css_put(&pool->cs->css);
++      dmemcg_pool_put(pool);
+ }
+ EXPORT_SYMBOL_GPL(dmem_cgroup_uncharge);
+@@ -624,7 +677,9 @@ int dmem_cgroup_try_charge(struct dmem_cgroup_region *region, u64 size,
+               if (ret_limit_pool) {
+                       *ret_limit_pool = container_of(fail, struct dmem_cgroup_pool_state, cnt);
+                       css_get(&(*ret_limit_pool)->cs->css);
++                      dmemcg_pool_get(*ret_limit_pool);
+               }
++              dmemcg_pool_put(pool);
+               ret = -EAGAIN;
+               goto err;
+       }
+@@ -719,6 +774,7 @@ static ssize_t dmemcg_limit_write(struct kernfs_open_file *of,
+               /* And commit */
+               apply(pool, new_limit);
++              dmemcg_pool_put(pool);
+ out_put:
+               kref_put(&region->ref, dmemcg_free_region);
+-- 
+2.53.0
+
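The fix gives each pool a `refcount_t` so the object is freed only when the last holder drops it, instead of when the region is unregistered. A much-reduced user-space model of that get/tryget/put lifetime — C11 atomics stand in for the kernel's refcount API, and a plain `free()` stands in for the RCU-deferred release:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct pool {
	atomic_int ref;		/* stands in for refcount_t */
	/* ... counters, parent links ... */
};

static struct pool *pool_alloc(void)
{
	struct pool *p = calloc(1, sizeof(*p));

	if (p)
		atomic_store(&p->ref, 1);  /* creator holds one reference */
	return p;
}

static void pool_get(struct pool *p)
{
	atomic_fetch_add(&p->ref, 1);
}

/* Like refcount_inc_not_zero(): take a reference only while the object
 * is still live; fail once the count has already dropped to zero. */
static bool pool_tryget(struct pool *p)
{
	int old = atomic_load(&p->ref);

	while (old != 0)
		if (atomic_compare_exchange_weak(&p->ref, &old, old + 1))
			return true;
	return false;
}

/* The last put releases the object; returns true when it did.
 * The kernel version defers the actual free via call_rcu(). */
static bool pool_put(struct pool *p)
{
	if (atomic_fetch_sub(&p->ref, 1) == 1) {
		free(p);
		return true;
	}
	return false;
}
```

With this shape, an uncharge path that still holds a reference can never race with the region teardown into a use-after-free: teardown only drops its own reference, and the memory survives until every holder has called put.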
diff --git a/queue-6.18/cgroup-dmem-avoid-rcu-warning-when-unregister-region.patch b/queue-6.18/cgroup-dmem-avoid-rcu-warning-when-unregister-region.patch
new file mode 100644 (file)
index 0000000..6f04a01
--- /dev/null
@@ -0,0 +1,74 @@
+From 592a68212c5664bcaa88f24ed80bf791282790fe Mon Sep 17 00:00:00 2001
+From: Chen Ridong <chenridong@huawei.com>
+Date: Mon, 2 Feb 2026 12:27:17 +0000
+Subject: cgroup/dmem: avoid rcu warning when unregister region
+
+From: Chen Ridong <chenridong@huawei.com>
+
+commit 592a68212c5664bcaa88f24ed80bf791282790fe upstream.
+
+A warning was detected:
+
+ WARNING: suspicious RCU usage
+ 6.19.0-rc7-next-20260129+ #1101 Tainted: G           O
+ kernel/cgroup/dmem.c:456 suspicious rcu_dereference_check() usage!
+
+ other info that might help us debug this:
+
+ rcu_scheduler_active = 2, debug_locks = 1
+ 1 lock held by insmod/532:
+  #0: ffffffff85e78b38 (dmemcg_lock){+.+.}-dmem_cgroup_unregister_region+
+
+ stack backtrace:
+ CPU: 2 UID: 0 PID: 532 Comm: insmod Tainted: 6.19.0-rc7-next-
+ Tainted: [O]=OOT_MODULE
+ Call Trace:
+  <TASK>
+  dump_stack_lvl+0xb0/0xd0
+  lockdep_rcu_suspicious+0x151/0x1c0
+  dmem_cgroup_unregister_region+0x1e2/0x380
+  ? __pfx_dmem_test_init+0x10/0x10 [dmem_uaf]
+  dmem_test_init+0x65/0xff0 [dmem_uaf]
+  do_one_initcall+0xbb/0x3a0
+
+The macro list_for_each_rcu() must be used within an RCU read-side critical
+section (between rcu_read_lock() and rcu_read_unlock()). Using it outside
+that context, as seen in dmem_cgroup_unregister_region(), triggers the
+lockdep warning because the RCU protection is not guaranteed.
+
+Replace list_for_each_rcu() with list_for_each_entry_safe(), which is
+appropriate for traversal under spinlock protection where nodes may be
+deleted.
+
+Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup")
+Cc: stable@vger.kernel.org # v6.14+
+Signed-off-by: Chen Ridong <chenridong@huawei.com>
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/cgroup/dmem.c |    7 ++-----
+ 1 file changed, 2 insertions(+), 5 deletions(-)
+
+--- a/kernel/cgroup/dmem.c
++++ b/kernel/cgroup/dmem.c
+@@ -423,7 +423,7 @@ static void dmemcg_free_region(struct kr
+  */
+ void dmem_cgroup_unregister_region(struct dmem_cgroup_region *region)
+ {
+-      struct list_head *entry;
++      struct dmem_cgroup_pool_state *pool, *next;
+       if (!region)
+               return;
+@@ -433,10 +433,7 @@ void dmem_cgroup_unregister_region(struc
+       /* Remove from global region list */
+       list_del_rcu(&region->region_node);
+-      list_for_each_rcu(entry, &region->pools) {
+-              struct dmem_cgroup_pool_state *pool =
+-                      container_of(entry, typeof(*pool), region_node);
+-
++      list_for_each_entry_safe(pool, next, &region->pools, region_node) {
+               list_del_rcu(&pool->css_node);
+       }
diff --git a/queue-6.18/cgroup-dmem-fix-null-pointer-dereference-when-setting-max.patch b/queue-6.18/cgroup-dmem-fix-null-pointer-dereference-when-setting-max.patch
new file mode 100644 (file)
index 0000000..e4f9692
--- /dev/null
@@ -0,0 +1,64 @@
+From 43151f812886be1855d2cba059f9c93e4729460b Mon Sep 17 00:00:00 2001
+From: Chen Ridong <chenridong@huawei.com>
+Date: Mon, 2 Feb 2026 12:27:16 +0000
+Subject: cgroup/dmem: fix NULL pointer dereference when setting max
+
+From: Chen Ridong <chenridong@huawei.com>
+
+commit 43151f812886be1855d2cba059f9c93e4729460b upstream.
+
+An issue was triggered:
+
+ BUG: kernel NULL pointer dereference, address: 0000000000000000
+ #PF: supervisor read access in kernel mode
+ #PF: error_code(0x0000) - not-present page
+ PGD 0 P4D 0
+ Oops: Oops: 0000 [#1] SMP NOPTI
+ CPU: 15 UID: 0 PID: 658 Comm: bash Tainted: 6.19.0-rc6-next-2026012
+ Tainted: [O]=OOT_MODULE
+ Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
+ RIP: 0010:strcmp+0x10/0x30
+ RSP: 0018:ffffc900017f7dc0 EFLAGS: 00000246
+ RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff888107cd4358
+ RDX: 0000000019f73907 RSI: ffffffff82cc381a RDI: 0000000000000000
+ RBP: ffff8881016bef0d R08: 000000006c0e7145 R09: 0000000056c0e714
+ R10: 0000000000000001 R11: ffff888107cd4358 R12: 0007ffffffffffff
+ R13: ffff888101399200 R14: ffff888100fcb360 R15: 0007ffffffffffff
+ CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 0000000000000000 CR3: 0000000105c79000 CR4: 00000000000006f0
+ Call Trace:
+  <TASK>
+  dmemcg_limit_write.constprop.0+0x16d/0x390
+  ? __pfx_set_resource_max+0x10/0x10
+  kernfs_fop_write_iter+0x14e/0x200
+  vfs_write+0x367/0x510
+  ksys_write+0x66/0xe0
+  do_syscall_64+0x6b/0x390
+  entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ RIP: 0033:0x7f42697e1887
+
+It was triggered by setting max without a limit value, with a command like
+"echo test/region0 > dmem.max". To fix this issue, add a check that the
+options string is valid after parsing the region_name.
+
+Fixes: b168ed458dde ("kernel/cgroup: Add "dmem" memory accounting cgroup")
+Cc: stable@vger.kernel.org # v6.14+
+Signed-off-by: Chen Ridong <chenridong@huawei.com>
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/cgroup/dmem.c |    3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/kernel/cgroup/dmem.c
++++ b/kernel/cgroup/dmem.c
+@@ -700,6 +700,9 @@ static ssize_t dmemcg_limit_write(struct
+               if (!region_name[0])
+                       continue;
++              if (!options || !*options)
++                      return -EINVAL;
++
+               rcu_read_lock();
+               region = dmemcg_get_region_by_name(region_name);
+               rcu_read_unlock();
diff --git a/queue-6.18/drm-amd-set-minimum-version-for-set_hw_resource_1-on-gfx11-to-0x52.patch b/queue-6.18/drm-amd-set-minimum-version-for-set_hw_resource_1-on-gfx11-to-0x52.patch
new file mode 100644 (file)
index 0000000..660e7b7
--- /dev/null
@@ -0,0 +1,41 @@
+From 1478a34470bf4755465d29b348b24a610bccc180 Mon Sep 17 00:00:00 2001
+From: Mario Limonciello <mario.limonciello@amd.com>
+Date: Thu, 29 Jan 2026 13:47:22 -0600
+Subject: drm/amd: Set minimum version for set_hw_resource_1 on gfx11 to 0x52
+
+From: Mario Limonciello <mario.limonciello@amd.com>
+
+commit 1478a34470bf4755465d29b348b24a610bccc180 upstream.
+
+commit f81cd793119e ("drm/amd/amdgpu: Fix MES init sequence") caused
+a dependency on new enough MES firmware to use amdgpu.  This was fixed
+on most gfx11 and gfx12 hardware with commit 0180e0a5dd5c
+("drm/amdgpu/mes: add compatibility checks for set_hw_resource_1"), but
+this left out that GC 11.0.4 had breakage at MES 0x51.
+
+Bump the requirement to 0x52 instead.
+
+Reported-by: danijel@nausys.com
+Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4576
+Fixes: f81cd793119e ("drm/amd/amdgpu: Fix MES init sequence")
+Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
+Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+(cherry picked from commit c2d2ccc85faf8cc6934d50c18e43097eb453ade2)
+Cc: stable@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/amdgpu/mes_v11_0.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -1667,7 +1667,7 @@ static int mes_v11_0_hw_init(struct amdg
+       if (r)
+               goto failure;
+-      if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x50) {
++      if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x52) {
+               r = mes_v11_0_set_hw_resources_1(&adev->mes);
+               if (r) {
+                       DRM_ERROR("failed mes_v11_0_set_hw_resources_1, r=%d\n", r);
diff --git a/queue-6.18/gve-correct-ethtool-rx_dropped-calculation.patch b/queue-6.18/gve-correct-ethtool-rx_dropped-calculation.patch
new file mode 100644 (file)
index 0000000..efe8610
--- /dev/null
@@ -0,0 +1,111 @@
+From c7db85d579a1dccb624235534508c75fbf2dfe46 Mon Sep 17 00:00:00 2001
+From: Max Yuan <maxyuan@google.com>
+Date: Mon, 2 Feb 2026 19:39:25 +0000
+Subject: gve: Correct ethtool rx_dropped calculation
+
+From: Max Yuan <maxyuan@google.com>
+
+commit c7db85d579a1dccb624235534508c75fbf2dfe46 upstream.
+
+The gve driver's "rx_dropped" statistic, exposed via `ethtool -S`,
+incorrectly includes `rx_buf_alloc_fail` counts. These failures
+represent an inability to allocate receive buffers, not true packet
+drops where a received packet is discarded. This misrepresentation can
+lead to inaccurate diagnostics.
+
+This patch rectifies the ethtool "rx_dropped" calculation. It removes
+`rx_buf_alloc_fail` from the total and adds `xdp_tx_errors` and
+`xdp_redirect_errors`, which represent legitimate packet drops within
+the XDP path.
+
+Cc: stable@vger.kernel.org
+Fixes: 433e274b8f7b ("gve: Add stats for gve.")
+Signed-off-by: Max Yuan <maxyuan@google.com>
+Reviewed-by: Jordan Rhee <jordanrhee@google.com>
+Reviewed-by: Joshua Washington <joshwash@google.com>
+Reviewed-by: Matt Olson <maolson@google.com>
+Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
+Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
+Link: https://patch.msgid.link/20260202193925.3106272-3-hramamurthy@google.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/google/gve/gve_ethtool.c |   23 +++++++++++++++++------
+ 1 file changed, 17 insertions(+), 6 deletions(-)
+
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -152,10 +152,11 @@ gve_get_ethtool_stats(struct net_device
+       u64 tmp_rx_pkts, tmp_rx_hsplit_pkt, tmp_rx_bytes, tmp_rx_hsplit_bytes,
+               tmp_rx_skb_alloc_fail, tmp_rx_buf_alloc_fail,
+               tmp_rx_desc_err_dropped_pkt, tmp_rx_hsplit_unsplit_pkt,
+-              tmp_tx_pkts, tmp_tx_bytes;
++              tmp_tx_pkts, tmp_tx_bytes,
++              tmp_xdp_tx_errors, tmp_xdp_redirect_errors;
+       u64 rx_buf_alloc_fail, rx_desc_err_dropped_pkt, rx_hsplit_unsplit_pkt,
+               rx_pkts, rx_hsplit_pkt, rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes,
+-              tx_dropped;
++              tx_dropped, xdp_tx_errors, xdp_redirect_errors;
+       int rx_base_stats_idx, max_rx_stats_idx, max_tx_stats_idx;
+       int stats_idx, stats_region_len, nic_stats_len;
+       struct stats *report_stats;
+@@ -199,6 +200,7 @@ gve_get_ethtool_stats(struct net_device
+       for (rx_pkts = 0, rx_bytes = 0, rx_hsplit_pkt = 0,
+            rx_skb_alloc_fail = 0, rx_buf_alloc_fail = 0,
+            rx_desc_err_dropped_pkt = 0, rx_hsplit_unsplit_pkt = 0,
++           xdp_tx_errors = 0, xdp_redirect_errors = 0,
+            ring = 0;
+            ring < priv->rx_cfg.num_queues; ring++) {
+               if (priv->rx) {
+@@ -216,6 +218,9 @@ gve_get_ethtool_stats(struct net_device
+                                       rx->rx_desc_err_dropped_pkt;
+                               tmp_rx_hsplit_unsplit_pkt =
+                                       rx->rx_hsplit_unsplit_pkt;
++                              tmp_xdp_tx_errors = rx->xdp_tx_errors;
++                              tmp_xdp_redirect_errors =
++                                      rx->xdp_redirect_errors;
+                       } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+                                                      start));
+                       rx_pkts += tmp_rx_pkts;
+@@ -225,6 +230,8 @@ gve_get_ethtool_stats(struct net_device
+                       rx_buf_alloc_fail += tmp_rx_buf_alloc_fail;
+                       rx_desc_err_dropped_pkt += tmp_rx_desc_err_dropped_pkt;
+                       rx_hsplit_unsplit_pkt += tmp_rx_hsplit_unsplit_pkt;
++                      xdp_tx_errors += tmp_xdp_tx_errors;
++                      xdp_redirect_errors += tmp_xdp_redirect_errors;
+               }
+       }
+       for (tx_pkts = 0, tx_bytes = 0, tx_dropped = 0, ring = 0;
+@@ -250,8 +257,8 @@ gve_get_ethtool_stats(struct net_device
+       data[i++] = rx_bytes;
+       data[i++] = tx_bytes;
+       /* total rx dropped packets */
+-      data[i++] = rx_skb_alloc_fail + rx_buf_alloc_fail +
+-                  rx_desc_err_dropped_pkt;
++      data[i++] = rx_skb_alloc_fail + rx_desc_err_dropped_pkt +
++                  xdp_tx_errors + xdp_redirect_errors;
+       data[i++] = tx_dropped;
+       data[i++] = priv->tx_timeo_cnt;
+       data[i++] = rx_skb_alloc_fail;
+@@ -330,6 +337,9 @@ gve_get_ethtool_stats(struct net_device
+                               tmp_rx_buf_alloc_fail = rx->rx_buf_alloc_fail;
+                               tmp_rx_desc_err_dropped_pkt =
+                                       rx->rx_desc_err_dropped_pkt;
++                              tmp_xdp_tx_errors = rx->xdp_tx_errors;
++                              tmp_xdp_redirect_errors =
++                                      rx->xdp_redirect_errors;
+                       } while (u64_stats_fetch_retry(&priv->rx[ring].statss,
+                                                      start));
+                       data[i++] = tmp_rx_bytes;
+@@ -340,8 +350,9 @@ gve_get_ethtool_stats(struct net_device
+                       data[i++] = rx->rx_frag_alloc_cnt;
+                       /* rx dropped packets */
+                       data[i++] = tmp_rx_skb_alloc_fail +
+-                              tmp_rx_buf_alloc_fail +
+-                              tmp_rx_desc_err_dropped_pkt;
++                                  tmp_rx_desc_err_dropped_pkt +
++                                  tmp_xdp_tx_errors +
++                                  tmp_xdp_redirect_errors;
+                       data[i++] = rx->rx_copybreak_pkt;
+                       data[i++] = rx->rx_copied_pkt;
+                       /* stats from NIC */
diff --git a/queue-6.18/gve-fix-stats-report-corruption-on-queue-count-change.patch b/queue-6.18/gve-fix-stats-report-corruption-on-queue-count-change.patch
new file mode 100644 (file)
index 0000000..b12adfe
--- /dev/null
@@ -0,0 +1,131 @@
+From 7b9ebcce0296e104a0d82a6b09d68564806158ff Mon Sep 17 00:00:00 2001
+From: Debarghya Kundu <debarghyak@google.com>
+Date: Mon, 2 Feb 2026 19:39:24 +0000
+Subject: gve: Fix stats report corruption on queue count change
+
+From: Debarghya Kundu <debarghyak@google.com>
+
+commit 7b9ebcce0296e104a0d82a6b09d68564806158ff upstream.
+
+The driver and the NIC share a region in memory for stats reporting.
+The NIC calculates its offset into this region based on the total size
+of the stats region and the size of the NIC's stats.
+
+When the number of queues is changed, the driver's stats region is
+resized. If the queue count is increased, the NIC can write past
+the end of the allocated stats region, causing memory corruption.
+If the queue count is decreased, there is a gap between the driver
+and NIC stats, leading to incorrect stats reporting.
+
+This change fixes the issue by allocating the stats region at its maximum
+size, and by changing the driver's NIC-stats offset calculation to match
+the NIC's own calculation.
+
+Cc: stable@vger.kernel.org
+Fixes: 24aeb56f2d38 ("gve: Add Gvnic stats AQ command and ethtool show/set-priv-flags.")
+Signed-off-by: Debarghya Kundu <debarghyak@google.com>
+Reviewed-by: Joshua Washington <joshwash@google.com>
+Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
+Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
+Link: https://patch.msgid.link/20260202193925.3106272-2-hramamurthy@google.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/google/gve/gve_ethtool.c |   54 ++++++++++++++++----------
+ drivers/net/ethernet/google/gve/gve_main.c    |    4 -
+ 2 files changed, 36 insertions(+), 22 deletions(-)
+
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -156,7 +156,8 @@ gve_get_ethtool_stats(struct net_device
+       u64 rx_buf_alloc_fail, rx_desc_err_dropped_pkt, rx_hsplit_unsplit_pkt,
+               rx_pkts, rx_hsplit_pkt, rx_skb_alloc_fail, rx_bytes, tx_pkts, tx_bytes,
+               tx_dropped;
+-      int stats_idx, base_stats_idx, max_stats_idx;
++      int rx_base_stats_idx, max_rx_stats_idx, max_tx_stats_idx;
++      int stats_idx, stats_region_len, nic_stats_len;
+       struct stats *report_stats;
+       int *rx_qid_to_stats_idx;
+       int *tx_qid_to_stats_idx;
+@@ -265,20 +266,38 @@ gve_get_ethtool_stats(struct net_device
+       data[i++] = priv->stats_report_trigger_cnt;
+       i = GVE_MAIN_STATS_LEN;
+-      /* For rx cross-reporting stats, start from nic rx stats in report */
+-      base_stats_idx = GVE_TX_STATS_REPORT_NUM * num_tx_queues +
+-              GVE_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues;
+-      /* The boundary between driver stats and NIC stats shifts if there are
+-       * stopped queues.
+-       */
+-      base_stats_idx += NIC_RX_STATS_REPORT_NUM * num_stopped_rxqs +
+-              NIC_TX_STATS_REPORT_NUM * num_stopped_txqs;
+-      max_stats_idx = NIC_RX_STATS_REPORT_NUM *
+-              (priv->rx_cfg.num_queues - num_stopped_rxqs) +
+-              base_stats_idx;
++      rx_base_stats_idx = 0;
++      max_rx_stats_idx = 0;
++      max_tx_stats_idx = 0;
++      stats_region_len = priv->stats_report_len -
++                              sizeof(struct gve_stats_report);
++      nic_stats_len = (NIC_RX_STATS_REPORT_NUM * priv->rx_cfg.num_queues +
++              NIC_TX_STATS_REPORT_NUM * num_tx_queues) * sizeof(struct stats);
++      if (unlikely((stats_region_len -
++                              nic_stats_len) % sizeof(struct stats))) {
++              net_err_ratelimited("Starting index of NIC stats should be multiple of stats size");
++      } else {
++              /* For rx cross-reporting stats,
++               * start from nic rx stats in report
++               */
++              rx_base_stats_idx = (stats_region_len - nic_stats_len) /
++                                                      sizeof(struct stats);
++              /* The boundary between driver stats and NIC stats
++               * shifts if there are stopped queues
++               */
++              rx_base_stats_idx += NIC_RX_STATS_REPORT_NUM *
++                      num_stopped_rxqs + NIC_TX_STATS_REPORT_NUM *
++                      num_stopped_txqs;
++              max_rx_stats_idx = NIC_RX_STATS_REPORT_NUM *
++                      (priv->rx_cfg.num_queues - num_stopped_rxqs) +
++                      rx_base_stats_idx;
++              max_tx_stats_idx = NIC_TX_STATS_REPORT_NUM *
++                      (num_tx_queues - num_stopped_txqs) +
++                      max_rx_stats_idx;
++      }
+       /* Preprocess the stats report for rx, map queue id to start index */
+       skip_nic_stats = false;
+-      for (stats_idx = base_stats_idx; stats_idx < max_stats_idx;
++      for (stats_idx = rx_base_stats_idx; stats_idx < max_rx_stats_idx;
+               stats_idx += NIC_RX_STATS_REPORT_NUM) {
+               u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name);
+               u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);
+@@ -354,14 +373,9 @@ gve_get_ethtool_stats(struct net_device
+               i += priv->rx_cfg.num_queues * NUM_GVE_RX_CNTS;
+       }
+-      /* For tx cross-reporting stats, start from nic tx stats in report */
+-      base_stats_idx = max_stats_idx;
+-      max_stats_idx = NIC_TX_STATS_REPORT_NUM *
+-              (num_tx_queues - num_stopped_txqs) +
+-              max_stats_idx;
+-      /* Preprocess the stats report for tx, map queue id to start index */
+       skip_nic_stats = false;
+-      for (stats_idx = base_stats_idx; stats_idx < max_stats_idx;
++      /* NIC TX stats start right after NIC RX stats */
++      for (stats_idx = max_rx_stats_idx; stats_idx < max_tx_stats_idx;
+               stats_idx += NIC_TX_STATS_REPORT_NUM) {
+               u32 stat_name = be32_to_cpu(report_stats[stats_idx].stat_name);
+               u32 queue_id = be32_to_cpu(report_stats[stats_idx].queue_id);
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -283,9 +283,9 @@ static int gve_alloc_stats_report(struct
+       int tx_stats_num, rx_stats_num;
+       tx_stats_num = (GVE_TX_STATS_REPORT_NUM + NIC_TX_STATS_REPORT_NUM) *
+-                     gve_num_tx_queues(priv);
++                              priv->tx_cfg.max_queues;
+       rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
+-                     priv->rx_cfg.num_queues;
++                              priv->rx_cfg.max_queues;
+       priv->stats_report_len = struct_size(priv->stats_report, stats,
+                                            size_add(tx_stats_num, rx_stats_num));
+       priv->stats_report =
diff --git a/queue-6.18/kvm-x86-explicitly-configure-supported-xss-from-svm-vmx-_set_cpu_caps.patch b/queue-6.18/kvm-x86-explicitly-configure-supported-xss-from-svm-vmx-_set_cpu_caps.patch
new file mode 100644 (file)
index 0000000..64599b2
--- /dev/null
@@ -0,0 +1,122 @@
+From f8ade833b733ae0b72e87ac6d2202a1afbe3eb4a Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Tue, 27 Jan 2026 17:43:08 -0800
+Subject: KVM: x86: Explicitly configure supported XSS from {svm,vmx}_set_cpu_caps()
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit f8ade833b733ae0b72e87ac6d2202a1afbe3eb4a upstream.
+
+Explicitly configure KVM's supported XSS as part of each vendor's setup
+flow to fix a bug where clearing SHSTK and IBT in kvm_cpu_caps, e.g. due
+to lack of CET XFEATURE support, makes kvm-intel.ko unloadable when nested
+VMX is enabled, i.e. when nested=1.  The late clearing results in
+nested_vmx_setup_{entry,exit}_ctls() clearing VM_{ENTRY,EXIT}_LOAD_CET_STATE
+when nested_vmx_setup_ctls_msrs() runs during the CPU compatibility checks,
+ultimately leading to a mismatched VMCS config due to the reference config
+having the CET bits set, but every CPU's "local" config having the bits
+cleared.
+
+Note, kvm_caps.supported_{xcr0,xss} are unconditionally initialized by
+kvm_x86_vendor_init(), before calling into vendor code, and not referenced
+between ops->hardware_setup() and their current/old location.
+
+Fixes: 69cc3e886582 ("KVM: x86: Add XSS support for CET_KERNEL and CET_USER")
+Cc: stable@vger.kernel.org
+Cc: Mathias Krause <minipli@grsecurity.net>
+Cc: John Allen <john.allen@amd.com>
+Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
+Cc: Chao Gao <chao.gao@intel.com>
+Cc: Binbin Wu <binbin.wu@linux.intel.com>
+Cc: Xiaoyao Li <xiaoyao.li@intel.com>
+Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
+Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
+Link: https://patch.msgid.link/20260128014310.3255561-2-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/svm/svm.c |    2 ++
+ arch/x86/kvm/vmx/vmx.c |    2 ++
+ arch/x86/kvm/x86.c     |   30 +++++++++++++++++-------------
+ arch/x86/kvm/x86.h     |    2 ++
+ 4 files changed, 23 insertions(+), 13 deletions(-)
+
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -5285,6 +5285,8 @@ static __init void svm_set_cpu_caps(void
+        */
+       kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT);
+       kvm_cpu_cap_clear(X86_FEATURE_MSR_IMM);
++
++      kvm_setup_xss_caps();
+ }
+ static __init int svm_hardware_setup(void)
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -8021,6 +8021,8 @@ static __init void vmx_set_cpu_caps(void
+               kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
+               kvm_cpu_cap_clear(X86_FEATURE_IBT);
+       }
++
++      kvm_setup_xss_caps();
+ }
+ static bool vmx_is_io_intercepted(struct kvm_vcpu *vcpu,
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9954,6 +9954,23 @@ static struct notifier_block pvclock_gto
+ };
+ #endif
++void kvm_setup_xss_caps(void)
++{
++      if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
++              kvm_caps.supported_xss = 0;
++
++      if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
++          !kvm_cpu_cap_has(X86_FEATURE_IBT))
++              kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
++
++      if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
++              kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
++              kvm_cpu_cap_clear(X86_FEATURE_IBT);
++              kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
++      }
++}
++EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_setup_xss_caps);
++
+ static inline void kvm_ops_update(struct kvm_x86_init_ops *ops)
+ {
+       memcpy(&kvm_x86_ops, ops->runtime_ops, sizeof(kvm_x86_ops));
+@@ -10132,19 +10149,6 @@ int kvm_x86_vendor_init(struct kvm_x86_i
+       if (!tdp_enabled)
+               kvm_caps.supported_quirks &= ~KVM_X86_QUIRK_IGNORE_GUEST_PAT;
+-      if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
+-              kvm_caps.supported_xss = 0;
+-
+-      if (!kvm_cpu_cap_has(X86_FEATURE_SHSTK) &&
+-          !kvm_cpu_cap_has(X86_FEATURE_IBT))
+-              kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
+-
+-      if ((kvm_caps.supported_xss & XFEATURE_MASK_CET_ALL) != XFEATURE_MASK_CET_ALL) {
+-              kvm_cpu_cap_clear(X86_FEATURE_SHSTK);
+-              kvm_cpu_cap_clear(X86_FEATURE_IBT);
+-              kvm_caps.supported_xss &= ~XFEATURE_MASK_CET_ALL;
+-      }
+-
+       if (kvm_caps.has_tsc_control) {
+               /*
+                * Make sure the user can only configure tsc_khz values that
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -457,6 +457,8 @@ extern struct kvm_host_values kvm_host;
+ extern bool enable_pmu;
++void kvm_setup_xss_caps(void);
++
+ /*
+  * Get a filtered version of KVM's supported XCR0 that strips out dynamic
+  * features for which the current process doesn't (yet) have permission to use.
diff --git a/queue-6.18/mm-shmem-prevent-infinite-loop-on-truncate-race.patch b/queue-6.18/mm-shmem-prevent-infinite-loop-on-truncate-race.patch
new file mode 100644 (file)
index 0000000..30b4bc5
--- /dev/null
@@ -0,0 +1,84 @@
+From 2030dddf95451b4e7a389f052091e7c4b7b274c6 Mon Sep 17 00:00:00 2001
+From: Kairui Song <kasong@tencent.com>
+Date: Thu, 29 Jan 2026 00:19:23 +0800
+Subject: mm, shmem: prevent infinite loop on truncate race
+
+From: Kairui Song <kasong@tencent.com>
+
+commit 2030dddf95451b4e7a389f052091e7c4b7b274c6 upstream.
+
+When truncating a large swap entry, shmem_free_swap() returns 0 when the
+entry's index doesn't match the given index due to lookup alignment.  The
+failure fallback path checks if the entry crosses the end border and
+aborts when it happens, so truncate won't erase an unexpected entry or
+range.  But one scenario was ignored.
+
+When `index` points to the middle of a large swap entry, and the large
+swap entry doesn't go across the end border, find_get_entries() will
+return that large swap entry as the first item in the batch with
+`indices[0]` equal to `index`.  The entry's base index will be smaller
+than `indices[0]`, so shmem_free_swap() will fail and return 0 due to the
+"base < index" check.  The code will then call shmem_confirm_swap(), get
+the order, check if it crosses the END boundary (which it doesn't), and
+retry with the same index.
+
+The next iteration will find the same entry again at the same index with
+the same indices, leading to an infinite loop.
+
+Fix this by retrying with a round-down index, and abort if the index is
+smaller than the truncate range.
+
+Link: https://lkml.kernel.org/r/aXo6ltB5iqAKJzY8@KASONG-MC4
+Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
+Fixes: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split")
+Signed-off-by: Kairui Song <kasong@tencent.com>
+Reported-by: Chris Mason <clm@meta.com>
+Closes: https://lore.kernel.org/linux-mm/20260128130336.727049-1-clm@meta.com/
+Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
+Cc: Baoquan He <bhe@redhat.com>
+Cc: Barry Song <baohua@kernel.org>
+Cc: Chris Li <chrisl@kernel.org>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Kemeng Shi <shikemeng@huaweicloud.com>
+Cc: Nhat Pham <nphamcs@gmail.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c |   23 ++++++++++++++---------
+ 1 file changed, 14 insertions(+), 9 deletions(-)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1193,17 +1193,22 @@ whole_folios:
+                               swaps_freed = shmem_free_swap(mapping, indices[i],
+                                                             end - 1, folio);
+                               if (!swaps_freed) {
+-                                      /*
+-                                       * If found a large swap entry cross the end border,
+-                                       * skip it as the truncate_inode_partial_folio above
+-                                       * should have at least zerod its content once.
+-                                       */
++                                      pgoff_t base = indices[i];
++
+                                       order = shmem_confirm_swap(mapping, indices[i],
+                                                                  radix_to_swp_entry(folio));
+-                                      if (order > 0 && indices[i] + (1 << order) > end)
+-                                              continue;
+-                                      /* Swap was replaced by page: retry */
+-                                      index = indices[i];
++                                      /*
++                                       * If found a large swap entry cross the end or start
++                                       * border, skip it as the truncate_inode_partial_folio
++                                       * above should have at least zerod its content once.
++                                       */
++                                      if (order > 0) {
++                                              base = round_down(base, 1 << order);
++                                              if (base < start || base + (1 << order) > end)
++                                                      continue;
++                                      }
++                                      /* Swap was replaced by page or extended, retry */
++                                      index = base;
+                                       break;
+                               }
+                               nr_swaps_freed += swaps_freed;
diff --git a/queue-6.18/mm-slab-add-alloc_tagging_slab_free_hook-for-memcg_alloc_abort_single.patch b/queue-6.18/mm-slab-add-alloc_tagging_slab_free_hook-for-memcg_alloc_abort_single.patch
new file mode 100644 (file)
index 0000000..63c53b3
--- /dev/null
@@ -0,0 +1,99 @@
+From e6c53ead2d8fa73206e0a63e9cd9aea6bc929837 Mon Sep 17 00:00:00 2001
+From: Hao Ge <hao.ge@linux.dev>
+Date: Wed, 4 Feb 2026 18:14:01 +0800
+Subject: mm/slab: Add alloc_tagging_slab_free_hook for memcg_alloc_abort_single
+
+From: Hao Ge <hao.ge@linux.dev>
+
+commit e6c53ead2d8fa73206e0a63e9cd9aea6bc929837 upstream.
+
+When CONFIG_MEM_ALLOC_PROFILING_DEBUG is enabled, the following warning
+may be noticed:
+
+[ 3959.023862] ------------[ cut here ]------------
+[ 3959.023891] alloc_tag was not cleared (got tag for lib/xarray.c:378)
+[ 3959.023947] WARNING: ./include/linux/alloc_tag.h:155 at alloc_tag_add+0x128/0x178, CPU#6: mkfs.ntfs/113998
+[ 3959.023978] Modules linked in: dns_resolver tun brd overlay exfat btrfs blake2b libblake2b xor xor_neon raid6_pq loop sctp ip6_udp_tunnel udp_tunnel ext4 crc16 mbcache jbd2 rfkill sunrpc vfat fat sg fuse nfnetlink sr_mod virtio_gpu cdrom drm_client_lib virtio_dma_buf drm_shmem_helper drm_kms_helper ghash_ce drm sm4 backlight virtio_net net_failover virtio_scsi failover virtio_console virtio_blk virtio_mmio dm_mirror dm_region_hash dm_log dm_multipath dm_mod i2c_dev aes_neon_bs aes_ce_blk [last unloaded: hwpoison_inject]
+[ 3959.024170] CPU: 6 UID: 0 PID: 113998 Comm: mkfs.ntfs Kdump: loaded Tainted: G        W           6.19.0-rc7+ #7 PREEMPT(voluntary)
+[ 3959.024182] Tainted: [W]=WARN
+[ 3959.024186] Hardware name: QEMU KVM Virtual Machine, BIOS unknown 2/2/2022
+[ 3959.024192] pstate: 604000c5 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
+[ 3959.024199] pc : alloc_tag_add+0x128/0x178
+[ 3959.024207] lr : alloc_tag_add+0x128/0x178
+[ 3959.024214] sp : ffff80008b696d60
+[ 3959.024219] x29: ffff80008b696d60 x28: 0000000000000000 x27: 0000000000000240
+[ 3959.024232] x26: 0000000000000000 x25: 0000000000000240 x24: ffff800085d17860
+[ 3959.024245] x23: 0000000000402800 x22: ffff0000c0012dc0 x21: 00000000000002d0
+[ 3959.024257] x20: ffff0000e6ef3318 x19: ffff800085ae0410 x18: 0000000000000000
+[ 3959.024269] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
+[ 3959.024281] x14: 0000000000000000 x13: 0000000000000001 x12: ffff600064101293
+[ 3959.024292] x11: 1fffe00064101292 x10: ffff600064101292 x9 : dfff800000000000
+[ 3959.024305] x8 : 00009fff9befed6e x7 : ffff000320809493 x6 : 0000000000000001
+[ 3959.024316] x5 : ffff000320809490 x4 : ffff600064101293 x3 : ffff800080691838
+[ 3959.024328] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff0000d5bcd640
+[ 3959.024340] Call trace:
+[ 3959.024346]  alloc_tag_add+0x128/0x178 (P)
+[ 3959.024355]  __alloc_tagging_slab_alloc_hook+0x11c/0x1a8
+[ 3959.024362]  kmem_cache_alloc_lru_noprof+0x1b8/0x5e8
+[ 3959.024369]  xas_alloc+0x304/0x4f0
+[ 3959.024381]  xas_create+0x1e0/0x4a0
+[ 3959.024388]  xas_store+0x68/0xda8
+[ 3959.024395]  __filemap_add_folio+0x5b0/0xbd8
+[ 3959.024409]  filemap_add_folio+0x16c/0x7e0
+[ 3959.024416]  __filemap_get_folio_mpol+0x2dc/0x9e8
+[ 3959.024424]  iomap_get_folio+0xfc/0x180
+[ 3959.024435]  __iomap_get_folio+0x2f8/0x4b8
+[ 3959.024441]  iomap_write_begin+0x198/0xc18
+[ 3959.024448]  iomap_write_iter+0x2ec/0x8f8
+[ 3959.024454]  iomap_file_buffered_write+0x19c/0x290
+[ 3959.024461]  blkdev_write_iter+0x38c/0x978
+[ 3959.024470]  vfs_write+0x4d4/0x928
+[ 3959.024482]  ksys_write+0xfc/0x1f8
+[ 3959.024489]  __arm64_sys_write+0x74/0xb0
+[ 3959.024496]  invoke_syscall+0xd4/0x258
+[ 3959.024507]  el0_svc_common.constprop.0+0xb4/0x240
+[ 3959.024514]  do_el0_svc+0x48/0x68
+[ 3959.024520]  el0_svc+0x40/0xf8
+[ 3959.024526]  el0t_64_sync_handler+0xa0/0xe8
+[ 3959.024533]  el0t_64_sync+0x1ac/0x1b0
+[ 3959.024540] ---[ end trace 0000000000000000 ]---
+
+When __memcg_slab_post_alloc_hook() fails, there are two different
+free paths depending on whether size == 1 or size != 1. In the
+kmem_cache_free_bulk() path, we do call alloc_tagging_slab_free_hook().
+However, in memcg_alloc_abort_single() we don't, so the above warning
+is triggered on the next allocation.
+
+Therefore, add alloc_tagging_slab_free_hook() to the
+memcg_alloc_abort_single() path.
+
+Fixes: 9f9796b413d3 ("mm, slab: move memcg charging to post-alloc hook")
+Cc: stable@vger.kernel.org
+Suggested-by: Hao Li <hao.li@linux.dev>
+Signed-off-by: Hao Ge <hao.ge@linux.dev>
+Reviewed-by: Hao Li <hao.li@linux.dev>
+Reviewed-by: Suren Baghdasaryan <surenb@google.com>
+Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
+Link: https://patch.msgid.link/20260204101401.202762-1-hao.ge@linux.dev
+Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/slub.c |    6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -6667,8 +6667,12 @@ void slab_free(struct kmem_cache *s, str
+ static noinline
+ void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
+ {
++      struct slab *slab = virt_to_slab(object);
++
++      alloc_tagging_slab_free_hook(s, slab, &object, 1);
++
+       if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
+-              do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
++              do_slab_free(s, slab, object, object, 1, _RET_IP_);
+ }
+ #endif
diff --git a/queue-6.18/net-cpsw-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch b/queue-6.18/net-cpsw-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch
new file mode 100644 (file)
index 0000000..84c5cca
--- /dev/null
@@ -0,0 +1,162 @@
+From 0b8c878d117319f2be34c8391a77e0f4d5c94d79 Mon Sep 17 00:00:00 2001
+From: Kevin Hao <haokexin@gmail.com>
+Date: Tue, 3 Feb 2026 10:18:31 +0800
+Subject: net: cpsw: Execute ndo_set_rx_mode callback in a work queue
+
+From: Kevin Hao <haokexin@gmail.com>
+
+commit 0b8c878d117319f2be34c8391a77e0f4d5c94d79 upstream.
+
+Commit 1767bb2d47b7 ("ipv6: mcast: Don't hold RTNL for
+IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP.") removed the RTNL lock for
+IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP operations. However, this
+change triggered the following call trace on my BeagleBone Black board:
+  WARNING: net/8021q/vlan_core.c:236 at vlan_for_each+0x120/0x124, CPU#0: rpcbind/481
+  RTNL: assertion failed at net/8021q/vlan_core.c (236)
+  Modules linked in:
+  CPU: 0 UID: 997 PID: 481 Comm: rpcbind Not tainted 6.19.0-rc7-next-20260130-yocto-standard+ #35 PREEMPT
+  Hardware name: Generic AM33XX (Flattened Device Tree)
+  Call trace:
+   unwind_backtrace from show_stack+0x28/0x2c
+   show_stack from dump_stack_lvl+0x30/0x38
+   dump_stack_lvl from __warn+0xb8/0x11c
+   __warn from warn_slowpath_fmt+0x130/0x194
+   warn_slowpath_fmt from vlan_for_each+0x120/0x124
+   vlan_for_each from cpsw_add_mc_addr+0x54/0x98
+   cpsw_add_mc_addr from __hw_addr_ref_sync_dev+0xc4/0xec
+   __hw_addr_ref_sync_dev from __dev_mc_add+0x78/0x88
+   __dev_mc_add from igmp6_group_added+0x84/0xec
+   igmp6_group_added from __ipv6_dev_mc_inc+0x1fc/0x2f0
+   __ipv6_dev_mc_inc from __ipv6_sock_mc_join+0x124/0x1b4
+   __ipv6_sock_mc_join from do_ipv6_setsockopt+0x84c/0x1168
+   do_ipv6_setsockopt from ipv6_setsockopt+0x88/0xc8
+   ipv6_setsockopt from do_sock_setsockopt+0xe8/0x19c
+   do_sock_setsockopt from __sys_setsockopt+0x84/0xac
+   __sys_setsockopt from ret_fast_syscall+0x0/0x54
+
+This trace occurs because vlan_for_each() is called within
+cpsw_ndo_set_rx_mode(), which expects the RTNL lock to be held.
+Since modifying vlan_for_each() to operate without the RTNL lock is not
+straightforward, and because ndo_set_rx_mode() is invoked both with and
+without the RTNL lock across different code paths, simply adding
+rtnl_lock() in cpsw_ndo_set_rx_mode() is not a viable solution.
+
+To resolve this issue, we opt to execute the actual processing within
+a work queue, following the approach used by the icssg-prueth driver.
+
+Please note: To reproduce this issue, I manually reverted the changes to
+am335x-bone-common.dtsi from commit c477358e66a3 ("ARM: dts: am335x-bone:
+switch to new cpsw switch drv") in order to revert to the legacy cpsw
+driver.
+
+Fixes: 1767bb2d47b7 ("ipv6: mcast: Don't hold RTNL for IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP.")
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260203-bbb-v5-2-ea0ea217a85c@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/ti/cpsw.c | 41 +++++++++++++++++++++++++++++-----
+ 1 file changed, 35 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index 54c24cd3d3be..b0e18bdc2c85 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -305,12 +305,19 @@ static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num)
+       return 0;
+ }
+-static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
++static void cpsw_ndo_set_rx_mode_work(struct work_struct *work)
+ {
+-      struct cpsw_priv *priv = netdev_priv(ndev);
++      struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work);
+       struct cpsw_common *cpsw = priv->cpsw;
++      struct net_device *ndev = priv->ndev;
+       int slave_port = -1;
++      rtnl_lock();
++      if (!netif_running(ndev))
++              goto unlock_rtnl;
++
++      netif_addr_lock_bh(ndev);
++
+       if (cpsw->data.dual_emac)
+               slave_port = priv->emac_port + 1;
+@@ -318,7 +325,7 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+               /* Enable promiscuous mode */
+               cpsw_set_promiscious(ndev, true);
+               cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, slave_port);
+-              return;
++              goto unlock_addr;
+       } else {
+               /* Disable promiscuous mode */
+               cpsw_set_promiscious(ndev, false);
+@@ -331,6 +338,18 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+       /* add/remove mcast address either for real netdev or for vlan */
+       __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr,
+                              cpsw_del_mc_addr);
++
++unlock_addr:
++      netif_addr_unlock_bh(ndev);
++unlock_rtnl:
++      rtnl_unlock();
++}
++
++static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
++{
++      struct cpsw_priv *priv = netdev_priv(ndev);
++
++      schedule_work(&priv->rx_mode_work);
+ }
+ static unsigned int cpsw_rxbuf_total_len(unsigned int len)
+@@ -1472,6 +1491,7 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv)
+       priv_sl2->ndev = ndev;
+       priv_sl2->dev  = &ndev->dev;
+       priv_sl2->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
++      INIT_WORK(&priv_sl2->rx_mode_work, cpsw_ndo_set_rx_mode_work);
+       if (is_valid_ether_addr(data->slave_data[1].mac_addr)) {
+               memcpy(priv_sl2->mac_addr, data->slave_data[1].mac_addr,
+@@ -1653,6 +1673,7 @@ static int cpsw_probe(struct platform_device *pdev)
+       priv->dev  = dev;
+       priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
+       priv->emac_port = 0;
++      INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work);
+       if (is_valid_ether_addr(data->slave_data[0].mac_addr)) {
+               memcpy(priv->mac_addr, data->slave_data[0].mac_addr, ETH_ALEN);
+@@ -1758,6 +1779,8 @@ static int cpsw_probe(struct platform_device *pdev)
+ static void cpsw_remove(struct platform_device *pdev)
+ {
+       struct cpsw_common *cpsw = platform_get_drvdata(pdev);
++      struct net_device *ndev;
++      struct cpsw_priv *priv;
+       int i, ret;
+       ret = pm_runtime_resume_and_get(&pdev->dev);
+@@ -1770,9 +1793,15 @@ static void cpsw_remove(struct platform_device *pdev)
+               return;
+       }
+-      for (i = 0; i < cpsw->data.slaves; i++)
+-              if (cpsw->slaves[i].ndev)
+-                      unregister_netdev(cpsw->slaves[i].ndev);
++      for (i = 0; i < cpsw->data.slaves; i++) {
++              ndev = cpsw->slaves[i].ndev;
++              if (!ndev)
++                      continue;
++
++              priv = netdev_priv(ndev);
++              unregister_netdev(ndev);
++              disable_work_sync(&priv->rx_mode_work);
++      }
+       cpts_release(cpsw->cpts);
+       cpdma_ctlr_destroy(cpsw->dma);
+-- 
+2.53.0
+
diff --git a/queue-6.18/net-cpsw_new-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch b/queue-6.18/net-cpsw_new-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch
new file mode 100644 (file)
index 0000000..e017d4b
--- /dev/null
@@ -0,0 +1,150 @@
+From c0b5dc73a38f954e780f93a549b8fe225235c07a Mon Sep 17 00:00:00 2001
+From: Kevin Hao <haokexin@gmail.com>
+Date: Tue, 3 Feb 2026 10:18:30 +0800
+Subject: net: cpsw_new: Execute ndo_set_rx_mode callback in a work queue
+
+From: Kevin Hao <haokexin@gmail.com>
+
+commit c0b5dc73a38f954e780f93a549b8fe225235c07a upstream.
+
+Commit 1767bb2d47b7 ("ipv6: mcast: Don't hold RTNL for
+IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP.") removed the RTNL lock for
+IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP operations. However, this
+change triggered the following call trace on my BeagleBone Black board:
+  WARNING: net/8021q/vlan_core.c:236 at vlan_for_each+0x120/0x124, CPU#0: rpcbind/496
+  RTNL: assertion failed at net/8021q/vlan_core.c (236)
+  Modules linked in:
+  CPU: 0 UID: 997 PID: 496 Comm: rpcbind Not tainted 6.19.0-rc6-next-20260122-yocto-standard+ #8 PREEMPT
+  Hardware name: Generic AM33XX (Flattened Device Tree)
+  Call trace:
+   unwind_backtrace from show_stack+0x28/0x2c
+   show_stack from dump_stack_lvl+0x30/0x38
+   dump_stack_lvl from __warn+0xb8/0x11c
+   __warn from warn_slowpath_fmt+0x130/0x194
+   warn_slowpath_fmt from vlan_for_each+0x120/0x124
+   vlan_for_each from cpsw_add_mc_addr+0x54/0xd8
+   cpsw_add_mc_addr from __hw_addr_ref_sync_dev+0xc4/0xec
+   __hw_addr_ref_sync_dev from __dev_mc_add+0x78/0x88
+   __dev_mc_add from igmp6_group_added+0x84/0xec
+   igmp6_group_added from __ipv6_dev_mc_inc+0x1fc/0x2f0
+   __ipv6_dev_mc_inc from __ipv6_sock_mc_join+0x124/0x1b4
+   __ipv6_sock_mc_join from do_ipv6_setsockopt+0x84c/0x1168
+   do_ipv6_setsockopt from ipv6_setsockopt+0x88/0xc8
+   ipv6_setsockopt from do_sock_setsockopt+0xe8/0x19c
+   do_sock_setsockopt from __sys_setsockopt+0x84/0xac
+   __sys_setsockopt from ret_fast_syscall+0x0/0x5
+
+This trace occurs because vlan_for_each() is called within
+cpsw_ndo_set_rx_mode(), which expects the RTNL lock to be held.
+Since modifying vlan_for_each() to operate without the RTNL lock is not
+straightforward, and because ndo_set_rx_mode() is invoked both with and
+without the RTNL lock across different code paths, simply adding
+rtnl_lock() in cpsw_ndo_set_rx_mode() is not a viable solution.
+
+To resolve this issue, we opt to execute the actual processing within
+a work queue, following the approach used by the icssg-prueth driver.
+
+Fixes: 1767bb2d47b7 ("ipv6: mcast: Don't hold RTNL for IPV6_ADD_MEMBERSHIP and MCAST_JOIN_GROUP.")
+Signed-off-by: Kevin Hao <haokexin@gmail.com>
+Cc: stable@vger.kernel.org
+Link: https://patch.msgid.link/20260203-bbb-v5-1-ea0ea217a85c@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/ti/cpsw_new.c  | 34 ++++++++++++++++++++++++-----
+ drivers/net/ethernet/ti/cpsw_priv.h |  1 +
+ 2 files changed, 30 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
+index ab88d4c02cbd..21af0a10626a 100644
+--- a/drivers/net/ethernet/ti/cpsw_new.c
++++ b/drivers/net/ethernet/ti/cpsw_new.c
+@@ -248,16 +248,22 @@ static int cpsw_purge_all_mc(struct net_device *ndev, const u8 *addr, int num)
+       return 0;
+ }
+-static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
++static void cpsw_ndo_set_rx_mode_work(struct work_struct *work)
+ {
+-      struct cpsw_priv *priv = netdev_priv(ndev);
++      struct cpsw_priv *priv = container_of(work, struct cpsw_priv, rx_mode_work);
+       struct cpsw_common *cpsw = priv->cpsw;
++      struct net_device *ndev = priv->ndev;
++      rtnl_lock();
++      if (!netif_running(ndev))
++              goto unlock_rtnl;
++
++      netif_addr_lock_bh(ndev);
+       if (ndev->flags & IFF_PROMISC) {
+               /* Enable promiscuous mode */
+               cpsw_set_promiscious(ndev, true);
+               cpsw_ale_set_allmulti(cpsw->ale, IFF_ALLMULTI, priv->emac_port);
+-              return;
++              goto unlock_addr;
+       }
+       /* Disable promiscuous mode */
+@@ -270,6 +276,18 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
+       /* add/remove mcast address either for real netdev or for vlan */
+       __hw_addr_ref_sync_dev(&ndev->mc, ndev, cpsw_add_mc_addr,
+                              cpsw_del_mc_addr);
++
++unlock_addr:
++      netif_addr_unlock_bh(ndev);
++unlock_rtnl:
++      rtnl_unlock();
++}
++
++static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
++{
++      struct cpsw_priv *priv = netdev_priv(ndev);
++
++      schedule_work(&priv->rx_mode_work);
+ }
+ static unsigned int cpsw_rxbuf_total_len(unsigned int len)
+@@ -1398,6 +1416,7 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
+               priv->msg_enable = netif_msg_init(debug_level, CPSW_DEBUG);
+               priv->emac_port = i + 1;
+               priv->tx_packet_min = CPSW_MIN_PACKET_SIZE;
++              INIT_WORK(&priv->rx_mode_work, cpsw_ndo_set_rx_mode_work);
+               if (is_valid_ether_addr(slave_data->mac_addr)) {
+                       ether_addr_copy(priv->mac_addr, slave_data->mac_addr);
+@@ -1447,13 +1466,18 @@ static int cpsw_create_ports(struct cpsw_common *cpsw)
+ static void cpsw_unregister_ports(struct cpsw_common *cpsw)
+ {
++      struct net_device *ndev;
++      struct cpsw_priv *priv;
+       int i = 0;
+       for (i = 0; i < cpsw->data.slaves; i++) {
+-              if (!cpsw->slaves[i].ndev)
++              ndev = cpsw->slaves[i].ndev;
++              if (!ndev)
+                       continue;
+-              unregister_netdev(cpsw->slaves[i].ndev);
++              priv = netdev_priv(ndev);
++              unregister_netdev(ndev);
++              disable_work_sync(&priv->rx_mode_work);
+       }
+ }
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
+index 91add8925e23..acb6181c5c9e 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.h
++++ b/drivers/net/ethernet/ti/cpsw_priv.h
+@@ -391,6 +391,7 @@ struct cpsw_priv {
+       u32 tx_packet_min;
+       struct cpsw_ale_ratelimit ale_bc_ratelimit;
+       struct cpsw_ale_ratelimit ale_mc_ratelimit;
++      struct work_struct rx_mode_work;
+ };
+ #define ndev_to_cpsw(ndev) (((struct cpsw_priv *)netdev_priv(ndev))->cpsw)
+-- 
+2.53.0
+
diff --git a/queue-6.18/net-spacemit-k1-emac-fix-jumbo-frame-support.patch b/queue-6.18/net-spacemit-k1-emac-fix-jumbo-frame-support.patch
new file mode 100644 (file)
index 0000000..d1fe6c1
--- /dev/null
@@ -0,0 +1,105 @@
+From 3125fc17016945b11e9725c6aff30ff3326fd58f Mon Sep 17 00:00:00 2001
+From: Tomas Hlavacek <tmshlvck@gmail.com>
+Date: Fri, 30 Jan 2026 11:23:01 +0100
+Subject: net: spacemit: k1-emac: fix jumbo frame support
+
+From: Tomas Hlavacek <tmshlvck@gmail.com>
+
+commit 3125fc17016945b11e9725c6aff30ff3326fd58f upstream.
+
+The driver never programs the MAC frame size and jabber registers,
+causing the hardware to reject frames larger than the default 1518
+bytes even when larger DMA buffers are allocated.
+
+Program MAC_MAXIMUM_FRAME_SIZE, MAC_TRANSMIT_JABBER_SIZE, and
+MAC_RECEIVE_JABBER_SIZE based on the configured MTU. Also fix the
+maximum buffer size from 4096 to 4095, since the descriptor buffer
+size field is only 12 bits. Account for double VLAN tags in frame
+size calculations.
+
+Fixes: bfec6d7f2001 ("net: spacemit: Add K1 Ethernet MAC")
+Cc: stable@vger.kernel.org
+Signed-off-by: Tomas Hlavacek <tmshlvck@gmail.com>
+Link: https://patch.msgid.link/20260130102301.477514-1-tmshlvck@gmail.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/spacemit/k1_emac.c | 21 +++++++++++++++------
+ 1 file changed, 15 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/net/ethernet/spacemit/k1_emac.c b/drivers/net/ethernet/spacemit/k1_emac.c
+index 88e9424d2d51..b49c4708bf9e 100644
+--- a/drivers/net/ethernet/spacemit/k1_emac.c
++++ b/drivers/net/ethernet/spacemit/k1_emac.c
+@@ -12,6 +12,7 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/etherdevice.h>
+ #include <linux/ethtool.h>
++#include <linux/if_vlan.h>
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
+@@ -38,7 +39,7 @@
+ #define EMAC_DEFAULT_BUFSIZE          1536
+ #define EMAC_RX_BUF_2K                        2048
+-#define EMAC_RX_BUF_4K                        4096
++#define EMAC_RX_BUF_MAX                       FIELD_MAX(RX_DESC_1_BUFFER_SIZE_1_MASK)
+ /* Tuning parameters from SpacemiT */
+ #define EMAC_TX_FRAMES                        64
+@@ -202,8 +203,7 @@ static void emac_init_hw(struct emac_priv *priv)
+ {
+       /* Destination address for 802.3x Ethernet flow control */
+       u8 fc_dest_addr[ETH_ALEN] = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x01 };
+-
+-      u32 rxirq = 0, dma = 0;
++      u32 rxirq = 0, dma = 0, frame_sz;
+       regmap_set_bits(priv->regmap_apmu,
+                       priv->regmap_apmu_offset + APMU_EMAC_CTRL_REG,
+@@ -228,6 +228,15 @@ static void emac_init_hw(struct emac_priv *priv)
+               DEFAULT_TX_THRESHOLD);
+       emac_wr(priv, MAC_RECEIVE_PACKET_START_THRESHOLD, DEFAULT_RX_THRESHOLD);
++      /* Set maximum frame size and jabber size based on configured MTU,
++       * accounting for Ethernet header, double VLAN tags, and FCS.
++       */
++      frame_sz = priv->ndev->mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN;
++
++      emac_wr(priv, MAC_MAXIMUM_FRAME_SIZE, frame_sz);
++      emac_wr(priv, MAC_TRANSMIT_JABBER_SIZE, frame_sz);
++      emac_wr(priv, MAC_RECEIVE_JABBER_SIZE, frame_sz);
++
+       /* Configure flow control (enabled in emac_adjust_link() later) */
+       emac_set_mac_addr_reg(priv, fc_dest_addr, MAC_FC_SOURCE_ADDRESS_HIGH);
+       emac_wr(priv, MAC_FC_PAUSE_HIGH_THRESHOLD, DEFAULT_FC_FIFO_HIGH);
+@@ -924,14 +933,14 @@ static int emac_change_mtu(struct net_device *ndev, int mtu)
+               return -EBUSY;
+       }
+-      frame_len = mtu + ETH_HLEN + ETH_FCS_LEN;
++      frame_len = mtu + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN;
+       if (frame_len <= EMAC_DEFAULT_BUFSIZE)
+               priv->dma_buf_sz = EMAC_DEFAULT_BUFSIZE;
+       else if (frame_len <= EMAC_RX_BUF_2K)
+               priv->dma_buf_sz = EMAC_RX_BUF_2K;
+       else
+-              priv->dma_buf_sz = EMAC_RX_BUF_4K;
++              priv->dma_buf_sz = EMAC_RX_BUF_MAX;
+       ndev->mtu = mtu;
+@@ -2025,7 +2034,7 @@ static int emac_probe(struct platform_device *pdev)
+       ndev->hw_features = NETIF_F_SG;
+       ndev->features |= ndev->hw_features;
+-      ndev->max_mtu = EMAC_RX_BUF_4K - (ETH_HLEN + ETH_FCS_LEN);
++      ndev->max_mtu = EMAC_RX_BUF_MAX - (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN);
+       ndev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS;
+       priv = netdev_priv(ndev);
+-- 
+2.53.0
+
diff --git a/queue-6.18/nouveau-add-a-third-state-to-the-fini-handler.patch b/queue-6.18/nouveau-add-a-third-state-to-the-fini-handler.patch
new file mode 100644 (file)
index 0000000..6171b68
--- /dev/null
@@ -0,0 +1,963 @@
+From 8f8a4dce64013737701d13565cf6107f42b725ea Mon Sep 17 00:00:00 2001
+From: Dave Airlie <airlied@redhat.com>
+Date: Tue, 3 Feb 2026 15:21:12 +1000
+Subject: nouveau: add a third state to the fini handler.
+
+From: Dave Airlie <airlied@redhat.com>
+
+commit 8f8a4dce64013737701d13565cf6107f42b725ea upstream.
+
+This is just refactoring to allow the lower layers to distinguish
+between suspend and runtime suspend.
+
+GSP 570 needs to set a flag when the GPU is going into GCOFF. This
+flag, taken from the opengpu driver, is set whenever runtime suspend
+is entering GCOFF, but not on the normal suspend paths.
+
+This just refactors the code; a subsequent patch will use the
+information.
+
+Fixes: 53dac0623853 ("drm/nouveau/gsp: add support for 570.144")
+Cc: <stable@vger.kernel.org>
+Reviewed-by: Lyude Paul <lyude@redhat.com>
+Tested-by: Lyude Paul <lyude@redhat.com>
+Signed-off-by: Dave Airlie <airlied@redhat.com>
+Link: https://patch.msgid.link/20260203052431.2219998-3-airlied@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/nouveau/include/nvif/client.h             |    2 -
+ drivers/gpu/drm/nouveau/include/nvif/driver.h             |    2 -
+ drivers/gpu/drm/nouveau/include/nvkm/core/device.h        |    3 +
+ drivers/gpu/drm/nouveau/include/nvkm/core/engine.h        |    2 -
+ drivers/gpu/drm/nouveau/include/nvkm/core/object.h        |    5 +--
+ drivers/gpu/drm/nouveau/include/nvkm/core/oproxy.h        |    2 -
+ drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h        |    4 +-
+ drivers/gpu/drm/nouveau/include/nvkm/core/suspend_state.h |   11 +++++++
+ drivers/gpu/drm/nouveau/nouveau_drm.c                     |    2 -
+ drivers/gpu/drm/nouveau/nouveau_nvif.c                    |   10 +++++-
+ drivers/gpu/drm/nouveau/nvif/client.c                     |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/core/engine.c                |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/core/ioctl.c                 |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/core/object.c                |   20 ++++++++++--
+ drivers/gpu/drm/nouveau/nvkm/core/oproxy.c                |    2 -
+ drivers/gpu/drm/nouveau/nvkm/core/subdev.c                |   18 +++++++++--
+ drivers/gpu/drm/nouveau/nvkm/core/uevent.c                |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/ce/ga100.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/ce/priv.h             |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/device/base.c         |   22 ++++++++++----
+ drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c          |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/engine/device/priv.h         |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/device/user.c         |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c           |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/engine/disp/chan.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/falcon.c              |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/fifo/uchan.c          |    6 +--
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/base.c             |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/nv04.c             |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/nv10.c             |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.c             |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.h             |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/gr/nv40.c             |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/sec2/base.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/engine/xtensa.c              |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/bar/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/devinit/base.c        |    4 +-
+ drivers/gpu/drm/nouveau/nvkm/subdev/fault/base.c          |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/fault/user.c          |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/gpio/base.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/gh100.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h            |    8 ++---
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c     |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c           |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/instmem/base.c        |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c            |    2 -
+ drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c          |    6 +--
+ drivers/gpu/drm/nouveau/nvkm/subdev/timer/base.c          |    2 -
+ 56 files changed, 139 insertions(+), 84 deletions(-)
+ create mode 100644 drivers/gpu/drm/nouveau/include/nvkm/core/suspend_state.h
+
+--- a/drivers/gpu/drm/nouveau/include/nvif/client.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/client.h
+@@ -11,7 +11,7 @@ struct nvif_client {
+ int  nvif_client_ctor(struct nvif_client *parent, const char *name, struct nvif_client *);
+ void nvif_client_dtor(struct nvif_client *);
+-int  nvif_client_suspend(struct nvif_client *);
++int  nvif_client_suspend(struct nvif_client *, bool);
+ int  nvif_client_resume(struct nvif_client *);
+ /*XXX*/
+--- a/drivers/gpu/drm/nouveau/include/nvif/driver.h
++++ b/drivers/gpu/drm/nouveau/include/nvif/driver.h
+@@ -8,7 +8,7 @@ struct nvif_driver {
+       const char *name;
+       int (*init)(const char *name, u64 device, const char *cfg,
+                   const char *dbg, void **priv);
+-      int (*suspend)(void *priv);
++      int (*suspend)(void *priv, bool runtime);
+       int (*resume)(void *priv);
+       int (*ioctl)(void *priv, void *data, u32 size, void **hack);
+       void __iomem *(*map)(void *priv, u64 handle, u32 size);
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/device.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/device.h
+@@ -2,6 +2,7 @@
+ #ifndef __NVKM_DEVICE_H__
+ #define __NVKM_DEVICE_H__
+ #include <core/oclass.h>
++#include <core/suspend_state.h>
+ #include <core/intr.h>
+ enum nvkm_subdev_type;
+@@ -93,7 +94,7 @@ struct nvkm_device_func {
+       void *(*dtor)(struct nvkm_device *);
+       int (*preinit)(struct nvkm_device *);
+       int (*init)(struct nvkm_device *);
+-      void (*fini)(struct nvkm_device *, bool suspend);
++      void (*fini)(struct nvkm_device *, enum nvkm_suspend_state suspend);
+       int (*irq)(struct nvkm_device *);
+       resource_size_t (*resource_addr)(struct nvkm_device *, enum nvkm_bar_id);
+       resource_size_t (*resource_size)(struct nvkm_device *, enum nvkm_bar_id);
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/engine.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/engine.h
+@@ -20,7 +20,7 @@ struct nvkm_engine_func {
+       int (*oneinit)(struct nvkm_engine *);
+       int (*info)(struct nvkm_engine *, u64 mthd, u64 *data);
+       int (*init)(struct nvkm_engine *);
+-      int (*fini)(struct nvkm_engine *, bool suspend);
++      int (*fini)(struct nvkm_engine *, enum nvkm_suspend_state suspend);
+       int (*reset)(struct nvkm_engine *);
+       int (*nonstall)(struct nvkm_engine *);
+       void (*intr)(struct nvkm_engine *);
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/object.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/object.h
+@@ -2,6 +2,7 @@
+ #ifndef __NVKM_OBJECT_H__
+ #define __NVKM_OBJECT_H__
+ #include <core/oclass.h>
++#include <core/suspend_state.h>
+ struct nvkm_event;
+ struct nvkm_gpuobj;
+ struct nvkm_uevent;
+@@ -27,7 +28,7 @@ enum nvkm_object_map {
+ struct nvkm_object_func {
+       void *(*dtor)(struct nvkm_object *);
+       int (*init)(struct nvkm_object *);
+-      int (*fini)(struct nvkm_object *, bool suspend);
++      int (*fini)(struct nvkm_object *, enum nvkm_suspend_state suspend);
+       int (*mthd)(struct nvkm_object *, u32 mthd, void *data, u32 size);
+       int (*ntfy)(struct nvkm_object *, u32 mthd, struct nvkm_event **);
+       int (*map)(struct nvkm_object *, void *argv, u32 argc,
+@@ -49,7 +50,7 @@ int nvkm_object_new(const struct nvkm_oc
+ void nvkm_object_del(struct nvkm_object **);
+ void *nvkm_object_dtor(struct nvkm_object *);
+ int nvkm_object_init(struct nvkm_object *);
+-int nvkm_object_fini(struct nvkm_object *, bool suspend);
++int nvkm_object_fini(struct nvkm_object *, enum nvkm_suspend_state);
+ int nvkm_object_mthd(struct nvkm_object *, u32 mthd, void *data, u32 size);
+ int nvkm_object_ntfy(struct nvkm_object *, u32 mthd, struct nvkm_event **);
+ int nvkm_object_map(struct nvkm_object *, void *argv, u32 argc,
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/oproxy.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/oproxy.h
+@@ -13,7 +13,7 @@ struct nvkm_oproxy {
+ struct nvkm_oproxy_func {
+       void (*dtor[2])(struct nvkm_oproxy *);
+       int  (*init[2])(struct nvkm_oproxy *);
+-      int  (*fini[2])(struct nvkm_oproxy *, bool suspend);
++      int  (*fini[2])(struct nvkm_oproxy *, enum nvkm_suspend_state suspend);
+ };
+ void nvkm_oproxy_ctor(const struct nvkm_oproxy_func *,
+--- a/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/subdev.h
+@@ -40,7 +40,7 @@ struct nvkm_subdev_func {
+       int (*oneinit)(struct nvkm_subdev *);
+       int (*info)(struct nvkm_subdev *, u64 mthd, u64 *data);
+       int (*init)(struct nvkm_subdev *);
+-      int (*fini)(struct nvkm_subdev *, bool suspend);
++      int (*fini)(struct nvkm_subdev *, enum nvkm_suspend_state suspend);
+       void (*intr)(struct nvkm_subdev *);
+ };
+@@ -65,7 +65,7 @@ void nvkm_subdev_unref(struct nvkm_subde
+ int  nvkm_subdev_preinit(struct nvkm_subdev *);
+ int  nvkm_subdev_oneinit(struct nvkm_subdev *);
+ int  nvkm_subdev_init(struct nvkm_subdev *);
+-int  nvkm_subdev_fini(struct nvkm_subdev *, bool suspend);
++int  nvkm_subdev_fini(struct nvkm_subdev *, enum nvkm_suspend_state suspend);
+ int  nvkm_subdev_info(struct nvkm_subdev *, u64, u64 *);
+ void nvkm_subdev_intr(struct nvkm_subdev *);
+--- /dev/null
++++ b/drivers/gpu/drm/nouveau/include/nvkm/core/suspend_state.h
+@@ -0,0 +1,11 @@
++/* SPDX-License-Identifier: MIT */
++#ifndef __NVKM_SUSPEND_STATE_H__
++#define __NVKM_SUSPEND_STATE_H__
++
++enum nvkm_suspend_state {
++      NVKM_POWEROFF,
++      NVKM_SUSPEND,
++      NVKM_RUNTIME_SUSPEND,
++};
++
++#endif
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -983,7 +983,7 @@ nouveau_do_suspend(struct nouveau_drm *d
+       }
+       NV_DEBUG(drm, "suspending object tree...\n");
+-      ret = nvif_client_suspend(&drm->_client);
++      ret = nvif_client_suspend(&drm->_client, runtime);
+       if (ret)
+               goto fail_client;
+--- a/drivers/gpu/drm/nouveau/nouveau_nvif.c
++++ b/drivers/gpu/drm/nouveau/nouveau_nvif.c
+@@ -62,10 +62,16 @@ nvkm_client_resume(void *priv)
+ }
+ static int
+-nvkm_client_suspend(void *priv)
++nvkm_client_suspend(void *priv, bool runtime)
+ {
+       struct nvkm_client *client = priv;
+-      return nvkm_object_fini(&client->object, true);
++      enum nvkm_suspend_state state;
++
++      if (runtime)
++              state = NVKM_RUNTIME_SUSPEND;
++      else
++              state = NVKM_SUSPEND;
++      return nvkm_object_fini(&client->object, state);
+ }
+ static int
+--- a/drivers/gpu/drm/nouveau/nvif/client.c
++++ b/drivers/gpu/drm/nouveau/nvif/client.c
+@@ -30,9 +30,9 @@
+ #include <nvif/if0000.h>
+ int
+-nvif_client_suspend(struct nvif_client *client)
++nvif_client_suspend(struct nvif_client *client, bool runtime)
+ {
+-      return client->driver->suspend(client->object.priv);
++      return client->driver->suspend(client->object.priv, runtime);
+ }
+ int
+--- a/drivers/gpu/drm/nouveau/nvkm/core/engine.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/engine.c
+@@ -41,7 +41,7 @@ nvkm_engine_reset(struct nvkm_engine *en
+       if (engine->func->reset)
+               return engine->func->reset(engine);
+-      nvkm_subdev_fini(&engine->subdev, false);
++      nvkm_subdev_fini(&engine->subdev, NVKM_POWEROFF);
+       return nvkm_subdev_init(&engine->subdev);
+ }
+@@ -98,7 +98,7 @@ nvkm_engine_info(struct nvkm_subdev *sub
+ }
+ static int
+-nvkm_engine_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_engine_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_engine *engine = nvkm_engine(subdev);
+       if (engine->func->fini)
+--- a/drivers/gpu/drm/nouveau/nvkm/core/ioctl.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/ioctl.c
+@@ -141,7 +141,7 @@ nvkm_ioctl_new(struct nvkm_client *clien
+                       }
+                       ret = -EEXIST;
+               }
+-              nvkm_object_fini(object, false);
++              nvkm_object_fini(object, NVKM_POWEROFF);
+       }
+       nvkm_object_del(&object);
+@@ -160,7 +160,7 @@ nvkm_ioctl_del(struct nvkm_client *clien
+       nvif_ioctl(object, "delete size %d\n", size);
+       if (!(ret = nvif_unvers(ret, &data, &size, args->none))) {
+               nvif_ioctl(object, "delete\n");
+-              nvkm_object_fini(object, false);
++              nvkm_object_fini(object, NVKM_POWEROFF);
+               nvkm_object_del(&object);
+       }
+--- a/drivers/gpu/drm/nouveau/nvkm/core/object.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/object.c
+@@ -142,13 +142,25 @@ nvkm_object_bind(struct nvkm_object *obj
+ }
+ int
+-nvkm_object_fini(struct nvkm_object *object, bool suspend)
++nvkm_object_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+-      const char *action = suspend ? "suspend" : "fini";
++      const char *action;
+       struct nvkm_object *child;
+       s64 time;
+       int ret;
++      switch (suspend) {
++      case NVKM_POWEROFF:
++      default:
++              action = "fini";
++              break;
++      case NVKM_SUSPEND:
++              action = "suspend";
++              break;
++      case NVKM_RUNTIME_SUSPEND:
++              action = "runtime";
++              break;
++      }
+       nvif_debug(object, "%s children...\n", action);
+       time = ktime_to_us(ktime_get());
+       list_for_each_entry_reverse(child, &object->tree, head) {
+@@ -212,11 +224,11 @@ nvkm_object_init(struct nvkm_object *obj
+ fail_child:
+       list_for_each_entry_continue_reverse(child, &object->tree, head)
+-              nvkm_object_fini(child, false);
++              nvkm_object_fini(child, NVKM_POWEROFF);
+ fail:
+       nvif_error(object, "init failed with %d\n", ret);
+       if (object->func->fini)
+-              object->func->fini(object, false);
++              object->func->fini(object, NVKM_POWEROFF);
+       return ret;
+ }
+--- a/drivers/gpu/drm/nouveau/nvkm/core/oproxy.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/oproxy.c
+@@ -87,7 +87,7 @@ nvkm_oproxy_uevent(struct nvkm_object *o
+ }
+ static int
+-nvkm_oproxy_fini(struct nvkm_object *object, bool suspend)
++nvkm_oproxy_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_oproxy *oproxy = nvkm_oproxy(object);
+       int ret;
+--- a/drivers/gpu/drm/nouveau/nvkm/core/subdev.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/subdev.c
+@@ -51,12 +51,24 @@ nvkm_subdev_info(struct nvkm_subdev *sub
+ }
+ int
+-nvkm_subdev_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_subdev_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_device *device = subdev->device;
+-      const char *action = suspend ? "suspend" : subdev->use.enabled ? "fini" : "reset";
++      const char *action;
+       s64 time;
++      switch (suspend) {
++      case NVKM_POWEROFF:
++      default:
++              action = subdev->use.enabled ? "fini" : "reset";
++              break;
++      case NVKM_SUSPEND:
++              action = "suspend";
++              break;
++      case NVKM_RUNTIME_SUSPEND:
++              action = "runtime";
++              break;
++      }
+       nvkm_trace(subdev, "%s running...\n", action);
+       time = ktime_to_us(ktime_get());
+@@ -186,7 +198,7 @@ void
+ nvkm_subdev_unref(struct nvkm_subdev *subdev)
+ {
+       if (refcount_dec_and_mutex_lock(&subdev->use.refcount, &subdev->use.mutex)) {
+-              nvkm_subdev_fini(subdev, false);
++              nvkm_subdev_fini(subdev, NVKM_POWEROFF);
+               mutex_unlock(&subdev->use.mutex);
+       }
+ }
+--- a/drivers/gpu/drm/nouveau/nvkm/core/uevent.c
++++ b/drivers/gpu/drm/nouveau/nvkm/core/uevent.c
+@@ -73,7 +73,7 @@ nvkm_uevent_mthd(struct nvkm_object *obj
+ }
+ static int
+-nvkm_uevent_fini(struct nvkm_object *object, bool suspend)
++nvkm_uevent_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_uevent *uevent = nvkm_uevent(object);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/ce/ga100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/ce/ga100.c
+@@ -46,7 +46,7 @@ ga100_ce_nonstall(struct nvkm_engine *en
+ }
+ int
+-ga100_ce_fini(struct nvkm_engine *engine, bool suspend)
++ga100_ce_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       nvkm_inth_block(&engine->subdev.inth);
+       return 0;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/ce/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/ce/priv.h
+@@ -14,7 +14,7 @@ extern const struct nvkm_object_func gv1
+ int ga100_ce_oneinit(struct nvkm_engine *);
+ int ga100_ce_init(struct nvkm_engine *);
+-int ga100_ce_fini(struct nvkm_engine *, bool);
++int ga100_ce_fini(struct nvkm_engine *, enum nvkm_suspend_state);
+ int ga100_ce_nonstall(struct nvkm_engine *);
+ u32 gb202_ce_grce_mask(struct nvkm_device *);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -2935,13 +2935,25 @@ nvkm_device_engine(struct nvkm_device *d
+ }
+ int
+-nvkm_device_fini(struct nvkm_device *device, bool suspend)
++nvkm_device_fini(struct nvkm_device *device, enum nvkm_suspend_state suspend)
+ {
+-      const char *action = suspend ? "suspend" : "fini";
++      const char *action;
+       struct nvkm_subdev *subdev;
+       int ret;
+       s64 time;
++      switch (suspend) {
++      case NVKM_POWEROFF:
++      default:
++              action = "fini";
++              break;
++      case NVKM_SUSPEND:
++              action = "suspend";
++              break;
++      case NVKM_RUNTIME_SUSPEND:
++              action = "runtime";
++              break;
++      }
+       nvdev_trace(device, "%s running...\n", action);
+       time = ktime_to_us(ktime_get());
+@@ -3031,7 +3043,7 @@ nvkm_device_init(struct nvkm_device *dev
+       if (ret)
+               return ret;
+-      nvkm_device_fini(device, false);
++      nvkm_device_fini(device, NVKM_POWEROFF);
+       nvdev_trace(device, "init running...\n");
+       time = ktime_to_us(ktime_get());
+@@ -3059,9 +3071,9 @@ nvkm_device_init(struct nvkm_device *dev
+ fail_subdev:
+       list_for_each_entry_from(subdev, &device->subdev, head)
+-              nvkm_subdev_fini(subdev, false);
++              nvkm_subdev_fini(subdev, NVKM_POWEROFF);
+ fail:
+-      nvkm_device_fini(device, false);
++      nvkm_device_fini(device, NVKM_POWEROFF);
+       nvdev_error(device, "init failed with %d\n", ret);
+       return ret;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
+@@ -1605,10 +1605,10 @@ nvkm_device_pci_irq(struct nvkm_device *
+ }
+ static void
+-nvkm_device_pci_fini(struct nvkm_device *device, bool suspend)
++nvkm_device_pci_fini(struct nvkm_device *device, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_device_pci *pdev = nvkm_device_pci(device);
+-      if (suspend) {
++      if (suspend != NVKM_POWEROFF) {
+               pci_disable_device(pdev->pdev);
+               pdev->suspend = true;
+       }
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/priv.h
+@@ -56,5 +56,5 @@ int  nvkm_device_ctor(const struct nvkm_
+                     const char *name, const char *cfg, const char *dbg,
+                     struct nvkm_device *);
+ int  nvkm_device_init(struct nvkm_device *);
+-int  nvkm_device_fini(struct nvkm_device *, bool suspend);
++int  nvkm_device_fini(struct nvkm_device *, enum nvkm_suspend_state suspend);
+ #endif
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/user.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/user.c
+@@ -218,7 +218,7 @@ nvkm_udevice_map(struct nvkm_object *obj
+ }
+ static int
+-nvkm_udevice_fini(struct nvkm_object *object, bool suspend)
++nvkm_udevice_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_udevice *udev = nvkm_udevice(object);
+       struct nvkm_device *device = udev->device;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c
+@@ -99,13 +99,13 @@ nvkm_disp_intr(struct nvkm_engine *engin
+ }
+ static int
+-nvkm_disp_fini(struct nvkm_engine *engine, bool suspend)
++nvkm_disp_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_disp *disp = nvkm_disp(engine);
+       struct nvkm_outp *outp;
+       if (disp->func->fini)
+-              disp->func->fini(disp, suspend);
++              disp->func->fini(disp, suspend != NVKM_POWEROFF);
+       list_for_each_entry(outp, &disp->outps, head) {
+               if (outp->func->fini)
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/chan.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/chan.c
+@@ -128,7 +128,7 @@ nvkm_disp_chan_child_get(struct nvkm_obj
+ }
+ static int
+-nvkm_disp_chan_fini(struct nvkm_object *object, bool suspend)
++nvkm_disp_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_disp_chan *chan = nvkm_disp_chan(object);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/falcon.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/falcon.c
+@@ -93,13 +93,13 @@ nvkm_falcon_intr(struct nvkm_engine *eng
+ }
+ static int
+-nvkm_falcon_fini(struct nvkm_engine *engine, bool suspend)
++nvkm_falcon_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_falcon *falcon = nvkm_falcon(engine);
+       struct nvkm_device *device = falcon->engine.subdev.device;
+       const u32 base = falcon->addr;
+-      if (!suspend) {
++      if (suspend == NVKM_POWEROFF) {
+               nvkm_memory_unref(&falcon->core);
+               if (falcon->external) {
+                       vfree(falcon->data.data);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/base.c
+@@ -122,7 +122,7 @@ nvkm_fifo_class_get(struct nvkm_oclass *
+ }
+ static int
+-nvkm_fifo_fini(struct nvkm_engine *engine, bool suspend)
++nvkm_fifo_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_fifo *fifo = nvkm_fifo(engine);
+       struct nvkm_runl *runl;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/fifo/uchan.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/fifo/uchan.c
+@@ -72,7 +72,7 @@ struct nvkm_uobj {
+ };
+ static int
+-nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, bool suspend)
++nvkm_uchan_object_fini_1(struct nvkm_oproxy *oproxy, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_uobj *uobj = container_of(oproxy, typeof(*uobj), oproxy);
+       struct nvkm_chan *chan = uobj->chan;
+@@ -87,7 +87,7 @@ nvkm_uchan_object_fini_1(struct nvkm_opr
+               nvkm_chan_cctx_bind(chan, ectx->engn, NULL);
+               if (refcount_dec_and_test(&ectx->uses))
+-                      nvkm_object_fini(ectx->object, false);
++                      nvkm_object_fini(ectx->object, NVKM_POWEROFF);
+               mutex_unlock(&chan->cgrp->mutex);
+       }
+@@ -269,7 +269,7 @@ nvkm_uchan_map(struct nvkm_object *objec
+ }
+ static int
+-nvkm_uchan_fini(struct nvkm_object *object, bool suspend)
++nvkm_uchan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_chan *chan = nvkm_uchan(object)->chan;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/base.c
+@@ -168,11 +168,11 @@ nvkm_gr_init(struct nvkm_engine *engine)
+ }
+ static int
+-nvkm_gr_fini(struct nvkm_engine *engine, bool suspend)
++nvkm_gr_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_gr *gr = nvkm_gr(engine);
+       if (gr->func->fini)
+-              return gr->func->fini(gr, suspend);
++              return gr->func->fini(gr, suspend != NVKM_POWEROFF);
+       return 0;
+ }
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+@@ -2330,7 +2330,7 @@ gf100_gr_reset(struct nvkm_gr *base)
+       WARN_ON(gf100_gr_fecs_halt_pipeline(gr));
+-      subdev->func->fini(subdev, false);
++      subdev->func->fini(subdev, NVKM_POWEROFF);
+       nvkm_mc_disable(device, subdev->type, subdev->inst);
+       if (gr->func->gpccs.reset)
+               gr->func->gpccs.reset(gr);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv04.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv04.c
+@@ -1158,7 +1158,7 @@ nv04_gr_chan_dtor(struct nvkm_object *ob
+ }
+ static int
+-nv04_gr_chan_fini(struct nvkm_object *object, bool suspend)
++nv04_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nv04_gr_chan *chan = nv04_gr_chan(object);
+       struct nv04_gr *gr = chan->gr;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv10.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv10.c
+@@ -951,7 +951,7 @@ nv10_gr_context_switch(struct nv10_gr *g
+ }
+ static int
+-nv10_gr_chan_fini(struct nvkm_object *object, bool suspend)
++nv10_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nv10_gr_chan *chan = nv10_gr_chan(object);
+       struct nv10_gr *gr = chan->gr;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.c
+@@ -27,7 +27,7 @@ nv20_gr_chan_init(struct nvkm_object *ob
+ }
+ int
+-nv20_gr_chan_fini(struct nvkm_object *object, bool suspend)
++nv20_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nv20_gr_chan *chan = nv20_gr_chan(object);
+       struct nv20_gr *gr = chan->gr;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.h
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv20.h
+@@ -31,5 +31,5 @@ struct nv20_gr_chan {
+ void *nv20_gr_chan_dtor(struct nvkm_object *);
+ int nv20_gr_chan_init(struct nvkm_object *);
+-int nv20_gr_chan_fini(struct nvkm_object *, bool);
++int nv20_gr_chan_fini(struct nvkm_object *, enum nvkm_suspend_state);
+ #endif
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv40.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/nv40.c
+@@ -89,7 +89,7 @@ nv40_gr_chan_bind(struct nvkm_object *ob
+ }
+ static int
+-nv40_gr_chan_fini(struct nvkm_object *object, bool suspend)
++nv40_gr_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nv40_gr_chan *chan = nv40_gr_chan(object);
+       struct nv40_gr *gr = chan->gr;
+@@ -101,7 +101,7 @@ nv40_gr_chan_fini(struct nvkm_object *ob
+       nvkm_mask(device, 0x400720, 0x00000001, 0x00000000);
+       if (nvkm_rd32(device, 0x40032c) == inst) {
+-              if (suspend) {
++              if (suspend != NVKM_POWEROFF) {
+                       nvkm_wr32(device, 0x400720, 0x00000000);
+                       nvkm_wr32(device, 0x400784, inst);
+                       nvkm_mask(device, 0x400310, 0x00000020, 0x00000020);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
+@@ -65,7 +65,7 @@ nv44_mpeg_chan_bind(struct nvkm_object *
+ }
+ static int
+-nv44_mpeg_chan_fini(struct nvkm_object *object, bool suspend)
++nv44_mpeg_chan_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nv44_mpeg_chan *chan = nv44_mpeg_chan(object);
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/sec2/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/sec2/base.c
+@@ -37,7 +37,7 @@ nvkm_sec2_finimsg(void *priv, struct nvf
+ }
+ static int
+-nvkm_sec2_fini(struct nvkm_engine *engine, bool suspend)
++nvkm_sec2_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_sec2 *sec2 = nvkm_sec2(engine);
+       struct nvkm_subdev *subdev = &sec2->engine.subdev;
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/xtensa.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/xtensa.c
+@@ -76,7 +76,7 @@ nvkm_xtensa_intr(struct nvkm_engine *eng
+ }
+ static int
+-nvkm_xtensa_fini(struct nvkm_engine *engine, bool suspend)
++nvkm_xtensa_fini(struct nvkm_engine *engine, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_xtensa *xtensa = nvkm_xtensa(engine);
+       struct nvkm_device *device = xtensa->engine.subdev.device;
+@@ -85,7 +85,7 @@ nvkm_xtensa_fini(struct nvkm_engine *eng
+       nvkm_wr32(device, base + 0xd84, 0); /* INTR_EN */
+       nvkm_wr32(device, base + 0xd94, 0); /* FIFO_CTRL */
+-      if (!suspend)
++      if (suspend == NVKM_POWEROFF)
+               nvkm_memory_unref(&xtensa->gpu_fw);
+       return 0;
+ }
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/acr/base.c
+@@ -182,7 +182,7 @@ nvkm_acr_managed_falcon(struct nvkm_devi
+ }
+ static int
+-nvkm_acr_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_acr_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       if (!subdev->use.enabled)
+               return 0;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bar/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bar/base.c
+@@ -90,7 +90,7 @@ nvkm_bar_bar2_init(struct nvkm_device *d
+ }
+ static int
+-nvkm_bar_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_bar_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_bar *bar = nvkm_bar(subdev);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/base.c
+@@ -577,7 +577,7 @@ nvkm_clk_read(struct nvkm_clk *clk, enum
+ }
+ static int
+-nvkm_clk_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_clk_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_clk *clk = nvkm_clk(subdev);
+       flush_work(&clk->work);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/base.c
+@@ -67,11 +67,11 @@ nvkm_devinit_post(struct nvkm_devinit *i
+ }
+ static int
+-nvkm_devinit_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_devinit_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_devinit *init = nvkm_devinit(subdev);
+       /* force full reinit on resume */
+-      if (suspend)
++      if (suspend != NVKM_POWEROFF)
+               init->post = true;
+       return 0;
+ }
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fault/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fault/base.c
+@@ -51,7 +51,7 @@ nvkm_fault_intr(struct nvkm_subdev *subd
+ }
+ static int
+-nvkm_fault_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_fault_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_fault *fault = nvkm_fault(subdev);
+       if (fault->func->fini)
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fault/user.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fault/user.c
+@@ -56,7 +56,7 @@ nvkm_ufault_map(struct nvkm_object *obje
+ }
+ static int
+-nvkm_ufault_fini(struct nvkm_object *object, bool suspend)
++nvkm_ufault_fini(struct nvkm_object *object, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_fault_buffer *buffer = nvkm_fault_buffer(object);
+       buffer->fault->func->buffer.fini(buffer);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gpio/base.c
+@@ -144,7 +144,7 @@ nvkm_gpio_intr(struct nvkm_subdev *subde
+ }
+ static int
+-nvkm_gpio_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_gpio_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_gpio *gpio = nvkm_gpio(subdev);
+       u32 mask = (1ULL << gpio->func->lines) - 1;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/base.c
+@@ -48,7 +48,7 @@ nvkm_gsp_intr_stall(struct nvkm_gsp *gsp
+ }
+ static int
+-nvkm_gsp_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_gsp_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_gsp *gsp = nvkm_gsp(subdev);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/gh100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/gh100.c
+@@ -17,7 +17,7 @@
+ #include <nvhw/ref/gh100/dev_riscv_pri.h>
+ int
+-gh100_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
++gh100_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_falcon *falcon = &gsp->falcon;
+       int ret, time = 4000;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/priv.h
+@@ -59,7 +59,7 @@ struct nvkm_gsp_func {
+       void (*dtor)(struct nvkm_gsp *);
+       int (*oneinit)(struct nvkm_gsp *);
+       int (*init)(struct nvkm_gsp *);
+-      int (*fini)(struct nvkm_gsp *, bool suspend);
++      int (*fini)(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
+       int (*reset)(struct nvkm_gsp *);
+       struct {
+@@ -75,7 +75,7 @@ int tu102_gsp_fwsec_sb_ctor(struct nvkm_
+ void tu102_gsp_fwsec_sb_dtor(struct nvkm_gsp *);
+ int tu102_gsp_oneinit(struct nvkm_gsp *);
+ int tu102_gsp_init(struct nvkm_gsp *);
+-int tu102_gsp_fini(struct nvkm_gsp *, bool suspend);
++int tu102_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
+ int tu102_gsp_reset(struct nvkm_gsp *);
+ u64 tu102_gsp_wpr_heap_size(struct nvkm_gsp *);
+@@ -87,12 +87,12 @@ int ga102_gsp_reset(struct nvkm_gsp *);
+ int gh100_gsp_oneinit(struct nvkm_gsp *);
+ int gh100_gsp_init(struct nvkm_gsp *);
+-int gh100_gsp_fini(struct nvkm_gsp *, bool suspend);
++int gh100_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
+ void r535_gsp_dtor(struct nvkm_gsp *);
+ int r535_gsp_oneinit(struct nvkm_gsp *);
+ int r535_gsp_init(struct nvkm_gsp *);
+-int r535_gsp_fini(struct nvkm_gsp *, bool suspend);
++int r535_gsp_fini(struct nvkm_gsp *, enum nvkm_suspend_state suspend);
+ int nvkm_gsp_new_(const struct nvkm_gsp_fwif *, struct nvkm_device *, enum nvkm_subdev_type, int,
+                 struct nvkm_gsp **);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
+@@ -1721,7 +1721,7 @@ r535_gsp_sr_data_size(struct nvkm_gsp *g
+ }
+ int
+-r535_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
++r535_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_rm *rm = gsp->rm;
+       int ret;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/tu102.c
+@@ -161,7 +161,7 @@ tu102_gsp_reset(struct nvkm_gsp *gsp)
+ }
+ int
+-tu102_gsp_fini(struct nvkm_gsp *gsp, bool suspend)
++tu102_gsp_fini(struct nvkm_gsp *gsp, enum nvkm_suspend_state suspend)
+ {
+       u32 mbox0 = 0xff, mbox1 = 0xff;
+       int ret;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c
+@@ -135,7 +135,7 @@ nvkm_i2c_intr(struct nvkm_subdev *subdev
+ }
+ static int
+-nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_i2c_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_i2c *i2c = nvkm_i2c(subdev);
+       struct nvkm_i2c_pad *pad;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/instmem/base.c
+@@ -176,7 +176,7 @@ nvkm_instmem_boot(struct nvkm_instmem *i
+ }
+ static int
+-nvkm_instmem_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_instmem_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_instmem *imem = nvkm_instmem(subdev);
+       int ret;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pci/base.c
+@@ -74,7 +74,7 @@ nvkm_pci_rom_shadow(struct nvkm_pci *pci
+ }
+ static int
+-nvkm_pci_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_pci_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_pci *pci = nvkm_pci(subdev);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+@@ -77,7 +77,7 @@ nvkm_pmu_intr(struct nvkm_subdev *subdev
+ }
+ static int
+-nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_pmu_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_pmu *pmu = nvkm_pmu(subdev);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c
+@@ -341,15 +341,15 @@ nvkm_therm_intr(struct nvkm_subdev *subd
+ }
+ static int
+-nvkm_therm_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_therm_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_therm *therm = nvkm_therm(subdev);
+       if (therm->func->fini)
+               therm->func->fini(therm);
+-      nvkm_therm_fan_fini(therm, suspend);
+-      nvkm_therm_sensor_fini(therm, suspend);
++      nvkm_therm_fan_fini(therm, suspend != NVKM_POWEROFF);
++      nvkm_therm_sensor_fini(therm, suspend != NVKM_POWEROFF);
+       if (suspend) {
+               therm->suspend = therm->mode;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/timer/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/timer/base.c
+@@ -149,7 +149,7 @@ nvkm_timer_intr(struct nvkm_subdev *subd
+ }
+ static int
+-nvkm_timer_fini(struct nvkm_subdev *subdev, bool suspend)
++nvkm_timer_fini(struct nvkm_subdev *subdev, enum nvkm_suspend_state suspend)
+ {
+       struct nvkm_timer *tmr = nvkm_timer(subdev);
+       tmr->func->alarm_fini(tmr);
diff --git a/queue-6.18/nouveau-gsp-fix-suspend-resume-regression-on-r570-firmware.patch b/queue-6.18/nouveau-gsp-fix-suspend-resume-regression-on-r570-firmware.patch
new file mode 100644 (file)
index 0000000..02725fb
--- /dev/null
@@ -0,0 +1,102 @@
+From 8302d0afeaec0bc57d951dd085e0cffe997d4d18 Mon Sep 17 00:00:00 2001
+From: Dave Airlie <airlied@redhat.com>
+Date: Tue, 3 Feb 2026 15:21:13 +1000
+Subject: nouveau/gsp: fix suspend/resume regression on r570 firmware
+
+From: Dave Airlie <airlied@redhat.com>
+
+commit 8302d0afeaec0bc57d951dd085e0cffe997d4d18 upstream.
+
+The r570 firmware with certain GPUs (at least RTX6000) needs this
+flag to reflect the suspend vs runtime PM state of the driver.
+
+This uses that info to set the correct flags to the firmware.
+
+This fixes a regression on RTX6000 and other GPUs since r570 firmware
+was enabled.
+
+Fixes: 53dac0623853 ("drm/nouveau/gsp: add support for 570.144")
+Cc: <stable@vger.kernel.org>
+Reviewed-by: Lyude Paul <lyude@redhat.com>
+Tested-by: Lyude Paul <lyude@redhat.com>
+Signed-off-by: Dave Airlie <airlied@redhat.com>
+Link: https://patch.msgid.link/20260203052431.2219998-4-airlied@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fbsr.c |    2 +-
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c  |    2 +-
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/fbsr.c |    8 ++++----
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h        |    2 +-
+ 4 files changed, 7 insertions(+), 7 deletions(-)
+
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fbsr.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/fbsr.c
+@@ -208,7 +208,7 @@ r535_fbsr_resume(struct nvkm_gsp *gsp)
+ }
+ static int
+-r535_fbsr_suspend(struct nvkm_gsp *gsp)
++r535_fbsr_suspend(struct nvkm_gsp *gsp, bool runtime)
+ {
+       struct nvkm_subdev *subdev = &gsp->subdev;
+       struct nvkm_device *device = subdev->device;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
+@@ -1748,7 +1748,7 @@ r535_gsp_fini(struct nvkm_gsp *gsp, enum
+               sr->sysmemAddrOfSuspendResumeData = gsp->sr.radix3.lvl0.addr;
+               sr->sizeOfSuspendResumeData = len;
+-              ret = rm->api->fbsr->suspend(gsp);
++              ret = rm->api->fbsr->suspend(gsp, suspend == NVKM_RUNTIME_SUSPEND);
+               if (ret) {
+                       nvkm_gsp_mem_dtor(&gsp->sr.meta);
+                       nvkm_gsp_radix3_dtor(gsp, &gsp->sr.radix3);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/fbsr.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/fbsr.c
+@@ -62,7 +62,7 @@ r570_fbsr_resume(struct nvkm_gsp *gsp)
+ }
+ static int
+-r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size)
++r570_fbsr_init(struct nvkm_gsp *gsp, struct sg_table *sgt, u64 size, bool runtime)
+ {
+       NV2080_CTRL_INTERNAL_FBSR_INIT_PARAMS *ctrl;
+       struct nvkm_gsp_object memlist;
+@@ -81,7 +81,7 @@ r570_fbsr_init(struct nvkm_gsp *gsp, str
+       ctrl->hClient = gsp->internal.client.object.handle;
+       ctrl->hSysMem = memlist.handle;
+       ctrl->sysmemAddrOfSuspendResumeData = gsp->sr.meta.addr;
+-      ctrl->bEnteringGcoffState = 1;
++      ctrl->bEnteringGcoffState = runtime ? 1 : 0;
+       ret = nvkm_gsp_rm_ctrl_wr(&gsp->internal.device.subdevice, ctrl);
+       if (ret)
+@@ -92,7 +92,7 @@ r570_fbsr_init(struct nvkm_gsp *gsp, str
+ }
+ static int
+-r570_fbsr_suspend(struct nvkm_gsp *gsp)
++r570_fbsr_suspend(struct nvkm_gsp *gsp, bool runtime)
+ {
+       struct nvkm_subdev *subdev = &gsp->subdev;
+       struct nvkm_device *device = subdev->device;
+@@ -133,7 +133,7 @@ r570_fbsr_suspend(struct nvkm_gsp *gsp)
+               return ret;
+       /* Initialise FBSR on RM. */
+-      ret = r570_fbsr_init(gsp, &gsp->sr.fbsr, size);
++      ret = r570_fbsr_init(gsp, &gsp->sr.fbsr, size, runtime);
+       if (ret) {
+               nvkm_gsp_sg_free(device, &gsp->sr.fbsr);
+               return ret;
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/rm.h
+@@ -78,7 +78,7 @@ struct nvkm_rm_api {
+       } *device;
+       const struct nvkm_rm_api_fbsr {
+-              int (*suspend)(struct nvkm_gsp *);
++              int (*suspend)(struct nvkm_gsp *, bool runtime);
+               void (*resume)(struct nvkm_gsp *);
+       } *fbsr;
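The hunks above thread a `runtime` flag from the fbsr suspend entry point down to the RM init parameters, so GC-OFF is only requested for runtime suspend. A minimal sketch of that plumbing (hypothetical types, not the real nouveau structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for NV2080_CTRL_INTERNAL_FBSR_INIT_PARAMS. */
struct fbsr_init_params {
	int bEnteringGcoffState;
};

static int fbsr_suspend(struct fbsr_init_params *ctrl, bool runtime)
{
	/* Mirrors 'ctrl->bEnteringGcoffState = runtime ? 1 : 0;' above:
	 * only runtime suspend enters the GC-OFF state. */
	ctrl->bEnteringGcoffState = runtime ? 1 : 0;
	return 0;
}

/* Helper mirroring the call site: the state the RM would be told. */
static int gcoff_state_for(bool runtime)
{
	struct fbsr_init_params ctrl = { 0 };

	fbsr_suspend(&ctrl, runtime);
	return ctrl.bEnteringGcoffState;
}
```

Before the fix the field was hard-coded to 1, so system suspend was also treated as GC-OFF entry.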
diff --git a/queue-6.18/nouveau-gsp-use-rpc-sequence-numbers-properly.patch b/queue-6.18/nouveau-gsp-use-rpc-sequence-numbers-properly.patch
new file mode 100644 (file)
index 0000000..5e9d53a
--- /dev/null
@@ -0,0 +1,114 @@
+From 90caca3b7264cc3e92e347b2004fff4e386fc26e Mon Sep 17 00:00:00 2001
+From: Dave Airlie <airlied@redhat.com>
+Date: Tue, 3 Feb 2026 15:21:11 +1000
+Subject: nouveau/gsp: use rpc sequence numbers properly.
+
+From: Dave Airlie <airlied@redhat.com>
+
+commit 90caca3b7264cc3e92e347b2004fff4e386fc26e upstream.
+
+There are two layers of sequence numbers: one at the msg level and
+one at the rpc level.
+
+The 570 firmware started asserting that the sequence numbers arrive
+in the right order, and we would see nocat records containing those
+asserts.
+
+Add support for the rpc-level sequence numbers.
+
+Fixes: 53dac0623853 ("drm/nouveau/gsp: add support for 570.144")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Dave Airlie <airlied@redhat.com>
+Reviewed-by: Lyude Paul <lyude@redhat.com>
+Tested-by: Lyude Paul <lyude@redhat.com>
+Link: https://patch.msgid.link/20260203052431.2219998-2-airlied@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h     |    6 ++++++
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c |    4 ++--
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c |    6 ++++++
+ drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/gsp.c |    2 +-
+ 4 files changed, 15 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h
+@@ -44,6 +44,9 @@ typedef void (*nvkm_gsp_event_func)(stru
+  * NVKM_GSP_RPC_REPLY_NOWAIT - If specified, immediately return to the
+  * caller after the GSP RPC command is issued.
+  *
++ * NVKM_GSP_RPC_REPLY_NOSEQ - If specified, exactly like NOWAIT
++ * but don't emit RPC sequence number.
++ *
+  * NVKM_GSP_RPC_REPLY_RECV - If specified, wait and receive the entire GSP
+  * RPC message after the GSP RPC command is issued.
+  *
+@@ -53,6 +56,7 @@ typedef void (*nvkm_gsp_event_func)(stru
+  */
+ enum nvkm_gsp_rpc_reply_policy {
+       NVKM_GSP_RPC_REPLY_NOWAIT = 0,
++      NVKM_GSP_RPC_REPLY_NOSEQ,
+       NVKM_GSP_RPC_REPLY_RECV,
+       NVKM_GSP_RPC_REPLY_POLL,
+ };
+@@ -242,6 +246,8 @@ struct nvkm_gsp {
+       /* The size of the registry RPC */
+       size_t registry_rpc_size;
++      u32 rpc_seq;
++
+ #ifdef CONFIG_DEBUG_FS
+       /*
+        * Logging buffers in debugfs. The wrapper objects need to remain
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/gsp.c
+@@ -704,7 +704,7 @@ r535_gsp_rpc_set_registry(struct nvkm_gs
+       build_registry(gsp, rpc);
+-      return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOWAIT);
++      return nvkm_gsp_rpc_wr(gsp, rpc, NVKM_GSP_RPC_REPLY_NOSEQ);
+ fail:
+       clean_registry(gsp);
+@@ -921,7 +921,7 @@ r535_gsp_set_system_info(struct nvkm_gsp
+       info->pciConfigMirrorSize = device->pci->func->cfg.size;
+       r535_gsp_acpi_info(gsp, &info->acpiMethodData);
+-      return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT);
++      return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOSEQ);
+ }
+ static int
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
+@@ -557,6 +557,7 @@ r535_gsp_rpc_handle_reply(struct nvkm_gs
+       switch (policy) {
+       case NVKM_GSP_RPC_REPLY_NOWAIT:
++      case NVKM_GSP_RPC_REPLY_NOSEQ:
+               break;
+       case NVKM_GSP_RPC_REPLY_RECV:
+               reply = r535_gsp_msg_recv(gsp, fn, gsp_rpc_len);
+@@ -588,6 +589,11 @@ r535_gsp_rpc_send(struct nvkm_gsp *gsp,
+                              rpc->data, rpc->length - sizeof(*rpc), true);
+       }
++      if (policy == NVKM_GSP_RPC_REPLY_NOSEQ)
++              rpc->sequence = 0;
++      else
++              rpc->sequence = gsp->rpc_seq++;
++
+       ret = r535_gsp_cmdq_push(gsp, rpc);
+       if (ret)
+               return ERR_PTR(ret);
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/gsp.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r570/gsp.c
+@@ -176,7 +176,7 @@ r570_gsp_set_system_info(struct nvkm_gsp
+       info->bIsPrimary = video_is_primary_device(device->dev);
+       info->bPreserveVideoMemoryAllocations = false;
+-      return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOWAIT);
++      return nvkm_gsp_rpc_wr(gsp, info, NVKM_GSP_RPC_REPLY_NOSEQ);
+ }
+ static void
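The patch above keeps a per-GSP `rpc_seq` counter and adds a NOSEQ policy for the pre-init registry/system-info writes. A hypothetical reduction of the `r535_gsp_rpc_send()` change (simplified structures, not the nouveau types):

```c
#include <assert.h>
#include <stdint.h>

enum rpc_reply_policy {
	REPLY_NOWAIT,
	REPLY_NOSEQ,
	REPLY_RECV,
	REPLY_POLL,
};

struct gsp { uint32_t rpc_seq; };
struct rpc { uint32_t sequence; };

/* NOSEQ RPCs carry sequence 0 and do not advance the counter; every
 * other RPC consumes the next rpc-level sequence number. */
static void rpc_assign_seq(struct gsp *gsp, struct rpc *rpc,
			   enum rpc_reply_policy policy)
{
	if (policy == REPLY_NOSEQ)
		rpc->sequence = 0;
	else
		rpc->sequence = gsp->rpc_seq++;
}

/* Self-check: NOSEQ leaves the counter alone, others increment it. */
static int rpc_seq_selftest(void)
{
	struct gsp gsp = { .rpc_seq = 0 };
	struct rpc rpc;

	rpc_assign_seq(&gsp, &rpc, REPLY_NOSEQ);
	if (rpc.sequence != 0 || gsp.rpc_seq != 0)
		return 0;
	rpc_assign_seq(&gsp, &rpc, REPLY_NOWAIT);
	if (rpc.sequence != 0 || gsp.rpc_seq != 1)
		return 0;
	rpc_assign_seq(&gsp, &rpc, REPLY_RECV);
	return rpc.sequence == 1 && gsp.rpc_seq == 2;
}
```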
diff --git a/queue-6.18/platform-x86-intel_telemetry-fix-swapped-arrays-in-pss-output.patch b/queue-6.18/platform-x86-intel_telemetry-fix-swapped-arrays-in-pss-output.patch
new file mode 100644 (file)
index 0000000..c47d1f9
--- /dev/null
@@ -0,0 +1,54 @@
+From 25e9e322d2ab5c03602eff4fbf4f7c40019d8de2 Mon Sep 17 00:00:00 2001
+From: Kaushlendra Kumar <kaushlendra.kumar@intel.com>
+Date: Wed, 24 Dec 2025 08:50:53 +0530
+Subject: platform/x86: intel_telemetry: Fix swapped arrays in PSS output
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Kaushlendra Kumar <kaushlendra.kumar@intel.com>
+
+commit 25e9e322d2ab5c03602eff4fbf4f7c40019d8de2 upstream.
+
+The LTR blocking statistics and wakeup event counters are incorrectly
+cross-referenced during debugfs output rendering. The code populates
+pss_ltr_blkd[] with LTR blocking data and pss_s0ix_wakeup[] with wakeup
+data, but the display loops reference the wrong arrays.
+
+This causes the "LTR Blocking Status" section to print wakeup events
+and the "Wakes Status" section to print LTR blockers, misleading power
+management analysis and S0ix residency debugging.
+
+Fix by aligning array usage with the intended output section labels.
+
+Fixes: 87bee290998d ("platform:x86: Add Intel Telemetry Debugfs interfaces")
+Cc: stable@vger.kernel.org
+Signed-off-by: Kaushlendra Kumar <kaushlendra.kumar@intel.com>
+Link: https://patch.msgid.link/20251224032053.3915900-1-kaushlendra.kumar@intel.com
+Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/platform/x86/intel/telemetry/debugfs.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/platform/x86/intel/telemetry/debugfs.c
++++ b/drivers/platform/x86/intel/telemetry/debugfs.c
+@@ -449,7 +449,7 @@ static int telem_pss_states_show(struct
+       for (index = 0; index < debugfs_conf->pss_ltr_evts; index++) {
+               seq_printf(s, "%-32s\t%u\n",
+                          debugfs_conf->pss_ltr_data[index].name,
+-                         pss_s0ix_wakeup[index]);
++                         pss_ltr_blkd[index]);
+       }
+       seq_puts(s, "\n--------------------------------------\n");
+@@ -459,7 +459,7 @@ static int telem_pss_states_show(struct
+       for (index = 0; index < debugfs_conf->pss_wakeup_evts; index++) {
+               seq_printf(s, "%-32s\t%u\n",
+                          debugfs_conf->pss_wakeup[index].name,
+-                         pss_ltr_blkd[index]);
++                         pss_s0ix_wakeup[index]);
+       }
+       return 0;
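The telemetry fix above is a pure array swap: each debugfs section must print from the array it was populated from. A hypothetical reduction (made-up counter values, not the driver's data):

```c
#include <assert.h>

#define NEVTS 2

static const unsigned int pss_ltr_blkd[NEVTS]    = { 11, 12 }; /* LTR blockers */
static const unsigned int pss_s0ix_wakeup[NEVTS] = { 21, 22 }; /* wakeup events */

/* "LTR Blocking Status" section must read the LTR array ... */
static unsigned int ltr_blocking_value(int index)
{
	return pss_ltr_blkd[index];	/* was: pss_s0ix_wakeup[index] */
}

/* ... and "Wakes Status" must read the wakeup array. */
static unsigned int wakeup_value(int index)
{
	return pss_s0ix_wakeup[index];	/* was: pss_ltr_blkd[index] */
}
```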
diff --git a/queue-6.18/pmdomain-imx-gpcv2-fix-the-imx8mm-gpu-hang-due-to-wrong-adb400-reset.patch b/queue-6.18/pmdomain-imx-gpcv2-fix-the-imx8mm-gpu-hang-due-to-wrong-adb400-reset.patch
new file mode 100644 (file)
index 0000000..60b971a
--- /dev/null
@@ -0,0 +1,67 @@
+From ae0a24c5a8dcea20bf8e344eadf6593e6d1959c3 Mon Sep 17 00:00:00 2001
+From: Jacky Bai <ping.bai@nxp.com>
+Date: Fri, 23 Jan 2026 10:51:26 +0800
+Subject: pmdomain: imx: gpcv2: Fix the imx8mm gpu hang due to wrong adb400 reset
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Jacky Bai <ping.bai@nxp.com>
+
+commit ae0a24c5a8dcea20bf8e344eadf6593e6d1959c3 upstream.
+
+On i.MX8MM, the GPUMIX, GPU2D, and GPU3D blocks share a common reset
+domain. Due to this hardware limitation, powering off/on GPU2D or GPU3D
+also triggers a reset of the GPUMIX domain, including its ADB400 port.
+However, the ADB400 interface must always be placed into power‑down mode
+before being reset.
+
+Currently the GPUMIX and GPU2D/3D power domains rely on runtime PM to
+handle dependency ordering. In some corner cases, the GPUMIX power off
+sequence is skipped, leaving the ADB400 port active when GPU2D/3D reset.
+This causes the GPUMIX ADB400 port to be reset while still active,
+leading to unpredictable bus behavior and GPU hangs.
+
+To avoid this, refine the power‑domain control logic so that the GPUMIX
+ADB400 port is explicitly powered down and powered up as part of the GPU
+power domain on/off sequence. This ensures proper ordering and prevents
+incorrect ADB400 reset.
+
+Suggested-by: Lucas Stach <l.stach@pengutronix.de>
+Signed-off-by: Jacky Bai <ping.bai@nxp.com>
+Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
+Tested-by: Philipp Zabel <p.zabel@pengutronix.de>
+Cc: stable@vger.kernel.org
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pmdomain/imx/gpcv2.c |    8 ++------
+ 1 file changed, 2 insertions(+), 6 deletions(-)
+
+--- a/drivers/pmdomain/imx/gpcv2.c
++++ b/drivers/pmdomain/imx/gpcv2.c
+@@ -165,13 +165,11 @@
+ #define IMX8M_VPU_HSK_PWRDNREQN                       BIT(5)
+ #define IMX8M_DISP_HSK_PWRDNREQN              BIT(4)
+-#define IMX8MM_GPUMIX_HSK_PWRDNACKN           BIT(29)
+-#define IMX8MM_GPU_HSK_PWRDNACKN              (BIT(27) | BIT(28))
++#define IMX8MM_GPU_HSK_PWRDNACKN              GENMASK(29, 27)
+ #define IMX8MM_VPUMIX_HSK_PWRDNACKN           BIT(26)
+ #define IMX8MM_DISPMIX_HSK_PWRDNACKN          BIT(25)
+ #define IMX8MM_HSIO_HSK_PWRDNACKN             (BIT(23) | BIT(24))
+-#define IMX8MM_GPUMIX_HSK_PWRDNREQN           BIT(11)
+-#define IMX8MM_GPU_HSK_PWRDNREQN              (BIT(9) | BIT(10))
++#define IMX8MM_GPU_HSK_PWRDNREQN              GENMASK(11, 9)
+ #define IMX8MM_VPUMIX_HSK_PWRDNREQN           BIT(8)
+ #define IMX8MM_DISPMIX_HSK_PWRDNREQN          BIT(7)
+ #define IMX8MM_HSIO_HSK_PWRDNREQN             (BIT(5) | BIT(6))
+@@ -794,8 +792,6 @@ static const struct imx_pgc_domain imx8m
+               .bits  = {
+                       .pxx = IMX8MM_GPUMIX_SW_Pxx_REQ,
+                       .map = IMX8MM_GPUMIX_A53_DOMAIN,
+-                      .hskreq = IMX8MM_GPUMIX_HSK_PWRDNREQN,
+-                      .hskack = IMX8MM_GPUMIX_HSK_PWRDNACKN,
+               },
+               .pgc   = BIT(IMX8MM_PGC_GPUMIX),
+               .keep_clocks = true,
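The mask consolidation above folds the separate GPUMIX handshake bit into the GPU request/ack masks. Userspace stand-ins for the kernel's BIT()/GENMASK() helpers confirm the new masks cover exactly the old bits (values taken from the defines in the hunk):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 32-bit versions of the kernel macros. */
#define BIT(n)        (UINT32_C(1) << (n))
#define GENMASK(h, l) ((UINT32_MAX >> (31 - (h))) & (UINT32_MAX << (l)))

/* Old GPUMIX bit + old GPU bits vs. the new consolidated masks. */
static const uint32_t old_ack = BIT(29) | BIT(27) | BIT(28);
static const uint32_t new_ack = GENMASK(29, 27); /* IMX8MM_GPU_HSK_PWRDNACKN */

static const uint32_t old_req = BIT(11) | BIT(9) | BIT(10);
static const uint32_t new_req = GENMASK(11, 9);  /* IMX8MM_GPU_HSK_PWRDNREQN */
```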
diff --git a/queue-6.18/pmdomain-imx8m-blk-ctrl-fix-out-of-range-access-of-bc-domains.patch b/queue-6.18/pmdomain-imx8m-blk-ctrl-fix-out-of-range-access-of-bc-domains.patch
new file mode 100644 (file)
index 0000000..3ba1429
--- /dev/null
@@ -0,0 +1,32 @@
+From 6bd8b4a92a901fae1a422e6f914801063c345e8d Mon Sep 17 00:00:00 2001
+From: Xu Yang <xu.yang_2@nxp.com>
+Date: Fri, 30 Jan 2026 13:11:07 +0800
+Subject: pmdomain: imx8m-blk-ctrl: fix out-of-range access of bc->domains
+
+From: Xu Yang <xu.yang_2@nxp.com>
+
+commit 6bd8b4a92a901fae1a422e6f914801063c345e8d upstream.
+
+Fix out-of-range access of bc->domains in imx8m_blk_ctrl_remove().
+
+Fixes: 2684ac05a8c4 ("soc: imx: add i.MX8M blk-ctrl driver")
+Cc: stable@kernel.org
+Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
+Reviewed-by: Daniel Baluta <daniel.baluta@nxp.com>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pmdomain/imx/imx8m-blk-ctrl.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/pmdomain/imx/imx8m-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8m-blk-ctrl.c
+@@ -340,7 +340,7 @@ static void imx8m_blk_ctrl_remove(struct
+       of_genpd_del_provider(pdev->dev.of_node);
+-      for (i = 0; bc->onecell_data.num_domains; i++) {
++      for (i = 0; i < bc->onecell_data.num_domains; i++) {
+               struct imx8m_blk_ctrl_domain *domain = &bc->domains[i];
+               pm_genpd_remove(&domain->genpd);
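The one-character fix above restores the missing index comparison: `for (i = 0; bc->onecell_data.num_domains; i++)` tests the count itself, which is always true for a non-empty provider, so the index runs past the end of `bc->domains`. A hypothetical reduction of the corrected loop:

```c
#include <assert.h>
#include <stddef.h>

/* Counts iterations of the fixed loop bound; the broken form
 * 'for (i = 0; num_domains; i++)' would never terminate for any
 * non-zero num_domains and would index out of range. */
static int remove_domains(size_t num_domains)
{
	int removed = 0;

	for (size_t i = 0; i < num_domains; i++)
		removed++;	/* pm_genpd_remove(&bc->domains[i].genpd) */
	return removed;
}
```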
diff --git a/queue-6.18/pmdomain-imx8mp-blk-ctrl-keep-gpc-power-domain-on-for-system-wakeup.patch b/queue-6.18/pmdomain-imx8mp-blk-ctrl-keep-gpc-power-domain-on-for-system-wakeup.patch
new file mode 100644 (file)
index 0000000..1c0d9a8
--- /dev/null
@@ -0,0 +1,109 @@
+From e9ab2b83893dd03cf04d98faded81190e635233f Mon Sep 17 00:00:00 2001
+From: Xu Yang <xu.yang_2@nxp.com>
+Date: Wed, 4 Feb 2026 19:11:41 +0800
+Subject: pmdomain: imx8mp-blk-ctrl: Keep gpc power domain on for system wakeup
+
+From: Xu Yang <xu.yang_2@nxp.com>
+
+commit e9ab2b83893dd03cf04d98faded81190e635233f upstream.
+
+The current design powers off all dependent GPC power domains in
+imx8mp_blk_ctrl_suspend(), even when the user device has enabled
+wakeup capability. As a result, the wakeup function never works for
+such a device.
+
+An example is USB wakeup on i.MX8MP. PHY device '382f0040.usb-phy' is
+attached to power domain 'hsioblk-usb-phy2', which is spawned by the
+hsio block control. A virtual power domain device
+'genpd:3:32f10000.blk-ctrl' is created to build the connection with
+'hsioblk-usb-phy2', and it depends on the GPC power domain 'usb-otg2'.
+If device '382f0040.usb-phy' enables wakeup, only power domain
+'hsioblk-usb-phy2' stays on during system suspend; power domain
+'usb-otg2' is off the whole time, so the wakeup event can't happen.
+
+In order to further establish a connection between the power domains
+related to GPC and block control during system suspend, register a genpd
+power on/off notifier for the power_dev. This allows us to prevent the GPC
+power domain from being powered off, in case the block control power
+domain is kept on to serve system wakeup.
+
+Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
+Fixes: 556f5cf9568a ("soc: imx: add i.MX8MP HSIO blk-ctrl")
+Cc: stable@vger.kernel.org
+Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pmdomain/imx/imx8mp-blk-ctrl.c |   26 ++++++++++++++++++++++++++
+ 1 file changed, 26 insertions(+)
+
+--- a/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
+@@ -65,6 +65,7 @@ struct imx8mp_blk_ctrl_domain {
+       struct icc_bulk_data paths[DOMAIN_MAX_PATHS];
+       struct device *power_dev;
+       struct imx8mp_blk_ctrl *bc;
++      struct notifier_block power_nb;
+       int num_paths;
+       int id;
+ };
+@@ -594,6 +595,20 @@ static int imx8mp_blk_ctrl_power_off(str
+       return 0;
+ }
++static int imx8mp_blk_ctrl_gpc_notifier(struct notifier_block *nb,
++                                      unsigned long action, void *data)
++{
++      struct imx8mp_blk_ctrl_domain *domain =
++                      container_of(nb, struct imx8mp_blk_ctrl_domain, power_nb);
++
++      if (action == GENPD_NOTIFY_PRE_OFF) {
++              if (domain->genpd.status == GENPD_STATE_ON)
++                      return NOTIFY_BAD;
++      }
++
++      return NOTIFY_OK;
++}
++
+ static struct lock_class_key blk_ctrl_genpd_lock_class;
+ static int imx8mp_blk_ctrl_probe(struct platform_device *pdev)
+@@ -698,6 +713,14 @@ static int imx8mp_blk_ctrl_probe(struct
+                       goto cleanup_pds;
+               }
++              domain->power_nb.notifier_call = imx8mp_blk_ctrl_gpc_notifier;
++              ret = dev_pm_genpd_add_notifier(domain->power_dev, &domain->power_nb);
++              if (ret) {
++                      dev_err_probe(dev, ret, "failed to add power notifier\n");
++                      dev_pm_domain_detach(domain->power_dev, true);
++                      goto cleanup_pds;
++              }
++
+               domain->genpd.name = data->name;
+               domain->genpd.power_on = imx8mp_blk_ctrl_power_on;
+               domain->genpd.power_off = imx8mp_blk_ctrl_power_off;
+@@ -707,6 +730,7 @@ static int imx8mp_blk_ctrl_probe(struct
+               ret = pm_genpd_init(&domain->genpd, NULL, true);
+               if (ret) {
+                       dev_err_probe(dev, ret, "failed to init power domain\n");
++                      dev_pm_genpd_remove_notifier(domain->power_dev);
+                       dev_pm_domain_detach(domain->power_dev, true);
+                       goto cleanup_pds;
+               }
+@@ -755,6 +779,7 @@ cleanup_provider:
+ cleanup_pds:
+       for (i--; i >= 0; i--) {
+               pm_genpd_remove(&bc->domains[i].genpd);
++              dev_pm_genpd_remove_notifier(bc->domains[i].power_dev);
+               dev_pm_domain_detach(bc->domains[i].power_dev, true);
+       }
+@@ -774,6 +799,7 @@ static void imx8mp_blk_ctrl_remove(struc
+               struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i];
+               pm_genpd_remove(&domain->genpd);
++              dev_pm_genpd_remove_notifier(domain->power_dev);
+               dev_pm_domain_detach(domain->power_dev, true);
+       }
diff --git a/queue-6.18/pmdomain-imx8mp-blk-ctrl-keep-usb-phy-power-domain-on-for-system-wakeup.patch b/queue-6.18/pmdomain-imx8mp-blk-ctrl-keep-usb-phy-power-domain-on-for-system-wakeup.patch
new file mode 100644 (file)
index 0000000..d865b86
--- /dev/null
@@ -0,0 +1,52 @@
+From e2c4c5b2bbd4f688a0f9f6da26cdf6d723c53478 Mon Sep 17 00:00:00 2001
+From: Xu Yang <xu.yang_2@nxp.com>
+Date: Wed, 4 Feb 2026 19:11:42 +0800
+Subject: pmdomain: imx8mp-blk-ctrl: Keep usb phy power domain on for system wakeup
+
+From: Xu Yang <xu.yang_2@nxp.com>
+
+commit e2c4c5b2bbd4f688a0f9f6da26cdf6d723c53478 upstream.
+
+USB system wakeup needs its PHY on, so add the GENPD_FLAG_ACTIVE_WAKEUP
+flag to the USB PHY genpd configuration.
+
+Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
+Fixes: 556f5cf9568a ("soc: imx: add i.MX8MP HSIO blk-ctrl")
+Cc: stable@vger.kernel.org
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pmdomain/imx/imx8mp-blk-ctrl.c |    4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
+@@ -53,6 +53,7 @@ struct imx8mp_blk_ctrl_domain_data {
+       const char * const *path_names;
+       int num_paths;
+       const char *gpc_name;
++      const unsigned int flags;
+ };
+ #define DOMAIN_MAX_CLKS 3
+@@ -265,10 +266,12 @@ static const struct imx8mp_blk_ctrl_doma
+       [IMX8MP_HSIOBLK_PD_USB_PHY1] = {
+               .name = "hsioblk-usb-phy1",
+               .gpc_name = "usb-phy1",
++              .flags = GENPD_FLAG_ACTIVE_WAKEUP,
+       },
+       [IMX8MP_HSIOBLK_PD_USB_PHY2] = {
+               .name = "hsioblk-usb-phy2",
+               .gpc_name = "usb-phy2",
++              .flags = GENPD_FLAG_ACTIVE_WAKEUP,
+       },
+       [IMX8MP_HSIOBLK_PD_PCIE] = {
+               .name = "hsioblk-pcie",
+@@ -724,6 +727,7 @@ static int imx8mp_blk_ctrl_probe(struct
+               domain->genpd.name = data->name;
+               domain->genpd.power_on = imx8mp_blk_ctrl_power_on;
+               domain->genpd.power_off = imx8mp_blk_ctrl_power_off;
++              domain->genpd.flags = data->flags;
+               domain->bc = bc;
+               domain->id = i;
diff --git a/queue-6.18/pmdomain-qcom-rpmpd-fix-off-by-one-error-in-clamping-to-the-highest-state.patch b/queue-6.18/pmdomain-qcom-rpmpd-fix-off-by-one-error-in-clamping-to-the-highest-state.patch
new file mode 100644 (file)
index 0000000..f72a0e7
--- /dev/null
@@ -0,0 +1,40 @@
+From 8aa6f7697f5981d336cac7af6ddd182a03c6da01 Mon Sep 17 00:00:00 2001
+From: Gabor Juhos <j4g8y7@gmail.com>
+Date: Thu, 22 Jan 2026 18:20:12 +0100
+Subject: pmdomain: qcom: rpmpd: fix off-by-one error in clamping to the highest state
+
+From: Gabor Juhos <j4g8y7@gmail.com>
+
+commit 8aa6f7697f5981d336cac7af6ddd182a03c6da01 upstream.
+
+As indicated by the comment, the rpmpd_aggregate_corner() function
+tries to clamp the state to the highest corner/level supported by the
+given power domain; however, the calculation of the highest state
+contains an off-by-one error.
+
+The 'max_state' member of the 'rpmpd' structure indicates the highest
+corner/level, and as such it does not need to be decremented.
+
+Change the code to use the 'max_state' value directly to avoid the error.
+
+Fixes: 98c8b3efacae ("soc: qcom: rpmpd: Add sync_state")
+Signed-off-by: Gabor Juhos <j4g8y7@gmail.com>
+Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/pmdomain/qcom/rpmpd.c |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/pmdomain/qcom/rpmpd.c
++++ b/drivers/pmdomain/qcom/rpmpd.c
+@@ -1001,7 +1001,7 @@ static int rpmpd_aggregate_corner(struct
+       /* Clamp to the highest corner/level if sync_state isn't done yet */
+       if (!pd->state_synced)
+-              this_active_corner = this_sleep_corner = pd->max_state - 1;
++              this_active_corner = this_sleep_corner = pd->max_state;
+       else
+               to_active_sleep(pd, pd->corner, &this_active_corner, &this_sleep_corner);
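The off-by-one above comes from treating `max_state` as a count when it already names the highest corner/level itself. A hypothetical reduction of `rpmpd_aggregate_corner()`'s pre-sync_state clamp (simplified to scalars, not the driver's structures):

```c
#include <assert.h>
#include <stdbool.h>

/* Until sync_state is done, vote for the highest supported level so a
 * late consumer cannot be starved; 'max_state' is the level itself. */
static unsigned int aggregated_corner(unsigned int requested,
				      unsigned int max_state,
				      bool state_synced)
{
	if (!state_synced)
		return max_state;	/* was: max_state - 1 */
	return requested;
}
```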
diff --git a/queue-6.18/procfs-avoid-fetching-build-id-while-holding-vma-lock.patch b/queue-6.18/procfs-avoid-fetching-build-id-while-holding-vma-lock.patch
new file mode 100644 (file)
index 0000000..79ed507
--- /dev/null
@@ -0,0 +1,260 @@
+From b5cbacd7f86f4f62b8813688c8e73be94e8e1951 Mon Sep 17 00:00:00 2001
+From: Andrii Nakryiko <andrii@kernel.org>
+Date: Thu, 29 Jan 2026 13:53:40 -0800
+Subject: procfs: avoid fetching build ID while holding VMA lock
+
+From: Andrii Nakryiko <andrii@kernel.org>
+
+commit b5cbacd7f86f4f62b8813688c8e73be94e8e1951 upstream.
+
+Fix PROCMAP_QUERY to fetch optional build ID only after dropping mmap_lock
+or per-VMA lock, whichever was used to lock VMA under question, to avoid
+deadlock reported by syzbot:
+
+ -> #1 (&mm->mmap_lock){++++}-{4:4}:
+        __might_fault+0xed/0x170
+        _copy_to_iter+0x118/0x1720
+        copy_page_to_iter+0x12d/0x1e0
+        filemap_read+0x720/0x10a0
+        blkdev_read_iter+0x2b5/0x4e0
+        vfs_read+0x7f4/0xae0
+        ksys_read+0x12a/0x250
+        do_syscall_64+0xcb/0xf80
+        entry_SYSCALL_64_after_hwframe+0x77/0x7f
+
+ -> #0 (&sb->s_type->i_mutex_key#8){++++}-{4:4}:
+        __lock_acquire+0x1509/0x26d0
+        lock_acquire+0x185/0x340
+        down_read+0x98/0x490
+        blkdev_read_iter+0x2a7/0x4e0
+        __kernel_read+0x39a/0xa90
+        freader_fetch+0x1d5/0xa80
+        __build_id_parse.isra.0+0xea/0x6a0
+        do_procmap_query+0xd75/0x1050
+        procfs_procmap_ioctl+0x7a/0xb0
+        __x64_sys_ioctl+0x18e/0x210
+        do_syscall_64+0xcb/0xf80
+        entry_SYSCALL_64_after_hwframe+0x77/0x7f
+
+ other info that might help us debug this:
+
+  Possible unsafe locking scenario:
+
+        CPU0                    CPU1
+        ----                    ----
+   rlock(&mm->mmap_lock);
+                                lock(&sb->s_type->i_mutex_key#8);
+                                lock(&mm->mmap_lock);
+   rlock(&sb->s_type->i_mutex_key#8);
+
+  *** DEADLOCK ***
+
+This seems to be exacerbated (as we haven't seen these syzbot reports
+before that) by the recent:
+
+       777a8560fd29 ("lib/buildid: use __kernel_read() for sleepable context")
+
+To make this safe, we need to grab a file refcount while the VMA is
+still locked, but other than that everything is pretty straightforward.
+The internal build_id_parse() API assumes a VMA is passed, but it only
+needs the underlying file reference, so just add another variant,
+build_id_parse_file(), that expects the file to be passed directly.
+
+[akpm@linux-foundation.org: fix up kerneldoc]
+Link: https://lkml.kernel.org/r/20260129215340.3742283-1-andrii@kernel.org
+Fixes: ed5d583a88a9 ("fs/procfs: implement efficient VMA querying API for /proc/<pid>/maps")
+Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
+Reported-by: <syzbot+4e70c8e0a2017b432f7a@syzkaller.appspotmail.com>
+Reviewed-by: Suren Baghdasaryan <surenb@google.com>
+Tested-by: Suren Baghdasaryan <surenb@google.com>
+Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
+Cc: Alexei Starovoitov <ast@kernel.org>
+Cc: Daniel Borkmann <daniel@iogearbox.net>
+Cc: Eduard Zingerman <eddyz87@gmail.com>
+Cc: Hao Luo <haoluo@google.com>
+Cc: Jiri Olsa <jolsa@kernel.org>
+Cc: John Fastabend <john.fastabend@gmail.com>
+Cc: KP Singh <kpsingh@kernel.org>
+Cc: Martin KaFai Lau <martin.lau@linux.dev>
+Cc: Song Liu <song@kernel.org>
+Cc: Stanislav Fomichev <sdf@fomichev.me>
+Cc: Yonghong Song <yonghong.song@linux.dev>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/proc/task_mmu.c      |   42 +++++++++++++++++++++++++++---------------
+ include/linux/buildid.h |    3 +++
+ lib/buildid.c           |   42 ++++++++++++++++++++++++++++++------------
+ 3 files changed, 60 insertions(+), 27 deletions(-)
+
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -656,6 +656,7 @@ static int do_procmap_query(struct mm_st
+       struct proc_maps_locking_ctx lock_ctx = { .mm = mm };
+       struct procmap_query karg;
+       struct vm_area_struct *vma;
++      struct file *vm_file = NULL;
+       const char *name = NULL;
+       char build_id_buf[BUILD_ID_SIZE_MAX], *name_buf = NULL;
+       __u64 usize;
+@@ -727,21 +728,6 @@ static int do_procmap_query(struct mm_st
+               karg.inode = 0;
+       }
+-      if (karg.build_id_size) {
+-              __u32 build_id_sz;
+-
+-              err = build_id_parse(vma, build_id_buf, &build_id_sz);
+-              if (err) {
+-                      karg.build_id_size = 0;
+-              } else {
+-                      if (karg.build_id_size < build_id_sz) {
+-                              err = -ENAMETOOLONG;
+-                              goto out;
+-                      }
+-                      karg.build_id_size = build_id_sz;
+-              }
+-      }
+-
+       if (karg.vma_name_size) {
+               size_t name_buf_sz = min_t(size_t, PATH_MAX, karg.vma_name_size);
+               const struct path *path;
+@@ -775,10 +761,34 @@ static int do_procmap_query(struct mm_st
+               karg.vma_name_size = name_sz;
+       }
++      if (karg.build_id_size && vma->vm_file)
++              vm_file = get_file(vma->vm_file);
++
+       /* unlock vma or mmap_lock, and put mm_struct before copying data to user */
+       query_vma_teardown(&lock_ctx);
+       mmput(mm);
++      if (karg.build_id_size) {
++              __u32 build_id_sz;
++
++              if (vm_file)
++                      err = build_id_parse_file(vm_file, build_id_buf, &build_id_sz);
++              else
++                      err = -ENOENT;
++              if (err) {
++                      karg.build_id_size = 0;
++              } else {
++                      if (karg.build_id_size < build_id_sz) {
++                              err = -ENAMETOOLONG;
++                              goto out;
++                      }
++                      karg.build_id_size = build_id_sz;
++              }
++      }
++
++      if (vm_file)
++              fput(vm_file);
++
+       if (karg.vma_name_size && copy_to_user(u64_to_user_ptr(karg.vma_name_addr),
+                                              name, karg.vma_name_size)) {
+               kfree(name_buf);
+@@ -798,6 +808,8 @@ static int do_procmap_query(struct mm_st
+ out:
+       query_vma_teardown(&lock_ctx);
+       mmput(mm);
++      if (vm_file)
++              fput(vm_file);
+       kfree(name_buf);
+       return err;
+ }
+--- a/include/linux/buildid.h
++++ b/include/linux/buildid.h
+@@ -7,7 +7,10 @@
+ #define BUILD_ID_SIZE_MAX 20
+ struct vm_area_struct;
++struct file;
++
+ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size);
++int build_id_parse_file(struct file *file, unsigned char *build_id, __u32 *size);
+ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size);
+ int build_id_parse_buf(const void *buf, unsigned char *build_id, u32 buf_size);
+--- a/lib/buildid.c
++++ b/lib/buildid.c
+@@ -295,7 +295,7 @@ static int get_build_id_64(struct freade
+ /* enough for Elf64_Ehdr, Elf64_Phdr, and all the smaller requests */
+ #define MAX_FREADER_BUF_SZ 64
+-static int __build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
++static int __build_id_parse(struct file *file, unsigned char *build_id,
+                           __u32 *size, bool may_fault)
+ {
+       const Elf32_Ehdr *ehdr;
+@@ -303,11 +303,7 @@ static int __build_id_parse(struct vm_ar
+       char buf[MAX_FREADER_BUF_SZ];
+       int ret;
+-      /* only works for page backed storage  */
+-      if (!vma->vm_file)
+-              return -EINVAL;
+-
+-      freader_init_from_file(&r, buf, sizeof(buf), vma->vm_file, may_fault);
++      freader_init_from_file(&r, buf, sizeof(buf), file, may_fault);
+       /* fetch first 18 bytes of ELF header for checks */
+       ehdr = freader_fetch(&r, 0, offsetofend(Elf32_Ehdr, e_type));
+@@ -335,8 +331,8 @@ out:
+       return ret;
+ }
+-/*
+- * Parse build ID of ELF file mapped to vma
++/**
++ * build_id_parse_nofault() - Parse build ID of ELF file mapped to vma
+  * @vma:      vma object
+  * @build_id: buffer to store build id, at least BUILD_ID_SIZE long
+  * @size:     returns actual build id size in case of success
+@@ -348,11 +344,14 @@ out:
+  */
+ int build_id_parse_nofault(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size)
+ {
+-      return __build_id_parse(vma, build_id, size, false /* !may_fault */);
++      if (!vma->vm_file)
++              return -EINVAL;
++
++      return __build_id_parse(vma->vm_file, build_id, size, false /* !may_fault */);
+ }
+-/*
+- * Parse build ID of ELF file mapped to VMA
++/**
++ * build_id_parse() - Parse build ID of ELF file mapped to VMA
+  * @vma:      vma object
+  * @build_id: buffer to store build id, at least BUILD_ID_SIZE long
+  * @size:     returns actual build id size in case of success
+@@ -364,7 +363,26 @@ int build_id_parse_nofault(struct vm_are
+  */
+ int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id, __u32 *size)
+ {
+-      return __build_id_parse(vma, build_id, size, true /* may_fault */);
++      if (!vma->vm_file)
++              return -EINVAL;
++
++      return __build_id_parse(vma->vm_file, build_id, size, true /* may_fault */);
++}
++
++/**
++ * build_id_parse_file() - Parse build ID of ELF file
++ * @file:      file object
++ * @build_id: buffer to store build id, at least BUILD_ID_SIZE long
++ * @size:     returns actual build id size in case of success
++ *
++ * Assumes faultable context and can cause page faults to bring in file data
++ * into page cache.
++ *
++ * Return: 0 on success; negative error, otherwise
++ */
++int build_id_parse_file(struct file *file, unsigned char *build_id, __u32 *size)
++{
++      return __build_id_parse(file, build_id, size, true /* may_fault */);
+ }
+ /**
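The reordering above pins `vma->vm_file` while the VMA lock is held, drops the lock, and only then runs the sleepable build-ID parse, so `__kernel_read()` can no longer nest under `mmap_lock`. A hypothetical reduction that records the resulting order of operations:

```c
#include <assert.h>

enum step { LOCK, GET_FILE, UNLOCK, PARSE, PUT_FILE, NSTEPS };

/* The fixed do_procmap_query() ordering, as a list of steps. */
static void procmap_query_order(enum step out[NSTEPS])
{
	int n = 0;

	out[n++] = LOCK;	/* query_vma_setup(): mmap_lock or per-VMA lock */
	out[n++] = GET_FILE;	/* get_file(vma->vm_file) while it can't go away */
	out[n++] = UNLOCK;	/* query_vma_teardown() + mmput() */
	out[n++] = PARSE;	/* build_id_parse_file(): may sleep and fault */
	out[n++] = PUT_FILE;	/* fput() */
}

static enum step step_at(int i)
{
	enum step s[NSTEPS];

	procmap_query_order(s);
	return s[i];
}
```

The key invariant is that PARSE happens strictly after UNLOCK, while GET_FILE happens strictly before it.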
diff --git a/queue-6.18/rbd-check-for-eod-after-exclusive-lock-is-ensured-to-be-held.patch b/queue-6.18/rbd-check-for-eod-after-exclusive-lock-is-ensured-to-be-held.patch
new file mode 100644 (file)
index 0000000..4ab5532
--- /dev/null
@@ -0,0 +1,94 @@
+From bd3884a204c3b507e6baa9a4091aa927f9af5404 Mon Sep 17 00:00:00 2001
+From: Ilya Dryomov <idryomov@gmail.com>
+Date: Wed, 7 Jan 2026 22:37:55 +0100
+Subject: rbd: check for EOD after exclusive lock is ensured to be held
+
+From: Ilya Dryomov <idryomov@gmail.com>
+
+commit bd3884a204c3b507e6baa9a4091aa927f9af5404 upstream.
+
+Similar to commit 870611e4877e ("rbd: get snapshot context after
+exclusive lock is ensured to be held"), move the "beyond EOD" check
+into the image request state machine so that it's performed after
+exclusive lock is ensured to be held.  This avoids various race
+conditions which can arise when the image is shrunk under I/O (in
+practice, mostly readahead).  In one such scenario
+
+    rbd_assert(objno < rbd_dev->object_map_size);
+
+can be triggered if a close-to-EOD read gets queued right before the
+shrink is initiated and the EOD check is performed against an outdated
+mapping_size.  After the resize is done on the server side and exclusive
+lock is (re)acquired bringing along the new (now shrunk) object map, the
+read starts going through the state machine and rbd_obj_may_exist() gets
+invoked on an object that is out of bounds of rbd_dev->object_map array.
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
+Reviewed-by: Dongsheng Yang <dongsheng.yang@linux.dev>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/block/rbd.c |   33 +++++++++++++++++++++------------
+ 1 file changed, 21 insertions(+), 12 deletions(-)
+
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -3495,11 +3495,29 @@ static void rbd_img_object_requests(stru
+       rbd_assert(!need_exclusive_lock(img_req) ||
+                  __rbd_is_lock_owner(rbd_dev));
+-      if (rbd_img_is_write(img_req)) {
+-              rbd_assert(!img_req->snapc);
++      if (test_bit(IMG_REQ_CHILD, &img_req->flags)) {
++              rbd_assert(!rbd_img_is_write(img_req));
++      } else {
++              struct request *rq = blk_mq_rq_from_pdu(img_req);
++              u64 off = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
++              u64 len = blk_rq_bytes(rq);
++              u64 mapping_size;
++
+               down_read(&rbd_dev->header_rwsem);
+-              img_req->snapc = ceph_get_snap_context(rbd_dev->header.snapc);
++              mapping_size = rbd_dev->mapping.size;
++              if (rbd_img_is_write(img_req)) {
++                      rbd_assert(!img_req->snapc);
++                      img_req->snapc =
++                          ceph_get_snap_context(rbd_dev->header.snapc);
++              }
+               up_read(&rbd_dev->header_rwsem);
++
++              if (unlikely(off + len > mapping_size)) {
++                      rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)",
++                               off, len, mapping_size);
++                      img_req->pending.result = -EIO;
++                      return;
++              }
+       }
+       for_each_obj_request(img_req, obj_req) {
+@@ -4725,7 +4743,6 @@ static void rbd_queue_workfn(struct work
+       struct request *rq = blk_mq_rq_from_pdu(img_request);
+       u64 offset = (u64)blk_rq_pos(rq) << SECTOR_SHIFT;
+       u64 length = blk_rq_bytes(rq);
+-      u64 mapping_size;
+       int result;
+       /* Ignore/skip any zero-length requests */
+@@ -4738,17 +4755,9 @@ static void rbd_queue_workfn(struct work
+       blk_mq_start_request(rq);
+       down_read(&rbd_dev->header_rwsem);
+-      mapping_size = rbd_dev->mapping.size;
+       rbd_img_capture_header(img_request);
+       up_read(&rbd_dev->header_rwsem);
+-      if (offset + length > mapping_size) {
+-              rbd_warn(rbd_dev, "beyond EOD (%llu~%llu > %llu)", offset,
+-                       length, mapping_size);
+-              result = -EIO;
+-              goto err_img_request;
+-      }
+-
+       dout("%s rbd_dev %p img_req %p %s %llu~%llu\n", __func__, rbd_dev,
+            img_request, obj_op_name(op_type), offset, length);
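The relocated "beyond EOD" check itself is a simple bounds comparison; what the patch changes is *when* it runs (after exclusive lock is held, so `mapping.size` is current). A minimal sketch, with an illustrative struct in place of the real `rbd_device`:

```c
#include <assert.h>
#include <stdint.h>

#define EIO 5

/* Illustrative stand-in for the fields of struct rbd_device that the
 * check consults. */
struct rbd_dev_sketch { uint64_t mapping_size; };

/* Fail a request with -EIO if it extends past end-of-device.  In the
 * kernel the mapping_size read happens under header_rwsem, inside the
 * image request state machine, after exclusive lock is ensured to be
 * held -- so it cannot race with a concurrent shrink. */
static int rbd_check_eod(const struct rbd_dev_sketch *dev,
                         uint64_t off, uint64_t len)
{
    uint64_t mapping_size = dev->mapping_size;

    if (off + len > mapping_size)
        return -EIO;
    return 0;
}
```

Before the patch, this comparison ran in `rbd_queue_workfn()` against a `mapping_size` that could become stale by the time the object map was consulted, which is how the `objno < rbd_dev->object_map_size` assertion could fire.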
diff --git a/queue-6.18/revert-drm-amd-check-if-aspm-is-enabled-from-pcie-subsystem.patch b/queue-6.18/revert-drm-amd-check-if-aspm-is-enabled-from-pcie-subsystem.patch
new file mode 100644 (file)
index 0000000..c4a751c
--- /dev/null
@@ -0,0 +1,46 @@
+From 243b467dea1735fed904c2e54d248a46fa417a2d Mon Sep 17 00:00:00 2001
+From: Bert Karwatzki <spasswolf@web.de>
+Date: Sun, 1 Feb 2026 01:24:45 +0100
+Subject: Revert "drm/amd: Check if ASPM is enabled from PCIe subsystem"
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Bert Karwatzki <spasswolf@web.de>
+
+commit 243b467dea1735fed904c2e54d248a46fa417a2d upstream.
+
+This reverts commit 7294863a6f01248d72b61d38478978d638641bee.
+
+This commit was erroneously applied again after commit 0ab5d711ec74
+("drm/amd: Refactor `amdgpu_aspm` to be evaluated per device")
+removed it, leading to very hard to debug crashes on systems with two
+AMD GPUs of which only one supports ASPM.
+
+Link: https://lore.kernel.org/linux-acpi/20251006120944.7880-1-spasswolf@web.de/
+Link: https://github.com/acpica/acpica/issues/1060
+Fixes: 0ab5d711ec74 ("drm/amd: Refactor `amdgpu_aspm` to be evaluated per device")
+Signed-off-by: Bert Karwatzki <spasswolf@web.de>
+Reviewed-by: Christian König <christian.koenig@amd.com>
+Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
+Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+(cherry picked from commit 97a9689300eb2b393ba5efc17c8e5db835917080)
+Cc: stable@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c |    3 ---
+ 1 file changed, 3 deletions(-)
+
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2334,9 +2334,6 @@ static int amdgpu_pci_probe(struct pci_d
+                       return -ENODEV;
+       }
+-      if (amdgpu_aspm == -1 && !pcie_aspm_enabled(pdev))
+-              amdgpu_aspm = 0;
+-
+       if (amdgpu_virtual_display ||
+           amdgpu_device_asic_has_dc_support(pdev, flags & AMD_ASIC_MASK))
+               supports_atomic = true;
index 6bd96d575e012295f4f80b8863d2026b37fdd254..5e70164a363d5648f5f55500ef07b1b40747da77 100644 (file)
@@ -1 +1,31 @@
 nvmet-tcp-add-bounds-checks-in-nvmet_tcp_build_pdu_iovec.patch
+x86-vmware-fix-hypercall-clobbers.patch
+x86-kfence-fix-booting-on-32bit-non-pae-systems.patch
+kvm-x86-explicitly-configure-supported-xss-from-svm-vmx-_set_cpu_caps.patch
+platform-x86-intel_telemetry-fix-swapped-arrays-in-pss-output.patch
+alsa-aloop-fix-racy-access-at-pcm-trigger.patch
+pmdomain-qcom-rpmpd-fix-off-by-one-error-in-clamping-to-the-highest-state.patch
+pmdomain-imx8mp-blk-ctrl-keep-gpc-power-domain-on-for-system-wakeup.patch
+pmdomain-imx-gpcv2-fix-the-imx8mm-gpu-hang-due-to-wrong-adb400-reset.patch
+pmdomain-imx8mp-blk-ctrl-keep-usb-phy-power-domain-on-for-system-wakeup.patch
+pmdomain-imx8m-blk-ctrl-fix-out-of-range-access-of-bc-domains.patch
+procfs-avoid-fetching-build-id-while-holding-vma-lock.patch
+mm-slab-add-alloc_tagging_slab_free_hook-for-memcg_alloc_abort_single.patch
+ceph-fix-null-pointer-dereference-in-ceph_mds_auth_match.patch
+rbd-check-for-eod-after-exclusive-lock-is-ensured-to-be-held.patch
+arm-9468-1-fix-memset64-on-big-endian.patch
+ceph-fix-oops-due-to-invalid-pointer-for-kfree-in-parse_longname.patch
+cgroup-dmem-fix-null-pointer-dereference-when-setting-max.patch
+cgroup-dmem-avoid-rcu-warning-when-unregister-region.patch
+cgroup-dmem-avoid-pool-uaf.patch
+drm-amd-set-minimum-version-for-set_hw_resource_1-on-gfx11-to-0x52.patch
+gve-fix-stats-report-corruption-on-queue-count-change.patch
+gve-correct-ethtool-rx_dropped-calculation.patch
+mm-shmem-prevent-infinite-loop-on-truncate-race.patch
+revert-drm-amd-check-if-aspm-is-enabled-from-pcie-subsystem.patch
+nouveau-add-a-third-state-to-the-fini-handler.patch
+nouveau-gsp-use-rpc-sequence-numbers-properly.patch
+nouveau-gsp-fix-suspend-resume-regression-on-r570-firmware.patch
+net-cpsw-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch
+net-cpsw_new-execute-ndo_set_rx_mode-callback-in-a-work-queue.patch
+net-spacemit-k1-emac-fix-jumbo-frame-support.patch
diff --git a/queue-6.18/x86-kfence-fix-booting-on-32bit-non-pae-systems.patch b/queue-6.18/x86-kfence-fix-booting-on-32bit-non-pae-systems.patch
new file mode 100644 (file)
index 0000000..a4cebf6
--- /dev/null
@@ -0,0 +1,68 @@
+From 16459fe7e0ca6520a6e8f603de4ccd52b90fd765 Mon Sep 17 00:00:00 2001
+From: Andrew Cooper <andrew.cooper3@citrix.com>
+Date: Mon, 26 Jan 2026 21:10:46 +0000
+Subject: x86/kfence: fix booting on 32bit non-PAE systems
+
+From: Andrew Cooper <andrew.cooper3@citrix.com>
+
+commit 16459fe7e0ca6520a6e8f603de4ccd52b90fd765 upstream.
+
+The original patch inverted the PTE unconditionally to avoid
+L1TF-vulnerable PTEs, but Linux doesn't make this adjustment in 2-level
+paging.
+
+Adjust the logic to use the flip_protnone_guard() helper, which is a nop
+on 2-level paging but inverts the address bits in all other paging modes.
+
+This doesn't matter for the Xen aspect of the original change.  Linux no
+longer supports running 32bit PV under Xen, and Xen doesn't support
+running any 32bit PV guests without using PAE paging.
+
+Link: https://lkml.kernel.org/r/20260126211046.2096622-1-andrew.cooper3@citrix.com
+Fixes: b505f1944535 ("x86/kfence: avoid writing L1TF-vulnerable PTEs")
+Reported-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
+Closes: https://lore.kernel.org/lkml/CAKFNMokwjw68ubYQM9WkzOuH51wLznHpEOMSqtMoV1Rn9JV_gw@mail.gmail.com/
+Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
+Tested-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
+Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
+Cc: Alexander Potapenko <glider@google.com>
+Cc: Marco Elver <elver@google.com>
+Cc: Dmitry Vyukov <dvyukov@google.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Ingo Molnar <mingo@redhat.com>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: Jann Horn <jannh@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/kfence.h |    7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/include/asm/kfence.h
++++ b/arch/x86/include/asm/kfence.h
+@@ -42,7 +42,7 @@ static inline bool kfence_protect_page(u
+ {
+       unsigned int level;
+       pte_t *pte = lookup_address(addr, &level);
+-      pteval_t val;
++      pteval_t val, new;
+       if (WARN_ON(!pte || level != PG_LEVEL_4K))
+               return false;
+@@ -57,11 +57,12 @@ static inline bool kfence_protect_page(u
+               return true;
+       /*
+-       * Otherwise, invert the entire PTE.  This avoids writing out an
++       * Otherwise, flip the Present bit, taking care to avoid writing an
+        * L1TF-vulnerable PTE (not present, without the high address bits
+        * set).
+        */
+-      set_pte(pte, __pte(~val));
++      new = val ^ _PAGE_PRESENT;
++      set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
+       /*
+        * If the page was protected (non-present) and we're making it
diff --git a/queue-6.18/x86-vmware-fix-hypercall-clobbers.patch b/queue-6.18/x86-vmware-fix-hypercall-clobbers.patch
new file mode 100644 (file)
index 0000000..66dcbe0
--- /dev/null
@@ -0,0 +1,75 @@
+From 2687c848e57820651b9f69d30c4710f4219f7dbf Mon Sep 17 00:00:00 2001
+From: Josh Poimboeuf <jpoimboe@kernel.org>
+Date: Fri, 6 Feb 2026 14:24:55 -0800
+Subject: x86/vmware: Fix hypercall clobbers
+
+From: Josh Poimboeuf <jpoimboe@kernel.org>
+
+commit 2687c848e57820651b9f69d30c4710f4219f7dbf upstream.
+
+Fedora QA reported the following panic:
+
+  BUG: unable to handle page fault for address: 0000000040003e54
+  #PF: supervisor write access in kernel mode
+  #PF: error_code(0x0002) - not-present page
+  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS edk2-20251119-3.fc43 11/19/2025
+  RIP: 0010:vmware_hypercall4.constprop.0+0x52/0x90
+  ..
+  Call Trace:
+   vmmouse_report_events+0x13e/0x1b0
+   psmouse_handle_byte+0x15/0x60
+   ps2_interrupt+0x8a/0xd0
+   ...
+
+because the QEMU VMware mouse emulation is buggy, and clears the top 32
+bits of %rdi that the kernel kept a pointer in.
+
+The QEMU vmmouse driver saves and restores the register state in a
+"uint32_t data[6];" and as a result restores the state with the high
+bits all cleared.
+
+RDI originally contained the value of a valid kernel stack address
+(0xff5eeb3240003e54).  After the vmware hypercall it now contains
+0x40003e54, and we get a page fault as a result when it is dereferenced.
+
+The proper fix would be in QEMU, but this works around the issue in the
+kernel to keep old setups working: old kernels simply happened not to
+keep any state in %rdi across the hypercall.
+
+In theory this same issue exists for all the hypercalls in the vmmouse
+driver; in practice it has only been seen with vmware_hypercall3() and
+vmware_hypercall4().  For now, just mark RDI/RSI as clobbered for those
+two calls.  This should have a minimal effect on code generation overall
+as it should be rare for the compiler to want to make RDI/RSI live
+across hypercalls.
+
+Reported-by: Justin Forbes <jforbes@fedoraproject.org>
+Link: https://lore.kernel.org/all/99a9c69a-fc1a-43b7-8d1e-c42d6493b41f@broadcom.com/
+Signed-off-by: Josh Poimboeuf <jpoimboe@kernel.org>
+Cc: stable@kernel.org
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/vmware.h |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/include/asm/vmware.h
++++ b/arch/x86/include/asm/vmware.h
+@@ -140,7 +140,7 @@ unsigned long vmware_hypercall3(unsigned
+                 "b" (in1),
+                 "c" (cmd),
+                 "d" (0)
+-              : "cc", "memory");
++              : "di", "si", "cc", "memory");
+       return out0;
+ }
+@@ -165,7 +165,7 @@ unsigned long vmware_hypercall4(unsigned
+                 "b" (in1),
+                 "c" (cmd),
+                 "d" (0)
+-              : "cc", "memory");
++              : "di", "si", "cc", "memory");
+       return out0;
+ }