--- /dev/null
+From stable+bounces-241680-greg=kroah.com@vger.kernel.org Tue Apr 28 17:06:22 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 28 Apr 2026 10:32:38 -0400
+Subject: arm64: mm: Fix rodata=full block mapping support for realm guests
+To: stable@vger.kernel.org
+Cc: Ryan Roberts <ryan.roberts@arm.com>, Jinjiang Tu <tujinjiang@huawei.com>, Kevin Brodsky <kevin.brodsky@arm.com>, Suzuki K Poulose <suzuki.poulose@arm.com>, Catalin Marinas <catalin.marinas@arm.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260428143238.2960283-2-sashal@kernel.org>
+
+From: Ryan Roberts <ryan.roberts@arm.com>
+
+[ Upstream commit f12b435de2f2bb09ce406467020181ada528844c ]
+
+Commit a166563e7ec37 ("arm64: mm: support large block mapping when
+rodata=full") enabled the linear map to be mapped by block/cont while
+still allowing granular permission changes on BBML2_NOABORT systems by
+lazily splitting the live mappings. This mechanism was intended to be
+usable by realm guests since they need to dynamically share dma buffers
+with the host by "decrypting" them - which for Arm CCA, means marking
+them as shared in the page tables.
+
+However, it turns out that the mechanism was failing for realm guests
+because realms need to share their dma buffers (via
+__set_memory_enc_dec()) much earlier during boot than
+split_kernel_leaf_mapping() was able to handle. The report linked below
+showed that GIC's ITS was one such user. But during the investigation I
+found other callsites that could not meet the
+split_kernel_leaf_mapping() constraints.
+
+The problem is that we block map the linear map based on the boot CPU
+supporting BBML2_NOABORT, then check that all the other CPUs support it
+too when finalizing the caps. If they don't, then we stop_machine() and
+split to ptes. For safety, split_kernel_leaf_mapping() previously
+wouldn't permit splitting until after the caps were finalized. That
+ensured that if any secondary cpus were running that didn't support
+BBML2_NOABORT, we wouldn't risk breaking them.
+
+I've fixed this problem by reducing the black-out window where we refuse
+to split; there are now two windows. The first is from T0 until the page
+allocator is initialized. Splitting allocates memory from the page
+allocator, so it must be available. The second covers the period between
+starting to online the secondary cpus until the system caps are
+finalized (this is a very small window).
+
+All of the problematic callers are calling __set_memory_enc_dec() before
+the secondary cpus come online, so this solves the problem. However, one
+of these callers, swiotlb_update_mem_attributes(), was trying to split
+before the page allocator was initialized. So I have moved this call
+from arch_mm_preinit() to mem_init(), which solves the ordering issue.
+
+I've added warnings, and an error is returned if any attempt is made to
+split in the black-out windows.
+
+Note there are other issues which prevent booting all the way to user
+space, which will be fixed in subsequent patches.
+
+Reported-by: Jinjiang Tu <tujinjiang@huawei.com>
+Closes: https://lore.kernel.org/all/0b2a4ae5-fc51-4d77-b177-b2e9db74f11d@huawei.com/
+Fixes: a166563e7ec3 ("arm64: mm: support large block mapping when rodata=full")
+Cc: stable@vger.kernel.org
+Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
+Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
+Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
+Tested-by: Suzuki K Poulose <suzuki.poulose@arm.com>
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+[ adjusted context to use `__ASSEMBLY__` instead of `__ASSEMBLER__` ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/include/asm/mmu.h | 2 +
+ arch/arm64/mm/init.c | 9 +++++++-
+ arch/arm64/mm/mmu.c | 45 ++++++++++++++++++++++++++++++-------------
+ 3 files changed, 42 insertions(+), 14 deletions(-)
+
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -112,5 +112,7 @@ void kpti_install_ng_mappings(void);
+ static inline void kpti_install_ng_mappings(void) {}
+ #endif
+
++extern bool page_alloc_available;
++
+ #endif /* !__ASSEMBLY__ */
+ #endif
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -357,7 +357,6 @@ void __init arch_mm_preinit(void)
+ }
+
+ swiotlb_init(swiotlb, flags);
+- swiotlb_update_mem_attributes();
+
+ /*
+ * Check boundaries twice: Some fundamental inconsistencies can be
+@@ -384,6 +383,14 @@ void __init arch_mm_preinit(void)
+ }
+ }
+
++bool page_alloc_available __ro_after_init;
++
++void __init mem_init(void)
++{
++ page_alloc_available = true;
++ swiotlb_update_mem_attributes();
++}
++
+ void free_initmem(void)
+ {
+ void *lm_init_begin = lm_alias(__init_begin);
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -774,30 +774,51 @@ static inline bool force_pte_mapping(voi
+ }
+
+ static DEFINE_MUTEX(pgtable_split_lock);
++static bool linear_map_requires_bbml2;
+
+ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
+ {
+ int ret;
+
+ /*
+- * !BBML2_NOABORT systems should not be trying to change permissions on
+- * anything that is not pte-mapped in the first place. Just return early
+- * and let the permission change code raise a warning if not already
+- * pte-mapped.
+- */
+- if (!system_supports_bbml2_noabort())
+- return 0;
+-
+- /*
+ * If the region is within a pte-mapped area, there is no need to try to
+ * split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may
+ * change permissions from atomic context so for those cases (which are
+ * always pte-mapped), we must not go any further because taking the
+- * mutex below may sleep.
++ * mutex below may sleep. Do not call force_pte_mapping() here because
++ * it could return a confusing result if called from a secondary cpu
++ * prior to finalizing caps. Instead, linear_map_requires_bbml2 gives us
++ * what we need.
+ */
+- if (force_pte_mapping() || is_kfence_address((void *)start))
++ if (!linear_map_requires_bbml2 || is_kfence_address((void *)start))
+ return 0;
+
++ if (!system_supports_bbml2_noabort()) {
++ /*
++ * !BBML2_NOABORT systems should not be trying to change
++ * permissions on anything that is not pte-mapped in the first
++ * place. Just return early and let the permission change code
++ * raise a warning if not already pte-mapped.
++ */
++ if (system_capabilities_finalized())
++ return 0;
++
++ /*
++ * Boot-time: split_kernel_leaf_mapping_locked() allocates from
++ * page allocator. Can't split until it's available.
++ */
++ if (WARN_ON(!page_alloc_available))
++ return -EBUSY;
++
++ /*
++ * Boot-time: Started secondary cpus but don't know if they
++ * support BBML2_NOABORT yet. Can't allow splitting in this
++ * window in case they don't.
++ */
++ if (WARN_ON(num_online_cpus() > 1))
++ return -EBUSY;
++ }
++
+ /*
+ * Ensure start and end are at least page-aligned since this is the
+ * finest granularity we can split to.
+@@ -897,8 +918,6 @@ static int range_split_to_ptes(unsigned
+ return ret;
+ }
+
+-static bool linear_map_requires_bbml2 __initdata;
+-
+ u32 idmap_kpti_bbml2_flag;
+
+ static void __init init_idmap_kpti_bbml2_flag(void)
--- /dev/null
+From stable+bounces-241679-greg=kroah.com@vger.kernel.org Tue Apr 28 16:37:06 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 28 Apr 2026 10:32:37 -0400
+Subject: arm64: mm: Simplify check in arch_kfence_init_pool()
+To: stable@vger.kernel.org
+Cc: Kevin Brodsky <kevin.brodsky@arm.com>, Ryan Roberts <ryan.roberts@arm.com>, Catalin Marinas <catalin.marinas@arm.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260428143238.2960283-1-sashal@kernel.org>
+
+From: Kevin Brodsky <kevin.brodsky@arm.com>
+
+[ Upstream commit b7737c38e7cb611c2fbd87af3b09afeb92c96fe7 ]
+
+TL;DR: checking force_pte_mapping() in arch_kfence_init_pool() is
+sufficient
+
+Commit ce2b3a50ad92 ("arm64: mm: Don't sleep in
+split_kernel_leaf_mapping() when in atomic context") recently added
+an arm64 implementation of arch_kfence_init_pool() to ensure that
+the KFENCE pool is PTE-mapped. Assuming that the pool was not
+initialised early, block splitting is necessary if the linear
+mapping is not fully PTE-mapped, in other words if
+force_pte_mapping() is false.
+
+arch_kfence_init_pool() currently makes another check: whether
+BBML2-noabort is supported, i.e. whether we are *able* to split
+block mappings. This check is however unnecessary, because
+force_pte_mapping() is always true if KFENCE is enabled and
+BBML2-noabort is not supported. This must be the case by design,
+since KFENCE requires PTE-mapped pages in all cases. We can
+therefore remove that check.
+
+The situation is different in split_kernel_leaf_mapping(), as that
+function is called unconditionally regardless of the configuration.
+If BBML2-noabort is not supported, it cannot do anything and bails
+out. If force_pte_mapping() is true, there is nothing to do and it
+also bails out, but these are independent checks.
+
+Commit 53357f14f924 ("arm64: mm: Tidy up force_pte_mapping()")
+grouped these checks into a helper, split_leaf_mapping_possible().
+This isn't so helpful as only split_kernel_leaf_mapping() should
+check both. Revert the parts of that commit that introduced the
+helper, reintroducing the more accurate comments in
+split_kernel_leaf_mapping().
+
+Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
+Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Stable-dep-of: f12b435de2f2 ("arm64: mm: Fix rodata=full block mapping support for realm guests")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/mm/mmu.c | 33 ++++++++++++++++-----------------
+ 1 file changed, 16 insertions(+), 17 deletions(-)
+
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -773,18 +773,6 @@ static inline bool force_pte_mapping(voi
+ return rodata_full || arm64_kfence_can_set_direct_map() || is_realm_world();
+ }
+
+-static inline bool split_leaf_mapping_possible(void)
+-{
+- /*
+- * !BBML2_NOABORT systems should never run into scenarios where we would
+- * have to split. So exit early and let calling code detect it and raise
+- * a warning.
+- */
+- if (!system_supports_bbml2_noabort())
+- return false;
+- return !force_pte_mapping();
+-}
+-
+ static DEFINE_MUTEX(pgtable_split_lock);
+
+ int split_kernel_leaf_mapping(unsigned long start, unsigned long end)
+@@ -792,11 +780,22 @@ int split_kernel_leaf_mapping(unsigned l
+ int ret;
+
+ /*
+- * Exit early if the region is within a pte-mapped area or if we can't
+- * split. For the latter case, the permission change code will raise a
+- * warning if not already pte-mapped.
++ * !BBML2_NOABORT systems should not be trying to change permissions on
++ * anything that is not pte-mapped in the first place. Just return early
++ * and let the permission change code raise a warning if not already
++ * pte-mapped.
++ */
++ if (!system_supports_bbml2_noabort())
++ return 0;
++
++ /*
++ * If the region is within a pte-mapped area, there is no need to try to
++ * split. Additionally, CONFIG_DEBUG_PAGEALLOC and CONFIG_KFENCE may
++ * change permissions from atomic context so for those cases (which are
++ * always pte-mapped), we must not go any further because taking the
++ * mutex below may sleep.
+ */
+- if (!split_leaf_mapping_possible() || is_kfence_address((void *)start))
++ if (force_pte_mapping() || is_kfence_address((void *)start))
+ return 0;
+
+ /*
+@@ -1095,7 +1094,7 @@ bool arch_kfence_init_pool(void)
+ int ret;
+
+ /* Exit early if we know the linear map is already pte-mapped. */
+- if (!split_leaf_mapping_possible())
++ if (force_pte_mapping())
+ return true;
+
+ /* Kfence pool is already pte-mapped for the early init case. */
--- /dev/null
+From stable+bounces-242638-greg=kroah.com@vger.kernel.org Sun May 3 06:25:15 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 3 May 2026 00:24:54 -0400
+Subject: iio: frequency: admv1013: add dev variable
+To: stable@vger.kernel.org
+Cc: Antoniu Miclaus <antoniu.miclaus@analog.com>, Andy Shevchenko <andriy.shevchenko@intel.com>, Jonathan Cameron <Jonathan.Cameron@huawei.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260503042456.979738-1-sashal@kernel.org>
+
+From: Antoniu Miclaus <antoniu.miclaus@analog.com>
+
+[ Upstream commit e61b5bb0e91390adee41eaddc0a1a7d55d5652b2 ]
+
+Introduce a local struct device pointer in functions that reference
+&spi->dev for device-managed resource calls and device property reads,
+improving code readability.
+
+Signed-off-by: Antoniu Miclaus <antoniu.miclaus@analog.com>
+Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Stable-dep-of: aac0a51b1670 ("iio: frequency: admv1013: fix NULL pointer dereference on str")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/frequency/admv1013.c | 29 +++++++++++++++--------------
+ 1 file changed, 15 insertions(+), 14 deletions(-)
+
+--- a/drivers/iio/frequency/admv1013.c
++++ b/drivers/iio/frequency/admv1013.c
+@@ -518,11 +518,11 @@ static int admv1013_properties_parse(str
+ {
+ int ret;
+ const char *str;
+- struct spi_device *spi = st->spi;
++ struct device *dev = &st->spi->dev;
+
+- st->det_en = device_property_read_bool(&spi->dev, "adi,detector-enable");
++ st->det_en = device_property_read_bool(dev, "adi,detector-enable");
+
+- ret = device_property_read_string(&spi->dev, "adi,input-mode", &str);
++ ret = device_property_read_string(dev, "adi,input-mode", &str);
+ if (ret)
+ st->input_mode = ADMV1013_IQ_MODE;
+
+@@ -533,7 +533,7 @@ static int admv1013_properties_parse(str
+ else
+ return -EINVAL;
+
+- ret = device_property_read_string(&spi->dev, "adi,quad-se-mode", &str);
++ ret = device_property_read_string(dev, "adi,quad-se-mode", &str);
+ if (ret)
+ st->quad_se_mode = ADMV1013_SE_MODE_DIFF;
+
+@@ -546,11 +546,11 @@ static int admv1013_properties_parse(str
+ else
+ return -EINVAL;
+
+- ret = devm_regulator_bulk_get_enable(&st->spi->dev,
++ ret = devm_regulator_bulk_get_enable(dev,
+ ARRAY_SIZE(admv1013_vcc_regs),
+ admv1013_vcc_regs);
+ if (ret) {
+- dev_err_probe(&spi->dev, ret,
++ dev_err_probe(dev, ret,
+ "Failed to request VCC regulators\n");
+ return ret;
+ }
+@@ -562,9 +562,10 @@ static int admv1013_probe(struct spi_dev
+ {
+ struct iio_dev *indio_dev;
+ struct admv1013_state *st;
++ struct device *dev = &spi->dev;
+ int ret, vcm_uv;
+
+- indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
++ indio_dev = devm_iio_device_alloc(dev, sizeof(*st));
+ if (!indio_dev)
+ return -ENOMEM;
+
+@@ -581,20 +582,20 @@ static int admv1013_probe(struct spi_dev
+ if (ret)
+ return ret;
+
+- ret = devm_regulator_get_enable_read_voltage(&spi->dev, "vcm");
++ ret = devm_regulator_get_enable_read_voltage(dev, "vcm");
+ if (ret < 0)
+- return dev_err_probe(&spi->dev, ret,
++ return dev_err_probe(dev, ret,
+ "failed to get the common-mode voltage\n");
+
+ vcm_uv = ret;
+
+- st->clkin = devm_clk_get_enabled(&spi->dev, "lo_in");
++ st->clkin = devm_clk_get_enabled(dev, "lo_in");
+ if (IS_ERR(st->clkin))
+- return dev_err_probe(&spi->dev, PTR_ERR(st->clkin),
++ return dev_err_probe(dev, PTR_ERR(st->clkin),
+ "failed to get the LO input clock\n");
+
+ st->nb.notifier_call = admv1013_freq_change;
+- ret = devm_clk_notifier_register(&spi->dev, st->clkin, &st->nb);
++ ret = devm_clk_notifier_register(dev, st->clkin, &st->nb);
+ if (ret)
+ return ret;
+
+@@ -606,11 +607,11 @@ static int admv1013_probe(struct spi_dev
+ return ret;
+ }
+
+- ret = devm_add_action_or_reset(&spi->dev, admv1013_powerdown, st);
++ ret = devm_add_action_or_reset(dev, admv1013_powerdown, st);
+ if (ret)
+ return ret;
+
+- return devm_iio_device_register(&spi->dev, indio_dev);
++ return devm_iio_device_register(dev, indio_dev);
+ }
+
+ static const struct spi_device_id admv1013_id[] = {
--- /dev/null
+From stable+bounces-242639-greg=kroah.com@vger.kernel.org Sun May 3 06:25:22 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 3 May 2026 00:24:55 -0400
+Subject: iio: frequency: admv1013: fix NULL pointer dereference on str
+To: stable@vger.kernel.org
+Cc: "Antoniu Miclaus" <antoniu.miclaus@analog.com>, "Nuno Sá" <nuno.sa@analog.com>, "Andy Shevchenko" <andriy.shevchenko@intel.com>, Stable@vger.kernel.org, "Jonathan Cameron" <Jonathan.Cameron@huawei.com>, "Sasha Levin" <sashal@kernel.org>
+Message-ID: <20260503042456.979738-2-sashal@kernel.org>
+
+From: Antoniu Miclaus <antoniu.miclaus@analog.com>
+
+[ Upstream commit aac0a51b16700b403a55b67ba495de021db78763 ]
+
+When device_property_read_string() fails, str is left uninitialized
+but the code falls through to strcmp(str, ...), dereferencing a garbage
+pointer. Replace manual read/strcmp with
+device_property_match_property_string() and consolidate the SE mode
+enums into a single sequential enum, mapping to hardware register
+values via a switch consistent with other bitfields in the driver.
+
+Several cleanup patches have been applied to this driver recently so
+this will need a manual backport.
+
+Fixes: da35a7b526d9 ("iio: frequency: admv1013: add support for ADMV1013")
+Reviewed-by: Nuno Sá <nuno.sa@analog.com>
+Signed-off-by: Antoniu Miclaus <antoniu.miclaus@analog.com>
+Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com>
+Cc: <Stable@vger.kernel.org>
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iio/frequency/admv1013.c | 67 ++++++++++++++++++++++-----------------
+ 1 file changed, 38 insertions(+), 29 deletions(-)
+
+--- a/drivers/iio/frequency/admv1013.c
++++ b/drivers/iio/frequency/admv1013.c
+@@ -85,9 +85,9 @@ enum {
+ };
+
+ enum {
+- ADMV1013_SE_MODE_POS = 6,
+- ADMV1013_SE_MODE_NEG = 9,
+- ADMV1013_SE_MODE_DIFF = 12
++ ADMV1013_SE_MODE_POS,
++ ADMV1013_SE_MODE_NEG,
++ ADMV1013_SE_MODE_DIFF,
+ };
+
+ struct admv1013_state {
+@@ -470,10 +470,23 @@ static int admv1013_init(struct admv1013
+ if (ret)
+ return ret;
+
+- data = FIELD_PREP(ADMV1013_QUAD_SE_MODE_MSK, st->quad_se_mode);
++ switch (st->quad_se_mode) {
++ case ADMV1013_SE_MODE_POS:
++ data = 6;
++ break;
++ case ADMV1013_SE_MODE_NEG:
++ data = 9;
++ break;
++ case ADMV1013_SE_MODE_DIFF:
++ data = 12;
++ break;
++ default:
++ return -EINVAL;
++ }
+
+ ret = __admv1013_spi_update_bits(st, ADMV1013_REG_QUAD,
+- ADMV1013_QUAD_SE_MODE_MSK, data);
++ ADMV1013_QUAD_SE_MODE_MSK,
++ FIELD_PREP(ADMV1013_QUAD_SE_MODE_MSK, data));
+ if (ret)
+ return ret;
+
+@@ -514,37 +527,33 @@ static void admv1013_powerdown(void *dat
+ admv1013_spi_update_bits(data, ADMV1013_REG_ENABLE, enable_reg_msk, enable_reg);
+ }
+
++static const char * const admv1013_input_modes[] = {
++ [ADMV1013_IQ_MODE] = "iq",
++ [ADMV1013_IF_MODE] = "if",
++};
++
++static const char * const admv1013_quad_se_modes[] = {
++ [ADMV1013_SE_MODE_POS] = "se-pos",
++ [ADMV1013_SE_MODE_NEG] = "se-neg",
++ [ADMV1013_SE_MODE_DIFF] = "diff",
++};
++
+ static int admv1013_properties_parse(struct admv1013_state *st)
+ {
+ int ret;
+- const char *str;
+ struct device *dev = &st->spi->dev;
+
+ st->det_en = device_property_read_bool(dev, "adi,detector-enable");
+
+- ret = device_property_read_string(dev, "adi,input-mode", &str);
+- if (ret)
+- st->input_mode = ADMV1013_IQ_MODE;
+-
+- if (!strcmp(str, "iq"))
+- st->input_mode = ADMV1013_IQ_MODE;
+- else if (!strcmp(str, "if"))
+- st->input_mode = ADMV1013_IF_MODE;
+- else
+- return -EINVAL;
+-
+- ret = device_property_read_string(dev, "adi,quad-se-mode", &str);
+- if (ret)
+- st->quad_se_mode = ADMV1013_SE_MODE_DIFF;
+-
+- if (!strcmp(str, "diff"))
+- st->quad_se_mode = ADMV1013_SE_MODE_DIFF;
+- else if (!strcmp(str, "se-pos"))
+- st->quad_se_mode = ADMV1013_SE_MODE_POS;
+- else if (!strcmp(str, "se-neg"))
+- st->quad_se_mode = ADMV1013_SE_MODE_NEG;
+- else
+- return -EINVAL;
++ ret = device_property_match_property_string(dev, "adi,input-mode",
++ admv1013_input_modes,
++ ARRAY_SIZE(admv1013_input_modes));
++ st->input_mode = ret >= 0 ? ret : ADMV1013_IQ_MODE;
++
++ ret = device_property_match_property_string(dev, "adi,quad-se-mode",
++ admv1013_quad_se_modes,
++ ARRAY_SIZE(admv1013_quad_se_modes));
++ st->quad_se_mode = ret >= 0 ? ret : ADMV1013_SE_MODE_DIFF;
+
+ ret = devm_regulator_bulk_get_enable(dev,
+ ARRAY_SIZE(admv1013_vcc_regs),
--- /dev/null
+From stable+bounces-241678-greg=kroah.com@vger.kernel.org Tue Apr 28 16:36:08 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 28 Apr 2026 10:31:11 -0400
+Subject: lib: test_hmm: evict device pages on file close to avoid use-after-free
+To: stable@vger.kernel.org
+Cc: Alistair Popple <apopple@nvidia.com>, Zenghui Yu <zenghui.yu@linux.dev>, Balbir Singh <balbirs@nvidia.com>, David Hildenbrand <david@kernel.org>, Jason Gunthorpe <jgg@ziepe.ca>, Leon Romanovsky <leon@kernel.org>, Liam Howlett <liam.howlett@oracle.com>, "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>, Michal Hocko <mhocko@suse.com>, Mike Rapoport <rppt@kernel.org>, Suren Baghdasaryan <surenb@google.com>, Matthew Brost <matthew.brost@intel.com>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260428143111.2958615-1-sashal@kernel.org>
+
+From: Alistair Popple <apopple@nvidia.com>
+
+[ Upstream commit 744dd97752ef1076a8d8672bb0d8aa2c7abc1144 ]
+
+Patch series "Minor hmm_test fixes and cleanups".
+
+Two bugfixes and a cleanup for the HMM kernel selftests. These were mostly
+reported by Zenghui Yu with special thanks to Lorenzo for analysing and
+pointing out the problems.
+
+This patch (of 3):
+
+When dmirror_fops_release() is called it frees the dmirror struct but
+doesn't migrate device private pages back to system memory first. This
+leaves those pages with a dangling zone_device_data pointer to the freed
+dmirror.
+
+If a subsequent fault occurs on those pages (eg. during coredump) the
+dmirror_devmem_fault() callback dereferences the stale pointer causing a
+kernel panic. This was reported [1] when running mm/ksft_hmm.sh on arm64,
+where a test failure triggered SIGABRT and the resulting coredump walked
+the VMAs faulting in the stale device private pages.
+
+Fix this by calling dmirror_device_evict_chunk() for each devmem chunk in
+dmirror_fops_release() to migrate all device private pages back to system
+memory before freeing the dmirror struct. The function is moved earlier
+in the file to avoid a forward declaration.
+
+Link: https://lore.kernel.org/20260331063445.3551404-1-apopple@nvidia.com
+Link: https://lore.kernel.org/20260331063445.3551404-2-apopple@nvidia.com
+Fixes: b2ef9f5a5cb3 ("mm/hmm/test: add selftest driver for HMM")
+Signed-off-by: Alistair Popple <apopple@nvidia.com>
+Reported-by: Zenghui Yu <zenghui.yu@linux.dev>
+Closes: https://lore.kernel.org/linux-mm/8bd0396a-8997-4d2e-a13f-5aac033083d7@linux.dev/
+Reviewed-by: Balbir Singh <balbirs@nvidia.com>
+Tested-by: Zenghui Yu <zenghui.yu@linux.dev>
+Cc: David Hildenbrand <david@kernel.org>
+Cc: Jason Gunthorpe <jgg@ziepe.ca>
+Cc: Leon Romanovsky <leon@kernel.org>
+Cc: Liam Howlett <liam.howlett@oracle.com>
+Cc: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Mike Rapoport <rppt@kernel.org>
+Cc: Suren Baghdasaryan <surenb@google.com>
+Cc: Zenghui Yu <zenghui.yu@linux.dev>
+Cc: Matthew Brost <matthew.brost@intel.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+[ kept the existing simpler `dmirror_device_evict_chunk()` body instead of the upstream compound-folio version ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ lib/test_hmm.c | 86 ++++++++++++++++++++++++++++++++-------------------------
+ 1 file changed, 49 insertions(+), 37 deletions(-)
+
+--- a/lib/test_hmm.c
++++ b/lib/test_hmm.c
+@@ -183,11 +183,60 @@ static int dmirror_fops_open(struct inod
+ return 0;
+ }
+
++static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
++{
++ unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
++ unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
++ unsigned long npages = end_pfn - start_pfn + 1;
++ unsigned long i;
++ unsigned long *src_pfns;
++ unsigned long *dst_pfns;
++
++ src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
++ dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
++
++ migrate_device_range(src_pfns, start_pfn, npages);
++ for (i = 0; i < npages; i++) {
++ struct page *dpage, *spage;
++
++ spage = migrate_pfn_to_page(src_pfns[i]);
++ if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
++ continue;
++
++ if (WARN_ON(!is_device_private_page(spage) &&
++ !is_device_coherent_page(spage)))
++ continue;
++ spage = BACKING_PAGE(spage);
++ dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
++ lock_page(dpage);
++ copy_highpage(dpage, spage);
++ dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
++ if (src_pfns[i] & MIGRATE_PFN_WRITE)
++ dst_pfns[i] |= MIGRATE_PFN_WRITE;
++ }
++ migrate_device_pages(src_pfns, dst_pfns, npages);
++ migrate_device_finalize(src_pfns, dst_pfns, npages);
++ kvfree(src_pfns);
++ kvfree(dst_pfns);
++}
++
+ static int dmirror_fops_release(struct inode *inode, struct file *filp)
+ {
+ struct dmirror *dmirror = filp->private_data;
++ struct dmirror_device *mdevice = dmirror->mdevice;
++ int i;
+
+ mmu_interval_notifier_remove(&dmirror->notifier);
++
++ if (mdevice->devmem_chunks) {
++ for (i = 0; i < mdevice->devmem_count; i++) {
++ struct dmirror_chunk *devmem =
++ mdevice->devmem_chunks[i];
++
++ dmirror_device_evict_chunk(devmem);
++ }
++ }
++
+ xa_destroy(&dmirror->pt);
+ kfree(dmirror);
+ return 0;
+@@ -1192,43 +1241,6 @@ static int dmirror_snapshot(struct dmirr
+ return ret;
+ }
+
+-static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
+-{
+- unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
+- unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
+- unsigned long npages = end_pfn - start_pfn + 1;
+- unsigned long i;
+- unsigned long *src_pfns;
+- unsigned long *dst_pfns;
+-
+- src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
+- dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
+-
+- migrate_device_range(src_pfns, start_pfn, npages);
+- for (i = 0; i < npages; i++) {
+- struct page *dpage, *spage;
+-
+- spage = migrate_pfn_to_page(src_pfns[i]);
+- if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+- continue;
+-
+- if (WARN_ON(!is_device_private_page(spage) &&
+- !is_device_coherent_page(spage)))
+- continue;
+- spage = BACKING_PAGE(spage);
+- dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+- lock_page(dpage);
+- copy_highpage(dpage, spage);
+- dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+- if (src_pfns[i] & MIGRATE_PFN_WRITE)
+- dst_pfns[i] |= MIGRATE_PFN_WRITE;
+- }
+- migrate_device_pages(src_pfns, dst_pfns, npages);
+- migrate_device_finalize(src_pfns, dst_pfns, npages);
+- kvfree(src_pfns);
+- kvfree(dst_pfns);
+-}
+-
+ /* Removes free pages from the free list so they can't be re-allocated */
+ static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
+ {
--- /dev/null
+From stable+bounces-242496-greg=kroah.com@vger.kernel.org Fri May 1 21:13:09 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 1 May 2026 15:12:22 -0400
+Subject: media: rc: igorplugusb: heed coherency rules
+To: stable@vger.kernel.org
+Cc: Oliver Neukum <oneukum@suse.com>, Sean Young <sean@mess.org>, Hans Verkuil <hverkuil+cisco@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260501191222.3975247-1-sashal@kernel.org>
+
+From: Oliver Neukum <oneukum@suse.com>
+
+[ Upstream commit eac69475b01fe1e861dfe3960b57fa95671c132e ]
+
+In a control request, the USB request structure
+can be subject to DMA on some HCs. Hence it must obey
+the rules for DMA coherency. Allocate it separately.
+
+Fixes: b1c97193c6437 ("[media] rc: port IgorPlug-USB to rc-core")
+Cc: stable@vger.kernel.org
+Signed-off-by: Oliver Neukum <oneukum@suse.com>
+Signed-off-by: Sean Young <sean@mess.org>
+Signed-off-by: Hans Verkuil <hverkuil+cisco@kernel.org>
+[ replaced kzalloc_obj(*ir->request, GFP_KERNEL) with kzalloc(sizeof(*ir->request), GFP_KERNEL) ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/media/rc/igorplugusb.c | 16 +++++++++++-----
+ 1 file changed, 11 insertions(+), 5 deletions(-)
+
+--- a/drivers/media/rc/igorplugusb.c
++++ b/drivers/media/rc/igorplugusb.c
+@@ -34,7 +34,7 @@ struct igorplugusb {
+ struct device *dev;
+
+ struct urb *urb;
+- struct usb_ctrlrequest request;
++ struct usb_ctrlrequest *request;
+
+ struct timer_list timer;
+
+@@ -122,7 +122,7 @@ static void igorplugusb_cmd(struct igorp
+ {
+ int ret;
+
+- ir->request.bRequest = cmd;
++ ir->request->bRequest = cmd;
+ ir->urb->transfer_flags = 0;
+ ret = usb_submit_urb(ir->urb, GFP_ATOMIC);
+ if (ret && ret != -EPERM)
+@@ -164,13 +164,17 @@ static int igorplugusb_probe(struct usb_
+ if (!ir)
+ return -ENOMEM;
+
++ ir->request = kzalloc(sizeof(*ir->request), GFP_KERNEL);
++ if (!ir->request)
++ goto fail;
++
+ ir->dev = &intf->dev;
+
+ timer_setup(&ir->timer, igorplugusb_timer, 0);
+
+- ir->request.bRequest = GET_INFRACODE;
+- ir->request.bRequestType = USB_TYPE_VENDOR | USB_DIR_IN;
+- ir->request.wLength = cpu_to_le16(MAX_PACKET);
++ ir->request->bRequest = GET_INFRACODE;
++ ir->request->bRequestType = USB_TYPE_VENDOR | USB_DIR_IN;
++ ir->request->wLength = cpu_to_le16(MAX_PACKET);
+
+ ir->urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!ir->urb)
+@@ -228,6 +232,7 @@ fail:
+ usb_free_urb(ir->urb);
+ rc_free_device(ir->rc);
+ kfree(ir->buf_in);
++ kfree(ir->request);
+
+ return ret;
+ }
+@@ -243,6 +248,7 @@ static void igorplugusb_disconnect(struc
+ usb_unpoison_urb(ir->urb);
+ usb_free_urb(ir->urb);
+ kfree(ir->buf_in);
++ kfree(ir->request);
+ }
+
+ static const struct usb_device_id igorplugusb_table[] = {
--- /dev/null
+From stable+bounces-242455-greg=kroah.com@vger.kernel.org Fri May 1 17:46:19 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 1 May 2026 11:42:03 -0400
+Subject: media: rc: ttusbir: respect DMA coherency rules
+To: stable@vger.kernel.org
+Cc: Oliver Neukum <oneukum@suse.com>, Sean Young <sean@mess.org>, Hans Verkuil <hverkuil+cisco@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260501154203.3594498-1-sashal@kernel.org>
+
+From: Oliver Neukum <oneukum@suse.com>
+
+[ Upstream commit 50acaad3d202c064779db8dc3d010007347f59c7 ]
+
+Buffers must not share a cache line with other data structures.
+Allocate separately.
+
+Fixes: 0938069fa0897 ("[media] rc: Add support for the TechnoTrend USB IR Receiver")
+Cc: stable@vger.kernel.org
+Signed-off-by: Oliver Neukum <oneukum@suse.com>
+Signed-off-by: Sean Young <sean@mess.org>
+Signed-off-by: Hans Verkuil <hverkuil+cisco@kernel.org>
+[ kept kzalloc(sizeof(*tt), GFP_KERNEL) instead of kzalloc_obj() ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/media/rc/ttusbir.c | 13 +++++++++----
+ 1 file changed, 9 insertions(+), 4 deletions(-)
+
+--- a/drivers/media/rc/ttusbir.c
++++ b/drivers/media/rc/ttusbir.c
+@@ -32,7 +32,7 @@ struct ttusbir {
+
+ struct led_classdev led;
+ struct urb *bulk_urb;
+- uint8_t bulk_buffer[5];
++ u8 *bulk_buffer;
+ int bulk_out_endp, iso_in_endp;
+ bool led_on, is_led_on;
+ atomic_t led_complete;
+@@ -186,13 +186,16 @@ static int ttusbir_probe(struct usb_inte
+ struct rc_dev *rc;
+ int i, j, ret;
+ int altsetting = -1;
++ u8 *buffer;
+
+ tt = kzalloc(sizeof(*tt), GFP_KERNEL);
++ buffer = kzalloc(5, GFP_KERNEL);
+ rc = rc_allocate_device(RC_DRIVER_IR_RAW);
+- if (!tt || !rc) {
++	if (!tt || !rc || !buffer) {
+ ret = -ENOMEM;
+ goto out;
+ }
++ tt->bulk_buffer = buffer;
+
+ /* find the correct alt setting */
+ for (i = 0; i < intf->num_altsetting && altsetting == -1; i++) {
+@@ -281,8 +284,8 @@ static int ttusbir_probe(struct usb_inte
+ tt->bulk_buffer[3] = 0x01;
+
+ usb_fill_bulk_urb(tt->bulk_urb, tt->udev, usb_sndbulkpipe(tt->udev,
+- tt->bulk_out_endp), tt->bulk_buffer, sizeof(tt->bulk_buffer),
+- ttusbir_bulk_complete, tt);
++ tt->bulk_out_endp), tt->bulk_buffer, 5,
++ ttusbir_bulk_complete, tt);
+
+ tt->led.name = "ttusbir:green:power";
+ tt->led.default_trigger = "rc-feedback";
+@@ -351,6 +354,7 @@ out:
+ kfree(tt);
+ }
+ rc_free_device(rc);
++ kfree(buffer);
+
+ return ret;
+ }
+@@ -373,6 +377,7 @@ static void ttusbir_disconnect(struct us
+ }
+ usb_kill_urb(tt->bulk_urb);
+ usb_free_urb(tt->bulk_urb);
++ kfree(tt->bulk_buffer);
+ usb_set_intfdata(intf, NULL);
+ kfree(tt);
+ }
--- /dev/null
+From stable+bounces-241533-greg=kroah.com@vger.kernel.org Tue Apr 28 12:13:10 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 28 Apr 2026 06:12:05 -0400
+Subject: mei: me: add nova lake point H DID
+To: stable@vger.kernel.org
+Cc: Alexander Usyskin <alexander.usyskin@intel.com>, stable <stable@kernel.org>, Tomas Winkler <tomasw@gmail.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260428101205.2778177-2-sashal@kernel.org>
+
+From: Alexander Usyskin <alexander.usyskin@intel.com>
+
+[ Upstream commit a5a1804332afc7035d5c5b880548262e81d796bc ]
+
+Add Nova Lake H device id.
+
+Cc: stable <stable@kernel.org>
+Co-developed-by: Tomas Winkler <tomasw@gmail.com>
+Signed-off-by: Tomas Winkler <tomasw@gmail.com>
+Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
+Link: https://patch.msgid.link/20260405141758.1634556-1-alexander.usyskin@intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/misc/mei/hw-me-regs.h | 1 +
+ drivers/misc/mei/pci-me.c | 1 +
+ 2 files changed, 2 insertions(+)
+
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -123,6 +123,7 @@
+ #define PCI_DEVICE_ID_INTEL_MEI_WCL_P 0x4D70 /* Wildcat Lake P */
+
+ #define PCI_DEVICE_ID_INTEL_MEI_NVL_S 0x6E68 /* Nova Lake Point S */
++#define PCI_DEVICE_ID_INTEL_MEI_NVL_H 0xD370 /* Nova Lake Point H */
+
+ /*
+ * MEI HW Section
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -130,6 +130,7 @@ static const struct pci_device_id mei_me
+ {PCI_DEVICE_DATA(INTEL, MEI_WCL_P, MEI_ME_PCH15_CFG)},
+
+ {PCI_DEVICE_DATA(INTEL, MEI_NVL_S, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_NVL_H, MEI_ME_PCH15_CFG)},
+
+ /* required last entry */
+ {0, }
--- /dev/null
+From stable+bounces-241532-greg=kroah.com@vger.kernel.org Tue Apr 28 12:15:32 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 28 Apr 2026 06:12:04 -0400
+Subject: mei: me: use PCI_DEVICE_DATA macro
+To: stable@vger.kernel.org
+Cc: Alexander Usyskin <alexander.usyskin@intel.com>, Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260428101205.2778177-1-sashal@kernel.org>
+
+From: Alexander Usyskin <alexander.usyskin@intel.com>
+
+[ Upstream commit 9e7a2409ecf4d411b7cc91615b08f6a7576f0aaa ]
+
+Drop old local MEI_PCI_DEVICE macro and use common
+PCI_DEVICE_DATA instead.
+Update defines to adhere to current naming convention.
+
+Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
+Link: https://patch.msgid.link/20260201094358.1440593-2-alexander.usyskin@intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Stable-dep-of: a5a1804332af ("mei: me: add nova lake point H DID")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/misc/mei/bus-fixup.c | 6 -
+ drivers/misc/mei/hw-me-regs.h | 162 +++++++++++++++++-----------------
+ drivers/misc/mei/hw-me.h | 6 -
+ drivers/misc/mei/pci-me.c | 200 +++++++++++++++++++++---------------------
+ 4 files changed, 184 insertions(+), 190 deletions(-)
+
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -303,9 +303,9 @@ static void mei_wd(struct mei_cl_device
+ {
+ struct pci_dev *pdev = to_pci_dev(cldev->dev.parent);
+
+- if (pdev->device == MEI_DEV_ID_WPT_LP ||
+- pdev->device == MEI_DEV_ID_SPT ||
+- pdev->device == MEI_DEV_ID_SPT_H)
++ if (pdev->device == PCI_DEVICE_ID_INTEL_MEI_WPT_LP ||
++ pdev->device == PCI_DEVICE_ID_INTEL_MEI_SPT ||
++ pdev->device == PCI_DEVICE_ID_INTEL_MEI_SPT_H)
+ cldev->me_cl->props.protocol_version = 0x2;
+
+ cldev->do_match = 1;
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -9,120 +9,120 @@
+ /*
+ * MEI device IDs
+ */
+-#define MEI_DEV_ID_82946GZ 0x2974 /* 82946GZ/GL */
+-#define MEI_DEV_ID_82G35 0x2984 /* 82G35 Express */
+-#define MEI_DEV_ID_82Q965 0x2994 /* 82Q963/Q965 */
+-#define MEI_DEV_ID_82G965 0x29A4 /* 82P965/G965 */
++#define PCI_DEVICE_ID_INTEL_MEI_82946GZ 0x2974 /* 82946GZ/GL */
++#define PCI_DEVICE_ID_INTEL_MEI_82G35 0x2984 /* 82G35 Express */
++#define PCI_DEVICE_ID_INTEL_MEI_82Q965 0x2994 /* 82Q963/Q965 */
++#define PCI_DEVICE_ID_INTEL_MEI_82G965 0x29A4 /* 82P965/G965 */
+
+-#define MEI_DEV_ID_82GM965 0x2A04 /* Mobile PM965/GM965 */
+-#define MEI_DEV_ID_82GME965 0x2A14 /* Mobile GME965/GLE960 */
++#define PCI_DEVICE_ID_INTEL_MEI_82GM965 0x2A04 /* Mobile PM965/GM965 */
++#define PCI_DEVICE_ID_INTEL_MEI_82GME965 0x2A14 /* Mobile GME965/GLE960 */
+
+-#define MEI_DEV_ID_ICH9_82Q35 0x29B4 /* 82Q35 Express */
+-#define MEI_DEV_ID_ICH9_82G33 0x29C4 /* 82G33/G31/P35/P31 Express */
+-#define MEI_DEV_ID_ICH9_82Q33 0x29D4 /* 82Q33 Express */
+-#define MEI_DEV_ID_ICH9_82X38 0x29E4 /* 82X38/X48 Express */
+-#define MEI_DEV_ID_ICH9_3200 0x29F4 /* 3200/3210 Server */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_82Q35 0x29B4 /* 82Q35 Express */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_82G33 0x29C4 /* 82G33/G31/P35/P31 Express */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_82Q33 0x29D4 /* 82Q33 Express */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_82X38 0x29E4 /* 82X38/X48 Express */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_3200 0x29F4 /* 3200/3210 Server */
+
+-#define MEI_DEV_ID_ICH9_6 0x28B4 /* Bearlake */
+-#define MEI_DEV_ID_ICH9_7 0x28C4 /* Bearlake */
+-#define MEI_DEV_ID_ICH9_8 0x28D4 /* Bearlake */
+-#define MEI_DEV_ID_ICH9_9 0x28E4 /* Bearlake */
+-#define MEI_DEV_ID_ICH9_10 0x28F4 /* Bearlake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_6 0x28B4 /* Bearlake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_7 0x28C4 /* Bearlake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_8 0x28D4 /* Bearlake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_9 0x28E4 /* Bearlake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9_10 0x28F4 /* Bearlake */
+
+-#define MEI_DEV_ID_ICH9M_1 0x2A44 /* Cantiga */
+-#define MEI_DEV_ID_ICH9M_2 0x2A54 /* Cantiga */
+-#define MEI_DEV_ID_ICH9M_3 0x2A64 /* Cantiga */
+-#define MEI_DEV_ID_ICH9M_4 0x2A74 /* Cantiga */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9M_1 0x2A44 /* Cantiga */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9M_2 0x2A54 /* Cantiga */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9M_3 0x2A64 /* Cantiga */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH9M_4 0x2A74 /* Cantiga */
+
+-#define MEI_DEV_ID_ICH10_1 0x2E04 /* Eaglelake */
+-#define MEI_DEV_ID_ICH10_2 0x2E14 /* Eaglelake */
+-#define MEI_DEV_ID_ICH10_3 0x2E24 /* Eaglelake */
+-#define MEI_DEV_ID_ICH10_4 0x2E34 /* Eaglelake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH10_1 0x2E04 /* Eaglelake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH10_2 0x2E14 /* Eaglelake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH10_3 0x2E24 /* Eaglelake */
++#define PCI_DEVICE_ID_INTEL_MEI_ICH10_4 0x2E34 /* Eaglelake */
+
+-#define MEI_DEV_ID_IBXPK_1 0x3B64 /* Calpella */
+-#define MEI_DEV_ID_IBXPK_2 0x3B65 /* Calpella */
++#define PCI_DEVICE_ID_INTEL_MEI_IBXPK_1 0x3B64 /* Calpella */
++#define PCI_DEVICE_ID_INTEL_MEI_IBXPK_2 0x3B65 /* Calpella */
+
+-#define MEI_DEV_ID_CPT_1 0x1C3A /* Couger Point */
+-#define MEI_DEV_ID_PBG_1 0x1D3A /* C600/X79 Patsburg */
++#define PCI_DEVICE_ID_INTEL_MEI_CPT_1 0x1C3A /* Couger Point */
++#define PCI_DEVICE_ID_INTEL_MEI_PBG_1 0x1D3A /* C600/X79 Patsburg */
+
+-#define MEI_DEV_ID_PPT_1 0x1E3A /* Panther Point */
+-#define MEI_DEV_ID_PPT_2 0x1CBA /* Panther Point */
+-#define MEI_DEV_ID_PPT_3 0x1DBA /* Panther Point */
++#define PCI_DEVICE_ID_INTEL_MEI_PPT_1 0x1E3A /* Panther Point */
++#define PCI_DEVICE_ID_INTEL_MEI_PPT_2 0x1CBA /* Panther Point */
++#define PCI_DEVICE_ID_INTEL_MEI_PPT_3 0x1DBA /* Panther Point */
+
+-#define MEI_DEV_ID_LPT_H 0x8C3A /* Lynx Point H */
+-#define MEI_DEV_ID_LPT_W 0x8D3A /* Lynx Point - Wellsburg */
+-#define MEI_DEV_ID_LPT_LP 0x9C3A /* Lynx Point LP */
+-#define MEI_DEV_ID_LPT_HR 0x8CBA /* Lynx Point H Refresh */
++#define PCI_DEVICE_ID_INTEL_MEI_LPT_H 0x8C3A /* Lynx Point H */
++#define PCI_DEVICE_ID_INTEL_MEI_LPT_W 0x8D3A /* Lynx Point - Wellsburg */
++#define PCI_DEVICE_ID_INTEL_MEI_LPT_LP 0x9C3A /* Lynx Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_LPT_HR 0x8CBA /* Lynx Point H Refresh */
+
+-#define MEI_DEV_ID_WPT_LP 0x9CBA /* Wildcat Point LP */
+-#define MEI_DEV_ID_WPT_LP_2 0x9CBB /* Wildcat Point LP 2 */
++#define PCI_DEVICE_ID_INTEL_MEI_WPT_LP 0x9CBA /* Wildcat Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_WPT_LP_2 0x9CBB /* Wildcat Point LP 2 */
+
+-#define MEI_DEV_ID_SPT 0x9D3A /* Sunrise Point */
+-#define MEI_DEV_ID_SPT_2 0x9D3B /* Sunrise Point 2 */
+-#define MEI_DEV_ID_SPT_3 0x9D3E /* Sunrise Point 3 (iToutch) */
+-#define MEI_DEV_ID_SPT_H 0xA13A /* Sunrise Point H */
+-#define MEI_DEV_ID_SPT_H_2 0xA13B /* Sunrise Point H 2 */
++#define PCI_DEVICE_ID_INTEL_MEI_SPT 0x9D3A /* Sunrise Point */
++#define PCI_DEVICE_ID_INTEL_MEI_SPT_2 0x9D3B /* Sunrise Point 2 */
++#define PCI_DEVICE_ID_INTEL_MEI_SPT_3 0x9D3E /* Sunrise Point 3 (iToutch) */
++#define PCI_DEVICE_ID_INTEL_MEI_SPT_H 0xA13A /* Sunrise Point H */
++#define PCI_DEVICE_ID_INTEL_MEI_SPT_H_2 0xA13B /* Sunrise Point H 2 */
+
+-#define MEI_DEV_ID_LBG 0xA1BA /* Lewisburg (SPT) */
++#define PCI_DEVICE_ID_INTEL_MEI_LBG 0xA1BA /* Lewisburg (SPT) */
+
+-#define MEI_DEV_ID_BXT_M 0x1A9A /* Broxton M */
+-#define MEI_DEV_ID_APL_I 0x5A9A /* Apollo Lake I */
++#define PCI_DEVICE_ID_INTEL_MEI_BXT_M 0x1A9A /* Broxton M */
++#define PCI_DEVICE_ID_INTEL_MEI_APL_I 0x5A9A /* Apollo Lake I */
+
+-#define MEI_DEV_ID_DNV_IE 0x19E5 /* Denverton IE */
++#define PCI_DEVICE_ID_INTEL_MEI_DNV_IE 0x19E5 /* Denverton IE */
+
+-#define MEI_DEV_ID_GLK 0x319A /* Gemini Lake */
++#define PCI_DEVICE_ID_INTEL_MEI_GLK 0x319A /* Gemini Lake */
+
+-#define MEI_DEV_ID_KBP 0xA2BA /* Kaby Point */
+-#define MEI_DEV_ID_KBP_2 0xA2BB /* Kaby Point 2 */
+-#define MEI_DEV_ID_KBP_3 0xA2BE /* Kaby Point 3 (iTouch) */
++#define PCI_DEVICE_ID_INTEL_MEI_KBP 0xA2BA /* Kaby Point */
++#define PCI_DEVICE_ID_INTEL_MEI_KBP_2 0xA2BB /* Kaby Point 2 */
++#define PCI_DEVICE_ID_INTEL_MEI_KBP_3 0xA2BE /* Kaby Point 3 (iTouch) */
+
+-#define MEI_DEV_ID_CNP_LP 0x9DE0 /* Cannon Point LP */
+-#define MEI_DEV_ID_CNP_LP_3 0x9DE4 /* Cannon Point LP 3 (iTouch) */
+-#define MEI_DEV_ID_CNP_H 0xA360 /* Cannon Point H */
+-#define MEI_DEV_ID_CNP_H_3 0xA364 /* Cannon Point H 3 (iTouch) */
++#define PCI_DEVICE_ID_INTEL_MEI_CNP_LP 0x9DE0 /* Cannon Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_CNP_LP_3 0x9DE4 /* Cannon Point LP 3 (iTouch) */
++#define PCI_DEVICE_ID_INTEL_MEI_CNP_H 0xA360 /* Cannon Point H */
++#define PCI_DEVICE_ID_INTEL_MEI_CNP_H_3 0xA364 /* Cannon Point H 3 (iTouch) */
+
+-#define MEI_DEV_ID_CMP_LP 0x02e0 /* Comet Point LP */
+-#define MEI_DEV_ID_CMP_LP_3 0x02e4 /* Comet Point LP 3 (iTouch) */
++#define PCI_DEVICE_ID_INTEL_MEI_CMP_LP 0x02e0 /* Comet Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_CMP_LP_3 0x02e4 /* Comet Point LP 3 (iTouch) */
+
+-#define MEI_DEV_ID_CMP_V 0xA3BA /* Comet Point Lake V */
++#define PCI_DEVICE_ID_INTEL_MEI_CMP_V 0xA3BA /* Comet Point Lake V */
+
+-#define MEI_DEV_ID_CMP_H 0x06e0 /* Comet Lake H */
+-#define MEI_DEV_ID_CMP_H_3 0x06e4 /* Comet Lake H 3 (iTouch) */
++#define PCI_DEVICE_ID_INTEL_MEI_CMP_H 0x06e0 /* Comet Lake H */
++#define PCI_DEVICE_ID_INTEL_MEI_CMP_H_3 0x06e4 /* Comet Lake H 3 (iTouch) */
+
+-#define MEI_DEV_ID_CDF 0x18D3 /* Cedar Fork */
++#define PCI_DEVICE_ID_INTEL_MEI_CDF 0x18D3 /* Cedar Fork */
+
+-#define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */
+-#define MEI_DEV_ID_ICP_N 0x38E0 /* Ice Lake Point N */
++#define PCI_DEVICE_ID_INTEL_MEI_ICP_LP 0x34E0 /* Ice Lake Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_ICP_N 0x38E0 /* Ice Lake Point N */
+
+-#define MEI_DEV_ID_JSP_N 0x4DE0 /* Jasper Lake Point N */
++#define PCI_DEVICE_ID_INTEL_MEI_JSP_N 0x4DE0 /* Jasper Lake Point N */
+
+-#define MEI_DEV_ID_TGP_LP 0xA0E0 /* Tiger Lake Point LP */
+-#define MEI_DEV_ID_TGP_H 0x43E0 /* Tiger Lake Point H */
++#define PCI_DEVICE_ID_INTEL_MEI_TGP_LP 0xA0E0 /* Tiger Lake Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_TGP_H 0x43E0 /* Tiger Lake Point H */
+
+-#define MEI_DEV_ID_MCC 0x4B70 /* Mule Creek Canyon (EHL) */
+-#define MEI_DEV_ID_MCC_4 0x4B75 /* Mule Creek Canyon 4 (EHL) */
++#define PCI_DEVICE_ID_INTEL_MEI_MCC 0x4B70 /* Mule Creek Canyon (EHL) */
++#define PCI_DEVICE_ID_INTEL_MEI_MCC_4 0x4B75 /* Mule Creek Canyon 4 (EHL) */
+
+-#define MEI_DEV_ID_EBG 0x1BE0 /* Emmitsburg WS */
++#define PCI_DEVICE_ID_INTEL_MEI_EBG 0x1BE0 /* Emmitsburg WS */
+
+-#define MEI_DEV_ID_ADP_S 0x7AE8 /* Alder Lake Point S */
+-#define MEI_DEV_ID_ADP_LP 0x7A60 /* Alder Lake Point LP */
+-#define MEI_DEV_ID_ADP_P 0x51E0 /* Alder Lake Point P */
+-#define MEI_DEV_ID_ADP_N 0x54E0 /* Alder Lake Point N */
++#define PCI_DEVICE_ID_INTEL_MEI_ADP_S 0x7AE8 /* Alder Lake Point S */
++#define PCI_DEVICE_ID_INTEL_MEI_ADP_LP 0x7A60 /* Alder Lake Point LP */
++#define PCI_DEVICE_ID_INTEL_MEI_ADP_P 0x51E0 /* Alder Lake Point P */
++#define PCI_DEVICE_ID_INTEL_MEI_ADP_N 0x54E0 /* Alder Lake Point N */
+
+-#define MEI_DEV_ID_RPL_S 0x7A68 /* Raptor Lake Point S */
++#define PCI_DEVICE_ID_INTEL_MEI_RPL_S 0x7A68 /* Raptor Lake Point S */
+
+-#define MEI_DEV_ID_MTL_M 0x7E70 /* Meteor Lake Point M */
+-#define MEI_DEV_ID_ARL_S 0x7F68 /* Arrow Lake Point S */
+-#define MEI_DEV_ID_ARL_H 0x7770 /* Arrow Lake Point H */
++#define PCI_DEVICE_ID_INTEL_MEI_MTL_M 0x7E70 /* Meteor Lake Point M */
++#define PCI_DEVICE_ID_INTEL_MEI_ARL_S 0x7F68 /* Arrow Lake Point S */
++#define PCI_DEVICE_ID_INTEL_MEI_ARL_H 0x7770 /* Arrow Lake Point H */
+
+-#define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */
++#define PCI_DEVICE_ID_INTEL_MEI_LNL_M 0xA870 /* Lunar Lake Point M */
+
+-#define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */
+-#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */
++#define PCI_DEVICE_ID_INTEL_MEI_PTL_H 0xE370 /* Panther Lake H */
++#define PCI_DEVICE_ID_INTEL_MEI_PTL_P 0xE470 /* Panther Lake P */
+
+-#define MEI_DEV_ID_WCL_P 0x4D70 /* Wildcat Lake P */
++#define PCI_DEVICE_ID_INTEL_MEI_WCL_P 0x4D70 /* Wildcat Lake P */
+
+-#define MEI_DEV_ID_NVL_S 0x6E68 /* Nova Lake Point S */
++#define PCI_DEVICE_ID_INTEL_MEI_NVL_S 0x6E68 /* Nova Lake Point S */
+
+ /*
+ * MEI HW Section
+--- a/drivers/misc/mei/hw-me.h
++++ b/drivers/misc/mei/hw-me.h
+@@ -33,12 +33,6 @@ struct mei_cfg {
+ u32 hw_trc_supported:1;
+ };
+
+-
+-#define MEI_PCI_DEVICE(dev, cfg) \
+- .vendor = PCI_VENDOR_ID_INTEL, .device = (dev), \
+- .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \
+- .driver_data = (kernel_ulong_t)(cfg),
+-
+ #define MEI_ME_RPM_TIMEOUT 500 /* ms */
+
+ /**
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -26,110 +26,110 @@
+
+ /* mei_pci_tbl - PCI Device ID Table */
+ static const struct pci_device_id mei_me_pci_tbl[] = {
+- {MEI_PCI_DEVICE(MEI_DEV_ID_82946GZ, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_82G35, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_82Q965, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_82G965, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_82GM965, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_82GME965, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_82Q35, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_82G33, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_82Q33, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_82X38, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_3200, MEI_ME_ICH_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_6, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_7, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_8, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_9, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9_10, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9M_1, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9M_2, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9M_3, MEI_ME_ICH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH9M_4, MEI_ME_ICH_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_1, MEI_ME_ICH10_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_2, MEI_ME_ICH10_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_3, MEI_ME_ICH10_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICH10_4, MEI_ME_ICH10_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_1, MEI_ME_PCH6_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_IBXPK_2, MEI_ME_PCH6_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CPT_1, MEI_ME_PCH_CPT_PBG_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PBG_1, MEI_ME_PCH_CPT_PBG_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_1, MEI_ME_PCH7_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_2, MEI_ME_PCH7_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PPT_3, MEI_ME_PCH7_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_H, MEI_ME_PCH8_SPS_4_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_W, MEI_ME_PCH8_SPS_4_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_LP, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LPT_HR, MEI_ME_PCH8_SPS_4_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_WPT_LP, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_WPT_LP_2, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_3, MEI_ME_PCH8_ITOUCH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, MEI_ME_PCH8_SPS_4_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, MEI_ME_PCH8_SPS_4_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_SPS_4_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_DNV_IE, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_GLK, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_KBP, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_KBP_2, MEI_ME_PCH8_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_KBP_3, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH12_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP_3, MEI_ME_PCH8_ITOUCH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_SPS_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_3, MEI_ME_PCH12_SPS_ITOUCH_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP, MEI_ME_PCH12_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP_3, MEI_ME_PCH8_ITOUCH_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_V, MEI_ME_PCH12_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H, MEI_ME_PCH12_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_N, MEI_ME_PCH12_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_H, MEI_ME_PCH15_SPS_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_JSP_N, MEI_ME_PCH15_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_MCC, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_MCC_4, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_CDF, MEI_ME_PCH8_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_EBG, MEI_ME_PCH15_SPS_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_P, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_N, MEI_ME_PCH15_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_RPL_S, MEI_ME_PCH15_SPS_CFG)},
+-
+- {MEI_PCI_DEVICE(MEI_DEV_ID_MTL_M, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ARL_S, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_ARL_H, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_82946GZ, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_82G35, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_82Q965, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_82G965, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_82GM965, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_82GME965, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_82Q35, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_82G33, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_82Q33, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_82X38, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_3200, MEI_ME_ICH_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_6, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_7, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_8, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_9, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9_10, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9M_1, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9M_2, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9M_3, MEI_ME_ICH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH9M_4, MEI_ME_ICH_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH10_1, MEI_ME_ICH10_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH10_2, MEI_ME_ICH10_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH10_3, MEI_ME_ICH10_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICH10_4, MEI_ME_ICH10_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_IBXPK_1, MEI_ME_PCH6_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_IBXPK_2, MEI_ME_PCH6_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CPT_1, MEI_ME_PCH_CPT_PBG_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_PBG_1, MEI_ME_PCH_CPT_PBG_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_PPT_1, MEI_ME_PCH7_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_PPT_2, MEI_ME_PCH7_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_PPT_3, MEI_ME_PCH7_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_LPT_H, MEI_ME_PCH8_SPS_4_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_LPT_W, MEI_ME_PCH8_SPS_4_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_LPT_LP, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_LPT_HR, MEI_ME_PCH8_SPS_4_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_WPT_LP, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_WPT_LP_2, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_SPT, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_SPT_2, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_SPT_3, MEI_ME_PCH8_ITOUCH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_SPT_H, MEI_ME_PCH8_SPS_4_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_SPT_H_2, MEI_ME_PCH8_SPS_4_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_LBG, MEI_ME_PCH12_SPS_4_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_BXT_M, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_APL_I, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_DNV_IE, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_GLK, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_KBP, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_KBP_2, MEI_ME_PCH8_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_KBP_3, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_CNP_LP, MEI_ME_PCH12_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CNP_LP_3, MEI_ME_PCH8_ITOUCH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CNP_H, MEI_ME_PCH12_SPS_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CNP_H_3, MEI_ME_PCH12_SPS_ITOUCH_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_CMP_LP, MEI_ME_PCH12_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CMP_LP_3, MEI_ME_PCH8_ITOUCH_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CMP_V, MEI_ME_PCH12_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CMP_H, MEI_ME_PCH12_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_CMP_H_3, MEI_ME_PCH8_ITOUCH_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_ICP_LP, MEI_ME_PCH12_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ICP_N, MEI_ME_PCH12_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_TGP_LP, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_TGP_H, MEI_ME_PCH15_SPS_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_JSP_N, MEI_ME_PCH15_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_MCC, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_MCC_4, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_CDF, MEI_ME_PCH8_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_EBG, MEI_ME_PCH15_SPS_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_ADP_S, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ADP_LP, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ADP_P, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ADP_N, MEI_ME_PCH15_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_RPL_S, MEI_ME_PCH15_SPS_CFG)},
++
++ {PCI_DEVICE_DATA(INTEL, MEI_MTL_M, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ARL_S, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_ARL_H, MEI_ME_PCH15_CFG)},
+
+- {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_LNL_M, MEI_ME_PCH15_CFG)},
+
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_H, MEI_ME_PCH15_CFG)},
+- {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_PTL_H, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_PTL_P, MEI_ME_PCH15_CFG)},
+
+- {MEI_PCI_DEVICE(MEI_DEV_ID_WCL_P, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_WCL_P, MEI_ME_PCH15_CFG)},
+
+- {MEI_PCI_DEVICE(MEI_DEV_ID_NVL_S, MEI_ME_PCH15_CFG)},
++ {PCI_DEVICE_DATA(INTEL, MEI_NVL_S, MEI_ME_PCH15_CFG)},
+
+ /* required last entry */
+ {0, }
--- /dev/null
+From stable+bounces-241767-greg=kroah.com@vger.kernel.org Tue Apr 28 22:05:40 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 28 Apr 2026 16:05:33 -0400
+Subject: mm: prevent droppable mappings from being locked
+To: stable@vger.kernel.org
+Cc: Anthony Yznaga <anthony.yznaga@oracle.com>, David Hildenbrand <david@kernel.org>, Pedro Falcato <pfalcato@suse.de>, "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>, Jann Horn <jannh@google.com>, "Jason A. Donenfeld" <jason@zx2c4.com>, Liam Howlett <liam.howlett@oracle.com>, Michal Hocko <mhocko@suse.com>, Mike Rapoport <rppt@kernel.org>, Shuah Khan <shuah@kernel.org>, Suren Baghdasaryan <surenb@google.com>, Vlastimil Babka <vbabka@kernel.org>, Andrew Morton <akpm@linux-foundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260428200533.3190779-1-sashal@kernel.org>
+
+From: Anthony Yznaga <anthony.yznaga@oracle.com>
+
+[ Upstream commit d239462787b072c78eb19fc1f155c3d411256282 ]
+
+Droppable mappings must not be lockable. There is a check for VMAs with
+VM_DROPPABLE set in mlock_fixup() along with checks for other types of
+unlockable VMAs which ensures this when calling mlock()/mlock2().
+
+For mlockall(MCL_FUTURE), the check for unlockable VMAs is different. In
+apply_mlockall_flags(), if the flags parameter has MCL_FUTURE set, the
+current task's mm's default VMA flag field mm->def_flags has VM_LOCKED
+applied to it. VM_LOCKONFAULT is also applied if MCL_ONFAULT is also set.
+When these flags are set as default in this manner they are cleared in
+__mmap_complete() for new mappings that do not support mlock. A check for
+VM_DROPPABLE in __mmap_complete() is missing resulting in droppable
+mappings created with VM_LOCKED set. To fix this and reduce the chance
+of similar bugs in the future, introduce and use vma_supports_mlock().
+
+Link: https://lkml.kernel.org/r/20260310155821.17869-1-anthony.yznaga@oracle.com
+Fixes: 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always lazily freeable mappings")
+Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
+Suggested-by: David Hildenbrand <david@kernel.org>
+Acked-by: David Hildenbrand (Arm) <david@kernel.org>
+Reviewed-by: Pedro Falcato <pfalcato@suse.de>
+Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
+Tested-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
+Cc: Jann Horn <jannh@google.com>
+Cc: Jason A. Donenfeld <jason@zx2c4.com>
+Cc: Liam Howlett <liam.howlett@oracle.com>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: Mike Rapoport <rppt@kernel.org>
+Cc: Shuah Khan <shuah@kernel.org>
+Cc: Suren Baghdasaryan <surenb@google.com>
+Cc: Vlastimil Babka <vbabka@kernel.org>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+[ added const to is_vm_hugetlb_page and stubbed vma_supports_mlock in vma_internal.h instead of the split-out stubs.h ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/hugetlb_inline.h | 4 ++--
+ mm/internal.h | 10 ++++++++++
+ mm/mlock.c | 10 ++++++----
+ mm/vma.c | 4 +---
+ tools/testing/vma/vma_internal.h | 7 ++++++-
+ 5 files changed, 25 insertions(+), 10 deletions(-)
+
+--- a/include/linux/hugetlb_inline.h
++++ b/include/linux/hugetlb_inline.h
+@@ -6,14 +6,14 @@
+
+ #include <linux/mm.h>
+
+-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
++static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
+ {
+ return !!(vma->vm_flags & VM_HUGETLB);
+ }
+
+ #else
+
+-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
++static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
+ {
+ return false;
+ }
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -1129,6 +1129,16 @@ static inline struct file *maybe_unlock_
+ }
+ return fpin;
+ }
++
++static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
++{
++ if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
++ return false;
++ if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
++ return false;
++ return vma != get_gate_vma(current->mm);
++}
++
+ #else /* !CONFIG_MMU */
+ static inline void unmap_mapping_folio(struct folio *folio) { }
+ static inline void mlock_new_folio(struct folio *folio) { }
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -472,10 +472,12 @@ static int mlock_fixup(struct vma_iterat
+ int ret = 0;
+ vm_flags_t oldflags = vma->vm_flags;
+
+- if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
+- is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
+- vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
+- /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
++ if (newflags == oldflags || vma_is_secretmem(vma) ||
++ !vma_supports_mlock(vma))
++ /*
++ * Don't set VM_LOCKED or VM_LOCKONFAULT and don't count.
++ * For secretmem, don't allow the memory to be unlocked.
++ */
+ goto out;
+
+ vma = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+--- a/mm/vma.c
++++ b/mm/vma.c
+@@ -2571,9 +2571,7 @@ static void __mmap_complete(struct mmap_
+
+ vm_stat_account(mm, vma->vm_flags, map->pglen);
+ if (vm_flags & VM_LOCKED) {
+- if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
+- is_vm_hugetlb_page(vma) ||
+- vma == get_gate_vma(mm))
++ if (!vma_supports_mlock(vma))
+ vm_flags_clear(vma, VM_LOCKED_MASK);
+ else
+ mm->locked_vm += map->pglen;
+--- a/tools/testing/vma/vma_internal.h
++++ b/tools/testing/vma/vma_internal.h
+@@ -989,7 +989,12 @@ static inline bool mapping_can_writeback
+ return true;
+ }
+
+-static inline bool is_vm_hugetlb_page(struct vm_area_struct *vma)
++static inline bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
++{
++ return false;
++}
++
++static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
+ {
+ return false;
+ }
--- /dev/null
+From stable+bounces-242809-greg=kroah.com@vger.kernel.org Sun May 3 19:55:17 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 3 May 2026 13:55:04 -0400
+Subject: net: qrtr: ns: Limit the maximum number of lookups
+To: stable@vger.kernel.org
+Cc: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260503175504.1151555-1-sashal@kernel.org>
+
+From: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
+
+[ Upstream commit 5640227d9a21c6a8be249a10677b832e7f40dc55 ]
+
+Current code does no bounds checking on the number of lookups a client can
+perform. Though the code restricts the lookups to local clients, there is
+still a possibility of a malicious local client sending a flood of
+NEW_LOOKUP messages over the same socket.
+
+Fix this issue by limiting the maximum number of lookups to 64 globally.
+Since the nameserver allows at most one local observer, this global
+lookup count will ensure that the lookups stay within the limit.
+
+Note that the limit of 64 is chosen based on the current platform
+requirements. If the requirement changes in the future, this limit can be
+increased.
+
+Cc: stable@vger.kernel.org
+Fixes: 0c2204a4ad71 ("net: qrtr: Migrate nameservice to kernel from userspace")
+Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
+Link: https://patch.msgid.link/20260409-qrtr-fix-v3-2-00a8a5ff2b51@oss.qualcomm.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ adapted comment block to only mention QRTR_NS_MAX_LOOKUPS and kept kzalloc() instead of kzalloc_obj() due to missing prerequisite commits ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/qrtr/ns.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -22,6 +22,7 @@ static struct {
+ struct socket *sock;
+ struct sockaddr_qrtr bcast_sq;
+ struct list_head lookups;
++ u32 lookup_count;
+ struct workqueue_struct *workqueue;
+ struct work_struct work;
+ void (*saved_data_ready)(struct sock *sk);
+@@ -76,6 +77,11 @@ struct qrtr_node {
+ */
+ #define QRTR_NS_MAX_SERVERS 256
+
++/* Max lookup limit is chosen based on the current platform requirements. If the
++ * requirement changes in the future, this value can be increased.
++ */
++#define QRTR_NS_MAX_LOOKUPS 64
++
+ static struct qrtr_node *node_get(unsigned int node_id)
+ {
+ struct qrtr_node *node;
+@@ -444,6 +450,7 @@ static int ctrl_cmd_del_client(struct so
+
+ list_del(&lookup->li);
+ kfree(lookup);
++ qrtr_ns.lookup_count--;
+ }
+
+ /* Remove the server belonging to this port but don't broadcast
+@@ -561,6 +568,11 @@ static int ctrl_cmd_new_lookup(struct so
+ if (from->sq_node != qrtr_ns.local_node)
+ return -EINVAL;
+
++ if (qrtr_ns.lookup_count >= QRTR_NS_MAX_LOOKUPS) {
++ pr_err_ratelimited("QRTR client node exceeds max lookup limit!\n");
++ return -ENOSPC;
++ }
++
+ lookup = kzalloc(sizeof(*lookup), GFP_KERNEL);
+ if (!lookup)
+ return -ENOMEM;
+@@ -569,6 +581,7 @@ static int ctrl_cmd_new_lookup(struct so
+ lookup->service = service;
+ lookup->instance = instance;
+ list_add_tail(&lookup->li, &qrtr_ns.lookups);
++ qrtr_ns.lookup_count++;
+
+ memset(&filter, 0, sizeof(filter));
+ filter.service = service;
+@@ -609,6 +622,7 @@ static void ctrl_cmd_del_lookup(struct s
+
+ list_del(&lookup->li);
+ kfree(lookup);
++ qrtr_ns.lookup_count--;
+ }
+ }
+
--- /dev/null
+From stable+bounces-242807-greg=kroah.com@vger.kernel.org Sun May 3 19:54:50 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 3 May 2026 13:54:25 -0400
+Subject: net: qrtr: ns: Limit the maximum server registration per node
+To: stable@vger.kernel.org
+Cc: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>, Yiming Qian <yimingqian591@gmail.com>, Simon Horman <horms@kernel.org>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260503175425.1150300-1-sashal@kernel.org>
+
+From: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
+
+[ Upstream commit d5ee2ff98322337951c56398e79d51815acbf955 ]
+
+Current code does no bounds checking on the number of servers added per
+node. A malicious client can flood NEW_SERVER messages and exhaust memory.
+
+Fix this issue by limiting the maximum number of server registrations to
+256 per node. If the NEW_SERVER message is received for an old port, then
+don't restrict it as it will get replaced. While at it, also rate limit
+the error messages in the failure path of qrtr_ns_worker().
+
+Note that the limit of 256 is chosen based on the current platform
+requirements. If the requirement changes in the future, this limit can be
+increased.
+
+Cc: stable@vger.kernel.org
+Fixes: 0c2204a4ad71 ("net: qrtr: Migrate nameservice to kernel from userspace")
+Reported-by: Yiming Qian <yimingqian591@gmail.com>
+Reviewed-by: Simon Horman <horms@kernel.org>
+Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
+Link: https://patch.msgid.link/20260409-qrtr-fix-v3-1-00a8a5ff2b51@oss.qualcomm.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/qrtr/ns.c | 26 +++++++++++++++++++++-----
+ 1 file changed, 21 insertions(+), 5 deletions(-)
+
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -68,8 +68,14 @@ struct qrtr_server {
+ struct qrtr_node {
+ unsigned int id;
+ struct xarray servers;
++ u32 server_count;
+ };
+
++/* Max server limit is chosen based on the current platform requirements. If the
++ * requirement changes in the future, this value can be increased.
++ */
++#define QRTR_NS_MAX_SERVERS 256
++
+ static struct qrtr_node *node_get(unsigned int node_id)
+ {
+ struct qrtr_node *node;
+@@ -230,6 +236,17 @@ static struct qrtr_server *server_add(un
+ if (!service || !port)
+ return NULL;
+
++ node = node_get(node_id);
++ if (!node)
++ return NULL;
++
++ /* Make sure the new servers per port are capped at the maximum value */
++ old = xa_load(&node->servers, port);
++ if (!old && node->server_count >= QRTR_NS_MAX_SERVERS) {
++ pr_err_ratelimited("QRTR client node %u exceeds max server limit!\n", node_id);
++ return NULL;
++ }
++
+ srv = kzalloc(sizeof(*srv), GFP_KERNEL);
+ if (!srv)
+ return NULL;
+@@ -239,10 +256,6 @@ static struct qrtr_server *server_add(un
+ srv->node = node_id;
+ srv->port = port;
+
+- node = node_get(node_id);
+- if (!node)
+- goto err;
+-
+ /* Delete the old server on the same port */
+ old = xa_store(&node->servers, port, srv, GFP_KERNEL);
+ if (old) {
+@@ -253,6 +266,8 @@ static struct qrtr_server *server_add(un
+ } else {
+ kfree(old);
+ }
++ } else {
++ node->server_count++;
+ }
+
+ trace_qrtr_ns_server_add(srv->service, srv->instance,
+@@ -293,6 +308,7 @@ static int server_del(struct qrtr_node *
+ }
+
+ kfree(srv);
++ node->server_count--;
+
+ return 0;
+ }
+@@ -681,7 +697,7 @@ static void qrtr_ns_worker(struct work_s
+ }
+
+ if (ret < 0)
+- pr_err("failed while handling packet from %d:%d",
++ pr_err_ratelimited("failed while handling packet from %d:%d",
+ sq.sq_node, sq.sq_port);
+ }
+
--- /dev/null
+From stable+bounces-242862-greg=kroah.com@vger.kernel.org Mon May 4 09:40:45 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 4 May 2026 03:40:08 -0400
+Subject: net: qrtr: ns: Limit the total number of nodes
+To: stable@vger.kernel.org
+Cc: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>, Jakub Kicinski <kuba@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260504074008.1850682-2-sashal@kernel.org>
+
+From: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
+
+[ Upstream commit 27d5e84e810b0849d08b9aec68e48570461ce313 ]
+
+Currently, the nameserver doesn't limit the number of nodes it handles.
+This can be an attack vector if a malicious client starts registering
+random nodes, leading to memory exhaustion.
+
+Hence, limit the maximum number of nodes to 64. Note that the limit of
+64 is chosen based on the current platform requirements. If the
+requirement changes in the future, this limit can be increased.
+
+Cc: stable@vger.kernel.org
+Fixes: 0c2204a4ad71 ("net: qrtr: Migrate nameservice to kernel from userspace")
+Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@oss.qualcomm.com>
+Link: https://patch.msgid.link/20260409-qrtr-fix-v3-4-00a8a5ff2b51@oss.qualcomm.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+[ dropped comment/define changes for missing QRTR_NS_MAX_SERVERS/LOOKUPS prereqs and kept plain kzalloc instead of kzalloc_obj ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/qrtr/ns.c | 15 +++++++++++++++
+ 1 file changed, 15 insertions(+)
+
+--- a/net/qrtr/ns.c
++++ b/net/qrtr/ns.c
+@@ -82,6 +82,13 @@ struct qrtr_node {
+ */
+ #define QRTR_NS_MAX_LOOKUPS 64
+
++/* Max nodes limit is chosen based on the current platform requirements.
++ * If the requirement changes in the future, this value can be increased.
++ */
++#define QRTR_NS_MAX_NODES 64
++
++static u8 node_count;
++
+ static struct qrtr_node *node_get(unsigned int node_id)
+ {
+ struct qrtr_node *node;
+@@ -90,6 +97,11 @@ static struct qrtr_node *node_get(unsign
+ if (node)
+ return node;
+
++ if (node_count >= QRTR_NS_MAX_NODES) {
++ pr_err_ratelimited("QRTR clients exceed max node limit!\n");
++ return NULL;
++ }
++
+ /* If node didn't exist, allocate and insert it to the tree */
+ node = kzalloc(sizeof(*node), GFP_KERNEL);
+ if (!node)
+@@ -103,6 +115,8 @@ static struct qrtr_node *node_get(unsign
+ return NULL;
+ }
+
++ node_count++;
++
+ return node;
+ }
+
+@@ -409,6 +423,7 @@ static int ctrl_cmd_bye(struct sockaddr_
+ delete_node:
+ xa_erase(&nodes, from->sq_node);
+ kfree(node);
++ node_count--;
+
+ return ret;
+ }
--- /dev/null
+From stable+bounces-242427-greg=kroah.com@vger.kernel.org Fri May 1 15:19:08 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 1 May 2026 09:18:56 -0400
+Subject: phy: qcom: m31-eusb2: clear PLL_EN during init
+To: stable@vger.kernel.org
+Cc: Elson Serrao <elson.serrao@oss.qualcomm.com>, Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>, Vinod Koul <vkoul@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260501131857.3242270-2-sashal@kernel.org>
+
+From: Elson Serrao <elson.serrao@oss.qualcomm.com>
+
+[ Upstream commit 520a98bdf7ae0130e22d8adced3d69a2e211b41f ]
+
+The driver currently sets bit 0 of USB_PHY_CFG1 (PLL_EN) during PHY
+initialization. According to the M31 EUSB2 PHY hardware documentation,
+this bit is intended only for test/debug scenarios and does not control
+mission mode operation. Keeping PLL_EN asserted causes the PHY to draw
+additional current during USB bus suspend. Clearing this bit results in
+lower suspend power consumption without affecting normal operation.
+
+Update the driver to leave PLL_EN cleared as recommended by the hardware
+documentation.
+
+Fixes: 9c8504861cc4 ("phy: qcom: Add M31 based eUSB2 PHY driver")
+Cc: stable@vger.kernel.org
+Signed-off-by: Elson Serrao <elson.serrao@oss.qualcomm.com>
+Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
+Link: https://patch.msgid.link/20260217201130.2804550-1-elson.serrao@oss.qualcomm.com
+Signed-off-by: Vinod Koul <vkoul@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/phy/qualcomm/phy-qcom-m31-eusb2.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/phy/qualcomm/phy-qcom-m31-eusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-m31-eusb2.c
+@@ -83,7 +83,7 @@ static const struct m31_phy_tbl_entry m3
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_CFG0, UTMI_PHY_CMN_CTRL_OVERRIDE_EN, 1),
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_UTMI_CTRL5, POR, 1),
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_HS_PHY_CTRL_COMMON0, PHY_ENABLE, 1),
+- M31_EUSB_PHY_INIT_CFG(USB_PHY_CFG1, PLL_EN, 1),
++ M31_EUSB_PHY_INIT_CFG(USB_PHY_CFG1, PLL_EN, 0),
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_FSEL_SEL, FSEL_SEL, 1),
+ };
+
--- /dev/null
+From stable+bounces-242426-greg=kroah.com@vger.kernel.org Fri May 1 15:19:04 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 1 May 2026 09:18:55 -0400
+Subject: phy: qcom: m31-eusb2: Update init sequence to set PHY_ENABLE
+To: stable@vger.kernel.org
+Cc: Ronak Raheja <ronak.raheja@oss.qualcomm.com>, Wesley Cheng <wesley.cheng@oss.qualcomm.com>, Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>, Neil Armstrong <neil.armstrong@linaro.org>, Vinod Koul <vkoul@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260501131857.3242270-1-sashal@kernel.org>
+
+From: Ronak Raheja <ronak.raheja@oss.qualcomm.com>
+
+[ Upstream commit 7044ed6749c8a7d49e67b2f07f42da2f29d26be6 ]
+
+Certain platforms may not have the PHY_ENABLE bit set on power-on reset.
+Update the current sequence to explicitly set the PHY_ENABLE bit. This
+ensures that, regardless of the platform, the PHY is properly
+enabled.
+
+Signed-off-by: Ronak Raheja <ronak.raheja@oss.qualcomm.com>
+Signed-off-by: Wesley Cheng <wesley.cheng@oss.qualcomm.com>
+Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com>
+Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
+Link: https://patch.msgid.link/20250920032158.242725-1-wesley.cheng@oss.qualcomm.com
+Signed-off-by: Vinod Koul <vkoul@kernel.org>
+Stable-dep-of: 520a98bdf7ae ("phy: qcom: m31-eusb2: clear PLL_EN during init")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/phy/qualcomm/phy-qcom-m31-eusb2.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/phy/qualcomm/phy-qcom-m31-eusb2.c
++++ b/drivers/phy/qualcomm/phy-qcom-m31-eusb2.c
+@@ -25,6 +25,7 @@
+ #define POR BIT(1)
+
+ #define USB_PHY_HS_PHY_CTRL_COMMON0 (0x54)
++#define PHY_ENABLE BIT(0)
+ #define SIDDQ_SEL BIT(1)
+ #define SIDDQ BIT(2)
+ #define FSEL GENMASK(6, 4)
+@@ -81,6 +82,7 @@ struct m31_eusb2_priv_data {
+ static const struct m31_phy_tbl_entry m31_eusb2_setup_tbl[] = {
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_CFG0, UTMI_PHY_CMN_CTRL_OVERRIDE_EN, 1),
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_UTMI_CTRL5, POR, 1),
++ M31_EUSB_PHY_INIT_CFG(USB_PHY_HS_PHY_CTRL_COMMON0, PHY_ENABLE, 1),
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_CFG1, PLL_EN, 1),
+ M31_EUSB_PHY_INIT_CFG(USB_PHY_FSEL_SEL, FSEL_SEL, 1),
+ };
sched_ext-documentation-clarify-ops.dispatch-role-in-task-lifecycle.patch
scsi-sd-fix-missing-put_disk-when-device_add-disk_dev-fails.patch
seg6-fix-seg6-lwtunnel-output-redirect-for-l2-reduced-encap-mode.patch
+mm-prevent-droppable-mappings-from-being-locked.patch
+arm64-mm-simplify-check-in-arch_kfence_init_pool.patch
+arm64-mm-fix-rodata-full-block-mapping-support-for-realm-guests.patch
+lib-test_hmm-evict-device-pages-on-file-close-to-avoid-use-after-free.patch
+mei-me-use-pci_device_data-macro.patch
+mei-me-add-nova-lake-point-h-did.patch
+phy-qcom-m31-eusb2-update-init-sequence-to-set-phy_enable.patch
+phy-qcom-m31-eusb2-clear-pll_en-during-init.patch
+wifi-mt76-mt792x-describe-usb-wfsys-reset-with-a-descriptor.patch
+wifi-mt76-mt792x-fix-mt7925u-usb-wfsys-reset-handling.patch
+media-rc-ttusbir-respect-dma-coherency-rules.patch
+media-rc-igorplugusb-heed-coherency-rules.patch
+iio-frequency-admv1013-add-dev-variable.patch
+iio-frequency-admv1013-fix-null-pointer-dereference-on-str.patch
+net-qrtr-ns-limit-the-maximum-server-registration-per-node.patch
+net-qrtr-ns-limit-the-maximum-number-of-lookups.patch
+net-qrtr-ns-limit-the-total-number-of-nodes.patch
--- /dev/null
+From stable+bounces-242159-greg=kroah.com@vger.kernel.org Thu Apr 30 18:37:35 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 30 Apr 2026 12:31:34 -0400
+Subject: wifi: mt76: mt792x: describe USB WFSYS reset with a descriptor
+To: stable@vger.kernel.org
+Cc: Sean Wang <sean.wang@mediatek.com>, Felix Fietkau <nbd@nbd.name>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260430163135.1837241-1-sashal@kernel.org>
+
+From: Sean Wang <sean.wang@mediatek.com>
+
+[ Upstream commit e6f48512c1ceebcd1ce6bb83df3b3d56a261507d ]
+
+Prepare mt792xu_wfsys_reset() for chips that share the same USB WFSYS
+reset flow but use different register definitions.
+
+This is a pure refactor of the current mt7921u path and keeps the reset
+sequence unchanged.
+
+Signed-off-by: Sean Wang <sean.wang@mediatek.com>
+Link: https://patch.msgid.link/20260311002825.15502-1-sean.wang@kernel.org
+Signed-off-by: Felix Fietkau <nbd@nbd.name>
+Stable-dep-of: 56154fef47d1 ("wifi: mt76: mt792x: fix mt7925u USB WFSYS reset handling")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/mediatek/mt76/mt792x_usb.c | 40 +++++++++++++++++++-----
+ 1 file changed, 32 insertions(+), 8 deletions(-)
+
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
+@@ -206,6 +206,24 @@ static void mt792xu_epctl_rst_opt(struct
+ mt792xu_uhw_wr(&dev->mt76, MT_SSUSB_EPCTL_CSR_EP_RST_OPT, val);
+ }
+
++struct mt792xu_wfsys_desc {
++ u32 rst_reg;
++ u32 done_reg;
++ u32 done_mask;
++ u32 done_val;
++ u32 delay_ms;
++ bool need_status_sel;
++};
++
++static const struct mt792xu_wfsys_desc mt7921_wfsys_desc = {
++ .rst_reg = MT_CBTOP_RGU_WF_SUBSYS_RST,
++ .done_reg = MT_UDMA_CONN_INFRA_STATUS,
++ .done_mask = MT_UDMA_CONN_WFSYS_INIT_DONE,
++ .done_val = MT_UDMA_CONN_WFSYS_INIT_DONE,
++ .delay_ms = 0,
++ .need_status_sel = true,
++};
++
+ int mt792xu_dma_init(struct mt792x_dev *dev, bool resume)
+ {
+ int err;
+@@ -236,25 +254,31 @@ EXPORT_SYMBOL_GPL(mt792xu_dma_init);
+
+ int mt792xu_wfsys_reset(struct mt792x_dev *dev)
+ {
++ const struct mt792xu_wfsys_desc *desc = &mt7921_wfsys_desc;
+ u32 val;
+ int i;
+
+ mt792xu_epctl_rst_opt(dev, false);
+
+- val = mt792xu_uhw_rr(&dev->mt76, MT_CBTOP_RGU_WF_SUBSYS_RST);
++ val = mt792xu_uhw_rr(&dev->mt76, desc->rst_reg);
+ val |= MT_CBTOP_RGU_WF_SUBSYS_RST_WF_WHOLE_PATH;
+- mt792xu_uhw_wr(&dev->mt76, MT_CBTOP_RGU_WF_SUBSYS_RST, val);
++ mt792xu_uhw_wr(&dev->mt76, desc->rst_reg, val);
+
+- usleep_range(10, 20);
++ if (desc->delay_ms)
++ msleep(desc->delay_ms);
++ else
++ usleep_range(10, 20);
+
+- val = mt792xu_uhw_rr(&dev->mt76, MT_CBTOP_RGU_WF_SUBSYS_RST);
++ val = mt792xu_uhw_rr(&dev->mt76, desc->rst_reg);
+ val &= ~MT_CBTOP_RGU_WF_SUBSYS_RST_WF_WHOLE_PATH;
+- mt792xu_uhw_wr(&dev->mt76, MT_CBTOP_RGU_WF_SUBSYS_RST, val);
++ mt792xu_uhw_wr(&dev->mt76, desc->rst_reg, val);
++
++ if (desc->need_status_sel)
++ mt792xu_uhw_wr(&dev->mt76, MT_UDMA_CONN_INFRA_STATUS_SEL, 0);
+
+- mt792xu_uhw_wr(&dev->mt76, MT_UDMA_CONN_INFRA_STATUS_SEL, 0);
+ for (i = 0; i < MT792x_WFSYS_INIT_RETRY_COUNT; i++) {
+- val = mt792xu_uhw_rr(&dev->mt76, MT_UDMA_CONN_INFRA_STATUS);
+- if (val & MT_UDMA_CONN_WFSYS_INIT_DONE)
++ val = mt792xu_uhw_rr(&dev->mt76, desc->done_reg);
++ if ((val & desc->done_mask) == desc->done_val)
+ break;
+
+ msleep(100);
--- /dev/null
+From stable+bounces-242160-greg=kroah.com@vger.kernel.org Thu Apr 30 18:31:44 2026
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 30 Apr 2026 12:31:35 -0400
+Subject: wifi: mt76: mt792x: fix mt7925u USB WFSYS reset handling
+To: stable@vger.kernel.org
+Cc: Sean Wang <sean.wang@mediatek.com>, Felix Fietkau <nbd@nbd.name>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20260430163135.1837241-2-sashal@kernel.org>
+
+From: Sean Wang <sean.wang@mediatek.com>
+
+[ Upstream commit 56154fef47d104effa9f29ed3db4f805cbc0d640 ]
+
+mt7925u uses different reset/status registers from mt7921u. Reusing the
+mt7921u register set causes the WFSYS reset to fail.
+
+Add a chip-specific descriptor in mt792xu_wfsys_reset() to select the
+correct registers and fix mt7925u failing to initialize after a warm
+reboot.
+
+Fixes: d28e1a48952e ("wifi: mt76: mt792x: introduce mt792x-usb module")
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Wang <sean.wang@mediatek.com>
+Link: https://patch.msgid.link/20260311002825.15502-2-sean.wang@kernel.org
+Signed-off-by: Felix Fietkau <nbd@nbd.name>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/mediatek/mt76/mt792x_regs.h | 4 ++++
+ drivers/net/wireless/mediatek/mt76/mt792x_usb.c | 13 ++++++++++++-
+ 2 files changed, 16 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_regs.h
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_regs.h
+@@ -390,6 +390,10 @@
+ #define MT_CBTOP_RGU_WF_SUBSYS_RST MT_CBTOP_RGU(0x600)
+ #define MT_CBTOP_RGU_WF_SUBSYS_RST_WF_WHOLE_PATH BIT(0)
+
++#define MT7925_CBTOP_RGU_WF_SUBSYS_RST 0x70028600
++#define MT7925_WFSYS_INIT_DONE_ADDR 0x184c1604
++#define MT7925_WFSYS_INIT_DONE 0x00001d1e
++
+ #define MT_HW_BOUND 0x70010020
+ #define MT_HW_CHIPID 0x70010200
+ #define MT_HW_REV 0x70010204
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
+@@ -224,6 +224,15 @@ static const struct mt792xu_wfsys_desc m
+ .need_status_sel = true,
+ };
+
++static const struct mt792xu_wfsys_desc mt7925_wfsys_desc = {
++ .rst_reg = MT7925_CBTOP_RGU_WF_SUBSYS_RST,
++ .done_reg = MT7925_WFSYS_INIT_DONE_ADDR,
++ .done_mask = U32_MAX,
++ .done_val = MT7925_WFSYS_INIT_DONE,
++ .delay_ms = 20,
++ .need_status_sel = false,
++};
++
+ int mt792xu_dma_init(struct mt792x_dev *dev, bool resume)
+ {
+ int err;
+@@ -254,7 +263,9 @@ EXPORT_SYMBOL_GPL(mt792xu_dma_init);
+
+ int mt792xu_wfsys_reset(struct mt792x_dev *dev)
+ {
+- const struct mt792xu_wfsys_desc *desc = &mt7921_wfsys_desc;
++ const struct mt792xu_wfsys_desc *desc = is_mt7925(&dev->mt76) ?
++ &mt7925_wfsys_desc :
++ &mt7921_wfsys_desc;
+ u32 val;
+ int i;
+