--- /dev/null
+From cb128d20b87425da3ea7d5857ab66d6f351ae34c Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 25 Mar 2024 11:41:58 -0700
+Subject: clk: Get runtime PM before walking tree during disable_unused
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit e581cf5d216289ef292d1a4036d53ce90e122469 ]
+
+Doug reported [1] the following hung task:
+
+ INFO: task swapper/0:1 blocked for more than 122 seconds.
+ Not tainted 5.15.149-21875-gf795ebc40eb8 #1
+ "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ task:swapper/0 state:D stack: 0 pid: 1 ppid: 0 flags:0x00000008
+ Call trace:
+ __switch_to+0xf4/0x1f4
+ __schedule+0x418/0xb80
+ schedule+0x5c/0x10c
+ rpm_resume+0xe0/0x52c
+ rpm_resume+0x178/0x52c
+ __pm_runtime_resume+0x58/0x98
+ clk_pm_runtime_get+0x30/0xb0
+ clk_disable_unused_subtree+0x58/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused_subtree+0x38/0x208
+ clk_disable_unused+0x4c/0xe4
+ do_one_initcall+0xcc/0x2d8
+ do_initcall_level+0xa4/0x148
+ do_initcalls+0x5c/0x9c
+ do_basic_setup+0x24/0x30
+ kernel_init_freeable+0xec/0x164
+ kernel_init+0x28/0x120
+ ret_from_fork+0x10/0x20
+ INFO: task kworker/u16:0:9 blocked for more than 122 seconds.
+ Not tainted 5.15.149-21875-gf795ebc40eb8 #1
+ "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
+ task:kworker/u16:0 state:D stack: 0 pid: 9 ppid: 2 flags:0x00000008
+ Workqueue: events_unbound deferred_probe_work_func
+ Call trace:
+ __switch_to+0xf4/0x1f4
+ __schedule+0x418/0xb80
+ schedule+0x5c/0x10c
+ schedule_preempt_disabled+0x2c/0x48
+ __mutex_lock+0x238/0x488
+ __mutex_lock_slowpath+0x1c/0x28
+ mutex_lock+0x50/0x74
+ clk_prepare_lock+0x7c/0x9c
+ clk_core_prepare_lock+0x20/0x44
+ clk_prepare+0x24/0x30
+ clk_bulk_prepare+0x40/0xb0
+ mdss_runtime_resume+0x54/0x1c8
+ pm_generic_runtime_resume+0x30/0x44
+ __genpd_runtime_resume+0x68/0x7c
+ genpd_runtime_resume+0x108/0x1f4
+ __rpm_callback+0x84/0x144
+ rpm_callback+0x30/0x88
+ rpm_resume+0x1f4/0x52c
+ rpm_resume+0x178/0x52c
+ __pm_runtime_resume+0x58/0x98
+ __device_attach+0xe0/0x170
+ device_initial_probe+0x1c/0x28
+ bus_probe_device+0x3c/0x9c
+ device_add+0x644/0x814
+ mipi_dsi_device_register_full+0xe4/0x170
+ devm_mipi_dsi_device_register_full+0x28/0x70
+ ti_sn_bridge_probe+0x1dc/0x2c0
+ auxiliary_bus_probe+0x4c/0x94
+ really_probe+0xcc/0x2c8
+ __driver_probe_device+0xa8/0x130
+ driver_probe_device+0x48/0x110
+ __device_attach_driver+0xa4/0xcc
+ bus_for_each_drv+0x8c/0xd8
+ __device_attach+0xf8/0x170
+ device_initial_probe+0x1c/0x28
+ bus_probe_device+0x3c/0x9c
+ deferred_probe_work_func+0x9c/0xd8
+ process_one_work+0x148/0x518
+ worker_thread+0x138/0x350
+ kthread+0x138/0x1e0
+ ret_from_fork+0x10/0x20
+
+The first thread is walking the clk tree and calling
+clk_pm_runtime_get() to power on devices required to read the clk
+hardware via struct clk_ops::is_enabled(). This thread holds the clk
+prepare_lock, and is trying to runtime PM resume a device, when it finds
+that the device is in the process of resuming so the thread schedule()s
+away waiting for the device to finish resuming before continuing. The
+second thread is runtime PM resuming the same device, but the runtime
+resume callback is calling clk_prepare(), trying to grab the
+prepare_lock waiting on the first thread.
+
+This is a classic ABBA deadlock. To properly fix the deadlock, we must
+never runtime PM resume or suspend a device with the clk prepare_lock
+held. Actually doing that is near impossible today because the global
+prepare_lock would have to be dropped in the middle of the tree, the
+device runtime PM resumed/suspended, and then the prepare_lock grabbed
+again to ensure consistency of the clk tree topology. If anything
+changes with the clk tree in the meantime, we've lost and will need to
+start the operation all over again.
+
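+As a rough sketch (reconstructed from the traces above, not quoted from
+the report), the inverted lock ordering is:
+
+  Thread A: clk_disable_unused()      Thread B: runtime resume of the
+                                                same clk provider device
+    clk_prepare_lock()                  rpm_resume(dev)
+    clk_disable_unused_subtree()          mdss_runtime_resume()
+      clk_pm_runtime_get(core)              clk_bulk_prepare()
+        rpm_resume(dev)                       clk_prepare()
+        ... waits for B ...                     clk_prepare_lock()
+                                                ... waits for A ...
+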
+Luckily, most of the time we're simply incrementing or decrementing the
+runtime PM count on an active device, so we don't have the chance to
+schedule away with the prepare_lock held. Let's fix this immediate
+problem that can be triggered more easily by simply booting on Qualcomm
+sc7180.
+
+Introduce a list of clk_core structures that have been registered, or
+are in the process of being registered, that require runtime PM to
+operate. Iterate this list and call clk_pm_runtime_get() on each of them
+without holding the prepare_lock during clk_disable_unused(). This way
+we can be certain that the runtime PM state of the devices will be
+active and resumed so we can't schedule away while walking the clk tree
+with the prepare_lock held. Similarly, call clk_pm_runtime_put() without
+the prepare_lock held to properly drop the runtime PM reference. We
+remove the calls to clk_pm_runtime_{get,put}() in this path because
+they're superfluous now that we know the devices are runtime resumed.
+
+Reported-by: Douglas Anderson <dianders@chromium.org>
+Closes: https://lore.kernel.org/all/20220922084322.RFC.2.I375b6b9e0a0a5348962f004beb3dafee6a12dfbb@changeid/ [1]
+Closes: https://issuetracker.google.com/328070191
+Cc: Marek Szyprowski <m.szyprowski@samsung.com>
+Cc: Ulf Hansson <ulf.hansson@linaro.org>
+Cc: Krzysztof Kozlowski <krzk@kernel.org>
+Fixes: 9a34b45397e5 ("clk: Add support for runtime PM")
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20240325184204.745706-5-sboyd@kernel.org
+Reviewed-by: Douglas Anderson <dianders@chromium.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 117 +++++++++++++++++++++++++++++++++++++++++-----
+ 1 file changed, 105 insertions(+), 12 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index d2b6c374c3f95..a0927c7f83d60 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -37,6 +37,10 @@ static HLIST_HEAD(clk_root_list);
+ static HLIST_HEAD(clk_orphan_list);
+ static LIST_HEAD(clk_notifier_list);
+
++/* List of registered clks that use runtime PM */
++static HLIST_HEAD(clk_rpm_list);
++static DEFINE_MUTEX(clk_rpm_list_lock);
++
+ static const struct hlist_head *all_lists[] = {
+ &clk_root_list,
+ &clk_orphan_list,
+@@ -59,6 +63,7 @@ struct clk_core {
+ struct clk_hw *hw;
+ struct module *owner;
+ struct device *dev;
++ struct hlist_node rpm_node;
+ struct device_node *of_node;
+ struct clk_core *parent;
+ struct clk_parent_map *parents;
+@@ -129,6 +134,89 @@ static void clk_pm_runtime_put(struct clk_core *core)
+ pm_runtime_put_sync(core->dev);
+ }
+
++/**
++ * clk_pm_runtime_get_all() - Runtime "get" all clk provider devices
++ *
++ * Call clk_pm_runtime_get() on all runtime PM enabled clks in the clk tree so
++ * that disabling unused clks avoids a deadlock where a device is runtime PM
++ * resuming/suspending and the runtime PM callback is trying to grab the
++ * prepare_lock for something like clk_prepare_enable() while
++ * clk_disable_unused_subtree() holds the prepare_lock and is trying to runtime
++ * PM resume/suspend the device as well.
++ *
++ * Context: Acquires the 'clk_rpm_list_lock' and returns with the lock held on
++ * success. Otherwise the lock is released on failure.
++ *
++ * Return: 0 on success, negative errno otherwise.
++ */
++static int clk_pm_runtime_get_all(void)
++{
++ int ret;
++ struct clk_core *core, *failed;
++
++ /*
++ * Grab the list lock to prevent any new clks from being registered
++ * or unregistered until clk_pm_runtime_put_all().
++ */
++ mutex_lock(&clk_rpm_list_lock);
++
++ /*
++ * Runtime PM "get" all the devices that are needed for the clks
++ * currently registered. Do this without holding the prepare_lock, to
++ * avoid the deadlock.
++ */
++ hlist_for_each_entry(core, &clk_rpm_list, rpm_node) {
++ ret = clk_pm_runtime_get(core);
++ if (ret) {
++ failed = core;
++ pr_err("clk: Failed to runtime PM get '%s' for clk '%s'\n",
++ dev_name(failed->dev), failed->name);
++ goto err;
++ }
++ }
++
++ return 0;
++
++err:
++ hlist_for_each_entry(core, &clk_rpm_list, rpm_node) {
++ if (core == failed)
++ break;
++
++ clk_pm_runtime_put(core);
++ }
++ mutex_unlock(&clk_rpm_list_lock);
++
++ return ret;
++}
++
++/**
++ * clk_pm_runtime_put_all() - Runtime "put" all clk provider devices
++ *
++ * Put the runtime PM references taken in clk_pm_runtime_get_all() and release
++ * the 'clk_rpm_list_lock'.
++ */
++static void clk_pm_runtime_put_all(void)
++{
++ struct clk_core *core;
++
++ hlist_for_each_entry(core, &clk_rpm_list, rpm_node)
++ clk_pm_runtime_put(core);
++ mutex_unlock(&clk_rpm_list_lock);
++}
++
++static void clk_pm_runtime_init(struct clk_core *core)
++{
++ struct device *dev = core->dev;
++
++ if (dev && pm_runtime_enabled(dev)) {
++ core->rpm_enabled = true;
++
++ mutex_lock(&clk_rpm_list_lock);
++ hlist_add_head(&core->rpm_node, &clk_rpm_list);
++ mutex_unlock(&clk_rpm_list_lock);
++ }
++}
++
+ /*** locking ***/
+ static void clk_prepare_lock(void)
+ {
+@@ -1231,9 +1319,6 @@ static void __init clk_unprepare_unused_subtree(struct clk_core *core)
+ if (core->flags & CLK_IGNORE_UNUSED)
+ return;
+
+- if (clk_pm_runtime_get(core))
+- return;
+-
+ if (clk_core_is_prepared(core)) {
+ trace_clk_unprepare(core);
+ if (core->ops->unprepare_unused)
+@@ -1242,8 +1327,6 @@ static void __init clk_unprepare_unused_subtree(struct clk_core *core)
+ core->ops->unprepare(core->hw);
+ trace_clk_unprepare_complete(core);
+ }
+-
+- clk_pm_runtime_put(core);
+ }
+
+ static void __init clk_disable_unused_subtree(struct clk_core *core)
+@@ -1259,9 +1342,6 @@ static void __init clk_disable_unused_subtree(struct clk_core *core)
+ if (core->flags & CLK_OPS_PARENT_ENABLE)
+ clk_core_prepare_enable(core->parent);
+
+- if (clk_pm_runtime_get(core))
+- goto unprepare_out;
+-
+ flags = clk_enable_lock();
+
+ if (core->enable_count)
+@@ -1286,8 +1366,6 @@ static void __init clk_disable_unused_subtree(struct clk_core *core)
+
+ unlock_out:
+ clk_enable_unlock(flags);
+- clk_pm_runtime_put(core);
+-unprepare_out:
+ if (core->flags & CLK_OPS_PARENT_ENABLE)
+ clk_core_disable_unprepare(core->parent);
+ }
+@@ -1303,6 +1381,7 @@ __setup("clk_ignore_unused", clk_ignore_unused_setup);
+ static int __init clk_disable_unused(void)
+ {
+ struct clk_core *core;
++ int ret;
+
+ if (clk_ignore_unused) {
+ pr_warn("clk: Not disabling unused clocks\n");
+@@ -1311,6 +1390,13 @@ static int __init clk_disable_unused(void)
+
+ pr_info("clk: Disabling unused clocks\n");
+
++ ret = clk_pm_runtime_get_all();
++ if (ret)
++ return ret;
++ /*
++ * Grab the prepare lock to keep the clk topology stable while iterating
++ * over clks.
++ */
+ clk_prepare_lock();
+
+ hlist_for_each_entry(core, &clk_root_list, child_node)
+@@ -1327,6 +1413,8 @@ static int __init clk_disable_unused(void)
+
+ clk_prepare_unlock();
+
++ clk_pm_runtime_put_all();
++
+ return 0;
+ }
+ late_initcall_sync(clk_disable_unused);
+@@ -3846,6 +3934,12 @@ static void __clk_release(struct kref *ref)
+ {
+ struct clk_core *core = container_of(ref, struct clk_core, ref);
+
++ if (core->rpm_enabled) {
++ mutex_lock(&clk_rpm_list_lock);
++ hlist_del(&core->rpm_node);
++ mutex_unlock(&clk_rpm_list_lock);
++ }
++
+ clk_core_free_parent_map(core);
+ kfree_const(core->name);
+ kfree(core);
+@@ -3885,9 +3979,8 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ }
+ core->ops = init->ops;
+
+- if (dev && pm_runtime_enabled(dev))
+- core->rpm_enabled = true;
+ core->dev = dev;
++ clk_pm_runtime_init(core);
+ core->of_node = np;
+ if (dev && dev->driver)
+ core->owner = dev->driver->owner;
+--
+2.43.0
+
--- /dev/null
+From b528df28bd6f2d4a773ebacdb957a8df020a85e6 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 25 Mar 2024 11:41:57 -0700
+Subject: clk: Initialize struct clk_core kref earlier
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit 9d05ae531c2cff20d5d527f04e28d28e04379929 ]
+
+Initialize this kref once we allocate memory for the struct clk_core so
+that we can reuse the release function to free any memory associated
+with the structure. This mostly consolidates code, but also clarifies
+that the kref lifetime exists once the container structure (struct
+clk_core) is allocated instead of leaving it in a half-baked state for
+most of __clk_core_init().
+
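+For illustration, the resulting lifetime looks roughly like this (sketch
+only; the real change is in the diff below):
+
+  core = kzalloc(sizeof(*core), GFP_KERNEL);
+  if (!core) {
+          ret = -ENOMEM;
+          goto fail_out;
+  }
+  kref_init(&core->ref);          /* ref is live from this point on */
+  ...
+fail_name:
+  kref_put(&core->ref, __clk_release);  /* frees name, parent map, core */
+fail_out:
+  return ERR_PTR(ret);
+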
+Reviewed-by: Douglas Anderson <dianders@chromium.org>
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20240325184204.745706-4-sboyd@kernel.org
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 28 +++++++++++++---------------
+ 1 file changed, 13 insertions(+), 15 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index bcaaadb0fed8d..d2b6c374c3f95 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3632,8 +3632,6 @@ static int __clk_core_init(struct clk_core *core)
+ }
+
+ clk_core_reparent_orphans_nolock();
+-
+- kref_init(&core->ref);
+ out:
+ clk_pm_runtime_put(core);
+ unlock:
+@@ -3843,6 +3841,16 @@ static void clk_core_free_parent_map(struct clk_core *core)
+ kfree(core->parents);
+ }
+
++/* Free memory allocated for a struct clk_core */
++static void __clk_release(struct kref *ref)
++{
++ struct clk_core *core = container_of(ref, struct clk_core, ref);
++
++ clk_core_free_parent_map(core);
++ kfree_const(core->name);
++ kfree(core);
++}
++
+ static struct clk *
+ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ {
+@@ -3863,6 +3871,8 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ goto fail_out;
+ }
+
++ kref_init(&core->ref);
++
+ core->name = kstrdup_const(init->name, GFP_KERNEL);
+ if (!core->name) {
+ ret = -ENOMEM;
+@@ -3917,12 +3927,10 @@ __clk_register(struct device *dev, struct device_node *np, struct clk_hw *hw)
+ hw->clk = NULL;
+
+ fail_create_clk:
+- clk_core_free_parent_map(core);
+ fail_parents:
+ fail_ops:
+- kfree_const(core->name);
+ fail_name:
+- kfree(core);
++ kref_put(&core->ref, __clk_release);
+ fail_out:
+ return ERR_PTR(ret);
+ }
+@@ -4002,16 +4010,6 @@ int of_clk_hw_register(struct device_node *node, struct clk_hw *hw)
+ }
+ EXPORT_SYMBOL_GPL(of_clk_hw_register);
+
+-/* Free memory allocated for a clock. */
+-static void __clk_release(struct kref *ref)
+-{
+- struct clk_core *core = container_of(ref, struct clk_core, ref);
+-
+- clk_core_free_parent_map(core);
+- kfree_const(core->name);
+- kfree(core);
+-}
+-
+ /*
+ * Empty clk_ops for unregistered clocks. These are used temporarily
+ * after clk_unregister() was called on a clock and until last clock
+--
+2.43.0
+
--- /dev/null
+From af361d657bb82c3259014eefa357fc3f35c50bc5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 17 Feb 2022 14:05:53 -0800
+Subject: clk: Mark 'all_lists' as const
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit 75061a6ff49ba3482c6319ded0c26e6a526b0967 ]
+
+This list array doesn't change at runtime. Mark it const so it can be
+moved to RO memory.
+
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20220217220554.2711696-2-sboyd@kernel.org
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 67a882e03dfdd..1043addcd38f6 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -37,7 +37,7 @@ static HLIST_HEAD(clk_root_list);
+ static HLIST_HEAD(clk_orphan_list);
+ static LIST_HEAD(clk_notifier_list);
+
+-static struct hlist_head *all_lists[] = {
++static const struct hlist_head *all_lists[] = {
+ &clk_root_list,
+ &clk_orphan_list,
+ NULL,
+@@ -4063,7 +4063,7 @@ static void clk_core_evict_parent_cache_subtree(struct clk_core *root,
+ /* Remove this clk from all parent caches */
+ static void clk_core_evict_parent_cache(struct clk_core *core)
+ {
+- struct hlist_head **lists;
++ const struct hlist_head **lists;
+ struct clk_core *root;
+
+ lockdep_assert_held(&prepare_lock);
+--
+2.43.0
+
--- /dev/null
+From ae39f24fad3a5c313ac446710637dd0088101d62 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 7 Mar 2023 14:29:28 +0100
+Subject: clk: Print an info line before disabling unused clocks
+
+From: Konrad Dybcio <konrad.dybcio@linaro.org>
+
+[ Upstream commit 12ca59b91d04df32e41be5a52f0cabba912c11de ]
+
+Currently, the regulator framework informs us before calling into
+their unused cleanup paths, which eases at least some debugging. The
+same could be beneficial for clocks, so that random shutdowns shortly
+after most initcalls are done can be less of a guess.
+
+Add a pr_info before disabling unused clocks to do so.
+
+Reviewed-by: Marijn Suijten <marijn.suijten@somainline.org>
+Signed-off-by: Konrad Dybcio <konrad.dybcio@linaro.org>
+Link: https://lore.kernel.org/r/20230307132928.3887737-1-konrad.dybcio@linaro.org
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index f6be526005bbe..bcaaadb0fed8d 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1309,6 +1309,8 @@ static int __init clk_disable_unused(void)
+ return 0;
+ }
+
++ pr_info("clk: Disabling unused clocks\n");
++
+ clk_prepare_lock();
+
+ hlist_for_each_entry(core, &clk_root_list, child_node)
+--
+2.43.0
+
--- /dev/null
+From 039939c7b8bb33613b661e2c98800a5f04e38264 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 30 Jun 2022 18:12:04 +0300
+Subject: clk: remove extra empty line
+
+From: Claudiu Beznea <claudiu.beznea@microchip.com>
+
+[ Upstream commit 79806d338829b2bf903480428d8ce5aab8e2d24b ]
+
+Remove extra empty line.
+
+Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
+Link: https://lore.kernel.org/r/20220630151205.3935560-1-claudiu.beznea@microchip.com
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Stable-dep-of: e581cf5d2162 ("clk: Get runtime PM before walking tree during disable_unused")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 1043addcd38f6..f6be526005bbe 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3631,7 +3631,6 @@ static int __clk_core_init(struct clk_core *core)
+
+ clk_core_reparent_orphans_nolock();
+
+-
+ kref_init(&core->ref);
+ out:
+ clk_pm_runtime_put(core);
+--
+2.43.0
+
--- /dev/null
+From 0e96fcb748ad52710c09b6b44c28149264c03cb4 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 25 Mar 2024 11:41:55 -0700
+Subject: clk: Remove prepare_lock hold assertion in __clk_release()
+
+From: Stephen Boyd <sboyd@kernel.org>
+
+[ Upstream commit 8358a76cfb47c9a5af627a0c4e7168aa14fa25f6 ]
+
+Removing this assertion lets us move the kref_put() call outside the
+prepare_lock section. We don't need to hold the prepare_lock here to
+free memory and destroy the clk_core structure. We've already unlinked
+the clk from the clk tree and by the time the release function runs
+nothing holds a reference to the clk_core anymore so anything with the
+pointer can't access the memory that's being freed anyway. Way back in
+commit 496eadf821c2 ("clk: Use lockdep asserts to find missing hold of
+prepare_lock") we didn't need to have this assertion either.
+
+Fixes: 496eadf821c2 ("clk: Use lockdep asserts to find missing hold of prepare_lock")
+Cc: Krzysztof Kozlowski <krzk@kernel.org>
+Reviewed-by: Douglas Anderson <dianders@chromium.org>
+Signed-off-by: Stephen Boyd <sboyd@kernel.org>
+Link: https://lore.kernel.org/r/20240325184204.745706-2-sboyd@kernel.org
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/clk/clk.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index aa2f1f8aa2994..67a882e03dfdd 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -4006,8 +4006,6 @@ static void __clk_release(struct kref *ref)
+ {
+ struct clk_core *core = container_of(ref, struct clk_core, ref);
+
+- lockdep_assert_held(&prepare_lock);
+-
+ clk_core_free_parent_map(core);
+ kfree_const(core->name);
+ kfree(core);
+--
+2.43.0
+
--- /dev/null
+From 1d58626b156f71d001ac27d0294447cefe780de1 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 11 Apr 2024 14:08:52 +0300
+Subject: drm: nv04: Fix out of bounds access
+
+From: Mikhail Kobuk <m.kobuk@ispras.ru>
+
+[ Upstream commit cf92bb778eda7830e79452c6917efa8474a30c1e ]
+
+When Output Resource (dcb->or) value is assigned in
+fabricate_dcb_output(), there may be out of bounds access to
+dac_users array in case dcb->or is zero because ffs(dcb->or) is
+used as index there.
+The 'or' argument of fabricate_dcb_output() must be interpreted as a
+number of bit to set, not value.
+
+Utilize macros from 'enum nouveau_or' in calls instead of hardcoding.
+
+Found by Linux Verification Center (linuxtesting.org) with SVACE.
+
+Fixes: 2e5702aff395 ("drm/nouveau: fabricate DCB encoder table for iMac G4")
+Fixes: 670820c0e6a9 ("drm/nouveau: Workaround incorrect DCB entry on a GeForce3 Ti 200.")
+Signed-off-by: Mikhail Kobuk <m.kobuk@ispras.ru>
+Signed-off-by: Danilo Krummrich <dakr@redhat.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20240411110854.16701-1-m.kobuk@ispras.ru
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/gpu/drm/nouveau/nouveau_bios.c | 13 +++++++------
+ 1 file changed, 7 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bios.c b/drivers/gpu/drm/nouveau/nouveau_bios.c
+index d204ea8a5618e..5cdf0d8d4bc18 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bios.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bios.c
+@@ -23,6 +23,7 @@
+ */
+
+ #include "nouveau_drv.h"
++#include "nouveau_bios.h"
+ #include "nouveau_reg.h"
+ #include "dispnv04/hw.h"
+ #include "nouveau_encoder.h"
+@@ -1672,7 +1673,7 @@ apply_dcb_encoder_quirks(struct drm_device *dev, int idx, u32 *conn, u32 *conf)
+ */
+ if (nv_match_device(dev, 0x0201, 0x1462, 0x8851)) {
+ if (*conn == 0xf2005014 && *conf == 0xffffffff) {
+- fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, 1);
++ fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 1, 1, DCB_OUTPUT_B);
+ return false;
+ }
+ }
+@@ -1758,26 +1759,26 @@ fabricate_dcb_encoder_table(struct drm_device *dev, struct nvbios *bios)
+ #ifdef __powerpc__
+ /* Apple iMac G4 NV17 */
+ if (of_machine_is_compatible("PowerMac4,5")) {
+- fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, 1);
+- fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, 2);
++ fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS, 0, all_heads, DCB_OUTPUT_B);
++ fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG, 1, all_heads, DCB_OUTPUT_C);
+ return;
+ }
+ #endif
+
+ /* Make up some sane defaults */
+ fabricate_dcb_output(dcb, DCB_OUTPUT_ANALOG,
+- bios->legacy.i2c_indices.crt, 1, 1);
++ bios->legacy.i2c_indices.crt, 1, DCB_OUTPUT_B);
+
+ if (nv04_tv_identify(dev, bios->legacy.i2c_indices.tv) >= 0)
+ fabricate_dcb_output(dcb, DCB_OUTPUT_TV,
+ bios->legacy.i2c_indices.tv,
+- all_heads, 0);
++ all_heads, DCB_OUTPUT_A);
+
+ else if (bios->tmds.output0_script_ptr ||
+ bios->tmds.output1_script_ptr)
+ fabricate_dcb_output(dcb, DCB_OUTPUT_TMDS,
+ bios->legacy.i2c_indices.panel,
+- all_heads, 1);
++ all_heads, DCB_OUTPUT_B);
+ }
+
+ static int
+--
+2.43.0
+
--- /dev/null
+From b5a756939611eba7601d44ffb37bf1f58d8c7193 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 4 Apr 2024 13:07:59 +0300
+Subject: drm/panel: visionox-rm69299: don't unregister DSI device
+
+From: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
+
+[ Upstream commit 9e4d3f4f34455abbaa9930bf6b7575a5cd081496 ]
+
+The DSI device for the panel was registered by the DSI host, so it is an
+error to unregister it from the panel driver. Drop the call to
+mipi_dsi_device_unregister().
+
+Fixes: c7f66d32dd43 ("drm/panel: add support for rm69299 visionox panel")
+Reviewed-by: Jessica Zhang <quic_jesszhan@quicinc.com>
+Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
+Link: https://patchwork.freedesktop.org/patch/msgid/20240404-drop-panel-unregister-v1-1-9f56953c5fb9@linaro.org
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/gpu/drm/panel/panel-visionox-rm69299.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/drivers/gpu/drm/panel/panel-visionox-rm69299.c b/drivers/gpu/drm/panel/panel-visionox-rm69299.c
+index eb43503ec97b3..6134432e4918d 100644
+--- a/drivers/gpu/drm/panel/panel-visionox-rm69299.c
++++ b/drivers/gpu/drm/panel/panel-visionox-rm69299.c
+@@ -261,8 +261,6 @@ static int visionox_rm69299_remove(struct mipi_dsi_device *dsi)
+ struct visionox_rm69299 *ctx = mipi_dsi_get_drvdata(dsi);
+
+ mipi_dsi_detach(ctx->dsi);
+- mipi_dsi_device_unregister(ctx->dsi);
+-
+ drm_panel_remove(&ctx->panel);
+ return 0;
+ }
+--
+2.43.0
+
--- /dev/null
+From eafe098bc555680560db39b534bec118e2a005c6 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 10 Apr 2024 09:58:02 +0800
+Subject: kprobes: Fix possible use-after-free issue on kprobe registration
+
+From: Zheng Yejian <zhengyejian1@huawei.com>
+
+commit 325f3fb551f8cd672dbbfc4cf58b14f9ee3fc9e8 upstream.
+
+When unloading a module, its state changes from MODULE_STATE_LIVE to
+MODULE_STATE_GOING to MODULE_STATE_UNFORMED. Each transition takes some
+time. `is_module_text_address()` and `__module_text_address()` work with
+MODULE_STATE_LIVE and MODULE_STATE_GOING.
+If we use `is_module_text_address()` and `__module_text_address()`
+separately, there is a chance that the first one succeeds but the next
+one fails, because module->state becomes MODULE_STATE_UNFORMED between
+those operations.
+
+In `check_kprobe_address_safe()`, if the second `__module_text_address()`
+fails, the failure is ignored because a kernel text address is assumed.
+But it may have failed simply because module->state has changed to
+MODULE_STATE_UNFORMED. In this case, arm_kprobe() will try to modify a
+non-existent module text address (use-after-free).
+
+To fix this problem, we should not use separate `is_module_text_address()`
+and `__module_text_address()` calls, but use only `__module_text_address()`
+once and do `try_module_get(module)`, which only succeeds while the module
+is MODULE_STATE_LIVE.
+
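+A simplified sketch of the difference (illustrative only; the exact
+handling is in the diff below):
+
+  /* Before: racy, the module can become UNFORMED between the lookups. */
+  if (!core_kernel_text(addr) && !is_module_text_address(addr))
+          return -EINVAL;
+  ...
+  mod = __module_text_address(addr);   /* may now return NULL */
+
+  /* After: resolve the module once, then pin it with try_module_get(),
+   * which only succeeds while the module is MODULE_STATE_LIVE. */
+  if (!core_kernel_text(addr)) {
+          mod = __module_text_address(addr);
+          if (!mod)
+                  return -EINVAL;
+  }
+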
+Link: https://lore.kernel.org/all/20240410015802.265220-1-zhengyejian1@huawei.com/
+
+Fixes: 28f6c37a2910 ("kprobes: Forbid probing on trampoline and BPF code areas")
+Cc: stable@vger.kernel.org
+Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+[Fix conflict due to lack dependency
+commit 223a76b268c9 ("kprobes: Fix coding style issues")]
+Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/kprobes.c | 18 ++++++++++++------
+ 1 file changed, 12 insertions(+), 6 deletions(-)
+
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 05d3e156a7d63..dba6541c0fc3c 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1647,10 +1647,17 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ jump_label_lock();
+ preempt_disable();
+
+- /* Ensure it is not in reserved area nor out of text */
+- if (!(core_kernel_text((unsigned long) p->addr) ||
+- is_module_text_address((unsigned long) p->addr)) ||
+- in_gate_area_no_mm((unsigned long) p->addr) ||
++ /* Ensure the address is in a text area, and find a module if exists. */
++ *probed_mod = NULL;
++ if (!core_kernel_text((unsigned long) p->addr)) {
++ *probed_mod = __module_text_address((unsigned long) p->addr);
++ if (!(*probed_mod)) {
++ ret = -EINVAL;
++ goto out;
++ }
++ }
++ /* Ensure it is not in reserved area. */
++ if (in_gate_area_no_mm((unsigned long) p->addr) ||
+ within_kprobe_blacklist((unsigned long) p->addr) ||
+ jump_label_text_reserved(p->addr, p->addr) ||
+ static_call_text_reserved(p->addr, p->addr) ||
+@@ -1660,8 +1667,7 @@ static int check_kprobe_address_safe(struct kprobe *p,
+ goto out;
+ }
+
+- /* Check if are we probing a module */
+- *probed_mod = __module_text_address((unsigned long) p->addr);
++ /* Get module refcount and reject __init functions for loaded modules. */
+ if (*probed_mod) {
+ /*
+ * We must hold a refcount of the probed module while updating
+--
+2.43.0
+
--- /dev/null
+From a0cfa2880cc61669d25cc98fdc85a72880a61679 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 7 Apr 2024 14:56:04 +0800
+Subject: netfilter: nf_tables: Fix potential data-race in
+ __nft_expr_type_get()
+
+From: Ziyang Xuan <william.xuanziyang@huawei.com>
+
+[ Upstream commit f969eb84ce482331a991079ab7a5c4dc3b7f89bf ]
+
+nft_unregister_expr() can run concurrently with __nft_expr_type_get(),
+and there is no protection when iterating over the nf_tables_expressions
+list in __nft_expr_type_get(). Therefore, there is a potential data race
+on the nf_tables_expressions list entries.
+
+Use list_for_each_entry_rcu() to iterate over nf_tables_expressions
+list in __nft_expr_type_get(), and use rcu_read_lock() in the caller
+nft_expr_type_get() to protect the entire type query process.
+
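+The resulting lookup path, in outline (sketch; the real code is in the
+diff below):
+
+  rcu_read_lock();
+  type = __nft_expr_type_get(family, nla);  /* rcu list walk inside */
+  if (type != NULL && try_module_get(type->owner)) {
+          rcu_read_unlock();  /* the module reference now keeps the type alive */
+          return type;
+  }
+  rcu_read_unlock();
+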
+Fixes: ef1f7df9170d ("netfilter: nf_tables: expression ops overloading")
+Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/netfilter/nf_tables_api.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index ab7f7e45b9846..858d09b54eaa4 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2739,7 +2739,7 @@ static const struct nft_expr_type *__nft_expr_type_get(u8 family,
+ {
+ const struct nft_expr_type *type, *candidate = NULL;
+
+- list_for_each_entry(type, &nf_tables_expressions, list) {
++ list_for_each_entry_rcu(type, &nf_tables_expressions, list) {
+ if (!nla_strcmp(nla, type->name)) {
+ if (!type->family && !candidate)
+ candidate = type;
+@@ -2771,9 +2771,13 @@ static const struct nft_expr_type *nft_expr_type_get(struct net *net,
+ if (nla == NULL)
+ return ERR_PTR(-EINVAL);
+
++ rcu_read_lock();
+ type = __nft_expr_type_get(family, nla);
+- if (type != NULL && try_module_get(type->owner))
++ if (type != NULL && try_module_get(type->owner)) {
++ rcu_read_unlock();
+ return type;
++ }
++ rcu_read_unlock();
+
+ lockdep_nfnl_nft_mutex_not_held();
+ #ifdef CONFIG_MODULES
+--
+2.43.0
+
--- /dev/null
+From 373fa197ed1d58c0e39a2341189ebec6e41c6cb8 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 10 Apr 2024 21:05:13 +0200
+Subject: netfilter: nft_set_pipapo: do not free live element
+
+From: Florian Westphal <fw@strlen.de>
+
+[ Upstream commit 3cfc9ec039af60dbd8965ae085b2c2ccdcfbe1cc ]
+
+Pablo reports a crash with large batches of elements with a
+back-to-back add/remove pattern. Quoting Pablo:
+
+ add_elem("00000000") timeout 100 ms
+ ...
+ add_elem("0000000X") timeout 100 ms
+ del_elem("0000000X") <---------------- delete one that was just added
+ ...
+ add_elem("00005000") timeout 100 ms
+
+ 1) nft_pipapo_remove() removes element 0000000X
+ Then, KASAN shows a splat.
+
+Looking at the remove function there is a chance that we will drop a
+rule that maps to a non-deactivated element.
+
+Removal happens in two steps: first we do a lookup for key k, return the
+to-be-removed element, and mark it as inactive in the next generation.
+Then, in a second step, the element gets removed from the set/map.
+
+The _remove function does not work correctly if we have more than one
+element sharing the same key.
+
+This can happen if we insert an element into a set when the set already
+holds an element with same key, but the element mapping to the existing
+key has timed out or is not active in the next generation.
+
+In such a case it is possible that removal will unmap the wrong element.
+If this happens, we will leak the non-deactivated element; it becomes
+unreachable.
+
+The element that got deactivated (and will be freed later) will
+remain reachable in the set data structure, this can result in
+a crash when such an element is retrieved during lookup (stale
+pointer).
+
+Add a check that the fully matching key does in fact map to the element
+that we have marked as inactive in the deactivation step.
+If not, we need to continue searching.
+
+Add a bug/warn trap at the end of the function as well; the remove
+function must never be called with an invisible/unreachable/non-existent
+element.
+
+v2: avoid unneeded temporary variable (Stefano)
+
+Fixes: 3c4287f62044 ("nf_tables: Add set type for arbitrary concatenation of ranges")
+Reported-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Reviewed-by: Stefano Brivio <sbrivio@redhat.com>
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/netfilter/nft_set_pipapo.c | 14 +++++++++-----
+ 1 file changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index b9682e085fcef..5a8521abd8f5c 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1980,6 +1980,8 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+ rules_fx = rules_f0;
+
+ nft_pipapo_for_each_field(f, i, m) {
++ bool last = i == m->field_count - 1;
++
+ if (!pipapo_match_field(f, start, rules_fx,
+ match_start, match_end))
+ break;
+@@ -1992,16 +1994,18 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
+
+ match_start += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
+ match_end += NFT_PIPAPO_GROUPS_PADDED_SIZE(f);
+- }
+
+- if (i == m->field_count) {
+- priv->dirty = true;
+- pipapo_drop(m, rulemap);
+- return;
++ if (last && f->mt[rulemap[i].to].e == e) {
++ priv->dirty = true;
++ pipapo_drop(m, rulemap);
++ return;
++ }
+ }
+
+ first_rule += rules_f0;
+ }
++
++ WARN_ON_ONCE(1); /* elem_priv not found */
+ }
+
+ /**
+--
+2.43.0
+
--- /dev/null
+From 4e330cf8e3c2b3326f29ce9bc1c3d0f7ec93ac0e Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 22 Mar 2024 13:20:49 +0200
+Subject: RDMA/cm: Print the old state when cm_destroy_id gets timeout
+
+From: Mark Zhang <markzhang@nvidia.com>
+
+[ Upstream commit b68e1acb5834ed1a2ad42d9d002815a8bae7c0b6 ]
+
+The old state is helpful for debugging, as the current state is always
+IB_CM_IDLE when timeout happens.
+
+Fixes: 96d9cbe2f2ff ("RDMA/cm: add timeout to cm_destroy_id wait")
+Signed-off-by: Mark Zhang <markzhang@nvidia.com>
+Link: https://lore.kernel.org/r/20240322112049.2022994-1-markzhang@nvidia.com
+Signed-off-by: Leon Romanovsky <leon@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/infiniband/core/cm.c | 11 +++++++----
+ 1 file changed, 7 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 2a30b25c5e7e5..26c66685a43dd 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -1057,23 +1057,26 @@ static void cm_reset_to_idle(struct cm_id_private *cm_id_priv)
+ }
+ }
+
+-static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id)
++static noinline void cm_destroy_id_wait_timeout(struct ib_cm_id *cm_id,
++ enum ib_cm_state old_state)
+ {
+ struct cm_id_private *cm_id_priv;
+
+ cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+- pr_err("%s: cm_id=%p timed out. state=%d refcnt=%d\n", __func__,
+- cm_id, cm_id->state, refcount_read(&cm_id_priv->refcount));
++ pr_err("%s: cm_id=%p timed out. state %d -> %d, refcnt=%d\n", __func__,
++ cm_id, old_state, cm_id->state, refcount_read(&cm_id_priv->refcount));
+ }
+
+ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
+ {
+ struct cm_id_private *cm_id_priv;
++ enum ib_cm_state old_state;
+ struct cm_work *work;
+ int ret;
+
+ cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+ spin_lock_irq(&cm_id_priv->lock);
++ old_state = cm_id->state;
+ retest:
+ switch (cm_id->state) {
+ case IB_CM_LISTEN:
+@@ -1187,7 +1190,7 @@ static void cm_destroy_id(struct ib_cm_id *cm_id, int err)
+ msecs_to_jiffies(
+ CM_DESTROY_ID_WAIT_TIMEOUT));
+ if (!ret) /* timeout happened */
+- cm_destroy_id_wait_timeout(cm_id);
++ cm_destroy_id_wait_timeout(cm_id, old_state);
+ } while (!ret);
+
+ while ((work = cm_dequeue_work(cm_id_priv)) != NULL)
+--
+2.43.0
+
--- /dev/null
+From f0b2ba257dfdd5fbd7672fea4ace851d853e11df Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 3 Apr 2024 12:03:46 +0300
+Subject: RDMA/mlx5: Fix port number for counter query in multi-port
+ configuration
+
+From: Michael Guralnik <michaelgur@nvidia.com>
+
+[ Upstream commit be121ffb384f53e966ee7299ffccc6eeb61bc73d ]
+
+Set the correct port when querying PPCNT in a multi-port configuration.
+Distinguish the case where switchdev mode was enabled from the multi-port
+configuration, and don't overwrite the queried port to 1 in the
+multi-port case.
+
+Fixes: 74b30b3ad5ce ("RDMA/mlx5: Set local port to one when accessing counters")
+Signed-off-by: Michael Guralnik <michaelgur@nvidia.com>
+Link: https://lore.kernel.org/r/9bfcc8ade958b760a51408c3ad654a01b11f7d76.1712134988.git.leon@kernel.org
+Signed-off-by: Leon Romanovsky <leon@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/infiniband/hw/mlx5/mad.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/infiniband/hw/mlx5/mad.c b/drivers/infiniband/hw/mlx5/mad.c
+index cca7a4a6bd82d..7f12a9b05c872 100644
+--- a/drivers/infiniband/hw/mlx5/mad.c
++++ b/drivers/infiniband/hw/mlx5/mad.c
+@@ -166,7 +166,8 @@ static int process_pma_cmd(struct mlx5_ib_dev *dev, u8 port_num,
+ mdev = dev->mdev;
+ mdev_port_num = 1;
+ }
+- if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1) {
++ if (MLX5_CAP_GEN(dev->mdev, num_ports) == 1 &&
++ !mlx5_core_mp_enabled(mdev)) {
+ /* set local port to one for Function-Per-Port HCA. */
+ mdev = dev->mdev;
+ mdev_port_num = 1;
+--
+2.43.0
+
--- /dev/null
+From 24cb4126c3565cc2f2b7c6c165d0ca811ac75369 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 07:51:40 +0100
+Subject: RDMA/rxe: Fix the problem "mutex_destroy missing"
+
+From: Yanjun.Zhu <yanjun.zhu@linux.dev>
+
+[ Upstream commit 481047d7e8391d3842ae59025806531cdad710d9 ]
+
+When a mutex lock is not used any more, the function mutex_destroy
+should be called to mark the mutex lock uninitialized.
+
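+For rxe this pairs the existing init with a destroy at teardown, roughly
+(sketch; the one-line change is in the diff below):
+
+  mutex_init(&rxe->usdev_lock);     /* at device init */
+  ...
+  mutex_destroy(&rxe->usdev_lock);  /* in rxe_dealloc(), after last use */
+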
+Fixes: 8700e3e7c485 ("Soft RoCE driver")
+Signed-off-by: Yanjun.Zhu <yanjun.zhu@linux.dev>
+Link: https://lore.kernel.org/r/20240314065140.27468-1-yanjun.zhu@linux.dev
+Reviewed-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
+Signed-off-by: Leon Romanovsky <leon@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/infiniband/sw/rxe/rxe.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 95f0de0c8b49c..0505c81aa8d04 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -35,6 +35,8 @@ void rxe_dealloc(struct ib_device *ib_dev)
+
+ if (rxe->tfm)
+ crypto_free_shash(rxe->tfm);
++
++ mutex_destroy(&rxe->usdev_lock);
+ }
+
+ /* initialize rxe device parameters */
+--
+2.43.0
+
--- /dev/null
+From f8e3c1cf9e23a256368ca13e52896e719e4d3aae Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 18 Apr 2024 18:58:06 +0530
+Subject: Revert "tracing/trigger: Fix to return error if failed to alloc
+ snapshot"
+
+From: Siddh Raman Pant <siddh.raman.pant@oracle.com>
+
+This reverts commit 56cfbe60710772916a5ba092c99542332b48e870 which is
+commit 0958b33ef5a04ed91f61cef4760ac412080c4e08 upstream.
+
+The change has an incorrect assumption about the return value because
+in the current stable trees for versions 5.15 and before, the following
+commit responsible for making 0 a success value is not present:
+b8cc44a4d3c1 ("tracing: Remove logic for registering multiple event triggers at a time")
+
+The return value should be 0 on failure in the current tree, because in
+the functions event_trigger_callback() and event_enable_trigger_func(),
+we have:
+
+ ret = cmd_ops->reg(glob, trigger_ops, trigger_data, file);
+ /*
+ * The above returns on success the # of functions enabled,
+ * but if it didn't find any functions it returns zero.
+ * Consider no functions a failure too.
+ */
+ if (!ret) {
+ ret = -ENOENT;
+
+Cc: stable@kernel.org # 5.15, 5.10, 5.4, 4.19
+Signed-off-by: Siddh Raman Pant <siddh.raman.pant@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_events_trigger.c | 6 ++----
+ 1 file changed, 2 insertions(+), 4 deletions(-)
+
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index e4340958da2df..4bc90965abb25 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1140,10 +1140,8 @@ register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
+ struct event_trigger_data *data,
+ struct trace_event_file *file)
+ {
+- int ret = tracing_alloc_snapshot_instance(file->tr);
+-
+- if (ret < 0)
+- return ret;
++ if (tracing_alloc_snapshot_instance(file->tr) != 0)
++ return 0;
+
+ return register_trigger(glob, ops, data, file);
+ }
+--
+2.43.0
+
--- /dev/null
+From faf496c02dce46cc4bddd0a9b71a0554b227e92f Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 26 Feb 2024 11:18:16 +0800
+Subject: selftests/ftrace: Limit length in subsystem-enable tests
+
+From: Yuanhe Shu <xiangzao@linux.alibaba.com>
+
+commit 1a4ea83a6e67f1415a1f17c1af5e9c814c882bb5 upstream.
+
+While sched* events are being traced and sched* events continuously
+happen, "[xx] event tracing - enable/disable with subsystem level files"
+would not stop, as on some slower systems it seems to take forever.
+Selecting the first 100 lines of output is enough to judge whether
+there are more than 3 types of sched events.
+
+Fixes: 815b18ea66d6 ("ftracetest: Add basic event tracing test cases")
+Cc: stable@vger.kernel.org
+Signed-off-by: Yuanhe Shu <xiangzao@linux.alibaba.com>
+Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ .../selftests/ftrace/test.d/event/subsystem-enable.tc | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+index b1ede62498667..b7c8f29c09a97 100644
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -18,7 +18,7 @@ echo 'sched:*' > set_event
+
+ yield
+
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +29,7 @@ echo 1 > events/sched/enable
+
+ yield
+
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -40,7 +40,7 @@ echo 0 > events/sched/enable
+
+ yield
+
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -ne 0 ]; then
+ fail "any of scheduler events should not be recorded"
+ fi
+--
+2.43.0
+
rdma-mlx5-fix-port-number-for-counter-query-in-multi.patch
drm-nv04-fix-out-of-bounds-access.patch
drm-panel-visionox-rm69299-don-t-unregister-dsi-devi.patch
+selftests-ftrace-limit-length-in-subsystem-enable-te.patch
+kprobes-fix-possible-use-after-free-issue-on-kprobe-.patch
+revert-tracing-trigger-fix-to-return-error-if-failed.patch
+netfilter-nf_tables-fix-potential-data-race-in-__nft.patch-25891
+netfilter-nft_set_pipapo-do-not-free-live-element.patch-6586
+tun-limit-printing-rate-when-illegal-packet-received.patch-16285
+rdma-rxe-fix-the-problem-mutex_destroy-missing.patch-21680
+rdma-cm-print-the-old-state-when-cm_destroy_id-gets-.patch-22806
+rdma-mlx5-fix-port-number-for-counter-query-in-multi.patch-8293
+drm-nv04-fix-out-of-bounds-access.patch-17182
+drm-panel-visionox-rm69299-don-t-unregister-dsi-devi.patch-29707
+clk-remove-prepare_lock-hold-assertion-in-__clk_rele.patch
+clk-mark-all_lists-as-const.patch
+clk-remove-extra-empty-line.patch
+clk-print-an-info-line-before-disabling-unused-clock.patch
+clk-initialize-struct-clk_core-kref-earlier.patch
+clk-get-runtime-pm-before-walking-tree-during-disabl.patch
+x86-cpufeatures-fix-dependencies-for-gfni-vaes-and-v.patch
--- /dev/null
+From e11b1a06368955e76f40d7d933fc707af7ad056b Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 14 Apr 2024 22:02:46 -0400
+Subject: tun: limit printing rate when illegal packet received by tun dev
+
+From: Lei Chen <lei.chen@smartx.com>
+
+[ Upstream commit f8bbc07ac535593139c875ffa19af924b1084540 ]
+
+vhost_worker will call tun callbacks to receive packets. If too many
+illegal packets arrive, tun_do_read will keep dumping packet contents.
+When the console is enabled, dumping the packet costs much more CPU
+time and a soft lockup will be detected.
+
+The net_ratelimit mechanism can be used to limit the dumping rate.
+
+PID: 33036 TASK: ffff949da6f20000 CPU: 23 COMMAND: "vhost-32980"
+ #0 [fffffe00003fce50] crash_nmi_callback at ffffffff89249253
+ #1 [fffffe00003fce58] nmi_handle at ffffffff89225fa3
+ #2 [fffffe00003fceb0] default_do_nmi at ffffffff8922642e
+ #3 [fffffe00003fced0] do_nmi at ffffffff8922660d
+ #4 [fffffe00003fcef0] end_repeat_nmi at ffffffff89c01663
+ [exception RIP: io_serial_in+20]
+ RIP: ffffffff89792594 RSP: ffffa655314979e8 RFLAGS: 00000002
+ RAX: ffffffff89792500 RBX: ffffffff8af428a0 RCX: 0000000000000000
+ RDX: 00000000000003fd RSI: 0000000000000005 RDI: ffffffff8af428a0
+ RBP: 0000000000002710 R8: 0000000000000004 R9: 000000000000000f
+ R10: 0000000000000000 R11: ffffffff8acbf64f R12: 0000000000000020
+ R13: ffffffff8acbf698 R14: 0000000000000058 R15: 0000000000000000
+ ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
+ #5 [ffffa655314979e8] io_serial_in at ffffffff89792594
+ #6 [ffffa655314979e8] wait_for_xmitr at ffffffff89793470
+ #7 [ffffa65531497a08] serial8250_console_putchar at ffffffff897934f6
+ #8 [ffffa65531497a20] uart_console_write at ffffffff8978b605
+ #9 [ffffa65531497a48] serial8250_console_write at ffffffff89796558
+ #10 [ffffa65531497ac8] console_unlock at ffffffff89316124
+ #11 [ffffa65531497b10] vprintk_emit at ffffffff89317c07
+ #12 [ffffa65531497b68] printk at ffffffff89318306
+ #13 [ffffa65531497bc8] print_hex_dump at ffffffff89650765
+ #14 [ffffa65531497ca8] tun_do_read at ffffffffc0b06c27 [tun]
+ #15 [ffffa65531497d38] tun_recvmsg at ffffffffc0b06e34 [tun]
+ #16 [ffffa65531497d68] handle_rx at ffffffffc0c5d682 [vhost_net]
+ #17 [ffffa65531497ed0] vhost_worker at ffffffffc0c644dc [vhost]
+ #18 [ffffa65531497f10] kthread at ffffffff892d2e72
+ #19 [ffffa65531497f50] ret_from_fork at ffffffff89c0022f
+
+Fixes: ef3db4a59542 ("tun: avoid BUG, dump packet on GSO errors")
+Signed-off-by: Lei Chen <lei.chen@smartx.com>
+Reviewed-by: Willem de Bruijn <willemb@google.com>
+Acked-by: Jason Wang <jasowang@redhat.com>
+Reviewed-by: Eric Dumazet <edumazet@google.com>
+Acked-by: Michael S. Tsirkin <mst@redhat.com>
+Link: https://lore.kernel.org/r/20240415020247.2207781-1-lei.chen@smartx.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/tun.c | 18 ++++++++++--------
+ 1 file changed, 10 insertions(+), 8 deletions(-)
+
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index bb0368272a1bb..77e63e7366e78 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -2141,14 +2141,16 @@ static ssize_t tun_put_user(struct tun_struct *tun,
+ tun_is_little_endian(tun), true,
+ vlan_hlen)) {
+ struct skb_shared_info *sinfo = skb_shinfo(skb);
+- pr_err("unexpected GSO type: "
+- "0x%x, gso_size %d, hdr_len %d\n",
+- sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
+- tun16_to_cpu(tun, gso.hdr_len));
+- print_hex_dump(KERN_ERR, "tun: ",
+- DUMP_PREFIX_NONE,
+- 16, 1, skb->head,
+- min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
++
++ if (net_ratelimit()) {
++ netdev_err(tun->dev, "unexpected GSO type: 0x%x, gso_size %d, hdr_len %d\n",
++ sinfo->gso_type, tun16_to_cpu(tun, gso.gso_size),
++ tun16_to_cpu(tun, gso.hdr_len));
++ print_hex_dump(KERN_ERR, "tun: ",
++ DUMP_PREFIX_NONE,
++ 16, 1, skb->head,
++ min((int)tun16_to_cpu(tun, gso.hdr_len), 64), true);
++ }
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+ }
+--
+2.43.0
+
--- /dev/null
+From 6f94d804d22485cddf629efc00adb3b241013ba8 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 16 Apr 2024 23:04:34 -0700
+Subject: x86/cpufeatures: Fix dependencies for GFNI, VAES, and VPCLMULQDQ
+
+From: Eric Biggers <ebiggers@google.com>
+
+[ Upstream commit 9543f6e26634537997b6e909c20911b7bf4876de ]
+
+Fix cpuid_deps[] to list the correct dependencies for GFNI, VAES, and
+VPCLMULQDQ. These features don't depend on AVX512, and there exist CPUs
+that support these features but not AVX512. GFNI actually doesn't even
+depend on AVX.
+
+This prevents GFNI from being unnecessarily disabled if AVX is disabled
+to mitigate the GDS vulnerability.
+
+This also prevents all three features from being unnecessarily disabled
+if AVX512VL (or its dependency AVX512F) were to be disabled, but it
+looks like there isn't any case where this happens anyway.
+
+Fixes: c128dbfa0f87 ("x86/cpufeatures: Enable new SSE/AVX/AVX512 CPU features")
+Signed-off-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
+Link: https://lore.kernel.org/r/20240417060434.47101-1-ebiggers@kernel.org
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/cpu/cpuid-deps.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
+index d502241995a39..24fca3d56c7f3 100644
+--- a/arch/x86/kernel/cpu/cpuid-deps.c
++++ b/arch/x86/kernel/cpu/cpuid-deps.c
+@@ -44,7 +44,10 @@ static const struct cpuid_dep cpuid_deps[] = {
+ { X86_FEATURE_F16C, X86_FEATURE_XMM2, },
+ { X86_FEATURE_AES, X86_FEATURE_XMM2 },
+ { X86_FEATURE_SHA_NI, X86_FEATURE_XMM2 },
++ { X86_FEATURE_GFNI, X86_FEATURE_XMM2 },
+ { X86_FEATURE_FMA, X86_FEATURE_AVX },
++ { X86_FEATURE_VAES, X86_FEATURE_AVX },
++ { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX },
+ { X86_FEATURE_AVX2, X86_FEATURE_AVX, },
+ { X86_FEATURE_AVX512F, X86_FEATURE_AVX, },
+ { X86_FEATURE_AVX512IFMA, X86_FEATURE_AVX512F },
+@@ -56,9 +59,6 @@ static const struct cpuid_dep cpuid_deps[] = {
+ { X86_FEATURE_AVX512VL, X86_FEATURE_AVX512F },
+ { X86_FEATURE_AVX512VBMI, X86_FEATURE_AVX512F },
+ { X86_FEATURE_AVX512_VBMI2, X86_FEATURE_AVX512VL },
+- { X86_FEATURE_GFNI, X86_FEATURE_AVX512VL },
+- { X86_FEATURE_VAES, X86_FEATURE_AVX512VL },
+- { X86_FEATURE_VPCLMULQDQ, X86_FEATURE_AVX512VL },
+ { X86_FEATURE_AVX512_VNNI, X86_FEATURE_AVX512VL },
+ { X86_FEATURE_AVX512_BITALG, X86_FEATURE_AVX512VL },
+ { X86_FEATURE_AVX512_4VNNIW, X86_FEATURE_AVX512F },
+--
+2.43.0
+