--- /dev/null
+From d6c4c46d6d4980fdc4b002e4d343a3b2cf5d9c80 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 29 Nov 2024 13:33:03 +0000
+Subject: btrfs: fix missing snapshot drew unlock when root is dead during swap
+ activation
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 9c803c474c6c002d8ade68ebe99026cc39c37f85 ]
+
+When activating a swap file we acquire the root's snapshot drew lock and
+then check if the root is dead; if it is, we fail and return -EPERM, but
+without unlocking the root's snapshot lock. Fix this by adding the
+missing unlock.
+
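+For context, a hedged sketch of the lock pairing involved (simplified;
+surrounding code and exact error handling are elided):
+
+    /* Earlier in btrfs_swap_activate(): the snapshot drew lock is taken. */
+    if (!btrfs_drew_try_write_lock(&root->snapshot_lock))
+        return -EINVAL;             /* simplified failure path */
+
+    /*
+     * Every error return after this point must drop the lock again,
+     * which is what the dead-root path fixed below was missing:
+     */
+    btrfs_drew_write_unlock(&root->snapshot_lock);
+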
+Fixes: 60021bd754c6 ("btrfs: prevent subvol with swapfile from being deleted")
+Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ fs/btrfs/inode.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index eb12ba64ac7a7..8f048e517e656 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -10891,6 +10891,7 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (btrfs_root_dead(root)) {
+ spin_unlock(&root->root_item_lock);
+
++ btrfs_drew_write_unlock(&root->snapshot_lock);
+ btrfs_exclop_finish(fs_info);
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because subvolume %llu is being deleted",
+--
+2.43.0
+
--- /dev/null
+From e4ba3d12772eb41dbb1e31897a4095a1614e7362 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 05:44:32 +0000
+Subject: sched/core: Prevent wakeup of ksoftirqd during idle load balance
+
+From: K Prateek Nayak <kprateek.nayak@amd.com>
+
+[ Upstream commit e932c4ab38f072ce5894b2851fea8bc5754bb8e5 ]
+
+The scheduler raises a SCHED_SOFTIRQ to trigger a load balancing event
+from the IPI handler on the idle CPU. If the SMP function call is
+invoked on an idle CPU via flush_smp_call_function_queue() then the
+HARD-IRQ flag is not set and raise_softirq_irqoff() needlessly wakes
+ksoftirqd, because the soft interrupts are handled before ksoftirqd gets
+on the CPU.
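+
+A simplified view of the two helpers involved (a sketch only; the exact
+condition for waking ksoftirqd varies slightly across kernel versions):
+
+    __raise_softirq_irqoff(nr);     /* only marks the softirq as pending */
+
+    raise_softirq_irqoff(nr);       /* marks it pending, and in addition: */
+    if (!in_interrupt())            /* no HARDIRQ/SOFTIRQ context, so... */
+        wakeup_softirqd();          /* ...wake ksoftirqd to run it soon */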
+
+Adding a trace_printk() in nohz_csd_func() at the spot of raising
+SCHED_SOFTIRQ and enabling trace events for sched_switch, sched_wakeup,
+and softirq_entry (for SCHED_SOFTIRQ vector alone) helps observing the
+current behavior:
+
+ <idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ from nohz_csd_func
+ <idle>-0 [000] dN.4.: sched_wakeup: comm=ksoftirqd/0 pid=16 prio=120 target_cpu=000
+ <idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]
+ <idle>-0 [000] .Ns1.: softirq_exit: vec=7 [action=SCHED]
+ <idle>-0 [000] d..2.: sched_switch: prev_comm=swapper/0 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/0 next_pid=16 next_prio=120
+ ksoftirqd/0-16 [000] d..2.: sched_switch: prev_comm=ksoftirqd/0 prev_pid=16 prev_prio=120 prev_state=S ==> next_comm=swapper/0 next_pid=0 next_prio=120
+ ...
+
+Use __raise_softirq_irqoff() to raise the softirq. The SMP function call
+is always invoked on the requested CPU in an interrupt handler, so it is
+guaranteed that the pending soft interrupts are handled at the end of
+the interrupt.
+
+Following are the observations with the changes when enabling the same
+set of events:
+
+ <idle>-0 [000] dN.1.: nohz_csd_func: Raising SCHED_SOFTIRQ for nohz_idle_balance
+ <idle>-0 [000] dN.1.: softirq_raise: vec=7 [action=SCHED]
+ <idle>-0 [000] .Ns1.: softirq_entry: vec=7 [action=SCHED]
+
+No unnecessary ksoftirqd wakeups are seen from the idle task's context
+to service the softirq.
+
+Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
+Reported-by: Julia Lawall <julia.lawall@inria.fr>
+Closes: https://lore.kernel.org/lkml/fcf823f-195e-6c9a-eac3-25f870cb35ac@inria.fr/ [1]
+Suggested-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Link: https://lore.kernel.org/r/20241119054432.6405-5-kprateek.nayak@amd.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ kernel/sched/core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 7946c73dca31d..ed92b75f7e024 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1108,7 +1108,7 @@ static void nohz_csd_func(void *info)
+ rq->idle_balance = idle_cpu(cpu);
+ if (rq->idle_balance) {
+ rq->nohz_idle_balance = flags;
+- raise_softirq_irqoff(SCHED_SOFTIRQ);
++ __raise_softirq_irqoff(SCHED_SOFTIRQ);
+ }
+ }
+
+--
+2.43.0
+
--- /dev/null
+From 5618fb6cdebeeb84a63f5c19f295bdf22fb1c1a4 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 05:44:30 +0000
+Subject: sched/core: Remove the unnecessary need_resched() check in
+ nohz_csd_func()
+
+From: K Prateek Nayak <kprateek.nayak@amd.com>
+
+[ Upstream commit ea9cffc0a154124821531991d5afdd7e8b20d7aa ]
+
+The need_resched() check currently in nohz_csd_func() can be traced back
+to its addition in scheduler_ipi() in 2011 via commit ca38062e57e9
+("sched: Use resched IPI to kick off the nohz idle balance").
+
+Since then, it has travelled quite a bit, but it seems an idle_cpu()
+check is now sufficient to detect the need to bail out of idle load
+balancing. To justify this removal, consider all the following cases
+where an idle load balance could race with a task wakeup:
+
+o Since commit f3dd3f674555b ("sched: Remove the limitation of WF_ON_CPU
+ on wakelist if wakee cpu is idle") a target perceived to be idle
+ (target_rq->nr_running == 0) will return true for
+ ttwu_queue_cond(target) which will offload the task wakeup to the idle
+ target via an IPI.
+
+ In all such cases target_rq->ttwu_pending will be set to 1 before
+ queuing the wake function.
+
+  If an idle load balance races here, the following scenarios are
+  possible:
+
+  - The CPU is not in TIF_POLLING_NRFLAG mode, in which case an actual
+    IPI is sent to the CPU to wake it out of idle. If nohz_csd_func()
+    is queued before sched_ttwu_pending(), the idle load balance will
+    bail out since idle_cpu(target) returns 0 because
+    target_rq->ttwu_pending is 1. If nohz_csd_func() is queued after
+    sched_ttwu_pending(), it will see a non-zero rq->nr_running and
+    bail out of idle load balancing.
+
+  - The CPU is in TIF_POLLING_NRFLAG mode and, instead of an actual
+    IPI, the sender will simply set TIF_NEED_RESCHED for the target to
+    put it out of idle; flush_smp_call_function_queue() in do_idle()
+    will then execute the call function. Depending on the ordering of
+    the queuing of nohz_csd_func() and sched_ttwu_pending(), the
+    idle_cpu() check in nohz_csd_func() should see either
+    target_rq->ttwu_pending = 1 or a non-zero target_rq->nr_running if
+    there is a genuine task wakeup racing with the idle load balance
+    kick.
+
+o The waker CPU perceives the target CPU to be busy
+  (target_rq->nr_running != 0) but the CPU is in fact going idle, and
+  due to a series of unfortunate events the system reaches a case where
+  the waker CPU decides to perform the wakeup by itself in ttwu_queue()
+  on the target CPU while the target is concurrently selected for idle
+  load balance (XXX: Can this happen? I'm not sure, but we'll consider
+  the mother of all coincidences to estimate the worst case scenario).
+
+  ttwu_do_activate() calls enqueue_task(), which increments
+  "rq->nr_running", after which it calls wakeup_preempt(), which is
+  responsible for setting TIF_NEED_RESCHED (via a resched IPI or by
+  setting TIF_NEED_RESCHED on a TIF_POLLING_NRFLAG idle CPU). The key
+  thing to note in this case is that rq->nr_running is already non-zero
+  before TIF_NEED_RESCHED is set for a wakeup, which leads to the
+  idle_cpu() check returning false.
+
+In all cases, it seems that the need_resched() check is unnecessary when
+checking idle_cpu() first, since an impending wakeup racing with the
+idle load balancer will either set "rq->ttwu_pending" or indicate a
+newly woken task via "rq->nr_running".
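+
+For reference, idle_cpu() performs roughly the following checks (an
+abridged sketch of the helper in kernel/sched/core.c):
+
+    if (rq->curr != rq->idle)       /* something other than the idle task runs */
+        return 0;
+    if (rq->nr_running)             /* a woken task is already enqueued */
+        return 0;
+    if (rq->ttwu_pending)           /* a remote wakeup was queued via IPI */
+        return 0;
+    return 1;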
+
+Chasing the reason why this check might have existed in the first place,
+I came across Peter's suggestion on the first iteration of Suresh's
+patch from 2011 [1], where the condition to raise the SCHED_SOFTIRQ was:
+
+ sched_ttwu_do_pending(list);
+
+ if (unlikely((rq->idle == current) &&
+ rq->nohz_balance_kick &&
+ !need_resched()))
+ raise_softirq_irqoff(SCHED_SOFTIRQ);
+
+Since the condition to raise the SCHED_SOFTIRQ was preceded by
+sched_ttwu_do_pending() (the equivalent of sched_ttwu_pending() in the
+current upstream kernel), the need_resched() check was necessary to
+catch a newly queued task. Peter suggested modifying it to:
+
+ if (idle_cpu() && rq->nohz_balance_kick && !need_resched())
+ raise_softirq_irqoff(SCHED_SOFTIRQ);
+
+where idle_cpu() seems to have replaced the "rq->idle == current" check.
+
+Even back then, the idle_cpu() check would have been sufficient to catch
+a new task being enqueued. Since commit b2a02fc43a1f ("smp: Optimize
+send_call_function_single_ipi()") overloads the interpretation of
+TIF_NEED_RESCHED for TIF_POLLING_NRFLAG idling, remove the
+need_resched() check from the condition for raising SCHED_SOFTIRQ in
+nohz_csd_func(), based on Peter's suggestion.
+
+Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
+Suggested-by: Peter Zijlstra <peterz@infradead.org>
+Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://lore.kernel.org/r/20241119054432.6405-3-kprateek.nayak@amd.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ kernel/sched/core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 21b4f96d80a1b..7946c73dca31d 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1106,7 +1106,7 @@ static void nohz_csd_func(void *info)
+ WARN_ON(!(flags & NOHZ_KICK_MASK));
+
+ rq->idle_balance = idle_cpu(cpu);
+- if (rq->idle_balance && !need_resched()) {
++ if (rq->idle_balance) {
+ rq->nohz_idle_balance = flags;
+ raise_softirq_irqoff(SCHED_SOFTIRQ);
+ }
+--
+2.43.0
+
--- /dev/null
+From fc7b7239d31d2dc9c21ce4fb09fc9960f14d21d6 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 23 Aug 2021 12:16:59 +0100
+Subject: sched/fair: Add NOHZ balancer flag for nohz.next_balance updates
+
+From: Valentin Schneider <valentin.schneider@arm.com>
+
+[ Upstream commit efd984c481abb516fab8bafb25bf41fd9397a43c ]
+
+A following patch will trigger NOHZ idle balances as a means to update
+nohz.next_balance. Vincent noted that blocked load updates can have
+non-negligible overhead, which should be avoided if the intent is to only
+update nohz.next_balance.
+
+Add a new NOHZ balance kick flag, NOHZ_NEXT_KICK. Gate NOHZ blocked load
+update by the presence of NOHZ_STATS_KICK - currently all NOHZ balance
+kicks will have the NOHZ_STATS_KICK flag set, so no change in behaviour is
+expected.
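+
+In other words, kick sites now request the work they want explicitly,
+and _nohz_idle_balance() only touches blocked load when asked to
+(abridged from the hunks below):
+
+    flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;   /* instead of NOHZ_KICK_MASK */
+    ...
+    if (flags & NOHZ_STATS_KICK)
+        has_blocked_load |= update_nohz_stats(rq);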
+
+Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
+Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
+Link: https://lkml.kernel.org/r/20210823111700.2842997-2-valentin.schneider@arm.com
+Stable-dep-of: ff47a0acfcce ("sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU turning busy")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ kernel/sched/fair.c | 24 ++++++++++++++----------
+ kernel/sched/sched.h | 8 +++++++-
+ 2 files changed, 21 insertions(+), 11 deletions(-)
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 68793b50adad7..6e1a6d6285d12 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -10764,7 +10764,7 @@ static void nohz_balancer_kick(struct rq *rq)
+ goto out;
+
+ if (rq->nr_running >= 2) {
+- flags = NOHZ_KICK_MASK;
++ flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
+ goto out;
+ }
+
+@@ -10778,7 +10778,7 @@ static void nohz_balancer_kick(struct rq *rq)
+ * on.
+ */
+ if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) {
+- flags = NOHZ_KICK_MASK;
++ flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
+ goto unlock;
+ }
+ }
+@@ -10792,7 +10792,7 @@ static void nohz_balancer_kick(struct rq *rq)
+ */
+ for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
+ if (sched_asym_prefer(i, cpu)) {
+- flags = NOHZ_KICK_MASK;
++ flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
+ goto unlock;
+ }
+ }
+@@ -10805,7 +10805,7 @@ static void nohz_balancer_kick(struct rq *rq)
+ * to run the misfit task on.
+ */
+ if (check_misfit_status(rq, sd)) {
+- flags = NOHZ_KICK_MASK;
++ flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
+ goto unlock;
+ }
+
+@@ -10832,7 +10832,7 @@ static void nohz_balancer_kick(struct rq *rq)
+ */
+ nr_busy = atomic_read(&sds->nr_busy_cpus);
+ if (nr_busy > 1) {
+- flags = NOHZ_KICK_MASK;
++ flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
+ goto unlock;
+ }
+ }
+@@ -10994,7 +10994,8 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
+ * setting the flag, we are sure to not clear the state and not
+ * check the load of an idle cpu.
+ */
+- WRITE_ONCE(nohz.has_blocked, 0);
++ if (flags & NOHZ_STATS_KICK)
++ WRITE_ONCE(nohz.has_blocked, 0);
+
+ /*
+ * Ensures that if we miss the CPU, we must see the has_blocked
+@@ -11016,13 +11017,15 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
+ * balancing owner will pick it up.
+ */
+ if (need_resched()) {
+- has_blocked_load = true;
++ if (flags & NOHZ_STATS_KICK)
++ has_blocked_load = true;
+ goto abort;
+ }
+
+ rq = cpu_rq(balance_cpu);
+
+- has_blocked_load |= update_nohz_stats(rq);
++ if (flags & NOHZ_STATS_KICK)
++ has_blocked_load |= update_nohz_stats(rq);
+
+ /*
+ * If time for next balance is due,
+@@ -11053,8 +11056,9 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
+ if (likely(update_next_balance))
+ nohz.next_balance = next_balance;
+
+- WRITE_ONCE(nohz.next_blocked,
+- now + msecs_to_jiffies(LOAD_AVG_PERIOD));
++ if (flags & NOHZ_STATS_KICK)
++ WRITE_ONCE(nohz.next_blocked,
++ now + msecs_to_jiffies(LOAD_AVG_PERIOD));
+
+ abort:
+ /* There is still blocked load, enable periodic update */
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 48bcc1876df83..6fc16bc13abf5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2739,12 +2739,18 @@ extern void cfs_bandwidth_usage_dec(void);
+ #define NOHZ_BALANCE_KICK_BIT 0
+ #define NOHZ_STATS_KICK_BIT 1
+ #define NOHZ_NEWILB_KICK_BIT 2
++#define NOHZ_NEXT_KICK_BIT 3
+
++/* Run rebalance_domains() */
+ #define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
++/* Update blocked load */
+ #define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
++/* Update blocked load when entering idle */
+ #define NOHZ_NEWILB_KICK BIT(NOHZ_NEWILB_KICK_BIT)
++/* Update nohz.next_balance */
++#define NOHZ_NEXT_KICK BIT(NOHZ_NEXT_KICK_BIT)
+
+-#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK | NOHZ_NEXT_KICK)
+
+ #define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
+
+--
+2.43.0
+
--- /dev/null
+From 6786ffcc16b2642d9c94c3b615204229a5e3e83a Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 05:44:31 +0000
+Subject: sched/fair: Check idle_cpu() before need_resched() to detect ilb CPU
+ turning busy
+
+From: K Prateek Nayak <kprateek.nayak@amd.com>
+
+[ Upstream commit ff47a0acfcce309cf9e175149c75614491953c8f ]
+
+Commit b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
+optimizes IPIs to idle CPUs in TIF_POLLING_NRFLAG mode by setting the
+TIF_NEED_RESCHED flag in the idle task's thread info and relying on
+flush_smp_call_function_queue() in the idle exit path to run the
+call-function. A softirq raised by the call-function is handled shortly
+after in do_softirq_post_smp_call_flush(), but the TIF_NEED_RESCHED flag
+remains set and is only cleared later when schedule_idle() calls
+__schedule().
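+
+A hedged sketch of that optimization (send_call_function_single_ipi(),
+simplified; details may differ across kernel versions):
+
+    if (set_nr_if_polling(rq->idle))                 /* polling idle CPU: just */
+        trace_sched_wake_idle_without_ipi(cpu);      /* set TIF_NEED_RESCHED   */
+    else
+        arch_send_call_function_single_ipi(cpu);     /* otherwise, a real IPI  */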
+
+The need_resched() check in _nohz_idle_balance() exists to bail out of
+load balancing if another task has woken up on the CPU currently in
+charge of idle load balancing, which is being processed in SCHED_SOFTIRQ
+context. Since the optimization mentioned above overloads the
+interpretation of TIF_NEED_RESCHED, check idle_cpu() before the existing
+need_resched() check. This catches a genuine task wakeup on an idle CPU
+processing SCHED_SOFTIRQ from do_softirq_post_smp_call_flush(), as well
+as the case where ksoftirqd needs to be preempted as a result of a new
+task wakeup or slice expiry.
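+
+The resulting bail-out condition (from the hunk below), annotated:
+
+    /* Bail out only if the ilb CPU has really turned busy, not merely
+     * because the IPI optimization left TIF_NEED_RESCHED set: */
+    if (!idle_cpu(this_cpu) && need_resched()) {
+        ...
+        goto abort;
+    }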
+
+In case of PREEMPT_RT or threadirqs, although idle load balancing may be
+inhibited in some cases on the ilb CPU, the fact that ksoftirqd is the
+only fair task going back to sleep will trigger a newidle balance on the
+CPU, which will alleviate some imbalance, if one exists, when the idle
+load balance fails to do so.
+
+Fixes: b2a02fc43a1f ("smp: Optimize send_call_function_single_ipi()")
+Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://lore.kernel.org/r/20241119054432.6405-4-kprateek.nayak@amd.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ kernel/sched/fair.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 6e1a6d6285d12..4056330d38887 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -11016,7 +11016,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
+ * work being done for other CPUs. Next load
+ * balancing owner will pick it up.
+ */
+- if (need_resched()) {
++ if (!idle_cpu(this_cpu) && need_resched()) {
+ if (flags & NOHZ_STATS_KICK)
+ has_blocked_load = true;
+ goto abort;
+--
+2.43.0
+
misc-eeprom-eeprom_93cx6-add-quirk-for-extra-read-cl.patch
modpost-include-.text.-in-text_sections.patch
modpost-add-.irqentry.text-to-other_sections.patch
+sched-core-remove-the-unnecessary-need_resched-check.patch
+sched-fair-add-nohz-balancer-flag-for-nohz.next_bala.patch
+sched-fair-check-idle_cpu-before-need_resched-to-det.patch
+sched-core-prevent-wakeup-of-ksoftirqd-during-idle-l.patch
+btrfs-fix-missing-snapshot-drew-unlock-when-root-is-.patch
+tracing-eprobe-fix-to-release-eprobe-when-failed-to-.patch
--- /dev/null
+From b90557ee64e08eb5dc0f828d2319d241984bd4ea Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Sat, 30 Nov 2024 01:47:47 +0900
+Subject: tracing/eprobe: Fix to release eprobe when failed to add dyn_event
+
+From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+
+[ Upstream commit 494b332064c0ce2f7392fa92632bc50191c1b517 ]
+
+Fix the eprobe event code to properly unregister the event call and
+release the eprobe when it fails to add the dynamic event.
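+
+The corrected tail of __trace_eprobe_create() then looks roughly like
+this (abridged from the hunk below):
+
+    ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
+    if (ret < 0) {
+        /* Undo the earlier event call registration ... */
+        trace_probe_unregister_event_call(&ep->tp);
+        /* ... and fall into the common error path, which releases the
+         * eprobe itself. */
+        mutex_unlock(&event_mutex);
+        goto error;
+    }
+    mutex_unlock(&event_mutex);
+    return ret;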
+
+Link: https://lore.kernel.org/all/173289886698.73724.1959899350183686006.stgit@devnote2/
+
+Fixes: 7491e2c44278 ("tracing: Add a probe that attaches to trace events")
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ kernel/trace/trace_eprobe.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index 085f056e66f19..6ba95e32df388 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -979,6 +979,11 @@ static int __trace_eprobe_create(int argc, const char *argv[])
+ goto error;
+ }
+ ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
++ if (ret < 0) {
++ trace_probe_unregister_event_call(&ep->tp);
++ mutex_unlock(&event_mutex);
++ goto error;
++ }
+ mutex_unlock(&event_mutex);
+ return ret;
+ parse_error:
+--
+2.43.0
+