--- /dev/null
+From vegard.nossum@oracle.com Wed Oct 2 17:07:25 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:06:01 +0200
+Subject: drm/bridge: adv7511: fix crash on irq during probe
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, "Mads Bligaard Nielsen" <bli@bang-olufsen.dk>, "Alvin Šipraga" <alsi@bang-olufsen.dk>, "Robert Foss" <rfoss@kernel.org>, "Harshit Mogalapalli" <harshit.m.mogalapalli@oracle.com>, "Vegard Nossum" <vegard.nossum@oracle.com>
+Message-ID: <20241002150606.11385-11-vegard.nossum@oracle.com>
+
+From: Mads Bligaard Nielsen <bli@bang-olufsen.dk>
+
+[ Upstream commit aeedaee5ef5468caf59e2bb1265c2116e0c9a924 ]
+
+Moved IRQ registration down to the end of adv7511_probe().
+
+If an IRQ is already pending during adv7511_probe()
+(before adv7511_cec_init()) then cec_received_msg_ts()
+could crash using uninitialized data:
+
+ Unable to handle kernel read from unreadable memory at virtual address 00000000000003d5
+ Internal error: Oops: 96000004 [#1] PREEMPT_RT SMP
+ Call trace:
+ cec_received_msg_ts+0x48/0x990 [cec]
+ adv7511_cec_irq_process+0x1cc/0x308 [adv7511]
+ adv7511_irq_process+0xd8/0x120 [adv7511]
+ adv7511_irq_handler+0x1c/0x30 [adv7511]
+ irq_thread_fn+0x30/0xa0
+ irq_thread+0x14c/0x238
+ kthread+0x190/0x1a8
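+
+As an illustrative sketch only (not part of this patch), the rule the fix
+follows is to request the IRQ last, after everything the handler may
+touch has been initialized; the foo_* names below are hypothetical:
+
+  static int foo_probe(struct i2c_client *i2c)
+  {
+          struct foo *foo;
+          int ret;
+
+          foo = devm_kzalloc(&i2c->dev, sizeof(*foo), GFP_KERNEL);
+          if (!foo)
+                  return -ENOMEM;
+
+          ret = foo_cec_init(foo); /* set up all data the handler uses */
+          if (ret)
+                  return ret;
+
+          /* only now is it safe for the threaded handler to run */
+          return devm_request_threaded_irq(&i2c->dev, i2c->irq, NULL,
+                                           foo_irq_thread, IRQF_ONESHOT,
+                                           dev_name(&i2c->dev), foo);
+  }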
+
+Fixes: 3b1b975003e4 ("drm: adv7511/33: add HDMI CEC support")
+Signed-off-by: Mads Bligaard Nielsen <bli@bang-olufsen.dk>
+Signed-off-by: Alvin Šipraga <alsi@bang-olufsen.dk>
+Reviewed-by: Robert Foss <rfoss@kernel.org>
+Signed-off-by: Robert Foss <rfoss@kernel.org>
+Link: https://patchwork.freedesktop.org/patch/msgid/20240219-adv7511-cec-irq-crash-fix-v2-1-245e53c4b96f@bang-olufsen.dk
+(cherry picked from commit aeedaee5ef5468caf59e2bb1265c2116e0c9a924)
+[Harshit: CVE-2024-26876; Resolve conflicts due to missing commit:
+ c75551214858 ("drm: adv7511: Add has_dsi variable to struct
+ adv7511_chip_info") in 6.6.y and adv7511_chip_info struct is also not
+ defined]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/bridge/adv7511/adv7511_drv.c | 22 +++++++++++-----------
+ 1 file changed, 11 insertions(+), 11 deletions(-)
+
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1291,17 +1291,6 @@ static int adv7511_probe(struct i2c_clie
+
+ INIT_WORK(&adv7511->hpd_work, adv7511_hpd_work);
+
+- if (i2c->irq) {
+- init_waitqueue_head(&adv7511->wq);
+-
+- ret = devm_request_threaded_irq(dev, i2c->irq, NULL,
+- adv7511_irq_handler,
+- IRQF_ONESHOT, dev_name(dev),
+- adv7511);
+- if (ret)
+- goto err_unregister_cec;
+- }
+-
+ adv7511_power_off(adv7511);
+
+ i2c_set_clientdata(i2c, adv7511);
+@@ -1325,6 +1314,17 @@ static int adv7511_probe(struct i2c_clie
+
+ adv7511_audio_init(dev, adv7511);
+
++ if (i2c->irq) {
++ init_waitqueue_head(&adv7511->wq);
++
++ ret = devm_request_threaded_irq(dev, i2c->irq, NULL,
++ adv7511_irq_handler,
++ IRQF_ONESHOT, dev_name(dev),
++ adv7511);
++ if (ret)
++ goto err_unregister_audio;
++ }
++
+ if (adv7511->type == ADV7533 || adv7511->type == ADV7535) {
+ ret = adv7533_attach_dsi(adv7511);
+ if (ret)
--- /dev/null
+From stable+bounces-80564-greg=kroah.com@vger.kernel.org Wed Oct 2 17:13:23 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:12:32 +0200
+Subject: efi/unaccepted: touch soft lockup during memory accept
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Chen Yu <yu.c.chen@intel.com>, Md Iqbal Hossain <md.iqbal.hossain@intel.com>, "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>, Ard Biesheuvel <ardb@kernel.org>, Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002151236.11787-1-vegard.nossum@oracle.com>
+
+From: Chen Yu <yu.c.chen@intel.com>
+
+[ Upstream commit 1c5a1627f48105cbab81d25ec2f72232bfaa8185 ]
+
+Commit 50e782a86c98 ("efi/unaccepted: Fix soft lockups caused by
+parallel memory acceptance") released the spinlock so other CPUs can
+do memory acceptance in parallel, without triggering soft lockups on
+those other CPUs.
+
+However, the soft lockup still showed up intermittently when the memory
+of the TD guest is large and the softlockup timeout is set to 1 second:
+
+ RIP: 0010:_raw_spin_unlock_irqrestore
+ Call Trace:
+ ? __hrtimer_run_queues
+ <IRQ>
+ ? hrtimer_interrupt
+ ? watchdog_timer_fn
+ ? __sysvec_apic_timer_interrupt
+ ? __pfx_watchdog_timer_fn
+ ? sysvec_apic_timer_interrupt
+ </IRQ>
+ ? __hrtimer_run_queues
+ <TASK>
+ ? hrtimer_interrupt
+ ? asm_sysvec_apic_timer_interrupt
+ ? _raw_spin_unlock_irqrestore
+ ? __sysvec_apic_timer_interrupt
+ ? sysvec_apic_timer_interrupt
+ accept_memory
+ try_to_accept_memory
+ do_huge_pmd_anonymous_page
+ get_page_from_freelist
+ __handle_mm_fault
+ __alloc_pages
+ __folio_alloc
+ ? __tdx_hypercall
+ handle_mm_fault
+ vma_alloc_folio
+ do_user_addr_fault
+ do_huge_pmd_anonymous_page
+ exc_page_fault
+ ? __do_huge_pmd_anonymous_page
+ asm_exc_page_fault
+ __handle_mm_fault
+
+When local irqs are re-enabled at the end of accept_memory(), the
+softlockup detector notices that the watchdog on this single CPU has
+not been fed for a while. That is to say, even though other CPUs are
+no longer blocked on the spinlock, the current CPU might be stuck with
+local irqs disabled for a while, which hurts not only the NMI watchdog
+but also the softlockup detector.
+
+Chao Gao pointed out that memory acceptance can be time-consuming and
+that there was a similar report before. Thus, to avoid any soft lockup
+detection during this stage, tell the softlockup detector to skip the
+timeout check at the end of accept_memory() by invoking
+touch_softlockup_watchdog().
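+
+A minimal sketch of the resulting pattern (illustrative only; the
+accept_one_range() helper and its "long work" are placeholders, not the
+real acceptance code):
+
+  #include <linux/nmi.h>
+  #include <linux/spinlock.h>
+
+  static void accept_one_range(spinlock_t *lock, unsigned long flags)
+  {
+          /* ... potentially long work with local IRQs disabled ... */
+
+          /* tell the softlockup detector this CPU is not stuck */
+          touch_softlockup_watchdog();
+
+          spin_unlock_irqrestore(lock, flags);
+  }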
+
+Reported-by: Md Iqbal Hossain <md.iqbal.hossain@intel.com>
+Signed-off-by: Chen Yu <yu.c.chen@intel.com>
+Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Fixes: 50e782a86c98 ("efi/unaccepted: Fix soft lockups caused by parallel memory acceptance")
+Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
+(cherry picked from commit 1c5a1627f48105cbab81d25ec2f72232bfaa8185)
+[Harshit: CVE-2024-36936; Minor conflict resolution due to header file
+ differences due to missing commit: 7cd34dd3c9bf ("efi/unaccepted: do not
+ let /proc/vmcore try to access unaccepted memory") in 6.6.y]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/firmware/efi/unaccepted_memory.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/drivers/firmware/efi/unaccepted_memory.c
++++ b/drivers/firmware/efi/unaccepted_memory.c
+@@ -3,6 +3,7 @@
+ #include <linux/efi.h>
+ #include <linux/memblock.h>
+ #include <linux/spinlock.h>
++#include <linux/nmi.h>
+ #include <asm/unaccepted_memory.h>
+
+ /* Protects unaccepted memory bitmap and accepting_list */
+@@ -148,6 +149,9 @@ retry:
+ }
+
+ list_del(&range.list);
++
++ touch_softlockup_watchdog();
++
+ spin_unlock_irqrestore(&unaccepted_memory_lock, flags);
+ }
+
--- /dev/null
+From stable+bounces-80562-greg=kroah.com@vger.kernel.org Wed Oct 2 17:07:50 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:06:00 +0200
+Subject: iommufd: Fix protection fault in iommufd_test_syz_conv_iova
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Nicolin Chen <nicolinc@nvidia.com>, Jason Gunthorpe <jgg@nvidia.com>, Kevin Tian <kevin.tian@intel.com>, Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002150606.11385-10-vegard.nossum@oracle.com>
+
+From: Nicolin Chen <nicolinc@nvidia.com>
+
+[ Upstream commit cf7c2789822db8b5efa34f5ebcf1621bc0008d48 ]
+
+Syzkaller reported the following bug:
+
+ general protection fault, probably for non-canonical address 0xdffffc0000000038: 0000 [#1] SMP KASAN
+ KASAN: null-ptr-deref in range [0x00000000000001c0-0x00000000000001c7]
+ Call Trace:
+ lock_acquire
+ lock_acquire+0x1ce/0x4f0
+ down_read+0x93/0x4a0
+ iommufd_test_syz_conv_iova+0x56/0x1f0
+ iommufd_test_access_rw.isra.0+0x2ec/0x390
+ iommufd_test+0x1058/0x1e30
+ iommufd_fops_ioctl+0x381/0x510
+ vfs_ioctl
+ __do_sys_ioctl
+ __se_sys_ioctl
+ __x64_sys_ioctl+0x170/0x1e0
+ do_syscall_x64
+ do_syscall_64+0x71/0x140
+
+This is because the new iommufd_access_change_ioas() sets access->ioas to
+NULL while it runs, so the ioas (and the lock inside it) might be gone in
+a concurrently racing context.
+
+Fix this by doing the same access->ioas sanity check that the
+iommufd_access_rw() and iommufd_access_pin_pages() functions do.
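+
+The shape of that check, sketched with hypothetical obj/res names (the
+real code below uses access->ioas_lock and access->ioas):
+
+  mutex_lock(&obj->lock);
+  if (!obj->res) {                /* may have been cleared by a racer */
+          mutex_unlock(&obj->lock);
+          return 0;
+  }
+  ret = use(obj->res);            /* safe only while the lock is held */
+  mutex_unlock(&obj->lock);
+  return ret;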
+
+Cc: stable@vger.kernel.org
+Fixes: 9227da7816dd ("iommufd: Add iommufd_access_change_ioas(_id) helpers")
+Link: https://lore.kernel.org/r/3f1932acaf1dd494d404c04364d73ce8f57f3e5e.1708636627.git.nicolinc@nvidia.com
+Reported-by: Jason Gunthorpe <jgg@nvidia.com>
+Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
+Reviewed-by: Kevin Tian <kevin.tian@intel.com>
+Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
+(cherry picked from commit cf7c2789822db8b5efa34f5ebcf1621bc0008d48)
+[Harshit: CVE-2024-26785; Resolve conflicts due to missing commit:
+ bd7a282650b8 ("iommufd: Add iommufd_ctx to iommufd_put_object()") in
+ 6.6.y]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iommu/iommufd/selftest.c | 27 +++++++++++++++++++++------
+ 1 file changed, 21 insertions(+), 6 deletions(-)
+
+--- a/drivers/iommu/iommufd/selftest.c
++++ b/drivers/iommu/iommufd/selftest.c
+@@ -44,8 +44,8 @@ enum {
+ * In syzkaller mode the 64 bit IOVA is converted into an nth area and offset
+ * value. This has a much smaller randomization space and syzkaller can hit it.
+ */
+-static unsigned long iommufd_test_syz_conv_iova(struct io_pagetable *iopt,
+- u64 *iova)
++static unsigned long __iommufd_test_syz_conv_iova(struct io_pagetable *iopt,
++ u64 *iova)
+ {
+ struct syz_layout {
+ __u32 nth_area;
+@@ -69,6 +69,21 @@ static unsigned long iommufd_test_syz_co
+ return 0;
+ }
+
++static unsigned long iommufd_test_syz_conv_iova(struct iommufd_access *access,
++ u64 *iova)
++{
++ unsigned long ret;
++
++ mutex_lock(&access->ioas_lock);
++ if (!access->ioas) {
++ mutex_unlock(&access->ioas_lock);
++ return 0;
++ }
++ ret = __iommufd_test_syz_conv_iova(&access->ioas->iopt, iova);
++ mutex_unlock(&access->ioas_lock);
++ return ret;
++}
++
+ void iommufd_test_syz_conv_iova_id(struct iommufd_ucmd *ucmd,
+ unsigned int ioas_id, u64 *iova, u32 *flags)
+ {
+@@ -81,7 +96,7 @@ void iommufd_test_syz_conv_iova_id(struc
+ ioas = iommufd_get_ioas(ucmd->ictx, ioas_id);
+ if (IS_ERR(ioas))
+ return;
+- *iova = iommufd_test_syz_conv_iova(&ioas->iopt, iova);
++ *iova = __iommufd_test_syz_conv_iova(&ioas->iopt, iova);
+ iommufd_put_object(&ioas->obj);
+ }
+
+@@ -852,7 +867,7 @@ static int iommufd_test_access_pages(str
+ }
+
+ if (flags & MOCK_FLAGS_ACCESS_SYZ)
+- iova = iommufd_test_syz_conv_iova(&staccess->access->ioas->iopt,
++ iova = iommufd_test_syz_conv_iova(staccess->access,
+ &cmd->access_pages.iova);
+
+ npages = (ALIGN(iova + length, PAGE_SIZE) -
+@@ -954,8 +969,8 @@ static int iommufd_test_access_rw(struct
+ }
+
+ if (flags & MOCK_FLAGS_ACCESS_SYZ)
+- iova = iommufd_test_syz_conv_iova(&staccess->access->ioas->iopt,
+- &cmd->access_rw.iova);
++ iova = iommufd_test_syz_conv_iova(staccess->access,
++ &cmd->access_rw.iova);
+
+ rc = iommufd_access_rw(staccess->access, iova, tmp, length, flags);
+ if (rc)
--- /dev/null
+From stable+bounces-80561-greg=kroah.com@vger.kernel.org Wed Oct 2 17:07:48 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:05:59 +0200
+Subject: net: dsa: fix netdev_priv() dereference before check on non-DSA netdevice events
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Vladimir Oltean <vladimir.oltean@nxp.com>, syzbot+d81bcd883824180500c8@syzkaller.appspotmail.com, Florian Fainelli <florian.fainelli@broadcom.com>, Eric Dumazet <edumazet@google.com>, Jakub Kicinski <kuba@kernel.org>, Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002150606.11385-9-vegard.nossum@oracle.com>
+
+From: Vladimir Oltean <vladimir.oltean@nxp.com>
+
+[ Upstream commit 844f104790bd69c2e4dbb9ee3eba46fde1fcea7b ]
+
+After the blamed commit, we started doing this dereference for every
+NETDEV_CHANGEUPPER and NETDEV_PRECHANGEUPPER event in the system.
+
+static inline struct dsa_port *dsa_user_to_port(const struct net_device *dev)
+{
+ struct dsa_user_priv *p = netdev_priv(dev);
+
+ return p->dp;
+}
+
+Which is obviously bogus, because not all net_devices have a netdev_priv()
+of type struct dsa_user_priv. But struct dsa_user_priv is fairly small,
+and p->dp means dereferencing 8 bytes starting with offset 16. Most
+drivers allocate that much private memory anyway, making our access not
+fault, and we discard the bogus data quickly afterwards, so this wasn't
+caught.
+
+But the dummy interface is somewhat special in that it calls
+alloc_netdev() with a priv size of 0. So every netdev_priv() dereference
+is invalid, and we get this when we emit a NETDEV_PRECHANGEUPPER event
+with a VLAN as its new upper:
+
+$ ip link add dummy1 type dummy
+$ ip link add link dummy1 name dummy1.100 type vlan id 100
+[ 43.309174] ==================================================================
+[ 43.316456] BUG: KASAN: slab-out-of-bounds in dsa_user_prechangeupper+0x30/0xe8
+[ 43.323835] Read of size 8 at addr ffff3f86481d2990 by task ip/374
+[ 43.330058]
+[ 43.342436] Call trace:
+[ 43.366542] dsa_user_prechangeupper+0x30/0xe8
+[ 43.371024] dsa_user_netdevice_event+0xb38/0xee8
+[ 43.375768] notifier_call_chain+0xa4/0x210
+[ 43.379985] raw_notifier_call_chain+0x24/0x38
+[ 43.384464] __netdev_upper_dev_link+0x3ec/0x5d8
+[ 43.389120] netdev_upper_dev_link+0x70/0xa8
+[ 43.393424] register_vlan_dev+0x1bc/0x310
+[ 43.397554] vlan_newlink+0x210/0x248
+[ 43.401247] rtnl_newlink+0x9fc/0xe30
+[ 43.404942] rtnetlink_rcv_msg+0x378/0x580
+
+Avoid the kernel oops by dereferencing after the type check, as customary.
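+
+Sketched generically (hypothetical foo_* names; the hunks below do the
+same thing with dsa_slave_dev_check()/dsa_slave_to_port()):
+
+  static int foo_changeupper(struct net_device *dev)
+  {
+          struct foo_priv *p;
+
+          if (!foo_dev_check(dev))        /* really one of ours? */
+                  return NOTIFY_DONE;
+
+          p = netdev_priv(dev);           /* only now is the cast valid */
+          /* ... use p ... */
+          return NOTIFY_OK;
+  }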
+
+Fixes: 4c3f80d22b2e ("net: dsa: walk through all changeupper notifier functions")
+Reported-and-tested-by: syzbot+d81bcd883824180500c8@syzkaller.appspotmail.com
+Closes: https://lore.kernel.org/netdev/0000000000001d4255060e87545c@google.com/
+Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
+Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
+Reviewed-by: Eric Dumazet <edumazet@google.com>
+Link: https://lore.kernel.org/r/20240110003354.2796778-1-vladimir.oltean@nxp.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+(cherry picked from commit 844f104790bd69c2e4dbb9ee3eba46fde1fcea7b)
+[Harshit: CVE-2024-26596; Resolve conflicts due to missing commit: 6ca80638b90c
+ ("net: dsa: Use conduit and user terms") in 6.6.y, used dsa_slave_to_port()
+ instead of dsa_user_to_port()]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/dsa/slave.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -2822,13 +2822,14 @@ EXPORT_SYMBOL_GPL(dsa_slave_dev_check);
+ static int dsa_slave_changeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
+ {
+- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct netlink_ext_ack *extack;
+ int err = NOTIFY_DONE;
++ struct dsa_port *dp;
+
+ if (!dsa_slave_dev_check(dev))
+ return err;
+
++ dp = dsa_slave_to_port(dev);
+ extack = netdev_notifier_info_to_extack(&info->info);
+
+ if (netif_is_bridge_master(info->upper_dev)) {
+@@ -2881,11 +2882,13 @@ static int dsa_slave_changeupper(struct
+ static int dsa_slave_prechangeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
+ {
+- struct dsa_port *dp = dsa_slave_to_port(dev);
++ struct dsa_port *dp;
+
+ if (!dsa_slave_dev_check(dev))
+ return NOTIFY_DONE;
+
++ dp = dsa_slave_to_port(dev);
++
+ if (netif_is_bridge_master(info->upper_dev) && !info->linking)
+ dsa_port_pre_bridge_leave(dp, info->upper_dev);
+ else if (netif_is_lag_master(info->upper_dev) && !info->linking)
--- /dev/null
+From stable+bounces-80567-greg=kroah.com@vger.kernel.org Wed Oct 2 17:15:05 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:12:36 +0200
+Subject: net: stmmac: move the EST lock to struct stmmac_priv
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Xiaolei Wang <xiaolei.wang@windriver.com>, Simon Horman <horms@kernel.org>, Serge Semin <fancer.lancer@gmail.com>, Andrew Halaney <ahalaney@redhat.com>, Jakub Kicinski <kuba@kernel.org>, Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002151236.11787-5-vegard.nossum@oracle.com>
+
+From: Xiaolei Wang <xiaolei.wang@windriver.com>
+
+[ Upstream commit 36ac9e7f2e5786bd37c5cd91132e1f39c29b8197 ]
+
+Reinitializing the whole EST structure would also reset the mutex
+lock which is embedded in the EST structure, and then trigger
+the following warning. To address this, move the lock to struct
+stmmac_priv. We also need to hold the mutex lock while doing
+this reinitialization.
+
+DEBUG_LOCKS_WARN_ON(lock->magic != lock)
+WARNING: CPU: 3 PID: 505 at kernel/locking/mutex.c:587 __mutex_lock+0xd84/0x1068
+ Modules linked in:
+ CPU: 3 PID: 505 Comm: tc Not tainted 6.9.0-rc6-00053-g0106679839f7-dirty #29
+ Hardware name: NXP i.MX8MPlus EVK board (DT)
+ pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
+ pc : __mutex_lock+0xd84/0x1068
+ lr : __mutex_lock+0xd84/0x1068
+ sp : ffffffc0864e3570
+ x29: ffffffc0864e3570 x28: ffffffc0817bdc78 x27: 0000000000000003
+ x26: ffffff80c54f1808 x25: ffffff80c9164080 x24: ffffffc080d723ac
+ x23: 0000000000000000 x22: 0000000000000002 x21: 0000000000000000
+ x20: 0000000000000000 x19: ffffffc083bc3000 x18: ffffffffffffffff
+ x17: ffffffc08117b080 x16: 0000000000000002 x15: ffffff80d2d40000
+ x14: 00000000000002da x13: ffffff80d2d404b8 x12: ffffffc082b5a5c8
+ x11: ffffffc082bca680 x10: ffffffc082bb2640 x9 : ffffffc082bb2698
+ x8 : 0000000000017fe8 x7 : c0000000ffffefff x6 : 0000000000000001
+ x5 : ffffff8178fe0d48 x4 : 0000000000000000 x3 : 0000000000000027
+ x2 : ffffff8178fe0d50 x1 : 0000000000000000 x0 : 0000000000000000
+ Call trace:
+ __mutex_lock+0xd84/0x1068
+ mutex_lock_nested+0x28/0x34
+ tc_setup_taprio+0x118/0x68c
+ stmmac_setup_tc+0x50/0xf0
+ taprio_change+0x868/0xc9c
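+
+In short (illustrative sketch with made-up struct names, not the driver
+code): memset() of a structure that embeds a mutex wipes the lock's
+debug state, so the next mutex_lock() trips the check above. Keeping
+the mutex outside the data that gets reset avoids that:
+
+  struct est_cfg {
+          /* EST parameters only, no lock in here */
+          u32 gcl_size;
+  };
+
+  struct priv {
+          struct mutex est_lock;  /* survives EST reinitialization */
+          struct est_cfg *est;
+  };
+
+  static void est_reset(struct priv *priv)
+  {
+          mutex_lock(&priv->est_lock);
+          memset(priv->est, 0, sizeof(*priv->est));
+          mutex_unlock(&priv->est_lock);
+  }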
+
+Fixes: b2aae654a479 ("net: stmmac: add mutex lock to protect est parameters")
+Signed-off-by: Xiaolei Wang <xiaolei.wang@windriver.com>
+Reviewed-by: Simon Horman <horms@kernel.org>
+Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
+Reviewed-by: Andrew Halaney <ahalaney@redhat.com>
+Link: https://lore.kernel.org/r/20240513014346.1718740-2-xiaolei.wang@windriver.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+(cherry picked from commit 36ac9e7f2e5786bd37c5cd91132e1f39c29b8197)
+[Harshit: CVE-2024-38594; resolved conflicts due to missing commit:
+ 5ca63ffdb94b ("net: stmmac: Report taprio offload status") in 6.6.y]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/stmicro/stmmac/stmmac.h | 2 ++
+ drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c | 8 ++++----
+ drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c | 18 ++++++++++--------
+ include/linux/stmmac.h | 1 -
+ 4 files changed, 16 insertions(+), 13 deletions(-)
+
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -248,6 +248,8 @@ struct stmmac_priv {
+ struct stmmac_extra_stats xstats ____cacheline_aligned_in_smp;
+ struct stmmac_safety_stats sstats;
+ struct plat_stmmacenet_data *plat;
++ /* Protect est parameters */
++ struct mutex est_lock;
+ struct dma_features dma_cap;
+ struct stmmac_counters mmc;
+ int hw_cap_support;
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+@@ -70,11 +70,11 @@ static int stmmac_adjust_time(struct ptp
+ /* If EST is enabled, disabled it before adjust ptp time. */
+ if (priv->plat->est && priv->plat->est->enable) {
+ est_rst = true;
+- mutex_lock(&priv->plat->est->lock);
++ mutex_lock(&priv->est_lock);
+ priv->plat->est->enable = false;
+ stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+ priv->plat->clk_ptp_rate);
+- mutex_unlock(&priv->plat->est->lock);
++ mutex_unlock(&priv->est_lock);
+ }
+
+ write_lock_irqsave(&priv->ptp_lock, flags);
+@@ -87,7 +87,7 @@ static int stmmac_adjust_time(struct ptp
+ ktime_t current_time_ns, basetime;
+ u64 cycle_time;
+
+- mutex_lock(&priv->plat->est->lock);
++ mutex_lock(&priv->est_lock);
+ priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+ current_time_ns = timespec64_to_ktime(current_time);
+ time.tv_nsec = priv->plat->est->btr_reserve[0];
+@@ -104,7 +104,7 @@ static int stmmac_adjust_time(struct ptp
+ priv->plat->est->enable = true;
+ ret = stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+ priv->plat->clk_ptp_rate);
+- mutex_unlock(&priv->plat->est->lock);
++ mutex_unlock(&priv->est_lock);
+ if (ret)
+ netdev_err(priv->dev, "failed to configure EST\n");
+ }
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c
+@@ -984,17 +984,19 @@ static int tc_setup_taprio(struct stmmac
+ if (!plat->est)
+ return -ENOMEM;
+
+- mutex_init(&priv->plat->est->lock);
++ mutex_init(&priv->est_lock);
+ } else {
++ mutex_lock(&priv->est_lock);
+ memset(plat->est, 0, sizeof(*plat->est));
++ mutex_unlock(&priv->est_lock);
+ }
+
+ size = qopt->num_entries;
+
+- mutex_lock(&priv->plat->est->lock);
++ mutex_lock(&priv->est_lock);
+ priv->plat->est->gcl_size = size;
+ priv->plat->est->enable = qopt->cmd == TAPRIO_CMD_REPLACE;
+- mutex_unlock(&priv->plat->est->lock);
++ mutex_unlock(&priv->est_lock);
+
+ for (i = 0; i < size; i++) {
+ s64 delta_ns = qopt->entries[i].interval;
+@@ -1025,7 +1027,7 @@ static int tc_setup_taprio(struct stmmac
+ priv->plat->est->gcl[i] = delta_ns | (gates << wid);
+ }
+
+- mutex_lock(&priv->plat->est->lock);
++ mutex_lock(&priv->est_lock);
+ /* Adjust for real system time */
+ priv->ptp_clock_ops.gettime64(&priv->ptp_clock_ops, &current_time);
+ current_time_ns = timespec64_to_ktime(current_time);
+@@ -1044,7 +1046,7 @@ static int tc_setup_taprio(struct stmmac
+ priv->plat->est->ctr[1] = (u32)ctr;
+
+ if (fpe && !priv->dma_cap.fpesel) {
+- mutex_unlock(&priv->plat->est->lock);
++ mutex_unlock(&priv->est_lock);
+ return -EOPNOTSUPP;
+ }
+
+@@ -1055,7 +1057,7 @@ static int tc_setup_taprio(struct stmmac
+
+ ret = stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+ priv->plat->clk_ptp_rate);
+- mutex_unlock(&priv->plat->est->lock);
++ mutex_unlock(&priv->est_lock);
+ if (ret) {
+ netdev_err(priv->dev, "failed to configure EST\n");
+ goto disable;
+@@ -1072,11 +1074,11 @@ static int tc_setup_taprio(struct stmmac
+
+ disable:
+ if (priv->plat->est) {
+- mutex_lock(&priv->plat->est->lock);
++ mutex_lock(&priv->est_lock);
+ priv->plat->est->enable = false;
+ stmmac_est_configure(priv, priv->ioaddr, priv->plat->est,
+ priv->plat->clk_ptp_rate);
+- mutex_unlock(&priv->plat->est->lock);
++ mutex_unlock(&priv->est_lock);
+ }
+
+ priv->plat->fpe_cfg->enable = false;
+--- a/include/linux/stmmac.h
++++ b/include/linux/stmmac.h
+@@ -117,7 +117,6 @@ struct stmmac_axi {
+
+ #define EST_GCL 1024
+ struct stmmac_est {
+- struct mutex lock;
+ int enable;
+ u32 btr_reserve[2];
+ u32 btr_offset[2];
--- /dev/null
+From stable+bounces-80557-greg=kroah.com@vger.kernel.org Wed Oct 2 17:07:44 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:05:57 +0200
+Subject: netfilter: nf_tables: fix memleak in map from abort path
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Pablo Neira Ayuso <pablo@netfilter.org>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002150606.11385-7-vegard.nossum@oracle.com>
+
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+
+[ Upstream commit 86a1471d7cde792941109b93b558b5dc078b9ee9 ]
+
+The delete set command does not rely on the transaction object for
+element removal; therefore, a combination of delete element + delete set
+in the abort path could result in restoring the refcount of the
+mapping twice.
+
+In the abort path, check for an inactive element in the next generation
+for the delete element command, and skip restoring state if the next
+generation bit has already been cleared. This is similar to the activate
+logic using the set walk iterator.
+
+[ 6170.286929] ------------[ cut here ]------------
+[ 6170.286939] WARNING: CPU: 6 PID: 790302 at net/netfilter/nf_tables_api.c:2086 nf_tables_chain_destroy+0x1f7/0x220 [nf_tables]
+[ 6170.287071] Modules linked in: [...]
+[ 6170.287633] CPU: 6 PID: 790302 Comm: kworker/6:2 Not tainted 6.9.0-rc3+ #365
+[ 6170.287768] RIP: 0010:nf_tables_chain_destroy+0x1f7/0x220 [nf_tables]
+[ 6170.287886] Code: df 48 8d 7d 58 e8 69 2e 3b df 48 8b 7d 58 e8 80 1b 37 df 48 8d 7d 68 e8 57 2e 3b df 48 8b 7d 68 e8 6e 1b 37 df 48 89 ef eb c4 <0f> 0b 48 83 c4 08 5b 5d 41 5c 41 5d 41 5e 41 5f c3 cc cc cc cc 0f
+[ 6170.287895] RSP: 0018:ffff888134b8fd08 EFLAGS: 00010202
+[ 6170.287904] RAX: 0000000000000001 RBX: ffff888125bffb28 RCX: dffffc0000000000
+[ 6170.287912] RDX: 0000000000000003 RSI: ffffffffa20298ab RDI: ffff88811ebe4750
+[ 6170.287919] RBP: ffff88811ebe4700 R08: ffff88838e812650 R09: fffffbfff0623a55
+[ 6170.287926] R10: ffffffff8311d2af R11: 0000000000000001 R12: ffff888125bffb10
+[ 6170.287933] R13: ffff888125bffb10 R14: dead000000000122 R15: dead000000000100
+[ 6170.287940] FS: 0000000000000000(0000) GS:ffff888390b00000(0000) knlGS:0000000000000000
+[ 6170.287948] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[ 6170.287955] CR2: 00007fd31fc00710 CR3: 0000000133f60004 CR4: 00000000001706f0
+[ 6170.287962] Call Trace:
+[ 6170.287967] <TASK>
+[ 6170.287973] ? __warn+0x9f/0x1a0
+[ 6170.287986] ? nf_tables_chain_destroy+0x1f7/0x220 [nf_tables]
+[ 6170.288092] ? report_bug+0x1b1/0x1e0
+[ 6170.288104] ? handle_bug+0x3c/0x70
+[ 6170.288112] ? exc_invalid_op+0x17/0x40
+[ 6170.288120] ? asm_exc_invalid_op+0x1a/0x20
+[ 6170.288132] ? nf_tables_chain_destroy+0x2b/0x220 [nf_tables]
+[ 6170.288243] ? nf_tables_chain_destroy+0x1f7/0x220 [nf_tables]
+[ 6170.288366] ? nf_tables_chain_destroy+0x2b/0x220 [nf_tables]
+[ 6170.288483] nf_tables_trans_destroy_work+0x588/0x590 [nf_tables]
+
+Fixes: 591054469b3e ("netfilter: nf_tables: revisit chain/object refcounting from elements")
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+(cherry picked from commit 86a1471d7cde792941109b93b558b5dc078b9ee9)
+[Vegard: CVE-2024-27011; fixed conflicts due to missing commits
+ 0e1ea651c9717ddcd8e0648d8468477a31867b0a ("netfilter: nf_tables: shrink
+ memory consumption of set elements") and
+ 9dad402b89e81a0516bad5e0ac009b7a0a80898f ("netfilter: nf_tables: expose
+ opaque set element as struct nft_elem_priv") so we pass the correct types
+ and values to nft_setelem_active_next() + nft_set_elem_ext()]
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/netfilter/nf_tables_api.c | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -7061,6 +7061,16 @@ void nft_data_hold(const struct nft_data
+ }
+ }
+
++static int nft_setelem_active_next(const struct net *net,
++ const struct nft_set *set,
++ struct nft_set_elem *elem)
++{
++ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
++ u8 genmask = nft_genmask_next(net);
++
++ return nft_set_elem_active(ext, genmask);
++}
++
+ static void nft_setelem_data_activate(const struct net *net,
+ const struct nft_set *set,
+ struct nft_set_elem *elem)
+@@ -10538,8 +10548,10 @@ static int __nf_tables_abort(struct net
+ case NFT_MSG_DESTROYSETELEM:
+ te = (struct nft_trans_elem *)trans->data;
+
+- nft_setelem_data_activate(net, te->set, &te->elem);
+- nft_setelem_activate(net, te->set, &te->elem);
++ if (!nft_setelem_active_next(net, te->set, &te->elem)) {
++ nft_setelem_data_activate(net, te->set, &te->elem);
++ nft_setelem_activate(net, te->set, &te->elem);
++ }
+ if (!nft_setelem_is_catchall(te->set, &te->elem))
+ te->set->ndeact--;
+
--- /dev/null
+From stable+bounces-80559-greg=kroah.com@vger.kernel.org Wed Oct 2 17:07:45 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:05:58 +0200
+Subject: netfilter: nf_tables: restore set elements when delete set fails
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Pablo Neira Ayuso <pablo@netfilter.org>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002150606.11385-8-vegard.nossum@oracle.com>
+
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+
+>From the abort path, nft_mapelem_activate() needs to restore refcounters
+to their original state. Currently, it uses set->ops->walk() to iterate
+over these set elements. The existing set iterator skips inactive
+elements in the next generation; this does not work from the abort path
+to restore the original state, since it has to skip active elements
+instead (not inactive ones).
+
+This patch moves the check for inactive elements to the set iterator
+callback, then it reverses the logic for the .activate case which
+needs to skip active elements.
+
+Toggle next generation bit for elements when delete set command is
+invoked and call nft_clear() from .activate (abort) path to restore the
+next generation bit.
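+
+Condensed to pseudo-C (simplified from the hunks below):
+
+  /* regular walk callbacks (dump, validate, flush, deactivate):
+   * skip elements already inactive in the next generation
+   */
+  if (!nft_set_elem_active(ext, iter->genmask))
+          return 0;
+
+  /* abort-path .activate callback: the reverse check, then restore
+   * the next generation bit
+   */
+  if (nft_set_elem_active(ext, iter->genmask))
+          return 0;
+  nft_clear(ctx->net, ext);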
+
+The splat below shows an object in mappings memleak:
+
+[43929.457523] ------------[ cut here ]------------
+[43929.457532] WARNING: CPU: 0 PID: 1139 at include/net/netfilter/nf_tables.h:1237 nft_setelem_data_deactivate+0xe4/0xf0 [nf_tables]
+[...]
+[43929.458014] RIP: 0010:nft_setelem_data_deactivate+0xe4/0xf0 [nf_tables]
+[43929.458076] Code: 83 f8 01 77 ab 49 8d 7c 24 08 e8 37 5e d0 de 49 8b 6c 24 08 48 8d 7d 50 e8 e9 5c d0 de 8b 45 50 8d 50 ff 89 55 50 85 c0 75 86 <0f> 0b eb 82 0f 0b eb b3 0f 1f 40 00 90 90 90 90 90 90 90 90 90 90
+[43929.458081] RSP: 0018:ffff888140f9f4b0 EFLAGS: 00010246
+[43929.458086] RAX: 0000000000000000 RBX: ffff8881434f5288 RCX: dffffc0000000000
+[43929.458090] RDX: 00000000ffffffff RSI: ffffffffa26d28a7 RDI: ffff88810ecc9550
+[43929.458093] RBP: ffff88810ecc9500 R08: 0000000000000001 R09: ffffed10281f3e8f
+[43929.458096] R10: 0000000000000003 R11: ffff0000ffff0000 R12: ffff8881434f52a0
+[43929.458100] R13: ffff888140f9f5f4 R14: ffff888151c7a800 R15: 0000000000000002
+[43929.458103] FS: 00007f0c687c4740(0000) GS:ffff888390800000(0000) knlGS:0000000000000000
+[43929.458107] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[43929.458111] CR2: 00007f58dbe5b008 CR3: 0000000123602005 CR4: 00000000001706f0
+[43929.458114] Call Trace:
+[43929.458118] <TASK>
+[43929.458121] ? __warn+0x9f/0x1a0
+[43929.458127] ? nft_setelem_data_deactivate+0xe4/0xf0 [nf_tables]
+[43929.458188] ? report_bug+0x1b1/0x1e0
+[43929.458196] ? handle_bug+0x3c/0x70
+[43929.458200] ? exc_invalid_op+0x17/0x40
+[43929.458211] ? nft_setelem_data_deactivate+0xd7/0xf0 [nf_tables]
+[43929.458271] ? nft_setelem_data_deactivate+0xe4/0xf0 [nf_tables]
+[43929.458332] nft_mapelem_deactivate+0x24/0x30 [nf_tables]
+[43929.458392] nft_rhash_walk+0xdd/0x180 [nf_tables]
+[43929.458453] ? __pfx_nft_rhash_walk+0x10/0x10 [nf_tables]
+[43929.458512] ? rb_insert_color+0x2e/0x280
+[43929.458520] nft_map_deactivate+0xdc/0x1e0 [nf_tables]
+[43929.458582] ? __pfx_nft_map_deactivate+0x10/0x10 [nf_tables]
+[43929.458642] ? __pfx_nft_mapelem_deactivate+0x10/0x10 [nf_tables]
+[43929.458701] ? __rcu_read_unlock+0x46/0x70
+[43929.458709] nft_delset+0xff/0x110 [nf_tables]
+[43929.458769] nft_flush_table+0x16f/0x460 [nf_tables]
+[43929.458830] nf_tables_deltable+0x501/0x580 [nf_tables]
+
+Fixes: 628bd3e49cba ("netfilter: nf_tables: drop map element references from preparation phase")
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+(cherry picked from commit e79b47a8615d42c68aaeb68971593333667382ed)
+[Vegard: CVE-2024-27012; fixed conflicts due to missing commits
+ 0e1ea651c9717ddcd8e0648d8468477a31867b0a ("netfilter: nf_tables: shrink
+ memory consumption of set elements") and
+ 9dad402b89e81a0516bad5e0ac009b7a0a80898f ("netfilter: nf_tables: expose
+ opaque set element as struct nft_elem_priv") so we pass the correct types
+ and values to nft_setelem_data_deactivate(), nft_setelem_validate(),
+ nft_set_elem_ext(), etc.]
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/netfilter/nf_tables_api.c | 41 +++++++++++++++++++++++++++++++++++++----
+ net/netfilter/nft_set_bitmap.c | 4 +---
+ net/netfilter/nft_set_hash.c | 8 ++------
+ net/netfilter/nft_set_pipapo.c | 5 +----
+ net/netfilter/nft_set_rbtree.c | 4 +---
+ 5 files changed, 42 insertions(+), 20 deletions(-)
+
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -594,6 +594,12 @@ static int nft_mapelem_deactivate(const
+ const struct nft_set_iter *iter,
+ struct nft_set_elem *elem)
+ {
++ struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
++
++ if (!nft_set_elem_active(ext, iter->genmask))
++ return 0;
++
++ nft_set_elem_change_active(ctx->net, set, ext);
+ nft_setelem_data_deactivate(ctx->net, set, elem);
+
+ return 0;
+@@ -619,6 +625,7 @@ static void nft_map_catchall_deactivate(
+ continue;
+
+ elem.priv = catchall->elem;
++ nft_set_elem_change_active(ctx->net, set, ext);
+ nft_setelem_data_deactivate(ctx->net, set, &elem);
+ break;
+ }
+@@ -3820,6 +3827,9 @@ int nft_setelem_validate(const struct nf
+ const struct nft_data *data;
+ int err;
+
++ if (!nft_set_elem_active(ext, iter->genmask))
++ return 0;
++
+ if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) &&
+ *nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END)
+ return 0;
+@@ -3843,19 +3853,22 @@ int nft_setelem_validate(const struct nf
+
+ int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set)
+ {
+- u8 genmask = nft_genmask_next(ctx->net);
++ struct nft_set_iter dummy_iter = {
++ .genmask = nft_genmask_next(ctx->net),
++ };
+ struct nft_set_elem_catchall *catchall;
+ struct nft_set_elem elem;
++
+ struct nft_set_ext *ext;
+ int ret = 0;
+
+ list_for_each_entry_rcu(catchall, &set->catchall_list, list) {
+ ext = nft_set_elem_ext(set, catchall->elem);
+- if (!nft_set_elem_active(ext, genmask))
++ if (!nft_set_elem_active(ext, dummy_iter.genmask))
+ continue;
+
+ elem.priv = catchall->elem;
+- ret = nft_setelem_validate(ctx, set, NULL, &elem);
++ ret = nft_setelem_validate(ctx, set, &dummy_iter, &elem);
+ if (ret < 0)
+ return ret;
+ }
+@@ -5347,6 +5360,11 @@ static int nf_tables_bind_check_setelem(
+ const struct nft_set_iter *iter,
+ struct nft_set_elem *elem)
+ {
++ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
++
++ if (!nft_set_elem_active(ext, iter->genmask))
++ return 0;
++
+ return nft_setelem_data_validate(ctx, set, elem);
+ }
+
+@@ -5441,6 +5459,13 @@ static int nft_mapelem_activate(const st
+ const struct nft_set_iter *iter,
+ struct nft_set_elem *elem)
+ {
++ struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
++
++ /* called from abort path, reverse check to undo changes. */
++ if (nft_set_elem_active(ext, iter->genmask))
++ return 0;
++
++ nft_clear(ctx->net, ext);
+ nft_setelem_data_activate(ctx->net, set, elem);
+
+ return 0;
+@@ -5459,6 +5484,7 @@ static void nft_map_catchall_activate(co
+ if (!nft_set_elem_active(ext, genmask))
+ continue;
+
++ nft_clear(ctx->net, ext);
+ elem.priv = catchall->elem;
+ nft_setelem_data_activate(ctx->net, set, &elem);
+ break;
+@@ -5733,6 +5759,9 @@ static int nf_tables_dump_setelem(const
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ struct nft_set_dump_args *args;
+
++ if (!nft_set_elem_active(ext, iter->genmask))
++ return 0;
++
+ if (nft_set_elem_expired(ext) || nft_set_elem_is_dead(ext))
+ return 0;
+
+@@ -6500,7 +6529,7 @@ static void nft_setelem_activate(struct
+ struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+
+ if (nft_setelem_is_catchall(set, elem)) {
+- nft_set_elem_change_active(net, set, ext);
++ nft_clear(net, ext);
+ } else {
+ set->ops->activate(net, set, elem);
+ }
+@@ -7194,9 +7223,13 @@ static int nft_setelem_flush(const struc
+ const struct nft_set_iter *iter,
+ struct nft_set_elem *elem)
+ {
++ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ struct nft_trans *trans;
+ int err;
+
++ if (!nft_set_elem_active(ext, iter->genmask))
++ return 0;
++
+ trans = nft_trans_alloc_gfp(ctx, NFT_MSG_DELSETELEM,
+ sizeof(struct nft_trans_elem), GFP_ATOMIC);
+ if (!trans)
+--- a/net/netfilter/nft_set_bitmap.c
++++ b/net/netfilter/nft_set_bitmap.c
+@@ -171,7 +171,7 @@ static void nft_bitmap_activate(const st
+ nft_bitmap_location(set, nft_set_ext_key(&be->ext), &idx, &off);
+ /* Enter 11 state. */
+ priv->bitmap[idx] |= (genmask << off);
+- nft_set_elem_change_active(net, set, &be->ext);
++ nft_clear(net, &be->ext);
+ }
+
+ static bool nft_bitmap_flush(const struct net *net,
+@@ -223,8 +223,6 @@ static void nft_bitmap_walk(const struct
+ list_for_each_entry_rcu(be, &priv->list, head) {
+ if (iter->count < iter->skip)
+ goto cont;
+- if (!nft_set_elem_active(&be->ext, iter->genmask))
+- goto cont;
+
+ elem.priv = be;
+
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -189,7 +189,7 @@ static void nft_rhash_activate(const str
+ {
+ struct nft_rhash_elem *he = elem->priv;
+
+- nft_set_elem_change_active(net, set, &he->ext);
++ nft_clear(net, &he->ext);
+ }
+
+ static bool nft_rhash_flush(const struct net *net,
+@@ -277,8 +277,6 @@ static void nft_rhash_walk(const struct
+
+ if (iter->count < iter->skip)
+ goto cont;
+- if (!nft_set_elem_active(&he->ext, iter->genmask))
+- goto cont;
+
+ elem.priv = he;
+
+@@ -587,7 +585,7 @@ static void nft_hash_activate(const stru
+ {
+ struct nft_hash_elem *he = elem->priv;
+
+- nft_set_elem_change_active(net, set, &he->ext);
++ nft_clear(net, &he->ext);
+ }
+
+ static bool nft_hash_flush(const struct net *net,
+@@ -641,8 +639,6 @@ static void nft_hash_walk(const struct n
+ hlist_for_each_entry_rcu(he, &priv->table[i], node) {
+ if (iter->count < iter->skip)
+ goto cont;
+- if (!nft_set_elem_active(&he->ext, iter->genmask))
+- goto cont;
+
+ elem.priv = he;
+
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -1766,7 +1766,7 @@ static void nft_pipapo_activate(const st
+ {
+ struct nft_pipapo_elem *e = elem->priv;
+
+- nft_set_elem_change_active(net, set, &e->ext);
++ nft_clear(net, &e->ext);
+ }
+
+ /**
+@@ -2068,9 +2068,6 @@ static void nft_pipapo_walk(const struct
+
+ e = f->mt[r].e;
+
+- if (!nft_set_elem_active(&e->ext, iter->genmask))
+- goto cont;
+-
+ elem.priv = e;
+
+ iter->err = iter->fn(ctx, set, iter, &elem);
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -527,7 +527,7 @@ static void nft_rbtree_activate(const st
+ {
+ struct nft_rbtree_elem *rbe = elem->priv;
+
+- nft_set_elem_change_active(net, set, &rbe->ext);
++ nft_clear(net, &rbe->ext);
+ }
+
+ static bool nft_rbtree_flush(const struct net *net,
+@@ -596,8 +596,6 @@ static void nft_rbtree_walk(const struct
+
+ if (iter->count < iter->skip)
+ goto cont;
+- if (!nft_set_elem_active(&rbe->ext, iter->genmask))
+- goto cont;
+
+ elem.priv = rbe;
+
--- /dev/null
+From stable+bounces-80568-greg=kroah.com@vger.kernel.org Wed Oct 2 17:15:21 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:12:35 +0200
+Subject: null_blk: fix null-ptr-dereference while configuring 'power' and 'submit_queues'
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Yu Kuai <yukuai3@huawei.com>, Yi Zhang <yi.zhang@redhat.com>, Zhu Yanjun <yanjun.zhu@linux.dev>, Jens Axboe <axboe@kernel.dk>, Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002151236.11787-4-vegard.nossum@oracle.com>
+
+From: Yu Kuai <yukuai3@huawei.com>
+
+[ Upstream commit a2db328b0839312c169eb42746ec46fc1ab53ed2 ]
+
+Writing 'power' and 'submit_queues' concurrently will trigger a kernel
+panic:
+
+Test script:
+
+modprobe null_blk nr_devices=0
+mkdir -p /sys/kernel/config/nullb/nullb0
+while true; do echo 1 > submit_queues; echo 4 > submit_queues; done &
+while true; do echo 1 > power; echo 0 > power; done
+
+Test result:
+
+BUG: kernel NULL pointer dereference, address: 0000000000000148
+Oops: 0000 [#1] PREEMPT SMP
+RIP: 0010:__lock_acquire+0x41d/0x28f0
+Call Trace:
+ <TASK>
+ lock_acquire+0x121/0x450
+ down_write+0x5f/0x1d0
+ simple_recursive_removal+0x12f/0x5c0
+ blk_mq_debugfs_unregister_hctxs+0x7c/0x100
+ blk_mq_update_nr_hw_queues+0x4a3/0x720
+ nullb_update_nr_hw_queues+0x71/0xf0 [null_blk]
+ nullb_device_submit_queues_store+0x79/0xf0 [null_blk]
+ configfs_write_iter+0x119/0x1e0
+ vfs_write+0x326/0x730
+ ksys_write+0x74/0x150
+
+This is because del_gendisk() can run concurrently with
+blk_mq_update_nr_hw_queues():
+
+nullb_device_power_store nullb_apply_submit_queues
+ null_del_dev
+ del_gendisk
+ nullb_update_nr_hw_queues
+ if (!dev->nullb)
+ // still set while gendisk is deleted
+ return 0
+ blk_mq_update_nr_hw_queues
+ dev->nullb = NULL
+
+Fix this problem by reusing the global mutex to protect
+nullb_device_power_store() and nullb_update_nr_hw_queues() from configfs.
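+
+For illustration, the submit_queues side then looks like this (it
+mirrors the first hunk below; "lock" is the existing global null_blk
+mutex):
+
+  static int nullb_apply_submit_queues(struct nullb_device *dev,
+                                       unsigned int submit_queues)
+  {
+          int ret;
+
+          mutex_lock(&lock);  /* serialize with power store / del_gendisk */
+          ret = nullb_update_nr_hw_queues(dev, submit_queues,
+                                          dev->poll_queues);
+          mutex_unlock(&lock);
+
+          return ret;
+  }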
+
+Fixes: 45919fbfe1c4 ("null_blk: Enable modifying 'submit_queues' after an instance has been configured")
+Reported-and-tested-by: Yi Zhang <yi.zhang@redhat.com>
+Closes: https://lore.kernel.org/all/CAHj4cs9LgsHLnjg8z06LQ3Pr5cax-+Ps+xT7AP7TPnEjStuwZA@mail.gmail.com/
+Signed-off-by: Yu Kuai <yukuai3@huawei.com>
+Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
+Link: https://lore.kernel.org/r/20240523153934.1937851-1-yukuai1@huaweicloud.com
+Signed-off-by: Jens Axboe <axboe@kernel.dk>
+(cherry picked from commit a2db328b0839312c169eb42746ec46fc1ab53ed2)
+[Harshit: CVE-2024-36478; Resolve conflicts due to missing commit:
+ e440626b1caf ("null_blk: pass queue_limits to blk_mq_alloc_disk") in
+ 6.6.y]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/block/null_blk/main.c | 40 ++++++++++++++++++++++++++--------------
+ 1 file changed, 26 insertions(+), 14 deletions(-)
+
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -392,13 +392,25 @@ static int nullb_update_nr_hw_queues(str
+ static int nullb_apply_submit_queues(struct nullb_device *dev,
+ unsigned int submit_queues)
+ {
+- return nullb_update_nr_hw_queues(dev, submit_queues, dev->poll_queues);
++ int ret;
++
++ mutex_lock(&lock);
++ ret = nullb_update_nr_hw_queues(dev, submit_queues, dev->poll_queues);
++ mutex_unlock(&lock);
++
++ return ret;
+ }
+
+ static int nullb_apply_poll_queues(struct nullb_device *dev,
+ unsigned int poll_queues)
+ {
+- return nullb_update_nr_hw_queues(dev, dev->submit_queues, poll_queues);
++ int ret;
++
++ mutex_lock(&lock);
++ ret = nullb_update_nr_hw_queues(dev, dev->submit_queues, poll_queues);
++ mutex_unlock(&lock);
++
++ return ret;
+ }
+
+ NULLB_DEVICE_ATTR(size, ulong, NULL);
+@@ -444,28 +456,31 @@ static ssize_t nullb_device_power_store(
+ if (ret < 0)
+ return ret;
+
++ ret = count;
++ mutex_lock(&lock);
+ if (!dev->power && newp) {
+ if (test_and_set_bit(NULLB_DEV_FL_UP, &dev->flags))
+- return count;
++ goto out;
++
+ ret = null_add_dev(dev);
+ if (ret) {
+ clear_bit(NULLB_DEV_FL_UP, &dev->flags);
+- return ret;
++ goto out;
+ }
+
+ set_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
+ dev->power = newp;
+ } else if (dev->power && !newp) {
+ if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags)) {
+- mutex_lock(&lock);
+ dev->power = newp;
+ null_del_dev(dev->nullb);
+- mutex_unlock(&lock);
+ }
+ clear_bit(NULLB_DEV_FL_CONFIGURED, &dev->flags);
+ }
+
+- return count;
++out:
++ mutex_unlock(&lock);
++ return ret;
+ }
+
+ CONFIGFS_ATTR(nullb_device_, power);
+@@ -2153,15 +2168,12 @@ static int null_add_dev(struct nullb_dev
+ nullb->q->queuedata = nullb;
+ blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
+
+- mutex_lock(&lock);
+ rv = ida_alloc(&nullb_indexes, GFP_KERNEL);
+- if (rv < 0) {
+- mutex_unlock(&lock);
++ if (rv < 0)
+ goto out_cleanup_zone;
+- }
++
+ nullb->index = rv;
+ dev->index = rv;
+- mutex_unlock(&lock);
+
+ blk_queue_logical_block_size(nullb->q, dev->blocksize);
+ blk_queue_physical_block_size(nullb->q, dev->blocksize);
+@@ -2185,9 +2197,7 @@ static int null_add_dev(struct nullb_dev
+ if (rv)
+ goto out_ida_free;
+
+- mutex_lock(&lock);
+ list_add_tail(&nullb->list, &nullb_list);
+- mutex_unlock(&lock);
+
+ pr_info("disk %s created\n", nullb->disk_name);
+
+@@ -2236,7 +2246,9 @@ static int null_create_dev(void)
+ if (!dev)
+ return -ENOMEM;
+
++ mutex_lock(&lock);
+ ret = null_add_dev(dev);
++ mutex_unlock(&lock);
+ if (ret) {
+ null_free_dev(dev);
+ return ret;
--- /dev/null
+From stable+bounces-80566-greg=kroah.com@vger.kernel.org Wed Oct 2 17:14:52 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:12:34 +0200
+Subject: null_blk: Remove usage of the deprecated ida_simple_xx() API
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Christophe JAILLET <christophe.jaillet@wanadoo.fr>, Jens Axboe <axboe@kernel.dk>, Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002151236.11787-3-vegard.nossum@oracle.com>
+
+From: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
+
+[ Upstream commit 95931a245b44ee04f3359ec432e73614d44d8b38 ]
+
+ida_alloc() and ida_free() should be preferred to the deprecated
+ida_simple_get() and ida_simple_remove().
+
+This is less verbose.
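+
+For reference, the modern pair is used like this (generic example, not
+driver code; my_ida and grab_index() are made up):
+
+  #include <linux/idr.h>
+
+  static DEFINE_IDA(my_ida);
+
+  static int grab_index(void)
+  {
+          int id = ida_alloc(&my_ida, GFP_KERNEL); /* any free id >= 0 */
+
+          if (id < 0)
+                  return id;
+          /* ... use id ... */
+          ida_free(&my_ida, id);
+          return 0;
+  }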
+
+Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
+Link: https://lore.kernel.org/r/bf257b1078475a415cdc3344c6a750842946e367.1705222845.git.christophe.jaillet@wanadoo.fr
+Signed-off-by: Jens Axboe <axboe@kernel.dk>
+(cherry picked from commit 95931a245b44ee04f3359ec432e73614d44d8b38)
+[Harshit: backport to 6.6.y]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/block/null_blk/main.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1819,7 +1819,7 @@ static void null_del_dev(struct nullb *n
+
+ dev = nullb->dev;
+
+- ida_simple_remove(&nullb_indexes, nullb->index);
++ ida_free(&nullb_indexes, nullb->index);
+
+ list_del_init(&nullb->list);
+
+@@ -2154,7 +2154,7 @@ static int null_add_dev(struct nullb_dev
+ blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
+
+ mutex_lock(&lock);
+- rv = ida_simple_get(&nullb_indexes, 0, 0, GFP_KERNEL);
++ rv = ida_alloc(&nullb_indexes, GFP_KERNEL);
+ if (rv < 0) {
+ mutex_unlock(&lock);
+ goto out_cleanup_zone;
--- /dev/null
+From stable+bounces-80565-greg=kroah.com@vger.kernel.org Wed Oct 2 17:14:50 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:12:33 +0200
+Subject: platform/x86: think-lmi: Fix password opcode ordering for workstations
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, "Mark Pearson" <mpearson-lenovo@squebb.ca>, "Ilpo Järvinen" <ilpo.jarvinen@linux.intel.com>, "Hans de Goede" <hdegoede@redhat.com>, "Harshit Mogalapalli" <harshit.m.mogalapalli@oracle.com>, "Vegard Nossum" <vegard.nossum@oracle.com>
+Message-ID: <20241002151236.11787-2-vegard.nossum@oracle.com>
+
+From: Mark Pearson <mpearson-lenovo@squebb.ca>
+
+[ Upstream commit 6f7d0f5fd8e440c3446560100ac4ff9a55eec340 ]
+
+Lenovo workstations require the password opcode to be run before
+the attribute value is changed (if the Admin password is enabled).
+
+Tested on some ThinkPads to confirm they are OK with this order too.
+
+Signed-off-by: Mark Pearson <mpearson-lenovo@squebb.ca>
+Fixes: 640a5fa50a42 ("platform/x86: think-lmi: Opcode support")
+Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+Link: https://lore.kernel.org/r/20240209152359.528919-1-mpearson-lenovo@squebb.ca
+Reviewed-by: Hans de Goede <hdegoede@redhat.com>
+Signed-off-by: Hans de Goede <hdegoede@redhat.com>
+(cherry picked from commit 6f7d0f5fd8e440c3446560100ac4ff9a55eec340)
+[Harshit: CVE-2024-26836; Resolve conflicts due to missing commit:
+ 318d97849fc2 ("platform/x86: think-lmi: Add bulk save feature") which is
+ not in 6.6.y]
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/platform/x86/think-lmi.c | 16 +++++++++-------
+ 1 file changed, 9 insertions(+), 7 deletions(-)
+
+--- a/drivers/platform/x86/think-lmi.c
++++ b/drivers/platform/x86/think-lmi.c
+@@ -1021,7 +1021,16 @@ static ssize_t current_value_store(struc
+ * Note - this sets the variable and then the password as separate
+ * WMI calls. Function tlmi_save_bios_settings will error if the
+ * password is incorrect.
++ * Workstation's require the opcode to be set before changing the
++ * attribute.
+ */
++ if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) {
++ ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin",
++ tlmi_priv.pwd_admin->password);
++ if (ret)
++ goto out;
++ }
++
+ set_str = kasprintf(GFP_KERNEL, "%s,%s;", setting->display_name,
+ new_setting);
+ if (!set_str) {
+@@ -1033,13 +1042,6 @@ static ssize_t current_value_store(struc
+ if (ret)
+ goto out;
+
+- if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) {
+- ret = tlmi_opcode_setting("WmiOpcodePasswordAdmin",
+- tlmi_priv.pwd_admin->password);
+- if (ret)
+- goto out;
+- }
+-
+ ret = tlmi_save_bios_settings("");
+ } else { /* old non-opcode based authentication method (deprecated) */
+ if (tlmi_priv.pwd_admin->valid && tlmi_priv.pwd_admin->password[0]) {
btrfs-drop-the-backref-cache-during-relocation-if-we.patch
drm-rockchip-vop-enable-vop_feature_internal_rgb-on-rk3066.patch
revert-drm-amd-display-skip-recompute-dsc-params-if-no-stream-on-link.patch
+ubifs-ubifs_symlink-fix-memleak-of-inode-i_link-in-error-path.patch
+netfilter-nf_tables-fix-memleak-in-map-from-abort-path.patch
+netfilter-nf_tables-restore-set-elements-when-delete-set-fails.patch
+net-dsa-fix-netdev_priv-dereference-before-check-on-non-dsa-netdevice-events.patch
+iommufd-fix-protection-fault-in-iommufd_test_syz_conv_iova.patch
+drm-bridge-adv7511-fix-crash-on-irq-during-probe.patch
+efi-unaccepted-touch-soft-lockup-during-memory-accept.patch
+platform-x86-think-lmi-fix-password-opcode-ordering-for-workstations.patch
+null_blk-remove-usage-of-the-deprecated-ida_simple_xx-api.patch
+null_blk-fix-null-ptr-dereference-while-configuring-power-and-submit_queues.patch
+net-stmmac-move-the-est-lock-to-struct-stmmac_priv.patch
--- /dev/null
+From stable+bounces-80556-greg=kroah.com@vger.kernel.org Wed Oct 2 17:07:43 2024
+From: Vegard Nossum <vegard.nossum@oracle.com>
+Date: Wed, 2 Oct 2024 17:05:52 +0200
+Subject: ubifs: ubifs_symlink: Fix memleak of inode->i_link in error path
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: stable@vger.kernel.org, pavel@denx.de, cengiz.can@canonical.com, mheyne@amazon.de, mngyadam@amazon.com, kuntal.nayak@broadcom.com, ajay.kaher@broadcom.com, zsm@chromium.org, dan.carpenter@linaro.org, shivani.agarwal@broadcom.com, Zhihao Cheng <chengzhihao1@huawei.com>, Eric Biggers <ebiggers@kernel.org>, Eric Biggers <ebiggers@google.com>, Richard Weinberger <richard@nod.at>, Vegard Nossum <vegard.nossum@oracle.com>
+Message-ID: <20241002150606.11385-2-vegard.nossum@oracle.com>
+
+From: Zhihao Cheng <chengzhihao1@huawei.com>
+
+[ Upstream commit 6379b44cdcd67f5f5d986b73953e99700591edfa ]
+
+In the error handling path of ubifs_symlink(), the inode is marked as
+bad first, then iput() is invoked. If inode->i_link was initialized by
+fscrypt_encrypt_symlink() in the encryption scenario, inode->i_link
+won't be freed by the call chain ubifs_free_inode -> fscrypt_free_inode
+in the error handling path, because make_bad_inode() has changed
+'inode->i_mode' to 'S_IFREG'.
+The following kmemleak report is easily reproduced by injecting an
+error in ubifs_jnl_update() when doing symlink in the encryption
+scenario:
+ unreferenced object 0xffff888103da3d98 (size 8):
+ comm "ln", pid 1692, jiffies 4294914701 (age 12.045s)
+ backtrace:
+ kmemdup+0x32/0x70
+ __fscrypt_encrypt_symlink+0xed/0x1c0
+ ubifs_symlink+0x210/0x300 [ubifs]
+ vfs_symlink+0x216/0x360
+ do_symlinkat+0x11a/0x190
+ do_syscall_64+0x3b/0xe0
+There are two ways of fixing it:
+ 1. Remove make_bad_inode() from the error handling path. We can do
+    that because ubifs_evict_inode() does the same processing for good
+    and bad symlink inodes, since the inode->i_nlink check comes
+    before is_bad_inode().
+ 2. Free inode->i_link before marking the inode bad.
+Method 2 is picked because, personally, I think it has less impact.
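+
+The resulting error-path ordering, sketched (this mirrors the hunk
+below):
+
+  out_inode:
+          /* free i_link while i_mode still says this is a symlink */
+          fscrypt_free_inode(inode);
+          make_bad_inode(inode);  /* i_mode becomes S_IFREG from here */
+          iput(inode);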
+
+Cc: stable@vger.kernel.org
+Fixes: 2c58d548f570 ("fscrypt: cache decrypted symlink target in ->i_link")
+Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
+Suggested-by: Eric Biggers <ebiggers@kernel.org>
+Reviewed-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Richard Weinberger <richard@nod.at>
+(cherry picked from commit 6379b44cdcd67f5f5d986b73953e99700591edfa)
+[Vegard: CVE-2024-26972; no conflicts]
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ubifs/dir.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1126,6 +1126,8 @@ out_cancel:
+ dir_ui->ui_size = dir->i_size;
+ mutex_unlock(&dir_ui->ui_mutex);
+ out_inode:
++ /* Free inode->i_link before inode is marked as bad. */
++ fscrypt_free_inode(inode);
+ make_bad_inode(inode);
+ iput(inode);
+ out_fname: