--- /dev/null
+From e9f180d7cfde23b9f8eebd60272465176373ab2c Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Tue, 22 Apr 2025 16:49:42 +0200
+Subject: kernel/fork: only call untrack_pfn_clear() on VMAs duplicated for fork()
+
+From: David Hildenbrand <david@redhat.com>
+
+commit e9f180d7cfde23b9f8eebd60272465176373ab2c upstream.
+
+Not intuitive, but vm_area_dup() located in kernel/fork.c is not only used
+for duplicating VMAs during fork(), but also for duplicating VMAs when
+splitting VMAs or when mremap()'ing them.
+
+VM_PFNMAP mappings can at least get ordinarily mremap()'ed (no change in
+size) and apparently also shrunk during mremap(), which implies
+duplicating the VMA in __split_vma() first.
+
+In case of ordinary mremap() (no change in size), we first duplicate the
+VMA in copy_vma_and_data()->copy_vma() to then call untrack_pfn_clear() on
+the old VMA: we effectively move the VM_PAT reservation. So the
+untrack_pfn_clear() call on the new (duplicated) VMA is wrong in that
+context.
+
+Splitting of VMAs seems problematic, because we don't duplicate/adjust the
+reservation when splitting the VMA. Instead, in memtype_erase() -- called
+during zapping/munmap -- we shrink a reservation in case only the end
+address matches: assume we split a VMA into A and B; both would share a
+reservation until B is unmapped.
+
+So when unmapping B, the reservation would be updated to cover only A.
+When unmapping A, we would properly remove the now-shrunk reservation.
+That scenario describes the mremap() shrinking (old_size > new_size),
+where we split + unmap B, and where the untrack_pfn_clear() on the new VMA
+is wrong.
+
+What if we manage to split a VM_PFNMAP VMA into A and B and unmap A first?
+It would be broken because we would never free the reservation. Likely,
+there are ways to trigger such a VMA split outside of mremap().
+
+Affecting other VMA duplication was not intended; vm_area_dup() being used
+outside of kernel/fork.c was an oversight. So let's fix that for now; how to
+handle VMA splits better should be investigated separately.
+
+With a simple reproducer that uses mprotect() to split such a VMA, I can
+trigger:
+
+x86/PAT: pat_mremap:26448 freeing invalid memtype [mem 0x00000000-0x00000fff]
+
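+A minimal sketch of what such a reproducer can look like (an assumption,
+not the exact program used; it presumes a mapping that is VM_PFNMAP with
+a VM_PAT reservation, e.g. of /dev/mem):
+
+  #include <fcntl.h>
+  #include <sys/mman.h>
+  #include <unistd.h>
+
+  int main(void)
+  {
+          int fd = open("/dev/mem", O_RDWR);
+          /* One VM_PFNMAP VMA covering three pages. */
+          char *p = mmap(NULL, 3 * 4096, PROT_READ | PROT_WRITE,
+                         MAP_SHARED, fd, 0);
+
+          /* mprotect() on the middle page splits the VMA; the bogus
+           * untrack_pfn_clear() in vm_area_dup() then clears the
+           * VM_PAT state of a duplicated VMA. */
+          mprotect(p + 4096, 4096, PROT_READ);
+
+          /* Unmapping now ends up freeing an invalid memtype. */
+          munmap(p, 3 * 4096);
+          return 0;
+  }
+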
+Link: https://lkml.kernel.org/r/20250422144942.2871395-1-david@redhat.com
+Fixes: dc84bc2aba85 ("x86/mm/pat: Fix VM_PAT handling when fork() fails in copy_page_range()")
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+Cc: Ingo Molnar <mingo@kernel.org>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: Rik van Riel <riel@surriel.com>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/fork.c | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -476,10 +476,6 @@ struct vm_area_struct *vm_area_dup(struc
+ *new = data_race(*orig);
+ INIT_LIST_HEAD(&new->anon_vma_chain);
+ dup_anon_vma_name(orig, new);
+-
+- /* track_pfn_copy() will later take care of copying internal state. */
+- if (unlikely(new->vm_flags & VM_PFNMAP))
+- untrack_pfn_clear(new);
+ }
+ return new;
+ }
+@@ -650,6 +646,11 @@ static __latent_entropy int dup_mmap(str
+ tmp = vm_area_dup(mpnt);
+ if (!tmp)
+ goto fail_nomem;
++
++ /* track_pfn_copy() will later take care of copying internal state. */
++ if (unlikely(tmp->vm_flags & VM_PFNMAP))
++ untrack_pfn_clear(tmp);
++
+ retval = vma_dup_policy(mpnt, tmp);
+ if (retval)
+ goto fail_nomem_policy;
--- /dev/null
+From 8c56c5dbcf52220cc9be7a36e7f21ebd5939e0b9 Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Tue, 8 Apr 2025 10:59:50 +0200
+Subject: mm: (un)track_pfn_copy() fix + doc improvements
+
+From: David Hildenbrand <david@redhat.com>
+
+commit 8c56c5dbcf52220cc9be7a36e7f21ebd5939e0b9 upstream.
+
+We got a late smatch warning and some additional review feedback.
+
+ smatch warnings:
+ mm/memory.c:1428 copy_page_range() error: uninitialized symbol 'pfn'.
+
+We actually use the pfn only when it is properly initialized; however, we
+may pass an uninitialized value to a function -- and even though that
+function will not use it, this is likely still undefined behavior in C.
+
+So let's just fix it by always initializing pfn in the caller of
+track_pfn_copy(), and improving the documentation of track_pfn_copy().
+
+While at it, clarify in the doc of untrack_pfn_copy() that internal
+checks determine whether we actually have to untrack anything.
+
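+The resulting pattern in copy_page_range() is roughly the following (a
+sketch combining the hunk below with the flow introduced by commit
+dc84bc2aba85, elided with "..."):
+
+  unsigned long next, pfn = 0;  /* always initialized now */
+  ...
+  if (unlikely(src_vma->vm_flags & VM_PFNMAP)) {
+          ret = track_pfn_copy(dst_vma, src_vma, &pfn);
+          if (ret)
+                  return ret;
+  }
+  ...
+  /* on failure; safe even if nothing was actually tracked: */
+  untrack_pfn_copy(dst_vma, pfn);
+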
+Link: https://lkml.kernel.org/r/20250408085950.976103-1-david@redhat.com
+Fixes: dc84bc2aba85 ("x86/mm/pat: Fix VM_PAT handling when fork() fails in copy_page_range()")
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Reported-by: kernel test robot <lkp@intel.com>
+Reported-by: Dan Carpenter <error27@gmail.com>
+Closes: https://lore.kernel.org/r/202503270941.IFILyNCX-lkp@intel.com/
+Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
+Acked-by: Ingo Molnar <mingo@kernel.org>
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: Rik van Riel <riel@surriel.com>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/pgtable.h | 9 ++++++---
+ mm/memory.c | 2 +-
+ 2 files changed, 7 insertions(+), 4 deletions(-)
+
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -1196,8 +1196,9 @@ static inline void track_pfn_insert(stru
+
+ /*
+ * track_pfn_copy is called when a VM_PFNMAP VMA is about to get the page
+- * tables copied during copy_page_range(). On success, stores the pfn to be
+- * passed to untrack_pfn_copy().
++ * tables copied during copy_page_range(). Will store the pfn to be
++ * passed to untrack_pfn_copy() only if there is something to be untracked.
++ * Callers should initialize the pfn to 0.
+ */
+ static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma, unsigned long *pfn)
+@@ -1207,7 +1208,9 @@ static inline int track_pfn_copy(struct
+
+ /*
+ * untrack_pfn_copy is called when a VM_PFNMAP VMA failed to copy during
+- * copy_page_range(), but after track_pfn_copy() was already called.
++ * copy_page_range(), but after track_pfn_copy() was already called. Can
++ * be called even if track_pfn_copy() did not actually track anything:
++ * handled internally.
+ */
+ static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
+ unsigned long pfn)
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1283,7 +1283,7 @@ copy_page_range(struct vm_area_struct *d
+ struct mm_struct *dst_mm = dst_vma->vm_mm;
+ struct mm_struct *src_mm = src_vma->vm_mm;
+ struct mmu_notifier_range range;
+- unsigned long next, pfn;
++ unsigned long next, pfn = 0;
+ bool is_cow;
+ int ret;
+
--- /dev/null
+From 0dcc53abf58d572d34c5313de85f607cd33fc691 Mon Sep 17 00:00:00 2001
+From: Su Hui <suhui@nfschina.com>
+Date: Wed, 5 Jun 2024 11:47:43 +0800
+Subject: net: ethtool: fix the error condition in ethtool_get_phy_stats_ethtool()
+
+From: Su Hui <suhui@nfschina.com>
+
+commit 0dcc53abf58d572d34c5313de85f607cd33fc691 upstream.
+
+Clang static checker (scan-build) warning:
+net/ethtool/ioctl.c:line 2233, column 2
+Called function pointer is null (null dereference).
+
+Return '-EOPNOTSUPP' when 'ops->get_ethtool_phy_stats' is NULL to fix
+this typo (a missing '!' in the check).
+
+Fixes: 201ed315f967 ("net/ethtool/ioctl: split ethtool_get_phy_stats into multiple helpers")
+Signed-off-by: Su Hui <suhui@nfschina.com>
+Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
+Reviewed-by: Hariprasad Kelam <hkelam@marvell.com>
+Link: https://lore.kernel.org/r/20240605034742.921751-1-suhui@nfschina.com
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ethtool/ioctl.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -2132,7 +2132,7 @@ static int ethtool_get_phy_stats_ethtool
+ const struct ethtool_ops *ops = dev->ethtool_ops;
+ int n_stats, ret;
+
+- if (!ops || !ops->get_sset_count || ops->get_ethtool_phy_stats)
++ if (!ops || !ops->get_sset_count || !ops->get_ethtool_phy_stats)
+ return -EOPNOTSUPP;
+
+ n_stats = ops->get_sset_count(dev, ETH_SS_PHY_STATS);
--- /dev/null
+From 61921bdaa132b580b6db6858e6d7dcdb870df5fe Mon Sep 17 00:00:00 2001
+From: Petr Tesarik <petr@tesarici.cz>
+Date: Fri, 5 Jan 2024 21:16:42 +0100
+Subject: net: stmmac: fix ethtool per-queue statistics
+
+From: Petr Tesarik <petr@tesarici.cz>
+
+commit 61921bdaa132b580b6db6858e6d7dcdb870df5fe upstream.
+
+Fix per-queue statistics for devices with more than one queue.
+
+The output data pointer is currently reset in each loop iteration,
+effectively summing all queue statistics in the first four u64 values.
+
+The summary values are not even labeled correctly. For example, if eth0 has
+2 queues, ethtool -S eth0 shows:
+
+ q0_tx_pkt_n: 374 (actually tx_pkt_n over all queues)
+ q0_tx_irq_n: 23 (actually tx_normal_irq_n over all queues)
+ q1_tx_pkt_n: 462 (actually rx_pkt_n over all queues)
+ q1_tx_irq_n: 446 (actually rx_normal_irq_n over all queues)
+ q0_rx_pkt_n: 0
+ q0_rx_irq_n: 0
+ q1_rx_pkt_n: 0
+ q1_rx_irq_n: 0
+
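+In simplified form, the buggy loop rewound the output pointer for every
+queue and accumulated into the same slots (cf. the hunks below):
+
+  pos = data;
+  for (q = 0; q < tx_cnt; q++) {
+          data = pos;              /* rewinds to the first queue's slots */
+          ...
+          *data++ += (*(u64 *)p);  /* sums across all queues */
+  }
+
+Dropping the rewind and plainly assigning each value gives every queue
+its own output slots.
+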
+Fixes: 133466c3bbe1 ("net: stmmac: use per-queue 64 bit statistics where necessary")
+Cc: stable@vger.kernel.org
+Signed-off-by: Petr Tesarik <petr@tesarici.cz>
+Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
+Reviewed-by: Andrew Lunn <andrew@lunn.ch>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c | 9 ++-------
+ 1 file changed, 2 insertions(+), 7 deletions(-)
+
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -552,15 +552,12 @@ static void stmmac_get_per_qstats(struct
+ u32 rx_cnt = priv->plat->rx_queues_to_use;
+ unsigned int start;
+ int q, stat;
+- u64 *pos;
+ char *p;
+
+- pos = data;
+ for (q = 0; q < tx_cnt; q++) {
+ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
+ struct stmmac_txq_stats snapshot;
+
+- data = pos;
+ do {
+ start = u64_stats_fetch_begin(&txq_stats->syncp);
+ snapshot = *txq_stats;
+@@ -568,17 +565,15 @@ static void stmmac_get_per_qstats(struct
+
+ p = (char *)&snapshot + offsetof(struct stmmac_txq_stats, tx_pkt_n);
+ for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
+- *data++ += (*(u64 *)p);
++ *data++ = (*(u64 *)p);
+ p += sizeof(u64);
+ }
+ }
+
+- pos = data;
+ for (q = 0; q < rx_cnt; q++) {
+ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
+ struct stmmac_rxq_stats snapshot;
+
+- data = pos;
+ do {
+ start = u64_stats_fetch_begin(&rxq_stats->syncp);
+ snapshot = *rxq_stats;
+@@ -586,7 +581,7 @@ static void stmmac_get_per_qstats(struct
+
+ p = (char *)&snapshot + offsetof(struct stmmac_rxq_stats, rx_pkt_n);
+ for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
+- *data++ += (*(u64 *)p);
++ *data++ = (*(u64 *)p);
+ p += sizeof(u64);
+ }
+ }
--- /dev/null
+From 8070274b472e2e9f5f67a990f5e697634c415708 Mon Sep 17 00:00:00 2001
+From: Jisheng Zhang <jszhang@kernel.org>
+Date: Mon, 18 Sep 2023 00:53:28 +0800
+Subject: net: stmmac: fix incorrect rxq|txq_stats reference
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Jisheng Zhang <jszhang@kernel.org>
+
+commit 8070274b472e2e9f5f67a990f5e697634c415708 upstream.
+
+Commit 133466c3bbe1 ("net: stmmac: use per-queue 64 bit statistics
+where necessary") caused a regression, as found by Uwe; the backtrace
+looks like:
+
+ INFO: trying to register non-static key.
+ The code is fine but needs lockdep annotation, or maybe
+ you didn't initialize this object before use?
+ turning off the locking correctness validator.
+ CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.5.0-rc1-00449-g133466c3bbe1-dirty #21
+ Hardware name: STM32 (Device Tree Support)
+ unwind_backtrace from show_stack+0x18/0x1c
+ show_stack from dump_stack_lvl+0x60/0x90
+ dump_stack_lvl from register_lock_class+0x98c/0x99c
+ register_lock_class from __lock_acquire+0x74/0x293c
+ __lock_acquire from lock_acquire+0x134/0x398
+ lock_acquire from stmmac_get_stats64+0x2ac/0x2fc
+ stmmac_get_stats64 from dev_get_stats+0x44/0x130
+ dev_get_stats from rtnl_fill_stats+0x38/0x120
+ rtnl_fill_stats from rtnl_fill_ifinfo+0x834/0x17f4
+ rtnl_fill_ifinfo from rtmsg_ifinfo_build_skb+0xc0/0x144
+ rtmsg_ifinfo_build_skb from rtmsg_ifinfo+0x50/0x88
+ rtmsg_ifinfo from __dev_notify_flags+0xc0/0xec
+ __dev_notify_flags from dev_change_flags+0x50/0x5c
+ dev_change_flags from ip_auto_config+0x2f4/0x1260
+ ip_auto_config from do_one_initcall+0x70/0x35c
+ do_one_initcall from kernel_init_freeable+0x2ac/0x308
+ kernel_init_freeable from kernel_init+0x1c/0x138
+ kernel_init from ret_from_fork+0x14/0x2c
+
+The reason is that the rxq|txq_stats structures are not what is expected:
+in stmmac_open() -> __stmmac_open(), the enclosing structure is overwritten
+by "memcpy(&priv->dma_conf, dma_conf, sizeof(*dma_conf));". This causes the
+already-initialized syncp member of rxq|txq_stats to be overwritten
+unexpectedly, as pointed out by Johannes and Uwe.
+
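+In simplified form, the problematic flow was:
+
+  /* stmmac_dvr_probe(): */
+  u64_stats_init(&priv->dma_conf.rx_queue[i].rxq_stats.syncp);
+  ...
+  /* __stmmac_open(): wipes the initialized syncp (including its
+   * lockdep key) together with the rest of dma_conf */
+  memcpy(&priv->dma_conf, dma_conf, sizeof(*dma_conf));
+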
+Fix this issue by moving rxq|txq_stats back to stmmac_extra_stats. To be
+SMP cache friendly, we also mark stmmac_txq_stats and stmmac_rxq_stats
+as ____cacheline_aligned_in_smp.
+
+Fixes: 133466c3bbe1 ("net: stmmac: use per-queue 64 bit statistics where necessary")
+Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
+Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
+Link: https://lore.kernel.org/r/20230917165328.3403-1-jszhang@kernel.org
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/stmicro/stmmac/common.h | 7 -
+ drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 16 +-
+ drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c | 16 +-
+ drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c | 16 +-
+ drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c | 16 +-
+ drivers/net/ethernet/stmicro/stmmac/stmmac.h | 2
+ drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c | 32 ++--
+ drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 127 ++++++++++---------
+ 8 files changed, 121 insertions(+), 111 deletions(-)
+
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -69,7 +69,7 @@ struct stmmac_txq_stats {
+ u64 tx_tso_frames;
+ u64 tx_tso_nfrags;
+ struct u64_stats_sync syncp;
+-};
++} ____cacheline_aligned_in_smp;
+
+ struct stmmac_rxq_stats {
+ u64 rx_bytes;
+@@ -78,7 +78,7 @@ struct stmmac_rxq_stats {
+ u64 rx_normal_irq_n;
+ u64 napi_poll;
+ struct u64_stats_sync syncp;
+-};
++} ____cacheline_aligned_in_smp;
+
+ /* Extra statistic and debug information exposed by ethtool */
+ struct stmmac_extra_stats {
+@@ -201,6 +201,9 @@ struct stmmac_extra_stats {
+ unsigned long mtl_est_hlbf;
+ unsigned long mtl_est_btre;
+ unsigned long mtl_est_btrlm;
++ /* per queue statistics */
++ struct stmmac_txq_stats txq_stats[MTL_MAX_TX_QUEUES];
++ struct stmmac_rxq_stats rxq_stats[MTL_MAX_RX_QUEUES];
+ unsigned long rx_dropped;
+ unsigned long rx_errors;
+ unsigned long tx_dropped;
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -440,8 +440,8 @@ static int sun8i_dwmac_dma_interrupt(str
+ struct stmmac_extra_stats *x, u32 chan,
+ u32 dir)
+ {
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
+- struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
+ int ret = 0;
+ u32 v;
+
+@@ -454,9 +454,9 @@ static int sun8i_dwmac_dma_interrupt(str
+
+ if (v & EMAC_TX_INT) {
+ ret |= handle_tx;
+- u64_stats_update_begin(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_normal_irq_n++;
+- u64_stats_update_end(&tx_q->txq_stats.syncp);
++ u64_stats_update_begin(&txq_stats->syncp);
++ txq_stats->tx_normal_irq_n++;
++ u64_stats_update_end(&txq_stats->syncp);
+ }
+
+ if (v & EMAC_TX_DMA_STOP_INT)
+@@ -478,9 +478,9 @@ static int sun8i_dwmac_dma_interrupt(str
+
+ if (v & EMAC_RX_INT) {
+ ret |= handle_rx;
+- u64_stats_update_begin(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_normal_irq_n++;
+- u64_stats_update_end(&rx_q->rxq_stats.syncp);
++ u64_stats_update_begin(&rxq_stats->syncp);
++ rxq_stats->rx_normal_irq_n++;
++ u64_stats_update_end(&rxq_stats->syncp);
+ }
+
+ if (v & EMAC_RX_BUF_UA_INT)
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+@@ -171,8 +171,8 @@ int dwmac4_dma_interrupt(struct stmmac_p
+ const struct dwmac4_addrs *dwmac4_addrs = priv->plat->dwmac4_addrs;
+ u32 intr_status = readl(ioaddr + DMA_CHAN_STATUS(dwmac4_addrs, chan));
+ u32 intr_en = readl(ioaddr + DMA_CHAN_INTR_ENA(dwmac4_addrs, chan));
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
+- struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
+ int ret = 0;
+
+ if (dir == DMA_DIR_RX)
+@@ -201,15 +201,15 @@ int dwmac4_dma_interrupt(struct stmmac_p
+ }
+ /* TX/RX NORMAL interrupts */
+ if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
+- u64_stats_update_begin(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_normal_irq_n++;
+- u64_stats_update_end(&rx_q->rxq_stats.syncp);
++ u64_stats_update_begin(&rxq_stats->syncp);
++ rxq_stats->rx_normal_irq_n++;
++ u64_stats_update_end(&rxq_stats->syncp);
+ ret |= handle_rx;
+ }
+ if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
+- u64_stats_update_begin(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_normal_irq_n++;
+- u64_stats_update_end(&tx_q->txq_stats.syncp);
++ u64_stats_update_begin(&txq_stats->syncp);
++ txq_stats->tx_normal_irq_n++;
++ u64_stats_update_end(&txq_stats->syncp);
+ ret |= handle_tx;
+ }
+
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
+@@ -162,8 +162,8 @@ static void show_rx_process_state(unsign
+ int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
+ struct stmmac_extra_stats *x, u32 chan, u32 dir)
+ {
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
+- struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
+ int ret = 0;
+ /* read the status register (CSR5) */
+ u32 intr_status = readl(ioaddr + DMA_STATUS);
+@@ -215,16 +215,16 @@ int dwmac_dma_interrupt(struct stmmac_pr
+ u32 value = readl(ioaddr + DMA_INTR_ENA);
+ /* to schedule NAPI on real RIE event. */
+ if (likely(value & DMA_INTR_ENA_RIE)) {
+- u64_stats_update_begin(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_normal_irq_n++;
+- u64_stats_update_end(&rx_q->rxq_stats.syncp);
++ u64_stats_update_begin(&rxq_stats->syncp);
++ rxq_stats->rx_normal_irq_n++;
++ u64_stats_update_end(&rxq_stats->syncp);
+ ret |= handle_rx;
+ }
+ }
+ if (likely(intr_status & DMA_STATUS_TI)) {
+- u64_stats_update_begin(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_normal_irq_n++;
+- u64_stats_update_end(&tx_q->txq_stats.syncp);
++ u64_stats_update_begin(&txq_stats->syncp);
++ txq_stats->tx_normal_irq_n++;
++ u64_stats_update_end(&txq_stats->syncp);
+ ret |= handle_tx;
+ }
+ if (unlikely(intr_status & DMA_STATUS_ERI))
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -333,8 +333,8 @@ static int dwxgmac2_dma_interrupt(struct
+ struct stmmac_extra_stats *x, u32 chan,
+ u32 dir)
+ {
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
+- struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
+ u32 intr_status = readl(ioaddr + XGMAC_DMA_CH_STATUS(chan));
+ u32 intr_en = readl(ioaddr + XGMAC_DMA_CH_INT_EN(chan));
+ int ret = 0;
+@@ -363,15 +363,15 @@ static int dwxgmac2_dma_interrupt(struct
+ /* TX/RX NORMAL interrupts */
+ if (likely(intr_status & XGMAC_NIS)) {
+ if (likely(intr_status & XGMAC_RI)) {
+- u64_stats_update_begin(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_normal_irq_n++;
+- u64_stats_update_end(&rx_q->rxq_stats.syncp);
++ u64_stats_update_begin(&rxq_stats->syncp);
++ rxq_stats->rx_normal_irq_n++;
++ u64_stats_update_end(&rxq_stats->syncp);
+ ret |= handle_rx;
+ }
+ if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
+- u64_stats_update_begin(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_normal_irq_n++;
+- u64_stats_update_end(&tx_q->txq_stats.syncp);
++ u64_stats_update_begin(&txq_stats->syncp);
++ txq_stats->tx_normal_irq_n++;
++ u64_stats_update_end(&txq_stats->syncp);
+ ret |= handle_tx;
+ }
+ }
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+@@ -77,7 +77,6 @@ struct stmmac_tx_queue {
+ dma_addr_t dma_tx_phy;
+ dma_addr_t tx_tail_addr;
+ u32 mss;
+- struct stmmac_txq_stats txq_stats;
+ };
+
+ struct stmmac_rx_buffer {
+@@ -119,7 +118,6 @@ struct stmmac_rx_queue {
+ unsigned int len;
+ unsigned int error;
+ } state;
+- struct stmmac_rxq_stats rxq_stats;
+ };
+
+ struct stmmac_channel {
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -557,14 +557,14 @@ static void stmmac_get_per_qstats(struct
+
+ pos = data;
+ for (q = 0; q < tx_cnt; q++) {
+- struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[q];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
+ struct stmmac_txq_stats snapshot;
+
+ data = pos;
+ do {
+- start = u64_stats_fetch_begin(&tx_q->txq_stats.syncp);
+- snapshot = tx_q->txq_stats;
+- } while (u64_stats_fetch_retry(&tx_q->txq_stats.syncp, start));
++ start = u64_stats_fetch_begin(&txq_stats->syncp);
++ snapshot = *txq_stats;
++ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+
+ p = (char *)&snapshot + offsetof(struct stmmac_txq_stats, tx_pkt_n);
+ for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
+@@ -575,14 +575,14 @@ static void stmmac_get_per_qstats(struct
+
+ pos = data;
+ for (q = 0; q < rx_cnt; q++) {
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[q];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
+ struct stmmac_rxq_stats snapshot;
+
+ data = pos;
+ do {
+- start = u64_stats_fetch_begin(&rx_q->rxq_stats.syncp);
+- snapshot = rx_q->rxq_stats;
+- } while (u64_stats_fetch_retry(&rx_q->rxq_stats.syncp, start));
++ start = u64_stats_fetch_begin(&rxq_stats->syncp);
++ snapshot = *rxq_stats;
++ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+
+ p = (char *)&snapshot + offsetof(struct stmmac_rxq_stats, rx_pkt_n);
+ for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
+@@ -646,14 +646,14 @@ static void stmmac_get_ethtool_stats(str
+
+ pos = j;
+ for (i = 0; i < rx_queues_count; i++) {
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[i];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[i];
+ struct stmmac_rxq_stats snapshot;
+
+ j = pos;
+ do {
+- start = u64_stats_fetch_begin(&rx_q->rxq_stats.syncp);
+- snapshot = rx_q->rxq_stats;
+- } while (u64_stats_fetch_retry(&rx_q->rxq_stats.syncp, start));
++ start = u64_stats_fetch_begin(&rxq_stats->syncp);
++ snapshot = *rxq_stats;
++ } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+
+ data[j++] += snapshot.rx_pkt_n;
+ data[j++] += snapshot.rx_normal_irq_n;
+@@ -663,14 +663,14 @@ static void stmmac_get_ethtool_stats(str
+
+ pos = j;
+ for (i = 0; i < tx_queues_count; i++) {
+- struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[i];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[i];
+ struct stmmac_txq_stats snapshot;
+
+ j = pos;
+ do {
+- start = u64_stats_fetch_begin(&tx_q->txq_stats.syncp);
+- snapshot = tx_q->txq_stats;
+- } while (u64_stats_fetch_retry(&tx_q->txq_stats.syncp, start));
++ start = u64_stats_fetch_begin(&txq_stats->syncp);
++ snapshot = *txq_stats;
++ } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+
+ data[j++] += snapshot.tx_pkt_n;
+ data[j++] += snapshot.tx_normal_irq_n;
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2426,6 +2426,7 @@ static bool stmmac_xdp_xmit_zc(struct st
+ {
+ struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue);
+ struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
+ struct xsk_buff_pool *pool = tx_q->xsk_pool;
+ unsigned int entry = tx_q->cur_tx;
+ struct dma_desc *tx_desc = NULL;
+@@ -2505,9 +2506,9 @@ static bool stmmac_xdp_xmit_zc(struct st
+ tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size);
+ entry = tx_q->cur_tx;
+ }
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_set_ic_bit += tx_set_ic_bit;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->tx_set_ic_bit += tx_set_ic_bit;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+
+ if (tx_desc) {
+ stmmac_flush_tx_descriptors(priv, queue);
+@@ -2547,6 +2548,7 @@ static void stmmac_bump_dma_threshold(st
+ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+ {
+ struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
+ unsigned int bytes_compl = 0, pkts_compl = 0;
+ unsigned int entry, xmits = 0, count = 0;
+ u32 tx_packets = 0, tx_errors = 0;
+@@ -2706,11 +2708,11 @@ static int stmmac_tx_clean(struct stmmac
+ if (tx_q->dirty_tx != tx_q->cur_tx)
+ stmmac_tx_timer_arm(priv, queue);
+
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_packets += tx_packets;
+- tx_q->txq_stats.tx_pkt_n += tx_packets;
+- tx_q->txq_stats.tx_clean++;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->tx_packets += tx_packets;
++ txq_stats->tx_pkt_n += tx_packets;
++ txq_stats->tx_clean++;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+
+ priv->xstats.tx_errors += tx_errors;
+
+@@ -4123,6 +4125,7 @@ static netdev_tx_t stmmac_tso_xmit(struc
+ int nfrags = skb_shinfo(skb)->nr_frags;
+ u32 queue = skb_get_queue_mapping(skb);
+ unsigned int first_entry, tx_packets;
++ struct stmmac_txq_stats *txq_stats;
+ int tmp_pay_len = 0, first_tx;
+ struct stmmac_tx_queue *tx_q;
+ bool has_vlan, set_ic;
+@@ -4133,6 +4136,7 @@ static netdev_tx_t stmmac_tso_xmit(struc
+ int i;
+
+ tx_q = &priv->dma_conf.tx_queue[queue];
++ txq_stats = &priv->xstats.txq_stats[queue];
+ first_tx = tx_q->cur_tx;
+
+ /* Compute header lengths */
+@@ -4303,13 +4307,13 @@ static netdev_tx_t stmmac_tso_xmit(struc
+ netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+ }
+
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_bytes += skb->len;
+- tx_q->txq_stats.tx_tso_frames++;
+- tx_q->txq_stats.tx_tso_nfrags += nfrags;
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->tx_bytes += skb->len;
++ txq_stats->tx_tso_frames++;
++ txq_stats->tx_tso_nfrags += nfrags;
+ if (set_ic)
+- tx_q->txq_stats.tx_set_ic_bit++;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ txq_stats->tx_set_ic_bit++;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+
+ if (priv->sarc_type)
+ stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+@@ -4380,6 +4384,7 @@ static netdev_tx_t stmmac_xmit(struct sk
+ u32 queue = skb_get_queue_mapping(skb);
+ int nfrags = skb_shinfo(skb)->nr_frags;
+ int gso = skb_shinfo(skb)->gso_type;
++ struct stmmac_txq_stats *txq_stats;
+ struct dma_edesc *tbs_desc = NULL;
+ struct dma_desc *desc, *first;
+ struct stmmac_tx_queue *tx_q;
+@@ -4389,6 +4394,7 @@ static netdev_tx_t stmmac_xmit(struct sk
+ dma_addr_t des;
+
+ tx_q = &priv->dma_conf.tx_queue[queue];
++ txq_stats = &priv->xstats.txq_stats[queue];
+ first_tx = tx_q->cur_tx;
+
+ if (priv->tx_path_in_lpi_mode && priv->eee_sw_timer_en)
+@@ -4540,11 +4546,11 @@ static netdev_tx_t stmmac_xmit(struct sk
+ netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+ }
+
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_bytes += skb->len;
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->tx_bytes += skb->len;
+ if (set_ic)
+- tx_q->txq_stats.tx_set_ic_bit++;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ txq_stats->tx_set_ic_bit++;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+
+ if (priv->sarc_type)
+ stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+@@ -4751,6 +4757,7 @@ static unsigned int stmmac_rx_buf2_len(s
+ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
+ struct xdp_frame *xdpf, bool dma_map)
+ {
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
+ struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
+ unsigned int entry = tx_q->cur_tx;
+ struct dma_desc *tx_desc;
+@@ -4810,9 +4817,9 @@ static int stmmac_xdp_xmit_xdpf(struct s
+ unsigned long flags;
+ tx_q->tx_count_frames = 0;
+ stmmac_set_tx_ic(priv, tx_desc);
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.tx_set_ic_bit++;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->tx_set_ic_bit++;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+ }
+
+ stmmac_enable_dma_transmission(priv, priv->ioaddr);
+@@ -4967,7 +4974,7 @@ static void stmmac_dispatch_skb_zc(struc
+ struct dma_desc *p, struct dma_desc *np,
+ struct xdp_buff *xdp)
+ {
+- struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue];
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[queue];
+ struct stmmac_channel *ch = &priv->channel[queue];
+ unsigned int len = xdp->data_end - xdp->data;
+ enum pkt_hash_types hash_type;
+@@ -4997,10 +5004,10 @@ static void stmmac_dispatch_skb_zc(struc
+ skb_record_rx_queue(skb, queue);
+ napi_gro_receive(&ch->rxtx_napi, skb);
+
+- flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_pkt_n++;
+- rx_q->rxq_stats.rx_bytes += len;
+- u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
++ flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
++ rxq_stats->rx_pkt_n++;
++ rxq_stats->rx_bytes += len;
++ u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
+ }
+
+ static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+@@ -5063,6 +5070,7 @@ static bool stmmac_rx_refill_zc(struct s
+
+ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
+ {
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[queue];
+ struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue];
+ unsigned int count = 0, error = 0, len = 0;
+ int dirty = stmmac_rx_dirty(priv, queue);
+@@ -5222,9 +5230,9 @@ read_again:
+
+ stmmac_finalize_xdp_rx(priv, xdp_status);
+
+- flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_pkt_n += count;
+- u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
++ flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
++ rxq_stats->rx_pkt_n += count;
++ u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
+
+ priv->xstats.rx_dropped += rx_dropped;
+ priv->xstats.rx_errors += rx_errors;
+@@ -5252,6 +5260,7 @@ read_again:
+ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
+ {
+ u32 rx_errors = 0, rx_dropped = 0, rx_bytes = 0, rx_packets = 0;
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[queue];
+ struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue];
+ struct stmmac_channel *ch = &priv->channel[queue];
+ unsigned int count = 0, error = 0, len = 0;
+@@ -5509,11 +5518,11 @@ drain_data:
+
+ stmmac_rx_refill(priv, queue);
+
+- flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.rx_packets += rx_packets;
+- rx_q->rxq_stats.rx_bytes += rx_bytes;
+- rx_q->rxq_stats.rx_pkt_n += count;
+- u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
++ flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
++ rxq_stats->rx_packets += rx_packets;
++ rxq_stats->rx_bytes += rx_bytes;
++ rxq_stats->rx_pkt_n += count;
++ u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
+
+ priv->xstats.rx_dropped += rx_dropped;
+ priv->xstats.rx_errors += rx_errors;
+@@ -5526,15 +5535,15 @@ static int stmmac_napi_poll_rx(struct na
+ struct stmmac_channel *ch =
+ container_of(napi, struct stmmac_channel, rx_napi);
+ struct stmmac_priv *priv = ch->priv_data;
+- struct stmmac_rx_queue *rx_q;
++ struct stmmac_rxq_stats *rxq_stats;
+ u32 chan = ch->index;
+ unsigned long flags;
+ int work_done;
+
+- rx_q = &priv->dma_conf.rx_queue[chan];
+- flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.napi_poll++;
+- u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
++ rxq_stats = &priv->xstats.rxq_stats[chan];
++ flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
++ rxq_stats->napi_poll++;
++ u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
+
+ work_done = stmmac_rx(priv, budget, chan);
+ if (work_done < budget && napi_complete_done(napi, work_done)) {
+@@ -5553,15 +5562,15 @@ static int stmmac_napi_poll_tx(struct na
+ struct stmmac_channel *ch =
+ container_of(napi, struct stmmac_channel, tx_napi);
+ struct stmmac_priv *priv = ch->priv_data;
+- struct stmmac_tx_queue *tx_q;
++ struct stmmac_txq_stats *txq_stats;
+ u32 chan = ch->index;
+ unsigned long flags;
+ int work_done;
+
+- tx_q = &priv->dma_conf.tx_queue[chan];
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.napi_poll++;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ txq_stats = &priv->xstats.txq_stats[chan];
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->napi_poll++;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+
+ work_done = stmmac_tx_clean(priv, budget, chan);
+ work_done = min(work_done, budget);
+@@ -5583,20 +5592,20 @@ static int stmmac_napi_poll_rxtx(struct
+ container_of(napi, struct stmmac_channel, rxtx_napi);
+ struct stmmac_priv *priv = ch->priv_data;
+ int rx_done, tx_done, rxtx_done;
+- struct stmmac_rx_queue *rx_q;
+- struct stmmac_tx_queue *tx_q;
++ struct stmmac_rxq_stats *rxq_stats;
++ struct stmmac_txq_stats *txq_stats;
+ u32 chan = ch->index;
+ unsigned long flags;
+
+- rx_q = &priv->dma_conf.rx_queue[chan];
+- flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
+- rx_q->rxq_stats.napi_poll++;
+- u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
+-
+- tx_q = &priv->dma_conf.tx_queue[chan];
+- flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
+- tx_q->txq_stats.napi_poll++;
+- u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
++ rxq_stats = &priv->xstats.rxq_stats[chan];
++ flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
++ rxq_stats->napi_poll++;
++ u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++
++ txq_stats = &priv->xstats.txq_stats[chan];
++ flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
++ txq_stats->napi_poll++;
++ u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
+
+ tx_done = stmmac_tx_clean(priv, budget, chan);
+ tx_done = min(tx_done, budget);
+@@ -6843,7 +6852,7 @@ static void stmmac_get_stats64(struct ne
+ int q;
+
+ for (q = 0; q < tx_cnt; q++) {
+- struct stmmac_txq_stats *txq_stats = &priv->dma_conf.tx_queue[q].txq_stats;
++ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
+ u64 tx_packets;
+ u64 tx_bytes;
+
+@@ -6858,7 +6867,7 @@ static void stmmac_get_stats64(struct ne
+ }
+
+ for (q = 0; q < rx_cnt; q++) {
+- struct stmmac_rxq_stats *rxq_stats = &priv->dma_conf.rx_queue[q].rxq_stats;
++ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
+ u64 rx_packets;
+ u64 rx_bytes;
+
+@@ -7230,9 +7239,9 @@ int stmmac_dvr_probe(struct device *devi
+ priv->dev = ndev;
+
+ for (i = 0; i < MTL_MAX_RX_QUEUES; i++)
+- u64_stats_init(&priv->dma_conf.rx_queue[i].rxq_stats.syncp);
++ u64_stats_init(&priv->xstats.rxq_stats[i].syncp);
+ for (i = 0; i < MTL_MAX_TX_QUEUES; i++)
+- u64_stats_init(&priv->dma_conf.tx_queue[i].txq_stats.syncp);
++ u64_stats_init(&priv->xstats.txq_stats[i].syncp);
+
+ stmmac_set_ethtool_ops(ndev);
+ priv->pause = pause;
--- /dev/null
+From 38cc3c6dcc09dc3a1800b5ec22aef643ca11eab8 Mon Sep 17 00:00:00 2001
+From: Petr Tesarik <petr@tesarici.cz>
+Date: Sat, 3 Feb 2024 20:09:27 +0100
+Subject: net: stmmac: protect updates of 64-bit statistics counters
+
+From: Petr Tesarik <petr@tesarici.cz>
+
+commit 38cc3c6dcc09dc3a1800b5ec22aef643ca11eab8 upstream.
+
+As explained by a comment in <linux/u64_stats_sync.h>, the write side of
+struct u64_stats_sync must ensure mutual exclusion, or one seqcount update could
+be lost on 32-bit platforms, thus blocking readers forever. Such lockups
+have been observed in real world after stmmac_xmit() on one CPU raced with
+stmmac_napi_poll_tx() on another CPU.
+
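+For reference, the writer side is a plain sequence count on 32-bit SMP
+(a generic sketch of the u64_stats API, not stmmac-specific code):
+
+  u64_stats_update_begin(&syncp);  /* seqcount becomes odd */
+  u64_stats_add(&counter, val);
+  u64_stats_update_end(&syncp);    /* seqcount becomes even again */
+
+If two writers race, one of the increments can be lost, and a reader in
+u64_stats_fetch_retry() then spins forever waiting for an even count.
+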
+To fix the issue without introducing a new lock, split the statistics into
+three parts:
+
+1. fields updated only under the tx queue lock,
+2. fields updated only during NAPI poll,
+3. fields updated only from interrupt context.
+
+Updates to fields in the first two groups are already serialized through
+other locks. It is sufficient to split the existing struct u64_stats_sync
+so that each group has its own.
+
+Note that tx_set_ic_bit is updated from both contexts. Split this counter
+so that each context gets its own, and calculate their sum to get the total
+value in stmmac_get_ethtool_stats().
+
+For the third group, multiple interrupts may be processed by different CPUs
+at the same time, but interrupts on the same CPU will not nest. Move fields
+from this group to a newly created per-cpu struct stmmac_pcpu_stats.
+
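+The interrupt-side update then becomes (as in the DMA interrupt hunks
+below):
+
+  struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+
+  u64_stats_update_begin(&stats->syncp);
+  u64_stats_inc(&stats->rx_normal_irq_n[chan]);
+  u64_stats_update_end(&stats->syncp);
+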
+Fixes: 133466c3bbe1 ("net: stmmac: use per-queue 64 bit statistics where necessary")
+Link: https://lore.kernel.org/netdev/Za173PhviYg-1qIn@torres.zugschlus.de/t/
+Cc: stable@vger.kernel.org
+Signed-off-by: Petr Tesarik <petr@tesarici.cz>
+Reviewed-by: Jisheng Zhang <jszhang@kernel.org>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/stmicro/stmmac/common.h | 56 +++++---
+ drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c | 15 +-
+ drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c | 15 +-
+ drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c | 15 +-
+ drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c | 15 +-
+ drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c | 129 ++++++++++++------
+ drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 133 +++++++++----------
+ 7 files changed, 221 insertions(+), 157 deletions(-)
+
+--- a/drivers/net/ethernet/stmicro/stmmac/common.h
++++ b/drivers/net/ethernet/stmicro/stmmac/common.h
+@@ -58,28 +58,51 @@
+ #undef FRAME_FILTER_DEBUG
+ /* #define FRAME_FILTER_DEBUG */
+
++struct stmmac_q_tx_stats {
++ u64_stats_t tx_bytes;
++ u64_stats_t tx_set_ic_bit;
++ u64_stats_t tx_tso_frames;
++ u64_stats_t tx_tso_nfrags;
++};
++
++struct stmmac_napi_tx_stats {
++ u64_stats_t tx_packets;
++ u64_stats_t tx_pkt_n;
++ u64_stats_t poll;
++ u64_stats_t tx_clean;
++ u64_stats_t tx_set_ic_bit;
++};
++
+ struct stmmac_txq_stats {
+- u64 tx_bytes;
+- u64 tx_packets;
+- u64 tx_pkt_n;
+- u64 tx_normal_irq_n;
+- u64 napi_poll;
+- u64 tx_clean;
+- u64 tx_set_ic_bit;
+- u64 tx_tso_frames;
+- u64 tx_tso_nfrags;
+- struct u64_stats_sync syncp;
++ /* Updates protected by tx queue lock. */
++ struct u64_stats_sync q_syncp;
++ struct stmmac_q_tx_stats q;
++
++ /* Updates protected by NAPI poll logic. */
++ struct u64_stats_sync napi_syncp;
++ struct stmmac_napi_tx_stats napi;
+ } ____cacheline_aligned_in_smp;
+
++struct stmmac_napi_rx_stats {
++ u64_stats_t rx_bytes;
++ u64_stats_t rx_packets;
++ u64_stats_t rx_pkt_n;
++ u64_stats_t poll;
++};
++
+ struct stmmac_rxq_stats {
+- u64 rx_bytes;
+- u64 rx_packets;
+- u64 rx_pkt_n;
+- u64 rx_normal_irq_n;
+- u64 napi_poll;
+- struct u64_stats_sync syncp;
++ /* Updates protected by NAPI poll logic. */
++ struct u64_stats_sync napi_syncp;
++ struct stmmac_napi_rx_stats napi;
+ } ____cacheline_aligned_in_smp;
+
++/* Updates on each CPU protected by not allowing nested irqs. */
++struct stmmac_pcpu_stats {
++ struct u64_stats_sync syncp;
++ u64_stats_t rx_normal_irq_n[MTL_MAX_TX_QUEUES];
++ u64_stats_t tx_normal_irq_n[MTL_MAX_RX_QUEUES];
++};
++
+ /* Extra statistic and debug information exposed by ethtool */
+ struct stmmac_extra_stats {
+ /* Transmit errors */
+@@ -204,6 +227,7 @@ struct stmmac_extra_stats {
+ /* per queue statistics */
+ struct stmmac_txq_stats txq_stats[MTL_MAX_TX_QUEUES];
+ struct stmmac_rxq_stats rxq_stats[MTL_MAX_RX_QUEUES];
++ struct stmmac_pcpu_stats __percpu *pcpu_stats;
+ unsigned long rx_dropped;
+ unsigned long rx_errors;
+ unsigned long tx_dropped;
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+@@ -440,8 +440,7 @@ static int sun8i_dwmac_dma_interrupt(str
+ struct stmmac_extra_stats *x, u32 chan,
+ u32 dir)
+ {
+- struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+- struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++ struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ int ret = 0;
+ u32 v;
+
+@@ -454,9 +453,9 @@ static int sun8i_dwmac_dma_interrupt(str
+
+ if (v & EMAC_TX_INT) {
+ ret |= handle_tx;
+- u64_stats_update_begin(&txq_stats->syncp);
+- txq_stats->tx_normal_irq_n++;
+- u64_stats_update_end(&txq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ }
+
+ if (v & EMAC_TX_DMA_STOP_INT)
+@@ -478,9 +477,9 @@ static int sun8i_dwmac_dma_interrupt(str
+
+ if (v & EMAC_RX_INT) {
+ ret |= handle_rx;
+- u64_stats_update_begin(&rxq_stats->syncp);
+- rxq_stats->rx_normal_irq_n++;
+- u64_stats_update_end(&rxq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ }
+
+ if (v & EMAC_RX_BUF_UA_INT)
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
+@@ -171,8 +171,7 @@ int dwmac4_dma_interrupt(struct stmmac_p
+ const struct dwmac4_addrs *dwmac4_addrs = priv->plat->dwmac4_addrs;
+ u32 intr_status = readl(ioaddr + DMA_CHAN_STATUS(dwmac4_addrs, chan));
+ u32 intr_en = readl(ioaddr + DMA_CHAN_INTR_ENA(dwmac4_addrs, chan));
+- struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+- struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++ struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ int ret = 0;
+
+ if (dir == DMA_DIR_RX)
+@@ -201,15 +200,15 @@ int dwmac4_dma_interrupt(struct stmmac_p
+ }
+ /* TX/RX NORMAL interrupts */
+ if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
+- u64_stats_update_begin(&rxq_stats->syncp);
+- rxq_stats->rx_normal_irq_n++;
+- u64_stats_update_end(&rxq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ ret |= handle_rx;
+ }
+ if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
+- u64_stats_update_begin(&txq_stats->syncp);
+- txq_stats->tx_normal_irq_n++;
+- u64_stats_update_end(&txq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ ret |= handle_tx;
+ }
+
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac_lib.c
+@@ -162,8 +162,7 @@ static void show_rx_process_state(unsign
+ int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
+ struct stmmac_extra_stats *x, u32 chan, u32 dir)
+ {
+- struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+- struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++ struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ int ret = 0;
+ /* read the status register (CSR5) */
+ u32 intr_status = readl(ioaddr + DMA_STATUS);
+@@ -215,16 +214,16 @@ int dwmac_dma_interrupt(struct stmmac_pr
+ u32 value = readl(ioaddr + DMA_INTR_ENA);
+ /* to schedule NAPI on real RIE event. */
+ if (likely(value & DMA_INTR_ENA_RIE)) {
+- u64_stats_update_begin(&rxq_stats->syncp);
+- rxq_stats->rx_normal_irq_n++;
+- u64_stats_update_end(&rxq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ ret |= handle_rx;
+ }
+ }
+ if (likely(intr_status & DMA_STATUS_TI)) {
+- u64_stats_update_begin(&txq_stats->syncp);
+- txq_stats->tx_normal_irq_n++;
+- u64_stats_update_end(&txq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ ret |= handle_tx;
+ }
+ if (unlikely(intr_status & DMA_STATUS_ERI))
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_dma.c
+@@ -333,8 +333,7 @@ static int dwxgmac2_dma_interrupt(struct
+ struct stmmac_extra_stats *x, u32 chan,
+ u32 dir)
+ {
+- struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+- struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
++ struct stmmac_pcpu_stats *stats = this_cpu_ptr(priv->xstats.pcpu_stats);
+ u32 intr_status = readl(ioaddr + XGMAC_DMA_CH_STATUS(chan));
+ u32 intr_en = readl(ioaddr + XGMAC_DMA_CH_INT_EN(chan));
+ int ret = 0;
+@@ -363,15 +362,15 @@ static int dwxgmac2_dma_interrupt(struct
+ /* TX/RX NORMAL interrupts */
+ if (likely(intr_status & XGMAC_NIS)) {
+ if (likely(intr_status & XGMAC_RI)) {
+- u64_stats_update_begin(&rxq_stats->syncp);
+- rxq_stats->rx_normal_irq_n++;
+- u64_stats_update_end(&rxq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->rx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ ret |= handle_rx;
+ }
+ if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
+- u64_stats_update_begin(&txq_stats->syncp);
+- txq_stats->tx_normal_irq_n++;
+- u64_stats_update_end(&txq_stats->syncp);
++ u64_stats_update_begin(&stats->syncp);
++ u64_stats_inc(&stats->tx_normal_irq_n[chan]);
++ u64_stats_update_end(&stats->syncp);
+ ret |= handle_tx;
+ }
+ }
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+@@ -546,44 +546,79 @@ stmmac_set_pauseparam(struct net_device
+ }
+ }
+
++static u64 stmmac_get_rx_normal_irq_n(struct stmmac_priv *priv, int q)
++{
++ u64 total;
++ int cpu;
++
++ total = 0;
++ for_each_possible_cpu(cpu) {
++ struct stmmac_pcpu_stats *pcpu;
++ unsigned int start;
++ u64 irq_n;
++
++ pcpu = per_cpu_ptr(priv->xstats.pcpu_stats, cpu);
++ do {
++ start = u64_stats_fetch_begin(&pcpu->syncp);
++ irq_n = u64_stats_read(&pcpu->rx_normal_irq_n[q]);
++ } while (u64_stats_fetch_retry(&pcpu->syncp, start));
++ total += irq_n;
++ }
++ return total;
++}
++
++static u64 stmmac_get_tx_normal_irq_n(struct stmmac_priv *priv, int q)
++{
++ u64 total;
++ int cpu;
++
++ total = 0;
++ for_each_possible_cpu(cpu) {
++ struct stmmac_pcpu_stats *pcpu;
++ unsigned int start;
++ u64 irq_n;
++
++ pcpu = per_cpu_ptr(priv->xstats.pcpu_stats, cpu);
++ do {
++ start = u64_stats_fetch_begin(&pcpu->syncp);
++ irq_n = u64_stats_read(&pcpu->tx_normal_irq_n[q]);
++ } while (u64_stats_fetch_retry(&pcpu->syncp, start));
++ total += irq_n;
++ }
++ return total;
++}
++
+ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
+ {
+ u32 tx_cnt = priv->plat->tx_queues_to_use;
+ u32 rx_cnt = priv->plat->rx_queues_to_use;
+ unsigned int start;
+- int q, stat;
+- char *p;
++ int q;
+
+ for (q = 0; q < tx_cnt; q++) {
+ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
+- struct stmmac_txq_stats snapshot;
++ u64 pkt_n;
+
+ do {
+- start = u64_stats_fetch_begin(&txq_stats->syncp);
+- snapshot = *txq_stats;
+- } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
++ start = u64_stats_fetch_begin(&txq_stats->napi_syncp);
++ pkt_n = u64_stats_read(&txq_stats->napi.tx_pkt_n);
++ } while (u64_stats_fetch_retry(&txq_stats->napi_syncp, start));
+
+- p = (char *)&snapshot + offsetof(struct stmmac_txq_stats, tx_pkt_n);
+- for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
+- *data++ = (*(u64 *)p);
+- p += sizeof(u64);
+- }
++ *data++ = pkt_n;
++ *data++ = stmmac_get_tx_normal_irq_n(priv, q);
+ }
+
+ for (q = 0; q < rx_cnt; q++) {
+ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
+- struct stmmac_rxq_stats snapshot;
++ u64 pkt_n;
+
+ do {
+- start = u64_stats_fetch_begin(&rxq_stats->syncp);
+- snapshot = *rxq_stats;
+- } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
++ start = u64_stats_fetch_begin(&rxq_stats->napi_syncp);
++ pkt_n = u64_stats_read(&rxq_stats->napi.rx_pkt_n);
++ } while (u64_stats_fetch_retry(&rxq_stats->napi_syncp, start));
+
+- p = (char *)&snapshot + offsetof(struct stmmac_rxq_stats, rx_pkt_n);
+- for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
+- *data++ = (*(u64 *)p);
+- p += sizeof(u64);
+- }
++ *data++ = pkt_n;
++ *data++ = stmmac_get_rx_normal_irq_n(priv, q);
+ }
+ }
+
+@@ -642,39 +677,49 @@ static void stmmac_get_ethtool_stats(str
+ pos = j;
+ for (i = 0; i < rx_queues_count; i++) {
+ struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[i];
+- struct stmmac_rxq_stats snapshot;
++ struct stmmac_napi_rx_stats snapshot;
++ u64 n_irq;
+
+ j = pos;
+ do {
+- start = u64_stats_fetch_begin(&rxq_stats->syncp);
+- snapshot = *rxq_stats;
+- } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
+-
+- data[j++] += snapshot.rx_pkt_n;
+- data[j++] += snapshot.rx_normal_irq_n;
+- normal_irq_n += snapshot.rx_normal_irq_n;
+- napi_poll += snapshot.napi_poll;
++ start = u64_stats_fetch_begin(&rxq_stats->napi_syncp);
++ snapshot = rxq_stats->napi;
++ } while (u64_stats_fetch_retry(&rxq_stats->napi_syncp, start));
++
++ data[j++] += u64_stats_read(&snapshot.rx_pkt_n);
++ n_irq = stmmac_get_rx_normal_irq_n(priv, i);
++ data[j++] += n_irq;
++ normal_irq_n += n_irq;
++ napi_poll += u64_stats_read(&snapshot.poll);
+ }
+
+ pos = j;
+ for (i = 0; i < tx_queues_count; i++) {
+ struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[i];
+- struct stmmac_txq_stats snapshot;
++ struct stmmac_napi_tx_stats napi_snapshot;
++ struct stmmac_q_tx_stats q_snapshot;
++ u64 n_irq;
+
+ j = pos;
+ do {
+- start = u64_stats_fetch_begin(&txq_stats->syncp);
+- snapshot = *txq_stats;
+- } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
+-
+- data[j++] += snapshot.tx_pkt_n;
+- data[j++] += snapshot.tx_normal_irq_n;
+- normal_irq_n += snapshot.tx_normal_irq_n;
+- data[j++] += snapshot.tx_clean;
+- data[j++] += snapshot.tx_set_ic_bit;
+- data[j++] += snapshot.tx_tso_frames;
+- data[j++] += snapshot.tx_tso_nfrags;
+- napi_poll += snapshot.napi_poll;
++ start = u64_stats_fetch_begin(&txq_stats->q_syncp);
++ q_snapshot = txq_stats->q;
++ } while (u64_stats_fetch_retry(&txq_stats->q_syncp, start));
++ do {
++ start = u64_stats_fetch_begin(&txq_stats->napi_syncp);
++ napi_snapshot = txq_stats->napi;
++ } while (u64_stats_fetch_retry(&txq_stats->napi_syncp, start));
++
++ data[j++] += u64_stats_read(&napi_snapshot.tx_pkt_n);
++ n_irq = stmmac_get_tx_normal_irq_n(priv, i);
++ data[j++] += n_irq;
++ normal_irq_n += n_irq;
++ data[j++] += u64_stats_read(&napi_snapshot.tx_clean);
++ data[j++] += u64_stats_read(&q_snapshot.tx_set_ic_bit) +
++ u64_stats_read(&napi_snapshot.tx_set_ic_bit);
++ data[j++] += u64_stats_read(&q_snapshot.tx_tso_frames);
++ data[j++] += u64_stats_read(&q_snapshot.tx_tso_nfrags);
++ napi_poll += u64_stats_read(&napi_snapshot.poll);
+ }
+ normal_irq_n += priv->xstats.rx_early_irq;
+ data[j++] = normal_irq_n;
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -2433,7 +2433,6 @@ static bool stmmac_xdp_xmit_zc(struct st
+ struct xdp_desc xdp_desc;
+ bool work_done = true;
+ u32 tx_set_ic_bit = 0;
+- unsigned long flags;
+
+ /* Avoids TX time-out as we are sharing with slow path */
+ txq_trans_cond_update(nq);
+@@ -2506,9 +2505,9 @@ static bool stmmac_xdp_xmit_zc(struct st
+ tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size);
+ entry = tx_q->cur_tx;
+ }
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->tx_set_ic_bit += tx_set_ic_bit;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_update_begin(&txq_stats->napi_syncp);
++ u64_stats_add(&txq_stats->napi.tx_set_ic_bit, tx_set_ic_bit);
++ u64_stats_update_end(&txq_stats->napi_syncp);
+
+ if (tx_desc) {
+ stmmac_flush_tx_descriptors(priv, queue);
+@@ -2552,7 +2551,6 @@ static int stmmac_tx_clean(struct stmmac
+ unsigned int bytes_compl = 0, pkts_compl = 0;
+ unsigned int entry, xmits = 0, count = 0;
+ u32 tx_packets = 0, tx_errors = 0;
+- unsigned long flags;
+
+ __netif_tx_lock_bh(netdev_get_tx_queue(priv->dev, queue));
+
+@@ -2708,11 +2706,11 @@ static int stmmac_tx_clean(struct stmmac
+ if (tx_q->dirty_tx != tx_q->cur_tx)
+ stmmac_tx_timer_arm(priv, queue);
+
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->tx_packets += tx_packets;
+- txq_stats->tx_pkt_n += tx_packets;
+- txq_stats->tx_clean++;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_update_begin(&txq_stats->napi_syncp);
++ u64_stats_add(&txq_stats->napi.tx_packets, tx_packets);
++ u64_stats_add(&txq_stats->napi.tx_pkt_n, tx_packets);
++ u64_stats_inc(&txq_stats->napi.tx_clean);
++ u64_stats_update_end(&txq_stats->napi_syncp);
+
+ priv->xstats.tx_errors += tx_errors;
+
+@@ -4130,7 +4128,6 @@ static netdev_tx_t stmmac_tso_xmit(struc
+ struct stmmac_tx_queue *tx_q;
+ bool has_vlan, set_ic;
+ u8 proto_hdr_len, hdr;
+- unsigned long flags;
+ u32 pay_len, mss;
+ dma_addr_t des;
+ int i;
+@@ -4307,13 +4304,13 @@ static netdev_tx_t stmmac_tso_xmit(struc
+ netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+ }
+
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->tx_bytes += skb->len;
+- txq_stats->tx_tso_frames++;
+- txq_stats->tx_tso_nfrags += nfrags;
++ u64_stats_update_begin(&txq_stats->q_syncp);
++ u64_stats_add(&txq_stats->q.tx_bytes, skb->len);
++ u64_stats_inc(&txq_stats->q.tx_tso_frames);
++ u64_stats_add(&txq_stats->q.tx_tso_nfrags, nfrags);
+ if (set_ic)
+- txq_stats->tx_set_ic_bit++;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_inc(&txq_stats->q.tx_set_ic_bit);
++ u64_stats_update_end(&txq_stats->q_syncp);
+
+ if (priv->sarc_type)
+ stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+@@ -4390,7 +4387,6 @@ static netdev_tx_t stmmac_xmit(struct sk
+ struct stmmac_tx_queue *tx_q;
+ bool has_vlan, set_ic;
+ int entry, first_tx;
+- unsigned long flags;
+ dma_addr_t des;
+
+ tx_q = &priv->dma_conf.tx_queue[queue];
+@@ -4546,11 +4542,11 @@ static netdev_tx_t stmmac_xmit(struct sk
+ netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
+ }
+
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->tx_bytes += skb->len;
++ u64_stats_update_begin(&txq_stats->q_syncp);
++ u64_stats_add(&txq_stats->q.tx_bytes, skb->len);
+ if (set_ic)
+- txq_stats->tx_set_ic_bit++;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_inc(&txq_stats->q.tx_set_ic_bit);
++ u64_stats_update_end(&txq_stats->q_syncp);
+
+ if (priv->sarc_type)
+ stmmac_set_desc_sarc(priv, first, priv->sarc_type);
+@@ -4814,12 +4810,11 @@ static int stmmac_xdp_xmit_xdpf(struct s
+ set_ic = false;
+
+ if (set_ic) {
+- unsigned long flags;
+ tx_q->tx_count_frames = 0;
+ stmmac_set_tx_ic(priv, tx_desc);
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->tx_set_ic_bit++;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_update_begin(&txq_stats->q_syncp);
++ u64_stats_inc(&txq_stats->q.tx_set_ic_bit);
++ u64_stats_update_end(&txq_stats->q_syncp);
+ }
+
+ stmmac_enable_dma_transmission(priv, priv->ioaddr);
+@@ -4979,7 +4974,6 @@ static void stmmac_dispatch_skb_zc(struc
+ unsigned int len = xdp->data_end - xdp->data;
+ enum pkt_hash_types hash_type;
+ int coe = priv->hw->rx_csum;
+- unsigned long flags;
+ struct sk_buff *skb;
+ u32 hash;
+
+@@ -5004,10 +4998,10 @@ static void stmmac_dispatch_skb_zc(struc
+ skb_record_rx_queue(skb, queue);
+ napi_gro_receive(&ch->rxtx_napi, skb);
+
+- flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+- rxq_stats->rx_pkt_n++;
+- rxq_stats->rx_bytes += len;
+- u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++ u64_stats_update_begin(&rxq_stats->napi_syncp);
++ u64_stats_inc(&rxq_stats->napi.rx_pkt_n);
++ u64_stats_add(&rxq_stats->napi.rx_bytes, len);
++ u64_stats_update_end(&rxq_stats->napi_syncp);
+ }
+
+ static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
+@@ -5079,7 +5073,6 @@ static int stmmac_rx_zc(struct stmmac_pr
+ unsigned int desc_size;
+ struct bpf_prog *prog;
+ bool failure = false;
+- unsigned long flags;
+ int xdp_status = 0;
+ int status = 0;
+
+@@ -5230,9 +5223,9 @@ read_again:
+
+ stmmac_finalize_xdp_rx(priv, xdp_status);
+
+- flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+- rxq_stats->rx_pkt_n += count;
+- u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++ u64_stats_update_begin(&rxq_stats->napi_syncp);
++ u64_stats_add(&rxq_stats->napi.rx_pkt_n, count);
++ u64_stats_update_end(&rxq_stats->napi_syncp);
+
+ priv->xstats.rx_dropped += rx_dropped;
+ priv->xstats.rx_errors += rx_errors;
+@@ -5270,7 +5263,6 @@ static int stmmac_rx(struct stmmac_priv
+ unsigned int desc_size;
+ struct sk_buff *skb = NULL;
+ struct stmmac_xdp_buff ctx;
+- unsigned long flags;
+ int xdp_status = 0;
+ int buf_sz;
+
+@@ -5518,11 +5510,11 @@ drain_data:
+
+ stmmac_rx_refill(priv, queue);
+
+- flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+- rxq_stats->rx_packets += rx_packets;
+- rxq_stats->rx_bytes += rx_bytes;
+- rxq_stats->rx_pkt_n += count;
+- u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++ u64_stats_update_begin(&rxq_stats->napi_syncp);
++ u64_stats_add(&rxq_stats->napi.rx_packets, rx_packets);
++ u64_stats_add(&rxq_stats->napi.rx_bytes, rx_bytes);
++ u64_stats_add(&rxq_stats->napi.rx_pkt_n, count);
++ u64_stats_update_end(&rxq_stats->napi_syncp);
+
+ priv->xstats.rx_dropped += rx_dropped;
+ priv->xstats.rx_errors += rx_errors;
+@@ -5537,13 +5529,12 @@ static int stmmac_napi_poll_rx(struct na
+ struct stmmac_priv *priv = ch->priv_data;
+ struct stmmac_rxq_stats *rxq_stats;
+ u32 chan = ch->index;
+- unsigned long flags;
+ int work_done;
+
+ rxq_stats = &priv->xstats.rxq_stats[chan];
+- flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+- rxq_stats->napi_poll++;
+- u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++ u64_stats_update_begin(&rxq_stats->napi_syncp);
++ u64_stats_inc(&rxq_stats->napi.poll);
++ u64_stats_update_end(&rxq_stats->napi_syncp);
+
+ work_done = stmmac_rx(priv, budget, chan);
+ if (work_done < budget && napi_complete_done(napi, work_done)) {
+@@ -5564,13 +5555,12 @@ static int stmmac_napi_poll_tx(struct na
+ struct stmmac_priv *priv = ch->priv_data;
+ struct stmmac_txq_stats *txq_stats;
+ u32 chan = ch->index;
+- unsigned long flags;
+ int work_done;
+
+ txq_stats = &priv->xstats.txq_stats[chan];
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->napi_poll++;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_update_begin(&txq_stats->napi_syncp);
++ u64_stats_inc(&txq_stats->napi.poll);
++ u64_stats_update_end(&txq_stats->napi_syncp);
+
+ work_done = stmmac_tx_clean(priv, budget, chan);
+ work_done = min(work_done, budget);
+@@ -5595,17 +5585,16 @@ static int stmmac_napi_poll_rxtx(struct
+ struct stmmac_rxq_stats *rxq_stats;
+ struct stmmac_txq_stats *txq_stats;
+ u32 chan = ch->index;
+- unsigned long flags;
+
+ rxq_stats = &priv->xstats.rxq_stats[chan];
+- flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+- rxq_stats->napi_poll++;
+- u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
++ u64_stats_update_begin(&rxq_stats->napi_syncp);
++ u64_stats_inc(&rxq_stats->napi.poll);
++ u64_stats_update_end(&rxq_stats->napi_syncp);
+
+ txq_stats = &priv->xstats.txq_stats[chan];
+- flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+- txq_stats->napi_poll++;
+- u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
++ u64_stats_update_begin(&txq_stats->napi_syncp);
++ u64_stats_inc(&txq_stats->napi.poll);
++ u64_stats_update_end(&txq_stats->napi_syncp);
+
+ tx_done = stmmac_tx_clean(priv, budget, chan);
+ tx_done = min(tx_done, budget);
+@@ -6857,10 +6846,13 @@ static void stmmac_get_stats64(struct ne
+ u64 tx_bytes;
+
+ do {
+- start = u64_stats_fetch_begin(&txq_stats->syncp);
+- tx_packets = txq_stats->tx_packets;
+- tx_bytes = txq_stats->tx_bytes;
+- } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
++ start = u64_stats_fetch_begin(&txq_stats->q_syncp);
++ tx_bytes = u64_stats_read(&txq_stats->q.tx_bytes);
++ } while (u64_stats_fetch_retry(&txq_stats->q_syncp, start));
++ do {
++ start = u64_stats_fetch_begin(&txq_stats->napi_syncp);
++ tx_packets = u64_stats_read(&txq_stats->napi.tx_packets);
++ } while (u64_stats_fetch_retry(&txq_stats->napi_syncp, start));
+
+ stats->tx_packets += tx_packets;
+ stats->tx_bytes += tx_bytes;
+@@ -6872,10 +6864,10 @@ static void stmmac_get_stats64(struct ne
+ u64 rx_bytes;
+
+ do {
+- start = u64_stats_fetch_begin(&rxq_stats->syncp);
+- rx_packets = rxq_stats->rx_packets;
+- rx_bytes = rxq_stats->rx_bytes;
+- } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
++ start = u64_stats_fetch_begin(&rxq_stats->napi_syncp);
++ rx_packets = u64_stats_read(&rxq_stats->napi.rx_packets);
++ rx_bytes = u64_stats_read(&rxq_stats->napi.rx_bytes);
++ } while (u64_stats_fetch_retry(&rxq_stats->napi_syncp, start));
+
+ stats->rx_packets += rx_packets;
+ stats->rx_bytes += rx_bytes;
+@@ -7239,9 +7231,16 @@ int stmmac_dvr_probe(struct device *devi
+ priv->dev = ndev;
+
+ for (i = 0; i < MTL_MAX_RX_QUEUES; i++)
+- u64_stats_init(&priv->xstats.rxq_stats[i].syncp);
+- for (i = 0; i < MTL_MAX_TX_QUEUES; i++)
+- u64_stats_init(&priv->xstats.txq_stats[i].syncp);
++ u64_stats_init(&priv->xstats.rxq_stats[i].napi_syncp);
++ for (i = 0; i < MTL_MAX_TX_QUEUES; i++) {
++ u64_stats_init(&priv->xstats.txq_stats[i].q_syncp);
++ u64_stats_init(&priv->xstats.txq_stats[i].napi_syncp);
++ }
++
++ priv->xstats.pcpu_stats =
++ devm_netdev_alloc_pcpu_stats(device, struct stmmac_pcpu_stats);
++ if (!priv->xstats.pcpu_stats)
++ return -ENOMEM;
+
+ stmmac_set_ethtool_ops(ndev);
+ priv->pause = pause;
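
The hunks above split each queue's single u64_stats_sync into two
instances, one per writer context: the (TX-queue-locked) xmit paths
update the "q" counters under q_syncp, while the NAPI poll paths update
the "napi" counters under napi_syncp. Since each context is then the
sole writer of its syncp, the plain u64_stats_update_begin()/end() pair
suffices and the irqsave variants can be dropped. A minimal sketch of
the TX layout this implies, with field names taken from the hunks above
(the actual definition lives in the driver's common header, which is
not part of this excerpt):

struct stmmac_txq_stats {
	/* Written only from xmit paths holding the TX queue lock. */
	struct u64_stats_sync q_syncp;
	struct {
		u64_stats_t tx_bytes;
		u64_stats_t tx_set_ic_bit;
		u64_stats_t tx_tso_frames;
		u64_stats_t tx_tso_nfrags;
	} q;

	/* Written only from the NAPI poll loop. */
	struct u64_stats_sync napi_syncp;
	struct {
		u64_stats_t tx_packets;
		u64_stats_t tx_pkt_n;
		u64_stats_t tx_clean;
		u64_stats_t tx_set_ic_bit;
		u64_stats_t poll;
	} napi;
};

Readers such as stmmac_get_stats64() then sample each syncp separately
with the usual u64_stats_fetch_begin()/u64_stats_fetch_retry() loop, as
the hunks above show.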
tty-fix-tty_port_tty_-hangup-kernel-doc.patch
firmware-arm_scmi-fix-unused-notifier-block-in-unregister.patch
revert-iommu-amd-skip-enabling-command-event-buffers-for-kdump.patch
+net-ethtool-fix-the-error-condition-in-ethtool_get_phy_stats_ethtool.patch
+kernel-fork-only-call-untrack_pfn_clear-on-vmas-duplicated-for-fork.patch
+mm-un-track_pfn_copy-fix-doc-improvements.patch
+usb-gadget-lpc32xx_udc-fix-clock-imbalance-in-error-path.patch
+net-stmmac-fix-incorrect-rxq-txq_stats-reference.patch
+net-stmmac-fix-ethtool-per-queue-statistics.patch
+net-stmmac-protect-updates-of-64-bit-statistics-counters.patch
+wifi-nl80211-fix-puncturing-bitmap-policy.patch
+wifi-mac80211-fix-switch-count-in-ema-beacons.patch
--- /dev/null
+From 782be79e4551550d7a82b1957fc0f7347e6d461f Mon Sep 17 00:00:00 2001
+From: Johan Hovold <johan@kernel.org>
+Date: Thu, 18 Dec 2025 16:35:15 +0100
+Subject: usb: gadget: lpc32xx_udc: fix clock imbalance in error path
+
+From: Johan Hovold <johan@kernel.org>
+
+commit 782be79e4551550d7a82b1957fc0f7347e6d461f upstream.
+
+A recent change fixing a device reference leak introduced a clock
+imbalance by reusing an error path so that the clock could end up
+being disabled before it had ever been enabled.
+
+Note that the clock framework allows NULL clocks to be passed in, so
+there is no risk of a NULL pointer dereference.
+
+Also drop the bogus I2C client NULL check added by the offending commit
+as the pointer has already been verified to be non-NULL.
+
+Fixes: c84117912bdd ("USB: lpc32xx_udc: Fix error handling in probe")
+Cc: stable@vger.kernel.org
+Cc: Ma Ke <make24@iscas.ac.cn>
+Signed-off-by: Johan Hovold <johan@kernel.org>
+Reviewed-by: Vladimir Zapolskiy <vz@mleia.com>
+Link: https://patch.msgid.link/20251218153519.19453-2-johan@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/gadget/udc/lpc32xx_udc.c | 19 +++++++++----------
+ 1 file changed, 9 insertions(+), 10 deletions(-)
+
+--- a/drivers/usb/gadget/udc/lpc32xx_udc.c
++++ b/drivers/usb/gadget/udc/lpc32xx_udc.c
+@@ -3027,7 +3027,7 @@ static int lpc32xx_udc_probe(struct plat
+ pdev->dev.dma_mask = &lpc32xx_usbd_dmamask;
+ retval = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32));
+ if (retval)
+- goto i2c_fail;
++ goto err_put_client;
+
+ udc->board = &lpc32xx_usbddata;
+
+@@ -3047,7 +3047,7 @@ static int lpc32xx_udc_probe(struct plat
+ udc->udp_irq[i] = platform_get_irq(pdev, i);
+ if (udc->udp_irq[i] < 0) {
+ retval = udc->udp_irq[i];
+- goto i2c_fail;
++ goto err_put_client;
+ }
+ }
+
+@@ -3055,7 +3055,7 @@ static int lpc32xx_udc_probe(struct plat
+ if (IS_ERR(udc->udp_baseaddr)) {
+ dev_err(udc->dev, "IO map failure\n");
+ retval = PTR_ERR(udc->udp_baseaddr);
+- goto i2c_fail;
++ goto err_put_client;
+ }
+
+ /* Get USB device clock */
+@@ -3063,14 +3063,14 @@ static int lpc32xx_udc_probe(struct plat
+ if (IS_ERR(udc->usb_slv_clk)) {
+ dev_err(udc->dev, "failed to acquire USB device clock\n");
+ retval = PTR_ERR(udc->usb_slv_clk);
+- goto i2c_fail;
++ goto err_put_client;
+ }
+
+ /* Enable USB device clock */
+ retval = clk_prepare_enable(udc->usb_slv_clk);
+ if (retval < 0) {
+ dev_err(udc->dev, "failed to start USB device clock\n");
+- goto i2c_fail;
++ goto err_put_client;
+ }
+
+ /* Setup deferred workqueue data */
+@@ -3172,9 +3172,10 @@ dma_alloc_fail:
+ dma_free_coherent(&pdev->dev, UDCA_BUFF_SIZE,
+ udc->udca_v_base, udc->udca_p_base);
+ i2c_fail:
+- if (udc->isp1301_i2c_client)
+- put_device(&udc->isp1301_i2c_client->dev);
+ clk_disable_unprepare(udc->usb_slv_clk);
++err_put_client:
++ put_device(&udc->isp1301_i2c_client->dev);
++
+ dev_err(udc->dev, "%s probe failed, %d\n", driver_name, retval);
+
+ return retval;
+@@ -3199,11 +3200,9 @@ static int lpc32xx_udc_remove(struct pla
+ dma_free_coherent(&pdev->dev, UDCA_BUFF_SIZE,
+ udc->udca_v_base, udc->udca_p_base);
+
+- if (udc->isp1301_i2c_client)
+- put_device(&udc->isp1301_i2c_client->dev);
+-
+ clk_disable_unprepare(udc->usb_slv_clk);
+
++ put_device(&udc->isp1301_i2c_client->dev);
+ return 0;
+ }
+
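
The fix restores the usual probe unwind ordering: error labels run in
reverse order of resource acquisition, so a failure before
clk_prepare_enable() must branch past the clock disable. A condensed,
illustrative sketch of the pattern, with hypothetical helper and
resource names (example_get_client(), example_register()) rather than
the driver's full probe path:

static int example_probe(struct platform_device *pdev)
{
	struct i2c_client *client;
	struct clk *clk;
	int ret;

	client = example_get_client(pdev);	/* takes a device reference */
	if (!client)
		return -EPROBE_DEFER;

	clk = devm_clk_get(&pdev->dev, NULL);
	if (IS_ERR(clk)) {
		ret = PTR_ERR(clk);
		goto err_put_client;		/* clock never enabled */
	}

	ret = clk_prepare_enable(clk);
	if (ret)
		goto err_put_client;		/* still never enabled */

	ret = example_register(pdev);		/* hypothetical final step */
	if (ret)
		goto err_disable_clk;		/* enabled: must be undone */

	return 0;

err_disable_clk:
	clk_disable_unprepare(clk);
err_put_client:
	put_device(&client->dev);
	return ret;
}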
--- /dev/null
+From 1afa18e9e72396d1e1aedd6dbb34681f2413316b Mon Sep 17 00:00:00 2001
+From: Aditya Kumar Singh <quic_adisi@quicinc.com>
+Date: Wed, 31 May 2023 11:50:12 +0530
+Subject: wifi: mac80211: fix switch count in EMA beacons
+
+From: Aditya Kumar Singh <quic_adisi@quicinc.com>
+
+commit 1afa18e9e72396d1e1aedd6dbb34681f2413316b upstream.
+
+Currently, whenever an EMA beacon is formed, the caller passes the
+is_template argument as false, so the switch count is always
+decremented once, which is wrong.
+
+Also, if the switch count is equal to the profile periodicity, the
+switch count reaches zero, which triggers a WARN_ON_ONCE.
+
+[ 261.593915] CPU: 1 PID: 800 Comm: kworker/u8:3 Not tainted 5.4.213 #0
+[ 261.616143] Hardware name: Qualcomm Technologies, Inc. IPQ9574
+[ 261.622666] Workqueue: phy0 ath12k_get_link_bss_conf [ath12k]
+[ 261.629771] pstate: 60400005 (nZCv daif +PAN -UAO)
+[ 261.635595] pc : ieee80211_next_txq+0x1ac/0x1b8 [mac80211]
+[ 261.640282] lr : ieee80211_beacon_update_cntdwn+0x64/0xb4 [mac80211]
+[...]
+[ 261.729683] Call trace:
+[ 261.734986] ieee80211_next_txq+0x1ac/0x1b8 [mac80211]
+[ 261.737156] ieee80211_beacon_cntdwn_is_complete+0xa28/0x1194 [mac80211]
+[ 261.742365] ieee80211_beacon_cntdwn_is_complete+0xef4/0x1194 [mac80211]
+[ 261.749224] ieee80211_beacon_get_template_ema_list+0x38/0x5c [mac80211]
+[ 261.755908] ath12k_get_link_bss_conf+0xf8/0x33b4 [ath12k]
+[ 261.762590] ath12k_get_link_bss_conf+0x390/0x33b4 [ath12k]
+[ 261.767881] process_one_work+0x194/0x270
+[ 261.773346] worker_thread+0x200/0x314
+[ 261.777514] kthread+0x140/0x150
+[ 261.781158] ret_from_fork+0x10/0x18
+
+Fix this issue by passing the is_template argument as true when
+fetching the EMA beacons.
+
+Fixes: bd54f3c29077 ("wifi: mac80211: generate EMA beacons in AP mode")
+Signed-off-by: Aditya Kumar Singh <quic_adisi@quicinc.com>
+Link: https://lore.kernel.org/r/20230531062012.4537-1-quic_adisi@quicinc.com
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/tx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -5456,7 +5456,7 @@ ieee80211_beacon_get_template_ema_list(s
+ {
+ struct ieee80211_ema_beacons *ema_beacons = NULL;
+
+- WARN_ON(__ieee80211_beacon_get(hw, vif, NULL, false, link_id, 0,
++ WARN_ON(__ieee80211_beacon_get(hw, vif, NULL, true, link_id, 0,
+ &ema_beacons));
+
+ return ema_beacons;
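
The one-line change works because __ieee80211_beacon_get() only
consumes countdown state for beacons that are actually about to be
transmitted, not for templates that a driver fetches and retransmits
on its own. A simplified sketch of the assumed gating inside the
beacon-get path (not the verbatim mac80211 code):

	/* Templates must not consume countdown state: drivers fetch
	 * them repeatedly, so decrementing here once per fetch would
	 * run the counter down to zero prematurely. Passing
	 * is_template=true from the EMA list helper avoids exactly
	 * that extra decrement.
	 */
	if (beacon->cntdwn_counter_offsets[0]) {
		if (!is_template)
			ieee80211_beacon_update_cntdwn(vif);
		ieee80211_set_beacon_cntdwn(sdata, beacon, link);
	}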
--- /dev/null
+From b27f07c50a73e34eefb6b1030b235192b7ded850 Mon Sep 17 00:00:00 2001
+From: Johannes Berg <johannes.berg@intel.com>
+Date: Fri, 24 Feb 2023 13:36:57 +0100
+Subject: wifi: nl80211: fix puncturing bitmap policy
+
+From: Johannes Berg <johannes.berg@intel.com>
+
+commit b27f07c50a73e34eefb6b1030b235192b7ded850 upstream.
+
+This was meant to be a u32, and while applying the patch
+I tried to use policy validation for it. However, not only
+did I copy/paste it as u8 instead of u32, but I also used
+the policy range macro incorrectly. Fix both of these issues.
+
+Fixes: d7c1a9a0ed18 ("wifi: nl80211: validate and configure puncturing bitmap")
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/nl80211.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -467,6 +467,11 @@ static struct netlink_range_validation q
+ .max = INT_MAX,
+ };
+
++static struct netlink_range_validation nl80211_punct_bitmap_range = {
++ .min = 0,
++ .max = 0xffff,
++};
++
+ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [0] = { .strict_start_type = NL80211_ATTR_HE_OBSS_PD },
+ [NL80211_ATTR_WIPHY] = { .type = NLA_U32 },
+@@ -810,7 +815,8 @@ static const struct nla_policy nl80211_p
+ [NL80211_ATTR_MLD_ADDR] = NLA_POLICY_EXACT_LEN(ETH_ALEN),
+ [NL80211_ATTR_MLO_SUPPORT] = { .type = NLA_FLAG },
+ [NL80211_ATTR_MAX_NUM_AKM_SUITES] = { .type = NLA_REJECT },
+- [NL80211_ATTR_PUNCT_BITMAP] = NLA_POLICY_RANGE(NLA_U8, 0, 0xffff),
++ [NL80211_ATTR_PUNCT_BITMAP] =
++ NLA_POLICY_FULL_RANGE(NLA_U32, &nl80211_punct_bitmap_range),
+
+ [NL80211_ATTR_MAX_HW_TIMESTAMP_PEERS] = { .type = NLA_U16 },
+ [NL80211_ATTR_HW_TIMESTAMP_ENABLED] = { .type = NLA_FLAG },
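
The switch to NLA_POLICY_FULL_RANGE() is what makes the 0xffff bound
expressible: the inline NLA_POLICY_RANGE() form stores its bounds in
the policy's narrow s16 min/max fields, which cannot hold 0xffff,
while the full-range form points at a separate struct
netlink_range_validation carrying u64 bounds. A minimal sketch of the
two forms, using hypothetical attribute names (EXAMPLE_ATTR_*):

static struct netlink_range_validation example_wide_range = {
	.min = 0,
	.max = 0xffff,	/* too large for the inline s16 bounds */
};

static const struct nla_policy example_policy[NUM_EXAMPLE_ATTRS] = {
	/* Inline form: fine while min/max fit in s16. */
	[EXAMPLE_ATTR_SMALL] = NLA_POLICY_RANGE(NLA_U8, 0, 127),
	/* Full form: bounds live in the external structure above. */
	[EXAMPLE_ATTR_WIDE] =
		NLA_POLICY_FULL_RANGE(NLA_U32, &example_wide_range),
};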