--- /dev/null
+From a1cc1697bb56cdf880ad4d17b79a39ef2c294bc9 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Pali=20Roh=C3=A1r?= <pali@kernel.org>
+Date: Thu, 10 Mar 2022 11:39:23 +0100
+Subject: arm64: dts: marvell: armada-37xx: Remap IO space to bus address 0x0
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Pali Rohár <pali@kernel.org>
+
+commit a1cc1697bb56cdf880ad4d17b79a39ef2c294bc9 upstream.
+
+Legacy and old PCI I/O based cards do not support 32-bit I/O addressing.
+
+Since commit 64f160e19e92 ("PCI: aardvark: Configure PCIe resources from
+'ranges' DT property") the kernel can set a different PCIe address on the
+CPU than on the bus for the one A37xx address mapping, without any firmware
+support, provided the bus address does not conflict with another A37xx
+mapping.
+
+So remap I/O space to the bus address 0x0 to enable support for old legacy
+I/O port based cards which have hardcoded I/O ports in low address space.
+
+Note that DDR on A37xx is mapped to bus address 0x0. The mapping of I/O
+space can be set to address 0x0 too, because MEM space and I/O space are
+separate and so do not conflict.
+
+Remapping IO space on Turris Mox to a different address is not possible
+due to a bootloader bug.
+
+Signed-off-by: Pali Rohár <pali@kernel.org>
+Reported-by: Arnd Bergmann <arnd@arndb.de>
+Fixes: 76f6386b25cc ("arm64: dts: marvell: Add Aardvark PCIe support for Armada 3700")
+Cc: stable@vger.kernel.org # 64f160e19e92 ("PCI: aardvark: Configure PCIe resources from 'ranges' DT property")
+Cc: stable@vger.kernel.org # 514ef1e62d65 ("arm64: dts: marvell: armada-37xx: Extend PCIe MEM space")
+Reviewed-by: Arnd Bergmann <arnd@arndb.de>
+Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts | 7 ++++++-
+ arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 2 +-
+ 2 files changed, 7 insertions(+), 2 deletions(-)
+
+--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts
+@@ -139,7 +139,9 @@
+ /*
+ * U-Boot port for Turris Mox has a bug which always expects that "ranges" DT property
+ * contains exactly 2 ranges with 3 (child) address cells, 2 (parent) address cells and
+- * 2 size cells and also expects that the second range starts at 16 MB offset. If these
++ * 2 size cells and also expects that the second range starts at 16 MB offset. Also it
++ * expects that first range uses same address for PCI (child) and CPU (parent) cells (so
++ * no remapping) and that this address is the lowest from all specified ranges. If these
+ * conditions are not met then U-Boot crashes during loading kernel DTB file. PCIe address
+ * space is 128 MB long, so the best split between MEM and IO is to use fixed 16 MB window
+ * for IO and the rest 112 MB (64+32+16) for MEM, despite that maximal IO size is just 64 kB.
+@@ -148,6 +150,9 @@
+ * https://source.denx.de/u-boot/u-boot/-/commit/cb2ddb291ee6fcbddd6d8f4ff49089dfe580f5d7
+ * https://source.denx.de/u-boot/u-boot/-/commit/c64ac3b3185aeb3846297ad7391fc6df8ecd73bf
+ * https://source.denx.de/u-boot/u-boot/-/commit/4a82fca8e330157081fc132a591ebd99ba02ee33
++ * Bug related to requirement of same child and parent addresses for first range is fixed
++ * in U-Boot version 2022.04 by following commit:
++ * https://source.denx.de/u-boot/u-boot/-/commit/1fd54253bca7d43d046bba4853fe5fafd034bc17
+ */
+ #address-cells = <3>;
+ #size-cells = <2>;
+--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+@@ -497,7 +497,7 @@
+ * (totaling 127 MiB) for MEM.
+ */
+ ranges = <0x82000000 0 0xe8000000 0 0xe8000000 0 0x07f00000 /* Port 0 MEM */
+- 0x81000000 0 0xefff0000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */
++ 0x81000000 0 0x00000000 0 0xefff0000 0 0x00010000>; /* Port 0 IO */
+ interrupt-map-mask = <0 0 0 7>;
+ interrupt-map = <0 0 0 1 &pcie_intc 0>,
+ <0 0 0 2 &pcie_intc 1>,
--- /dev/null
+From 6e2edd6371a497a6350bb735534c9bda2a31f43d Mon Sep 17 00:00:00 2001
+From: Catalin Marinas <catalin.marinas@arm.com>
+Date: Thu, 3 Mar 2022 18:00:44 +0000
+Subject: arm64: Ensure execute-only permissions are not allowed without EPAN
+
+From: Catalin Marinas <catalin.marinas@arm.com>
+
+commit 6e2edd6371a497a6350bb735534c9bda2a31f43d upstream.
+
+Commit 18107f8a2df6 ("arm64: Support execute-only permissions with
+Enhanced PAN") re-introduced execute-only permissions when EPAN is
+available. When EPAN is not available, arch_filter_pgprot() is supposed
+to change a PAGE_EXECONLY permission into PAGE_READONLY_EXEC. However,
+if BTI or MTE are present, this check does not detect the execute-only
+pgprot in the presence of PTE_GP (BTI) or MT_NORMAL_TAGGED (MTE),
+allowing the user to request PROT_EXEC with PROT_BTI or PROT_MTE.
+
+Remove the arch_filter_pgprot() function, change the default VM_EXEC
+permissions to PAGE_READONLY_EXEC and update the protection_map[] array
+at arch_initcall() if EPAN is detected.
+
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Fixes: 18107f8a2df6 ("arm64: Support execute-only permissions with Enhanced PAN")
+Cc: <stable@vger.kernel.org> # 5.13.x
+Acked-by: Will Deacon <will@kernel.org>
+Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
+Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/Kconfig | 3 ---
+ arch/arm64/include/asm/pgtable-prot.h | 4 ++--
+ arch/arm64/include/asm/pgtable.h | 11 -----------
+ arch/arm64/mm/mmap.c | 17 +++++++++++++++++
+ 4 files changed, 19 insertions(+), 16 deletions(-)
+
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1264,9 +1264,6 @@ config HW_PERF_EVENTS
+ def_bool y
+ depends on ARM_PMU
+
+-config ARCH_HAS_FILTER_PGPROT
+- def_bool y
+-
+ # Supported by clang >= 7.0
+ config CC_HAVE_SHADOW_CALL_STACK
+ def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18)
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -92,7 +92,7 @@ extern bool arm64_use_ng_mappings;
+ #define __P001 PAGE_READONLY
+ #define __P010 PAGE_READONLY
+ #define __P011 PAGE_READONLY
+-#define __P100 PAGE_EXECONLY
++#define __P100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
+ #define __P101 PAGE_READONLY_EXEC
+ #define __P110 PAGE_READONLY_EXEC
+ #define __P111 PAGE_READONLY_EXEC
+@@ -101,7 +101,7 @@ extern bool arm64_use_ng_mappings;
+ #define __S001 PAGE_READONLY
+ #define __S010 PAGE_SHARED
+ #define __S011 PAGE_SHARED
+-#define __S100 PAGE_EXECONLY
++#define __S100 PAGE_READONLY_EXEC /* PAGE_EXECONLY if Enhanced PAN */
+ #define __S101 PAGE_READONLY_EXEC
+ #define __S110 PAGE_SHARED_EXEC
+ #define __S111 PAGE_SHARED_EXEC
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -1017,17 +1017,6 @@ static inline bool arch_wants_old_prefau
+ }
+ #define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte
+
+-static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
+-{
+- if (cpus_have_const_cap(ARM64_HAS_EPAN))
+- return prot;
+-
+- if (pgprot_val(prot) != pgprot_val(PAGE_EXECONLY))
+- return prot;
+-
+- return PAGE_READONLY_EXEC;
+-}
+-
+ static inline bool pud_sect_supported(void)
+ {
+ return PAGE_SIZE == SZ_4K;
+--- a/arch/arm64/mm/mmap.c
++++ b/arch/arm64/mm/mmap.c
+@@ -7,8 +7,10 @@
+
+ #include <linux/io.h>
+ #include <linux/memblock.h>
++#include <linux/mm.h>
+ #include <linux/types.h>
+
++#include <asm/cpufeature.h>
+ #include <asm/page.h>
+
+ /*
+@@ -38,3 +40,18 @@ int valid_mmap_phys_addr_range(unsigned
+ {
+ return !(((pfn << PAGE_SHIFT) + size) & ~PHYS_MASK);
+ }
++
++static int __init adjust_protection_map(void)
++{
++ /*
++ * With Enhanced PAN we can honour the execute-only permissions as
++ * there is no PAN override with such mappings.
++ */
++ if (cpus_have_const_cap(ARM64_HAS_EPAN)) {
++ protection_map[VM_EXEC] = PAGE_EXECONLY;
++ protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
++ }
++
++ return 0;
++}
++arch_initcall(adjust_protection_map);
--- /dev/null
+From b859ebedd1e730bbda69142fca87af4e712649a1 Mon Sep 17 00:00:00 2001
+From: Paul Semel <semelpaul@gmail.com>
+Date: Tue, 8 Mar 2022 10:30:58 +0100
+Subject: arm64: kasan: fix include error in MTE functions
+
+From: Paul Semel <semelpaul@gmail.com>
+
+commit b859ebedd1e730bbda69142fca87af4e712649a1 upstream.
+
+Fix `error: expected string literal in 'asm'`.
+This happens when compiling an eBPF object file that includes
+`net/net_namespace.h` from the Linux kernel headers.
+
+Include trace:
+ include/net/net_namespace.h:10
+ include/linux/workqueue.h:9
+ include/linux/timer.h:8
+ include/linux/debugobjects.h:6
+ include/linux/spinlock.h:90
+ include/linux/workqueue.h:9
+ arch/arm64/include/asm/spinlock.h:9
+ arch/arm64/include/generated/asm/qrwlock.h:1
+ include/asm-generic/qrwlock.h:14
+ arch/arm64/include/asm/processor.h:33
+ arch/arm64/include/asm/kasan.h:9
+ arch/arm64/include/asm/mte-kasan.h:45
+ arch/arm64/include/asm/mte-def.h:14
+
+Signed-off-by: Paul Semel <paul.semel@datadoghq.com>
+Fixes: 2cb34276427a ("arm64: kasan: simplify and inline MTE functions")
+Cc: <stable@vger.kernel.org> # 5.12.x
+Link: https://lore.kernel.org/r/bacb5387-2992-97e4-0c48-1ed925905bee@gmail.com
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/include/asm/mte-kasan.h | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm64/include/asm/mte-kasan.h
++++ b/arch/arm64/include/asm/mte-kasan.h
+@@ -5,6 +5,7 @@
+ #ifndef __ASM_MTE_KASAN_H
+ #define __ASM_MTE_KASAN_H
+
++#include <asm/compiler.h>
+ #include <asm/mte-def.h>
+
+ #ifndef __ASSEMBLY__
--- /dev/null
+From a679a61520d8a7b0211a1da990404daf5cc80b72 Mon Sep 17 00:00:00 2001
+From: Miklos Szeredi <mszeredi@redhat.com>
+Date: Fri, 18 Feb 2022 11:47:51 +0100
+Subject: fuse: fix fileattr op failure
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Miklos Szeredi <mszeredi@redhat.com>
+
+commit a679a61520d8a7b0211a1da990404daf5cc80b72 upstream.
+
+The fileattr API conversion broke lsattr on ntfs3g.
+
+Previously the ioctl(... FS_IOC_GETFLAGS) returned an EINVAL error, but
+after the conversion the error returned by the fuse filesystem was not
+propagated back to the ioctl() system call, resulting in success being
+returned with bogus values.
+
+Fix by checking for outarg.result in fuse_priv_ioctl(), just as generic
+ioctl code does.
+
+Reported-by: Jean-Pierre André <jean-pierre.andre@wanadoo.fr>
+Fixes: 72227eac177d ("fuse: convert to fileattr")
+Cc: <stable@vger.kernel.org> # v5.13
+Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/fuse/ioctl.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+--- a/fs/fuse/ioctl.c
++++ b/fs/fuse/ioctl.c
+@@ -394,9 +394,12 @@ static int fuse_priv_ioctl(struct inode
+ args.out_args[1].value = ptr;
+
+ err = fuse_simple_request(fm, &args);
+- if (!err && outarg.flags & FUSE_IOCTL_RETRY)
+- err = -EIO;
+-
++ if (!err) {
++ if (outarg.result < 0)
++ err = outarg.result;
++ else if (outarg.flags & FUSE_IOCTL_RETRY)
++ err = -EIO;
++ }
+ return err;
+ }
+
--- /dev/null
+From 0c4bcfdecb1ac0967619ee7ff44871d93c08c909 Mon Sep 17 00:00:00 2001
+From: Miklos Szeredi <mszeredi@redhat.com>
+Date: Mon, 7 Mar 2022 16:30:44 +0100
+Subject: fuse: fix pipe buffer lifetime for direct_io
+
+From: Miklos Szeredi <mszeredi@redhat.com>
+
+commit 0c4bcfdecb1ac0967619ee7ff44871d93c08c909 upstream.
+
+In FOPEN_DIRECT_IO mode, fuse_file_write_iter() calls
+fuse_direct_write_iter(), which normally calls fuse_direct_io(), which then
+imports the write buffer with fuse_get_user_pages(), which uses
+iov_iter_get_pages() to grab references to userspace pages instead of
+actually copying memory.
+
+On the filesystem device side, these pages can then either be read to
+userspace (via fuse_dev_read()), or splice()d over into a pipe using
+fuse_dev_splice_read() as pipe buffers with &nosteal_pipe_buf_ops.
+
+This is wrong because after fuse_dev_do_read() unlocks the FUSE request,
+the userspace filesystem can mark the request as completed, causing write()
+to return. At that point, the userspace filesystem should no longer have
+access to the pipe buffer.
+
+Fix by copying pages coming from the user address space to new pipe
+buffers.
+
+Reported-by: Jann Horn <jannh@google.com>
+Fixes: c3021629a0d8 ("fuse: support splice() reading from fuse device")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/fuse/dev.c | 12 +++++++++++-
+ fs/fuse/file.c | 1 +
+ fs/fuse/fuse_i.h | 1 +
+ 3 files changed, 13 insertions(+), 1 deletion(-)
+
+--- a/fs/fuse/dev.c
++++ b/fs/fuse/dev.c
+@@ -941,7 +941,17 @@ static int fuse_copy_page(struct fuse_co
+
+ while (count) {
+ if (cs->write && cs->pipebufs && page) {
+- return fuse_ref_page(cs, page, offset, count);
++ /*
++ * Can't control lifetime of pipe buffers, so always
++ * copy user pages.
++ */
++ if (cs->req->args->user_pages) {
++ err = fuse_copy_fill(cs);
++ if (err)
++ return err;
++ } else {
++ return fuse_ref_page(cs, page, offset, count);
++ }
+ } else if (!cs->len) {
+ if (cs->move_pages && page &&
+ offset == 0 && count == PAGE_SIZE) {
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -1413,6 +1413,7 @@ static int fuse_get_user_pages(struct fu
+ (PAGE_SIZE - ret) & (PAGE_SIZE - 1);
+ }
+
++ ap->args.user_pages = true;
+ if (write)
+ ap->args.in_pages = true;
+ else
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -256,6 +256,7 @@ struct fuse_args {
+ bool nocreds:1;
+ bool in_pages:1;
+ bool out_pages:1;
++ bool user_pages:1;
+ bool out_argvar:1;
+ bool page_zeroing:1;
+ bool page_replace:1;
--- /dev/null
+From f0d2f15362f02444c5d7ffd5a5eb03e4aa54b685 Mon Sep 17 00:00:00 2001
+From: Rong Chen <rong.chen@amlogic.com>
+Date: Wed, 16 Feb 2022 20:42:39 +0800
+Subject: mmc: meson: Fix usage of meson_mmc_post_req()
+
+From: Rong Chen <rong.chen@amlogic.com>
+
+commit f0d2f15362f02444c5d7ffd5a5eb03e4aa54b685 upstream.
+
+Currently meson_mmc_post_req() is called in meson_mmc_request() right
+after meson_mmc_start_cmd(). This could lead to DMA unmapping before the request
+is actually finished.
+
+To fix, don't call meson_mmc_post_req() until meson_mmc_request_done().
+
+Signed-off-by: Rong Chen <rong.chen@amlogic.com>
+Reviewed-by: Kevin Hilman <khilman@baylibre.com>
+Fixes: 79ed05e329c3 ("mmc: meson-gx: add support for descriptor chain mode")
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/20220216124239.4007667-1-rong.chen@amlogic.com
+Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mmc/host/meson-gx-mmc.c | 15 ++++++++-------
+ 1 file changed, 8 insertions(+), 7 deletions(-)
+
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -173,6 +173,8 @@ struct meson_host {
+ int irq;
+
+ bool vqmmc_enabled;
++ bool needs_pre_post_req;
++
+ };
+
+ #define CMD_CFG_LENGTH_MASK GENMASK(8, 0)
+@@ -663,6 +665,8 @@ static void meson_mmc_request_done(struc
+ struct meson_host *host = mmc_priv(mmc);
+
+ host->cmd = NULL;
++ if (host->needs_pre_post_req)
++ meson_mmc_post_req(mmc, mrq, 0);
+ mmc_request_done(host->mmc, mrq);
+ }
+
+@@ -880,7 +884,7 @@ static int meson_mmc_validate_dram_acces
+ static void meson_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ {
+ struct meson_host *host = mmc_priv(mmc);
+- bool needs_pre_post_req = mrq->data &&
++ host->needs_pre_post_req = mrq->data &&
+ !(mrq->data->host_cookie & SD_EMMC_PRE_REQ_DONE);
+
+ /*
+@@ -896,22 +900,19 @@ static void meson_mmc_request(struct mmc
+ }
+ }
+
+- if (needs_pre_post_req) {
++ if (host->needs_pre_post_req) {
+ meson_mmc_get_transfer_mode(mmc, mrq);
+ if (!meson_mmc_desc_chain_mode(mrq->data))
+- needs_pre_post_req = false;
++ host->needs_pre_post_req = false;
+ }
+
+- if (needs_pre_post_req)
++ if (host->needs_pre_post_req)
+ meson_mmc_pre_req(mmc, mrq);
+
+ /* Stop execution */
+ writel(0, host->regs + SD_EMMC_START);
+
+ meson_mmc_start_cmd(mmc, mrq->sbc ?: mrq->cmd);
+-
+- if (needs_pre_post_req)
+- meson_mmc_post_req(mmc, mrq, 0);
+ }
+
+ static void meson_mmc_read_resp(struct mmc_host *mmc, struct mmc_command *cmd)
--- /dev/null
+From 0bf476fc3624e3a72af4ba7340d430a91c18cd67 Mon Sep 17 00:00:00 2001
+From: Robert Hancock <robert.hancock@calian.com>
+Date: Thu, 3 Mar 2022 12:10:27 -0600
+Subject: net: macb: Fix lost RX packet wakeup race in NAPI receive
+
+From: Robert Hancock <robert.hancock@calian.com>
+
+commit 0bf476fc3624e3a72af4ba7340d430a91c18cd67 upstream.
+
+There is an oddity in the way the RSR register flags propagate to the
+ISR register (and the actual interrupt output) on this hardware: it
+appears that RSR register bits only result in ISR being asserted if the
+interrupt was actually enabled at the time, so enabling interrupts with
+RSR bits already set doesn't trigger an interrupt to be raised. There
+was already a partial fix for this race in the macb_poll function where
+it checked for RSR bits being set and re-triggered NAPI receive.
+However, there was still a race window between checking RSR and
+actually enabling interrupts, where a lost wakeup could happen. It's
+necessary to check again after enabling interrupts to see if RSR was set
+just prior to the interrupt being enabled, and re-trigger receive in that
+case.
+
+This issue was noticed in a point-to-point UDP request-response protocol
+which periodically saw timeouts or abnormally high response times due to
+received packets not being processed in a timely fashion. In many
+applications, more packets arriving, including TCP retransmissions, would
+cause the original packet to be processed, thus masking the issue.
+
+Fixes: 02f7a34f34e3 ("net: macb: Re-enable RX interrupt only when RX is done")
+Cc: stable@vger.kernel.org
+Co-developed-by: Scott McNutt <scott.mcnutt@siriusxm.com>
+Signed-off-by: Scott McNutt <scott.mcnutt@siriusxm.com>
+Signed-off-by: Robert Hancock <robert.hancock@calian.com>
+Tested-by: Claudiu Beznea <claudiu.beznea@microchip.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c | 25 ++++++++++++++++++++++++-
+ 1 file changed, 24 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1614,7 +1614,14 @@ static int macb_poll(struct napi_struct
+ if (work_done < budget) {
+ napi_complete_done(napi, work_done);
+
+- /* Packets received while interrupts were disabled */
++ /* RSR bits only seem to propagate to raise interrupts when
++ * interrupts are enabled at the time, so if bits are already
++ * set due to packets received while interrupts were disabled,
++ * they will not cause another interrupt to be generated when
++ * interrupts are re-enabled.
++ * Check for this case here. This has been seen to happen
++ * around 30% of the time under heavy network load.
++ */
+ status = macb_readl(bp, RSR);
+ if (status) {
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+@@ -1622,6 +1629,22 @@ static int macb_poll(struct napi_struct
+ napi_reschedule(napi);
+ } else {
+ queue_writel(queue, IER, bp->rx_intr_mask);
++
++ /* In rare cases, packets could have been received in
++ * the window between the check above and re-enabling
++ * interrupts. Therefore, a double-check is required
++ * to avoid losing a wakeup. This can potentially race
++ * with the interrupt handler doing the same actions
++ * if an interrupt is raised just after enabling them,
++ * but this should be harmless.
++ */
++ status = macb_readl(bp, RSR);
++ if (unlikely(status)) {
++ queue_writel(queue, IDR, bp->rx_intr_mask);
++ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
++ queue_writel(queue, ISR, MACB_BIT(RCOMP));
++ napi_schedule(napi);
++ }
+ }
+ }
+
--- /dev/null
+From c80ee64a8020ef1a6a92109798080786829b8994 Mon Sep 17 00:00:00 2001
+From: Jisheng Zhang <jszhang@kernel.org>
+Date: Fri, 11 Feb 2022 00:49:43 +0800
+Subject: riscv: alternative only works on !XIP_KERNEL
+
+From: Jisheng Zhang <jszhang@kernel.org>
+
+commit c80ee64a8020ef1a6a92109798080786829b8994 upstream.
+
+The alternative mechanism needs runtime code patching, so it cannot work
+with XIP_KERNEL. And the errata workarounds are implemented via the
+alternative mechanism. So add a !XIP_KERNEL dependency for the alternative
+mechanism and the errata.
+
+Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
+Fixes: 44c922572952 ("RISC-V: enable XIP")
+Cc: stable@vger.kernel.org
+Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/riscv/Kconfig.erratas | 1 +
+ arch/riscv/Kconfig.socs | 4 ++--
+ 2 files changed, 3 insertions(+), 2 deletions(-)
+
+--- a/arch/riscv/Kconfig.erratas
++++ b/arch/riscv/Kconfig.erratas
+@@ -2,6 +2,7 @@ menu "CPU errata selection"
+
+ config RISCV_ERRATA_ALTERNATIVE
+ bool "RISC-V alternative scheme"
++ depends on !XIP_KERNEL
+ default y
+ help
+ This Kconfig allows the kernel to automatically patch the
+--- a/arch/riscv/Kconfig.socs
++++ b/arch/riscv/Kconfig.socs
+@@ -14,8 +14,8 @@ config SOC_SIFIVE
+ select CLK_SIFIVE
+ select CLK_SIFIVE_PRCI
+ select SIFIVE_PLIC
+- select RISCV_ERRATA_ALTERNATIVE
+- select ERRATA_SIFIVE
++ select RISCV_ERRATA_ALTERNATIVE if !XIP_KERNEL
++ select ERRATA_SIFIVE if !XIP_KERNEL
+ help
+ This enables support for SiFive SoC platform hardware.
+
--- /dev/null
+From 0966d385830de3470b7131db8e86c0c5bc9c52dc Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <kernel@esmil.dk>
+Date: Wed, 23 Feb 2022 20:12:57 +0100
+Subject: riscv: Fix auipc+jalr relocation range checks
+
+From: Emil Renner Berthing <kernel@esmil.dk>
+
+commit 0966d385830de3470b7131db8e86c0c5bc9c52dc upstream.
+
+RISC-V can do PC-relative jumps with a 32bit range using the following
+two instructions:
+
+ auipc t0, imm20 ; t0 = PC + imm20 * 2^12
+ jalr ra, t0, imm12 ; ra = PC + 4, PC = t0 + imm12
+
+Crucially both the 20bit immediate imm20 and the 12bit immediate imm12
+are treated as two's-complement signed values. For this reason the
+immediates are usually calculated like this:
+
+ imm20 = (offset + 0x800) >> 12
+ imm12 = offset & 0xfff
+
+...where offset is the signed offset from the auipc instruction. When
+the 11th bit of offset is 0 the addition of 0x800 doesn't change the top
+20 bits and imm12 is considered positive. When the 11th bit is 1 the carry
+of the addition by 0x800 means imm20 is one higher, but since imm12 is
+then considered negative the two's complement representation means it
+all cancels out nicely.
+
+However, this addition by 0x800 (2^11) means an offset greater than or
+equal to 2^31 - 2^11 would overflow, so imm20 would be considered negative,
+resulting in a backwards jump. Similarly the lower range of offset is also
+moved down by 2^11 and hence the true 32bit range is
+
+ [-2^31 - 2^11, 2^31 - 2^11)
+
+Signed-off-by: Emil Renner Berthing <kernel@esmil.dk>
+Fixes: e2c0cdfba7f6 ("RISC-V: User-facing API")
+Cc: stable@vger.kernel.org
+Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/riscv/kernel/module.c | 21 ++++++++++++++++-----
+ 1 file changed, 16 insertions(+), 5 deletions(-)
+
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -13,6 +13,19 @@
+ #include <linux/pgtable.h>
+ #include <asm/sections.h>
+
++/*
++ * The auipc+jalr instruction pair can reach any PC-relative offset
++ * in the range [-2^31 - 2^11, 2^31 - 2^11)
++ */
++static bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
++{
++#ifdef CONFIG_32BIT
++ return true;
++#else
++ return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
++#endif
++}
++
+ static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
+ {
+ if (v != (u32)v) {
+@@ -95,7 +108,7 @@ static int apply_r_riscv_pcrel_hi20_rela
+ ptrdiff_t offset = (void *)v - (void *)location;
+ s32 hi20;
+
+- if (offset != (s32)offset) {
++ if (!riscv_insn_valid_32bit_offset(offset)) {
+ pr_err(
+ "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+ me->name, (long long)v, location);
+@@ -197,10 +210,9 @@ static int apply_r_riscv_call_plt_rela(s
+ Elf_Addr v)
+ {
+ ptrdiff_t offset = (void *)v - (void *)location;
+- s32 fill_v = offset;
+ u32 hi20, lo12;
+
+- if (offset != fill_v) {
++ if (!riscv_insn_valid_32bit_offset(offset)) {
+ /* Only emit the plt entry if offset over 32-bit range */
+ if (IS_ENABLED(CONFIG_MODULE_SECTIONS)) {
+ offset = module_emit_plt_entry(me, v);
+@@ -224,10 +236,9 @@ static int apply_r_riscv_call_rela(struc
+ Elf_Addr v)
+ {
+ ptrdiff_t offset = (void *)v - (void *)location;
+- s32 fill_v = offset;
+ u32 hi20, lo12;
+
+- if (offset != fill_v) {
++ if (!riscv_insn_valid_32bit_offset(offset)) {
+ pr_err(
+ "%s: target %016llx can not be addressed by the 32-bit offset from PC = %p\n",
+ me->name, (long long)v, location);
 selftest-vm-fix-map_fixed_noreplace-test-failure.patch
 selftests-memfd-clean-up-mapping-in-mfd_fail_write.patch
 arm-spectre-bhb-provide-empty-stub-for-non-config.patch
+fuse-fix-fileattr-op-failure.patch
+fuse-fix-pipe-buffer-lifetime-for-direct_io.patch
+staging-rtl8723bs-fix-access-point-mode-deadlock.patch
+staging-gdm724x-fix-use-after-free-in-gdm_lte_rx.patch
+net-macb-fix-lost-rx-packet-wakeup-race-in-napi-receive.patch
+riscv-alternative-only-works-on-xip_kernel.patch
+mmc-meson-fix-usage-of-meson_mmc_post_req.patch
+riscv-fix-auipc-jalr-relocation-range-checks.patch
+tracing-osnoise-force-quiescent-states-while-tracing.patch
+tracing-osnoise-do-not-unregister-events-twice.patch
+arm64-dts-marvell-armada-37xx-remap-io-space-to-bus-address-0x0.patch
+arm64-ensure-execute-only-permissions-are-not-allowed-without-epan.patch
+arm64-kasan-fix-include-error-in-mte-functions.patch
+swiotlb-rework-fix-info-leak-with-dma_from_device.patch
--- /dev/null
+From fc7f750dc9d102c1ed7bbe4591f991e770c99033 Mon Sep 17 00:00:00 2001
+From: Dan Carpenter <dan.carpenter@oracle.com>
+Date: Mon, 28 Feb 2022 10:43:31 +0300
+Subject: staging: gdm724x: fix use after free in gdm_lte_rx()
+
+From: Dan Carpenter <dan.carpenter@oracle.com>
+
+commit fc7f750dc9d102c1ed7bbe4591f991e770c99033 upstream.
+
+The netif_rx_ni() function frees the skb so we can't dereference it to
+save the skb->len.
+
+Fixes: 61e121047645 ("staging: gdm7240: adding LTE USB driver")
+Cc: stable <stable@vger.kernel.org>
+Reported-by: kernel test robot <lkp@intel.com>
+Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
+Link: https://lore.kernel.org/r/20220228074331.GA13685@kili
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/staging/gdm724x/gdm_lte.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/staging/gdm724x/gdm_lte.c b/drivers/staging/gdm724x/gdm_lte.c
+index 493ed4821515..0d8d8fed283d 100644
+--- a/drivers/staging/gdm724x/gdm_lte.c
++++ b/drivers/staging/gdm724x/gdm_lte.c
+@@ -76,14 +76,15 @@ static void tx_complete(void *arg)
+
+ static int gdm_lte_rx(struct sk_buff *skb, struct nic *nic, int nic_type)
+ {
+- int ret;
++ int ret, len;
+
++ len = skb->len + ETH_HLEN;
+ ret = netif_rx_ni(skb);
+ if (ret == NET_RX_DROP) {
+ nic->stats.rx_dropped++;
+ } else {
+ nic->stats.rx_packets++;
+- nic->stats.rx_bytes += skb->len + ETH_HLEN;
++ nic->stats.rx_bytes += len;
+ }
+
+ return 0;
+--
+2.35.1
+
--- /dev/null
+From 8f4347081be32e67b0873827e0138ab0fdaaf450 Mon Sep 17 00:00:00 2001
+From: Hans de Goede <hdegoede@redhat.com>
+Date: Wed, 2 Mar 2022 11:16:36 +0100
+Subject: staging: rtl8723bs: Fix access-point mode deadlock
+
+From: Hans de Goede <hdegoede@redhat.com>
+
+commit 8f4347081be32e67b0873827e0138ab0fdaaf450 upstream.
+
+Commit 54659ca026e5 ("staging: rtl8723bs: remove possible deadlock when
+disconnect (v2)") split the locking of pxmitpriv->lock vs sleep_q/lock
+into 2 locks in an attempt to fix a lockdep-reported issue with the locking
+order of the sta_hash_lock vs pxmitpriv->lock.
+
+But in the end this turned out to not fully solve the sta_hash_lock issue
+so commit a7ac783c338b ("staging: rtl8723bs: remove a second possible
+deadlock") was added to fix this in another way.
+
+The original fix was kept as it was still seen as a good thing to have,
+but now it turns out that it creates a deadlock in access-point mode:
+
+[Feb20 23:47] ======================================================
+[ +0.074085] WARNING: possible circular locking dependency detected
+[ +0.074077] 5.16.0-1-amd64 #1 Tainted: G C E
+[ +0.064710] ------------------------------------------------------
+[ +0.074075] ksoftirqd/3/29 is trying to acquire lock:
+[ +0.060542] ffffb8b30062ab00 (&pxmitpriv->lock){+.-.}-{2:2}, at: rtw_xmit_classifier+0x8a/0x140 [r8723bs]
+[ +0.114921]
+ but task is already holding lock:
+[ +0.069908] ffffb8b3007ab704 (&psta->sleep_q.lock){+.-.}-{2:2}, at: wakeup_sta_to_xmit+0x3b/0x300 [r8723bs]
+[ +0.116976]
+ which lock already depends on the new lock.
+
+[ +0.098037]
+ the existing dependency chain (in reverse order) is:
+[ +0.089704]
+ -> #1 (&psta->sleep_q.lock){+.-.}-{2:2}:
+[ +0.077232] _raw_spin_lock_bh+0x34/0x40
+[ +0.053261] xmitframe_enqueue_for_sleeping_sta+0xc1/0x2f0 [r8723bs]
+[ +0.082572] rtw_xmit+0x58b/0x940 [r8723bs]
+[ +0.056528] _rtw_xmit_entry+0xba/0x350 [r8723bs]
+[ +0.062755] dev_hard_start_xmit+0xf1/0x320
+[ +0.056381] sch_direct_xmit+0x9e/0x360
+[ +0.052212] __dev_queue_xmit+0xce4/0x1080
+[ +0.055334] ip6_finish_output2+0x18f/0x6e0
+[ +0.056378] ndisc_send_skb+0x2c8/0x870
+[ +0.052209] ndisc_send_ns+0xd3/0x210
+[ +0.050130] addrconf_dad_work+0x3df/0x5a0
+[ +0.055338] process_one_work+0x274/0x5a0
+[ +0.054296] worker_thread+0x52/0x3b0
+[ +0.050124] kthread+0x16c/0x1a0
+[ +0.044925] ret_from_fork+0x1f/0x30
+[ +0.049092]
+ -> #0 (&pxmitpriv->lock){+.-.}-{2:2}:
+[ +0.074101] __lock_acquire+0x10f5/0x1d80
+[ +0.054298] lock_acquire+0xd7/0x300
+[ +0.049088] _raw_spin_lock_bh+0x34/0x40
+[ +0.053248] rtw_xmit_classifier+0x8a/0x140 [r8723bs]
+[ +0.066949] rtw_xmitframe_enqueue+0xa/0x20 [r8723bs]
+[ +0.066946] rtl8723bs_hal_xmitframe_enqueue+0x14/0x50 [r8723bs]
+[ +0.078386] wakeup_sta_to_xmit+0xa6/0x300 [r8723bs]
+[ +0.065903] rtw_recv_entry+0xe36/0x1160 [r8723bs]
+[ +0.063809] rtl8723bs_recv_tasklet+0x349/0x6c0 [r8723bs]
+[ +0.071093] tasklet_action_common.constprop.0+0xe5/0x110
+[ +0.070966] __do_softirq+0x16f/0x50a
+[ +0.050134] __irq_exit_rcu+0xeb/0x140
+[ +0.051172] irq_exit_rcu+0xa/0x20
+[ +0.047006] common_interrupt+0xb8/0xd0
+[ +0.052214] asm_common_interrupt+0x1e/0x40
+[ +0.056381] finish_task_switch.isra.0+0x100/0x3a0
+[ +0.063670] __schedule+0x3ad/0xd20
+[ +0.048047] schedule+0x4e/0xc0
+[ +0.043880] smpboot_thread_fn+0xc4/0x220
+[ +0.054298] kthread+0x16c/0x1a0
+[ +0.044922] ret_from_fork+0x1f/0x30
+[ +0.049088]
+ other info that might help us debug this:
+
+[ +0.095950] Possible unsafe locking scenario:
+
+[ +0.070952] CPU0 CPU1
+[ +0.054282] ---- ----
+[ +0.054285] lock(&psta->sleep_q.lock);
+[ +0.047004] lock(&pxmitpriv->lock);
+[ +0.074082] lock(&psta->sleep_q.lock);
+[ +0.077209] lock(&pxmitpriv->lock);
+[ +0.043873]
+ *** DEADLOCK ***
+
+[ +0.070950] 1 lock held by ksoftirqd/3/29:
+[ +0.049082] #0: ffffb8b3007ab704 (&psta->sleep_q.lock){+.-.}-{2:2}, at: wakeup_sta_to_xmit+0x3b/0x300 [r8723bs]
+
+Analysis shows that in hindsight the splitting of the lock was not
+a good idea, so revert this to fix the access-point mode deadlock.
+
+Note this is a straightforward revert done with git revert; the commented
+out "/* spin_lock_bh(&psta_bmc->sleep_q.lock); */" lines were part of the
+code before the reverted changes.
+
+Fixes: 54659ca026e5 ("staging: rtl8723bs: remove possible deadlock when disconnect (v2)")
+Cc: stable <stable@vger.kernel.org>
+Cc: Fabio Aiuto <fabioaiuto83@gmail.com>
+Signed-off-by: Hans de Goede <hdegoede@redhat.com>
+BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=215542
+Link: https://lore.kernel.org/r/20220302101637.26542-1-hdegoede@redhat.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/staging/rtl8723bs/core/rtw_mlme_ext.c | 7 +++++--
+ drivers/staging/rtl8723bs/core/rtw_recv.c | 10 +++++++---
+ drivers/staging/rtl8723bs/core/rtw_sta_mgt.c | 22 ++++++++++------------
+ drivers/staging/rtl8723bs/core/rtw_xmit.c | 16 +++++++++-------
+ drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c | 2 ++
+ 5 files changed, 33 insertions(+), 24 deletions(-)
+
+--- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
++++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c
+@@ -5907,6 +5907,7 @@ u8 chk_bmc_sleepq_hdl(struct adapter *pa
+ struct sta_info *psta_bmc;
+ struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
+ struct xmit_frame *pxmitframe = NULL;
++ struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+ struct sta_priv *pstapriv = &padapter->stapriv;
+
+ /* for BC/MC Frames */
+@@ -5917,7 +5918,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *pa
+ if ((pstapriv->tim_bitmap&BIT(0)) && (psta_bmc->sleepq_len > 0)) {
+ msleep(10);/* 10ms, ATIM(HIQ) Windows */
+
+- spin_lock_bh(&psta_bmc->sleep_q.lock);
++ /* spin_lock_bh(&psta_bmc->sleep_q.lock); */
++ spin_lock_bh(&pxmitpriv->lock);
+
+ xmitframe_phead = get_list_head(&psta_bmc->sleep_q);
+ list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
+@@ -5940,7 +5942,8 @@ u8 chk_bmc_sleepq_hdl(struct adapter *pa
+ rtw_hal_xmitframe_enqueue(padapter, pxmitframe);
+ }
+
+- spin_unlock_bh(&psta_bmc->sleep_q.lock);
++ /* spin_unlock_bh(&psta_bmc->sleep_q.lock); */
++ spin_unlock_bh(&pxmitpriv->lock);
+
+ /* check hi queue and bmc_sleepq */
+ rtw_chk_hi_queue_cmd(padapter);
+--- a/drivers/staging/rtl8723bs/core/rtw_recv.c
++++ b/drivers/staging/rtl8723bs/core/rtw_recv.c
+@@ -957,8 +957,10 @@ static signed int validate_recv_ctrl_fra
+ if ((psta->state&WIFI_SLEEP_STATE) && (pstapriv->sta_dz_bitmap&BIT(psta->aid))) {
+ struct list_head *xmitframe_plist, *xmitframe_phead;
+ struct xmit_frame *pxmitframe = NULL;
++ struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+
+- spin_lock_bh(&psta->sleep_q.lock);
++ /* spin_lock_bh(&psta->sleep_q.lock); */
++ spin_lock_bh(&pxmitpriv->lock);
+
+ xmitframe_phead = get_list_head(&psta->sleep_q);
+ xmitframe_plist = get_next(xmitframe_phead);
+@@ -989,10 +991,12 @@ static signed int validate_recv_ctrl_fra
+ update_beacon(padapter, WLAN_EID_TIM, NULL, true);
+ }
+
+- spin_unlock_bh(&psta->sleep_q.lock);
++ /* spin_unlock_bh(&psta->sleep_q.lock); */
++ spin_unlock_bh(&pxmitpriv->lock);
+
+ } else {
+- spin_unlock_bh(&psta->sleep_q.lock);
++ /* spin_unlock_bh(&psta->sleep_q.lock); */
++ spin_unlock_bh(&pxmitpriv->lock);
+
+ if (pstapriv->tim_bitmap&BIT(psta->aid)) {
+ if (psta->sleepq_len == 0) {
+--- a/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
++++ b/drivers/staging/rtl8723bs/core/rtw_sta_mgt.c
+@@ -293,48 +293,46 @@ u32 rtw_free_stainfo(struct adapter *pad
+
+ /* list_del_init(&psta->wakeup_list); */
+
+- spin_lock_bh(&psta->sleep_q.lock);
++ spin_lock_bh(&pxmitpriv->lock);
++
+ rtw_free_xmitframe_queue(pxmitpriv, &psta->sleep_q);
+ psta->sleepq_len = 0;
+- spin_unlock_bh(&psta->sleep_q.lock);
+-
+- spin_lock_bh(&pxmitpriv->lock);
+
+ /* vo */
+- spin_lock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
++ /* spin_lock_bh(&(pxmitpriv->vo_pending.lock)); */
+ rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vo_q.sta_pending);
+ list_del_init(&(pstaxmitpriv->vo_q.tx_pending));
+ phwxmit = pxmitpriv->hwxmits;
+ phwxmit->accnt -= pstaxmitpriv->vo_q.qcnt;
+ pstaxmitpriv->vo_q.qcnt = 0;
+- spin_unlock_bh(&pstaxmitpriv->vo_q.sta_pending.lock);
++ /* spin_unlock_bh(&(pxmitpriv->vo_pending.lock)); */
+
+ /* vi */
+- spin_lock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
++ /* spin_lock_bh(&(pxmitpriv->vi_pending.lock)); */
+ rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->vi_q.sta_pending);
+ list_del_init(&(pstaxmitpriv->vi_q.tx_pending));
+ phwxmit = pxmitpriv->hwxmits+1;
+ phwxmit->accnt -= pstaxmitpriv->vi_q.qcnt;
+ pstaxmitpriv->vi_q.qcnt = 0;
+- spin_unlock_bh(&pstaxmitpriv->vi_q.sta_pending.lock);
++ /* spin_unlock_bh(&(pxmitpriv->vi_pending.lock)); */
+
+ /* be */
+- spin_lock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
++ /* spin_lock_bh(&(pxmitpriv->be_pending.lock)); */
+ rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->be_q.sta_pending);
+ list_del_init(&(pstaxmitpriv->be_q.tx_pending));
+ phwxmit = pxmitpriv->hwxmits+2;
+ phwxmit->accnt -= pstaxmitpriv->be_q.qcnt;
+ pstaxmitpriv->be_q.qcnt = 0;
+- spin_unlock_bh(&pstaxmitpriv->be_q.sta_pending.lock);
++ /* spin_unlock_bh(&(pxmitpriv->be_pending.lock)); */
+
+ /* bk */
+- spin_lock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
++ /* spin_lock_bh(&(pxmitpriv->bk_pending.lock)); */
+ rtw_free_xmitframe_queue(pxmitpriv, &pstaxmitpriv->bk_q.sta_pending);
+ list_del_init(&(pstaxmitpriv->bk_q.tx_pending));
+ phwxmit = pxmitpriv->hwxmits+3;
+ phwxmit->accnt -= pstaxmitpriv->bk_q.qcnt;
+ pstaxmitpriv->bk_q.qcnt = 0;
+- spin_unlock_bh(&pstaxmitpriv->bk_q.sta_pending.lock);
++ /* spin_unlock_bh(&(pxmitpriv->bk_pending.lock)); */
+
+ spin_unlock_bh(&pxmitpriv->lock);
+
+--- a/drivers/staging/rtl8723bs/core/rtw_xmit.c
++++ b/drivers/staging/rtl8723bs/core/rtw_xmit.c
+@@ -1734,12 +1734,15 @@ void rtw_free_xmitframe_queue(struct xmi
+ struct list_head *plist, *phead, *tmp;
+ struct xmit_frame *pxmitframe;
+
++ spin_lock_bh(&pframequeue->lock);
++
+ phead = get_list_head(pframequeue);
+ list_for_each_safe(plist, tmp, phead) {
+ pxmitframe = list_entry(plist, struct xmit_frame, list);
+
+ rtw_free_xmitframe(pxmitpriv, pxmitframe);
+ }
++ spin_unlock_bh(&pframequeue->lock);
+ }
+
+ s32 rtw_xmitframe_enqueue(struct adapter *padapter, struct xmit_frame *pxmitframe)
+@@ -1794,7 +1797,6 @@ s32 rtw_xmit_classifier(struct adapter *
+ struct sta_info *psta;
+ struct tx_servq *ptxservq;
+ struct pkt_attrib *pattrib = &pxmitframe->attrib;
+- struct xmit_priv *xmit_priv = &padapter->xmitpriv;
+ struct hw_xmit *phwxmits = padapter->xmitpriv.hwxmits;
+ signed int res = _SUCCESS;
+
+@@ -1812,14 +1814,12 @@ s32 rtw_xmit_classifier(struct adapter *
+
+ ptxservq = rtw_get_sta_pending(padapter, psta, pattrib->priority, (u8 *)(&ac_index));
+
+- spin_lock_bh(&xmit_priv->lock);
+ if (list_empty(&ptxservq->tx_pending))
+ list_add_tail(&ptxservq->tx_pending, get_list_head(phwxmits[ac_index].sta_queue));
+
+ list_add_tail(&pxmitframe->list, get_list_head(&ptxservq->sta_pending));
+ ptxservq->qcnt++;
+ phwxmits[ac_index].accnt++;
+- spin_unlock_bh(&xmit_priv->lock);
+
+ exit:
+
+@@ -2202,10 +2202,11 @@ void wakeup_sta_to_xmit(struct adapter *
+ struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
+ struct xmit_frame *pxmitframe = NULL;
+ struct sta_priv *pstapriv = &padapter->stapriv;
++ struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+
+ psta_bmc = rtw_get_bcmc_stainfo(padapter);
+
+- spin_lock_bh(&psta->sleep_q.lock);
++ spin_lock_bh(&pxmitpriv->lock);
+
+ xmitframe_phead = get_list_head(&psta->sleep_q);
+ list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
+@@ -2306,7 +2307,7 @@ void wakeup_sta_to_xmit(struct adapter *
+
+ _exit:
+
+- spin_unlock_bh(&psta->sleep_q.lock);
++ spin_unlock_bh(&pxmitpriv->lock);
+
+ if (update_mask)
+ update_beacon(padapter, WLAN_EID_TIM, NULL, true);
+@@ -2318,8 +2319,9 @@ void xmit_delivery_enabled_frames(struct
+ struct list_head *xmitframe_plist, *xmitframe_phead, *tmp;
+ struct xmit_frame *pxmitframe = NULL;
+ struct sta_priv *pstapriv = &padapter->stapriv;
++ struct xmit_priv *pxmitpriv = &padapter->xmitpriv;
+
+- spin_lock_bh(&psta->sleep_q.lock);
++ spin_lock_bh(&pxmitpriv->lock);
+
+ xmitframe_phead = get_list_head(&psta->sleep_q);
+ list_for_each_safe(xmitframe_plist, tmp, xmitframe_phead) {
+@@ -2372,7 +2374,7 @@ void xmit_delivery_enabled_frames(struct
+ }
+ }
+
+- spin_unlock_bh(&psta->sleep_q.lock);
++ spin_unlock_bh(&pxmitpriv->lock);
+ }
+
+ void enqueue_pending_xmitbuf(struct xmit_priv *pxmitpriv, struct xmit_buf *pxmitbuf)
+--- a/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
++++ b/drivers/staging/rtl8723bs/hal/rtl8723bs_xmit.c
+@@ -507,7 +507,9 @@ s32 rtl8723bs_hal_xmit(
+ rtw_issue_addbareq_cmd(padapter, pxmitframe);
+ }
+
++ spin_lock_bh(&pxmitpriv->lock);
+ err = rtw_xmitframe_enqueue(padapter, pxmitframe);
++ spin_unlock_bh(&pxmitpriv->lock);
+ if (err != _SUCCESS) {
+ rtw_free_xmitframe(pxmitpriv, pxmitframe);
+
--- /dev/null
+From aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 Mon Sep 17 00:00:00 2001
+From: Halil Pasic <pasic@linux.ibm.com>
+Date: Sat, 5 Mar 2022 18:07:14 +0100
+Subject: swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
+
+From: Halil Pasic <pasic@linux.ibm.com>
+
+commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 upstream.
+
+Unfortunately, we ended up merging an old version of the patch "fix info
+leak with DMA_FROM_DEVICE" instead of merging the latest one. After I
+pointed out the mix-up and asked for guidance, Christoph (the swiotlb
+maintainer) asked me to create an incremental fix. So here we go.
+
+The main differences between what we got and what was agreed are:
+* swiotlb_sync_single_for_device is also required to do an extra bounce
+* We decided not to introduce DMA_ATTR_OVERWRITE until we have exploiters
+* The implementation of DMA_ATTR_OVERWRITE is flawed: DMA_ATTR_OVERWRITE
+ must take precedence over DMA_ATTR_SKIP_CPU_SYNC
+
+Thus this patch removes DMA_ATTR_OVERWRITE, and makes
+swiotlb_sync_single_for_device() bounce unconditionally (that is, also
+when dir == DMA_FROM_DEVICE) in order to avoid synchronising back stale
+data from the swiotlb buffer.
+
+Note that if the size used with the dma_sync_* API is less than the
+size used with dma_[un]map_*, under certain circumstances we may still
+end up with swiotlb not being transparent. In that sense, this is not a
+perfect fix either.
+
+To make this bulletproof, we would have to bounce the entire
+mapping/bounce buffer. For that we would have to figure out the starting
+address, and the size of the mapping in
+swiotlb_sync_single_for_device(). While this does seem possible, there
+seems to be no firm consensus on how things are supposed to work.
+
+Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
+Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
+Cc: stable@vger.kernel.org
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/core-api/dma-attributes.rst | 8 --------
+ include/linux/dma-mapping.h | 8 --------
+ kernel/dma/swiotlb.c | 23 +++++++++++++++--------
+ 3 files changed, 15 insertions(+), 24 deletions(-)
+
+--- a/Documentation/core-api/dma-attributes.rst
++++ b/Documentation/core-api/dma-attributes.rst
+@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileg
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
+-
+-DMA_ATTR_OVERWRITE
+-------------------
+-
+-This is a hint to the DMA-mapping subsystem that the device is expected to
+-overwrite the entire mapped size, thus the caller does not require any of the
+-previous buffer contents to be preserved. This allows bounce-buffering
+-implementations to optimise DMA_FROM_DEVICE transfers.
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -62,14 +62,6 @@
+ #define DMA_ATTR_PRIVILEGED (1UL << 9)
+
+ /*
+- * This is a hint to the DMA-mapping subsystem that the device is expected
+- * to overwrite the entire mapped size, thus the caller does not require any
+- * of the previous buffer contents to be preserved. This allows
+- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
+- */
+-#define DMA_ATTR_OVERWRITE (1UL << 10)
+-
+-/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
+ * be given to a device to use as a DMA source or target. It is specific to a
+ * given device and there may be a translation between the CPU physical address
+--- a/kernel/dma/swiotlb.c
++++ b/kernel/dma/swiotlb.c
+@@ -581,10 +581,14 @@ phys_addr_t swiotlb_tbl_map_single(struc
+ for (i = 0; i < nr_slots(alloc_size + offset); i++)
+ mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
+ tlb_addr = slot_addr(mem->start, index) + offset;
+- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+- (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
+- dir == DMA_BIDIRECTIONAL))
+- swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
++ /*
++ * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
++ * to the tlb buffer, if we knew for sure the device will
++ * overwrite the entire current content. But we don't. Thus
++ * unconditional bounce may prevent leaking swiotlb content (i.e.
++ * kernel memory) to user-space.
++ */
++ swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+ return tlb_addr;
+ }
+
+@@ -651,10 +655,13 @@ void swiotlb_tbl_unmap_single(struct dev
+ void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+ size_t size, enum dma_data_direction dir)
+ {
+- if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
+- swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+- else
+- BUG_ON(dir != DMA_FROM_DEVICE);
++ /*
++ * Unconditional bounce is necessary to avoid corruption on
++ * sync_*_for_cpu or dma_unmap_* when the device didn't overwrite
++ * the whole length of the bounce buffer.
++ */
++ swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
++ BUG_ON(!valid_dma_direction(dir));
+ }
+
+ void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
--- /dev/null
+From f0cfe17bcc1dd2f0872966b554a148e888833ee9 Mon Sep 17 00:00:00 2001
+From: Daniel Bristot de Oliveira <bristot@kernel.org>
+Date: Wed, 9 Mar 2022 14:13:02 +0100
+Subject: tracing/osnoise: Do not unregister events twice
+
+From: Daniel Bristot de Oliveira <bristot@kernel.org>
+
+commit f0cfe17bcc1dd2f0872966b554a148e888833ee9 upstream.
+
+Nicolas reported that using:
+
+ # trace-cmd record -e all -M 10 -p osnoise --poll
+
+Resulted in the following kernel warning:
+
+ ------------[ cut here ]------------
+ WARNING: CPU: 0 PID: 1217 at kernel/tracepoint.c:404 tracepoint_probe_unregister+0x280/0x370
+ [...]
+ CPU: 0 PID: 1217 Comm: trace-cmd Not tainted 5.17.0-rc6-next-20220307-nico+ #19
+ RIP: 0010:tracepoint_probe_unregister+0x280/0x370
+ [...]
+ CR2: 00007ff919b29497 CR3: 0000000109da4005 CR4: 0000000000170ef0
+ Call Trace:
+ <TASK>
+ osnoise_workload_stop+0x36/0x90
+ tracing_set_tracer+0x108/0x260
+ tracing_set_trace_write+0x94/0xd0
+ ? __check_object_size.part.0+0x10a/0x150
+ ? selinux_file_permission+0x104/0x150
+ vfs_write+0xb5/0x290
+ ksys_write+0x5f/0xe0
+ do_syscall_64+0x3b/0x90
+ entry_SYSCALL_64_after_hwframe+0x44/0xae
+ RIP: 0033:0x7ff919a18127
+ [...]
+ ---[ end trace 0000000000000000 ]---
+
+The warning complains about an attempt to unregister an
+unregistered tracepoint.
+
+This happens with trace-cmd because it first stops tracing and then
+switches the tracer to nop, which is equivalent to:
+
+ # cd /sys/kernel/tracing/
+ # echo osnoise > current_tracer
+ # echo 0 > tracing_on
+ # echo nop > current_tracer
+
+The osnoise tracer stops the workload when no trace instance
+is actually collecting data. This can be caused both by
+disabling tracing or disabling the tracer itself.
+
+To avoid unregistering events twice, use the existing
+trace_osnoise_callback_enabled variable to check if the events
+(and the workload) are actually active before trying to
+deactivate them.
+
+Link: https://lore.kernel.org/all/c898d1911f7f9303b7e14726e7cc9678fbfb4a0e.camel@redhat.com/
+Link: https://lkml.kernel.org/r/938765e17d5a781c2df429a98f0b2e7cc317b022.1646823913.git.bristot@kernel.org
+
+Cc: stable@vger.kernel.org
+Cc: Marcelo Tosatti <mtosatti@redhat.com>
+Fixes: 2fac8d6486d5 ("tracing/osnoise: Allow multiple instances of the same tracer")
+Reported-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
+Signed-off-by: Daniel Bristot de Oliveira <bristot@kernel.org>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_osnoise.c | 11 +++++++++++
+ 1 file changed, 11 insertions(+)
+
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2222,6 +2222,17 @@ static void osnoise_workload_stop(void)
+ if (osnoise_has_registered_instances())
+ return;
+
++ /*
++ * If callbacks were already disabled in a previous stop
++ * call, there is no need to disable them again.
++ *
++ * For instance, this happens when tracing is stopped via:
++ * echo 0 > tracing_on
++ * echo nop > current_tracer.
++ */
++ if (!trace_osnoise_callback_enabled)
++ return;
++
+ trace_osnoise_callback_enabled = false;
+ /*
+ * Make sure that ftrace_nmi_enter/exit() see
--- /dev/null
+From caf4c86bf136845982c5103b2661751b40c474c0 Mon Sep 17 00:00:00 2001
+From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
+Date: Mon, 7 Mar 2022 19:07:40 +0100
+Subject: tracing/osnoise: Force quiescent states while tracing
+
+From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
+
+commit caf4c86bf136845982c5103b2661751b40c474c0 upstream.
+
+At the moment, running osnoise on a nohz_full CPU or at uncontested FIFO
+priority on a PREEMPT_RCU kernel might have the side effect of
+extending grace periods too much. This will entice RCU to force a
+context switch on the wayward CPU to end the grace period, all while
+introducing unwarranted noise into the tracer. This behaviour is
+unavoidable as overly extending grace periods might exhaust the system's
+memory.
+
+This same exact problem is what extended quiescent states (EQS) were
+created for, conversely, rcu_momentary_dyntick_idle() emulates them by
+performing a zero duration EQS. So let's make use of it.
+
+In the common case rcu_momentary_dyntick_idle() is fairly inexpensive:
+atomically incrementing a local per-CPU counter and doing a store. So it
+shouldn't affect osnoise's measurements (which have a 1us granularity),
+so we'll call it unconditionally.
+
+The uncommon cases involve calling rcu_momentary_dyntick_idle() after
+the osnoise process has:
+
+ - Received an expedited quiescent state IPI with preemption disabled or
+   during an RCU critical section (activates the rdp->cpu_no_qs.b.exp
+   code-path).
+
+ - Been preempted within an RCU critical section and had the
+   subsequent outermost rcu_read_unlock() called with interrupts
+   disabled (the t->rcu_read_unlock_special.b.blocked code-path).
+
+Neither of those is possible at the moment, and they are unlikely to be
+in the future given osnoise's loop design. On top of this, the noise
+generated by the situations described above is unavoidable, and if not
+exposed by rcu_momentary_dyntick_idle() will be eventually seen in
+subsequent rcu_read_unlock() calls or schedule operations.
+
+Link: https://lkml.kernel.org/r/20220307180740.577607-1-nsaenzju@redhat.com
+
+Cc: stable@vger.kernel.org
+Fixes: bce29ac9ce0b ("trace: Add osnoise tracer")
+Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
+Acked-by: Paul E. McKenney <paulmck@kernel.org>
+Acked-by: Daniel Bristot de Oliveira <bristot@kernel.org>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_osnoise.c | 20 ++++++++++++++++++++
+ 1 file changed, 20 insertions(+)
+
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1388,6 +1388,26 @@ static int run_osnoise(void)
+ }
+
+ /*
++ * In some cases, notably when running on a nohz_full CPU with
++ * a stopped tick, PREEMPT_RCU has no way to account for QSs.
++ * This will eventually cause unwarranted noise as PREEMPT_RCU
++ * will force preemption as the means of ending the current
++ * grace period. We avoid this problem by calling
++ * rcu_momentary_dyntick_idle(), which performs a zero duration
++ * EQS allowing PREEMPT_RCU to end the current grace period.
++ * This call shouldn't be wrapped inside an RCU critical
++ * section.
++ *
++ * Note that in non PREEMPT_RCU kernels QSs are handled through
++ * cond_resched()
++ */
++ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++ local_irq_disable();
++ rcu_momentary_dyntick_idle();
++ local_irq_enable();
++ }
++
++ /*
+ * For the non-preemptive kernel config: let threads runs, if
+ * they so wish.
+ */