--- /dev/null
+From 81235ae0c846e1fb46a2c6fe9283fe2b2b24f7dc Mon Sep 17 00:00:00 2001
+From: Mark Rutland <mark.rutland@arm.com>
+Date: Wed, 6 Nov 2024 16:42:20 +0000
+Subject: arm64: Kconfig: Make SME depend on BROKEN for now
+
+From: Mark Rutland <mark.rutland@arm.com>
+
+commit 81235ae0c846e1fb46a2c6fe9283fe2b2b24f7dc upstream.
+
+Although support for SME was merged in v5.19, we've since uncovered a
+number of issues with the implementation, including issues which might
+corrupt the FPSIMD/SVE/SME state of arbitrary tasks. While there are
+patches to address some of these issues, ongoing review has highlighted
+additional functional problems, and more time is necessary to analyse
+and fix these.
+
+For now, mark SME as BROKEN in the hope that we can fix things properly
+in the near future. As SME is an OPTIONAL part of ARMv9.2+, and there is
+very little extant hardware, this should not adversely affect the vast
+majority of users.
+
+Signed-off-by: Mark Rutland <mark.rutland@arm.com>
+Cc: Ard Biesheuvel <ardb@kernel.org>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Marc Zyngier <maz@kernel.org>
+Cc: Mark Brown <broonie@kernel.org>
+Cc: Will Deacon <will@kernel.org>
+Cc: stable@vger.kernel.org # 5.19
+Acked-by: Catalin Marinas <catalin.marinas@arm.com>
+Link: https://lore.kernel.org/r/20241106164220.2789279-1-mark.rutland@arm.com
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -2173,6 +2173,7 @@ config ARM64_SME
+ bool "ARM Scalable Matrix Extension support"
+ default y
+ depends on ARM64_SVE
++ depends on BROKEN
+ help
+ The Scalable Matrix Extension (SME) is an extension to the AArch64
+ execution state which utilises a substantial subset of the SVE
--- /dev/null
+From 8c462d56487e3abdbf8a61cedfe7c795a54f4a78 Mon Sep 17 00:00:00 2001
+From: Mark Rutland <mark.rutland@arm.com>
+Date: Wed, 6 Nov 2024 16:04:48 +0000
+Subject: arm64: smccc: Remove broken support for SMCCCv1.3 SVE discard hint
+
+From: Mark Rutland <mark.rutland@arm.com>
+
+commit 8c462d56487e3abdbf8a61cedfe7c795a54f4a78 upstream.
+
+SMCCCv1.3 added a hint bit which callers can set in an SMCCC function ID
+(AKA "FID") to indicate that it is acceptable for the SMCCC
+implementation to discard SVE and/or SME state over a specific SMCCC
+call. The kernel support for using this hint is broken and SMCCC calls
+may clobber the SVE and/or SME state of arbitrary tasks, though FPSIMD
+state is unaffected.
+
+The kernel support is intended to use the hint when there is no SVE or
+SME state to save, and to do this it checks whether TIF_FOREIGN_FPSTATE
+is set or TIF_SVE is clear in assembly code:
+
+| ldr <flags>, [<current_task>, #TSK_TI_FLAGS]
+| tbnz <flags>, #TIF_FOREIGN_FPSTATE, 1f // Any live FP state?
+| tbnz <flags>, #TIF_SVE, 2f // Does that state include SVE?
+|
+| 1: orr <fid>, <fid>, ARM_SMCCC_1_3_SVE_HINT
+| 2:
+| << SMCCC call using FID >>
+
+This is not safe as-is:
+
+(1) SMCCC calls can be made in a preemptible context and preemption can
+ result in TIF_FOREIGN_FPSTATE being set or cleared at arbitrary
+ points in time. Thus checking for TIF_FOREIGN_FPSTATE provides no
+ guarantee.
+
+(2) TIF_FOREIGN_FPSTATE only indicates that the live FP/SVE/SME state in
+ the CPU does not belong to the current task, and does not indicate
+ that clobbering this state is acceptable.
+
+ When the live CPU state is clobbered it is necessary to update
+ fpsimd_last_state.st to ensure that a subsequent context switch will
+ reload FP/SVE/SME state from memory rather than consuming the
+ clobbered state. This and the SMCCC call itself must happen in a
+ critical section with preemption disabled to avoid races.
+
+(3) Live SVE/SME state can exist with TIF_SVE clear (e.g. with only
+ TIF_SME set), and checking TIF_SVE alone is insufficient.
+
+Remove the broken support for the SMCCCv1.3 SVE saving hint. This is
+effectively a revert of commits:
+
+* cfa7ff959a78 ("arm64: smccc: Support SMCCC v1.3 SVE register saving hint")
+* a7c3acca5380 ("arm64: smccc: Save lr before calling __arm_smccc_sve_check()")
+
+... leaving behind the ARM_SMCCC_VERSION_1_3 and ARM_SMCCC_1_3_SVE_HINT
+definitions, since these are simply definitions from the SMCCC
+specification, and the latter is used in KVM via ARM_SMCCC_CALL_HINTS.
+
+If we want to bring this back in future, we'll probably want to handle
+this logic in C where we can use all the usual FPSIMD/SVE/SME helper
+functions, and that'll likely require some rework of the SMCCC code
+and/or its callers.
+
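+As a rough illustration only (not part of this patch; the flag checks
+and their placement are assumptions), a C-side variant would make the
+decision and the call inside one non-preemptible section:
+
+| /* Hypothetical sketch: hint only when the current task owns no
+|  * SVE/SME state. Per point (2) above, a real version must also
+|  * invalidate fpsimd_last_state if live CPU state may be discarded.
+|  */
+| preempt_disable();
+| if (!test_thread_flag(TIF_SVE) && !test_thread_flag(TIF_SME))
+|         fid |= ARM_SMCCC_1_3_SVE_HINT;
+| << SMCCC call using FID, still non-preemptible >>
+| preempt_enable();
+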
+Fixes: cfa7ff959a78 ("arm64: smccc: Support SMCCC v1.3 SVE register saving hint")
+Signed-off-by: Mark Rutland <mark.rutland@arm.com>
+Cc: Ard Biesheuvel <ardb@kernel.org>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Marc Zyngier <maz@kernel.org>
+Cc: Mark Brown <broonie@kernel.org>
+Cc: Will Deacon <will@kernel.org>
+Cc: stable@vger.kernel.org
+Reviewed-by: Mark Brown <broonie@kernel.org>
+Link: https://lore.kernel.org/r/20241106160448.2712997-1-mark.rutland@arm.com
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/kernel/smccc-call.S | 35 +++--------------------------------
+ drivers/firmware/smccc/smccc.c | 4 ----
+ include/linux/arm-smccc.h | 32 +++-----------------------------
+ 3 files changed, 6 insertions(+), 65 deletions(-)
+
+--- a/arch/arm64/kernel/smccc-call.S
++++ b/arch/arm64/kernel/smccc-call.S
+@@ -7,48 +7,19 @@
+
+ #include <asm/asm-offsets.h>
+ #include <asm/assembler.h>
+-#include <asm/thread_info.h>
+-
+-/*
+- * If we have SMCCC v1.3 and (as is likely) no SVE state in
+- * the registers then set the SMCCC hint bit to say there's no
+- * need to preserve it. Do this by directly adjusting the SMCCC
+- * function value which is already stored in x0 ready to be called.
+- */
+-SYM_FUNC_START(__arm_smccc_sve_check)
+-
+- ldr_l x16, smccc_has_sve_hint
+- cbz x16, 2f
+-
+- get_current_task x16
+- ldr x16, [x16, #TSK_TI_FLAGS]
+- tbnz x16, #TIF_FOREIGN_FPSTATE, 1f // Any live FP state?
+- tbnz x16, #TIF_SVE, 2f // Does that state include SVE?
+-
+-1: orr x0, x0, ARM_SMCCC_1_3_SVE_HINT
+-
+-2: ret
+-SYM_FUNC_END(__arm_smccc_sve_check)
+-EXPORT_SYMBOL(__arm_smccc_sve_check)
+
+ .macro SMCCC instr
+- stp x29, x30, [sp, #-16]!
+- mov x29, sp
+-alternative_if ARM64_SVE
+- bl __arm_smccc_sve_check
+-alternative_else_nop_endif
+ \instr #0
+- ldr x4, [sp, #16]
++ ldr x4, [sp]
+ stp x0, x1, [x4, #ARM_SMCCC_RES_X0_OFFS]
+ stp x2, x3, [x4, #ARM_SMCCC_RES_X2_OFFS]
+- ldr x4, [sp, #24]
++ ldr x4, [sp, #8]
+ cbz x4, 1f /* no quirk structure */
+ ldr x9, [x4, #ARM_SMCCC_QUIRK_ID_OFFS]
+ cmp x9, #ARM_SMCCC_QUIRK_QCOM_A6
+ b.ne 1f
+ str x6, [x4, ARM_SMCCC_QUIRK_STATE_OFFS]
+-1: ldp x29, x30, [sp], #16
+- ret
++1: ret
+ .endm
+
+ /*
+--- a/drivers/firmware/smccc/smccc.c
++++ b/drivers/firmware/smccc/smccc.c
+@@ -16,7 +16,6 @@ static u32 smccc_version = ARM_SMCCC_VER
+ static enum arm_smccc_conduit smccc_conduit = SMCCC_CONDUIT_NONE;
+
+ bool __ro_after_init smccc_trng_available = false;
+-u64 __ro_after_init smccc_has_sve_hint = false;
+ s32 __ro_after_init smccc_soc_id_version = SMCCC_RET_NOT_SUPPORTED;
+ s32 __ro_after_init smccc_soc_id_revision = SMCCC_RET_NOT_SUPPORTED;
+
+@@ -28,9 +27,6 @@ void __init arm_smccc_version_init(u32 v
+ smccc_conduit = conduit;
+
+ smccc_trng_available = smccc_probe_trng();
+- if (IS_ENABLED(CONFIG_ARM64_SVE) &&
+- smccc_version >= ARM_SMCCC_VERSION_1_3)
+- smccc_has_sve_hint = true;
+
+ if ((smccc_version >= ARM_SMCCC_VERSION_1_2) &&
+ (smccc_conduit != SMCCC_CONDUIT_NONE)) {
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -227,8 +227,6 @@ u32 arm_smccc_get_version(void);
+
+ void __init arm_smccc_version_init(u32 version, enum arm_smccc_conduit conduit);
+
+-extern u64 smccc_has_sve_hint;
+-
+ /**
+ * arm_smccc_get_soc_id_version()
+ *
+@@ -327,15 +325,6 @@ struct arm_smccc_quirk {
+ };
+
+ /**
+- * __arm_smccc_sve_check() - Set the SVE hint bit when doing SMC calls
+- *
+- * Sets the SMCCC hint bit to indicate if there is live state in the SVE
+- * registers, this modifies x0 in place and should never be called from C
+- * code.
+- */
+-asmlinkage unsigned long __arm_smccc_sve_check(unsigned long x0);
+-
+-/**
+ * __arm_smccc_smc() - make SMC calls
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
+@@ -402,20 +391,6 @@ asmlinkage void __arm_smccc_hvc(unsigned
+
+ #endif
+
+-/* nVHE hypervisor doesn't have a current thread so needs separate checks */
+-#if defined(CONFIG_ARM64_SVE) && !defined(__KVM_NVHE_HYPERVISOR__)
+-
+-#define SMCCC_SVE_CHECK ALTERNATIVE("nop \n", "bl __arm_smccc_sve_check \n", \
+- ARM64_SVE)
+-#define smccc_sve_clobbers "x16", "x30", "cc",
+-
+-#else
+-
+-#define SMCCC_SVE_CHECK
+-#define smccc_sve_clobbers
+-
+-#endif
+-
+ #define __constraint_read_2 "r" (arg0)
+ #define __constraint_read_3 __constraint_read_2, "r" (arg1)
+ #define __constraint_read_4 __constraint_read_3, "r" (arg2)
+@@ -486,12 +461,11 @@ asmlinkage void __arm_smccc_hvc(unsigned
+ register unsigned long r3 asm("r3"); \
+ CONCATENATE(__declare_arg_, \
+ COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__); \
+- asm volatile(SMCCC_SVE_CHECK \
+- inst "\n" : \
++ asm volatile(inst "\n" : \
+ "=r" (r0), "=r" (r1), "=r" (r2), "=r" (r3) \
+ : CONCATENATE(__constraint_read_, \
+ COUNT_ARGS(__VA_ARGS__)) \
+- : smccc_sve_clobbers "memory"); \
++ : "memory"); \
+ if (___res) \
+ *___res = (typeof(*___res)){r0, r1, r2, r3}; \
+ } while (0)
+@@ -540,7 +514,7 @@ asmlinkage void __arm_smccc_hvc(unsigned
+ asm ("" : \
+ : CONCATENATE(__constraint_read_, \
+ COUNT_ARGS(__VA_ARGS__)) \
+- : smccc_sve_clobbers "memory"); \
++ : "memory"); \
+ if (___res) \
+ ___res->a0 = SMCCC_RET_NOT_SUPPORTED; \
+ } while (0)
--- /dev/null
+From 751ecf6afd6568adc98f2a6052315552c0483d18 Mon Sep 17 00:00:00 2001
+From: Mark Brown <broonie@kernel.org>
+Date: Wed, 30 Oct 2024 20:23:50 +0000
+Subject: arm64/sve: Discard stale CPU state when handling SVE traps
+
+From: Mark Brown <broonie@kernel.org>
+
+commit 751ecf6afd6568adc98f2a6052315552c0483d18 upstream.
+
+The logic for handling SVE traps manipulates saved FPSIMD/SVE state
+incorrectly, and a race with preemption can result in a task having
+TIF_SVE set and TIF_FOREIGN_FPSTATE clear even though the live CPU state
+is stale (e.g. with SVE traps enabled). This has been observed to result
+in warnings from do_sve_acc() where SVE traps are not expected while
+TIF_SVE is set:
+
+| if (test_and_set_thread_flag(TIF_SVE))
+| WARN_ON(1); /* SVE access shouldn't have trapped */
+
+Warnings of this form have been reported intermittently, e.g.
+
+ https://lore.kernel.org/linux-arm-kernel/CA+G9fYtEGe_DhY2Ms7+L7NKsLYUomGsgqpdBj+QwDLeSg=JhGg@mail.gmail.com/
+ https://lore.kernel.org/linux-arm-kernel/000000000000511e9a060ce5a45c@google.com/
+
+The race can occur when the SVE trap handler is preempted before and
+after manipulating the saved FPSIMD/SVE state, starting and ending on
+the same CPU, e.g.
+
+| void do_sve_acc(unsigned long esr, struct pt_regs *regs)
+| {
+| // Trap on CPU 0 with TIF_SVE clear, SVE traps enabled
+| // task->fpsimd_cpu is 0.
+| // per_cpu_ptr(&fpsimd_last_state, 0) is task.
+|
+| ...
+|
+| // Preempted; migrated from CPU 0 to CPU 1.
+| // TIF_FOREIGN_FPSTATE is set.
+|
+| get_cpu_fpsimd_context();
+|
+| if (test_and_set_thread_flag(TIF_SVE))
+| WARN_ON(1); /* SVE access shouldn't have trapped */
+|
+| sve_init_regs() {
+| if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
+| ...
+| } else {
+| fpsimd_to_sve(current);
+| current->thread.fp_type = FP_STATE_SVE;
+| }
+| }
+|
+| put_cpu_fpsimd_context();
+|
+| // Preempted; migrated from CPU 1 to CPU 0.
+| // task->fpsimd_cpu is still 0
+| // If per_cpu_ptr(&fpsimd_last_state, 0) is still task then:
+| // - Stale HW state is reused (with SVE traps enabled)
+| // - TIF_FOREIGN_FPSTATE is cleared
+| // - A return to userspace skips HW state restore
+| }
+
+Fix the case where the state is not live and TIF_FOREIGN_FPSTATE is set
+by calling fpsimd_flush_task_state() to detach from the saved CPU
+state. This ensures that a subsequent context switch will not reuse the
+stale CPU state, and will instead set TIF_FOREIGN_FPSTATE, forcing the
+new state to be reloaded from memory prior to a return to userspace.
+
+Fixes: cccb78ce89c4 ("arm64/sve: Rework SVE access trap to convert state in registers")
+Reported-by: Mark Rutland <mark.rutland@arm.com>
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Cc: stable@vger.kernel.org
+Reviewed-by: Mark Rutland <mark.rutland@arm.com>
+Link: https://lore.kernel.org/r/20241030-arm64-fpsimd-foreign-flush-v1-1-bd7bd66905a2@kernel.org
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/kernel/fpsimd.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1367,6 +1367,7 @@ static void sve_init_regs(void)
+ } else {
+ fpsimd_to_sve(current);
+ current->thread.fp_type = FP_STATE_SVE;
++ fpsimd_flush_task_state(current);
+ }
+ }
+
--- /dev/null
+From cda7163d4e3d99db93aa38f0e825b8433c7a8452 Mon Sep 17 00:00:00 2001
+From: Qu Wenruo <wqu@suse.com>
+Date: Wed, 30 Oct 2024 11:25:47 +1030
+Subject: btrfs: fix per-subvolume RO/RW flags with new mount API
+
+From: Qu Wenruo <wqu@suse.com>
+
+commit cda7163d4e3d99db93aa38f0e825b8433c7a8452 upstream.
+
+[BUG]
+With util-linux 2.40.2, the 'mount' utility already uses the new mount
+API, e.g.:
+
+ # strace mount -o subvol=subv1,ro /dev/test/scratch1 /mnt/test/
+ ...
+ fsconfig(3, FSCONFIG_SET_STRING, "source", "/dev/mapper/test-scratch1", 0) = 0
+ fsconfig(3, FSCONFIG_SET_STRING, "subvol", "subv1", 0) = 0
+ fsconfig(3, FSCONFIG_SET_FLAG, "ro", NULL, 0) = 0
+ fsconfig(3, FSCONFIG_CMD_CREATE, NULL, NULL, 0) = 0
+ fsmount(3, FSMOUNT_CLOEXEC, 0) = 4
+ mount_setattr(4, "", AT_EMPTY_PATH, {attr_set=MOUNT_ATTR_RDONLY, attr_clr=0, propagation=0 /* MS_??? */, userns_fd=0}, 32) = 0
+ move_mount(4, "", AT_FDCWD, "/mnt/test", MOVE_MOUNT_F_EMPTY_PATH) = 0
+
+But this leads to a new problem: per-subvolume RO/RW mounts no longer
+work if the initial mount is RO:
+
+ # mount -o subvol=subv1,ro /dev/test/scratch1 /mnt/test
+ # mount -o rw,subvol=subv2 /dev/test/scratch1 /mnt/scratch
+ # mount | grep mnt
+ /dev/mapper/test-scratch1 on /mnt/test type btrfs (ro,relatime,discard=async,space_cache=v2,subvolid=256,subvol=/subv1)
+ /dev/mapper/test-scratch1 on /mnt/scratch type btrfs (ro,relatime,discard=async,space_cache=v2,subvolid=257,subvol=/subv2)
+ # touch /mnt/scratch/foobar
+ touch: cannot touch '/mnt/scratch/foobar': Read-only file system
+
+This is a common use case on distros.
+
+[CAUSE]
+We have a workaround in the remount path to handle the RO->RW change,
+but if the mount uses the new mount API we skip it and instead rely on
+the mount tool NOT to set the ro flag.
+
+But that is not what the mount tool does with the new API:
+
+ fsconfig(3, FSCONFIG_SET_STRING, "source", "/dev/mapper/test-scratch1", 0) = 0
+ fsconfig(3, FSCONFIG_SET_STRING, "subvol", "subv1", 0) = 0
+ fsconfig(3, FSCONFIG_SET_FLAG, "ro", NULL, 0) = 0 <<<< Setting RO flag for super block
+ fsconfig(3, FSCONFIG_CMD_CREATE, NULL, NULL, 0) = 0
+ fsmount(3, FSMOUNT_CLOEXEC, 0) = 4
+ mount_setattr(4, "", AT_EMPTY_PATH, {attr_set=MOUNT_ATTR_RDONLY, attr_clr=0, propagation=0 /* MS_??? */, userns_fd=0}, 32) = 0
+ move_mount(4, "", AT_FDCWD, "/mnt/test", MOVE_MOUNT_F_EMPTY_PATH) = 0
+
+This means the superblock is set RO at the first mount.
+
+A later RW mount will not try to reconfigure the fs to RW because the
+mount tool is already using the new API.
+
+This totally breaks the per-subvolume RO/RW mount behavior.
+
+[FIX]
+Do not skip the reconfiguration even when using the new API. The old
+comments expected any mount tool to skip setting the RO flag even when
+"ro" is specified, which is not the reality.
+
+Update the comments regarding backward compatibility at the kernel level
+so that it works with both old and new mount utilities.
+
+CC: stable@vger.kernel.org # 6.8+
+Fixes: f044b318675f ("btrfs: handle the ro->rw transition for mounting different subvolumes")
+Signed-off-by: Qu Wenruo <wqu@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/super.c | 25 +++++--------------------
+ 1 file changed, 5 insertions(+), 20 deletions(-)
+
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1979,25 +1979,10 @@ error:
+ * fsconfig(FSCONFIG_SET_FLAG, "ro"). This option is seen by the filesystem
+ * in fc->sb_flags.
+ *
+- * This disambiguation has rather positive consequences. Mounting a subvolume
+- * ro will not also turn the superblock ro. Only the mount for the subvolume
+- * will become ro.
+- *
+- * So, if the superblock creation request comes from the new mount API the
+- * caller must have explicitly done:
+- *
+- * fsconfig(FSCONFIG_SET_FLAG, "ro")
+- * fsmount/mount_setattr(MOUNT_ATTR_RDONLY)
+- *
+- * IOW, at some point the caller must have explicitly turned the whole
+- * superblock ro and we shouldn't just undo it like we did for the old mount
+- * API. In any case, it lets us avoid the hack in the new mount API.
+- *
+- * Consequently, the remounting hack must only be used for requests originating
+- * from the old mount API and should be marked for full deprecation so it can be
+- * turned off in a couple of years.
+- *
+- * The new mount API has no reason to support this hack.
++ * But, currently the util-linux mount command already utilizes the new mount
++ * API and is still setting fsconfig(FSCONFIG_SET_FLAG, "ro") no matter if it's
++ * btrfs or not, setting the whole super block RO. To make per-subvolume
++ * mounting with different options work we need to keep backward compatibility.
+ */
+ static struct vfsmount *btrfs_reconfigure_for_mount(struct fs_context *fc)
+ {
+@@ -2019,7 +2004,7 @@ static struct vfsmount *btrfs_reconfigur
+ if (IS_ERR(mnt))
+ return mnt;
+
+- if (!fc->oldapi || !ro2rw)
++ if (!ro2rw)
+ return mnt;
+
+ /* We need to convert to rw, call reconfigure. */
--- /dev/null
+From 2b084d8205949dd804e279df8e68531da78be1e8 Mon Sep 17 00:00:00 2001
+From: Haisu Wang <haisuwang@tencent.com>
+Date: Fri, 25 Oct 2024 14:54:40 +0800
+Subject: btrfs: fix the length of reserved qgroup to free
+
+From: Haisu Wang <haisuwang@tencent.com>
+
+commit 2b084d8205949dd804e279df8e68531da78be1e8 upstream.
+
+In the error path of cow_file_range(), the dealloc flag may be cleared
+and the extent won't reach the disk. Freeing of the reserved qgroup
+space was added in commit 30479f31d44d ("btrfs: fix qgroup reserve leaks
+in cow_file_range"). However, the length of the untouched region to free
+needs to be adjusted to the correct remaining region size.
+
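+For the inclusive byte range [start, end] handled here, the remaining
+length follows the usual inclusive-range arithmetic:
+
+  len = end - start + 1;  /* start == end still covers one byte */
+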
+Fixes: 30479f31d44d ("btrfs: fix qgroup reserve leaks in cow_file_range")
+CC: stable@vger.kernel.org # 6.11+
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Reviewed-by: Boris Burkov <boris@bur.io>
+Signed-off-by: Haisu Wang <haisuwang@tencent.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/inode.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1599,7 +1599,7 @@ out_unlock:
+ clear_bits |= EXTENT_CLEAR_DATA_RESV;
+ extent_clear_unlock_delalloc(inode, start, end, locked_page,
+ &cached, clear_bits, page_ops);
+- btrfs_qgroup_free_data(inode, NULL, start, cur_alloc_size, NULL);
++ btrfs_qgroup_free_data(inode, NULL, start, end - start + 1, NULL);
+ }
+ return ret;
+ }
--- /dev/null
+From c9a75ec45f1111ef530ab186c2a7684d0a0c9245 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Mon, 4 Nov 2024 12:11:15 +0000
+Subject: btrfs: reinitialize delayed ref list after deleting it from the list
+
+From: Filipe Manana <fdmanana@suse.com>
+
+commit c9a75ec45f1111ef530ab186c2a7684d0a0c9245 upstream.
+
+At insert_delayed_ref() if we need to update the action of an existing
+ref to BTRFS_DROP_DELAYED_REF, we delete the ref from its ref head's
+ref_add_list using list_del(), which leaves the ref's add_list member
+not reinitialized, as list_del() sets the next and prev members of the
+list to LIST_POISON1 and LIST_POISON2, respectively.
+
+If later we end up calling drop_delayed_ref() against the ref, which can
+happen during merging or when destroying delayed refs due to a transaction
+abort, we can trigger a crash since at drop_delayed_ref() we call
+list_empty() against the ref's add_list, which returns false since
+the list was not reinitialized after the list_del() and as a consequence
+we call list_del() again at drop_delayed_ref(). This results in an
+invalid list access since the next and prev members are set to poison
+pointers, resulting in a splat if CONFIG_LIST_HARDENED and
+CONFIG_DEBUG_LIST are set or invalid poison pointer dereferences
+otherwise.
+
+So fix this by deleting from the list with list_del_init() instead.
+
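+The difference, in a minimal sketch:
+
+  list_del(&ref->add_list);       /* next/prev become LIST_POISON1/2;
+                                   * list_empty() stays false */
+  list_del_init(&ref->add_list);  /* next = prev = &ref->add_list;
+                                   * list_empty() returns true */
+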
+Fixes: 1d57ee941692 ("btrfs: improve delayed refs iterations")
+CC: stable@vger.kernel.org # 4.19+
+Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/delayed-ref.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/fs/btrfs/delayed-ref.c
++++ b/fs/btrfs/delayed-ref.c
+@@ -649,7 +649,7 @@ static bool insert_delayed_ref(struct bt
+ &href->ref_add_list);
+ else if (ref->action == BTRFS_DROP_DELAYED_REF) {
+ ASSERT(!list_empty(&exist->add_list));
+- list_del(&exist->add_list);
++ list_del_init(&exist->add_list);
+ } else {
+ ASSERT(0);
+ }
--- /dev/null
+From 81d2fb4c7c18a3b36ba3e00b9d5b753107472d75 Mon Sep 17 00:00:00 2001
+From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
+Date: Fri, 25 Oct 2024 11:38:42 -0700
+Subject: idpf: avoid vport access in idpf_get_link_ksettings
+
+From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
+
+commit 81d2fb4c7c18a3b36ba3e00b9d5b753107472d75 upstream.
+
+When the device control plane is removed or the platform
+running the device control plane is rebooted, a reset is
+detected by the driver. On driver reset, it releases the
+resources and waits for the reset to complete. If the reset
+fails, it takes the error path and releases the vport lock.
+At this time, if a monitoring tool tries to access the link
+settings, it triggers a call trace for accessing the released
+vport pointer.
+
+To avoid it, move link_speed_mbps to the netdev_priv structure,
+which removes the dependency on the vport pointer and the vport
+lock in idpf_get_link_ksettings(). Also use netif_carrier_ok()
+to check the link status, and adjust the offsetof() to use
+link_up instead of link_speed_mbps.
+
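+The offsetof() bound matters because the soft-reset path copies only the
+fields declared before the named member; with a hypothetical struct:
+
+  struct s { int copied_a; int copied_b; bool preserved; };
+
+  /* Copies copied_a and copied_b; 'preserved' in dst is untouched. */
+  memcpy(dst, src, offsetof(struct s, preserved));
+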
+Fixes: 02cbfba1add5 ("idpf: add ethtool callbacks")
+Cc: stable@vger.kernel.org # 6.7+
+Reviewed-by: Tarun K Singh <tarun.k.singh@intel.com>
+Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
+Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
+Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/intel/idpf/idpf.h | 4 ++--
+ drivers/net/ethernet/intel/idpf/idpf_ethtool.c | 11 +++--------
+ drivers/net/ethernet/intel/idpf/idpf_lib.c | 4 ++--
+ drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 2 +-
+ 4 files changed, 8 insertions(+), 13 deletions(-)
+
+--- a/drivers/net/ethernet/intel/idpf/idpf.h
++++ b/drivers/net/ethernet/intel/idpf/idpf.h
+@@ -141,6 +141,7 @@ enum idpf_vport_state {
+ * @adapter: Adapter back pointer
+ * @vport: Vport back pointer
+ * @vport_id: Vport identifier
++ * @link_speed_mbps: Link speed in mbps
+ * @vport_idx: Relative vport index
+ * @state: See enum idpf_vport_state
+ * @netstats: Packet and byte stats
+@@ -150,6 +151,7 @@ struct idpf_netdev_priv {
+ struct idpf_adapter *adapter;
+ struct idpf_vport *vport;
+ u32 vport_id;
++ u32 link_speed_mbps;
+ u16 vport_idx;
+ enum idpf_vport_state state;
+ struct rtnl_link_stats64 netstats;
+@@ -287,7 +289,6 @@ struct idpf_port_stats {
+ * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
+ * @port_stats: per port csum, header split, and other offload stats
+ * @link_up: True if link is up
+- * @link_speed_mbps: Link speed in mbps
+ * @sw_marker_wq: workqueue for marker packets
+ */
+ struct idpf_vport {
+@@ -331,7 +332,6 @@ struct idpf_vport {
+ struct idpf_port_stats port_stats;
+
+ bool link_up;
+- u32 link_speed_mbps;
+
+ wait_queue_head_t sw_marker_wq;
+ };
+--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
+@@ -1296,24 +1296,19 @@ static void idpf_set_msglevel(struct net
+ static int idpf_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *cmd)
+ {
+- struct idpf_vport *vport;
+-
+- idpf_vport_ctrl_lock(netdev);
+- vport = idpf_netdev_to_vport(netdev);
++ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ ethtool_link_ksettings_zero_link_mode(cmd, supported);
+ cmd->base.autoneg = AUTONEG_DISABLE;
+ cmd->base.port = PORT_NONE;
+- if (vport->link_up) {
++ if (netif_carrier_ok(netdev)) {
+ cmd->base.duplex = DUPLEX_FULL;
+- cmd->base.speed = vport->link_speed_mbps;
++ cmd->base.speed = np->link_speed_mbps;
+ } else {
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.speed = SPEED_UNKNOWN;
+ }
+
+- idpf_vport_ctrl_unlock(netdev);
+-
+ return 0;
+ }
+
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1873,7 +1873,7 @@ int idpf_initiate_soft_reset(struct idpf
+ * mess with. Nothing below should use those variables from new_vport
+ * and should instead always refer to them in vport if they need to.
+ */
+- memcpy(new_vport, vport, offsetof(struct idpf_vport, link_speed_mbps));
++ memcpy(new_vport, vport, offsetof(struct idpf_vport, link_up));
+
+ /* Adjust resource parameters prior to reallocating resources */
+ switch (reset_cause) {
+@@ -1919,7 +1919,7 @@ int idpf_initiate_soft_reset(struct idpf
+ /* Same comment as above regarding avoiding copying the wait_queues and
+ * mutexes applies here. We do not want to mess with those if possible.
+ */
+- memcpy(vport, new_vport, offsetof(struct idpf_vport, link_speed_mbps));
++ memcpy(vport, new_vport, offsetof(struct idpf_vport, link_up));
+
+ if (reset_cause == IDPF_SR_Q_CHANGE)
+ idpf_vport_alloc_vec_indexes(vport);
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -141,7 +141,7 @@ static void idpf_handle_event_link(struc
+ }
+ np = netdev_priv(vport->netdev);
+
+- vport->link_speed_mbps = le32_to_cpu(v2e->link_speed);
++ np->link_speed_mbps = le32_to_cpu(v2e->link_speed);
+
+ if (vport->link_up == v2e->link_status)
+ return;
--- /dev/null
+From 9b58031ff96b84a38d7b73b23c7ecfb2e0557f43 Mon Sep 17 00:00:00 2001
+From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
+Date: Fri, 25 Oct 2024 11:38:43 -0700
+Subject: idpf: fix idpf_vc_core_init error path
+
+From: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
+
+commit 9b58031ff96b84a38d7b73b23c7ecfb2e0557f43 upstream.
+
+In the event that the platform running the device control plane
+is rebooted, a reset is detected by the driver. It releases
+all the resources and waits for the reset to complete. Once the
+reset is done, it tries to rebuild the resources. At this
+time, if the device control plane is not yet started, the
+driver times out on the virtchnl message and retries to
+establish the mailbox again.
+
+In the retry flow, the mailbox is deinitialized but the mailbox
+workqueue is still alive and polling for mailbox messages.
+This results in accessing the released control queue, leading to
+a null-ptr-deref. Fix it by unrolling the work queue cancellation
+and mailbox deinitialization in the reverse order in which they
+were initialized.
+
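+Sketched with this driver's calls, the unwind mirrors the init order in
+reverse:
+
+  /* init:   idpf_init_dflt_mbx(), then the mailbox work is queued */
+  /* unwind: cancel the work first, then deinit the mailbox        */
+  cancel_delayed_work_sync(&adapter->mbx_task);
+  idpf_deinit_dflt_mbx(adapter);
+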
+Fixes: 4930fbf419a7 ("idpf: add core init and interrupt request")
+Fixes: 34c21fa894a1 ("idpf: implement virtchnl transaction manager")
+Cc: stable@vger.kernel.org # 6.9+
+Reviewed-by: Tarun K Singh <tarun.k.singh@intel.com>
+Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@intel.com>
+Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
+Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/intel/idpf/idpf_lib.c | 1 +
+ drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 1 -
+ 2 files changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -1799,6 +1799,7 @@ static int idpf_init_hard_reset(struct i
+ */
+ err = idpf_vc_core_init(adapter);
+ if (err) {
++ cancel_delayed_work_sync(&adapter->mbx_task);
+ idpf_deinit_dflt_mbx(adapter);
+ goto unlock_mutex;
+ }
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -3063,7 +3063,6 @@ init_failed:
+ adapter->state = __IDPF_VER_CHECK;
+ if (adapter->vcxn_mngr)
+ idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+- idpf_deinit_dflt_mbx(adapter);
+ set_bit(IDPF_HR_DRV_LOAD, adapter->flags);
+ queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
+ msecs_to_jiffies(task_delay));
--- /dev/null
+From a373830f96db288a3eb43a8692b6bcd0bd88dfe1 Mon Sep 17 00:00:00 2001
+From: Gautam Menghani <gautam@linux.ibm.com>
+Date: Mon, 28 Oct 2024 14:34:09 +0530
+Subject: KVM: PPC: Book3S HV: Mask off LPCR_MER for a vCPU before running it to avoid spurious interrupts
+
+From: Gautam Menghani <gautam@linux.ibm.com>
+
+commit a373830f96db288a3eb43a8692b6bcd0bd88dfe1 upstream.
+
+Running an L2 vCPU (see [1] for terminology) with the LPCR_MER bit set and no
+pending interrupts results in that L2 vCPU getting an infinite flood of
+spurious interrupts. The 'if check' in kvmhv_run_single_vcpu() sets the
+LPCR_MER bit if there are pending interrupts.
+
+The spurious flood problem can be observed in 2 cases:
+1. Crashing the guest while an interrupt-heavy workload is running
+ a. Start a L2 guest and run an interrupt heavy workload (eg: ipistorm)
+ b. While the workload is running, crash the guest (make sure kdump
+ is configured)
+ c. Any one of the vCPUs of the guest will start getting an infinite
+ flood of spurious interrupts.
+
+2. Running LTP stress tests in multiple guests at the same time
+ a. Start 4 L2 guests.
+ b. Start running LTP stress tests on all 4 guests at same time.
+   c. After some time, one or more of the vCPUs of any of the guests will
+ start getting an infinite flood of spurious interrupts.
+
+The root cause of both the above issues is the same:
+1. A NMI is sent to a running vCPU that has LPCR_MER bit set.
+2. In the NMI path, all registers are refreshed, i.e, H_GUEST_GET_STATE
+ is called for all the registers.
+3. When H_GUEST_GET_STATE is called for LPCR, the vcpu->arch.vcore->lpcr
+ of that vCPU at L1 level gets updated with LPCR_MER set to 1, and this
+ new value is always used whenever that vCPU runs, regardless of whether
+ there was a pending interrupt.
+4. Since LPCR_MER is set, the vCPU in L2 always jumps to the external
+ interrupt handler, and this cycle never ends.
+
+Fix the spurious flood by masking off the LPCR_MER bit before running a
+L2 vCPU to ensure that it is not set if there are no pending interrupts.
+
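+In sketch form, the fix is a plain bit clear on the effective LPCR value
+before entering the vCPU, so only the pending-interrupt path above can
+set it:
+
+  lpcr &= ~LPCR_MER;   /* no pending external interrupt: clear MER */
+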
+[1] Terminology:
+1. L0 : PAPR hypervisor running in HV mode
+2. L1 : Linux guest (logical partition) running on top of L0
+3. L2 : KVM guest running on top of L1
+
+Fixes: ec0f6639fa88 ("KVM: PPC: Book3S HV nestedv2: Ensure LPCR_MER bit is passed to the L0")
+Cc: stable@vger.kernel.org # v6.8+
+Signed-off-by: Gautam Menghani <gautam@linux.ibm.com>
+Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/kvm/book3s_hv.c | 12 ++++++++++++
+ 1 file changed, 12 insertions(+)
+
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4892,6 +4892,18 @@ int kvmhv_run_single_vcpu(struct kvm_vcp
+ BOOK3S_INTERRUPT_EXTERNAL, 0);
+ else
+ lpcr |= LPCR_MER;
++ } else {
++ /*
++ * L1's copy of L2's LPCR (vcpu->arch.vcore->lpcr) can get its MER bit
++		 * unexpectedly set - e.g. during NMI handling when all register
++ * states are synchronized from L0 to L1. L1 needs to inform L0 about
++ * MER=1 only when there are pending external interrupts.
++ * In the above if check, MER bit is set if there are pending
++		 * external interrupts. Hence, explicitly mask off the MER bit
++ * here as otherwise it may generate spurious interrupts in L2 KVM
++ * causing an endless loop, which results in L2 guest getting hung.
++ */
++ lpcr &= ~LPCR_MER;
+ }
+ } else if (vcpu->arch.pending_exceptions ||
+ vcpu->arch.doorbell_request ||
--- /dev/null
+From 9c9201afebea1efc7ea4b8f721ee18a05bb8aca1 Mon Sep 17 00:00:00 2001
+From: Koichiro Den <koichiro.den@gmail.com>
+Date: Tue, 5 Nov 2024 11:27:47 +0900
+Subject: mm/slab: fix warning caused by duplicate kmem_cache creation in kmem_buckets_create
+
+From: Koichiro Den <koichiro.den@gmail.com>
+
+commit 9c9201afebea1efc7ea4b8f721ee18a05bb8aca1 upstream.
+
+Commit b035f5a6d852 ("mm: slab: reduce the kmalloc() minimum alignment
+if DMA bouncing possible") reduced ARCH_KMALLOC_MINALIGN to 8 on arm64.
+However, with KASAN_HW_TAGS enabled, arch_slab_minalign() becomes 16.
+This causes kmalloc_caches[*][8] to be aliased to kmalloc_caches[*][16],
+resulting in kmem_buckets_create() attempting to create a kmem_cache for
+size 16 twice. This duplication triggers warnings on boot:
+
+[ 2.325108] ------------[ cut here ]------------
+[ 2.325135] kmem_cache of name 'memdup_user-16' already exists
+[ 2.325783] WARNING: CPU: 0 PID: 1 at mm/slab_common.c:107 __kmem_cache_create_args+0xb8/0x3b0
+[ 2.327957] Modules linked in:
+[ 2.328550] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.0-rc5mm-unstable-arm64+ #12
+[ 2.328683] Hardware name: QEMU QEMU Virtual Machine, BIOS 2024.02-2 03/11/2024
+[ 2.328790] pstate: 61000009 (nZCv daif -PAN -UAO -TCO +DIT -SSBS BTYPE=--)
+[ 2.328911] pc : __kmem_cache_create_args+0xb8/0x3b0
+[ 2.328930] lr : __kmem_cache_create_args+0xb8/0x3b0
+[ 2.328942] sp : ffff800083d6fc50
+[ 2.328961] x29: ffff800083d6fc50 x28: f2ff0000c1674410 x27: ffff8000820b0598
+[ 2.329061] x26: 000000007fffffff x25: 0000000000000010 x24: 0000000000002000
+[ 2.329101] x23: ffff800083d6fce8 x22: ffff8000832222e8 x21: ffff800083222388
+[ 2.329118] x20: f2ff0000c1674410 x19: f5ff0000c16364c0 x18: ffff800083d80030
+[ 2.329135] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
+[ 2.329152] x14: 0000000000000000 x13: 0a73747369786520 x12: 79646165726c6120
+[ 2.329169] x11: 656820747563205b x10: 2d2d2d2d2d2d2d2d x9 : 0000000000000000
+[ 2.329194] x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
+[ 2.329210] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
+[ 2.329226] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
+[ 2.329291] Call trace:
+[ 2.329407] __kmem_cache_create_args+0xb8/0x3b0
+[ 2.329499] kmem_buckets_create+0xfc/0x320
+[ 2.329526] init_user_buckets+0x34/0x78
+[ 2.329540] do_one_initcall+0x64/0x3c8
+[ 2.329550] kernel_init_freeable+0x26c/0x578
+[ 2.329562] kernel_init+0x3c/0x258
+[ 2.329574] ret_from_fork+0x10/0x20
+[ 2.329698] ---[ end trace 0000000000000000 ]---
+
+[ 2.403704] ------------[ cut here ]------------
+[ 2.404716] kmem_cache of name 'msg_msg-16' already exists
+[ 2.404801] WARNING: CPU: 2 PID: 1 at mm/slab_common.c:107 __kmem_cache_create_args+0xb8/0x3b0
+[ 2.404842] Modules linked in:
+[ 2.404971] CPU: 2 UID: 0 PID: 1 Comm: swapper/0 Tainted: G W 6.12.0-rc5mm-unstable-arm64+ #12
+[ 2.405026] Tainted: [W]=WARN
+[ 2.405043] Hardware name: QEMU QEMU Virtual Machine, BIOS 2024.02-2 03/11/2024
+[ 2.405057] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
+[ 2.405079] pc : __kmem_cache_create_args+0xb8/0x3b0
+[ 2.405100] lr : __kmem_cache_create_args+0xb8/0x3b0
+[ 2.405111] sp : ffff800083d6fc50
+[ 2.405115] x29: ffff800083d6fc50 x28: fbff0000c1674410 x27: ffff8000820b0598
+[ 2.405135] x26: 000000000000ffd0 x25: 0000000000000010 x24: 0000000000006000
+[ 2.405153] x23: ffff800083d6fce8 x22: ffff8000832222e8 x21: ffff800083222388
+[ 2.405169] x20: fbff0000c1674410 x19: fdff0000c163d6c0 x18: ffff800083d80030
+[ 2.405185] x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000000
+[ 2.405201] x14: 0000000000000000 x13: 0a73747369786520 x12: 79646165726c6120
+[ 2.405217] x11: 656820747563205b x10: 2d2d2d2d2d2d2d2d x9 : 0000000000000000
+[ 2.405233] x8 : 0000000000000000 x7 : 0000000000000000 x6 : 0000000000000000
+[ 2.405248] x5 : 0000000000000000 x4 : 0000000000000000 x3 : 0000000000000000
+[ 2.405271] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
+[ 2.405287] Call trace:
+[ 2.405293] __kmem_cache_create_args+0xb8/0x3b0
+[ 2.405305] kmem_buckets_create+0xfc/0x320
+[ 2.405315] init_msg_buckets+0x34/0x78
+[ 2.405326] do_one_initcall+0x64/0x3c8
+[ 2.405337] kernel_init_freeable+0x26c/0x578
+[ 2.405348] kernel_init+0x3c/0x258
+[ 2.405360] ret_from_fork+0x10/0x20
+[ 2.405370] ---[ end trace 0000000000000000 ]---
+
+To address this, alias the kmem_cache for sizes smaller than the minimum
+alignment to the aligned-size kmem_cache, as is done for the default
+system kmalloc bucket.
+
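+Sketched against the hunks below: resolve the aligned index first,
+create the cache once, and alias the smaller indices to it:
+
+  aligned_idx = __kmalloc_index(size, false);
+  if (!(*b)[aligned_idx])
+          (*b)[aligned_idx] = kmem_cache_create_usercopy(/* ... */);
+  if (idx != aligned_idx)
+          (*b)[idx] = (*b)[aligned_idx];  /* alias, no duplicate create */
+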
+Fixes: b32801d1255b ("mm/slab: Introduce kmem_buckets_create() and family")
+Cc: <stable@vger.kernel.org> # v6.11+
+Signed-off-by: Koichiro Den <koichiro.den@gmail.com>
+Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
+Tested-by: Catalin Marinas <catalin.marinas@arm.com>
+Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/slab_common.c | 31 ++++++++++++++++++++-----------
+ 1 file changed, 20 insertions(+), 11 deletions(-)
+
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -418,8 +418,11 @@ kmem_buckets *kmem_buckets_create(const
+ unsigned int usersize,
+ void (*ctor)(void *))
+ {
++ unsigned long mask = 0;
++ unsigned int idx;
+ kmem_buckets *b;
+- int idx;
++
++ BUILD_BUG_ON(ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]) > BITS_PER_LONG);
+
+ /*
+ * When the separate buckets API is not built in, just return
+@@ -441,7 +444,7 @@ kmem_buckets *kmem_buckets_create(const
+ for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++) {
+ char *short_size, *cache_name;
+ unsigned int cache_useroffset, cache_usersize;
+- unsigned int size;
++ unsigned int size, aligned_idx;
+
+ if (!kmalloc_caches[KMALLOC_NORMAL][idx])
+ continue;
+@@ -454,10 +457,6 @@ kmem_buckets *kmem_buckets_create(const
+ if (WARN_ON(!short_size))
+ goto fail;
+
+- cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
+- if (WARN_ON(!cache_name))
+- goto fail;
+-
+ if (useroffset >= size) {
+ cache_useroffset = 0;
+ cache_usersize = 0;
+@@ -465,18 +464,28 @@ kmem_buckets *kmem_buckets_create(const
+ cache_useroffset = useroffset;
+ cache_usersize = min(size - cache_useroffset, usersize);
+ }
+- (*b)[idx] = kmem_cache_create_usercopy(cache_name, size,
++
++ aligned_idx = __kmalloc_index(size, false);
++ if (!(*b)[aligned_idx]) {
++ cache_name = kasprintf(GFP_KERNEL, "%s-%s", name, short_size + 1);
++ if (WARN_ON(!cache_name))
++ goto fail;
++ (*b)[aligned_idx] = kmem_cache_create_usercopy(cache_name, size,
+ 0, flags, cache_useroffset,
+ cache_usersize, ctor);
+- kfree(cache_name);
+- if (WARN_ON(!(*b)[idx]))
+- goto fail;
++ kfree(cache_name);
++ if (WARN_ON(!(*b)[aligned_idx]))
++ goto fail;
++ set_bit(aligned_idx, &mask);
++ }
++ if (idx != aligned_idx)
++ (*b)[idx] = (*b)[aligned_idx];
+ }
+
+ return b;
+
+ fail:
+- for (idx = 0; idx < ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]); idx++)
++ for_each_set_bit(idx, &mask, ARRAY_SIZE(kmalloc_caches[KMALLOC_NORMAL]))
+ kmem_cache_destroy((*b)[idx]);
+ kfree(b);
+
--- /dev/null
+From 99635c91fb8b860a6404b9bc8b769df7bdaa2ae3 Mon Sep 17 00:00:00 2001
+From: Geliang Tang <tanggeliang@kylinos.cn>
+Date: Mon, 4 Nov 2024 13:31:42 +0100
+Subject: mptcp: use sock_kfree_s instead of kfree
+
+From: Geliang Tang <tanggeliang@kylinos.cn>
+
+commit 99635c91fb8b860a6404b9bc8b769df7bdaa2ae3 upstream.
+
+The local address entries on userspace_pm_local_addr_list are allocated
+by sock_kmalloc().
+
+It's then required to use sock_kfree_s() instead of kfree() to free
+these entries in order to adjust the allocated size on the sk side.
+
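+The pairing keeps the size accounted in sk->sk_omem_alloc balanced (the
+GFP flag below is illustrative):
+
+  entry = sock_kmalloc(sk, sizeof(*entry), GFP_ATOMIC);  /* charges sk */
+  /* ... entry lives on userspace_pm_local_addr_list ... */
+  sock_kfree_s(sk, entry, sizeof(*entry));               /* uncharges sk */
+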
+Fixes: 24430f8bf516 ("mptcp: add address into userspace pm list")
+Cc: stable@vger.kernel.org
+Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
+Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
+Link: https://patch.msgid.link/20241104-net-mptcp-misc-6-12-v1-2-c13f2ff1656f@kernel.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mptcp/pm_userspace.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/net/mptcp/pm_userspace.c
++++ b/net/mptcp/pm_userspace.c
+@@ -91,6 +91,7 @@ static int mptcp_userspace_pm_delete_loc
+ struct mptcp_pm_addr_entry *addr)
+ {
+ struct mptcp_pm_addr_entry *entry, *tmp;
++ struct sock *sk = (struct sock *)msk;
+
+ list_for_each_entry_safe(entry, tmp, &msk->pm.userspace_pm_local_addr_list, list) {
+ if (mptcp_addresses_equal(&entry->addr, &addr->addr, false)) {
+@@ -98,7 +99,7 @@ static int mptcp_userspace_pm_delete_loc
+ * be used multiple times (e.g. fullmesh mode).
+ */
+ list_del_rcu(&entry->list);
+- kfree(entry);
++ sock_kfree_s(sk, entry, sizeof(*entry));
+ msk->pm.local_addr_used--;
+ return 0;
+ }
--- /dev/null
+From 1f26339b2ed63d1e8e18a18674fb73a392f3660e Mon Sep 17 00:00:00 2001
+From: Stefan Wahren <wahrenst@gmx.net>
+Date: Tue, 5 Nov 2024 17:31:01 +0100
+Subject: net: vertexcom: mse102x: Fix possible double free of TX skb
+
+From: Stefan Wahren <wahrenst@gmx.net>
+
+commit 1f26339b2ed63d1e8e18a18674fb73a392f3660e upstream.
+
+The scope of the TX skb is wider than just mse102x_tx_frame_spi(),
+so in case the TX skb room needs to be expanded, we should free the
+temporary skb instead of the original skb. Otherwise the original
+TX skb pointer would be freed again in mse102x_tx_work(), which leads
+to crashes:
+
+ Internal error: Oops: 0000000096000004 [#2] PREEMPT SMP
+ CPU: 0 PID: 712 Comm: kworker/0:1 Tainted: G D 6.6.23
+ Hardware name: chargebyte Charge SOM DC-ONE (DT)
+ Workqueue: events mse102x_tx_work [mse102x]
+ pstate: 20400009 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
+ pc : skb_release_data+0xb8/0x1d8
+ lr : skb_release_data+0x1ac/0x1d8
+ sp : ffff8000819a3cc0
+ x29: ffff8000819a3cc0 x28: ffff0000046daa60 x27: ffff0000057f2dc0
+ x26: ffff000005386c00 x25: 0000000000000002 x24: 00000000ffffffff
+ x23: 0000000000000000 x22: 0000000000000001 x21: ffff0000057f2e50
+ x20: 0000000000000006 x19: 0000000000000000 x18: ffff00003fdacfcc
+ x17: e69ad452d0c49def x16: 84a005feff870102 x15: 0000000000000000
+ x14: 000000000000024a x13: 0000000000000002 x12: 0000000000000000
+ x11: 0000000000000400 x10: 0000000000000930 x9 : ffff00003fd913e8
+ x8 : fffffc00001bc008
+ x7 : 0000000000000000 x6 : 0000000000000008
+ x5 : ffff00003fd91340 x4 : 0000000000000000 x3 : 0000000000000009
+ x2 : 00000000fffffffe x1 : 0000000000000000 x0 : 0000000000000000
+ Call trace:
+ skb_release_data+0xb8/0x1d8
+ kfree_skb_reason+0x48/0xb0
+ mse102x_tx_work+0x164/0x35c [mse102x]
+ process_one_work+0x138/0x260
+ worker_thread+0x32c/0x438
+ kthread+0x118/0x11c
+ ret_from_fork+0x10/0x20
+ Code: aa1303e0 97fffab6 72001c1f 54000141 (f9400660)
+
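+The ownership rule the fix restores, sketched against the hunks below
+(dev_kfree_skb(NULL) is a no-op, so the unconditional free is safe):
+
+  struct sk_buff *tskb = NULL;
+
+  if (pad_needed) {                       /* placeholder condition */
+          tskb = skb_copy_expand(txp, 0, pad, GFP_KERNEL);
+          if (!tskb)
+                  return -ENOMEM;
+          txp = tskb;                     /* transmit the padded copy */
+  }
+  /* ... SPI transfer using txp ... */
+  dev_kfree_skb(tskb);                    /* frees only the local copy;
+                                           * caller still owns the original */
+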
+Cc: stable@vger.kernel.org
+Fixes: 2f207cbf0dd4 ("net: vertexcom: Add MSE102x SPI support")
+Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
+Link: https://patch.msgid.link/20241105163101.33216-1-wahrenst@gmx.net
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/vertexcom/mse102x.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/ethernet/vertexcom/mse102x.c
++++ b/drivers/net/ethernet/vertexcom/mse102x.c
+@@ -222,7 +222,7 @@ static int mse102x_tx_frame_spi(struct m
+ struct mse102x_net_spi *mses = to_mse102x_spi(mse);
+ struct spi_transfer *xfer = &mses->spi_xfer;
+ struct spi_message *msg = &mses->spi_msg;
+- struct sk_buff *tskb;
++ struct sk_buff *tskb = NULL;
+ int ret;
+
+ netif_dbg(mse, tx_queued, mse->ndev, "%s: skb %p, %d@%p\n",
+@@ -235,7 +235,6 @@ static int mse102x_tx_frame_spi(struct m
+ if (!tskb)
+ return -ENOMEM;
+
+- dev_kfree_skb(txp);
+ txp = tskb;
+ }
+
+@@ -257,6 +256,8 @@ static int mse102x_tx_frame_spi(struct m
+ mse->stats.xfer_err++;
+ }
+
++ dev_kfree_skb(tskb);
++
+ return ret;
+ }
+
--- /dev/null
+From 3b557be89fc688dbd9ccf704a70f7600a094f13a Mon Sep 17 00:00:00 2001
+From: Jinjie Ruan <ruanjinjie@huawei.com>
+Date: Fri, 1 Nov 2024 10:53:16 +0800
+Subject: net: wwan: t7xx: Fix off-by-one error in t7xx_dpmaif_rx_buf_alloc()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Jinjie Ruan <ruanjinjie@huawei.com>
+
+commit 3b557be89fc688dbd9ccf704a70f7600a094f13a upstream.
+
+The error path in t7xx_dpmaif_rx_buf_alloc() frees and unmaps the already
+allocated and mapped skbs in a loop, but the loop terminates when the
+index reaches zero, which fails to free the first allocated skb at
+index zero.
+
+Check with i-- so that the skb at index 0 is freed as well.
+
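+With, say, three entries mapped before the failure (i == 3):
+
+  while (--i > 0)  /* visits i = 2, 1; index 0 is leaked   */
+  while (i--)      /* visits i = 2, 1, 0; all three freed  */
+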
+Cc: stable@vger.kernel.org
+Fixes: d642b012df70 ("net: wwan: t7xx: Add data path interface")
+Acked-by: Sergey Ryazanov <ryazanov.s.a@gmail.com>
+Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
+Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
+Link: https://patch.msgid.link/20241101025316.3234023-1-ruanjinjie@huawei.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
++++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+@@ -226,7 +226,7 @@ int t7xx_dpmaif_rx_buf_alloc(struct dpma
+ return 0;
+
+ err_unmap_skbs:
+- while (--i > 0)
++ while (i--)
+ t7xx_unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+
+ return ret;
--- /dev/null
+From dc270d7159699ad6d11decadfce9633f0f71c1db Mon Sep 17 00:00:00 2001
+From: Roberto Sassu <roberto.sassu@huawei.com>
+Date: Fri, 25 Oct 2024 16:03:27 +0200
+Subject: nfs: Fix KMSAN warning in decode_getfattr_attrs()
+
+From: Roberto Sassu <roberto.sassu@huawei.com>
+
+commit dc270d7159699ad6d11decadfce9633f0f71c1db upstream.
+
+Fix the following KMSAN warning:
+
+CPU: 1 UID: 0 PID: 7651 Comm: cp Tainted: G B
+Tainted: [B]=BAD_PAGE
+Hardware name: QEMU Standard PC (Q35 + ICH9, 2009)
+=====================================================
+=====================================================
+BUG: KMSAN: uninit-value in decode_getfattr_attrs+0x2d6d/0x2f90
+ decode_getfattr_attrs+0x2d6d/0x2f90
+ decode_getfattr_generic+0x806/0xb00
+ nfs4_xdr_dec_getattr+0x1de/0x240
+ rpcauth_unwrap_resp_decode+0xab/0x100
+ rpcauth_unwrap_resp+0x95/0xc0
+ call_decode+0x4ff/0xb50
+ __rpc_execute+0x57b/0x19d0
+ rpc_execute+0x368/0x5e0
+ rpc_run_task+0xcfe/0xee0
+ nfs4_proc_getattr+0x5b5/0x990
+ __nfs_revalidate_inode+0x477/0xd00
+ nfs_access_get_cached+0x1021/0x1cc0
+ nfs_do_access+0x9f/0xae0
+ nfs_permission+0x1e4/0x8c0
+ inode_permission+0x356/0x6c0
+ link_path_walk+0x958/0x1330
+ path_lookupat+0xce/0x6b0
+ filename_lookup+0x23e/0x770
+ vfs_statx+0xe7/0x970
+ vfs_fstatat+0x1f2/0x2c0
+ __se_sys_newfstatat+0x67/0x880
+ __x64_sys_newfstatat+0xbd/0x120
+ x64_sys_call+0x1826/0x3cf0
+ do_syscall_64+0xd0/0x1b0
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+
+The KMSAN warning is triggered in decode_getfattr_attrs(), when calling
+decode_attr_mdsthreshold(). It appears that fattr->mdsthreshold is not
+initialized.
+
+Fix the issue by initializing fattr->mdsthreshold to NULL in
+nfs_fattr_init().
+
+Cc: stable@vger.kernel.org # v3.5.x
+Fixes: 88034c3d88c2 ("NFSv4.1 mdsthreshold attribute xdr")
+Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
+Signed-off-by: Anna Schumaker <anna.schumaker@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfs/inode.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -1656,6 +1656,7 @@ void nfs_fattr_init(struct nfs_fattr *fa
+ fattr->gencount = nfs_inc_attr_generation_counter();
+ fattr->owner_name = NULL;
+ fattr->group_name = NULL;
++ fattr->mdsthreshold = NULL;
+ }
+ EXPORT_SYMBOL_GPL(nfs_fattr_init);
+
--- /dev/null
+From 54c814c8b23bc7617be3d46abdb896937695dbfa Mon Sep 17 00:00:00 2001
+From: Bart Van Assche <bvanassche@acm.org>
+Date: Thu, 31 Oct 2024 14:26:24 -0700
+Subject: scsi: ufs: core: Start the RTC update work later
+
+From: Bart Van Assche <bvanassche@acm.org>
+
+commit 54c814c8b23bc7617be3d46abdb896937695dbfa upstream.
+
+The RTC update work involves runtime resuming the UFS controller. Hence,
+only start the RTC update work after runtime power management in the UFS
+driver has been fully initialized. This patch fixes the following kernel
+crash:
+
+Internal error: Oops: 0000000096000006 [#1] PREEMPT SMP
+Workqueue: events ufshcd_rtc_work
+Call trace:
+ _raw_spin_lock_irqsave+0x34/0x8c (P)
+ pm_runtime_get_if_active+0x24/0x9c (L)
+ pm_runtime_get_if_active+0x24/0x9c
+ ufshcd_rtc_work+0x138/0x1b4
+ process_one_work+0x148/0x288
+ worker_thread+0x2cc/0x3d4
+ kthread+0x110/0x114
+ ret_from_fork+0x10/0x20
+
+Reported-by: Neil Armstrong <neil.armstrong@linaro.org>
+Closes: https://lore.kernel.org/linux-scsi/0c0bc528-fdc2-4106-bc99-f23ae377f6f5@linaro.org/
+Fixes: 6bf999e0eb41 ("scsi: ufs: core: Add UFS RTC support")
+Cc: Bean Huo <beanhuo@micron.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Bart Van Assche <bvanassche@acm.org>
+Link: https://lore.kernel.org/r/20241031212632.2799127-1-bvanassche@acm.org
+Reviewed-by: Peter Wang <peter.wang@mediatek.com>
+Reviewed-by: Bean Huo <beanhuo@micron.com>
+Tested-by: Neil Armstrong <neil.armstrong@linaro.org> # on SM8650-HDK
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/ufs/core/ufshcd.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8641,6 +8641,14 @@ static int ufshcd_add_lus(struct ufs_hba
+ ufshcd_init_clk_scaling_sysfs(hba);
+ }
+
++ /*
++ * The RTC update code accesses the hba->ufs_device_wlun->sdev_gendev
++ * pointer and hence must only be started after the WLUN pointer has
++ * been initialized by ufshcd_scsi_add_wlus().
++ */
++ schedule_delayed_work(&hba->ufs_rtc_update_work,
++ msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
++
+ ufs_bsg_probe(hba);
+ scsi_scan_host(hba->host);
+
+@@ -8800,8 +8808,6 @@ static int ufshcd_device_init(struct ufs
+ ufshcd_force_reset_auto_bkops(hba);
+
+ ufshcd_set_timestamp_attr(hba);
+- schedule_delayed_work(&hba->ufs_rtc_update_work,
+- msecs_to_jiffies(UFS_RTC_UPDATE_INTERVAL_MS));
+
+ /* Gear up to HS gear if supported */
+ if (hba->max_pwr_info.is_valid) {
dm-fix-a-crash-if-blk_alloc_disk-fails.patch
mptcp-no-admin-perm-to-list-endpoints.patch
alsa-usb-audio-add-quirk-for-hp-320-fhd-webcam.patch
+scsi-ufs-core-start-the-rtc-update-work-later.patch
+nfs-fix-kmsan-warning-in-decode_getfattr_attrs.patch
+tracing-fix-tracefs-mount-options.patch
+net-wwan-t7xx-fix-off-by-one-error-in-t7xx_dpmaif_rx_buf_alloc.patch
+net-vertexcom-mse102x-fix-possible-double-free-of-tx-skb.patch
+mptcp-use-sock_kfree_s-instead-of-kfree.patch
+arm64-sve-discard-stale-cpu-state-when-handling-sve-traps.patch
+arm64-kconfig-make-sme-depend-on-broken-for-now.patch
+arm64-smccc-remove-broken-support-for-smcccv1.3-sve-discard-hint.patch
+mm-slab-fix-warning-caused-by-duplicate-kmem_cache-creation-in-kmem_buckets_create.patch
+kvm-ppc-book3s-hv-mask-off-lpcr_mer-for-a-vcpu-before-running-it-to-avoid-spurious-interrupts.patch
+idpf-avoid-vport-access-in-idpf_get_link_ksettings.patch
+idpf-fix-idpf_vc_core_init-error-path.patch
+btrfs-fix-the-length-of-reserved-qgroup-to-free.patch
+btrfs-fix-per-subvolume-ro-rw-flags-with-new-mount-api.patch
+btrfs-reinitialize-delayed-ref-list-after-deleting-it-from-the-list.patch
--- /dev/null
+From e4d32142d1de8bcafd90ea5f4f557104f0969c41 Mon Sep 17 00:00:00 2001
+From: Kalesh Singh <kaleshsingh@google.com>
+Date: Wed, 30 Oct 2024 10:17:48 -0700
+Subject: tracing: Fix tracefs mount options
+
+From: Kalesh Singh <kaleshsingh@google.com>
+
+commit e4d32142d1de8bcafd90ea5f4f557104f0969c41 upstream.
+
+Commit 78ff64081949 ("vfs: Convert tracefs to use the new mount API")
+converted tracefs to use the new mount API, which caused mount options
+(e.g. gid=<gid>) to not take effect.
+
+The tracefs superblock can be updated from multiple paths:
+ - on fs_initcall() to init_trace_printk_function_export()
+ - from a work queue to initialize eventfs
+ tracer_init_tracefs_work_func()
+ - fsconfig() syscall to mount or remount of tracefs
+
+The tracefs superblock root inode gets created early on in
+init_trace_printk_function_export().
+
+With the new mount API, tracefs effectively uses get_tree_single() instead
+of the old API mount_single().
+
+Previously, mount_single() ensured that the options are always applied to
+the superblock root inode:
+ (1) If the root inode didn't exist, call fill_super() to create it
+ and apply the options.
+ (2) If the root inode exists, call reconfigure_single() which
+ effectively calls tracefs_apply_options() to parse and apply
+      options to the superblock's fs_info and inode and remount
+ eventfs (if necessary)
+
+On the other hand, get_tree_single() effectively calls vfs_get_super()
+which:
+  (3) If the root inode doesn't exist, calls fill_super() to create it
+ and apply the options.
+ (4) If the root inode already exists, updates the fs_context root
+ with the superblock's root inode.
+
+(4) above is always the case for tracefs mounts, since the super block's
+root inode will already be created by init_trace_printk_function_export().
+
+This means that the mount options get ignored:
+  - Since they aren't applied to the superblock's root inode, they don't
+    get inherited by the children.
+  - Since eventfs is initialized from a separate work queue before the
+    mount is called with the options, it doesn't get remounted with them.
+
+Ensure that the mount options are applied to the super block and eventfs
+is remounted to respect the mount options.
+
+To understand this better, if fstab has the following:
+
+ tracefs /sys/kernel/tracing tracefs nosuid,nodev,noexec,gid=tracing 0 0
+
+On boot up, permissions look like:
+
+ # ls -l /sys/kernel/tracing/trace
+ -rw-r----- 1 root root 0 Nov 1 08:37 /sys/kernel/tracing/trace
+
+When it should look like:
+
+ # ls -l /sys/kernel/tracing/trace
+ -rw-r----- 1 root tracing 0 Nov 1 08:37 /sys/kernel/tracing/trace
+
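+As a sanity check, applying the option via a remount should produce the
+same ownership:
+
+  # mount -o remount,gid=tracing /sys/kernel/tracing
+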
+Link: https://lore.kernel.org/r/536e99d3-345c-448b-adee-a21389d7ab4b@redhat.com/
+
+Cc: Eric Sandeen <sandeen@redhat.com>
+Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Cc: Shuah Khan <shuah@kernel.org>
+Cc: Ali Zahraee <ahzahraee@gmail.com>
+Cc: Christian Brauner <brauner@kernel.org>
+Cc: David Howells <dhowells@redhat.com>
+Cc: Steven Rostedt <rostedt@goodmis.org>
+Cc: Masami Hiramatsu <mhiramat@kernel.org>
+Cc: stable@vger.kernel.org
+Fixes: 78ff64081949 ("vfs: Convert tracefs to use the new mount API")
+Link: https://lore.kernel.org/20241030171928.4168869-2-kaleshsingh@google.com
+Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/tracefs/inode.c | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index 1748dff58c3b..cfc614c638da 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -392,6 +392,9 @@ static int tracefs_reconfigure(struct fs_context *fc)
+ struct tracefs_fs_info *sb_opts = sb->s_fs_info;
+ struct tracefs_fs_info *new_opts = fc->s_fs_info;
+
++ if (!new_opts)
++ return 0;
++
+ sync_filesystem(sb);
+ /* structure copy of new mount options to sb */
+ *sb_opts = *new_opts;
+@@ -478,14 +481,17 @@ static int tracefs_fill_super(struct super_block *sb, struct fs_context *fc)
+ sb->s_op = &tracefs_super_operations;
+ sb->s_d_op = &tracefs_dentry_operations;
+
+- tracefs_apply_options(sb, false);
+-
+ return 0;
+ }
+
+ static int tracefs_get_tree(struct fs_context *fc)
+ {
+- return get_tree_single(fc, tracefs_fill_super);
++ int err = get_tree_single(fc, tracefs_fill_super);
++
++ if (err)
++ return err;
++
++ return tracefs_reconfigure(fc);
+ }
+
+ static void tracefs_free_fc(struct fs_context *fc)
+--
+2.47.0
+