--- /dev/null
+From 3c0696076aad60a2f04c019761921954579e1b0e Mon Sep 17 00:00:00 2001
+From: James Houghton <jthoughton@google.com>
+Date: Mon, 4 Dec 2023 17:26:46 +0000
+Subject: arm64: mm: Always make sw-dirty PTEs hw-dirty in pte_modify
+
+From: James Houghton <jthoughton@google.com>
+
+commit 3c0696076aad60a2f04c019761921954579e1b0e upstream.
+
+It is currently possible for a userspace application to enter an
+infinite page fault loop when using HugeTLB pages implemented with
+contiguous PTEs when HAFDBS is not available. This happens because:
+
+1. The kernel may sometimes write PTEs that are sw-dirty but hw-clean
+ (PTE_DIRTY | PTE_RDONLY | PTE_WRITE).
+
+2. If, during a write, the CPU uses a sw-dirty, hw-clean PTE in handling
+ the memory access on a system without HAFDBS, we will get a page
+ fault.
+
+3. HugeTLB will check if it needs to update the dirty bits on the PTE.
+ For contiguous PTEs, it will check to see if the pgprot bits need
+ updating. In this case, HugeTLB wants to write a sequence of
+ sw-dirty, hw-dirty PTEs, but it finds that all the PTEs it is about
+ to overwrite are pte_dirty() (pte_sw_dirty() => pte_dirty()),
+ so it thinks no update is necessary.
+
+We can get the kernel to write a sw-dirty, hw-clean PTE with the
+following steps (showing the relevant VMA flags and pgprot bits):
+
+i. Create a valid, writable contiguous PTE.
+ VMA vmflags: VM_SHARED | VM_READ | VM_WRITE
+ VMA pgprot bits: PTE_RDONLY | PTE_WRITE
+ PTE pgprot bits: PTE_DIRTY | PTE_WRITE
+
+ii. mprotect the VMA to PROT_NONE.
+ VMA vmflags: VM_SHARED
+ VMA pgprot bits: PTE_RDONLY
+ PTE pgprot bits: PTE_DIRTY | PTE_RDONLY
+
+iii. mprotect the VMA back to PROT_READ | PROT_WRITE.
+ VMA vmflags: VM_SHARED | VM_READ | VM_WRITE
+ VMA pgprot bits: PTE_RDONLY | PTE_WRITE
+ PTE pgprot bits: PTE_DIRTY | PTE_WRITE | PTE_RDONLY
+
+Make it impossible to create a writable sw-dirty, hw-clean PTE with
+pte_modify(). Such a PTE should be impossible to create, and there may
+be places that assume that pte_dirty() implies pte_hw_dirty().
+
+Signed-off-by: James Houghton <jthoughton@google.com>
+Fixes: 031e6e6b4e12 ("arm64: hugetlb: Avoid unnecessary clearing in huge_ptep_set_access_flags")
+Cc: <stable@vger.kernel.org>
+Acked-by: Will Deacon <will@kernel.org>
+Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
+Link: https://lore.kernel.org/r/20231204172646.2541916-3-jthoughton@google.com
+Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/include/asm/pgtable.h | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -822,6 +822,12 @@ static inline pte_t pte_modify(pte_t pte
+ if (pte_hw_dirty(pte))
+ pte = pte_mkdirty(pte);
+ pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
++ /*
++ * If we end up clearing hw dirtiness for a sw-dirty PTE, set hardware
++ * dirtiness again.
++ */
++ if (pte_sw_dirty(pte))
++ pte = pte_mkdirty(pte);
+ return pte;
+ }
+
--- /dev/null
+From a86805504b88f636a6458520d85afdf0634e3c6b Mon Sep 17 00:00:00 2001
+From: Boris Burkov <boris@bur.io>
+Date: Fri, 1 Dec 2023 13:00:12 -0800
+Subject: btrfs: don't clear qgroup reserved bit in release_folio
+
+From: Boris Burkov <boris@bur.io>
+
+commit a86805504b88f636a6458520d85afdf0634e3c6b upstream.
+
+The EXTENT_QGROUP_RESERVED bit is used to "lock" regions of the file for
+duplicate reservations. That is, two writes to that range in one
+transaction shouldn't create two reservations, as the reservation will
+only be freed once when the write finally goes down. Therefore, it is
+never OK to clear that bit without freeing the associated qgroup
+reserve. At this point, we don't want to be freeing the reserve, so mask
+off the bit.
+
+CC: stable@vger.kernel.org # 5.15+
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Boris Burkov <boris@bur.io>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/extent_io.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3390,7 +3390,8 @@ static int try_release_extent_state(stru
+ ret = 0;
+ } else {
+ u32 clear_bits = ~(EXTENT_LOCKED | EXTENT_NODATASUM |
+- EXTENT_DELALLOC_NEW | EXTENT_CTLBITS);
++ EXTENT_DELALLOC_NEW | EXTENT_CTLBITS |
++ EXTENT_QGROUP_RESERVED);
+
+ /*
+ * At this point we can safely clear everything except the
--- /dev/null
+From f63e1164b90b385cd832ff0fdfcfa76c3cc15436 Mon Sep 17 00:00:00 2001
+From: Boris Burkov <boris@bur.io>
+Date: Fri, 1 Dec 2023 13:00:09 -0800
+Subject: btrfs: free qgroup reserve when ORDERED_IOERR is set
+
+From: Boris Burkov <boris@bur.io>
+
+commit f63e1164b90b385cd832ff0fdfcfa76c3cc15436 upstream.
+
+An ordered extent completing is a critical moment in qgroup reserve
+handling, as the ownership of the reservation is handed off from the
+ordered extent to the delayed ref. In the happy path we release (unlock)
+but do not free (decrement counter) the reservation, and the delayed ref
+drives the free. However, on an error, we don't create a delayed ref,
+since there is no ref to add. Therefore, free on the error path.
+
+CC: stable@vger.kernel.org # 6.1+
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Boris Burkov <boris@bur.io>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/ordered-data.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -544,7 +544,9 @@ void btrfs_remove_ordered_extent(struct
+ release = entry->disk_num_bytes;
+ else
+ release = entry->num_bytes;
+- btrfs_delalloc_release_metadata(btrfs_inode, release, false);
++ btrfs_delalloc_release_metadata(btrfs_inode, release,
++ test_bit(BTRFS_ORDERED_IOERR,
++ &entry->flags));
+ }
+
+ percpu_counter_add_batch(&fs_info->ordered_bytes, -entry->num_bytes,
--- /dev/null
+From 54bed6bafa0f38daf9697af50e3aff5ff1354fe1 Mon Sep 17 00:00:00 2001
+From: Amelie Delaunay <amelie.delaunay@foss.st.com>
+Date: Mon, 6 Nov 2023 14:48:32 +0100
+Subject: dmaengine: stm32-dma: avoid bitfield overflow assertion
+
+From: Amelie Delaunay <amelie.delaunay@foss.st.com>
+
+commit 54bed6bafa0f38daf9697af50e3aff5ff1354fe1 upstream.
+
+stm32_dma_get_burst() returns a negative error for invalid input, which
+gets turned into a large u32 value in stm32_dma_prep_dma_memcpy() that
+in turn triggers an assertion because it does not fit into a two-bit field:
+drivers/dma/stm32-dma.c: In function 'stm32_dma_prep_dma_memcpy':
+include/linux/compiler_types.h:354:38: error: call to '__compiletime_assert_282' declared with attribute error: FIELD_PREP: value too large for the field
+ _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
+ ^
+ include/linux/compiler_types.h:335:4: note: in definition of macro '__compiletime_assert'
+ prefix ## suffix(); \
+ ^~~~~~
+ include/linux/compiler_types.h:354:2: note: in expansion of macro '_compiletime_assert'
+ _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
+ ^~~~~~~~~~~~~~~~~~~
+ include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
+ #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
+ ^~~~~~~~~~~~~~~~~~
+ include/linux/bitfield.h:68:3: note: in expansion of macro 'BUILD_BUG_ON_MSG'
+ BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \
+ ^~~~~~~~~~~~~~~~
+ include/linux/bitfield.h:114:3: note: in expansion of macro '__BF_FIELD_CHECK'
+ __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \
+ ^~~~~~~~~~~~~~~~
+ drivers/dma/stm32-dma.c:1237:4: note: in expansion of macro 'FIELD_PREP'
+ FIELD_PREP(STM32_DMA_SCR_PBURST_MASK, dma_burst) |
+ ^~~~~~~~~~
+
+As an easy workaround, assume the error can happen and handle it by
+failing stm32_dma_prep_dma_memcpy() before the assertion triggers. This
+replicates what is done in stm32_dma_set_xfer_param(), where
+stm32_dma_get_burst() is also used.
+
+Fixes: 1c32d6c37cc2 ("dmaengine: stm32-dma: use bitfield helpers")
+Fixes: a2b6103b7a8a ("dmaengine: stm32-dma: Improve memory burst management")
+Signed-off-by: Arnd Bergmann <arnd@arndb.de>
+Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
+Cc: stable@vger.kernel.org
+Reported-by: kernel test robot <lkp@intel.com>
+Closes: https://lore.kernel.org/oe-kbuild-all/202311060135.Q9eMnpCL-lkp@intel.com/
+Link: https://lore.kernel.org/r/20231106134832.1470305-1-amelie.delaunay@foss.st.com
+Signed-off-by: Vinod Koul <vkoul@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/dma/stm32-dma.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/drivers/dma/stm32-dma.c
++++ b/drivers/dma/stm32-dma.c
+@@ -1249,8 +1249,8 @@ static struct dma_async_tx_descriptor *s
+ enum dma_slave_buswidth max_width;
+ struct stm32_dma_desc *desc;
+ size_t xfer_count, offset;
+- u32 num_sgs, best_burst, dma_burst, threshold;
+- int i;
++ u32 num_sgs, best_burst, threshold;
++ int dma_burst, i;
+
+ num_sgs = DIV_ROUND_UP(len, STM32_DMA_ALIGNED_MAX_DATA_ITEMS);
+ desc = kzalloc(struct_size(desc, sg_req, num_sgs), GFP_NOWAIT);
+@@ -1268,6 +1268,10 @@ static struct dma_async_tx_descriptor *s
+ best_burst = stm32_dma_get_best_burst(len, STM32_DMA_MAX_BURST,
+ threshold, max_width);
+ dma_burst = stm32_dma_get_burst(chan, best_burst);
++ if (dma_burst < 0) {
++ kfree(desc);
++ return NULL;
++ }
+
+ stm32_dma_clear_reg(&desc->sg_req[i].chan_reg);
+ desc->sg_req[i].chan_reg.dma_scr =
--- /dev/null
+From e7ab758741672acb21c5d841a9f0309d30e48a06 Mon Sep 17 00:00:00 2001
+From: Mario Limonciello <mario.limonciello@amd.com>
+Date: Mon, 19 Jun 2023 15:04:24 -0500
+Subject: drm/amd/display: Disable PSR-SU on Parade 0803 TCON again
+
+From: Mario Limonciello <mario.limonciello@amd.com>
+
+commit e7ab758741672acb21c5d841a9f0309d30e48a06 upstream.
+
+When screen brightness is rapidly changed and PSR-SU is enabled the
+display hangs on panels with this TCON even on the latest DCN 3.1.4
+microcode (0x8002a81 at this time).
+
+This was disabled previously as commit 072030b17830 ("drm/amd: Disable
+PSR-SU on Parade 0803 TCON") but reverted as commit 1e66a17ce546 ("Revert
+"drm/amd: Disable PSR-SU on Parade 0803 TCON"") in favor of testing for
+a new enough microcode (commit cd2e31a9ab93 ("drm/amd/display: Set minimum
+requirement for using PSR-SU on Phoenix")).
+
+As hangs are still happening specifically with this TCON, disable PSR-SU
+again for it until it can be root caused.
+
+Cc: stable@vger.kernel.org
+Cc: aaron.ma@canonical.com
+Cc: binli@gnome.org
+Cc: Marc Rossi <Marc.Rossi@amd.com>
+Cc: Hamza Mahfooz <Hamza.Mahfooz@amd.com>
+Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
+Link: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2046131
+Acked-by: Alex Deucher <alexander.deucher@amd.com>
+Reviewed-by: Harry Wentland <harry.wentland@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/display/modules/power/power_helpers.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
++++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+@@ -816,6 +816,8 @@ bool is_psr_su_specific_panel(struct dc_
+ ((dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x08) ||
+ (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x07)))
+ isPSRSUSupported = false;
++ else if (dpcd_caps->sink_dev_id_str[1] == 0x08 && dpcd_caps->sink_dev_id_str[0] == 0x03)
++ isPSRSUSupported = false;
+ else if (dpcd_caps->psr_info.force_psrsu_cap == 0x1)
+ isPSRSUSupported = true;
+ }
--- /dev/null
+From ceb9a321e7639700844aa3bf234a4e0884f13b77 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Christian=20K=C3=B6nig?= <christian.koenig@amd.com>
+Date: Fri, 8 Dec 2023 13:43:09 +0100
+Subject: drm/amdgpu: fix tear down order in amdgpu_vm_pt_free
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Christian König <christian.koenig@amd.com>
+
+commit ceb9a321e7639700844aa3bf234a4e0884f13b77 upstream.
+
+When freeing PD/PT with shadows it can happen that the shadow
+destruction races with detaching the PD/PT from the VM causing a NULL
+pointer dereference in the invalidation code.
+
+Fix this by detaching the PD/PT from the VM first and then
+freeing the shadow instead.
+
+Signed-off-by: Christian König <christian.koenig@amd.com>
+Fixes: https://gitlab.freedesktop.org/drm/amd/-/issues/2867
+Cc: <stable@vger.kernel.org>
+Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+@@ -631,13 +631,14 @@ static void amdgpu_vm_pt_free(struct amd
+
+ if (!entry->bo)
+ return;
++
++ entry->bo->vm_bo = NULL;
+ shadow = amdgpu_bo_shadowed(entry->bo);
+ if (shadow) {
+ ttm_bo_set_bulk_move(&shadow->tbo, NULL);
+ amdgpu_bo_unref(&shadow);
+ }
+ ttm_bo_set_bulk_move(&entry->bo->tbo, NULL);
+- entry->bo->vm_bo = NULL;
+
+ spin_lock(&entry->vm->status_lock);
+ list_del(&entry->vm_status);
--- /dev/null
+From ab4750332dbe535243def5dcebc24ca00c1f98ac Mon Sep 17 00:00:00 2001
+From: Alex Deucher <alexander.deucher@amd.com>
+Date: Thu, 7 Dec 2023 10:14:41 -0500
+Subject: drm/amdgpu/sdma5.2: add begin/end_use ring callbacks
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Alex Deucher <alexander.deucher@amd.com>
+
+commit ab4750332dbe535243def5dcebc24ca00c1f98ac upstream.
+
+Add begin/end_use ring callbacks to disallow GFXOFF when
+SDMA work is submitted and allow it again afterward.
+
+This should avoid corner cases where GFXOFF is erroneously
+entered when SDMA is still active. For now just allow/disallow
+GFXOFF in the begin and end helpers until we root cause the
+issue. This should not impact power as SDMA usage is pretty
+minimal and GFXOFF should not be active when SDMA is active
+anyway; this just makes it explicit.
+
+v2: move everything into sdma5.2 code. No reason for this
+to be generic at this point.
+v3: Add comments in new code
+
+Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2220
+Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> (v1)
+Tested-by: Mario Limonciello <mario.limonciello@amd.com> (v1)
+Reviewed-by: Christian König <christian.koenig@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Cc: stable@vger.kernel.org # 5.15+
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c | 28 ++++++++++++++++++++++++++++
+ 1 file changed, 28 insertions(+)
+
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_2.c
+@@ -1690,6 +1690,32 @@ static void sdma_v5_2_get_clockgating_st
+ *flags |= AMD_CG_SUPPORT_SDMA_LS;
+ }
+
++static void sdma_v5_2_ring_begin_use(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++
++ /* SDMA 5.2.3 (RMB) FW doesn't seem to properly
++ * disallow GFXOFF in some cases leading to
++ * hangs in SDMA. Disallow GFXOFF while SDMA is active.
++ * We can probably just limit this to 5.2.3,
++ * but it shouldn't hurt for other parts since
++ * this GFXOFF will be disallowed anyway when SDMA is
++ * active, this just makes it explicit.
++ */
++ amdgpu_gfx_off_ctrl(adev, false);
++}
++
++static void sdma_v5_2_ring_end_use(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++
++ /* SDMA 5.2.3 (RMB) FW doesn't seem to properly
++ * disallow GFXOFF in some cases leading to
++ * hangs in SDMA. Allow GFXOFF when SDMA is complete.
++ */
++ amdgpu_gfx_off_ctrl(adev, true);
++}
++
+ const struct amd_ip_funcs sdma_v5_2_ip_funcs = {
+ .name = "sdma_v5_2",
+ .early_init = sdma_v5_2_early_init,
+@@ -1738,6 +1764,8 @@ static const struct amdgpu_ring_funcs sd
+ .test_ib = sdma_v5_2_ring_test_ib,
+ .insert_nop = sdma_v5_2_ring_insert_nop,
+ .pad_ib = sdma_v5_2_ring_pad_ib,
++ .begin_use = sdma_v5_2_ring_begin_use,
++ .end_use = sdma_v5_2_ring_end_use,
+ .emit_wreg = sdma_v5_2_ring_emit_wreg,
+ .emit_reg_wait = sdma_v5_2_ring_emit_reg_wait,
+ .emit_reg_write_reg_wait = sdma_v5_2_ring_emit_reg_write_reg_wait,
--- /dev/null
+From 0ccd963fe555451b1f84e6d14d2b3ef03dd5c947 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Ville=20Syrj=C3=A4l=C3=A4?= <ville.syrjala@linux.intel.com>
+Date: Tue, 5 Dec 2023 20:03:08 +0200
+Subject: drm/i915: Fix remapped stride with CCS on ADL+
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Ville Syrjälä <ville.syrjala@linux.intel.com>
+
+commit 0ccd963fe555451b1f84e6d14d2b3ef03dd5c947 upstream.
+
+On ADL+ the hardware automagically calculates the CCS AUX surface
+stride from the main surface stride, so when remapping we can't
+really play a lot of tricks with the main surface stride, or else
+the AUX surface stride would get miscalculated and no longer
+match the actual data layout in memory.
+
+Supposedly we could remap in 256 main surface tile units
+(AUX page(4096)/cacheline(64)*4(4x1 main surface tiles per
+AUX cacheline)=256 main surface tiles), but the extra complexity
+is probably not worth the hassle.
+
+So let's just make sure our mapping stride is calculated from
+the full framebuffer stride (instead of the framebuffer width).
+This way the stride we program into PLANE_STRIDE will be the
+original framebuffer stride, and thus there will be no change
+to the AUX stride/layout.
+
+Cc: stable@vger.kernel.org
+Cc: Imre Deak <imre.deak@intel.com>
+Cc: Juha-Pekka Heikkila <juhapekka.heikkila@gmail.com>
+Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20231205180308.7505-1-ville.syrjala@linux.intel.com
+Reviewed-by: Imre Deak <imre.deak@intel.com>
+(cherry picked from commit 2c12eb36f849256f5eb00ffaee9bf99396fd3814)
+Signed-off-by: Jani Nikula <jani.nikula@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/i915/display/intel_fb.c | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+--- a/drivers/gpu/drm/i915/display/intel_fb.c
++++ b/drivers/gpu/drm/i915/display/intel_fb.c
+@@ -1441,8 +1441,20 @@ static u32 calc_plane_remap_info(const s
+
+ size += remap_info->size;
+ } else {
+- unsigned int dst_stride = plane_view_dst_stride_tiles(fb, color_plane,
+- remap_info->width);
++ unsigned int dst_stride;
++
++ /*
++ * The hardware automagically calculates the CCS AUX surface
++ * stride from the main surface stride so can't really remap a
++ * smaller subset (unless we'd remap in whole AUX page units).
++ */
++ if (intel_fb_needs_pot_stride_remap(fb) &&
++ intel_fb_is_ccs_modifier(fb->base.modifier))
++ dst_stride = remap_info->src_stride;
++ else
++ dst_stride = remap_info->width;
++
++ dst_stride = plane_view_dst_stride_tiles(fb, color_plane, dst_stride);
+
+ assign_chk_ovf(i915, remap_info->dst_stride, dst_stride);
+ color_plane_info->mapping_stride = dst_stride *
--- /dev/null
+From 081488051d28d32569ebb7c7a23572778b2e7d57 Mon Sep 17 00:00:00 2001
+From: Yu Zhao <yuzhao@google.com>
+Date: Thu, 7 Dec 2023 23:14:04 -0700
+Subject: mm/mglru: fix underprotected page cache
+
+From: Yu Zhao <yuzhao@google.com>
+
+commit 081488051d28d32569ebb7c7a23572778b2e7d57 upstream.
+
+Unmapped folios accessed through file descriptors can be underprotected.
+Those folios are added to the oldest generation based on:
+
+1. The fact that they are less costly to reclaim (no need to walk the
+ rmap and flush the TLB) and have less impact on performance (don't
+ cause major PFs and can be non-blocking if needed again).
+2. The observation that they are likely to be single-use. E.g., for
+ client use cases like Android, its apps parse configuration files
+ and store the data in heap (anon); for server use cases like MySQL,
+ it reads from InnoDB files and holds the cached data for tables in
+ buffer pools (anon).
+
+However, the oldest generation can be very short lived, and if so, it
+doesn't provide the PID controller with enough time to respond to a surge
+of refaults. (Note that the PID controller uses weighted refaults and
+those from evicted generations only take a half of the whole weight.) In
+other words, for a short lived generation, the moving average smooths out
+the spike quickly.
+
+To fix the problem:
+1. For folios that are already on LRU, if they can be beyond the
+ tracking range of tiers, i.e., five accesses through file
+ descriptors, move them to the second oldest generation to give them
+ more time to age. (Note that tiers are used by the PID controller
+ to statistically determine whether folios accessed multiple times
+ through file descriptors are worth protecting.)
+2. When adding unmapped folios to LRU, adjust the placement of them so
+ that they are not too close to the tail. The effect of this is
+ similar to the above.
+
+On Android, launching 55 apps sequentially:
+ Before After Change
+ workingset_refault_anon 25641024 25598972 0%
+ workingset_refault_file 115016834 106178438 -8%
+
+Link: https://lkml.kernel.org/r/20231208061407.2125867-1-yuzhao@google.com
+Fixes: ac35a4902374 ("mm: multi-gen LRU: minimal implementation")
+Signed-off-by: Yu Zhao <yuzhao@google.com>
+Reported-by: Charan Teja Kalla <quic_charante@quicinc.com>
+Tested-by: Kalesh Singh <kaleshsingh@google.com>
+Cc: T.J. Mercier <tjmercier@google.com>
+Cc: Kairui Song <ryncsn@gmail.com>
+Cc: Hillf Danton <hdanton@sina.com>
+Cc: Jaroslav Pulchart <jaroslav.pulchart@gooddata.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/mm_inline.h | 23 ++++++++++++++---------
+ mm/vmscan.c | 2 +-
+ mm/workingset.c | 6 +++---
+ 3 files changed, 18 insertions(+), 13 deletions(-)
+
+--- a/include/linux/mm_inline.h
++++ b/include/linux/mm_inline.h
+@@ -231,22 +231,27 @@ static inline bool lru_gen_add_folio(str
+ if (folio_test_unevictable(folio) || !lrugen->enabled)
+ return false;
+ /*
+- * There are three common cases for this page:
+- * 1. If it's hot, e.g., freshly faulted in or previously hot and
+- * migrated, add it to the youngest generation.
+- * 2. If it's cold but can't be evicted immediately, i.e., an anon page
+- * not in swapcache or a dirty page pending writeback, add it to the
+- * second oldest generation.
+- * 3. Everything else (clean, cold) is added to the oldest generation.
++ * There are four common cases for this page:
++ * 1. If it's hot, i.e., freshly faulted in, add it to the youngest
++ * generation, and it's protected over the rest below.
++ * 2. If it can't be evicted immediately, i.e., a dirty page pending
++ * writeback, add it to the second youngest generation.
++ * 3. If it should be evicted first, e.g., cold and clean from
++ * folio_rotate_reclaimable(), add it to the oldest generation.
++ * 4. Everything else falls between 2 & 3 above and is added to the
++ * second oldest generation if it's considered inactive, or the
++ * oldest generation otherwise. See lru_gen_is_active().
+ */
+ if (folio_test_active(folio))
+ seq = lrugen->max_seq;
+ else if ((type == LRU_GEN_ANON && !folio_test_swapcache(folio)) ||
+ (folio_test_reclaim(folio) &&
+ (folio_test_dirty(folio) || folio_test_writeback(folio))))
+- seq = lrugen->min_seq[type] + 1;
+- else
++ seq = lrugen->max_seq - 1;
++ else if (reclaiming || lrugen->min_seq[type] + MIN_NR_GENS >= lrugen->max_seq)
+ seq = lrugen->min_seq[type];
++ else
++ seq = lrugen->min_seq[type] + 1;
+
+ gen = lru_gen_from_seq(seq);
+ flags = (gen + 1UL) << LRU_GEN_PGOFF;
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4770,7 +4770,7 @@ static bool sort_folio(struct lruvec *lr
+ }
+
+ /* protected */
+- if (tier > tier_idx) {
++ if (tier > tier_idx || refs == BIT(LRU_REFS_WIDTH)) {
+ int hist = lru_hist_from_seq(lrugen->min_seq[type]);
+
+ gen = folio_inc_gen(lruvec, folio, false);
+--- a/mm/workingset.c
++++ b/mm/workingset.c
+@@ -289,10 +289,10 @@ static void lru_gen_refault(struct folio
+ * 1. For pages accessed through page tables, hotter pages pushed out
+ * hot pages which refaulted immediately.
+ * 2. For pages accessed multiple times through file descriptors,
+- * numbers of accesses might have been out of the range.
++ * they would have been protected by sort_folio().
+ */
+- if (lru_gen_in_fault() || refs == BIT(LRU_REFS_WIDTH)) {
+- folio_set_workingset(folio);
++ if (lru_gen_in_fault() || refs >= BIT(LRU_REFS_WIDTH) - 1) {
++ set_mask_bits(&folio->flags, 0, LRU_REFS_MASK | BIT(PG_workingset));
+ mod_lruvec_state(lruvec, WORKINGSET_RESTORE_BASE + type, delta);
+ }
+ unlock:
--- /dev/null
+From 55ac8bbe358bdd2f3c044c12f249fd22d48fe015 Mon Sep 17 00:00:00 2001
+From: David Stevens <stevensd@chromium.org>
+Date: Tue, 18 Apr 2023 17:40:31 +0900
+Subject: mm/shmem: fix race in shmem_undo_range w/THP
+
+From: David Stevens <stevensd@chromium.org>
+
+commit 55ac8bbe358bdd2f3c044c12f249fd22d48fe015 upstream.
+
+Split folios during the second loop of shmem_undo_range. It's not
+sufficient to only split folios when dealing with partial pages, since
+it's possible for a THP to be faulted in after that point. Calling
+truncate_inode_folio in that situation can result in throwing away data
+outside of the range being targeted.
+
+[akpm@linux-foundation.org: tidy up comment layout]
+Link: https://lkml.kernel.org/r/20230418084031.3439795-1-stevensd@google.com
+Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
+Signed-off-by: David Stevens <stevensd@chromium.org>
+Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
+Cc: Suleiman Souhlal <suleiman@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c | 19 ++++++++++++++++++-
+ 1 file changed, 18 insertions(+), 1 deletion(-)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1024,7 +1024,24 @@ whole_folios:
+ }
+ VM_BUG_ON_FOLIO(folio_test_writeback(folio),
+ folio);
+- truncate_inode_folio(mapping, folio);
++
++ if (!folio_test_large(folio)) {
++ truncate_inode_folio(mapping, folio);
++ } else if (truncate_inode_partial_folio(folio, lstart, lend)) {
++ /*
++ * If we split a page, reset the loop so
++ * that we pick up the new sub pages.
++ * Otherwise the THP was entirely
++ * dropped or the target range was
++ * zeroed, so just continue the loop as
++ * is.
++ */
++ if (!folio_test_large(folio)) {
++ folio_unlock(folio);
++ index = start;
++ break;
++ }
++ }
+ }
+ index = folio->index + folio_nr_pages(folio) - 1;
+ folio_unlock(folio);
btrfs-do-not-allow-non-subvolume-root-targets-for-snapshot.patch
soundwire-stream-fix-null-pointer-dereference-for-multi_link.patch
ext4-prevent-the-normalized-size-from-exceeding-ext_max_blocks.patch
+arm64-mm-always-make-sw-dirty-ptes-hw-dirty-in-pte_modify.patch
+team-fix-use-after-free-when-an-option-instance-allocation-fails.patch
+drm-amdgpu-sdma5.2-add-begin-end_use-ring-callbacks.patch
+dmaengine-stm32-dma-avoid-bitfield-overflow-assertion.patch
+mm-mglru-fix-underprotected-page-cache.patch
+mm-shmem-fix-race-in-shmem_undo_range-w-thp.patch
+btrfs-free-qgroup-reserve-when-ordered_ioerr-is-set.patch
+btrfs-don-t-clear-qgroup-reserved-bit-in-release_folio.patch
+drm-amdgpu-fix-tear-down-order-in-amdgpu_vm_pt_free.patch
+drm-amd-display-disable-psr-su-on-parade-0803-tcon-again.patch
+drm-i915-fix-remapped-stride-with-ccs-on-adl.patch
+smb-client-fix-oob-in-receive_encrypted_standard.patch
+smb-client-fix-null-deref-in-asn1_ber_decoder.patch
+smb-client-fix-oob-in-smb2_query_reparse_point.patch
--- /dev/null
+From 90d025c2e953c11974e76637977c473200593a46 Mon Sep 17 00:00:00 2001
+From: Paulo Alcantara <pc@manguebit.com>
+Date: Mon, 11 Dec 2023 10:26:42 -0300
+Subject: smb: client: fix NULL deref in asn1_ber_decoder()
+
+From: Paulo Alcantara <pc@manguebit.com>
+
+commit 90d025c2e953c11974e76637977c473200593a46 upstream.
+
+If server replied SMB2_NEGOTIATE with a zero SecurityBufferOffset,
+smb2_get_data_area() sets @len to non-zero but return NULL, so
+decode_negTokeninit() ends up being called with a NULL @security_blob:
+
+ BUG: kernel NULL pointer dereference, address: 0000000000000000
+ #PF: supervisor read access in kernel mode
+ #PF: error_code(0x0000) - not-present page
+ PGD 0 P4D 0
+ Oops: 0000 [#1] PREEMPT SMP NOPTI
+ CPU: 2 PID: 871 Comm: mount.cifs Not tainted 6.7.0-rc4 #2
+ Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
+ RIP: 0010:asn1_ber_decoder+0x173/0xc80
+ Code: 01 4c 39 2c 24 75 09 45 84 c9 0f 85 2f 03 00 00 48 8b 14 24 4c 29 ea 48 83 fa 01 0f 86 1e 07 00 00 48 8b 74 24 28 4d 8d 5d 01 <42> 0f b6 3c 2e 89 fa 40 88 7c 24 5c f7 d2 83 e2 1f 0f 84 3d 07 00
+ RSP: 0018:ffffc9000063f950 EFLAGS: 00010202
+ RAX: 0000000000000002 RBX: 0000000000000000 RCX: 000000000000004a
+ RDX: 000000000000004a RSI: 0000000000000000 RDI: 0000000000000000
+ RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
+ R10: 0000000000000002 R11: 0000000000000001 R12: 0000000000000000
+ R13: 0000000000000000 R14: 000000000000004d R15: 0000000000000000
+ FS: 00007fce52b0fbc0(0000) GS:ffff88806ba00000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 0000000000000000 CR3: 000000001ae64000 CR4: 0000000000750ef0
+ PKRU: 55555554
+ Call Trace:
+ <TASK>
+ ? __die+0x23/0x70
+ ? page_fault_oops+0x181/0x480
+ ? __stack_depot_save+0x1e6/0x480
+ ? exc_page_fault+0x6f/0x1c0
+ ? asm_exc_page_fault+0x26/0x30
+ ? asn1_ber_decoder+0x173/0xc80
+ ? check_object+0x40/0x340
+ decode_negTokenInit+0x1e/0x30 [cifs]
+ SMB2_negotiate+0xc99/0x17c0 [cifs]
+ ? smb2_negotiate+0x46/0x60 [cifs]
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ smb2_negotiate+0x46/0x60 [cifs]
+ cifs_negotiate_protocol+0xae/0x130 [cifs]
+ cifs_get_smb_ses+0x517/0x1040 [cifs]
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? queue_delayed_work_on+0x5d/0x90
+ cifs_mount_get_session+0x78/0x200 [cifs]
+ dfs_mount_share+0x13a/0x9f0 [cifs]
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? lock_acquire+0xbf/0x2b0
+ ? find_nls+0x16/0x80
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ cifs_mount+0x7e/0x350 [cifs]
+ cifs_smb3_do_mount+0x128/0x780 [cifs]
+ smb3_get_tree+0xd9/0x290 [cifs]
+ vfs_get_tree+0x2c/0x100
+ ? capable+0x37/0x70
+ path_mount+0x2d7/0xb80
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? _raw_spin_unlock_irqrestore+0x44/0x60
+ __x64_sys_mount+0x11a/0x150
+ do_syscall_64+0x47/0xf0
+ entry_SYSCALL_64_after_hwframe+0x6f/0x77
+ RIP: 0033:0x7fce52c2ab1e
+
+Fix this by setting @len to zero when @off == 0 so callers won't
+attempt to dereference non-existent data areas.
+
+Reported-by: Robert Morris <rtm@csail.mit.edu>
+Cc: stable@vger.kernel.org
+Signed-off-by: Paulo Alcantara (SUSE) <pc@manguebit.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/smb2misc.c | 26 ++++++++++----------------
+ 1 file changed, 10 insertions(+), 16 deletions(-)
+
+--- a/fs/smb/client/smb2misc.c
++++ b/fs/smb/client/smb2misc.c
+@@ -313,6 +313,9 @@ static const bool has_smb2_data_area[NUM
+ char *
+ smb2_get_data_area_len(int *off, int *len, struct smb2_hdr *shdr)
+ {
++ const int max_off = 4096;
++ const int max_len = 128 * 1024;
++
+ *off = 0;
+ *len = 0;
+
+@@ -384,29 +387,20 @@ smb2_get_data_area_len(int *off, int *le
+ * Invalid length or offset probably means data area is invalid, but
+ * we have little choice but to ignore the data area in this case.
+ */
+- if (*off > 4096) {
+- cifs_dbg(VFS, "offset %d too large, data area ignored\n", *off);
+- *len = 0;
+- *off = 0;
+- } else if (*off < 0) {
+- cifs_dbg(VFS, "negative offset %d to data invalid ignore data area\n",
+- *off);
++ if (unlikely(*off < 0 || *off > max_off ||
++ *len < 0 || *len > max_len)) {
++ cifs_dbg(VFS, "%s: invalid data area (off=%d len=%d)\n",
++ __func__, *off, *len);
+ *off = 0;
+ *len = 0;
+- } else if (*len < 0) {
+- cifs_dbg(VFS, "negative data length %d invalid, data area ignored\n",
+- *len);
+- *len = 0;
+- } else if (*len > 128 * 1024) {
+- cifs_dbg(VFS, "data area larger than 128K: %d\n", *len);
++ } else if (*off == 0) {
+ *len = 0;
+ }
+
+ /* return pointer to beginning of data area, ie offset from SMB start */
+- if ((*off != 0) && (*len != 0))
++ if (*off > 0 && *len > 0)
+ return (char *)shdr + *off;
+- else
+- return NULL;
++ return NULL;
+ }
+
+ /*
--- /dev/null
+From eec04ea119691e65227a97ce53c0da6b9b74b0b7 Mon Sep 17 00:00:00 2001
+From: Paulo Alcantara <pc@manguebit.com>
+Date: Mon, 11 Dec 2023 10:26:40 -0300
+Subject: smb: client: fix OOB in receive_encrypted_standard()
+
+From: Paulo Alcantara <pc@manguebit.com>
+
+commit eec04ea119691e65227a97ce53c0da6b9b74b0b7 upstream.
+
+Fix potential OOB in receive_encrypted_standard() if server returned a
+large shdr->NextCommand that would end up writing off the end of
+@next_buffer.
+
+Fixes: b24df3e30cbf ("cifs: update receive_encrypted_standard to handle compounded responses")
+Cc: stable@vger.kernel.org
+Reported-by: Robert Morris <rtm@csail.mit.edu>
+Signed-off-by: Paulo Alcantara (SUSE) <pc@manguebit.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/smb2ops.c | 14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -5065,6 +5065,7 @@ receive_encrypted_standard(struct TCP_Se
+ struct smb2_hdr *shdr;
+ unsigned int pdu_length = server->pdu_size;
+ unsigned int buf_size;
++ unsigned int next_cmd;
+ struct mid_q_entry *mid_entry;
+ int next_is_large;
+ char *next_buffer = NULL;
+@@ -5093,14 +5094,15 @@ receive_encrypted_standard(struct TCP_Se
+ next_is_large = server->large_buf;
+ one_more:
+ shdr = (struct smb2_hdr *)buf;
+- if (shdr->NextCommand) {
++ next_cmd = le32_to_cpu(shdr->NextCommand);
++ if (next_cmd) {
++ if (WARN_ON_ONCE(next_cmd > pdu_length))
++ return -1;
+ if (next_is_large)
+ next_buffer = (char *)cifs_buf_get();
+ else
+ next_buffer = (char *)cifs_small_buf_get();
+- memcpy(next_buffer,
+- buf + le32_to_cpu(shdr->NextCommand),
+- pdu_length - le32_to_cpu(shdr->NextCommand));
++ memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd);
+ }
+
+ mid_entry = smb2_find_mid(server, buf);
+@@ -5124,8 +5126,8 @@ one_more:
+ else
+ ret = cifs_handle_standard(server, mid_entry);
+
+- if (ret == 0 && shdr->NextCommand) {
+- pdu_length -= le32_to_cpu(shdr->NextCommand);
++ if (ret == 0 && next_cmd) {
++ pdu_length -= next_cmd;
+ server->large_buf = next_is_large;
+ if (next_is_large)
+ server->bigbuf = buf = next_buffer;
--- /dev/null
+From 3a42709fa909e22b0be4bb1e2795aa04ada732a3 Mon Sep 17 00:00:00 2001
+From: Paulo Alcantara <pc@manguebit.com>
+Date: Mon, 11 Dec 2023 10:26:43 -0300
+Subject: smb: client: fix OOB in smb2_query_reparse_point()
+
+From: Paulo Alcantara <pc@manguebit.com>
+
+commit 3a42709fa909e22b0be4bb1e2795aa04ada732a3 upstream.
+
+Validate @ioctl_rsp->OutputOffset and @ioctl_rsp->OutputCount so that
+their sum does not wrap to a number smaller than @reparse_buf, which
+would leave us with a wild pointer as follows:
+
+ BUG: unable to handle page fault for address: ffff88809c5cd45f
+ #PF: supervisor read access in kernel mode
+ #PF: error_code(0x0000) - not-present page
+ PGD 4a01067 P4D 4a01067 PUD 0
+ Oops: 0000 [#1] PREEMPT SMP NOPTI
+ CPU: 2 PID: 1260 Comm: mount.cifs Not tainted 6.7.0-rc4 #2
+ Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
+ rel-1.16.2-3-gd478f380-rebuilt.opensuse.org 04/01/2014
+ RIP: 0010:smb2_query_reparse_point+0x3e0/0x4c0 [cifs]
+ Code: ff ff e8 f3 51 fe ff 41 89 c6 58 5a 45 85 f6 0f 85 14 fe ff ff
+ 49 8b 57 48 8b 42 60 44 8b 42 64 42 8d 0c 00 49 39 4f 50 72 40 <8b>
+ 04 02 48 8b 9d f0 fe ff ff 49 8b 57 50 89 03 48 8b 9d e8 fe ff
+ RSP: 0018:ffffc90000347a90 EFLAGS: 00010212
+ RAX: 000000008000001f RBX: ffff88800ae11000 RCX: 00000000000000ec
+ RDX: ffff88801c5cd440 RSI: 0000000000000000 RDI: ffffffff82004aa4
+ RBP: ffffc90000347bb0 R08: 00000000800000cd R09: 0000000000000001
+ R10: 0000000000000000 R11: 0000000000000024 R12: ffff8880114d4100
+ R13: ffff8880114d4198 R14: 0000000000000000 R15: ffff8880114d4000
+ FS: 00007f02c07babc0(0000) GS:ffff88806ba00000(0000)
+ knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: ffff88809c5cd45f CR3: 0000000011750000 CR4: 0000000000750ef0
+ PKRU: 55555554
+ Call Trace:
+ <TASK>
+ ? __die+0x23/0x70
+ ? page_fault_oops+0x181/0x480
+ ? search_module_extables+0x19/0x60
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? exc_page_fault+0x1b6/0x1c0
+ ? asm_exc_page_fault+0x26/0x30
+ ? _raw_spin_unlock_irqrestore+0x44/0x60
+ ? smb2_query_reparse_point+0x3e0/0x4c0 [cifs]
+ cifs_get_fattr+0x16e/0xa50 [cifs]
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? lock_acquire+0xbf/0x2b0
+ cifs_root_iget+0x163/0x5f0 [cifs]
+ cifs_smb3_do_mount+0x5bd/0x780 [cifs]
+ smb3_get_tree+0xd9/0x290 [cifs]
+ vfs_get_tree+0x2c/0x100
+ ? capable+0x37/0x70
+ path_mount+0x2d7/0xb80
+ ? srso_alias_return_thunk+0x5/0xfbef5
+ ? _raw_spin_unlock_irqrestore+0x44/0x60
+ __x64_sys_mount+0x11a/0x150
+ do_syscall_64+0x47/0xf0
+ entry_SYSCALL_64_after_hwframe+0x6f/0x77
+ RIP: 0033:0x7f02c08d5b1e
+
+Fixes: 2e4564b31b64 ("smb3: add support for stat of WSL reparse points for special file types")
+Cc: stable@vger.kernel.org
+Reported-by: Robert Morris <rtm@csail.mit.edu>
+Signed-off-by: Paulo Alcantara (SUSE) <pc@manguebit.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/smb/client/smb2ops.c | 26 ++++++++++++++++----------
+ 1 file changed, 16 insertions(+), 10 deletions(-)
+
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -3122,7 +3122,7 @@ smb2_query_reparse_tag(const unsigned in
+ struct kvec close_iov[1];
+ struct smb2_ioctl_rsp *ioctl_rsp;
+ struct reparse_data_buffer *reparse_buf;
+- u32 plen;
++ u32 off, count, len;
+
+ cifs_dbg(FYI, "%s: path: %s\n", __func__, full_path);
+
+@@ -3202,16 +3202,22 @@ smb2_query_reparse_tag(const unsigned in
+ */
+ if (rc == 0) {
+ /* See MS-FSCC 2.3.23 */
++ off = le32_to_cpu(ioctl_rsp->OutputOffset);
++ count = le32_to_cpu(ioctl_rsp->OutputCount);
++ if (check_add_overflow(off, count, &len) ||
++ len > rsp_iov[1].iov_len) {
++ cifs_tcon_dbg(VFS, "%s: invalid ioctl: off=%d count=%d\n",
++ __func__, off, count);
++ rc = -EIO;
++ goto query_rp_exit;
++ }
+
+- reparse_buf = (struct reparse_data_buffer *)
+- ((char *)ioctl_rsp +
+- le32_to_cpu(ioctl_rsp->OutputOffset));
+- plen = le32_to_cpu(ioctl_rsp->OutputCount);
+-
+- if (plen + le32_to_cpu(ioctl_rsp->OutputOffset) >
+- rsp_iov[1].iov_len) {
+- cifs_tcon_dbg(FYI, "srv returned invalid ioctl len: %d\n",
+- plen);
++ reparse_buf = (void *)((u8 *)ioctl_rsp + off);
++ len = sizeof(*reparse_buf);
++ if (count < len ||
++ count < le16_to_cpu(reparse_buf->ReparseDataLength) + len) {
++ cifs_tcon_dbg(VFS, "%s: invalid ioctl: off=%d count=%d\n",
++ __func__, off, count);
+ rc = -EIO;
+ goto query_rp_exit;
+ }
--- /dev/null
+From c12296bbecc488623b7d1932080e394d08f3226b Mon Sep 17 00:00:00 2001
+From: Florent Revest <revest@chromium.org>
+Date: Wed, 6 Dec 2023 13:37:18 +0100
+Subject: team: Fix use-after-free when an option instance allocation fails
+
+From: Florent Revest <revest@chromium.org>
+
+commit c12296bbecc488623b7d1932080e394d08f3226b upstream.
+
+In __team_options_register, team_options are allocated and appended to
+the team's option_list.
+If one option instance allocation fails, the "inst_rollback" cleanup
+path frees the previously allocated options but doesn't remove them from
+the team's option_list.
+This leaves dangling pointers that can be dereferenced later by other
+parts of the team driver that iterate over options.
+
+This patch fixes the cleanup path to remove the dangling pointers from
+the list.
+
+As far as I can tell, this UAF doesn't have many security implications
+since it would be fairly hard to exploit (an attacker would need to make
+the allocation of that specific small object fail), but it's still nice
+to fix.
+
+Cc: stable@vger.kernel.org
+Fixes: 80f7c6683fe0 ("team: add support for per-port options")
+Signed-off-by: Florent Revest <revest@chromium.org>
+Reviewed-by: Jiri Pirko <jiri@nvidia.com>
+Reviewed-by: Hangbin Liu <liuhangbin@gmail.com>
+Link: https://lore.kernel.org/r/20231206123719.1963153-1-revest@chromium.org
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/team/team.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -285,8 +285,10 @@ static int __team_options_register(struc
+ return 0;
+
+ inst_rollback:
+- for (i--; i >= 0; i--)
++ for (i--; i >= 0; i--) {
+ __team_option_inst_del_option(team, dst_opts[i]);
++ list_del(&dst_opts[i]->list);
++ }
+
+ i = option_count;
+ alloc_rollback: