--- /dev/null
+From hughd@google.com Mon Mar 23 10:37:40 2026
+From: Hugh Dickins <hughd@google.com>
+Date: Mon, 23 Mar 2026 02:37:35 -0700 (PDT)
+Subject: mm: shmem: avoid unpaired folio_unlock() in shmem_swapin_folio()
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Hugh Dickins <hughd@google.com>, Andrew Morton <akpm@linux-foundation.org>, Baolin Wang <baolin.wang@linux.alibaba.com>, Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>, Chris Li <chrisl@kernel.org>, David Hildenbrand <david@kernel.org>, Dev Jain <dev.jain@arm.com>, Greg Thelen <gthelen@google.com>, Guenter Roeck <groeck@google.com>, Kairui Song <kasong@tencent.com>, Kemeng Shi <shikemeng@huaweicloud.com>, Lance Yang <lance.yang@linux.dev>, Matthew Wilcox <willy@infradead.org>, Nhat Pham <nphamcs@gmail.com>, linux-mm@kvack.org, stable@vger.kernel.org
+Message-ID: <49bbe4fa-b678-1023-db47-99a730e2827f@google.com>
+
+From: Kemeng Shi <shikemeng@huaweicloud.com>
+
+commit e08d5f515613a9860bfee7312461a19f422adb5e upstream.
+
+If we get a folio from swap_cache_get_folio() successfully but encounter a
+failure before the folio is locked, the failure path will unlock a folio
+that was never locked.
+
+Put the folio and set it to NULL when a failure occurs before the folio is
+locked to fix the issue.
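To illustrate the shape of the fix, here is a minimal userspace sketch (not kernel code; `toy_folio` and `toy_swapin` are illustrative stand-ins): a shared failure label that unlocks and puts a folio must only see folios that were actually locked, so the early-failure branch drops its reference and clears the pointer first.

```c
#include <assert.h>
#include <stddef.h>
#include <stdbool.h>

/* Toy stand-in for struct folio, tracking lock state and refcount. */
struct toy_folio {
	bool locked;
	int refcount;
};

static void toy_folio_lock(struct toy_folio *f)   { f->locked = true; }
static void toy_folio_unlock(struct toy_folio *f) { f->locked = false; }
static void toy_folio_put(struct toy_folio *f)    { f->refcount--; }

/*
 * Mimics the fixed shmem_swapin_folio() control flow: if the split
 * fails while the swap-cache folio is still unlocked, put it and clear
 * the pointer so the shared "failed" label does not unlock it.
 */
static int toy_swapin(struct toy_folio *folio, int split_order)
{
	int error = 0;

	if (split_order < 0) {
		/* the fix: drop and forget the folio before the jump */
		toy_folio_put(folio);
		folio = NULL;
		error = split_order;
		goto failed;
	}
	toy_folio_lock(folio);
	/* ... insertion into the mapping would happen here ... */
	toy_folio_unlock(folio);
	toy_folio_put(folio);
	return 0;
failed:
	if (folio) {		/* only unlock what was locked */
		toy_folio_unlock(folio);
		toy_folio_put(folio);
	}
	return error;
}
```

Without the `folio = NULL`, the failure label would call the unlock on a folio whose lock was never taken, which in the kernel triggers a VM_BUG_ON.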
+
+Link: https://lkml.kernel.org/r/20250516170939.965736-1-shikemeng@huaweicloud.com
+Link: https://lkml.kernel.org/r/20250516170939.965736-2-shikemeng@huaweicloud.com
+Fixes: 058313515d5a ("mm: shmem: fix potential data corruption during shmem swapin")
+Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
+Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
+Reviewed-by: Kairui Song <kasong@tencent.com>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: kernel test robot <oliver.sang@intel.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+
+[ hughd: removed series cover letter comments ]
+Signed-off-by: Hugh Dickins <hughd@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2198,6 +2198,8 @@ static int shmem_swapin_folio(struct ino
+ */
+ split_order = shmem_split_large_entry(inode, index, swap, gfp);
+ if (split_order < 0) {
++ folio_put(folio);
++ folio = NULL;
+ error = split_order;
+ goto failed;
+ }
--- /dev/null
+From stable+bounces-227930-greg=kroah.com@vger.kernel.org Mon Mar 23 10:34:29 2026
+From: Hugh Dickins <hughd@google.com>
+Date: Mon, 23 Mar 2026 02:34:19 -0700 (PDT)
+Subject: mm: shmem: fix potential data corruption during shmem swapin
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Hugh Dickins <hughd@google.com>, Andrew Morton <akpm@linux-foundation.org>, Baolin Wang <baolin.wang@linux.alibaba.com>, Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>, Chris Li <chrisl@kernel.org>, David Hildenbrand <david@kernel.org>, Dev Jain <dev.jain@arm.com>, Greg Thelen <gthelen@google.com>, Guenter Roeck <groeck@google.com>, Kairui Song <kasong@tencent.com>, Kemeng Shi <shikemeng@huaweicloud.com>, Lance Yang <lance.yang@linux.dev>, Matthew Wilcox <willy@infradead.org>, Nhat Pham <nphamcs@gmail.com>, linux-mm@kvack.org, stable@vger.kernel.org
+Message-ID: <0e918493-29b1-de47-9fca-b1fa93156d63@google.com>
+
+From: Baolin Wang <baolin.wang@linux.alibaba.com>
+
+commit 058313515d5aab10d0a01dd634f92ed4a4e71d4c upstream.
+
+Alex and Kairui reported some issues (system hang or data corruption) when
+swapping out or swapping in large shmem folios. This is especially easy
+to reproduce when the tmpfs is mounted with the 'huge=within_size'
+parameter. Thanks to Kairui's reproducer, the issue can be easily
+replicated.
+
+The root cause of the problem is that swap readahead may asynchronously
+swap in order 0 folios into the swap cache, while the shmem mapping can
+still store large swap entries. Then an order 0 folio is inserted into
+the shmem mapping without splitting the large swap entry, which overwrites
+the original large swap entry, leading to data corruption.
+
+When getting a folio from the swap cache, we should split the large swap
+entry stored in the shmem mapping if the orders do not match, to fix this
+issue.
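The arithmetic in the fix's recalculation step can be sketched in userspace (illustrative only; `recalc_swap_offset` and the toy `round_down` are not the kernel helpers): after a large entry of order `split_order` is split, the swap offset for `index` is rebuilt from the old order alignment.

```c
#include <assert.h>

typedef unsigned long pgoff_t;

/* Toy version of the kernel's round_down() for power-of-two sizes. */
#define toy_round_down(x, y) ((x) & ~((unsigned long)(y) - 1))

/*
 * After splitting a large swap entry of order `split_order`, the page at
 * `index` maps to the base swap offset plus its position inside the old
 * large entry, mirroring:
 *   offset = index - round_down(index, 1 << split_order);
 *   swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
 */
static unsigned long recalc_swap_offset(unsigned long base_offset,
					pgoff_t index, int split_order)
{
	pgoff_t offset = index - toy_round_down(index, 1UL << split_order);

	return base_offset + offset;
}
```

For example, with an order-4 (16-page) entry at swap offset 100, index 21 falls 5 pages into the aligned block starting at index 16, so its new order-0 entry gets swap offset 105.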
+
+Link: https://lkml.kernel.org/r/2fe47c557e74e9df5fe2437ccdc6c9115fa1bf70.1740476943.git.baolin.wang@linux.alibaba.com
+Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
+Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
+Reported-by: Alex Xu (Hello71) <alex_y_xu@yahoo.ca>
+Reported-by: Kairui Song <ryncsn@gmail.com>
+Closes: https://lore.kernel.org/all/1738717785.im3r5g2vxc.none@localhost/
+Tested-by: Kairui Song <kasong@tencent.com>
+Cc: David Hildenbrand <david@redhat.com>
+Cc: Lance Yang <ioworker0@gmail.com>
+Cc: Matthew Wilcox <willy@infradead.org>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+
+[ hughd: removed skip_swapcache dependencies ]
+Signed-off-by: Hugh Dickins <hughd@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c | 30 +++++++++++++++++++++++++++---
+ 1 file changed, 27 insertions(+), 3 deletions(-)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2132,7 +2132,7 @@ static int shmem_swapin_folio(struct ino
+ struct swap_info_struct *si;
+ struct folio *folio = NULL;
+ swp_entry_t swap;
+- int error, nr_pages;
++ int error, nr_pages, order, split_order;
+
+ VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
+ swap = radix_to_swp_entry(*foliop);
+@@ -2151,8 +2151,8 @@ static int shmem_swapin_folio(struct ino
+
+ /* Look it up and read it in.. */
+ folio = swap_cache_get_folio(swap, NULL, 0);
++ order = xa_get_order(&mapping->i_pages, index);
+ if (!folio) {
+- int split_order;
+
+ /* Or update major stats only when swapin succeeds?? */
+ if (fault_type) {
+@@ -2189,13 +2189,37 @@ static int shmem_swapin_folio(struct ino
+ error = -ENOMEM;
+ goto failed;
+ }
++ } else if (order != folio_order(folio)) {
++ /*
++ * Swap readahead may swap in order 0 folios into swapcache
++ * asynchronously, while the shmem mapping can still stores
++ * large swap entries. In such cases, we should split the
++ * large swap entry to prevent possible data corruption.
++ */
++ split_order = shmem_split_large_entry(inode, index, swap, gfp);
++ if (split_order < 0) {
++ error = split_order;
++ goto failed;
++ }
++
++ /*
++ * If the large swap entry has already been split, it is
++ * necessary to recalculate the new swap entry based on
++ * the old order alignment.
++ */
++ if (split_order > 0) {
++ pgoff_t offset = index - round_down(index, 1 << split_order);
++
++ swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
++ }
+ }
+
+ /* We have to do this with folio locked to prevent races */
+ folio_lock(folio);
+ if (!folio_test_swapcache(folio) ||
+ folio->swap.val != swap.val ||
+- !shmem_confirm_swap(mapping, index, swap)) {
++ !shmem_confirm_swap(mapping, index, swap) ||
++ xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
+ error = -EEXIST;
+ goto unlock;
+ }
--- /dev/null
+From stable+bounces-227935-greg=kroah.com@vger.kernel.org Mon Mar 23 10:43:43 2026
+From: Hugh Dickins <hughd@google.com>
+Date: Mon, 23 Mar 2026 02:43:31 -0700 (PDT)
+Subject: mm/shmem, swap: avoid redundant Xarray lookup during swapin
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Hugh Dickins <hughd@google.com>, Andrew Morton <akpm@linux-foundation.org>, Baolin Wang <baolin.wang@linux.alibaba.com>, Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>, Chris Li <chrisl@kernel.org>, David Hildenbrand <david@kernel.org>, Dev Jain <dev.jain@arm.com>, Greg Thelen <gthelen@google.com>, Guenter Roeck <groeck@google.com>, Kairui Song <kasong@tencent.com>, Kemeng Shi <shikemeng@huaweicloud.com>, Lance Yang <lance.yang@linux.dev>, Matthew Wilcox <willy@infradead.org>, Nhat Pham <nphamcs@gmail.com>, linux-mm@kvack.org, stable@vger.kernel.org
+Message-ID: <ebffe1a4-f575-8a38-2584-70cbfeda6913@google.com>
+
+From: Kairui Song <kasong@tencent.com>
+
+commit 0cfc0e7e3d062b93e9eec6828de000981cdfb152 upstream.
+
+Currently shmem calls xa_get_order to get the swap radix entry order,
+requiring a full tree walk. This can be easily combined with the swap
+entry value checking (shmem_confirm_swap) to avoid the duplicated lookup
+and abort early if the entry is already gone, which should improve
+performance.
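The new return convention of shmem_confirm_swap() can be sketched with a toy slot in userspace (illustrative types, not the real XArray walk): one lookup both validates the stored swap entry and reports its order, with -1 meaning the entry is gone or was replaced by a racing thread.

```c
#include <assert.h>

/* Toy mapping slot: the stored swap entry value and its order. */
struct toy_slot {
	unsigned long entry;
	int order;
};

/*
 * Combined check, mirroring the patched shmem_confirm_swap(): returns
 * the entry's order if it still matches the expected swap value, else
 * -1, so callers can both confirm the entry and learn its order from a
 * single lookup instead of a separate xa_get_order() walk.
 */
static int toy_confirm_swap(const struct toy_slot *slot,
			    unsigned long expected)
{
	if (slot->entry != expected)
		return -1;	/* raced: brought back or split */
	return slot->order;
}
```

Callers then test `< 0` where they previously tested the boolean result, as the diff below does.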
+
+Link: https://lkml.kernel.org/r/20250728075306.12704-1-ryncsn@gmail.com
+Link: https://lkml.kernel.org/r/20250728075306.12704-3-ryncsn@gmail.com
+Signed-off-by: Kairui Song <kasong@tencent.com>
+Reviewed-by: Kemeng Shi <shikemeng@huaweicloud.com>
+Reviewed-by: Dev Jain <dev.jain@arm.com>
+Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
+Cc: Baoquan He <bhe@redhat.com>
+Cc: Barry Song <baohua@kernel.org>
+Cc: Chris Li <chrisl@kernel.org>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
+Cc: Nhat Pham <nphamcs@gmail.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+
+Stable-dep-of: 8a1968bd997f ("mm/shmem, swap: fix race of truncate and swap entry split")
+[ hughd: removed series cover letter and skip_swapcache dependencies ]
+Signed-off-by: Hugh Dickins <hughd@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c | 34 +++++++++++++++++++++++++---------
+ 1 file changed, 25 insertions(+), 9 deletions(-)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -499,15 +499,27 @@ static int shmem_replace_entry(struct ad
+
+ /*
+ * Sometimes, before we decide whether to proceed or to fail, we must check
+- * that an entry was not already brought back from swap by a racing thread.
++ * that an entry was not already brought back or split by a racing thread.
+ *
+ * Checking folio is not enough: by the time a swapcache folio is locked, it
+ * might be reused, and again be swapcache, using the same swap as before.
++ * Returns the swap entry's order if it still presents, else returns -1.
+ */
+-static bool shmem_confirm_swap(struct address_space *mapping,
+- pgoff_t index, swp_entry_t swap)
++static int shmem_confirm_swap(struct address_space *mapping, pgoff_t index,
++ swp_entry_t swap)
+ {
+- return xa_load(&mapping->i_pages, index) == swp_to_radix_entry(swap);
++ XA_STATE(xas, &mapping->i_pages, index);
++ int ret = -1;
++ void *entry;
++
++ rcu_read_lock();
++ do {
++ entry = xas_load(&xas);
++ if (entry == swp_to_radix_entry(swap))
++ ret = xas_get_order(&xas);
++ } while (xas_retry(&xas, entry));
++ rcu_read_unlock();
++ return ret;
+ }
+
+ /*
+@@ -2155,16 +2167,20 @@ static int shmem_swapin_folio(struct ino
+ return -EIO;
+
+ si = get_swap_device(swap);
+- if (!si) {
+- if (!shmem_confirm_swap(mapping, index, swap))
++ order = shmem_confirm_swap(mapping, index, swap);
++ if (unlikely(!si)) {
++ if (order < 0)
+ return -EEXIST;
+ else
+ return -EINVAL;
+ }
++ if (unlikely(order < 0)) {
++ put_swap_device(si);
++ return -EEXIST;
++ }
+
+ /* Look it up and read it in.. */
+ folio = swap_cache_get_folio(swap, NULL, 0);
+- order = xa_get_order(&mapping->i_pages, index);
+ if (!folio) {
+
+ /* Or update major stats only when swapin succeeds?? */
+@@ -2241,7 +2257,7 @@ static int shmem_swapin_folio(struct ino
+ */
+ folio_lock(folio);
+ if (!folio_test_swapcache(folio) ||
+- !shmem_confirm_swap(mapping, index, swap) ||
++ shmem_confirm_swap(mapping, index, swap) < 0 ||
+ folio->swap.val != swap.val) {
+ error = -EEXIST;
+ goto unlock;
+@@ -2284,7 +2300,7 @@ static int shmem_swapin_folio(struct ino
+ *foliop = folio;
+ return 0;
+ failed:
+- if (!shmem_confirm_swap(mapping, index, swap))
++ if (shmem_confirm_swap(mapping, index, swap) < 0)
+ error = -EEXIST;
+ if (error == -EIO)
+ shmem_set_folio_swapin_error(inode, index, folio, swap);
--- /dev/null
+From hughd@google.com Mon Mar 23 10:40:20 2026
+From: Hugh Dickins <hughd@google.com>
+Date: Mon, 23 Mar 2026 02:40:16 -0700 (PDT)
+Subject: mm/shmem, swap: improve cached mTHP handling and fix potential hang
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Hugh Dickins <hughd@google.com>, Andrew Morton <akpm@linux-foundation.org>, Baolin Wang <baolin.wang@linux.alibaba.com>, Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>, Chris Li <chrisl@kernel.org>, David Hildenbrand <david@kernel.org>, Dev Jain <dev.jain@arm.com>, Greg Thelen <gthelen@google.com>, Guenter Roeck <groeck@google.com>, Kairui Song <kasong@tencent.com>, Kemeng Shi <shikemeng@huaweicloud.com>, Lance Yang <lance.yang@linux.dev>, Matthew Wilcox <willy@infradead.org>, Nhat Pham <nphamcs@gmail.com>, linux-mm@kvack.org, stable@vger.kernel.org
+Message-ID: <318493ca-2bc3-acad-43bf-b9f694e643b0@google.com>
+
+From: Kairui Song <kasong@tencent.com>
+
+commit 5c241ed8d031693dadf33dd98ed2e7cc363e9b66 upstream.
+
+The current swap-in code assumes that, when a swap entry in shmem mapping
+is order 0, its cached folios (if present) must be order 0 too, which
+turns out not to always be correct.
+
+The problem is that shmem_split_large_entry is called before verifying
+that the folio will eventually be swapped in; one possible race is:
+
+ CPU1 CPU2
+shmem_swapin_folio
+/* swap in of order > 0 swap entry S1 */
+ folio = swap_cache_get_folio
+ /* folio = NULL */
+ order = xa_get_order
+ /* order > 0 */
+ folio = shmem_swap_alloc_folio
+ /* mTHP alloc failure, folio = NULL */
+ <... Interrupted ...>
+ shmem_swapin_folio
+ /* S1 is swapped in */
+ shmem_writeout
+ /* S1 is swapped out, folio cached */
+ shmem_split_large_entry(..., S1)
+ /* S1 is split, but the folio covering it has order > 0 now */
+
+Now any following swapin of S1 will hang: `xa_get_order` returns 0, and
+folio lookup will return a folio with order > 0. The
+`xa_get_order(&mapping->i_pages, index) != folio_order(folio)` check will
+always be true, causing swap-in to return -EEXIST.
+
+This is fragile.  So fix it up by allowing a larger folio to be seen in
+the swap cache, checking that the whole shmem mapping range covered by
+the swapin has the right swap value upon inserting the folio, and
+dropping the redundant tree walks before the insertion.
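The new whole-range check on insertion can be sketched in userspace (toy types; the kernel walks conflicting entries with xas_for_each_conflict() and xas_get_order()): every conflicting entry must be the next expected swap value, and together they must cover exactly the nr pages being inserted.

```c
#include <assert.h>
#include <stddef.h>

/* Toy conflicting entry: stored swap value and the entry's order. */
struct toy_entry {
	unsigned long val;
	int order;
};

/*
 * Mirrors the patched shmem_add_to_page_cache() conflict loop: advance
 * an expected swap value across the conflicting entries
 * (iter += 1 << order) and finally require the run to end exactly nr
 * pages past the starting value (iter - nr == swap.val).
 */
static int toy_range_matches(const struct toy_entry *entries, size_t n,
			     unsigned long swap_val, unsigned long nr)
{
	unsigned long iter = swap_val;
	size_t i;

	for (i = 0; i < n; i++) {
		/* a hole or foreign entry means a raced modification */
		if (entries[i].val != iter)
			return 0;
		iter += 1UL << entries[i].order;
	}
	return iter - nr == swap_val;
}
```

For instance, inserting a 4-page folio for swap value 100 accepts the run {100 (order 0), 101 (order 0), 102 (order 1)}, since those entries cover exactly pages 100..103, but rejects any run with a gap or with the wrong total coverage.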
+
+This will actually improve performance, as it avoids two redundant Xarray
+tree walks in the hot path, and the only side effect is that in the
+failure path, shmem may redundantly reallocate a few folios causing
+temporary slight memory pressure.
+
+Worth noting, it may seem the order and value check before inserting
+might help reduce the lock contention, which is not true.  The swap
+cache layer ensures a raced swapin will either see a swap cache folio or
+fail to do a swapin (we have the SWAP_HAS_CACHE bit even if swap cache is
+bypassed), so holding the folio lock and checking the folio flag is
+already good enough for avoiding the lock contention. The chance that a
+folio passes the swap entry value check but the shmem mapping slot has
+changed should be very low.
+
+Link: https://lkml.kernel.org/r/20250728075306.12704-1-ryncsn@gmail.com
+Link: https://lkml.kernel.org/r/20250728075306.12704-2-ryncsn@gmail.com
+Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
+Signed-off-by: Kairui Song <kasong@tencent.com>
+Reviewed-by: Kemeng Shi <shikemeng@huaweicloud.com>
+Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
+Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
+Cc: Baoquan He <bhe@redhat.com>
+Cc: Barry Song <baohua@kernel.org>
+Cc: Chris Li <chrisl@kernel.org>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
+Cc: Nhat Pham <nphamcs@gmail.com>
+Cc: Dev Jain <dev.jain@arm.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+
+[ hughd: removed skip_swapcache dependencies ]
+Signed-off-by: Hugh Dickins <hughd@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c | 39 ++++++++++++++++++++++++++++++---------
+ 1 file changed, 30 insertions(+), 9 deletions(-)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -794,7 +794,9 @@ static int shmem_add_to_page_cache(struc
+ pgoff_t index, void *expected, gfp_t gfp)
+ {
+ XA_STATE_ORDER(xas, &mapping->i_pages, index, folio_order(folio));
+- long nr = folio_nr_pages(folio);
++ unsigned long nr = folio_nr_pages(folio);
++ swp_entry_t iter, swap;
++ void *entry;
+
+ VM_BUG_ON_FOLIO(index != round_down(index, nr), folio);
+ VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+@@ -806,14 +808,25 @@ static int shmem_add_to_page_cache(struc
+
+ gfp &= GFP_RECLAIM_MASK;
+ folio_throttle_swaprate(folio, gfp);
++ swap = radix_to_swp_entry(expected);
+
+ do {
++ iter = swap;
+ xas_lock_irq(&xas);
+- if (expected != xas_find_conflict(&xas)) {
+- xas_set_err(&xas, -EEXIST);
+- goto unlock;
++ xas_for_each_conflict(&xas, entry) {
++ /*
++ * The range must either be empty, or filled with
++ * expected swap entries. Shmem swap entries are never
++ * partially freed without split of both entry and
++ * folio, so there shouldn't be any holes.
++ */
++ if (!expected || entry != swp_to_radix_entry(iter)) {
++ xas_set_err(&xas, -EEXIST);
++ goto unlock;
++ }
++ iter.val += 1 << xas_get_order(&xas);
+ }
+- if (expected && xas_find_conflict(&xas)) {
++ if (expected && iter.val - nr != swap.val) {
+ xas_set_err(&xas, -EEXIST);
+ goto unlock;
+ }
+@@ -2189,7 +2202,7 @@ static int shmem_swapin_folio(struct ino
+ error = -ENOMEM;
+ goto failed;
+ }
+- } else if (order != folio_order(folio)) {
++ } else if (order > folio_order(folio)) {
+ /*
+ * Swap readahead may swap in order 0 folios into swapcache
+ * asynchronously, while the shmem mapping can still stores
+@@ -2214,14 +2227,22 @@ static int shmem_swapin_folio(struct ino
+
+ swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
+ }
++ } else if (order < folio_order(folio)) {
++ swap.val = round_down(swap.val, 1 << folio_order(folio));
++ index = round_down(index, 1 << folio_order(folio));
+ }
+
+- /* We have to do this with folio locked to prevent races */
++ /*
++ * We have to do this with the folio locked to prevent races.
++ * The shmem_confirm_swap below only checks if the first swap
++ * entry matches the folio, that's enough to ensure the folio
++ * is not used outside of shmem, as shmem swap entries
++ * and swap cache folios are never partially freed.
++ */
+ folio_lock(folio);
+ if (!folio_test_swapcache(folio) ||
+- folio->swap.val != swap.val ||
+ !shmem_confirm_swap(mapping, index, swap) ||
+- xa_get_order(&mapping->i_pages, index) != folio_order(folio)) {
++ folio->swap.val != swap.val) {
+ error = -EEXIST;
+ goto unlock;
+ }
x86-platform-uv-handle-deconfigured-sockets.patch
mtd-spi-nor-core-avoid-odd-length-address-reads-on-8d-8d-8d-mode.patch
mtd-spi-nor-core-avoid-odd-length-address-writes-in-8d-8d-8d-mode.patch
+mm-shmem-fix-potential-data-corruption-during-shmem-swapin.patch
+mm-shmem-avoid-unpaired-folio_unlock-in-shmem_swapin_folio.patch
+mm-shmem-swap-improve-cached-mthp-handling-and-fix-potential-hang.patch
+mm-shmem-swap-avoid-redundant-xarray-lookup-during-swapin.patch