--- /dev/null
+From ebc5951eea499314f6fbbde20e295f1345c67330 Mon Sep 17 00:00:00 2001
+From: Andrea Righi <andrea.righi@canonical.com>
+Date: Mon, 1 Jun 2020 21:48:43 -0700
+Subject: mm: swap: properly update readahead statistics in unuse_pte_range()
+
+From: Andrea Righi <andrea.righi@canonical.com>
+
+commit ebc5951eea499314f6fbbde20e295f1345c67330 upstream.
+
+In unuse_pte_range() we blindly swap in pages without checking whether
+the swap entry is already present in the swap cache.
+
+By doing this, the hit/miss ratio used by the swap readahead heuristic
+is not properly updated, which leads to suboptimal performance during
+swapoff.
+
+Tracing the distribution of the readahead size returned by the swap
+readahead heuristic during swapoff shows that a small readahead size is
+used most of the time, as if we had only misses (this happens with both
+cluster and vma readahead), for example:
+
+r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
+      COUNT  EVENT
+      36948  $retval = 8
+      44151  $retval = 4
+      49290  $retval = 1
+     527771  $retval = 2
+
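+The shape of this distribution follows from how the readahead window is
+sized. A simplified model of the sizing logic (illustrative only, not
+the exact kernel code):
+
+  /*
+   * Grow the readahead window with recent hits, shrink it on misses.
+   * Names and constants are illustrative.
+   */
+  static unsigned int ra_window(unsigned int hits, unsigned int prev_win,
+                                unsigned int max_pages)
+  {
+          unsigned int pages = hits + 2;  /* no hits -> near the minimum */
+
+          if (pages > 2) {
+                  unsigned int roundup = 4;
+
+                  /* round up to the next power of two */
+                  while (roundup < pages)
+                          roundup <<= 1;
+                  pages = roundup;
+          }
+          if (pages > max_pages)
+                  pages = max_pages;
+          if (pages < prev_win / 2)       /* don't shrink too fast */
+                  pages = prev_win / 2;
+          return pages;
+  }
+
+With no hits recorded, "hits + 2" keeps the window pinned near the
+minimum, which matches the distribution above. (The histograms here
+look like bcc argdist -C output; that is an assumption, any
+kretprobe-based tool works.)
+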
+Checking whether the swap entry is already present in the swap cache
+instead allows the readahead statistics to be properly updated, and the
+heuristic behaves better during swapoff, selecting a bigger readahead
+size:
+
+r::swapin_nr_pages(unsigned long offset):unsigned long:$retval
+      COUNT  EVENT
+       1618  $retval = 1
+       4960  $retval = 2
+      41315  $retval = 4
+     103521  $retval = 8
+
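+The counter driving that model is only incremented on a swap cache hit;
+a sketch of the idea (hypothetical helper names, not the actual
+mm/swap_state.c code):
+
+  /* Look in the swap cache first; a hit feeds the heuristic above. */
+  struct page *swap_cache_first(swp_entry_t entry)
+  {
+          struct page *page = swap_cache_find(entry);     /* hypothetical */
+
+          if (page)
+                  atomic_inc(&swapin_readahead_hits);     /* "hits" input */
+          return page;
+  }
+
+The old unuse_pte_range() never performed this lookup, so the counter
+only ever saw misses.
+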
+In terms of swapoff performance the result is the following:
+
+Testing environment
+===================
+
+ - Host:
+      CPU: 1.8GHz Intel Core i7-8565U (quad-core, 8MB cache)
+      HDD: PC401 NVMe SK hynix 512GB
+      MEM: 16GB
+
+ - Guest (kvm):
+      8GB of RAM
+      virtio block driver
+      16GB swap file on ext4 (/swapfile)
+
+Test case
+=========
+ - allocate 85% of memory (a minimal sketch follows this list)
+ - `systemctl hibernate` to force all the pages to be swapped out to
+   the swap file
+ - resume the system
+ - measure the time that swapoff takes to complete:
+     # /usr/bin/time swapoff /swapfile
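+
+For the first step, a minimal memory hog could look like this
+(illustrative; any tool that touches the pages works):
+
+  #include <stdlib.h>
+  #include <string.h>
+  #include <unistd.h>
+
+  int main(void)
+  {
+          long pages = sysconf(_SC_PHYS_PAGES);
+          long psize = sysconf(_SC_PAGE_SIZE);
+          size_t len = (size_t)(pages * 0.85) * psize;
+          char *buf = malloc(len);
+
+          if (!buf)
+                  return 1;
+          memset(buf, 0xaa, len); /* touch every page */
+          pause();                /* stay resident until hibernation */
+          return 0;
+  }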
+
+Result (swapoff time)
+=====================
+                   5.6 vanilla   5.6 w/ this patch
+                   -----------   -----------------
+cluster-readahead       22.09s              12.19s
+    vma-readahead       18.20s              15.33s
+
+Conclusion
+==========
+
+The specific use case this patch addresses is improving swapoff
+performance in cloud environments where a VM has been hibernated and
+resumed, and all of its memory needs to be forced back to RAM by
+disabling swap.
+
+This change better exploits the advantages of the readahead heuristic
+during swapoff, which speeds up the resume process of such VMs.
+
+[andrea.righi@canonical.com: update changelog]
+ Link: http://lkml.kernel.org/r/20200418084705.GA147642@xps-13
+Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
+Cc: Minchan Kim <minchan@kernel.org>
+Cc: Anchal Agarwal <anchalag@amazon.com>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Vineeth Remanan Pillai <vpillai@digitalocean.com>
+Cc: Kelley Nielsen <kelleynnn@gmail.com>
+Link: http://lkml.kernel.org/r/20200416180132.GB3352@xps-13
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Luiz Capitulino <luizcap@amazon.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/swapfile.c | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+--- a/mm/swapfile.c
++++ b/mm/swapfile.c
+@@ -1951,10 +1951,14 @@ static int unuse_pte_range(struct vm_are
+
+ pte_unmap(pte);
+ swap_map = &si->swap_map[offset];
+- vmf.vma = vma;
+- vmf.address = addr;
+- vmf.pmd = pmd;
+- page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, &vmf);
++ page = lookup_swap_cache(entry, vma, addr);
++ if (!page) {
++ vmf.vma = vma;
++ vmf.address = addr;
++ vmf.pmd = pmd;
++ page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
++ &vmf);
++ }
+ if (!page) {
+ if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
+ goto try_next;
mm-swapfile-add-cond_resched-in-get_swap_pages.patch
squashfs-fix-handling-and-sanity-checking-of-xattr_ids-count.patch
nvmem-core-fix-cell-removal-on-error.patch
+mm-swap-properly-update-readahead-statistics-in-unuse_pte_range.patch
+xprtrdma-fix-regbuf-data-not-freed-in-rpcrdma_req_create.patch
+udf-avoid-using-stale-lengthofimpuse.patch
--- /dev/null
+From c1ad35dd0548ce947d97aaf92f7f2f9a202951cf Mon Sep 17 00:00:00 2001
+From: Jan Kara <jack@suse.cz>
+Date: Tue, 10 May 2022 12:36:04 +0200
+Subject: udf: Avoid using stale lengthOfImpUse
+
+From: Jan Kara <jack@suse.cz>
+
+commit c1ad35dd0548ce947d97aaf92f7f2f9a202951cf upstream.
+
+udf_write_fi() uses the lengthOfImpUse of the entry it is writing to.
+However, this field has not yet been initialized, so it contains either
+a completely bogus value or the value left over from the last directory
+entry at that place. In either case this is wrong and can lead to
+filesystem corruption or kernel crashes.
+
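+In miniature, the bug is computing a destination offset from a length
+field read out of the destination itself (illustrative struct, not the
+real UDF layout):
+
+  struct fid {
+          unsigned short lengthOfImpUse;
+          unsigned char impUse[];         /* impUse bytes, then the ident */
+  };
+
+  /* Wrong: dst->lengthOfImpUse has not been written yet. */
+  memcpy(dst->impUse + dst->lengthOfImpUse, ident, lfi);
+
+  /* Right: use the already-validated length from the source entry. */
+  memcpy(dst->impUse + liu, ident, lfi);
+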
+Reported-by: butt3rflyh4ck <butterflyhuangxx@gmail.com>
+CC: stable@vger.kernel.org
+Fixes: 979a6e28dd96 ("udf: Get rid of 0-length arrays in struct fileIdentDesc")
+Signed-off-by: Jan Kara <jack@suse.cz>
+[ This patch deviates from the original upstream patch: at lines 77 and
+81 the upstream patch replaces udf_get_fi_ident(sfi) as the first
+argument to memcpy, whereas this tree has (uint8_t *)sfi->fileIdent +
+liu there. Those lines are replaced with the same expression the
+upstream patch passes to memcpy. ]
+Signed-off-by: Nobel Barakat <nobelbarakat@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/udf/namei.c | 9 ++++-----
+ 1 file changed, 4 insertions(+), 5 deletions(-)
+
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -75,12 +75,11 @@ int udf_write_fi(struct inode *inode, st
+
+ if (fileident) {
+ if (adinicb || (offset + lfi < 0)) {
+- memcpy((uint8_t *)sfi->fileIdent + liu, fileident, lfi);
++ memcpy(sfi->impUse + liu, fileident, lfi);
+ } else if (offset >= 0) {
+ memcpy(fibh->ebh->b_data + offset, fileident, lfi);
+ } else {
+- memcpy((uint8_t *)sfi->fileIdent + liu, fileident,
+- -offset);
++ memcpy(sfi->impUse + liu, fileident, -offset);
+ memcpy(fibh->ebh->b_data, fileident - offset,
+ lfi + offset);
+ }
+@@ -89,11 +88,11 @@ int udf_write_fi(struct inode *inode, st
+ offset += lfi;
+
+ if (adinicb || (offset + padlen < 0)) {
+- memset((uint8_t *)sfi->padding + liu + lfi, 0x00, padlen);
++ memset(sfi->impUse + liu + lfi, 0x00, padlen);
+ } else if (offset >= 0) {
+ memset(fibh->ebh->b_data + offset, 0x00, padlen);
+ } else {
+- memset((uint8_t *)sfi->padding + liu + lfi, 0x00, -offset);
++ memset(sfi->impUse + liu + lfi, 0x00, -offset);
+ memset(fibh->ebh->b_data, 0x00, padlen + offset);
+ }
+
--- /dev/null
+From 9181f40fb2952fd59ecb75e7158620c9c669eee3 Mon Sep 17 00:00:00 2001
+From: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
+Date: Sun, 20 Nov 2022 15:34:29 +0800
+Subject: xprtrdma: Fix regbuf data not freed in rpcrdma_req_create()
+
+From: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
+
+commit 9181f40fb2952fd59ecb75e7158620c9c669eee3 upstream.
+
+If the rdma receive buffer allocation fails, rpcrdma_regbuf_free()
+should be called to free the send buffer; otherwise, the buffer data
+will be leaked.
+
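+The underlying rule: a buffer allocated through a wrapper must be
+released through the matching wrapper, or its internal data leaks. In
+miniature (illustrative names, not the actual rpcrdma code):
+
+  struct wrapped_buf {
+          void *data;             /* separately allocated payload */
+  };
+
+  static void wrapped_buf_free(struct wrapped_buf *buf)
+  {
+          kfree(buf->data);       /* a bare kfree(buf) would leak this */
+          kfree(buf);
+  }
+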
+Fixes: bb93a1ae2bf4 ("xprtrdma: Allocate req's regbufs at xprt create time")
+Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
+Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
+[Harshit: Backport to 5.4.y]
+Also make the same change for 'req->rl_rdmabuf', since it has the same
+memory leak problem as 'req->rl_sendbuf' (commit
+b78de1dca00376aaba7a58bb5fe21c1606524abe is not in 5.4.y).
+Signed-off-by: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/sunrpc/xprtrdma/verbs.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -1034,9 +1034,9 @@ struct rpcrdma_req *rpcrdma_req_create(s
+ return req;
+
+ out4:
+- kfree(req->rl_sendbuf);
++ rpcrdma_regbuf_free(req->rl_sendbuf);
+ out3:
+- kfree(req->rl_rdmabuf);
++ rpcrdma_regbuf_free(req->rl_rdmabuf);
+ out2:
+ kfree(req);
+ out1: