--- /dev/null
+From 2771739a7162782c0aa6424b2e3dd874e884a15d Mon Sep 17 00:00:00 2001
+From: Muchun Song <songmuchun@bytedance.com>
+Date: Tue, 22 Mar 2022 14:41:56 -0700
+Subject: mm: fix missing cache flush for all tail pages of compound page
+
+From: Muchun Song <songmuchun@bytedance.com>
+
+commit 2771739a7162782c0aa6424b2e3dd874e884a15d upstream.
+
+The D-cache maintenance inside move_to_new_page() only considers one
+page, so there is still a D-cache maintenance issue for the tail pages
+of a compound page (e.g. THP or HugeTLB).
+
+THP migration is only enabled on x86_64, arm64 and powerpc. Of these,
+arm64 and powerpc need to keep the I-cache and D-cache consistent, and
+they rely on flush_dcache_page() to do so.
+
+But there are no issues on arm64 and powerpc, since they already
+handle compound pages in their icache flush functions. HugeTLB
+migration is enabled on arm, arm64, mips, parisc, powerpc, riscv, s390
+and sh; arm handles the compound page cache flush in
+flush_dcache_page(), but most of the others do not.
+
+In theory, the issue exists on many architectures. Fix it by flushing
+every subpage with flush_dcache_page() instead of switching to
+flush_dcache_folio(), which is not backportable.
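+
+As an illustration only, a minimal sketch of the per-subpage flush
+pattern that the hunk below open-codes (the helper name is hypothetical
+and not part of the patch):
+
+	/* Flush the D-cache of every subpage of a (possibly compound) page. */
+	static void flush_dcache_nr_pages(struct page *page)
+	{
+		int i, nr = compound_nr(page);
+
+		for (i = 0; i < nr; i++)
+			flush_dcache_page(page + i);
+	}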
+
+Link: https://lkml.kernel.org/r/20220210123058.79206-3-songmuchun@bytedance.com
+Fixes: 290408d4a250 ("hugetlb: hugepage migration core")
+Signed-off-by: Muchun Song <songmuchun@bytedance.com>
+Reviewed-by: Zi Yan <ziy@nvidia.com>
+Cc: Axel Rasmussen <axelrasmussen@google.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Fam Zheng <fam.zheng@bytedance.com>
+Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Lars Persson <lars.persson@axis.com>
+Cc: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Peter Xu <peterx@redhat.com>
+Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/migrate.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -948,9 +948,12 @@ static int move_to_new_page(struct page
+ if (!PageMappingFlags(page))
+ page->mapping = NULL;
+
+- if (likely(!is_zone_device_page(newpage)))
+- flush_dcache_page(newpage);
++ if (likely(!is_zone_device_page(newpage))) {
++ int i, nr = compound_nr(newpage);
+
++ for (i = 0; i < nr; i++)
++ flush_dcache_page(newpage + i);
++ }
+ }
+ out:
+ return rc;
--- /dev/null
+From e763243cc6cb1fcc720ec58cfd6e7c35ae90a479 Mon Sep 17 00:00:00 2001
+From: Muchun Song <songmuchun@bytedance.com>
+Date: Tue, 22 Mar 2022 14:41:59 -0700
+Subject: mm: hugetlb: fix missing cache flush in copy_huge_page_from_user()
+
+From: Muchun Song <songmuchun@bytedance.com>
+
+commit e763243cc6cb1fcc720ec58cfd6e7c35ae90a479 upstream.
+
+userfaultfd calls copy_huge_page_from_user(), which does not do any
+cache flushing for the target page. The target page is then mapped
+into user space at a different (user) address, which might alias the
+kernel address that was used to copy the data into it.
+
+Fix this issue by flushing the D-cache in copy_huge_page_from_user().
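+
+Roughly, the loop being patched looks like the sketch below (simplified;
+nth_page() stands in here for the subpage lookup, and the real code also
+handles the allow_pagefault case with kmap_atomic()):
+
+	for (i = 0; i < pages_per_huge_page; i++) {
+		subpage = nth_page(dst_page, i);
+		page_kaddr = kmap(subpage);
+		rc = copy_from_user(page_kaddr,
+				    usr_src + i * PAGE_SIZE, PAGE_SIZE);
+		kunmap(subpage);
+		if (rc)
+			break;
+		flush_dcache_page(subpage);	/* the flush this patch adds */
+		cond_resched();
+	}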
+
+Link: https://lkml.kernel.org/r/20220210123058.79206-4-songmuchun@bytedance.com
+Fixes: fa4d75c1de13 ("userfaultfd: hugetlbfs: add copy_huge_page_from_user for hugetlb userfaultfd support")
+Signed-off-by: Muchun Song <songmuchun@bytedance.com>
+Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Axel Rasmussen <axelrasmussen@google.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Fam Zheng <fam.zheng@bytedance.com>
+Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Lars Persson <lars.persson@axis.com>
+Cc: Peter Xu <peterx@redhat.com>
+Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
+Cc: Zi Yan <ziy@nvidia.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/memory.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -5467,6 +5467,8 @@ long copy_huge_page_from_user(struct pag
+ if (rc)
+ break;
+
++ flush_dcache_page(subpage);
++
+ cond_resched();
+ }
+ return ret_val;
--- /dev/null
+From 046545a661af2beec21de7b90ca0e35f05088a81 Mon Sep 17 00:00:00 2001
+From: Naoya Horiguchi <naoya.horiguchi@nec.com>
+Date: Tue, 22 Mar 2022 14:44:06 -0700
+Subject: mm/hwpoison: fix error page recovered but reported "not recovered"
+
+From: Naoya Horiguchi <naoya.horiguchi@nec.com>
+
+commit 046545a661af2beec21de7b90ca0e35f05088a81 upstream.
+
+When an uncorrected memory error is consumed, there is a race between
+the CMCI from the memory controller reporting an uncorrected error
+with a UCNA signature and the core reporting an SRAR-signature machine
+check when the data is about to be consumed.
+
+If the CMCI wins that race, the page is marked poisoned when
+uc_decode_notifier() calls memory_failure(), and the machine check
+processing code finds the page already poisoned. It calls
+kill_accessing_process() to make sure a SIGBUS is sent, but it returns
+the wrong error code.
+
+Console log looks like this:
+
+ mce: Uncorrected hardware memory error in user-access at 3710b3400
+ Memory failure: 0x3710b3: recovery action for dirty LRU page: Recovered
+ Memory failure: 0x3710b3: already hardware poisoned
+ Memory failure: 0x3710b3: Sending SIGBUS to einj_mem_uc:361438 due to hardware memory corruption
+ mce: Memory error not recovered
+
+kill_accessing_process() is supposed to return -EHWPOISON to signal
+that SIGBUS has already been sent to the process and that
+kill_me_maybe() does not have to send it again. The current code simply
+fails to do this, so fix it to work as intended. This change avoids the
+noisy "Memory error not recovered" message and skips the duplicate
+SIGBUS.
+
+[tony.luck@intel.com: reword some parts of commit message]
+
+Link: https://lkml.kernel.org/r/20220113231117.1021405-1-naoya.horiguchi@linux.dev
+Fixes: a3f5d80ea401 ("mm,hwpoison: send SIGBUS with error virutal address")
+Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
+Reported-by: Youquan Song <youquan.song@intel.com>
+Cc: Tony Luck <tony.luck@intel.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/memory-failure.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -705,8 +705,10 @@ static int kill_accessing_process(struct
+ (void *)&priv);
+ if (ret == 1 && priv.tk.addr)
+ kill_proc(&priv.tk, pfn, flags);
++ else
++ ret = 0;
+ mmap_read_unlock(p->mm);
+- return ret ? -EFAULT : -EHWPOISON;
++ return ret > 0 ? -EHWPOISON : -EFAULT;
+ }
+
+ static const char *action_name[] = {
--- /dev/null
+From 5c2a956c3eea173b2bc89f632507c0eeaebf6c4a Mon Sep 17 00:00:00 2001
+From: Miaohe Lin <linmiaohe@huawei.com>
+Date: Tue, 22 Mar 2022 14:44:56 -0700
+Subject: mm/mlock: fix potential imbalanced rlimit ucounts adjustment
+
+From: Miaohe Lin <linmiaohe@huawei.com>
+
+commit 5c2a956c3eea173b2bc89f632507c0eeaebf6c4a upstream.
+
+user_shm_lock() forgets to set allowed to 0 when get_ucounts() fails,
+so a later user_shm_unlock() might do an extra dec_rlimit_ucounts().
+Fix this by resetting allowed to 0.
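+
+For clarity, the failure path with the fix applied, with the pairing
+spelled out (comments added here for illustration; they assume the
+rlimit count was taken by the earlier inc_rlimit_ucounts() call in the
+same function):
+
+	if (!get_ucounts(ucounts)) {
+		/* drop the rlimit count taken above ... */
+		dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
+		/*
+		 * ... and report failure, otherwise the caller's later
+		 * user_shm_unlock() would decrement the count a second time.
+		 */
+		allowed = 0;
+		goto out;
+	}
+	allowed = 1;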
+
+Link: https://lkml.kernel.org/r/20220310132417.41189-1-linmiaohe@huawei.com
+Fixes: d7c9e99aee48 ("Reimplement RLIMIT_MEMLOCK on top of ucounts")
+Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
+Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
+Acked-by: Hugh Dickins <hughd@google.com>
+Cc: Herbert van den Bergh <herbert.van.den.bergh@oracle.com>
+Cc: Chris Mason <chris.mason@oracle.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/mlock.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -837,6 +837,7 @@ int user_shm_lock(size_t size, struct uc
+ }
+ if (!get_ucounts(ucounts)) {
+ dec_rlimit_ucounts(ucounts, UCOUNT_RLIMIT_MEMLOCK, locked);
++ allowed = 0;
+ goto out;
+ }
+ allowed = 1;
--- /dev/null
+From 19b482c29b6f3805f1d8e93015847b89e2f7f3b1 Mon Sep 17 00:00:00 2001
+From: Muchun Song <songmuchun@bytedance.com>
+Date: Tue, 22 Mar 2022 14:42:05 -0700
+Subject: mm: shmem: fix missing cache flush in shmem_mfill_atomic_pte()
+
+From: Muchun Song <songmuchun@bytedance.com>
+
+commit 19b482c29b6f3805f1d8e93015847b89e2f7f3b1 upstream.
+
+userfaultfd calls shmem_mfill_atomic_pte(), which does not do any
+cache flushing for the target page. The target page is then mapped
+into user space at a different (user) address, which might alias the
+kernel address that was used to copy the data into it. Insert
+flush_dcache_page() in the non-zero-page case, and replace
+clear_highpage() with clear_user_highpage(), which already takes care
+of the cache maintenance.
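+
+The resulting pattern, sketched (simplified from the patched function):
+
+	if (!zeropage) {			/* COPY */
+		page_kaddr = kmap_atomic(page);
+		ret = copy_from_user(page_kaddr,
+				     (const void __user *)src_addr,
+				     PAGE_SIZE);
+		kunmap_atomic(page_kaddr);
+		if (!ret)
+			flush_dcache_page(page);	/* flush the kernel alias */
+	} else {				/* ZEROPAGE */
+		clear_user_highpage(page, dst_addr);	/* alias-aware zeroing */
+	}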
+
+Link: https://lkml.kernel.org/r/20220210123058.79206-6-songmuchun@bytedance.com
+Fixes: 8d1039634206 ("userfaultfd: shmem: add shmem_mfill_zeropage_pte for userfaultfd support")
+Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
+Signed-off-by: Muchun Song <songmuchun@bytedance.com>
+Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Axel Rasmussen <axelrasmussen@google.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Fam Zheng <fam.zheng@bytedance.com>
+Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Lars Persson <lars.persson@axis.com>
+Cc: Peter Xu <peterx@redhat.com>
+Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
+Cc: Zi Yan <ziy@nvidia.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/shmem.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2394,8 +2394,10 @@ int shmem_mfill_atomic_pte(struct mm_str
+ /* don't free the page */
+ goto out_unacct_blocks;
+ }
++
++ flush_dcache_page(page);
+ } else { /* ZEROPAGE */
+- clear_highpage(page);
++ clear_user_highpage(page, dst_addr);
+ }
+ } else {
+ page = *pagep;
--- /dev/null
+From 7c25a0b89a487878b0691e6524fb5a8827322194 Mon Sep 17 00:00:00 2001
+From: Muchun Song <songmuchun@bytedance.com>
+Date: Tue, 22 Mar 2022 14:42:08 -0700
+Subject: mm: userfaultfd: fix missing cache flush in mcopy_atomic_pte() and __mcopy_atomic()
+
+From: Muchun Song <songmuchun@bytedance.com>
+
+commit 7c25a0b89a487878b0691e6524fb5a8827322194 upstream.
+
+userfaultfd calls mcopy_atomic_pte() and __mcopy_atomic(), which do
+not do any cache flushing for the target page. The target page is then
+mapped into user space at a different (user) address, which might
+alias the kernel address that was used to copy the data into it. Fix
+this by inserting flush_dcache_page() after copy_from_user() succeeds.
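+
+Both fixed sites follow the same copy-then-flush pattern; a hypothetical
+helper (not part of the kernel) capturing it:
+
+	static int copy_page_from_user_and_flush(struct page *page,
+						 const void __user *src)
+	{
+		void *kaddr = kmap(page);
+		unsigned long left = copy_from_user(kaddr, src, PAGE_SIZE);
+
+		kunmap(page);
+		if (left)
+			return -EFAULT;
+		/* flush the kernel alias before user space can map the page */
+		flush_dcache_page(page);
+		return 0;
+	}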
+
+Link: https://lkml.kernel.org/r/20220210123058.79206-7-songmuchun@bytedance.com
+Fixes: b6ebaedb4cb1 ("userfaultfd: avoid mmap_sem read recursion in mcopy_atomic")
+Fixes: c1a4de99fada ("userfaultfd: mcopy_atomic|mfill_zeropage: UFFDIO_COPY|UFFDIO_ZEROPAGE preparation")
+Signed-off-by: Muchun Song <songmuchun@bytedance.com>
+Cc: Axel Rasmussen <axelrasmussen@google.com>
+Cc: David Rientjes <rientjes@google.com>
+Cc: Fam Zheng <fam.zheng@bytedance.com>
+Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Lars Persson <lars.persson@axis.com>
+Cc: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Peter Xu <peterx@redhat.com>
+Cc: Xiongchun Duan <duanxiongchun@bytedance.com>
+Cc: Zi Yan <ziy@nvidia.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/userfaultfd.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -151,6 +151,8 @@ static int mcopy_atomic_pte(struct mm_st
+ /* don't free the page */
+ goto out;
+ }
++
++ flush_dcache_page(page);
+ } else {
+ page = *pagep;
+ *pagep = NULL;
+@@ -621,6 +623,7 @@ retry:
+ err = -EFAULT;
+ goto out;
+ }
++ flush_dcache_page(page);
+ goto retry;
+ } else
+ BUG_ON(page);
bluetooth-fix-the-creation-of-hdev-name.patch
rfkill-uapi-fix-rfkill_ioctl_max_size-ioctl-request-definition.patch
udf-avoid-using-stale-lengthofimpuse.patch
+mm-fix-missing-cache-flush-for-all-tail-pages-of-compound-page.patch
+mm-hugetlb-fix-missing-cache-flush-in-copy_huge_page_from_user.patch
+mm-shmem-fix-missing-cache-flush-in-shmem_mfill_atomic_pte.patch
+mm-userfaultfd-fix-missing-cache-flush-in-mcopy_atomic_pte-and-__mcopy_atomic.patch
+mm-hwpoison-fix-error-page-recovered-but-reported-not-recovered.patch
+mm-mlock-fix-potential-imbalanced-rlimit-ucounts-adjustment.patch