--- /dev/null
+From ad982c3be4e60c7d39c03f782733503cbd88fd2a Mon Sep 17 00:00:00 2001
+From: Gaosheng Cui <cuigaosheng1@huawei.com>
+Date: Mon, 22 Aug 2022 10:29:05 +0800
+Subject: audit: fix potential double free on error path from fsnotify_add_inode_mark
+
+From: Gaosheng Cui <cuigaosheng1@huawei.com>
+
+commit ad982c3be4e60c7d39c03f782733503cbd88fd2a upstream.
+
+audit_alloc_mark() assigns pathname to audit_mark->path. On the error
+path from fsnotify_add_inode_mark(), fsnotify_put_mark() frees the
+memory of audit_mark->path, but the caller of audit_alloc_mark() frees
+the pathname again, resulting in a double free.
+
+Fix this by resetting audit_mark->path to a NULL pointer on the error
+path from fsnotify_add_inode_mark().
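+
+As a side note, a minimal userspace sketch of the same ownership hazard
+(plain malloc()/free() instead of the kernel objects; names are made up
+for illustration): once the error path hands responsibility for the
+buffer back to the caller, clearing the pointer keeps the teardown
+helper from freeing it a second time.
+
+ #include <stdlib.h>
+ #include <string.h>
+
+ struct mark { char *path; };
+
+ /* Teardown helper: frees whatever the mark still owns. */
+ static void put_mark(struct mark *m)
+ {
+     free(m->path);
+ }
+
+ int main(void)
+ {
+     char *pathname = strdup("/watched/file");
+     struct mark m = { .path = pathname };
+
+     /* Pretend registering the mark failed: give ownership back to
+      * the caller before tearing the mark down. */
+     m.path = NULL;
+     put_mark(&m);       /* free(NULL) is a no-op */
+
+     free(pathname);     /* the caller's cleanup is now the only free */
+     return 0;
+ }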
+
+Cc: stable@vger.kernel.org
+Fixes: 7b1293234084d ("fsnotify: Add group pointer in fsnotify_init_mark()")
+Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
+Reviewed-by: Jan Kara <jack@suse.cz>
+Signed-off-by: Paul Moore <paul@paul-moore.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/audit_fsnotify.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/kernel/audit_fsnotify.c
++++ b/kernel/audit_fsnotify.c
+@@ -102,6 +102,7 @@ struct audit_fsnotify_mark *audit_alloc_
+
+ ret = fsnotify_add_inode_mark(&audit_mark->mark, inode, 0);
+ if (ret < 0) {
++ audit_mark->path = NULL;
+ fsnotify_put_mark(&audit_mark->mark);
+ audit_mark = ERR_PTR(ret);
+ }
--- /dev/null
+From 763f4fb76e24959c370cdaa889b2492ba6175580 Mon Sep 17 00:00:00 2001
+From: Jing-Ting Wu <Jing-Ting.Wu@mediatek.com>
+Date: Tue, 23 Aug 2022 13:41:46 +0800
+Subject: cgroup: Fix race condition at rebind_subsystems()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Jing-Ting Wu <Jing-Ting.Wu@mediatek.com>
+
+commit 763f4fb76e24959c370cdaa889b2492ba6175580 upstream.
+
+Root cause:
+rebind_subsystems() holds no lock while it moves a css object from
+list A to list B, so B's head can end up being treated as a css node
+by a concurrent list_for_each_entry_rcu() walker.
+
+Solution:
+Add a grace period between removing the rstat_css_node and re-adding
+it to the destination list.
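+
+For illustration only, a stand-alone userspace sketch of that failure
+mode (a hand-rolled circular list instead of <linux/list.h>, with the
+reader/writer interleaving simulated sequentially). The synchronize_rcu()
+added below between list_del_rcu() and list_add_rcu() makes the writer
+wait until such in-flight readers have finished before the node's
+pointers are redirected towards list B.
+
+ #include <stdio.h>
+
+ struct node { struct node *next, *prev; int data; };
+
+ static void list_init(struct node *h) { h->next = h->prev = h; }
+
+ static void list_del(struct node *n)
+ {
+     n->prev->next = n->next;
+     n->next->prev = n->prev;
+ }
+
+ static void list_add(struct node *n, struct node *head)
+ {
+     n->next = head->next;
+     n->prev = head;
+     head->next->prev = n;
+     head->next = n;
+ }
+
+ int main(void)
+ {
+     struct node list_a = { .data = -1 }, list_b = { .data = -1 };
+     struct node css = { .data = 42 };
+
+     list_init(&list_a);
+     list_init(&list_b);
+     list_add(&css, &list_a);
+
+     /* The "reader" has fetched the entry from list A ... */
+     struct node *cursor = &css;
+
+     /* ... while the "writer" moves it to list B without waiting for
+      * a grace period. */
+     list_del(&css);
+     list_add(&css, &list_b);
+
+     /* The reader resumes its walk of list A and follows ->next, which
+      * now points at list B's head -- not a real entry. */
+     cursor = cursor->next;
+     printf("reader landed on %s (data=%d)\n",
+            cursor == &list_b ? "list B's head" : "a real entry",
+            cursor->data);
+     return 0;
+ }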
+
+Reported-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
+Suggested-by: Michal Koutný <mkoutny@suse.com>
+Signed-off-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
+Tested-by: Jing-Ting Wu <jing-ting.wu@mediatek.com>
+Link: https://lore.kernel.org/linux-arm-kernel/d8f0bc5e2fb6ed259f9334c83279b4c011283c41.camel@mediatek.com/T/
+Acked-by: Mukesh Ojha <quic_mojha@quicinc.com>
+Fixes: a7df69b81aac ("cgroup: rstat: support cgroup1")
+Cc: stable@vger.kernel.org # v5.13+
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/cgroup/cgroup.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -1811,6 +1811,7 @@ int rebind_subsystems(struct cgroup_root
+
+ if (ss->css_rstat_flush) {
+ list_del_rcu(&css->rstat_css_node);
++ synchronize_rcu();
+ list_add_rcu(&css->rstat_css_node,
+ &dcgrp->rstat_css_list);
+ }
--- /dev/null
+From a8faed3a02eeb75857a3b5d660fa80fe79db77a3 Mon Sep 17 00:00:00 2001
+From: Randy Dunlap <rdunlap@infradead.org>
+Date: Sun, 7 Aug 2022 15:09:34 -0700
+Subject: kernel/sys_ni: add compat entry for fadvise64_64
+
+From: Randy Dunlap <rdunlap@infradead.org>
+
+commit a8faed3a02eeb75857a3b5d660fa80fe79db77a3 upstream.
+
+When CONFIG_ADVISE_SYSCALLS is not set/enabled and CONFIG_COMPAT is
+set/enabled, the riscv compat_syscall_table references
+'compat_sys_fadvise64_64', which is not defined:
+
+riscv64-linux-ld: arch/riscv/kernel/compat_syscall_table.o:(.rodata+0x6f8):
+undefined reference to `compat_sys_fadvise64_64'
+
+Add 'fadvise64_64' to kernel/sys_ni.c as a conditional COMPAT function so
+that when CONFIG_ADVISE_SYSCALLS is not set, there is a fallback function
+available.
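+
+COND_SYSCALL()/COND_SYSCALL_COMPAT() provide weak stubs that return
+-ENOSYS and get overridden by the real implementation when it is built
+in. Roughly the same fallback can be sketched in plain C with weak
+symbols (simplified; the real macros also deal with the per-arch
+syscall wrappers):
+
+ #include <errno.h>
+ #include <stdio.h>
+
+ /* Weak fallback, in the spirit of COND_SYSCALL(fadvise64_64): it is
+  * used only if no strong definition is linked in, i.e. the
+  * CONFIG_ADVISE_SYSCALLS=n case. */
+ __attribute__((weak)) long sys_fadvise64_64(void)
+ {
+     return -ENOSYS;
+ }
+
+ /* A strong definition (the CONFIG_ADVISE_SYSCALLS=y case) would
+  * override the weak stub at link time:
+  * long sys_fadvise64_64(void) { return 0; }
+  */
+
+ int main(void)
+ {
+     printf("sys_fadvise64_64() -> %ld\n", sys_fadvise64_64());
+     return 0;
+ }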
+
+Link: https://lkml.kernel.org/r/20220807220934.5689-1-rdunlap@infradead.org
+Fixes: d3ac21cacc24 ("mm: Support compiling out madvise and fadvise")
+Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
+Suggested-by: Arnd Bergmann <arnd@arndb.de>
+Reviewed-by: Arnd Bergmann <arnd@arndb.de>
+Cc: Josh Triplett <josh@joshtriplett.org>
+Cc: Paul Walmsley <paul.walmsley@sifive.com>
+Cc: Palmer Dabbelt <palmer@dabbelt.com>
+Cc: Albert Ou <aou@eecs.berkeley.edu>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/sys_ni.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/kernel/sys_ni.c
++++ b/kernel/sys_ni.c
+@@ -277,6 +277,7 @@ COND_SYSCALL(landlock_restrict_self);
+
+ /* mm/fadvise.c */
+ COND_SYSCALL(fadvise64_64);
++COND_SYSCALL_COMPAT(fadvise64_64);
+
+ /* mm/, CONFIG_MMU only */
+ COND_SYSCALL(swapon);
--- /dev/null
+From 9c80e79906b4ca440d09e7f116609262bb747909 Mon Sep 17 00:00:00 2001
+From: Kuniyuki Iwashima <kuniyu@amazon.com>
+Date: Fri, 12 Aug 2022 19:05:09 -0700
+Subject: kprobes: don't call disarm_kprobe() for disabled kprobes
+
+From: Kuniyuki Iwashima <kuniyu@amazon.com>
+
+commit 9c80e79906b4ca440d09e7f116609262bb747909 upstream.
+
+The assumption in __disable_kprobe() is wrong, and it could try to disarm
+an already disarmed kprobe and fire the WARN_ONCE() below. [0] We can
+easily reproduce this issue.
+
+1. Write 0 to /sys/kernel/debug/kprobes/enabled.
+
+ # echo 0 > /sys/kernel/debug/kprobes/enabled
+
+2. Run execsnoop. At this time, one kprobe is disabled.
+
+ # /usr/share/bcc/tools/execsnoop &
+ [1] 2460
+ PCOMM PID PPID RET ARGS
+
+ # cat /sys/kernel/debug/kprobes/list
+ ffffffff91345650 r __x64_sys_execve+0x0 [FTRACE]
+ ffffffff91345650 k __x64_sys_execve+0x0 [DISABLED][FTRACE]
+
+3. Write 1 to /sys/kernel/debug/kprobes/enabled, which changes
+ kprobes_all_disarmed to false but does not arm the disabled kprobe.
+
+ # echo 1 > /sys/kernel/debug/kprobes/enabled
+
+ # cat /sys/kernel/debug/kprobes/list
+ ffffffff91345650 r __x64_sys_execve+0x0 [FTRACE]
+ ffffffff91345650 k __x64_sys_execve+0x0 [DISABLED][FTRACE]
+
+4. Kill execsnoop; __disable_kprobe() then calls disarm_kprobe() for the
+ disabled kprobe and hits the WARN_ONCE() in __disarm_kprobe_ftrace().
+
+ # fg
+ /usr/share/bcc/tools/execsnoop
+ ^C
+
+Actually, WARN_ONCE() is fired twice, and __unregister_kprobe_top() misses
+some cleanups and leaves the aggregated kprobe in the hash table. Then,
+__unregister_trace_kprobe() initialises tk->rp.kp.list and creates an
+infinite loop like this.
+
+ aggregated kprobe.list -> kprobe.list -.
+ ^ |
+ '.__.'
+
+In this situation, these commands fall into the infinite loop and result
+in RCU stall or soft lockup.
+
+ cat /sys/kernel/debug/kprobes/list : show_kprobe_addr() enters into the
+ infinite loop with RCU.
+
+ /usr/share/bcc/tools/execsnoop : warn_kprobe_rereg() holds kprobe_mutex,
+ and __get_valid_kprobe() is stuck in
+ the loop.
+
+To avoid the issue, make sure we don't call disarm_kprobe() for disabled
+kprobes.
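+
+A toy model of the bookkeeping (not the kernel code; the flag names only
+mirror the ones above) shows why the extra !kprobe_disabled() check in
+the diff below is needed -- a probe that was disabled while everything
+was disarmed is itself already disarmed, even after the global re-arm:
+
+ #include <stdio.h>
+ #include <stdbool.h>
+
+ static bool kprobes_all_disarmed;
+ static bool probe_disabled;   /* disabled probes stay disarmed */
+ static bool probe_armed;
+
+ static void disarm_probe(void)
+ {
+     if (!probe_armed) {
+         printf("WARN: disarming an already disarmed probe\n");
+         return;
+     }
+     probe_armed = false;
+ }
+
+ static void disable_probe(void)
+ {
+     /* Old guard: only '!kprobes_all_disarmed'.  The fix also skips the
+      * disarm when the probe is already disabled (hence disarmed). */
+     if (!kprobes_all_disarmed && !probe_disabled)
+         disarm_probe();
+     probe_disabled = true;
+ }
+
+ int main(void)
+ {
+     /* State after steps 1-3 above: the probe was disabled while all
+      * kprobes were disarmed, then kprobes were globally re-enabled. */
+     probe_armed = false;
+     probe_disabled = true;
+     kprobes_all_disarmed = false;
+
+     disable_probe();    /* step 4: with the fix, no bogus disarm */
+     return 0;
+ }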
+
+[0]
+Failed to disarm kprobe-ftrace at __x64_sys_execve+0x0/0x40 (error -2)
+WARNING: CPU: 6 PID: 2460 at kernel/kprobes.c:1130 __disarm_kprobe_ftrace.isra.19 (kernel/kprobes.c:1129)
+Modules linked in: ena
+CPU: 6 PID: 2460 Comm: execsnoop Not tainted 5.19.0+ #28
+Hardware name: Amazon EC2 c5.2xlarge/, BIOS 1.0 10/16/2017
+RIP: 0010:__disarm_kprobe_ftrace.isra.19 (kernel/kprobes.c:1129)
+Code: 24 8b 02 eb c1 80 3d c4 83 f2 01 00 75 d4 48 8b 75 00 89 c2 48 c7 c7 90 fa 0f 92 89 04 24 c6 05 ab 83 01 e8 e4 94 f0 ff <0f> 0b 8b 04 24 eb b1 89 c6 48 c7 c7 60 fa 0f 92 89 04 24 e8 cc 94
+RSP: 0018:ffff9e6ec154bd98 EFLAGS: 00010282
+RAX: 0000000000000000 RBX: ffffffff930f7b00 RCX: 0000000000000001
+RDX: 0000000080000001 RSI: ffffffff921461c5 RDI: 00000000ffffffff
+RBP: ffff89c504286da8 R08: 0000000000000000 R09: c0000000fffeffff
+R10: 0000000000000000 R11: ffff9e6ec154bc28 R12: ffff89c502394e40
+R13: ffff89c502394c00 R14: ffff9e6ec154bc00 R15: 0000000000000000
+FS: 00007fe800398740(0000) GS:ffff89c812d80000(0000) knlGS:0000000000000000
+CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+CR2: 000000c00057f010 CR3: 0000000103b54006 CR4: 00000000007706e0
+DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+PKRU: 55555554
+Call Trace:
+<TASK>
+ __disable_kprobe (kernel/kprobes.c:1716)
+ disable_kprobe (kernel/kprobes.c:2392)
+ __disable_trace_kprobe (kernel/trace/trace_kprobe.c:340)
+ disable_trace_kprobe (kernel/trace/trace_kprobe.c:429)
+ perf_trace_event_unreg.isra.2 (./include/linux/tracepoint.h:93 kernel/trace/trace_event_perf.c:168)
+ perf_kprobe_destroy (kernel/trace/trace_event_perf.c:295)
+ _free_event (kernel/events/core.c:4971)
+ perf_event_release_kernel (kernel/events/core.c:5176)
+ perf_release (kernel/events/core.c:5186)
+ __fput (fs/file_table.c:321)
+ task_work_run (./include/linux/sched.h:2056 (discriminator 1) kernel/task_work.c:179 (discriminator 1))
+ exit_to_user_mode_prepare (./include/linux/resume_user_mode.h:49 kernel/entry/common.c:169 kernel/entry/common.c:201)
+ syscall_exit_to_user_mode (./arch/x86/include/asm/jump_label.h:55 ./arch/x86/include/asm/nospec-branch.h:384 ./arch/x86/include/asm/entry-common.h:94 kernel/entry/common.c:133 kernel/entry/common.c:296)
+ do_syscall_64 (arch/x86/entry/common.c:87)
+ entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:120)
+RIP: 0033:0x7fe7ff210654
+Code: 15 79 89 20 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb be 0f 1f 00 8b 05 9a cd 20 00 48 63 ff 85 c0 75 11 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 3a f3 c3 48 83 ec 18 48 89 7c 24 08 e8 34 fc
+RSP: 002b:00007ffdbd1d3538 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
+RAX: 0000000000000000 RBX: 0000000000000008 RCX: 00007fe7ff210654
+RDX: 0000000000000000 RSI: 0000000000002401 RDI: 0000000000000008
+RBP: 0000000000000000 R08: 94ae31d6fda838a4 R09: 00007fe8001c9d30
+R10: 00007ffdbd1d34b0 R11: 0000000000000246 R12: 00007ffdbd1d3600
+R13: 0000000000000000 R14: fffffffffffffffc R15: 00007ffdbd1d3560
+</TASK>
+
+Link: https://lkml.kernel.org/r/20220813020509.90805-1-kuniyu@amazon.com
+Fixes: 69d54b916d83 ("kprobes: makes kprobes/enabled works correctly for optimized kprobes.")
+Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
+Reported-by: Ayushman Dutta <ayudutta@amazon.com>
+Cc: "Naveen N. Rao" <naveen.n.rao@linux.ibm.com>
+Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
+Cc: "David S. Miller" <davem@davemloft.net>
+Cc: Masami Hiramatsu <mhiramat@kernel.org>
+Cc: Wang Nan <wangnan0@huawei.com>
+Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
+Cc: Kuniyuki Iwashima <kuni1840@gmail.com>
+Cc: Ayushman Dutta <ayudutta@amazon.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/kprobes.c | 9 +++++----
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -1707,11 +1707,12 @@ static struct kprobe *__disable_kprobe(s
+ /* Try to disarm and disable this/parent probe */
+ if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
+ /*
+- * If 'kprobes_all_disarmed' is set, 'orig_p'
+- * should have already been disarmed, so
+- * skip unneed disarming process.
++ * Don't be lazy here. Even if 'kprobes_all_disarmed'
++ * is false, 'orig_p' might not have been armed yet.
++ * Note arm_all_kprobes() __tries__ to arm all kprobes
++ * on the best effort basis.
+ */
+- if (!kprobes_all_disarmed) {
++ if (!kprobes_all_disarmed && !kprobe_disabled(orig_p)) {
+ ret = disarm_kprobe(orig_p, true);
+ if (ret) {
+ p->flags &= ~KPROBE_FLAG_DISABLED;
--- /dev/null
+From 1d8d14641fd94a01b20a4abbf2749fd8eddcf57b Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Thu, 11 Aug 2022 12:34:35 +0200
+Subject: mm/hugetlb: support write-faults in shared mappings
+
+From: David Hildenbrand <david@redhat.com>
+
+commit 1d8d14641fd94a01b20a4abbf2749fd8eddcf57b upstream.
+
+If we ever get a write-fault on a write-protected page in a shared
+mapping, we'd be in trouble (again). Instead, we can simply map the page
+writable.
+
+And in fact, there is even a way right now to trigger that code via
+uffd-wp ever since we started to support it for shmem in 5.19:
+
+--------------------------------------------------------------------------
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <fcntl.h>
+ #include <unistd.h>
+ #include <errno.h>
+ #include <sys/mman.h>
+ #include <sys/syscall.h>
+ #include <sys/ioctl.h>
+ #include <linux/userfaultfd.h>
+
+ #define HUGETLB_SIZE (2 * 1024 * 1024u)
+
+ static char *map;
+ int uffd;
+
+ static int temp_setup_uffd(void)
+ {
+ struct uffdio_api uffdio_api;
+ struct uffdio_register uffdio_register;
+ struct uffdio_writeprotect uffd_writeprotect;
+ struct uffdio_range uffd_range;
+
+ uffd = syscall(__NR_userfaultfd,
+ O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
+ if (uffd < 0) {
+ fprintf(stderr, "syscall() failed: %d\n", errno);
+ return -errno;
+ }
+
+ uffdio_api.api = UFFD_API;
+ uffdio_api.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP;
+ if (ioctl(uffd, UFFDIO_API, &uffdio_api) < 0) {
+ fprintf(stderr, "UFFDIO_API failed: %d\n", errno);
+ return -errno;
+ }
+
+ if (!(uffdio_api.features & UFFD_FEATURE_PAGEFAULT_FLAG_WP)) {
+ fprintf(stderr, "UFFD_FEATURE_WRITEPROTECT missing\n");
+ return -ENOSYS;
+ }
+
+ /* Register UFFD-WP */
+ uffdio_register.range.start = (unsigned long) map;
+ uffdio_register.range.len = HUGETLB_SIZE;
+ uffdio_register.mode = UFFDIO_REGISTER_MODE_WP;
+ if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) < 0) {
+ fprintf(stderr, "UFFDIO_REGISTER failed: %d\n", errno);
+ return -errno;
+ }
+
+ /* Writeprotect a single page. */
+ uffd_writeprotect.range.start = (unsigned long) map;
+ uffd_writeprotect.range.len = HUGETLB_SIZE;
+ uffd_writeprotect.mode = UFFDIO_WRITEPROTECT_MODE_WP;
+ if (ioctl(uffd, UFFDIO_WRITEPROTECT, &uffd_writeprotect)) {
+ fprintf(stderr, "UFFDIO_WRITEPROTECT failed: %d\n", errno);
+ return -errno;
+ }
+
+ /* Unregister UFFD-WP without prior writeunprotection. */
+ uffd_range.start = (unsigned long) map;
+ uffd_range.len = HUGETLB_SIZE;
+ if (ioctl(uffd, UFFDIO_UNREGISTER, &uffd_range)) {
+ fprintf(stderr, "UFFDIO_UNREGISTER failed: %d\n", errno);
+ return -errno;
+ }
+
+ return 0;
+ }
+
+ int main(int argc, char **argv)
+ {
+ int fd;
+
+ fd = open("/dev/hugepages/tmp", O_RDWR | O_CREAT, 0600);
+ if (fd < 0) {
+ fprintf(stderr, "open() failed\n");
+ return -errno;
+ }
+ if (ftruncate(fd, HUGETLB_SIZE)) {
+ fprintf(stderr, "ftruncate() failed\n");
+ return -errno;
+ }
+
+ map = mmap(NULL, HUGETLB_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+ if (map == MAP_FAILED) {
+ fprintf(stderr, "mmap() failed\n");
+ return -errno;
+ }
+
+ *map = 0;
+
+ if (temp_setup_uffd())
+ return 1;
+
+ *map = 0;
+
+ return 0;
+ }
+--------------------------------------------------------------------------
+
+The above test fails with SIGBUS when there is only a single free hugetlb page.
+ # echo 1 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+ # ./test
+ Bus error (core dumped)
+
+And worse, with sufficient free hugetlb pages it will map an anonymous page
+into a shared mapping, for example, messing up accounting during unmap
+and breaking MAP_SHARED semantics:
+ # echo 2 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+ # ./test
+ # cat /proc/meminfo | grep HugePages_
+ HugePages_Total: 2
+ HugePages_Free: 1
+ HugePages_Rsvd: 18446744073709551615
+ HugePages_Surp: 0
+
+The reason is that uffd-wp doesn't clear the uffd-wp PTE bit when
+unregistering and consequently keeps the PTE write-protected. The reason
+for this is to avoid the additional overhead when unregistering. Note that
+this is the case also for !hugetlb and that we will end up with writable
+PTEs that still have the uffd-wp PTE bit set once we return from
+hugetlb_wp(). I'm not touching the uffd-wp PTE bit for now, because it
+seems to be a generic thing -- wp_page_reuse() also doesn't clear it.
+
+VM_MAYSHARE handling in hugetlb_fault() for FAULT_FLAG_WRITE indicates
+that MAP_SHARED handling was at least envisioned, but could never have
+worked as expected.
+
+While at it, make sure that we never end up in hugetlb_wp() on write
+faults without VM_WRITE, because we don't support maybe_mkwrite()
+semantics as commonly used in the !hugetlb case -- for example, in
+wp_page_reuse().
+
+Note that there is no need to do any kind of reservation in
+hugetlb_fault() in this case ... because we already have a hugetlb page
+mapped R/O that we will simply map writable and we are not dealing with
+COW/unsharing.
+
+Link: https://lkml.kernel.org/r/20220811103435.188481-3-david@redhat.com
+Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs")
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Bjorn Helgaas <bhelgaas@google.com>
+Cc: Cyrill Gorcunov <gorcunov@openvz.org>
+Cc: Hugh Dickins <hughd@google.com>
+Cc: Jamie Liu <jamieliu@google.com>
+Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
+Cc: Muchun Song <songmuchun@bytedance.com>
+Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Cc: Pavel Emelyanov <xemul@parallels.com>
+Cc: Peter Feiner <pfeiner@google.com>
+Cc: Peter Xu <peterx@redhat.com>
+Cc: <stable@vger.kernel.org> [5.19]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/hugetlb.c | 26 +++++++++++++++++++-------
+ 1 file changed, 19 insertions(+), 7 deletions(-)
+
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5232,6 +5232,21 @@ static vm_fault_t hugetlb_wp(struct mm_s
+ VM_BUG_ON(unshare && (flags & FOLL_WRITE));
+ VM_BUG_ON(!unshare && !(flags & FOLL_WRITE));
+
++ /*
++ * hugetlb does not support FOLL_FORCE-style write faults that keep the
++ * PTE mapped R/O such as maybe_mkwrite() would do.
++ */
++ if (WARN_ON_ONCE(!unshare && !(vma->vm_flags & VM_WRITE)))
++ return VM_FAULT_SIGSEGV;
++
++ /* Let's take out MAP_SHARED mappings first. */
++ if (vma->vm_flags & VM_MAYSHARE) {
++ if (unlikely(unshare))
++ return 0;
++ set_huge_ptep_writable(vma, haddr, ptep);
++ return 0;
++ }
++
+ pte = huge_ptep_get(ptep);
+ old_page = pte_page(pte);
+
+@@ -5766,12 +5781,11 @@ vm_fault_t hugetlb_fault(struct mm_struc
+ * If we are going to COW/unshare the mapping later, we examine the
+ * pending reservations for this page now. This will ensure that any
+ * allocations necessary to record that reservation occur outside the
+- * spinlock. For private mappings, we also lookup the pagecache
+- * page now as it is used to determine if a reservation has been
+- * consumed.
++ * spinlock. Also lookup the pagecache page now as it is used to
++ * determine if a reservation has been consumed.
+ */
+ if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) &&
+- !huge_pte_write(entry)) {
++ !(vma->vm_flags & VM_MAYSHARE) && !huge_pte_write(entry)) {
+ if (vma_needs_reservation(h, vma, haddr) < 0) {
+ ret = VM_FAULT_OOM;
+ goto out_mutex;
+@@ -5779,9 +5793,7 @@ vm_fault_t hugetlb_fault(struct mm_struc
+ /* Just decrements count, does not deallocate */
+ vma_end_reservation(h, vma, haddr);
+
+- if (!(vma->vm_flags & VM_MAYSHARE))
+- pagecache_page = hugetlbfs_pagecache_page(h,
+- vma, haddr);
++ pagecache_page = hugetlbfs_pagecache_page(h, vma, haddr);
+ }
+
+ ptl = huge_pte_lock(h, mm, ptep);
--- /dev/null
+From f369b07c861435bd812a9d14493f71b34132ed6f Mon Sep 17 00:00:00 2001
+From: Peter Xu <peterx@redhat.com>
+Date: Thu, 11 Aug 2022 16:13:40 -0400
+Subject: mm/uffd: reset write protection when unregister with wp-mode
+
+From: Peter Xu <peterx@redhat.com>
+
+commit f369b07c861435bd812a9d14493f71b34132ed6f upstream.
+
+The motivation for this patch comes from a recent report and patch fix
+from David Hildenbrand on hugetlb's shared handling of wr-protected
+pages [1].
+
+With the reproducer provided in commit message of [1], one can leverage
+the uffd-wp lazy-reset of ptes to trigger a hugetlb issue which can affect
+not only the attacker process, but also the whole system.
+
+The lazy-reset mechanism of uffd-wp was used to make unregister faster,
+on the assumption that any leftover pgtable entries would only affect the
+process itself: the user should be aware of anything it does, and it
+should not affect anything outside of the process.
+
+But it seems that this is not true, and it can also be used to make
+some exploits easier.
+
+So far there's no sign that the lazy reset is important to any
+userfaultfd users, because the unregister will normally only happen once
+for a specific range of memory during the lifecycle of the process.
+
+Considering all of the above, what this patch proposes is to do explicit
+pte resets when unregistering an uffd region with wr-protect mode enabled.
+
+For the user it should be the same as calling ioctl(UFFDIO_WRITEPROTECT,
+wp=false) right before ioctl(UFFDIO_UNREGISTER), so it can potentially
+make the unregister slower. From that point of view it's a very slight
+ABI change, but hopefully nothing should break with this change either.
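+
+For reference, the explicit userspace sequence that this now makes
+unnecessary -- write-unprotect the range, then unregister -- would look
+roughly like the snippet below (same ioctls as in David's reproducer;
+error handling trimmed, helper name made up):
+
+ #include <string.h>
+ #include <sys/ioctl.h>
+ #include <linux/userfaultfd.h>
+
+ /* Clear uffd-wp on [start, start + len) and then unregister the range.
+  * With this patch, UFFDIO_UNREGISTER performs the equivalent of the
+  * UFFDIO_WRITEPROTECT step by itself for wr-protect registrations. */
+ int wp_clear_and_unregister(int uffd, unsigned long start, unsigned long len)
+ {
+     struct uffdio_writeprotect wp;
+     struct uffdio_range range;
+
+     memset(&wp, 0, sizeof(wp));
+     wp.range.start = start;
+     wp.range.len = len;
+     wp.mode = 0;    /* wp=false: resolve the write protection */
+     if (ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
+         return -1;
+
+     range.start = start;
+     range.len = len;
+     return ioctl(uffd, UFFDIO_UNREGISTER, &range);
+ }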
+
+Regarding the change itself - the core of the uffd write-[un]protect
+operation is moved into a separate function (uffd_wp_range()), which is
+reused in the unregister code path.
+
+Note that the new function will not check for anything, e.g. ranges or
+memory types, because they should have been checked during the previous
+UFFDIO_REGISTER, or it would have failed already. It also doesn't check
+mmap_changing because we hold the mmap write lock anyway.
+
+I added a Fixes tag for the commit introducing uffd-wp for shmem+hugetlbfs
+because that's the only issue reported so far, and that's the commit with
+which David's reproducer starts working (v5.19+). But the whole idea
+actually applies not only to file memories but also to anonymous memory.
+It's just that we don't need to fix anonymous memory prior to v5.19
+because there's no known way to exploit it.
+
+IOW, this patch can also fix the issue reported in [1], as patch 2 does.
+
+[1] https://lore.kernel.org/all/20220811103435.188481-3-david@redhat.com/
+
+Link: https://lkml.kernel.org/r/20220811201340.39342-1-peterx@redhat.com
+Fixes: b1f9e876862d ("mm/uffd: enable write protection for shmem & hugetlbfs")
+Signed-off-by: Peter Xu <peterx@redhat.com>
+Cc: David Hildenbrand <david@redhat.com>
+Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
+Cc: Mike Kravetz <mike.kravetz@oracle.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Nadav Amit <nadav.amit@gmail.com>
+Cc: Axel Rasmussen <axelrasmussen@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/userfaultfd.c | 4 ++++
+ include/linux/userfaultfd_k.h | 2 ++
+ mm/userfaultfd.c | 29 ++++++++++++++++++-----------
+ 3 files changed, 24 insertions(+), 11 deletions(-)
+
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 1c44bf75f916..175de70e3adf 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -1601,6 +1601,10 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
+ wake_userfault(vma->vm_userfaultfd_ctx.ctx, &range);
+ }
+
++ /* Reset ptes for the whole vma range if wr-protected */
++ if (userfaultfd_wp(vma))
++ uffd_wp_range(mm, vma, start, vma_end - start, false);
++
+ new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
+ prev = vma_merge(mm, prev, start, vma_end, new_flags,
+ vma->anon_vma, vma->vm_file, vma->vm_pgoff,
+diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
+index 732b522bacb7..e1b8a915e9e9 100644
+--- a/include/linux/userfaultfd_k.h
++++ b/include/linux/userfaultfd_k.h
+@@ -73,6 +73,8 @@ extern ssize_t mcopy_continue(struct mm_struct *dst_mm, unsigned long dst_start,
+ extern int mwriteprotect_range(struct mm_struct *dst_mm,
+ unsigned long start, unsigned long len,
+ bool enable_wp, atomic_t *mmap_changing);
++extern void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *vma,
++ unsigned long start, unsigned long len, bool enable_wp);
+
+ /* mm helpers */
+ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 07d3befc80e4..7327b2573f7c 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -703,14 +703,29 @@ ssize_t mcopy_continue(struct mm_struct *dst_mm, unsigned long start,
+ mmap_changing, 0);
+ }
+
++void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
++ unsigned long start, unsigned long len, bool enable_wp)
++{
++ struct mmu_gather tlb;
++ pgprot_t newprot;
++
++ if (enable_wp)
++ newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
++ else
++ newprot = vm_get_page_prot(dst_vma->vm_flags);
++
++ tlb_gather_mmu(&tlb, dst_mm);
++ change_protection(&tlb, dst_vma, start, start + len, newprot,
++ enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
++ tlb_finish_mmu(&tlb);
++}
++
+ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+ unsigned long len, bool enable_wp,
+ atomic_t *mmap_changing)
+ {
+ struct vm_area_struct *dst_vma;
+ unsigned long page_mask;
+- struct mmu_gather tlb;
+- pgprot_t newprot;
+ int err;
+
+ /*
+@@ -750,15 +765,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+ goto out_unlock;
+ }
+
+- if (enable_wp)
+- newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
+- else
+- newprot = vm_get_page_prot(dst_vma->vm_flags);
+-
+- tlb_gather_mmu(&tlb, dst_mm);
+- change_protection(&tlb, dst_vma, start, start + len, newprot,
+- enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
+- tlb_finish_mmu(&tlb);
++ uffd_wp_range(dst_mm, dst_vma, start, len, enable_wp);
+
+ err = 0;
+ out_unlock:
+--
+2.37.2
+
--- /dev/null
+From 67f4b5dc49913abcdb5cc736e73674e2f352f81d Mon Sep 17 00:00:00 2001
+From: Trond Myklebust <trond.myklebust@hammerspace.com>
+Date: Sat, 13 Aug 2022 08:22:25 -0400
+Subject: NFS: Fix another fsync() issue after a server reboot
+
+From: Trond Myklebust <trond.myklebust@hammerspace.com>
+
+commit 67f4b5dc49913abcdb5cc736e73674e2f352f81d upstream.
+
+Currently, when the writeback code detects a server reboot, it redirties
+any pages that were not committed to disk, and it sets the flag
+NFS_CONTEXT_RESEND_WRITES in the nfs_open_context of the file descriptor
+that dirtied the file. While this allows the file descriptor in question
+to redrive its own writes, it violates the fsync() requirement that we
+should be synchronising all writes to disk.
+While the problem is infrequent, we do see corner cases where an
+untimely server reboot causes the fsync() call to abandon its attempt to
+sync data to disk, causing data corruption issues due to missed error
+conditions or similar.
+
+In order to tighten up the client's ability to deal with this situation
+without introducing livelocks, add a counter that records the number of
+times pages are redirtied due to a server reboot-like condition, and use
+that in fsync() to redrive the sync to disk.
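+
+In outline, nfs_file_fsync() then keeps re-driving the flush until the
+counter stops moving -- a simplified sketch of that loop (not the actual
+function; stand-in names):
+
+ #include <stdio.h>
+
+ static long redirtied_pages;  /* stand-in for nfsi->redirtied_pages */
+
+ /* Stand-in for the write-back/commit step; a server reboot would bump
+  * redirtied_pages for every page that has to be resent. */
+ static int flush_and_commit(void)
+ {
+     return 0;
+ }
+
+ static int fsync_like_loop(void)
+ {
+     long save = redirtied_pages;
+     int ret;
+
+     for (;;) {
+         ret = flush_and_commit();
+         if (ret)
+             break;
+         if (redirtied_pages == save)
+             break;              /* nothing was redirtied: done */
+         save = redirtied_pages; /* redrive the sync over the whole file */
+     }
+     return ret;
+ }
+
+ int main(void)
+ {
+     printf("ret = %d\n", fsync_like_loop());
+     return 0;
+ }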
+
+Fixes: 2197e9b06c22 ("NFS: Fix up fsync() when the server rebooted")
+Cc: stable@vger.kernel.org
+Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/nfs/file.c | 15 ++++++---------
+ fs/nfs/inode.c | 1 +
+ fs/nfs/write.c | 6 ++++--
+ include/linux/nfs_fs.h | 1 +
+ 4 files changed, 12 insertions(+), 11 deletions(-)
+
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -221,8 +221,10 @@ nfs_file_fsync_commit(struct file *file,
+ int
+ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ {
+- struct nfs_open_context *ctx = nfs_file_open_context(file);
+ struct inode *inode = file_inode(file);
++ struct nfs_inode *nfsi = NFS_I(inode);
++ long save_nredirtied = atomic_long_read(&nfsi->redirtied_pages);
++ long nredirtied;
+ int ret;
+
+ trace_nfs_fsync_enter(inode);
+@@ -237,15 +239,10 @@ nfs_file_fsync(struct file *file, loff_t
+ ret = pnfs_sync_inode(inode, !!datasync);
+ if (ret != 0)
+ break;
+- if (!test_and_clear_bit(NFS_CONTEXT_RESEND_WRITES, &ctx->flags))
++ nredirtied = atomic_long_read(&nfsi->redirtied_pages);
++ if (nredirtied == save_nredirtied)
+ break;
+- /*
+- * If nfs_file_fsync_commit detected a server reboot, then
+- * resend all dirty pages that might have been covered by
+- * the NFS_CONTEXT_RESEND_WRITES flag
+- */
+- start = 0;
+- end = LLONG_MAX;
++ save_nredirtied = nredirtied;
+ }
+
+ trace_nfs_fsync_exit(inode, ret);
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -426,6 +426,7 @@ nfs_ilookup(struct super_block *sb, stru
+ static void nfs_inode_init_regular(struct nfs_inode *nfsi)
+ {
+ atomic_long_set(&nfsi->nrequests, 0);
++ atomic_long_set(&nfsi->redirtied_pages, 0);
+ INIT_LIST_HEAD(&nfsi->commit_info.list);
+ atomic_long_set(&nfsi->commit_info.ncommit, 0);
+ atomic_set(&nfsi->commit_info.rpcs_out, 0);
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1419,10 +1419,12 @@ static void nfs_initiate_write(struct nf
+ */
+ static void nfs_redirty_request(struct nfs_page *req)
+ {
++ struct nfs_inode *nfsi = NFS_I(page_file_mapping(req->wb_page)->host);
++
+ /* Bump the transmission count */
+ req->wb_nio++;
+ nfs_mark_request_dirty(req);
+- set_bit(NFS_CONTEXT_RESEND_WRITES, &nfs_req_openctx(req)->flags);
++ atomic_long_inc(&nfsi->redirtied_pages);
+ nfs_end_page_writeback(req);
+ nfs_release_request(req);
+ }
+@@ -1892,7 +1894,7 @@ static void nfs_commit_release_pages(str
+ /* We have a mismatch. Write the page again */
+ dprintk_cont(" mismatch\n");
+ nfs_mark_request_dirty(req);
+- set_bit(NFS_CONTEXT_RESEND_WRITES, &nfs_req_openctx(req)->flags);
++ atomic_long_inc(&NFS_I(data->inode)->redirtied_pages);
+ next:
+ nfs_unlock_and_release_request(req);
+ /* Latency breaker */
+--- a/include/linux/nfs_fs.h
++++ b/include/linux/nfs_fs.h
+@@ -182,6 +182,7 @@ struct nfs_inode {
+ /* Regular file */
+ struct {
+ atomic_long_t nrequests;
++ atomic_long_t redirtied_pages;
+ struct nfs_mds_commit_info commit_info;
+ struct mutex commit_mutex;
+ };
--- /dev/null
+From 7ae1f5508d9a33fd58ed3059bd2d569961e3b8bd Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Sat, 20 Aug 2022 17:59:17 +0200
+Subject: parisc: Fix exception handler for fldw and fstw instructions
+
+From: Helge Deller <deller@gmx.de>
+
+commit 7ae1f5508d9a33fd58ed3059bd2d569961e3b8bd upstream.
+
+The exception handler is broken for unaligned memory accesses with the
+fldw and fstw instructions, because on loads and stores it trashes or
+randomly uses some other floating-point register than the one specified
+in the instruction word.
+
+The instruction "fldw 0(addr),%fr22L" (and the other fldw/fstw
+instructions) encodes the target register (%fr22) in the rightmost 5 bits
+of the instruction word. The 7th rightmost bit of the instruction word
+defines if the left or right half of %fr22 should be used.
+
+While processing unaligned address accesses, the FR3() define is used to
+extract the offset into the local floating-point register set. But the
+calculation in FR3() was buggy, so that for example instead of %fr22,
+register %fr12 [((22 * 2) & 0x1f) = 12] was used.
+
+This bug has existed in the parisc kernel forever, and I wonder why it
+wasn't detected earlier. Interestingly, I noticed this bug only because
+the libime debian package failed to build on *native* hardware, while it
+built successfully in qemu.
+
+This patch corrects the bitshift and masking calculation in FR3().
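+
+A quick userspace check of the two versions of the macro (copied from the
+diff below) makes the difference visible; the value 22 stands for the
+register field of the %fr22 example, and bit 6 is the half-select bit
+(taken as 0 here):
+
+ #include <stdio.h>
+
+ #define FR3_OLD(i) ((((i)<<1)&0x1f)|(((i)>>6)&1))   /* buggy */
+ #define FR3_NEW(i) ((((i)&0x1f)<<1)|(((i)>>6)&1))   /* fixed */
+
+ int main(void)
+ {
+     unsigned int insn = 22;  /* register field 22, half-select bit clear */
+
+     /* Old: (22 * 2) & 0x1f = 12, the bogus value quoted above. */
+     printf("FR3_OLD = %u\n", FR3_OLD(insn));
+     /* New: 22 * 2 + half-select = 44, preserving the full register
+      * number encoded in the instruction word. */
+     printf("FR3_NEW = %u\n", FR3_NEW(insn));
+     return 0;
+ }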
+
+Signed-off-by: Helge Deller <deller@gmx.de>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/parisc/kernel/unaligned.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/parisc/kernel/unaligned.c
++++ b/arch/parisc/kernel/unaligned.c
+@@ -93,7 +93,7 @@
+ #define R1(i) (((i)>>21)&0x1f)
+ #define R2(i) (((i)>>16)&0x1f)
+ #define R3(i) ((i)&0x1f)
+-#define FR3(i) ((((i)<<1)&0x1f)|(((i)>>6)&1))
++#define FR3(i) ((((i)&0x1f)<<1)|(((i)>>6)&1))
+ #define IM(i,n) (((i)>>1&((1<<(n-1))-1))|((i)&1?((0-1L)<<(n-1)):0))
+ #define IM5_2(i) IM((i)>>16,5)
+ #define IM5_3(i) IM((i),5)
--- /dev/null
+From 3dcfb729b5f4a0c9b50742865cd5e6c4dbcc80dc Mon Sep 17 00:00:00 2001
+From: Helge Deller <deller@gmx.de>
+Date: Fri, 19 Aug 2022 19:30:50 +0200
+Subject: parisc: Make CONFIG_64BIT available for ARCH=parisc64 only
+
+From: Helge Deller <deller@gmx.de>
+
+commit 3dcfb729b5f4a0c9b50742865cd5e6c4dbcc80dc upstream.
+
+With this patch the ARCH= parameter decides whether the
+CONFIG_64BIT option will be set. This means the
+ARCH= parameter will give:
+
+ ARCH=parisc -> 32-bit kernel
+ ARCH=parisc64 -> 64-bit kernel
+
+This greatly simplifies the usage of config targets like
+randconfig, allmodconfig and allyesconfig, and produces
+the output expected for parisc64 (64-bit) vs. parisc (32-bit).
+
+Suggested-by: Masahiro Yamada <masahiroy@kernel.org>
+Signed-off-by: Helge Deller <deller@gmx.de>
+Tested-by: Randy Dunlap <rdunlap@infradead.org>
+Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
+Cc: <stable@vger.kernel.org> # 5.15+
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/parisc/Kconfig | 21 ++++++---------------
+ 1 file changed, 6 insertions(+), 15 deletions(-)
+
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -147,10 +147,10 @@ menu "Processor type and features"
+
+ choice
+ prompt "Processor type"
+- default PA7000
++ default PA7000 if "$(ARCH)" = "parisc"
+
+ config PA7000
+- bool "PA7000/PA7100"
++ bool "PA7000/PA7100" if "$(ARCH)" = "parisc"
+ help
+ This is the processor type of your CPU. This information is
+ used for optimizing purposes. In order to compile a kernel
+@@ -161,21 +161,21 @@ config PA7000
+ which is required on some machines.
+
+ config PA7100LC
+- bool "PA7100LC"
++ bool "PA7100LC" if "$(ARCH)" = "parisc"
+ help
+ Select this option for the PCX-L processor, as used in the
+ 712, 715/64, 715/80, 715/100, 715/100XC, 725/100, 743, 748,
+ D200, D210, D300, D310 and E-class
+
+ config PA7200
+- bool "PA7200"
++ bool "PA7200" if "$(ARCH)" = "parisc"
+ help
+ Select this option for the PCX-T' processor, as used in the
+ C100, C110, J100, J110, J210XC, D250, D260, D350, D360,
+ K100, K200, K210, K220, K400, K410 and K420
+
+ config PA7300LC
+- bool "PA7300LC"
++ bool "PA7300LC" if "$(ARCH)" = "parisc"
+ help
+ Select this option for the PCX-L2 processor, as used in the
+ 744, A180, B132L, B160L, B180L, C132L, C160L, C180L,
+@@ -225,17 +225,8 @@ config MLONGCALLS
+ Enabling this option will probably slow down your kernel.
+
+ config 64BIT
+- bool "64-bit kernel"
++ def_bool "$(ARCH)" = "parisc64"
+ depends on PA8X00
+- help
+- Enable this if you want to support 64bit kernel on PA-RISC platform.
+-
+- At the moment, only people willing to use more than 2GB of RAM,
+- or having a 64bit-only capable PA-RISC machine should say Y here.
+-
+- Since there is no 64bit userland on PA-RISC, there is no point to
+- enable this option otherwise. The 64bit kernel is significantly bigger
+- and slower than the 32bit one.
+
+ choice
+ prompt "Kernel page size"
mm-gup-fix-foll_force-cow-security-issue-and-remove-foll_cow.patch
+nfs-fix-another-fsync-issue-after-a-server-reboot.patch
+audit-fix-potential-double-free-on-error-path-from-fsnotify_add_inode_mark.patch
+cgroup-fix-race-condition-at-rebind_subsystems.patch
+parisc-make-config_64bit-available-for-arch-parisc64-only.patch
+parisc-fix-exception-handler-for-fldw-and-fstw-instructions.patch
+kernel-sys_ni-add-compat-entry-for-fadvise64_64.patch
+kprobes-don-t-call-disarm_kprobe-for-disabled-kprobes.patch
+mm-uffd-reset-write-protection-when-unregister-with-wp-mode.patch
+mm-hugetlb-support-write-faults-in-shared-mappings.patch