--- /dev/null
+From daniel@iogearbox.net Wed Apr 22 10:22:28 2020
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Tue, 21 Apr 2020 15:01:49 +0200
+Subject: bpf: fix buggy r0 retval refinement for tracing helpers
+To: gregkh@linuxfoundation.org
+Cc: alexei.starovoitov@gmail.com, john.fastabend@gmail.com, kpsingh@chromium.org, jannh@google.com, fontanalorenz@gmail.com, leodidonato@gmail.com, yhs@fb.com, bpf@vger.kernel.org, Daniel Borkmann <daniel@iogearbox.net>, Alexei Starovoitov <ast@kernel.org>
+Message-ID: <20200421130152.14348-1-daniel@iogearbox.net>
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Tue, 21 Apr 2020 15:01:49 +0200
+
+[ no upstream commit ]
+
+See the gory details in 100605035e15 ("bpf: Verifier, do_refine_retval_range
+may clamp umin to 0 incorrectly") for why 849fa50662fb ("bpf/verifier: refine
+retval R0 state for bpf_get_stack helper") is buggy. The whole series however
+is not suitable for stable since it adds a significant amount [0] of verifier
+complexity in order to add 32bit subreg tracking. Something simpler is needed.
+
+Unfortunately, reverting 849fa50662fb ("bpf/verifier: refine retval R0 state
+for bpf_get_stack helper") or just cherry-picking 100605035e15 ("bpf: Verifier,
+do_refine_retval_range may clamp umin to 0 incorrectly") is not an option since
+it will break existing tracing programs badly (at least those that are using
+bpf_get_stack() and bpf_probe_read_str() helpers). Not fixing it in stable is
+also not an option since on 4.19 kernels an error will cause a soft-lockup due
+to hitting a dead-code sanitized branch since we don't hard-wire such branches
+in old kernels yet. But even then, for 5.x, 849fa50662fb ("bpf/verifier: refine
+retval R0 state for bpf_get_stack helper") would cause wrong bounds in the
+verifier simulation when an error is hit.
+
+In one of the earlier iterations of mentioned patch series for upstream there
+was the concern that just using smax_value in do_refine_retval_range() would
+nuke bounds by subsequent <<32 >>32 shifts before the comparison against 0 [1]
+which eventually led to the 32bit subreg tracking in the first place. While I
+initially went for implementing the idea [1] to pattern match the two shift
+operations, it turned out to be more complex than actually needed, meaning, we
+could simply treat do_refine_retval_range() similarly to how we branch off
+verification for conditionals or under speculation, that is, pushing a new
+reg state to the stack for later verification. This means, instead of verifying
+the current path with the ret_reg in the [S32MIN, msize_max_value] interval
+where later bounds would get nuked, we split this into two: i) the success case
+where ret_reg can be in [0, msize_max_value], and ii) the error case with
+ret_reg known to be in the interval [S32MIN, -1]. The latter preserves the
+bounds during these shift patterns and can match the reg < 0 test. test_progs
+also succeeds with this approach.
+
+ [0] https://lore.kernel.org/bpf/158507130343.15666.8018068546764556975.stgit@john-Precision-5820-Tower/
+ [1] https://lore.kernel.org/bpf/158015334199.28573.4940395881683556537.stgit@john-XPS-13-9370/T/#m2e0ad1d5949131014748b6daa48a3495e7f0456d
+
+Fixes: 849fa50662fb ("bpf/verifier: refine retval R0 state for bpf_get_stack helper")
+Reported-by: Lorenzo Fontana <fontanalorenz@gmail.com>
+Reported-by: Leonardo Di Donato <leodidonato@gmail.com>
+Reported-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Tested-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -227,8 +227,7 @@ struct bpf_call_arg_meta {
+ bool pkt_access;
+ int regno;
+ int access_size;
+- s64 msize_smax_value;
+- u64 msize_umax_value;
++ u64 msize_max_value;
+ int ref_obj_id;
+ int func_id;
+ u32 btf_id;
+@@ -3568,8 +3567,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
+ /* remember the mem_size which may be used later
+ * to refine return values.
+ */
+- meta->msize_smax_value = reg->smax_value;
+- meta->msize_umax_value = reg->umax_value;
++ meta->msize_max_value = reg->umax_value;
+
+ /* The register is SCALAR_VALUE; the access check
+ * happens using its boundaries.
+@@ -4095,21 +4093,44 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+ return 0;
+ }
+
+-static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
+- int func_id,
+- struct bpf_call_arg_meta *meta)
++static int do_refine_retval_range(struct bpf_verifier_env *env,
++ struct bpf_reg_state *regs, int ret_type,
++ int func_id, struct bpf_call_arg_meta *meta)
+ {
+ struct bpf_reg_state *ret_reg = &regs[BPF_REG_0];
++ struct bpf_reg_state tmp_reg = *ret_reg;
++ bool ret;
+
+ if (ret_type != RET_INTEGER ||
+ (func_id != BPF_FUNC_get_stack &&
+ func_id != BPF_FUNC_probe_read_str))
+- return;
++ return 0;
++
++ /* Error case where ret is in interval [S32MIN, -1]. */
++ ret_reg->smin_value = S32_MIN;
++ ret_reg->smax_value = -1;
+
+- ret_reg->smax_value = meta->msize_smax_value;
+- ret_reg->umax_value = meta->msize_umax_value;
+ __reg_deduce_bounds(ret_reg);
+ __reg_bound_offset(ret_reg);
++ __update_reg_bounds(ret_reg);
++
++ ret = push_stack(env, env->insn_idx + 1, env->insn_idx, false);
++ if (!ret)
++ return -EFAULT;
++
++ *ret_reg = tmp_reg;
++
++ /* Success case where ret is in range [0, msize_max_value]. */
++ ret_reg->smin_value = 0;
++ ret_reg->smax_value = meta->msize_max_value;
++ ret_reg->umin_value = ret_reg->smin_value;
++ ret_reg->umax_value = ret_reg->smax_value;
++
++ __reg_deduce_bounds(ret_reg);
++ __reg_bound_offset(ret_reg);
++ __update_reg_bounds(ret_reg);
++
++ return 0;
+ }
+
+ static int
+@@ -4377,7 +4398,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
+ regs[BPF_REG_0].ref_obj_id = id;
+ }
+
+- do_refine_retval_range(regs, fn->ret_type, func_id, &meta);
++ err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
++ if (err)
++ return err;
+
+ err = check_map_func_compatibility(env, meta.map_ptr, func_id);
+ if (err)
+--
+2.20.1
+
--- /dev/null
+From daniel@iogearbox.net Wed Apr 22 10:24:25 2020
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Tue, 21 Apr 2020 15:01:52 +0200
+Subject: bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test
+To: gregkh@linuxfoundation.org
+Cc: alexei.starovoitov@gmail.com, john.fastabend@gmail.com, kpsingh@chromium.org, jannh@google.com, fontanalorenz@gmail.com, leodidonato@gmail.com, yhs@fb.com, bpf@vger.kernel.org, Daniel Borkmann <daniel@iogearbox.net>, Alexei Starovoitov <ast@kernel.org>
+Message-ID: <20200421130152.14348-4-daniel@iogearbox.net>
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ no upstream commit ]
+
+Switch the comparison, so that is_branch_taken() will recognize that the
+branch below is never taken:
+
+ [...]
+ 17: [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
+ 17: (67) r8 <<= 32
+ 18: [...] R8_w=inv(id=0,smax_value=-4294967296,umin_value=9223372036854775808,umax_value=18446744069414584320,var_off=(0x8000000000000000; 0x7fffffff00000000)) [...]
+ 18: (c7) r8 s>>= 32
+ 19: [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
+ 19: (6d) if r1 s> r8 goto pc+16
+ [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
+ [...]
+
+Currently we check for is_branch_taken() only if either K is the source, or
+the source is a const scalar value. For upstream it would be good to extend
+this properly to also check whether dst is const and src is not.
+
+For the sake of test_verifier, that extension is probably not needed here:
+
+ # ./test_verifier 101
+ #101/p bpf_get_stack return R0 within range OK
+ Summary: 1 PASSED, 0 SKIPPED, 0 FAILED
+
+I haven't seen this issue in test_progs* though, they are passing fine:
+
+ # ./test_progs-no_alu32 -t get_stack
+ Switching to flavor 'no_alu32' subdirectory...
+ #20 get_stack_raw_tp:OK
+ Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
+
+ # ./test_progs -t get_stack
+ #20 get_stack_raw_tp:OK
+ Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
+
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/verifier/bpf_get_stack.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+index 69b048cf46d9..371926771db5 100644
+--- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
++++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+@@ -19,7 +19,7 @@
+ BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+ BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+ BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
+- BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
++ BPF_JMP_REG(BPF_JSLT, BPF_REG_8, BPF_REG_1, 16),
+ BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
+ BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
+ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
+--
+2.20.1
+
--- /dev/null
+From d3d19d6fc5736a798b118971935ce274f7deaa82 Mon Sep 17 00:00:00 2001
+From: Dan Carpenter <dan.carpenter@oracle.com>
+Date: Mon, 13 Jan 2020 14:08:14 +0300
+Subject: fbdev: potential information leak in do_fb_ioctl()
+
+From: Dan Carpenter <dan.carpenter@oracle.com>
+
+commit d3d19d6fc5736a798b118971935ce274f7deaa82 upstream.
+
+The "fix" struct has a 2-byte hole after ->ywrapstep and the
+"fix = info->fix;" assignment doesn't necessarily clear it; whether it
+does depends on the compiler. The solution is just to replace the
+assignment with a memcpy().
+
+Fixes: 1f5e31d7e55a ("fbmem: don't call copy_from/to_user() with mutex held")
+Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: Arnd Bergmann <arnd@arndb.de>
+Cc: "Eric W. Biederman" <ebiederm@xmission.com>
+Cc: Andrea Righi <righi.andrea@gmail.com>
+Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
+Cc: Sam Ravnborg <sam@ravnborg.org>
+Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
+Cc: Daniel Thompson <daniel.thompson@linaro.org>
+Cc: Peter Rosin <peda@axentia.se>
+Cc: Jani Nikula <jani.nikula@intel.com>
+Cc: Gerd Hoffmann <kraxel@redhat.com>
+Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20200113100132.ixpaymordi24n3av@kili.mountain
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/video/fbdev/core/fbmem.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1134,7 +1134,7 @@ static long do_fb_ioctl(struct fb_info *
+ case FBIOGET_FSCREENINFO:
+ if (!lock_fb_info(info))
+ return -ENODEV;
+- fix = info->fix;
++ memcpy(&fix, &info->fix, sizeof(fix));
+ unlock_fb_info(info);
+
+ ret = copy_to_user(argp, &fix, sizeof(fix)) ? -EFAULT : 0;
--- /dev/null
+From d3ec10aa95819bff18a0d936b18884c7816d0914 Mon Sep 17 00:00:00 2001
+From: Waiman Long <longman@redhat.com>
+Date: Sat, 21 Mar 2020 21:11:24 -0400
+Subject: KEYS: Don't write out to userspace while holding key semaphore
+
+From: Waiman Long <longman@redhat.com>
+
+commit d3ec10aa95819bff18a0d936b18884c7816d0914 upstream.
+
+A lockdep circular locking dependency report was seen when running a
+keyutils test:
+
+[12537.027242] ======================================================
+[12537.059309] WARNING: possible circular locking dependency detected
+[12537.088148] 4.18.0-147.7.1.el8_1.x86_64+debug #1 Tainted: G OE --------- - -
+[12537.125253] ------------------------------------------------------
+[12537.153189] keyctl/25598 is trying to acquire lock:
+[12537.175087] 000000007c39f96c (&mm->mmap_sem){++++}, at: __might_fault+0xc4/0x1b0
+[12537.208365]
+[12537.208365] but task is already holding lock:
+[12537.234507] 000000003de5b58d (&type->lock_class){++++}, at: keyctl_read_key+0x15a/0x220
+[12537.270476]
+[12537.270476] which lock already depends on the new lock.
+[12537.270476]
+[12537.307209]
+[12537.307209] the existing dependency chain (in reverse order) is:
+[12537.340754]
+[12537.340754] -> #3 (&type->lock_class){++++}:
+[12537.367434] down_write+0x4d/0x110
+[12537.385202] __key_link_begin+0x87/0x280
+[12537.405232] request_key_and_link+0x483/0xf70
+[12537.427221] request_key+0x3c/0x80
+[12537.444839] dns_query+0x1db/0x5a5 [dns_resolver]
+[12537.468445] dns_resolve_server_name_to_ip+0x1e1/0x4d0 [cifs]
+[12537.496731] cifs_reconnect+0xe04/0x2500 [cifs]
+[12537.519418] cifs_readv_from_socket+0x461/0x690 [cifs]
+[12537.546263] cifs_read_from_socket+0xa0/0xe0 [cifs]
+[12537.573551] cifs_demultiplex_thread+0x311/0x2db0 [cifs]
+[12537.601045] kthread+0x30c/0x3d0
+[12537.617906] ret_from_fork+0x3a/0x50
+[12537.636225]
+[12537.636225] -> #2 (root_key_user.cons_lock){+.+.}:
+[12537.664525] __mutex_lock+0x105/0x11f0
+[12537.683734] request_key_and_link+0x35a/0xf70
+[12537.705640] request_key+0x3c/0x80
+[12537.723304] dns_query+0x1db/0x5a5 [dns_resolver]
+[12537.746773] dns_resolve_server_name_to_ip+0x1e1/0x4d0 [cifs]
+[12537.775607] cifs_reconnect+0xe04/0x2500 [cifs]
+[12537.798322] cifs_readv_from_socket+0x461/0x690 [cifs]
+[12537.823369] cifs_read_from_socket+0xa0/0xe0 [cifs]
+[12537.847262] cifs_demultiplex_thread+0x311/0x2db0 [cifs]
+[12537.873477] kthread+0x30c/0x3d0
+[12537.890281] ret_from_fork+0x3a/0x50
+[12537.908649]
+[12537.908649] -> #1 (&tcp_ses->srv_mutex){+.+.}:
+[12537.935225] __mutex_lock+0x105/0x11f0
+[12537.954450] cifs_call_async+0x102/0x7f0 [cifs]
+[12537.977250] smb2_async_readv+0x6c3/0xc90 [cifs]
+[12538.000659] cifs_readpages+0x120a/0x1e50 [cifs]
+[12538.023920] read_pages+0xf5/0x560
+[12538.041583] __do_page_cache_readahead+0x41d/0x4b0
+[12538.067047] ondemand_readahead+0x44c/0xc10
+[12538.092069] filemap_fault+0xec1/0x1830
+[12538.111637] __do_fault+0x82/0x260
+[12538.129216] do_fault+0x419/0xfb0
+[12538.146390] __handle_mm_fault+0x862/0xdf0
+[12538.167408] handle_mm_fault+0x154/0x550
+[12538.187401] __do_page_fault+0x42f/0xa60
+[12538.207395] do_page_fault+0x38/0x5e0
+[12538.225777] page_fault+0x1e/0x30
+[12538.243010]
+[12538.243010] -> #0 (&mm->mmap_sem){++++}:
+[12538.267875] lock_acquire+0x14c/0x420
+[12538.286848] __might_fault+0x119/0x1b0
+[12538.306006] keyring_read_iterator+0x7e/0x170
+[12538.327936] assoc_array_subtree_iterate+0x97/0x280
+[12538.352154] keyring_read+0xe9/0x110
+[12538.370558] keyctl_read_key+0x1b9/0x220
+[12538.391470] do_syscall_64+0xa5/0x4b0
+[12538.410511] entry_SYSCALL_64_after_hwframe+0x6a/0xdf
+[12538.435535]
+[12538.435535] other info that might help us debug this:
+[12538.435535]
+[12538.472829] Chain exists of:
+[12538.472829] &mm->mmap_sem --> root_key_user.cons_lock --> &type->lock_class
+[12538.472829]
+[12538.524820] Possible unsafe locking scenario:
+[12538.524820]
+[12538.551431] CPU0 CPU1
+[12538.572654] ---- ----
+[12538.595865] lock(&type->lock_class);
+[12538.613737] lock(root_key_user.cons_lock);
+[12538.644234] lock(&type->lock_class);
+[12538.672410] lock(&mm->mmap_sem);
+[12538.687758]
+[12538.687758] *** DEADLOCK ***
+[12538.687758]
+[12538.714455] 1 lock held by keyctl/25598:
+[12538.732097] #0: 000000003de5b58d (&type->lock_class){++++}, at: keyctl_read_key+0x15a/0x220
+[12538.770573]
+[12538.770573] stack backtrace:
+[12538.790136] CPU: 2 PID: 25598 Comm: keyctl Kdump: loaded Tainted: G
+[12538.844855] Hardware name: HP ProLiant DL360 Gen9/ProLiant DL360 Gen9, BIOS P89 12/27/2015
+[12538.881963] Call Trace:
+[12538.892897] dump_stack+0x9a/0xf0
+[12538.907908] print_circular_bug.isra.25.cold.50+0x1bc/0x279
+[12538.932891] ? save_trace+0xd6/0x250
+[12538.948979] check_prev_add.constprop.32+0xc36/0x14f0
+[12538.971643] ? keyring_compare_object+0x104/0x190
+[12538.992738] ? check_usage+0x550/0x550
+[12539.009845] ? sched_clock+0x5/0x10
+[12539.025484] ? sched_clock_cpu+0x18/0x1e0
+[12539.043555] __lock_acquire+0x1f12/0x38d0
+[12539.061551] ? trace_hardirqs_on+0x10/0x10
+[12539.080554] lock_acquire+0x14c/0x420
+[12539.100330] ? __might_fault+0xc4/0x1b0
+[12539.119079] __might_fault+0x119/0x1b0
+[12539.135869] ? __might_fault+0xc4/0x1b0
+[12539.153234] keyring_read_iterator+0x7e/0x170
+[12539.172787] ? keyring_read+0x110/0x110
+[12539.190059] assoc_array_subtree_iterate+0x97/0x280
+[12539.211526] keyring_read+0xe9/0x110
+[12539.227561] ? keyring_gc_check_iterator+0xc0/0xc0
+[12539.249076] keyctl_read_key+0x1b9/0x220
+[12539.266660] do_syscall_64+0xa5/0x4b0
+[12539.283091] entry_SYSCALL_64_after_hwframe+0x6a/0xdf
+
+One way to prevent this deadlock scenario from happening is to not
+allow writing to userspace while holding the key semaphore. Instead,
+an internal buffer is allocated, the read method fills it while the
+lock is held, and the data is copied out to userspace only after the
+lock has been dropped.
+
+That requires taking out the __user modifier from all the relevant
+read methods as well as additional changes to not use any userspace
+write helpers. That is,
+
+ 1) The put_user() call is replaced by a direct copy.
+ 2) The copy_to_user() call is replaced by memcpy().
+ 3) All the fault handling code is removed.
+
+Compiling on an x86-64 system, the size of the rxrpc_read() function is
+reduced from 3795 bytes to 2384 bytes with this patch.
+
+Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
+Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
+Signed-off-by: Waiman Long <longman@redhat.com>
+Signed-off-by: David Howells <dhowells@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ include/keys/big_key-type.h | 2
+ include/keys/user-type.h | 3 -
+ include/linux/key-type.h | 2
+ net/dns_resolver/dns_key.c | 2
+ net/rxrpc/key.c | 27 +++--------
+ security/keys/big_key.c | 11 +---
+ security/keys/encrypted-keys/encrypted.c | 7 +-
+ security/keys/keyctl.c | 73 ++++++++++++++++++++++++-------
+ security/keys/keyring.c | 6 --
+ security/keys/request_key_auth.c | 7 +-
+ security/keys/trusted.c | 14 -----
+ security/keys/user_defined.c | 5 --
+ 12 files changed, 85 insertions(+), 74 deletions(-)
+
+--- a/include/keys/big_key-type.h
++++ b/include/keys/big_key-type.h
+@@ -21,6 +21,6 @@ extern void big_key_free_preparse(struct
+ extern void big_key_revoke(struct key *key);
+ extern void big_key_destroy(struct key *key);
+ extern void big_key_describe(const struct key *big_key, struct seq_file *m);
+-extern long big_key_read(const struct key *key, char __user *buffer, size_t buflen);
++extern long big_key_read(const struct key *key, char *buffer, size_t buflen);
+
+ #endif /* _KEYS_BIG_KEY_TYPE_H */
+--- a/include/keys/user-type.h
++++ b/include/keys/user-type.h
+@@ -45,8 +45,7 @@ extern int user_update(struct key *key,
+ extern void user_revoke(struct key *key);
+ extern void user_destroy(struct key *key);
+ extern void user_describe(const struct key *user, struct seq_file *m);
+-extern long user_read(const struct key *key,
+- char __user *buffer, size_t buflen);
++extern long user_read(const struct key *key, char *buffer, size_t buflen);
+
+ static inline const struct user_key_payload *user_key_payload_rcu(const struct key *key)
+ {
+--- a/include/linux/key-type.h
++++ b/include/linux/key-type.h
+@@ -125,7 +125,7 @@ struct key_type {
+ * much is copied into the buffer
+ * - shouldn't do the copy if the buffer is NULL
+ */
+- long (*read)(const struct key *key, char __user *buffer, size_t buflen);
++ long (*read)(const struct key *key, char *buffer, size_t buflen);
+
+ /* handle request_key() for this type instead of invoking
+ * /sbin/request-key (optional)
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -242,7 +242,7 @@ static void dns_resolver_describe(const
+ * - the key's semaphore is read-locked
+ */
+ static long dns_resolver_read(const struct key *key,
+- char __user *buffer, size_t buflen)
++ char *buffer, size_t buflen)
+ {
+ int err = PTR_ERR(key->payload.data[dns_key_error]);
+
+--- a/net/rxrpc/key.c
++++ b/net/rxrpc/key.c
+@@ -35,7 +35,7 @@ static void rxrpc_free_preparse_s(struct
+ static void rxrpc_destroy(struct key *);
+ static void rxrpc_destroy_s(struct key *);
+ static void rxrpc_describe(const struct key *, struct seq_file *);
+-static long rxrpc_read(const struct key *, char __user *, size_t);
++static long rxrpc_read(const struct key *, char *, size_t);
+
+ /*
+ * rxrpc defined keys take an arbitrary string as the description and an
+@@ -1044,12 +1044,12 @@ EXPORT_SYMBOL(rxrpc_get_null_key);
+ * - this returns the result in XDR form
+ */
+ static long rxrpc_read(const struct key *key,
+- char __user *buffer, size_t buflen)
++ char *buffer, size_t buflen)
+ {
+ const struct rxrpc_key_token *token;
+ const struct krb5_principal *princ;
+ size_t size;
+- __be32 __user *xdr, *oldxdr;
++ __be32 *xdr, *oldxdr;
+ u32 cnlen, toksize, ntoks, tok, zero;
+ u16 toksizes[AFSTOKEN_MAX];
+ int loop;
+@@ -1126,30 +1126,25 @@ static long rxrpc_read(const struct key
+ if (!buffer || buflen < size)
+ return size;
+
+- xdr = (__be32 __user *) buffer;
++ xdr = (__be32 *)buffer;
+ zero = 0;
+ #define ENCODE(x) \
+ do { \
+- __be32 y = htonl(x); \
+- if (put_user(y, xdr++) < 0) \
+- goto fault; \
++ *xdr++ = htonl(x); \
+ } while(0)
+ #define ENCODE_DATA(l, s) \
+ do { \
+ u32 _l = (l); \
+ ENCODE(l); \
+- if (copy_to_user(xdr, (s), _l) != 0) \
+- goto fault; \
+- if (_l & 3 && \
+- copy_to_user((u8 __user *)xdr + _l, &zero, 4 - (_l & 3)) != 0) \
+- goto fault; \
++ memcpy(xdr, (s), _l); \
++ if (_l & 3) \
++ memcpy((u8 *)xdr + _l, &zero, 4 - (_l & 3)); \
+ xdr += (_l + 3) >> 2; \
+ } while(0)
+ #define ENCODE64(x) \
+ do { \
+ __be64 y = cpu_to_be64(x); \
+- if (copy_to_user(xdr, &y, 8) != 0) \
+- goto fault; \
++ memcpy(xdr, &y, 8); \
+ xdr += 8 >> 2; \
+ } while(0)
+ #define ENCODE_STR(s) \
+@@ -1240,8 +1235,4 @@ static long rxrpc_read(const struct key
+ ASSERTCMP((char __user *) xdr - buffer, ==, size);
+ _leave(" = %zu", size);
+ return size;
+-
+-fault:
+- _leave(" = -EFAULT");
+- return -EFAULT;
+ }
+--- a/security/keys/big_key.c
++++ b/security/keys/big_key.c
+@@ -353,7 +353,7 @@ void big_key_describe(const struct key *
+ * read the key data
+ * - the key's semaphore is read-locked
+ */
+-long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
++long big_key_read(const struct key *key, char *buffer, size_t buflen)
+ {
+ size_t datalen = (size_t)key->payload.data[big_key_len];
+ long ret;
+@@ -392,9 +392,8 @@ long big_key_read(const struct key *key,
+
+ ret = datalen;
+
+- /* copy decrypted data to user */
+- if (copy_to_user(buffer, buf->virt, datalen) != 0)
+- ret = -EFAULT;
++ /* copy out decrypted data */
++ memcpy(buffer, buf->virt, datalen);
+
+ err_fput:
+ fput(file);
+@@ -402,9 +401,7 @@ error:
+ big_key_free_buffer(buf);
+ } else {
+ ret = datalen;
+- if (copy_to_user(buffer, key->payload.data[big_key_data],
+- datalen) != 0)
+- ret = -EFAULT;
++ memcpy(buffer, key->payload.data[big_key_data], datalen);
+ }
+
+ return ret;
+--- a/security/keys/encrypted-keys/encrypted.c
++++ b/security/keys/encrypted-keys/encrypted.c
+@@ -895,14 +895,14 @@ out:
+ }
+
+ /*
+- * encrypted_read - format and copy the encrypted data to userspace
++ * encrypted_read - format and copy out the encrypted data
+ *
+ * The resulting datablob format is:
+ * <master-key name> <decrypted data length> <encrypted iv> <encrypted data>
+ *
+ * On success, return to userspace the encrypted key datablob size.
+ */
+-static long encrypted_read(const struct key *key, char __user *buffer,
++static long encrypted_read(const struct key *key, char *buffer,
+ size_t buflen)
+ {
+ struct encrypted_key_payload *epayload;
+@@ -950,8 +950,7 @@ static long encrypted_read(const struct
+ key_put(mkey);
+ memzero_explicit(derived_key, sizeof(derived_key));
+
+- if (copy_to_user(buffer, ascii_buf, asciiblob_len) != 0)
+- ret = -EFAULT;
++ memcpy(buffer, ascii_buf, asciiblob_len);
+ kzfree(ascii_buf);
+
+ return asciiblob_len;
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -743,6 +743,21 @@ error:
+ }
+
+ /*
++ * Call the read method
++ */
++static long __keyctl_read_key(struct key *key, char *buffer, size_t buflen)
++{
++ long ret;
++
++ down_read(&key->sem);
++ ret = key_validate(key);
++ if (ret == 0)
++ ret = key->type->read(key, buffer, buflen);
++ up_read(&key->sem);
++ return ret;
++}
++
++/*
+ * Read a key's payload.
+ *
+ * The key must either grant the caller Read permission, or it must grant the
+@@ -757,26 +772,27 @@ long keyctl_read_key(key_serial_t keyid,
+ struct key *key;
+ key_ref_t key_ref;
+ long ret;
++ char *key_data;
+
+ /* find the key first */
+ key_ref = lookup_user_key(keyid, 0, 0);
+ if (IS_ERR(key_ref)) {
+ ret = -ENOKEY;
+- goto error;
++ goto out;
+ }
+
+ key = key_ref_to_ptr(key_ref);
+
+ ret = key_read_state(key);
+ if (ret < 0)
+- goto error2; /* Negatively instantiated */
++ goto key_put_out; /* Negatively instantiated */
+
+ /* see if we can read it directly */
+ ret = key_permission(key_ref, KEY_NEED_READ);
+ if (ret == 0)
+ goto can_read_key;
+ if (ret != -EACCES)
+- goto error2;
++ goto key_put_out;
+
+ /* we can't; see if it's searchable from this process's keyrings
+ * - we automatically take account of the fact that it may be
+@@ -784,26 +800,51 @@ long keyctl_read_key(key_serial_t keyid,
+ */
+ if (!is_key_possessed(key_ref)) {
+ ret = -EACCES;
+- goto error2;
++ goto key_put_out;
+ }
+
+ /* the key is probably readable - now try to read it */
+ can_read_key:
+- ret = -EOPNOTSUPP;
+- if (key->type->read) {
+- /* Read the data with the semaphore held (since we might sleep)
+- * to protect against the key being updated or revoked.
+- */
+- down_read(&key->sem);
+- ret = key_validate(key);
+- if (ret == 0)
+- ret = key->type->read(key, buffer, buflen);
+- up_read(&key->sem);
++ if (!key->type->read) {
++ ret = -EOPNOTSUPP;
++ goto key_put_out;
++ }
++
++ if (!buffer || !buflen) {
++ /* Get the key length from the read method */
++ ret = __keyctl_read_key(key, NULL, 0);
++ goto key_put_out;
++ }
++
++ /*
++ * Read the data with the semaphore held (since we might sleep)
++ * to protect against the key being updated or revoked.
++ *
++ * Allocating a temporary buffer to hold the keys before
++ * transferring them to user buffer to avoid potential
++ * deadlock involving page fault and mmap_sem.
++ */
++ key_data = kmalloc(buflen, GFP_KERNEL);
++
++ if (!key_data) {
++ ret = -ENOMEM;
++ goto key_put_out;
++ }
++ ret = __keyctl_read_key(key, key_data, buflen);
++
++ /*
++ * Read methods will just return the required length without
++ * any copying if the provided length isn't large enough.
++ */
++ if (ret > 0 && ret <= buflen) {
++ if (copy_to_user(buffer, key_data, ret))
++ ret = -EFAULT;
+ }
++ kzfree(key_data);
+
+-error2:
++key_put_out:
+ key_put(key);
+-error:
++out:
+ return ret;
+ }
+
+--- a/security/keys/keyring.c
++++ b/security/keys/keyring.c
+@@ -432,7 +432,6 @@ static int keyring_read_iterator(const v
+ {
+ struct keyring_read_iterator_context *ctx = data;
+ const struct key *key = keyring_ptr_to_key(object);
+- int ret;
+
+ kenter("{%s,%d},,{%zu/%zu}",
+ key->type->name, key->serial, ctx->count, ctx->buflen);
+@@ -440,10 +439,7 @@ static int keyring_read_iterator(const v
+ if (ctx->count >= ctx->buflen)
+ return 1;
+
+- ret = put_user(key->serial, ctx->buffer);
+- if (ret < 0)
+- return ret;
+- ctx->buffer++;
++ *ctx->buffer++ = key->serial;
+ ctx->count += sizeof(key->serial);
+ return 0;
+ }
+--- a/security/keys/request_key_auth.c
++++ b/security/keys/request_key_auth.c
+@@ -27,7 +27,7 @@ static int request_key_auth_instantiate(
+ static void request_key_auth_describe(const struct key *, struct seq_file *);
+ static void request_key_auth_revoke(struct key *);
+ static void request_key_auth_destroy(struct key *);
+-static long request_key_auth_read(const struct key *, char __user *, size_t);
++static long request_key_auth_read(const struct key *, char *, size_t);
+
+ /*
+ * The request-key authorisation key type definition.
+@@ -85,7 +85,7 @@ static void request_key_auth_describe(co
+ * - the key's semaphore is read-locked
+ */
+ static long request_key_auth_read(const struct key *key,
+- char __user *buffer, size_t buflen)
++ char *buffer, size_t buflen)
+ {
+ struct request_key_auth *rka = get_request_key_auth(key);
+ size_t datalen;
+@@ -102,8 +102,7 @@ static long request_key_auth_read(const
+ if (buflen > datalen)
+ buflen = datalen;
+
+- if (copy_to_user(buffer, rka->callout_info, buflen) != 0)
+- ret = -EFAULT;
++ memcpy(buffer, rka->callout_info, buflen);
+ }
+
+ return ret;
+--- a/security/keys/trusted.c
++++ b/security/keys/trusted.c
+@@ -1136,11 +1136,10 @@ out:
+ * trusted_read - copy the sealed blob data to userspace in hex.
+ * On success, return to userspace the trusted key datablob size.
+ */
+-static long trusted_read(const struct key *key, char __user *buffer,
++static long trusted_read(const struct key *key, char *buffer,
+ size_t buflen)
+ {
+ const struct trusted_key_payload *p;
+- char *ascii_buf;
+ char *bufp;
+ int i;
+
+@@ -1149,18 +1148,9 @@ static long trusted_read(const struct ke
+ return -EINVAL;
+
+ if (buffer && buflen >= 2 * p->blob_len) {
+- ascii_buf = kmalloc(2 * p->blob_len, GFP_KERNEL);
+- if (!ascii_buf)
+- return -ENOMEM;
+-
+- bufp = ascii_buf;
++ bufp = buffer;
+ for (i = 0; i < p->blob_len; i++)
+ bufp = hex_byte_pack(bufp, p->blob[i]);
+- if (copy_to_user(buffer, ascii_buf, 2 * p->blob_len) != 0) {
+- kzfree(ascii_buf);
+- return -EFAULT;
+- }
+- kzfree(ascii_buf);
+ }
+ return 2 * p->blob_len;
+ }
+--- a/security/keys/user_defined.c
++++ b/security/keys/user_defined.c
+@@ -172,7 +172,7 @@ EXPORT_SYMBOL_GPL(user_describe);
+ * read the key data
+ * - the key's semaphore is read-locked
+ */
+-long user_read(const struct key *key, char __user *buffer, size_t buflen)
++long user_read(const struct key *key, char *buffer, size_t buflen)
+ {
+ const struct user_key_payload *upayload;
+ long ret;
+@@ -185,8 +185,7 @@ long user_read(const struct key *key, ch
+ if (buflen > upayload->datalen)
+ buflen = upayload->datalen;
+
+- if (copy_to_user(buffer, upayload->data, buflen) != 0)
+- ret = -EFAULT;
++ memcpy(buffer, upayload->data, buflen);
+ }
+
+ return ret;
--- /dev/null
+From d9f4bb1a0f4db493efe6d7c58ffe696a57de7eb3 Mon Sep 17 00:00:00 2001
+From: David Howells <dhowells@redhat.com>
+Date: Thu, 22 Feb 2018 14:38:34 +0000
+Subject: KEYS: Use individual pages in big_key for crypto buffers
+
+From: David Howells <dhowells@redhat.com>
+
+commit d9f4bb1a0f4db493efe6d7c58ffe696a57de7eb3 upstream.
+
+kmalloc() can't always allocate large enough buffers for big_key to use for
+crypto (1MB + some metadata) so we cannot use that to allocate the buffer.
+Further, vmalloc'd pages can't be passed to sg_init_one(), and the aead
+crypto accessors cannot be called progressively: they must be passed all
+the data in one go (which means we can't feed in the data one block at a
+time).
+
+Fix this by allocating the buffer pages individually and passing them
+through a multientry scatterlist to the crypto layer. This has the bonus
+advantage that we don't have to allocate a contiguous series of pages.
+
+We then vmap() the page list and pass that through to the VFS read/write
+routines.
+
+This can trigger a warning:
+
+ WARNING: CPU: 0 PID: 60912 at mm/page_alloc.c:3883 __alloc_pages_nodemask+0xb7c/0x15f8
+ ([<00000000002acbb6>] __alloc_pages_nodemask+0x1ee/0x15f8)
+ [<00000000002dd356>] kmalloc_order+0x46/0x90
+ [<00000000002dd3e0>] kmalloc_order_trace+0x40/0x1f8
+ [<0000000000326a10>] __kmalloc+0x430/0x4c0
+ [<00000000004343e4>] big_key_preparse+0x7c/0x210
+ [<000000000042c040>] key_create_or_update+0x128/0x420
+ [<000000000042e52c>] SyS_add_key+0x124/0x220
+ [<00000000007bba2c>] system_call+0xc4/0x2b0
+
+from the keyctl/padd/useradd test of the keyutils testsuite on s390x.
+
+Note that it might be better to shovel data through in page-sized lumps
+instead as there's no particular need to use a monolithic buffer unless the
+kernel itself wants to access the data.
+
+Fixes: 13100a72f40f ("Security: Keys: Big keys stored encrypted")
+Reported-by: Paul Bunyan <pbunyan@redhat.com>
+Signed-off-by: David Howells <dhowells@redhat.com>
+cc: Kirill Marinushkin <k.marinushkin@gmail.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ security/keys/big_key.c | 110 +++++++++++++++++++++++++++++++++++++-----------
+ 1 file changed, 87 insertions(+), 23 deletions(-)
+
+--- a/security/keys/big_key.c
++++ b/security/keys/big_key.c
+@@ -22,6 +22,13 @@
+ #include <keys/big_key-type.h>
+ #include <crypto/aead.h>
+
++struct big_key_buf {
++ unsigned int nr_pages;
++ void *virt;
++ struct scatterlist *sg;
++ struct page *pages[];
++};
++
+ /*
+ * Layout of key payload words.
+ */
+@@ -91,10 +98,9 @@ static DEFINE_MUTEX(big_key_aead_lock);
+ /*
+ * Encrypt/decrypt big_key data
+ */
+-static int big_key_crypt(enum big_key_op op, u8 *data, size_t datalen, u8 *key)
++static int big_key_crypt(enum big_key_op op, struct big_key_buf *buf, size_t datalen, u8 *key)
+ {
+ int ret;
+- struct scatterlist sgio;
+ struct aead_request *aead_req;
+ /* We always use a zero nonce. The reason we can get away with this is
+ * because we're using a different randomly generated key for every
+@@ -109,8 +115,7 @@ static int big_key_crypt(enum big_key_op
+ return -ENOMEM;
+
+ memset(zero_nonce, 0, sizeof(zero_nonce));
+- sg_init_one(&sgio, data, datalen + (op == BIG_KEY_ENC ? ENC_AUTHTAG_SIZE : 0));
+- aead_request_set_crypt(aead_req, &sgio, &sgio, datalen, zero_nonce);
++ aead_request_set_crypt(aead_req, buf->sg, buf->sg, datalen, zero_nonce);
+ aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
+ aead_request_set_ad(aead_req, 0);
+
+@@ -130,21 +135,81 @@ error:
+ }
+
+ /*
++ * Free up the buffer.
++ */
++static void big_key_free_buffer(struct big_key_buf *buf)
++{
++ unsigned int i;
++
++ if (buf->virt) {
++ memset(buf->virt, 0, buf->nr_pages * PAGE_SIZE);
++ vunmap(buf->virt);
++ }
++
++ for (i = 0; i < buf->nr_pages; i++)
++ if (buf->pages[i])
++ __free_page(buf->pages[i]);
++
++ kfree(buf);
++}
++
++/*
++ * Allocate a buffer consisting of a set of pages with a virtual mapping
++ * applied over them.
++ */
++static void *big_key_alloc_buffer(size_t len)
++{
++ struct big_key_buf *buf;
++ unsigned int npg = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
++ unsigned int i, l;
++
++ buf = kzalloc(sizeof(struct big_key_buf) +
++ sizeof(struct page) * npg +
++ sizeof(struct scatterlist) * npg,
++ GFP_KERNEL);
++ if (!buf)
++ return NULL;
++
++ buf->nr_pages = npg;
++ buf->sg = (void *)(buf->pages + npg);
++ sg_init_table(buf->sg, npg);
++
++ for (i = 0; i < buf->nr_pages; i++) {
++ buf->pages[i] = alloc_page(GFP_KERNEL);
++ if (!buf->pages[i])
++ goto nomem;
++
++ l = min_t(size_t, len, PAGE_SIZE);
++ sg_set_page(&buf->sg[i], buf->pages[i], l, 0);
++ len -= l;
++ }
++
++ buf->virt = vmap(buf->pages, buf->nr_pages, VM_MAP, PAGE_KERNEL);
++ if (!buf->virt)
++ goto nomem;
++
++ return buf;
++
++nomem:
++ big_key_free_buffer(buf);
++ return NULL;
++}
++
++/*
+ * Preparse a big key
+ */
+ int big_key_preparse(struct key_preparsed_payload *prep)
+ {
++ struct big_key_buf *buf;
+ struct path *path = (struct path *)&prep->payload.data[big_key_path];
+ struct file *file;
+ u8 *enckey;
+- u8 *data = NULL;
+ ssize_t written;
+- size_t datalen = prep->datalen;
++ size_t datalen = prep->datalen, enclen = datalen + ENC_AUTHTAG_SIZE;
+ int ret;
+
+- ret = -EINVAL;
+ if (datalen <= 0 || datalen > 1024 * 1024 || !prep->data)
+- goto error;
++ return -EINVAL;
+
+ /* Set an arbitrary quota */
+ prep->quotalen = 16;
+@@ -157,13 +222,12 @@ int big_key_preparse(struct key_preparse
+ *
+ * File content is stored encrypted with randomly generated key.
+ */
+- size_t enclen = datalen + ENC_AUTHTAG_SIZE;
+ loff_t pos = 0;
+
+- data = kmalloc(enclen, GFP_KERNEL);
+- if (!data)
++ buf = big_key_alloc_buffer(enclen);
++ if (!buf)
+ return -ENOMEM;
+- memcpy(data, prep->data, datalen);
++ memcpy(buf->virt, prep->data, datalen);
+
+ /* generate random key */
+ enckey = kmalloc(ENC_KEY_SIZE, GFP_KERNEL);
+@@ -176,7 +240,7 @@ int big_key_preparse(struct key_preparse
+ goto err_enckey;
+
+ /* encrypt aligned data */
+- ret = big_key_crypt(BIG_KEY_ENC, data, datalen, enckey);
++ ret = big_key_crypt(BIG_KEY_ENC, buf, datalen, enckey);
+ if (ret)
+ goto err_enckey;
+
+@@ -187,7 +251,7 @@ int big_key_preparse(struct key_preparse
+ goto err_enckey;
+ }
+
+- written = kernel_write(file, data, enclen, &pos);
++ written = kernel_write(file, buf->virt, enclen, &pos);
+ if (written != enclen) {
+ ret = written;
+ if (written >= 0)
+@@ -202,7 +266,7 @@ int big_key_preparse(struct key_preparse
+ *path = file->f_path;
+ path_get(path);
+ fput(file);
+- kzfree(data);
++ big_key_free_buffer(buf);
+ } else {
+ /* Just store the data in a buffer */
+ void *data = kmalloc(datalen, GFP_KERNEL);
+@@ -220,7 +284,7 @@ err_fput:
+ err_enckey:
+ kzfree(enckey);
+ error:
+- kzfree(data);
++ big_key_free_buffer(buf);
+ return ret;
+ }
+
+@@ -298,15 +362,15 @@ long big_key_read(const struct key *key,
+ return datalen;
+
+ if (datalen > BIG_KEY_FILE_THRESHOLD) {
++ struct big_key_buf *buf;
+ struct path *path = (struct path *)&key->payload.data[big_key_path];
+ struct file *file;
+- u8 *data;
+ u8 *enckey = (u8 *)key->payload.data[big_key_data];
+ size_t enclen = datalen + ENC_AUTHTAG_SIZE;
+ loff_t pos = 0;
+
+- data = kmalloc(enclen, GFP_KERNEL);
+- if (!data)
++ buf = big_key_alloc_buffer(enclen);
++ if (!buf)
+ return -ENOMEM;
+
+ file = dentry_open(path, O_RDONLY, current_cred());
+@@ -316,26 +380,26 @@ long big_key_read(const struct key *key,
+ }
+
+ /* read file to kernel and decrypt */
+- ret = kernel_read(file, data, enclen, &pos);
++ ret = kernel_read(file, buf->virt, enclen, &pos);
+ if (ret >= 0 && ret != enclen) {
+ ret = -EIO;
+ goto err_fput;
+ }
+
+- ret = big_key_crypt(BIG_KEY_DEC, data, enclen, enckey);
++ ret = big_key_crypt(BIG_KEY_DEC, buf, enclen, enckey);
+ if (ret)
+ goto err_fput;
+
+ ret = datalen;
+
+ /* copy decrypted data to user */
+- if (copy_to_user(buffer, data, datalen) != 0)
++ if (copy_to_user(buffer, buf->virt, datalen) != 0)
+ ret = -EFAULT;
+
+ err_fput:
+ fput(file);
+ error:
+- kzfree(data);
++ big_key_free_buffer(buf);
+ } else {
+ ret = datalen;
+ if (copy_to_user(buffer, key->payload.data[big_key_data],
--- /dev/null
+From 80c503e0e68fbe271680ab48f0fe29bc034b01b7 Mon Sep 17 00:00:00 2001
+From: "Paul E. McKenney" <paulmck@kernel.org>
+Date: Thu, 23 Jan 2020 09:19:01 -0800
+Subject: locktorture: Print ratio of acquisitions, not failures
+
+From: Paul E. McKenney <paulmck@kernel.org>
+
+commit 80c503e0e68fbe271680ab48f0fe29bc034b01b7 upstream.
+
+The __torture_print_stats() function in locktorture.c carefully
+initializes local variable "min" to statp[0].n_lock_acquired, but
+then compares it to statp[i].n_lock_fail. Given that the .n_lock_fail
+field should normally be zero, and given the initialization, it seems
+reasonable to display the maximum and minimum number of acquisitions
+instead of miscomputing the maximum and minimum number of failures.
+This commit therefore switches from failures to acquisitions.
+
+And this turns out to be not only a day-zero bug, but entirely my
+own fault. I hate it when that happens!
+
+Fixes: 0af3fe1efa53 ("locktorture: Add a lock-torture kernel module")
+Reported-by: Will Deacon <will@kernel.org>
+Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
+Acked-by: Will Deacon <will@kernel.org>
+Cc: Davidlohr Bueso <dave@stgolabs.net>
+Cc: Josh Triplett <josh@joshtriplett.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/locking/locktorture.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+--- a/kernel/locking/locktorture.c
++++ b/kernel/locking/locktorture.c
+@@ -723,10 +723,10 @@ static void __torture_print_stats(char *
+ if (statp[i].n_lock_fail)
+ fail = true;
+ sum += statp[i].n_lock_acquired;
+- if (max < statp[i].n_lock_fail)
+- max = statp[i].n_lock_fail;
+- if (min > statp[i].n_lock_fail)
+- min = statp[i].n_lock_fail;
++ if (max < statp[i].n_lock_acquired)
++ max = statp[i].n_lock_acquired;
++ if (min > statp[i].n_lock_acquired)
++ min = statp[i].n_lock_acquired;
+ }
+ page += sprintf(page,
+ "%s: Total: %lld Max/Min: %ld/%ld %s Fail: %d %s\n",
--- /dev/null
+From 4da0ea71ea934af18db4c63396ba2af1a679ef02 Mon Sep 17 00:00:00 2001
+From: Dan Carpenter <dan.carpenter@oracle.com>
+Date: Fri, 28 Feb 2020 12:25:54 +0300
+Subject: mtd: lpddr: Fix a double free in probe()
+
+From: Dan Carpenter <dan.carpenter@oracle.com>
+
+commit 4da0ea71ea934af18db4c63396ba2af1a679ef02 upstream.
+
+This function is only called from lpddr_probe(). We free "lpddr" both
+here and in the caller, so it's a double free. The best place to free
+"lpddr" is in lpddr_probe() so let's delete this one.
+
+Fixes: 8dc004395d5e ("[MTD] LPDDR qinfo probing.")
+Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Link: https://lore.kernel.org/linux-mtd/20200228092554.o57igp3nqhyvf66t@kili.mountain
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/mtd/lpddr/lpddr_cmds.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/drivers/mtd/lpddr/lpddr_cmds.c
++++ b/drivers/mtd/lpddr/lpddr_cmds.c
+@@ -81,7 +81,6 @@ struct mtd_info *lpddr_cmdset(struct map
+ shared = kmalloc(sizeof(struct flchip_shared) * lpddr->numchips,
+ GFP_KERNEL);
+ if (!shared) {
+- kfree(lpddr);
+ kfree(mtd);
+ return NULL;
+ }
--- /dev/null
+From 49c64df880570034308e4a9a49c4bc95cf8cdb33 Mon Sep 17 00:00:00 2001
+From: Wen Yang <wenyang@linux.alibaba.com>
+Date: Wed, 18 Mar 2020 23:31:56 +0800
+Subject: mtd: phram: fix a double free issue in error path
+
+From: Wen Yang <wenyang@linux.alibaba.com>
+
+commit 49c64df880570034308e4a9a49c4bc95cf8cdb33 upstream.
+
+The variable 'name' is released multiple times in the error path,
+which may cause double-free issues. Avoid this by adding a goto label
+that releases the memory in one place. This change also makes the code
+a bit cleaner.
+
+Fixes: 4f678a58d335 ("mtd: fix memory leaks in phram_setup")
+Signed-off-by: Wen Yang <wenyang@linux.alibaba.com>
+Cc: Joern Engel <joern@lazybastard.org>
+Cc: Miquel Raynal <miquel.raynal@bootlin.com>
+Cc: Richard Weinberger <richard@nod.at>
+Cc: Vignesh Raghavendra <vigneshr@ti.com>
+Cc: linux-mtd@lists.infradead.org
+Cc: linux-kernel@vger.kernel.org
+Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Link: https://lore.kernel.org/linux-mtd/20200318153156.25612-1-wenyang@linux.alibaba.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/mtd/devices/phram.c | 15 +++++++++------
+ 1 file changed, 9 insertions(+), 6 deletions(-)
+
+--- a/drivers/mtd/devices/phram.c
++++ b/drivers/mtd/devices/phram.c
+@@ -247,22 +247,25 @@ static int phram_setup(const char *val)
+
+ ret = parse_num64(&start, token[1]);
+ if (ret) {
+- kfree(name);
+ parse_err("illegal start address\n");
++ goto error;
+ }
+
+ ret = parse_num64(&len, token[2]);
+ if (ret) {
+- kfree(name);
+ parse_err("illegal device length\n");
++ goto error;
+ }
+
+ ret = register_device(name, start, len);
+- if (!ret)
+- pr_info("%s device: %#llx at %#llx\n", name, len, start);
+- else
+- kfree(name);
++ if (ret)
++ goto error;
+
++ pr_info("%s device: %#llx at %#llx\n", name, len, start);
++ return 0;
++
++error:
++ kfree(name);
+ return ret;
+ }
+
--- /dev/null
+From d0802dc411f469569a537283b6f3833af47aece9 Mon Sep 17 00:00:00 2001
+From: Florian Fainelli <f.fainelli@gmail.com>
+Date: Mon, 30 Mar 2020 14:38:46 -0700
+Subject: net: dsa: bcm_sf2: Fix overflow checks
+
+From: Florian Fainelli <f.fainelli@gmail.com>
+
+commit d0802dc411f469569a537283b6f3833af47aece9 upstream.
+
+Commit f949a12fd697 ("net: dsa: bcm_sf2: fix buffer overflow doing
+set_rxnfc") tried to fix some user-controlled buffer overflows in
+bcm_sf2_cfp_rule_set() and bcm_sf2_cfp_rule_del(), but the fix was using
+CFP_NUM_RULES, which, while correct for avoiding bitmap overflows, is
+not representative of what the device actually supports. Correct that by
+using bcm_sf2_cfp_rule_size() instead.
+
+The latter returns the number of rules minus 1, so change the checks
+from greater-than-or-equal to greater-than accordingly.
+
+Fixes: f949a12fd697 ("net: dsa: bcm_sf2: fix buffer overflow doing set_rxnfc")
+Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/net/dsa/bcm_sf2_cfp.c | 9 +++------
+ 1 file changed, 3 insertions(+), 6 deletions(-)
+
+--- a/drivers/net/dsa/bcm_sf2_cfp.c
++++ b/drivers/net/dsa/bcm_sf2_cfp.c
+@@ -130,17 +130,14 @@ static int bcm_sf2_cfp_rule_set(struct d
+ (fs->m_ext.vlan_etype || fs->m_ext.data[1]))
+ return -EINVAL;
+
+- if (fs->location != RX_CLS_LOC_ANY && fs->location >= CFP_NUM_RULES)
++ if (fs->location != RX_CLS_LOC_ANY &&
++ fs->location > bcm_sf2_cfp_rule_size(priv))
+ return -EINVAL;
+
+ if (fs->location != RX_CLS_LOC_ANY &&
+ test_bit(fs->location, priv->cfp.used))
+ return -EBUSY;
+
+- if (fs->location != RX_CLS_LOC_ANY &&
+- fs->location > bcm_sf2_cfp_rule_size(priv))
+- return -EINVAL;
+-
+ ip_frag = be32_to_cpu(fs->m_ext.data[0]);
+
+ /* We do not support discarding packets, check that the
+@@ -333,7 +330,7 @@ static int bcm_sf2_cfp_rule_del(struct b
+ int ret;
+ u32 reg;
+
+- if (loc >= CFP_NUM_RULES)
++ if (loc > bcm_sf2_cfp_rule_size(priv))
+ return -EINVAL;
+
+ /* Refuse deletion of unused rules, and the default reserved rule */
 ext2-fix-debug-reference-to-ext2_xattr_cache.patch
 libnvdimm-out-of-bounds-read-in-__nd_ioctl.patch
 iommu-amd-fix-the-configuration-of-gcr3-table-root-p.patch
+net-dsa-bcm_sf2-fix-overflow-checks.patch
+fbdev-potential-information-leak-in-do_fb_ioctl.patch
+tty-evh_bytechan-fix-out-of-bounds-accesses.patch
+locktorture-print-ratio-of-acquisitions-not-failures.patch
+mtd-lpddr-fix-a-double-free-in-probe.patch
+mtd-phram-fix-a-double-free-issue-in-error-path.patch
+keys-use-individual-pages-in-big_key-for-crypto-buffers.patch
+keys-don-t-write-out-to-userspace-while-holding-key-semaphore.patch
+x86-microcode-intel-replace-sync_core-with-native_cpuid_reg-eax.patch
+bpf-test_verifier-switch-bpf_get_stack-s-0-s-r8-test.patch
+bpf-fix-buggy-r0-retval-refinement-for-tracing-helpers.patch
--- /dev/null
+From 3670664b5da555a2a481449b3baafff113b0ac35 Mon Sep 17 00:00:00 2001
+From: Stephen Rothwell <sfr@canb.auug.org.au>
+Date: Thu, 9 Jan 2020 18:39:12 +1100
+Subject: tty: evh_bytechan: Fix out of bounds accesses
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Stephen Rothwell <sfr@canb.auug.org.au>
+
+commit 3670664b5da555a2a481449b3baafff113b0ac35 upstream.
+
+ev_byte_channel_send() assumes that its third argument is a 16 byte
+array. Some places where it is called it may not be (or we can't
+easily tell if it is). Newer compilers have started producing warnings
+about this, so make sure we actually pass a 16 byte array.
+
+There may be more elegant solutions to this, but the driver is quite
+old and hasn't been updated in many years.
+
+The warnings (from a powerpc allyesconfig build) are:
+
+ In file included from include/linux/byteorder/big_endian.h:5,
+ from arch/powerpc/include/uapi/asm/byteorder.h:14,
+ from include/asm-generic/bitops/le.h:6,
+ from arch/powerpc/include/asm/bitops.h:250,
+ from include/linux/bitops.h:29,
+ from include/linux/kernel.h:12,
+ from include/asm-generic/bug.h:19,
+ from arch/powerpc/include/asm/bug.h:109,
+ from include/linux/bug.h:5,
+ from include/linux/mmdebug.h:5,
+ from include/linux/gfp.h:5,
+ from include/linux/slab.h:15,
+ from drivers/tty/ehv_bytechan.c:24:
+ drivers/tty/ehv_bytechan.c: In function ‘ehv_bc_udbg_putc’:
+ arch/powerpc/include/asm/epapr_hcalls.h:298:20: warning: array subscript 1 is outside array bounds of ‘const char[1]’ [-Warray-bounds]
+ 298 | r6 = be32_to_cpu(p[1]);
+ include/uapi/linux/byteorder/big_endian.h:40:51: note: in definition of macro ‘__be32_to_cpu’
+ 40 | #define __be32_to_cpu(x) ((__force __u32)(__be32)(x))
+ | ^
+ arch/powerpc/include/asm/epapr_hcalls.h:298:7: note: in expansion of macro ‘be32_to_cpu’
+ 298 | r6 = be32_to_cpu(p[1]);
+ | ^~~~~~~~~~~
+ drivers/tty/ehv_bytechan.c:166:13: note: while referencing ‘data’
+ 166 | static void ehv_bc_udbg_putc(char c)
+ | ^~~~~~~~~~~~~~~~
+
+Fixes: dcd83aaff1c8 ("tty/powerpc: introduce the ePAPR embedded hypervisor byte channel driver")
+Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
+Tested-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
+[mpe: Trim warnings from change log]
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/20200109183912.5fcb52aa@canb.auug.org.au
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/tty/ehv_bytechan.c | 21 ++++++++++++++++++---
+ 1 file changed, 18 insertions(+), 3 deletions(-)
+
+--- a/drivers/tty/ehv_bytechan.c
++++ b/drivers/tty/ehv_bytechan.c
+@@ -139,6 +139,21 @@ static int find_console_handle(void)
+ return 1;
+ }
+
++static unsigned int local_ev_byte_channel_send(unsigned int handle,
++ unsigned int *count,
++ const char *p)
++{
++ char buffer[EV_BYTE_CHANNEL_MAX_BYTES];
++ unsigned int c = *count;
++
++ if (c < sizeof(buffer)) {
++ memcpy(buffer, p, c);
++ memset(&buffer[c], 0, sizeof(buffer) - c);
++ p = buffer;
++ }
++ return ev_byte_channel_send(handle, count, p);
++}
++
+ /*************************** EARLY CONSOLE DRIVER ***************************/
+
+ #ifdef CONFIG_PPC_EARLY_DEBUG_EHV_BC
+@@ -157,7 +172,7 @@ static void byte_channel_spin_send(const
+
+ do {
+ count = 1;
+- ret = ev_byte_channel_send(CONFIG_PPC_EARLY_DEBUG_EHV_BC_HANDLE,
++ ret = local_ev_byte_channel_send(CONFIG_PPC_EARLY_DEBUG_EHV_BC_HANDLE,
+ &count, &data);
+ } while (ret == EV_EAGAIN);
+ }
+@@ -224,7 +239,7 @@ static int ehv_bc_console_byte_channel_s
+ while (count) {
+ len = min_t(unsigned int, count, EV_BYTE_CHANNEL_MAX_BYTES);
+ do {
+- ret = ev_byte_channel_send(handle, &len, s);
++ ret = local_ev_byte_channel_send(handle, &len, s);
+ } while (ret == EV_EAGAIN);
+ count -= len;
+ s += len;
+@@ -404,7 +419,7 @@ static void ehv_bc_tx_dequeue(struct ehv
+ CIRC_CNT_TO_END(bc->head, bc->tail, BUF_SIZE),
+ EV_BYTE_CHANNEL_MAX_BYTES);
+
+- ret = ev_byte_channel_send(bc->handle, &len, bc->buf + bc->tail);
++ ret = local_ev_byte_channel_send(bc->handle, &len, bc->buf + bc->tail);
+
+ /* 'len' is valid only if the return code is 0 or EV_EAGAIN */
+ if (!ret || (ret == EV_EAGAIN))
--- /dev/null
+From evalds.iodzevics@gmail.com Wed Apr 22 10:26:17 2020
+From: Evalds Iodzevics <evalds.iodzevics@gmail.com>
+Date: Wed, 22 Apr 2020 11:17:59 +0300
+Subject: x86/microcode/intel: replace sync_core() with native_cpuid_reg(eax)
+To: linux-kernel@vger.kernel.org
+Cc: gregkh@linuxfoundation.org, tglx@linutronix.de, ben@decadent.org.uk, bp@suse.de, Evalds Iodzevics <evalds.iodzevics@gmail.com>, stable@vger.kernel.org
+Message-ID: <20200422081759.1632-1-evalds.iodzevics@gmail.com>
+
+From: Evalds Iodzevics <evalds.iodzevics@gmail.com>
+
+On Intel it is required to do CPUID(1) before reading the microcode
+revision MSR. The current code in 4.4 and 4.9 relies on sync_core() to
+call CPUID; unfortunately, on 32-bit machines the code inside sync_core()
+always jumps past the CPUID instruction, as it depends on the
+boot_cpu_data structure, which is not populated correctly this early in
+the boot sequence.
+
+It depends on:
+commit 5dedade6dfa2 ("x86/CPU: Add native CPUID variants returning a single
+datum")
+
+This patch is for 4.4 but should also apply to 4.9.
+
+Signed-off-by: Evalds Iodzevics <evalds.iodzevics@gmail.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/microcode_intel.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/microcode_intel.h b/arch/x86/include/asm/microcode_intel.h
+index 90343ba50485..92ce9c8a508b 100644
+--- a/arch/x86/include/asm/microcode_intel.h
++++ b/arch/x86/include/asm/microcode_intel.h
+@@ -60,7 +60,7 @@ static inline u32 intel_get_microcode_revision(void)
+ native_wrmsrl(MSR_IA32_UCODE_REV, 0);
+
+ /* As documented in the SDM: Do a CPUID 1 here */
+- sync_core();
++ native_cpuid_eax(1);
+
+ /* get the current revision from MSR 0x8B */
+ native_rdmsr(MSR_IA32_UCODE_REV, dummy, rev);
+--
+2.17.4
+