+++ /dev/null
-From daniel@iogearbox.net Wed Apr 22 10:22:28 2020
-From: Daniel Borkmann <daniel@iogearbox.net>
-Date: Tue, 21 Apr 2020 15:01:49 +0200
-Subject: bpf: fix buggy r0 retval refinement for tracing helpers
-To: gregkh@linuxfoundation.org
-Cc: alexei.starovoitov@gmail.com, john.fastabend@gmail.com, kpsingh@chromium.org, jannh@google.com, fontanalorenz@gmail.com, leodidonato@gmail.com, yhs@fb.com, bpf@vger.kernel.org, Daniel Borkmann <daniel@iogearbox.net>, Alexei Starovoitov <ast@kernel.org>
-Message-ID: <20200421130152.14348-1-daniel@iogearbox.net>
-
-From: Daniel Borkmann <daniel@iogearbox.net>
-Date: Tue, 21 Apr 2020 15:01:49 +0200
-
-[ no upstream commit ]
-
-See the gory details in 100605035e15 ("bpf: Verifier, do_refine_retval_range
-may clamp umin to 0 incorrectly") for why 849fa50662fb ("bpf/verifier: refine
-retval R0 state for bpf_get_stack helper") is buggy. The whole series, however,
-is not suitable for stable since it adds a significant amount [0] of verifier
-complexity in order to add 32-bit subreg tracking. Something simpler is needed.
-
-Unfortunately, reverting 849fa50662fb ("bpf/verifier: refine retval R0 state
-for bpf_get_stack helper") or just cherry-picking 100605035e15 ("bpf: Verifier,
-do_refine_retval_range may clamp umin to 0 incorrectly") is not an option since
-it would badly break existing tracing programs (at least those using the
-bpf_get_stack() and bpf_probe_read_str() helpers). Not fixing it in stable is
-also not an option since on 4.19 kernels an error will cause a soft-lockup due
-to hitting a dead-code sanitized branch, given we don't hard-wire such branches
-in old kernels yet. But even then, for 5.x, 849fa50662fb ("bpf/verifier: refine
-retval R0 state for bpf_get_stack helper") would cause wrong bounds in the
-verifier simulation when an error is hit.
-
-In one of the earlier iterations of the mentioned patch series for upstream,
-there was the concern that just using smax_value in do_refine_retval_range()
-would nuke the bounds via the subsequent <<32 >>32 shifts before the comparison
-against 0 [1], which eventually led to the 32-bit subreg tracking in the first
-place. While I initially went for implementing the idea [1] of pattern matching
-the two shift operations, it turned out to be more complex than actually
-needed, meaning we can simply treat do_refine_retval_range() similarly to how
-we branch off verification for conditionals or under speculation, that is, by
-pushing a new reg state to the stack for later verification. This means that
-instead of verifying the current path with ret_reg in the interval
-[S32MIN, msize_max_value], where later bounds would get nuked, we split this
-into two: i) the success case, where ret_reg can be in [0, msize_max_value],
-and ii) the error case, with ret_reg known to be in the interval [S32MIN, -1].
-The latter preserves the bounds during these shift patterns and can match the
-reg < 0 test. test_progs also succeeds with this approach.
-
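-To illustrate, the typical tracing program pattern affected looks roughly
-like this (a minimal sketch using the usual bpf_helpers.h macros; the
-section name, probe name and buffer size are illustrative only):
-
-	SEC("kprobe/sys_write")
-	int probe(struct pt_regs *ctx)
-	{
-		u64 buf[64];
-		long ret;
-
-		ret = bpf_get_stack(ctx, buf, sizeof(buf), 0);
-		/* LLVM sign-extends the 32-bit retval via r8 <<= 32,
-		 * r8 s>>= 32 before this test; the error interval
-		 * [S32MIN, -1] survives these shifts, so both paths
-		 * now verify with correct bounds.
-		 */
-		if (ret < 0)
-			return 0;
-		/* Here ret is known to be within [0, sizeof(buf)]. */
-		return 0;
-	}
-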
- [0] https://lore.kernel.org/bpf/158507130343.15666.8018068546764556975.stgit@john-Precision-5820-Tower/
- [1] https://lore.kernel.org/bpf/158015334199.28573.4940395881683556537.stgit@john-XPS-13-9370/T/#m2e0ad1d5949131014748b6daa48a3495e7f0456d
-
-Fixes: 849fa50662fb ("bpf/verifier: refine retval R0 state for bpf_get_stack helper")
-Reported-by: Lorenzo Fontana <fontanalorenz@gmail.com>
-Reported-by: Leonardo Di Donato <leodidonato@gmail.com>
-Reported-by: John Fastabend <john.fastabend@gmail.com>
-Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-Acked-by: Alexei Starovoitov <ast@kernel.org>
-Acked-by: John Fastabend <john.fastabend@gmail.com>
-Tested-by: John Fastabend <john.fastabend@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
---- a/kernel/bpf/verifier.c
-+++ b/kernel/bpf/verifier.c
-@@ -227,8 +227,7 @@ struct bpf_call_arg_meta {
- bool pkt_access;
- int regno;
- int access_size;
-- s64 msize_smax_value;
-- u64 msize_umax_value;
-+ u64 msize_max_value;
- int ref_obj_id;
- int func_id;
- u32 btf_id;
-@@ -3568,8 +3567,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
- /* remember the mem_size which may be used later
- * to refine return values.
- */
-- meta->msize_smax_value = reg->smax_value;
-- meta->msize_umax_value = reg->umax_value;
-+ meta->msize_max_value = reg->umax_value;
-
- /* The register is SCALAR_VALUE; the access check
- * happens using its boundaries.
-@@ -4095,21 +4093,44 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
- return 0;
- }
-
--static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
-- int func_id,
-- struct bpf_call_arg_meta *meta)
-+static int do_refine_retval_range(struct bpf_verifier_env *env,
-+ struct bpf_reg_state *regs, int ret_type,
-+ int func_id, struct bpf_call_arg_meta *meta)
- {
- struct bpf_reg_state *ret_reg = &regs[BPF_REG_0];
-+ struct bpf_reg_state tmp_reg = *ret_reg;
-+ bool ret;
-
- if (ret_type != RET_INTEGER ||
- (func_id != BPF_FUNC_get_stack &&
- func_id != BPF_FUNC_probe_read_str))
-- return;
-+ return 0;
-+
-+ /* Error case where ret is in interval [S32MIN, -1]. */
-+ ret_reg->smin_value = S32_MIN;
-+ ret_reg->smax_value = -1;
-
-- ret_reg->smax_value = meta->msize_smax_value;
-- ret_reg->umax_value = meta->msize_umax_value;
- __reg_deduce_bounds(ret_reg);
- __reg_bound_offset(ret_reg);
-+ __update_reg_bounds(ret_reg);
-+
-+ ret = push_stack(env, env->insn_idx + 1, env->insn_idx, false);
-+ if (!ret)
-+ return -EFAULT;
-+
-+ *ret_reg = tmp_reg;
-+
-+ /* Success case where ret is in range [0, msize_max_value]. */
-+ ret_reg->smin_value = 0;
-+ ret_reg->smax_value = meta->msize_max_value;
-+ ret_reg->umin_value = ret_reg->smin_value;
-+ ret_reg->umax_value = ret_reg->smax_value;
-+
-+ __reg_deduce_bounds(ret_reg);
-+ __reg_bound_offset(ret_reg);
-+ __update_reg_bounds(ret_reg);
-+
-+ return 0;
- }
-
- static int
-@@ -4377,7 +4398,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
- regs[BPF_REG_0].ref_obj_id = id;
- }
-
-- do_refine_retval_range(regs, fn->ret_type, func_id, &meta);
-+ err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
-+ if (err)
-+ return err;
-
- err = check_map_func_compatibility(env, meta.map_ptr, func_id);
- if (err)
---
-2.20.1
-
+++ /dev/null
-From daniel@iogearbox.net Wed Apr 22 10:24:25 2020
-From: Daniel Borkmann <daniel@iogearbox.net>
-Date: Tue, 21 Apr 2020 15:01:52 +0200
-Subject: bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test
-To: gregkh@linuxfoundation.org
-Cc: alexei.starovoitov@gmail.com, john.fastabend@gmail.com, kpsingh@chromium.org, jannh@google.com, fontanalorenz@gmail.com, leodidonato@gmail.com, yhs@fb.com, bpf@vger.kernel.org, Daniel Borkmann <daniel@iogearbox.net>, Alexei Starovoitov <ast@kernel.org>
-Message-ID: <20200421130152.14348-4-daniel@iogearbox.net>
-
-From: Daniel Borkmann <daniel@iogearbox.net>
-
-[ no upstream commit ]
-
-Switch the comparison, so that is_branch_taken() will recognize that the
-branch below is never taken:
-
- [...]
- 17: [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
- 17: (67) r8 <<= 32
- 18: [...] R8_w=inv(id=0,smax_value=-4294967296,umin_value=9223372036854775808,umax_value=18446744069414584320,var_off=(0x8000000000000000; 0x7fffffff00000000)) [...]
- 18: (c7) r8 s>>= 32
- 19: [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
- 19: (6d) if r1 s> r8 goto pc+16
- [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
- [...]
-
-Currently, we only invoke is_branch_taken() if either the source is an
-immediate (K), or the source is a scalar value that is a known constant. For
-upstream, it would be good to properly extend this to also cover the case
-where dst is const and src is not.
-
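-One possible shape for such an extension (an untested sketch, not an actual
-upstream change; flip_opcode() is a hypothetical helper that mirrors the
-comparison, e.g. BPF_JSGT <-> BPF_JSLT):
-
-	/* Hypothetical extra case in check_cond_jmp_op(): dst is a
-	 * known constant, src is not. Swap the operands and mirror
-	 * the comparison so is_branch_taken() can be reused.
-	 */
-	else if (dst_reg->type == SCALAR_VALUE &&
-		 tnum_is_const(dst_reg->var_off))
-		pred = is_branch_taken(src_reg, dst_reg->var_off.value,
-				       flip_opcode(opcode), is_jmp32);
-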
-For the sake of test_verifier, it is probably not needed here:
-
- # ./test_verifier 101
- #101/p bpf_get_stack return R0 within range OK
- Summary: 1 PASSED, 0 SKIPPED, 0 FAILED
-
-I haven't seen this issue in test_progs*, though; they pass fine:
-
- # ./test_progs-no_alu32 -t get_stack
- Switching to flavor 'no_alu32' subdirectory...
- #20 get_stack_raw_tp:OK
- Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
-
- # ./test_progs -t get_stack
- #20 get_stack_raw_tp:OK
- Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
-
-Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-Acked-by: Alexei Starovoitov <ast@kernel.org>
-Acked-by: John Fastabend <john.fastabend@gmail.com>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- tools/testing/selftests/bpf/verifier/bpf_get_stack.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
-index 69b048cf46d9..371926771db5 100644
---- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
-+++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
-@@ -19,7 +19,7 @@
- BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
- BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
- BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
-- BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
-+ BPF_JMP_REG(BPF_JSLT, BPF_REG_8, BPF_REG_1, 16),
- BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
- BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
- BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
---
-2.20.1
-
+++ /dev/null
-From evalds.iodzevics@gmail.com Wed Apr 22 10:26:17 2020
-From: Evalds Iodzevics <evalds.iodzevics@gmail.com>
-Date: Wed, 22 Apr 2020 11:17:59 +0300
-Subject: x86/microcode/intel: replace sync_core() with native_cpuid_reg(eax)
-To: linux-kernel@vger.kernel.org
-Cc: gregkh@linuxfoundation.org, tglx@linutronix.de, ben@decadent.org.uk, bp@suse.de, Evalds Iodzevics <evalds.iodzevics@gmail.com>, stable@vger.kernel.org
-Message-ID: <20200422081759.1632-1-evalds.iodzevics@gmail.com>
-
-From: Evalds Iodzevics <evalds.iodzevics@gmail.com>
-
-On Intel, it is required to do CPUID(1) before reading the microcode
-revision MSR. The current code in 4.4 and 4.9 relies on sync_core() to call
-CPUID; unfortunately, on 32-bit machines the code inside sync_core() always
-jumps past the CPUID instruction, as it depends on the boot_cpu_data
-structure, which is not populated correctly this early in the boot sequence.
-
-It depends on:
-commit 5dedade6dfa2 ("x86/CPU: Add native CPUID variants returning a single
-datum")
-
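-For reference, that commit adds native CPUID variants returning a single
-register, along these lines (abbreviated/paraphrased from the upstream
-commit):
-
-	#define native_cpuid_reg(reg)					\
-	static inline unsigned int native_cpuid_##reg(unsigned int op)	\
-	{								\
-		unsigned int eax = op, ebx, ecx = 0, edx;		\
-									\
-		native_cpuid(&eax, &ebx, &ecx, &edx);			\
-									\
-		return reg;						\
-	}
-
-	native_cpuid_reg(eax)
-
-Unlike sync_core(), native_cpuid_eax(1) executes CPUID unconditionally and
-does not consult boot_cpu_data.
-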
-This patch is for 4.4 but should also apply to 4.9.
-
-Signed-off-by: Evalds Iodzevics <evalds.iodzevics@gmail.com>
-Cc: stable@vger.kernel.org
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- arch/x86/include/asm/microcode_intel.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/x86/include/asm/microcode_intel.h b/arch/x86/include/asm/microcode_intel.h
-index 90343ba50485..92ce9c8a508b 100644
---- a/arch/x86/include/asm/microcode_intel.h
-+++ b/arch/x86/include/asm/microcode_intel.h
-@@ -60,7 +60,7 @@ static inline u32 intel_get_microcode_revision(void)
- native_wrmsrl(MSR_IA32_UCODE_REV, 0);
-
- /* As documented in the SDM: Do a CPUID 1 here */
-- sync_core();
-+ native_cpuid_eax(1);
-
- /* get the current revision from MSR 0x8B */
- native_rdmsr(MSR_IA32_UCODE_REV, dummy, rev);
---
-2.17.4
-