--- /dev/null
+From foo@baz Fri Aug 6 10:05:36 AM CEST 2021
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Thu, 5 Aug 2021 18:53:39 +0300
+Subject: bpf: Do not mark insn as seen under speculative path verification
+To: stable@vger.kernel.org
+Cc: bpf@vger.kernel.org, daniel@iogearbox.net, ast@kernel.org, john.fastabend@gmail.com, benedict.schlueter@rub.de, piotras@gmail.com
+Message-ID: <20210805155343.3618696-3-ovidiu.panait@windriver.com>
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+commit fe9a5ca7e370e613a9a75a13008a3845ea759d6e upstream
+
+... in such circumstances, we do not want to mark the instruction as seen,
+given the goal is still to rewrite/sanitize dead code with jmp-1s if it is
+not reachable from the non-speculative path verification. We do, however,
+want to verify it for safety regardless.
+
+With the patch as-is, all the insns that have been marked as seen before the
+patch will also be marked as seen after the patch (just with a potentially
+different non-zero count). An upcoming patch will also verify paths that are
+unreachable in the non-speculative domain, hence this extension is needed.
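+
+For reference, a rough sketch of what the jmp-1 rewrite in 5.4's
+sanitize_dead_code() amounts to (paraphrased for illustration, not part of
+this patch):
+
+  static void sanitize_dead_code(struct bpf_verifier_env *env)
+  {
+          struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+          /* 'ja -1' acts as a self-looping trap insn for dead code */
+          struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, -1);
+          struct bpf_insn *insn = env->prog->insnsi;
+          const int insn_cnt = env->prog->len;
+          int i;
+
+          for (i = 0; i < insn_cnt; i++) {
+                  if (aux_data[i].seen)
+                          continue;
+                  /* never seen on any verified path -> rewrite */
+                  memcpy(insn + i, &ja, sizeof(ja));
+          }
+  }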
+
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: John Fastabend <john.fastabend@gmail.com>
+Reviewed-by: Benedict Schlueter <benedict.schlueter@rub.de>
+Reviewed-by: Piotr Krysiuk <piotras@gmail.com>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+[OP: - env->pass_cnt is not used in 5.4, so adjust sanitize_mark_insn_seen()
+ to assign "true" instead
+ - drop sanitize_insn_aux_data() comment changes, as the function is not
+ present in 5.4]
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/verifier.c | 17 +++++++++++++++--
+ 1 file changed, 15 insertions(+), 2 deletions(-)
+
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4435,6 +4435,19 @@ do_sim:
+ return !ret ? REASON_STACK : 0;
+ }
+
++static void sanitize_mark_insn_seen(struct bpf_verifier_env *env)
++{
++ struct bpf_verifier_state *vstate = env->cur_state;
++
++ /* If we simulate paths under speculation, we don't update the
++ * insn as 'seen' such that when we verify unreachable paths in
++ * the non-speculative domain, sanitize_dead_code() can still
++ * rewrite/sanitize them.
++ */
++ if (!vstate->speculative)
++ env->insn_aux_data[env->insn_idx].seen = true;
++}
++
+ static int sanitize_err(struct bpf_verifier_env *env,
+ const struct bpf_insn *insn, int reason,
+ const struct bpf_reg_state *off_reg,
+@@ -7790,7 +7803,7 @@ static int do_check(struct bpf_verifier_
+ }
+
+ regs = cur_regs(env);
+- env->insn_aux_data[env->insn_idx].seen = true;
++ sanitize_mark_insn_seen(env);
+ prev_insn_idx = env->insn_idx;
+
+ if (class == BPF_ALU || class == BPF_ALU64) {
+@@ -8025,7 +8038,7 @@ process_bpf_exit:
+ return err;
+
+ env->insn_idx++;
+- env->insn_aux_data[env->insn_idx].seen = true;
++ sanitize_mark_insn_seen(env);
+ } else {
+ verbose(env, "invalid BPF_LD mode\n");
+ return -EINVAL;
--- /dev/null
+From foo@baz Fri Aug 6 10:05:36 AM CEST 2021
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Thu, 5 Aug 2021 18:53:40 +0300
+Subject: bpf: Fix leakage under speculation on mispredicted branches
+To: stable@vger.kernel.org
+Cc: bpf@vger.kernel.org, daniel@iogearbox.net, ast@kernel.org, john.fastabend@gmail.com, benedict.schlueter@rub.de, piotras@gmail.com
+Message-ID: <20210805155343.3618696-4-ovidiu.panait@windriver.com>
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+commit 9183671af6dbf60a1219371d4ed73e23f43b49db upstream
+
+The verifier only enumerates valid control-flow paths and skips paths that
+are unreachable in the non-speculative domain. As a result, it can miss
+issues under speculative execution on mispredicted branches.
+
+For example, a type confusion has been demonstrated with the following
+crafted program:
+
+ // r0 = pointer to a map array entry
+ // r6 = pointer to readable stack slot
+ // r9 = scalar controlled by attacker
+ 1: r0 = *(u64 *)(r0) // cache miss
+ 2: if r0 != 0x0 goto line 4
+ 3: r6 = r9
+ 4: if r0 != 0x1 goto line 6
+ 5: r9 = *(u8 *)(r6)
+ 6: // leak r9
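+
+As an illustration only (not the upstream PoC), the same gadget could be
+written with the selftest insn macros, assuming r0 points to a map value,
+r6 to a readable stack slot, and r9 holds the attacker-controlled scalar:
+
+  BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0), /* 1: r0 = *(u64 *)(r0) */
+  BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),        /* 2: if r0 != 0x0 goto 4 */
+  BPF_MOV64_REG(BPF_REG_6, BPF_REG_9),          /* 3: r6 = r9 */
+  BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 1),        /* 4: if r0 != 0x1 goto 6 */
+  BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_6, 0),  /* 5: r9 = *(u8 *)(r6) */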
+
+Since line 3 runs iff r0 == 0 and line 5 runs iff r0 == 1, the verifier
+concludes that the pointer dereference on line 5 is safe. But: if the
+attacker trains both branches to fall through, such that the following
+is speculatively executed ...
+
+ r6 = r9
+ r9 = *(u8 *)(r6)
+ // leak r9
+
+... then the program will dereference an attacker-controlled value and could
+leak its content under speculative execution via a side channel. This
+requires mistraining the branch predictor, which can be rather tricky,
+because the branches are mutually exclusive. However, such training can be
+done at congruent addresses in user space using different branches that are
+not mutually exclusive. That is, by training branches in user space ...
+
+ A: if r0 != 0x0 goto line C
+ B: ...
+ C: if r0 != 0x0 goto line D
+ D: ...
+
+... such that addresses A and C collide to the same CPU branch prediction
+entries in the PHT (pattern history table) as those of the BPF program's
+lines 2 and 4, respectively. A non-privileged attacker could simply brute
+force such collisions in the PHT until the attack is observed to succeed.
+
+Alternative methods to mistrain the branch predictor, which avoid brute
+forcing the collisions in the PHT, are also possible. A reliable attack has
+been demonstrated, for example, using the following crafted program:
+
+ // r0 = pointer to a [control] map array entry
+ // r7 = *(u64 *)(r0 + 0), training/attack phase
+ // r8 = *(u64 *)(r0 + 8), oob address
+ // [...]
+ // r0 = pointer to a [data] map array entry
+ 1: if r7 == 0x3 goto line 3
+ 2: r8 = r0
+ // crafted sequence of conditional jumps to separate the conditional
+ // branch in line 193 from the current execution flow
+ 3: if r0 != 0x0 goto line 5
+ 4: if r0 == 0x0 goto exit
+ 5: if r0 != 0x0 goto line 7
+ 6: if r0 == 0x0 goto exit
+ [...]
+ 187: if r0 != 0x0 goto line 189
+ 188: if r0 == 0x0 goto exit
+ // load any slowly-loaded value (due to cache miss in phase 3) ...
+ 189: r3 = *(u64 *)(r0 + 0x1200)
+ // ... and turn it into known zero for verifier, while preserving slowly-
+ // loaded dependency when executing:
+ 190: r3 &= 1
+ 191: r3 &= 2
+ // speculatively bypassed phase dependency
+ 192: r7 += r3
+ 193: if r7 == 0x3 goto exit
+ 194: r4 = *(u8 *)(r8 + 0)
+ // leak r4
+
+As can be seen, in the training phase (phase != 0x3), the condition in line
+1 evaluates to false and therefore r8 with the oob address is overwritten
+with the valid map value address, which in line 194 we can read out without
+issues. However, in the attack phase, line 2 is skipped, and due to the
+cache miss in line 189, where the map value is (zeroed and later) added to
+the phase register, the condition in line 193 takes the fall-through path
+due to the prior branch predictor training. Under speculation, it will then
+load the byte at the oob address in r8 (an unknown scalar type at that
+point), which could then be leaked via a side channel.
+
+One way to mitigate these attacks is to 'branch off' an unreachable path,
+meaning the current verification path keeps following the is_branch_taken()
+path while we push the other branch onto the verification stack. Given this
+is unreachable from the non-speculative domain, this branch's vstate is
+explicitly marked as speculative. This is needed for two reasons: i) if
+this path is solely seen from speculative execution, then we later on still
+want dead code elimination to kick in, in order to sanitize these
+instructions with jmp-1s, and ii) to ensure that paths walked in the
+non-speculative domain are not pruned against earlier walks of paths in
+the speculative domain. Additionally, for robustness, we mark the registers
+which have been part of the conditional as unknown in the speculative path,
+given there should be no assumptions made about their content.
+
+The fix here mitigates the type confusion attacks described earlier due to
+i) all code paths in the BPF program being explored, and ii) existing
+verifier logic already ensuring that a given memory access instruction
+references one specific data structure.
+
+An alternative to this fix that was also looked at in this scope was to mark
+aux->alu_state at the jump instruction with a BPF_JMP_TAKEN state as well
+as a direction encoding (always-goto, always-fallthrough, unknown), such
+that mixing different always-* directions themselves, as well as mixing
+always-* with unknown directions, would cause the verifier to reject the
+program, e.g. programs with constructs like 'if ([...]) { x = 0; } else
+{ x = 1; }' with a subsequent 'if (x == 1) { [...] }'. For unprivileged,
+only single-direction always-* taken paths and unknown taken paths would be
+allowed, such that the former could be patched from a conditional jump to
+an unconditional jump (ja). Compared to the approach taken here, it would
+have two downsides: i) valid programs that otherwise do not perform any
+pointer arithmetic, etc, would potentially be rejected/broken, and ii) path
+pruning would have to be turned off for unprivileged. Both downsides are
+avoided in this work by pushing the invalid branch to the verification
+stack instead.
+
+The issue was originally discovered by Adam and Ofek, and later independently
+discovered and reported as a result of Benedict and Piotr's research work.
+
+Fixes: b2157399cc98 ("bpf: prevent out-of-bounds speculation")
+Reported-by: Adam Morrison <mad@cs.tau.ac.il>
+Reported-by: Ofek Kirzner <ofekkir@gmail.com>
+Reported-by: Benedict Schlueter <benedict.schlueter@rub.de>
+Reported-by: Piotr Krysiuk <piotras@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: John Fastabend <john.fastabend@gmail.com>
+Reviewed-by: Benedict Schlueter <benedict.schlueter@rub.de>
+Reviewed-by: Piotr Krysiuk <piotras@gmail.com>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+[OP: use allow_ptr_leaks instead of bypass_spec_v1]
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/verifier.c | 46 +++++++++++++++++++++++++++++++++++++++++-----
+ 1 file changed, 41 insertions(+), 5 deletions(-)
+
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4346,6 +4346,27 @@ struct bpf_sanitize_info {
+ bool mask_to_left;
+ };
+
++static struct bpf_verifier_state *
++sanitize_speculative_path(struct bpf_verifier_env *env,
++ const struct bpf_insn *insn,
++ u32 next_idx, u32 curr_idx)
++{
++ struct bpf_verifier_state *branch;
++ struct bpf_reg_state *regs;
++
++ branch = push_stack(env, next_idx, curr_idx, true);
++ if (branch && insn) {
++ regs = branch->frame[branch->curframe]->regs;
++ if (BPF_SRC(insn->code) == BPF_K) {
++ mark_reg_unknown(env, regs, insn->dst_reg);
++ } else if (BPF_SRC(insn->code) == BPF_X) {
++ mark_reg_unknown(env, regs, insn->dst_reg);
++ mark_reg_unknown(env, regs, insn->src_reg);
++ }
++ }
++ return branch;
++}
++
+ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
+ struct bpf_insn *insn,
+ const struct bpf_reg_state *ptr_reg,
+@@ -4429,7 +4450,8 @@ do_sim:
+ tmp = *dst_reg;
+ *dst_reg = *ptr_reg;
+ }
+- ret = push_stack(env, env->insn_idx + 1, env->insn_idx, true);
++ ret = sanitize_speculative_path(env, NULL, env->insn_idx + 1,
++ env->insn_idx);
+ if (!ptr_is_dst_reg && ret)
+ *dst_reg = tmp;
+ return !ret ? REASON_STACK : 0;
+@@ -6079,14 +6101,28 @@ static int check_cond_jmp_op(struct bpf_
+ if (err)
+ return err;
+ }
++
+ if (pred == 1) {
+- /* only follow the goto, ignore fall-through */
++ /* Only follow the goto, ignore fall-through. If needed, push
++ * the fall-through branch for simulation under speculative
++ * execution.
++ */
++ if (!env->allow_ptr_leaks &&
++ !sanitize_speculative_path(env, insn, *insn_idx + 1,
++ *insn_idx))
++ return -EFAULT;
+ *insn_idx += insn->off;
+ return 0;
+ } else if (pred == 0) {
+- /* only follow fall-through branch, since
+- * that's where the program will go
+- */
++ /* Only follow the fall-through branch, since that's where the
++ * program will go. If needed, push the goto branch for
++ * simulation under speculative execution.
++ */
++ if (!env->allow_ptr_leaks &&
++ !sanitize_speculative_path(env, insn,
++ *insn_idx + insn->off + 1,
++ *insn_idx))
++ return -EFAULT;
+ return 0;
+ }
+
--- /dev/null
+From foo@baz Fri Aug 6 10:05:36 AM CEST 2021
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Thu, 5 Aug 2021 18:53:38 +0300
+Subject: bpf: Inherit expanded/patched seen count from old aux data
+To: stable@vger.kernel.org
+Cc: bpf@vger.kernel.org, daniel@iogearbox.net, ast@kernel.org, john.fastabend@gmail.com, benedict.schlueter@rub.de, piotras@gmail.com
+Message-ID: <20210805155343.3618696-2-ovidiu.panait@windriver.com>
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+commit d203b0fd863a2261e5d00b97f3d060c4c2a6db71 upstream
+
+Instead of relying on the current env->pass_cnt, use the seen count from the
+old aux data in adjust_insn_aux_data(), and expand it to the new range of
+patched instructions. This change is valid given we always expand 1:n
+with n>=1, so what applies to the old/original instruction needs to apply
+to the replacement as well.
+
+Not relying on env->pass_cnt is a prerequisite for a later change where we
+want to avoid marking an instruction as seen when it is verified under a
+speculative execution path.
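+
+A small made-up illustration of the intended semantics: if the insn at
+off = 5 is expanded 1:3 by a rewrite, then after adjust_insn_aux_data():
+
+  /* before:        aux[5].seen == old_seen                   */
+  /* after expand:  aux[5].seen == aux[6].seen == aux[7].seen
+   *                            == old_seen                   */
+
+i.e. the whole patched range inherits the old insn's seen state instead of
+being unconditionally marked as seen.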
+
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: John Fastabend <john.fastabend@gmail.com>
+Reviewed-by: Benedict Schlueter <benedict.schlueter@rub.de>
+Reviewed-by: Piotr Krysiuk <piotras@gmail.com>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+[OP: declare old_data as bool instead of u32 (struct bpf_insn_aux_data.seen
+ is bool in 5.4)]
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/verifier.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -8304,6 +8304,7 @@ static int adjust_insn_aux_data(struct b
+ {
+ struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
+ struct bpf_insn *insn = new_prog->insnsi;
++ bool old_seen = old_data[off].seen;
+ u32 prog_len;
+ int i;
+
+@@ -8324,7 +8325,8 @@ static int adjust_insn_aux_data(struct b
+ memcpy(new_data + off + cnt - 1, old_data + off,
+ sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+ for (i = off; i < off + cnt - 1; i++) {
+- new_data[i].seen = true;
++ /* Expand insni[off]'s seen count to the patched range. */
++ new_data[i].seen = old_seen;
+ new_data[i].zext_dst = insn_has_def32(env, insn + i);
+ }
+ env->insn_aux_data = new_data;
--- /dev/null
+From foo@baz Fri Aug 6 10:05:36 AM CEST 2021
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Thu, 5 Aug 2021 18:53:42 +0300
+Subject: bpf, selftests: Add a verifier test for assigning 32bit reg states to 64bit ones
+To: stable@vger.kernel.org
+Cc: bpf@vger.kernel.org, daniel@iogearbox.net, ast@kernel.org, john.fastabend@gmail.com, benedict.schlueter@rub.de, piotras@gmail.com
+Message-ID: <20210805155343.3618696-6-ovidiu.panait@windriver.com>
+
+From: John Fastabend <john.fastabend@gmail.com>
+
+commit cf66c29bd7534813d2e1971fab71e25fe87c7e0a upstream
+
+Add a verifier test for assigning 32bit reg states to
+64bit ones, where the 32bit reg holds a constant value of 0.
+
+Without the previous kernel verifier.c fix, the test in
+this patch will fail.
+
+Signed-off-by: Yonghong Song <yhs@fb.com>
+Signed-off-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Link: https://lore.kernel.org/bpf/159077335867.6014.2075350327073125374.stgit@john-Precision-5820-Tower
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/verifier/bounds.c | 22 ++++++++++++++++++++++
+ 1 file changed, 22 insertions(+)
+
+--- a/tools/testing/selftests/bpf/verifier/bounds.c
++++ b/tools/testing/selftests/bpf/verifier/bounds.c
+@@ -545,3 +545,25 @@
+ },
+ .result = ACCEPT
+ },
++{
++ "assigning 32bit bounds to 64bit for wA = 0, wB = wA",
++ .insns = {
++ BPF_LDX_MEM(BPF_W, BPF_REG_8, BPF_REG_1,
++ offsetof(struct __sk_buff, data_end)),
++ BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
++ offsetof(struct __sk_buff, data)),
++ BPF_MOV32_IMM(BPF_REG_9, 0),
++ BPF_MOV32_REG(BPF_REG_2, BPF_REG_9),
++ BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
++ BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_2),
++ BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
++ BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_8, 1),
++ BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_6, 0),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++ .result = ACCEPT,
++ .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
++},
--- /dev/null
+From foo@baz Fri Aug 6 10:05:36 AM CEST 2021
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Thu, 5 Aug 2021 18:53:43 +0300
+Subject: bpf, selftests: Adjust few selftest outcomes wrt unreachable code
+To: stable@vger.kernel.org
+Cc: bpf@vger.kernel.org, daniel@iogearbox.net, ast@kernel.org, john.fastabend@gmail.com, benedict.schlueter@rub.de, piotras@gmail.com
+Message-ID: <20210805155343.3618696-7-ovidiu.panait@windriver.com>
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+commit 973377ffe8148180b2651825b92ae91988141b05 upstream
+
+In almost all of the test_verifier cases changed here, we had an unreachable
+path with a load from a register which holds an invalid address on purpose.
+This was basically to make sure that we never walk this path and to have the
+verifier complain if we would otherwise. Change them to match on the right
+error for unprivileged, given we now verify these paths under speculative
+execution.
+
+There is one case where we match on the exact number of insns_processed. Due
+to the extra path, this will of course mismatch on unprivileged. Thus,
+restrict the test->insn_processed check to privileged-only.
+
+In one other case, we now get a 'pointer comparison prohibited' error. This
+is similarly due to verifying an 'invalid' branch, where we end up with a
+value pointer on one side of the comparison.
+
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: John Fastabend <john.fastabend@gmail.com>
+Acked-by: Alexei Starovoitov <ast@kernel.org>
+[OP: ignore changes to tests that do not exist in 5.4]
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/test_verifier.c | 2 -
+ tools/testing/selftests/bpf/verifier/bounds.c | 4 +++
+ tools/testing/selftests/bpf/verifier/dead_code.c | 2 +
+ tools/testing/selftests/bpf/verifier/jmp32.c | 22 +++++++++++++++++
+ tools/testing/selftests/bpf/verifier/jset.c | 10 ++++---
+ tools/testing/selftests/bpf/verifier/unpriv.c | 2 +
+ tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 7 +++--
+ 7 files changed, 41 insertions(+), 8 deletions(-)
+
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -980,7 +980,7 @@ static void do_test_single(struct bpf_te
+ }
+ }
+
+- if (test->insn_processed) {
++ if (!unpriv && test->insn_processed) {
+ uint32_t insn_processed;
+ char *proc;
+
+--- a/tools/testing/selftests/bpf/verifier/bounds.c
++++ b/tools/testing/selftests/bpf/verifier/bounds.c
+@@ -523,6 +523,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT
+ },
+ {
+@@ -543,6 +545,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT
+ },
+ {
+--- a/tools/testing/selftests/bpf/verifier/dead_code.c
++++ b/tools/testing/selftests/bpf/verifier/dead_code.c
+@@ -8,6 +8,8 @@
+ BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, -4),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 7,
+ },
+--- a/tools/testing/selftests/bpf/verifier/jmp32.c
++++ b/tools/testing/selftests/bpf/verifier/jmp32.c
+@@ -72,6 +72,8 @@
+ BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+@@ -135,6 +137,8 @@
+ BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+@@ -198,6 +202,8 @@
+ BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+@@ -265,6 +271,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -333,6 +341,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -401,6 +411,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -469,6 +481,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -537,6 +551,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -605,6 +621,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -673,6 +691,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+@@ -741,6 +761,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 2,
+ },
+--- a/tools/testing/selftests/bpf/verifier/jset.c
++++ b/tools/testing/selftests/bpf/verifier/jset.c
+@@ -82,8 +82,8 @@
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+- .retval_unpriv = 1,
+- .result_unpriv = ACCEPT,
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .retval = 1,
+ .result = ACCEPT,
+ },
+@@ -141,7 +141,8 @@
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+- .result_unpriv = ACCEPT,
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+@@ -162,6 +163,7 @@
+ BPF_EXIT_INSN(),
+ },
+ .prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
+- .result_unpriv = ACCEPT,
++ .errstr_unpriv = "R9 !read_ok",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+--- a/tools/testing/selftests/bpf/verifier/unpriv.c
++++ b/tools/testing/selftests/bpf/verifier/unpriv.c
+@@ -418,6 +418,8 @@
+ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
+ BPF_EXIT_INSN(),
+ },
++ .errstr_unpriv = "R7 invalid mem access 'inv'",
++ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ .retval = 0,
+ },
+--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
++++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+@@ -120,7 +120,7 @@
+ .fixup_map_array_48b = { 1 },
+ .result = ACCEPT,
+ .result_unpriv = REJECT,
+- .errstr_unpriv = "R2 tried to add from different maps, paths or scalars",
++ .errstr_unpriv = "R2 pointer comparison prohibited",
+ .retval = 0,
+ },
+ {
+@@ -159,7 +159,8 @@
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ // fake-dead code; targeted from branch A to
+- // prevent dead code sanitization
++ // prevent dead code sanitization, rejected
++ // via branch B however
+ BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+@@ -167,7 +168,7 @@
+ .fixup_map_array_48b = { 1 },
+ .result = ACCEPT,
+ .result_unpriv = REJECT,
+- .errstr_unpriv = "R2 tried to add from different maps, paths or scalars",
++ .errstr_unpriv = "R0 invalid mem access 'inv'",
+ .retval = 0,
+ },
+ {
--- /dev/null
+From foo@baz Fri Aug 6 10:05:36 AM CEST 2021
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Thu, 5 Aug 2021 18:53:41 +0300
+Subject: bpf: Test_verifier, add alu32 bounds tracking tests
+To: stable@vger.kernel.org
+Cc: bpf@vger.kernel.org, daniel@iogearbox.net, ast@kernel.org, john.fastabend@gmail.com, benedict.schlueter@rub.de, piotras@gmail.com
+Message-ID: <20210805155343.3618696-5-ovidiu.panait@windriver.com>
+
+From: John Fastabend <john.fastabend@gmail.com>
+
+commit 41f70fe0649dddf02046315dc566e06da5a2dc91 upstream
+
+It's possible to have divergent ALU32 and ALU64 bounds when using JMP32
+instructions and ALU64 arithmetic operations. Sometimes clang will even
+generate this code. Because the case is a bit tricky, let's add a specific
+test for it.
+
+Here is a pseudocode asm version to illustrate the idea:
+
+ 1 r0 = 0xffffffff00000001;
+ 2 if w0 > 1 goto %l[fail];
+ 3 r0 += 1
+ 4 if w0 > 2 goto %l[fail]
+ 5 exit
+
+The intent here is that the verifier will fail the load if the 32bit bounds
+are not tracked correctly through the ALU64 op. Similarly, we can check that
+the 64bit bounds are correctly zero extended after ALU32 ops.
+
+ 1 r0 = 0xffffffff00000001;
+ 2 w0 += 1
+ 3 if r0 > 3 goto %l[fail];
+ 4 exit
+
+The above will fail if we do not correctly zero extend the 64bit bounds
+after a 32bit op.
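+
+A short worked trace of both snippets (illustration, assuming correct
+bounds tracking):
+
+  snippet 1: r0 = 0xffffffff00000001 => w0 == 1, so 'w0 > 1' is known
+             false; r0 += 1 => r0 == 0xffffffff00000002, w0 == 2, so
+             'w0 > 2' is known false and the fail path is provably dead.
+
+  snippet 2: w0 += 1 is a 32bit op, so the upper 32 bits must be zero
+             extended => r0 == 0x2, and 'r0 > 3' is known false, again
+             making the fail path provably dead.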
+
+Signed-off-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Link: https://lore.kernel.org/bpf/158560430155.10843.514209255758200922.stgit@john-Precision-5820-Tower
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/bpf/verifier/bounds.c | 39 ++++++++++++++++++++++++++
+ 1 file changed, 39 insertions(+)
+
+--- a/tools/testing/selftests/bpf/verifier/bounds.c
++++ b/tools/testing/selftests/bpf/verifier/bounds.c
+@@ -506,3 +506,42 @@
+ .errstr = "map_value pointer and 1000000000000",
+ .result = REJECT
+ },
++{
++ "bounds check mixed 32bit and 64bit arithmatic. test1",
++ .insns = {
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_MOV64_IMM(BPF_REG_1, -1),
++ BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
++ /* r1 = 0xffffFFFF00000001 */
++ BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 1, 3),
++ /* check ALU64 op keeps 32bit bounds */
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
++ BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 2, 1),
++ BPF_JMP_A(1),
++ /* invalid ldx if bounds are lost above */
++ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
++ BPF_EXIT_INSN(),
++ },
++ .result = ACCEPT
++},
++{
++ "bounds check mixed 32bit and 64bit arithmatic. test2",
++ .insns = {
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_MOV64_IMM(BPF_REG_1, -1),
++ BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
++ /* r1 = 0xffffFFFF00000001 */
++ BPF_MOV64_IMM(BPF_REG_2, 3),
++ /* r1 = 0x2 */
++ BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1),
++ /* check ALU32 op zero extends 64bit bounds */
++ BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 1),
++ BPF_JMP_A(1),
++ /* invalid ldx if bounds are lost above */
++ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
++ BPF_EXIT_INSN(),
++ },
++ .result = ACCEPT
++},
 firmware-arm_scmi-ensure-drivers-provide-a-probe-function.patch
 firmware-arm_scmi-add-delayed-response-status-check.patch
 revert-watchdog-itco_wdt-account-for-rebooting-on-second-timeout.patch
+bpf-inherit-expanded-patched-seen-count-from-old-aux-data.patch
+bpf-do-not-mark-insn-as-seen-under-speculative-path-verification.patch
+bpf-fix-leakage-under-speculation-on-mispredicted-branches.patch
+bpf-test_verifier-add-alu32-bounds-tracking-tests.patch
+bpf-selftests-add-a-verifier-test-for-assigning-32bit-reg-states-to-64bit-ones.patch
+bpf-selftests-adjust-few-selftest-outcomes-wrt-unreachable-code.patch