git.ipfire.org Git - thirdparty/kernel/stable-queue.git/commitdiff
5.4-stable patches
author Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 15 Nov 2021 15:05:38 +0000 (16:05 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 15 Nov 2021 15:05:38 +0000 (16:05 +0100)
added patches:
mm-oom-do-not-trigger-out_of_memory-from-the-pf.patch
mm-oom-pagefault_out_of_memory-don-t-force-global-oom-for-dying-tasks.patch
powerpc-bpf-emit-stf-barrier-instruction-sequences-for-bpf_nospec.patch
powerpc-bpf-fix-bpf_sub-when-imm-0x80000000.patch
powerpc-bpf-validate-branch-ranges.patch
powerpc-lib-add-helper-to-check-if-offset-is-within-conditional-branch-range.patch
powerpc-powernv-prd-unregister-opal_msg_prd2-notifier-during-module-unload.patch
powerpc-security-add-a-helper-to-query-stf_barrier-type.patch
s390-cio-check-the-subchannel-validity-for-dev_busid.patch
s390-cio-make-ccw_device_dma_-more-robust.patch
s390-tape-fix-timer-initialization-in-tape_std_assign.patch
video-backlight-drop-maximum-brightness-override-for-brightness-zero.patch

13 files changed:
queue-5.4/mm-oom-do-not-trigger-out_of_memory-from-the-pf.patch [new file with mode: 0644]
queue-5.4/mm-oom-pagefault_out_of_memory-don-t-force-global-oom-for-dying-tasks.patch [new file with mode: 0644]
queue-5.4/powerpc-bpf-emit-stf-barrier-instruction-sequences-for-bpf_nospec.patch [new file with mode: 0644]
queue-5.4/powerpc-bpf-fix-bpf_sub-when-imm-0x80000000.patch [new file with mode: 0644]
queue-5.4/powerpc-bpf-validate-branch-ranges.patch [new file with mode: 0644]
queue-5.4/powerpc-lib-add-helper-to-check-if-offset-is-within-conditional-branch-range.patch [new file with mode: 0644]
queue-5.4/powerpc-powernv-prd-unregister-opal_msg_prd2-notifier-during-module-unload.patch [new file with mode: 0644]
queue-5.4/powerpc-security-add-a-helper-to-query-stf_barrier-type.patch [new file with mode: 0644]
queue-5.4/s390-cio-check-the-subchannel-validity-for-dev_busid.patch [new file with mode: 0644]
queue-5.4/s390-cio-make-ccw_device_dma_-more-robust.patch [new file with mode: 0644]
queue-5.4/s390-tape-fix-timer-initialization-in-tape_std_assign.patch [new file with mode: 0644]
queue-5.4/series
queue-5.4/video-backlight-drop-maximum-brightness-override-for-brightness-zero.patch [new file with mode: 0644]

diff --git a/queue-5.4/mm-oom-do-not-trigger-out_of_memory-from-the-pf.patch b/queue-5.4/mm-oom-do-not-trigger-out_of_memory-from-the-pf.patch
new file mode 100644 (file)
index 0000000..dfab9de
--- /dev/null
@@ -0,0 +1,102 @@
+From 60e2793d440a3ec95abb5d6d4fc034a4b480472d Mon Sep 17 00:00:00 2001
+From: Michal Hocko <mhocko@suse.com>
+Date: Fri, 5 Nov 2021 13:38:06 -0700
+Subject: mm, oom: do not trigger out_of_memory from the #PF
+
+From: Michal Hocko <mhocko@suse.com>
+
+commit 60e2793d440a3ec95abb5d6d4fc034a4b480472d upstream.
+
+Any allocation failure during the #PF path will return with VM_FAULT_OOM
+which in turn results in pagefault_out_of_memory.  This can happen for 2
+different reasons.  a) Memcg is out of memory and we rely on
+mem_cgroup_oom_synchronize to perform the memcg OOM handling or b)
+normal allocation fails.
+
+The latter is quite problematic because allocation paths already trigger
+out_of_memory and the page allocator tries really hard to not fail
+allocations.  Anyway, if the OOM killer has already been invoked there
+is no reason to invoke it again from the #PF path.  Especially when the
+OOM condition might be gone by that time and we have no way to find out
+other than allocate.
+
+Moreover if the allocation failed and the OOM killer hasn't been invoked
+then we are unlikely to do the right thing from the #PF context because
+we have already lost the allocation context and restrictions and
+therefore might oom kill a task from a different NUMA domain.
+
+This all suggests that there is no legitimate reason to trigger
+out_of_memory from pagefault_out_of_memory so drop it.  Just to be sure
+that no #PF path returns with VM_FAULT_OOM without allocation print a
+warning that this is happening before we restart the #PF.
+
+[VvS: #PF allocation can hit into limit of cgroup v1 kmem controller.
+This is a local problem related to memcg, however, it causes unnecessary
+global OOM kills that are repeated over and over again and escalate into a
+real disaster.  This has been broken since kmem accounting has been
+introduced for cgroup v1 (3.8).  There was no kmem specific reclaim for
+the separate limit so the only way to handle kmem hard limit was to return
+with ENOMEM.  In upstream the problem will be fixed by removing the
+outdated kmem limit, however stable and LTS kernels cannot do it and are
+still affected.  This patch fixes the problem and should be backported
+into stable/LTS.]
+
+Link: https://lkml.kernel.org/r/f5fd8dd8-0ad4-c524-5f65-920b01972a42@virtuozzo.com
+Signed-off-by: Michal Hocko <mhocko@suse.com>
+Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: Roman Gushchin <guro@fb.com>
+Cc: Shakeel Butt <shakeelb@google.com>
+Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
+Cc: Uladzislau Rezki <urezki@gmail.com>
+Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/oom_kill.c |   22 ++++++++--------------
+ 1 file changed, 8 insertions(+), 14 deletions(-)
+
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -1114,19 +1114,15 @@ bool out_of_memory(struct oom_control *o
+ }
+ /*
+- * The pagefault handler calls here because it is out of memory, so kill a
+- * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
+- * killing is already in progress so do nothing.
++ * The pagefault handler calls here because some allocation has failed. We have
++ * to take care of the memcg OOM here because this is the only safe context without
++ * any locks held but let the oom killer triggered from the allocation context care
++ * about the global OOM.
+  */
+ void pagefault_out_of_memory(void)
+ {
+-      struct oom_control oc = {
+-              .zonelist = NULL,
+-              .nodemask = NULL,
+-              .memcg = NULL,
+-              .gfp_mask = 0,
+-              .order = 0,
+-      };
++      static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
++                                    DEFAULT_RATELIMIT_BURST);
+       if (mem_cgroup_oom_synchronize(true))
+               return;
+@@ -1134,8 +1130,6 @@ void pagefault_out_of_memory(void)
+       if (fatal_signal_pending(current))
+               return;
+-      if (!mutex_trylock(&oom_lock))
+-              return;
+-      out_of_memory(&oc);
+-      mutex_unlock(&oom_lock);
++      if (__ratelimit(&pfoom_rs))
++              pr_warn("Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF\n");
+ }
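For context, here is a minimal sketch of how the #PF path described above reaches pagefault_out_of_memory(). It is not part of the patch: the wrapper function is hypothetical and real architecture fault handlers have more cases (kernel-mode faults, retries, signal delivery) than shown.

/*
 * Illustrative only: any VM_FAULT_OOM returned by handle_mm_fault() for a
 * user-mode fault funnels into pagefault_out_of_memory(), which after this
 * patch no longer calls out_of_memory() itself.
 */
static void handle_user_fault_sketch(struct vm_area_struct *vma,
				     unsigned long address, unsigned int flags)
{
	vm_fault_t fault = handle_mm_fault(vma, address, flags);

	if (fault & VM_FAULT_OOM) {
		pagefault_out_of_memory();	/* memcg sync + rate-limited warning */
		return;				/* the faulting instruction is retried */
	}
	/* ... VM_FAULT_SIGBUS / VM_FAULT_SIGSEGV / success handling ... */
}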
diff --git a/queue-5.4/mm-oom-pagefault_out_of_memory-don-t-force-global-oom-for-dying-tasks.patch b/queue-5.4/mm-oom-pagefault_out_of_memory-don-t-force-global-oom-for-dying-tasks.patch
new file mode 100644 (file)
index 0000000..98e3435
--- /dev/null
@@ -0,0 +1,74 @@
+From 0b28179a6138a5edd9d82ad2687c05b3773c387b Mon Sep 17 00:00:00 2001
+From: Vasily Averin <vvs@virtuozzo.com>
+Date: Fri, 5 Nov 2021 13:38:02 -0700
+Subject: mm, oom: pagefault_out_of_memory: don't force global OOM for dying tasks
+
+From: Vasily Averin <vvs@virtuozzo.com>
+
+commit 0b28179a6138a5edd9d82ad2687c05b3773c387b upstream.
+
+Patch series "memcg: prohibit unconditional exceeding the limit of dying tasks", v3.
+
+Memory cgroup charging allows killed or exiting tasks to exceed the hard
+limit.  It can be misused and allowed to trigger global OOM from inside
+a memcg-limited container.  On the other hand if memcg fails allocation,
+called from inside #PF handler it triggers global OOM from inside
+pagefault_out_of_memory().
+
+To prevent these problems this patchset:
+ (a) removes execution of out_of_memory() from
+     pagefault_out_of_memory(), because nobody can explain why it is
+     necessary.
+ (b) allows memcg to fail allocations of dying/killed tasks.
+
+This patch (of 3):
+
+Any allocation failure during the #PF path will return with VM_FAULT_OOM
+which in turn results in pagefault_out_of_memory which in turn executes
+out_of_memory() and can kill a random task.
+
+An allocation might fail when the current task is the oom victim and
+there are no memory reserves left.  The OOM killer is already handled at
+the page allocator level for the global OOM and at the charging level
+for the memcg one.  Both have much more information about the scope of
+allocation/charge request.  This means that either the OOM killer has
+been invoked properly and didn't lead to the allocation success or it
+has been skipped because it couldn't have been invoked.  In both cases
+triggering it from here is pointless and even harmful.
+
+It makes much more sense to let the killed task die rather than to wake
+up an eternally hungry oom-killer and send him to choose a fatter victim
+for breakfast.
+
+Link: https://lkml.kernel.org/r/0828a149-786e-7c06-b70a-52d086818ea3@virtuozzo.com
+Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
+Suggested-by: Michal Hocko <mhocko@suse.com>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: Roman Gushchin <guro@fb.com>
+Cc: Shakeel Butt <shakeelb@google.com>
+Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
+Cc: Uladzislau Rezki <urezki@gmail.com>
+Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/oom_kill.c |    3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -1131,6 +1131,9 @@ void pagefault_out_of_memory(void)
+       if (mem_cgroup_oom_synchronize(true))
+               return;
++      if (fatal_signal_pending(current))
++              return;
++
+       if (!mutex_trylock(&oom_lock))
+               return;
+       out_of_memory(&oc);
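Taken together, the two mm/oom patches in this queue leave 5.4's pagefault_out_of_memory() looking roughly like this; the body is reconstructed from the hunks above rather than copied from a tree:

void pagefault_out_of_memory(void)
{
	static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	/* memcg OOM is still handled here: the only safe, lock-free context */
	if (mem_cgroup_oom_synchronize(true))
		return;

	/* a dying task should simply die, no further OOM activity is needed */
	if (fatal_signal_pending(current))
		return;

	/* global OOM is left to the allocation paths; only warn from the #PF */
	if (__ratelimit(&pfoom_rs))
		pr_warn("Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF\n");
}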
diff --git a/queue-5.4/powerpc-bpf-emit-stf-barrier-instruction-sequences-for-bpf_nospec.patch b/queue-5.4/powerpc-bpf-emit-stf-barrier-instruction-sequences-for-bpf_nospec.patch
new file mode 100644 (file)
index 0000000..a142aa8
--- /dev/null
@@ -0,0 +1,161 @@
+From foo@baz Mon Nov 15 03:31:49 PM CET 2021
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+Date: Mon, 15 Nov 2021 16:36:04 +0530
+Subject: powerpc/bpf: Emit stf barrier instruction sequences for BPF_NOSPEC
+To: <stable@vger.kernel.org>
+Cc: Michael Ellerman <mpe@ellerman.id.au>, Daniel Borkmann <daniel@iogearbox.net>
+Message-ID: <6fb4c544efe6efb2139252e49de7f893b53a61b6.1636963359.git.naveen.n.rao@linux.vnet.ibm.com>
+
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+
+upstream commit b7540d62509453263604a155bf2d5f0ed450cba2
+
+Emit similar instruction sequences to commit a048a07d7f4535
+("powerpc/64s: Add support for a store forwarding barrier at kernel
+entry/exit") when encountering BPF_NOSPEC.
+
+Mitigations are enabled depending on what the firmware advertises. In
+particular, we do not gate these mitigations based on current settings,
+just like in x86. Due to this, we don't need to take any action if
+mitigations are enabled or disabled at runtime.
+
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/956570cbc191cd41f8274bed48ee757a86dac62a.1633464148.git.naveen.n.rao@linux.vnet.ibm.com
+[adjust macros to account for commits 0654186510a40e, 3a181237916310 and ef909ba954145e.
+adjust security feature checks to account for commit 84ed26fd00c514]
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/net/bpf_jit64.h      |    8 ++---
+ arch/powerpc/net/bpf_jit_comp64.c |   56 +++++++++++++++++++++++++++++++++++---
+ 2 files changed, 56 insertions(+), 8 deletions(-)
+
+--- a/arch/powerpc/net/bpf_jit64.h
++++ b/arch/powerpc/net/bpf_jit64.h
+@@ -16,18 +16,18 @@
+  * with our redzone usage.
+  *
+  *            [       prev sp         ] <-------------
+- *            [   nv gpr save area    ] 6*8           |
++ *            [   nv gpr save area    ] 5*8           |
+  *            [    tail_call_cnt      ] 8             |
+- *            [    local_tmp_var      ] 8             |
++ *            [    local_tmp_var      ] 16            |
+  * fp (r31) -->       [   ebpf stack space    ] upto 512      |
+  *            [     frame header      ] 32/112        |
+  * sp (r1) --->       [    stack pointer      ] --------------
+  */
+ /* for gpr non volatile registers BPG_REG_6 to 10 */
+-#define BPF_PPC_STACK_SAVE    (6*8)
++#define BPF_PPC_STACK_SAVE    (5*8)
+ /* for bpf JIT code internal usage */
+-#define BPF_PPC_STACK_LOCALS  16
++#define BPF_PPC_STACK_LOCALS  24
+ /* stack frame excluding BPF stack, ensure this is quadword aligned */
+ #define BPF_PPC_STACKFRAME    (STACK_FRAME_MIN_SIZE + \
+                                BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE)
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -15,6 +15,7 @@
+ #include <linux/if_vlan.h>
+ #include <asm/kprobes.h>
+ #include <linux/bpf.h>
++#include <asm/security_features.h>
+ #include "bpf_jit64.h"
+@@ -56,9 +57,9 @@ static inline bool bpf_has_stack_frame(s
+  *            [       prev sp         ] <-------------
+  *            [         ...           ]               |
+  * sp (r1) --->       [    stack pointer      ] --------------
+- *            [   nv gpr save area    ] 6*8
++ *            [   nv gpr save area    ] 5*8
+  *            [    tail_call_cnt      ] 8
+- *            [    local_tmp_var      ] 8
++ *            [    local_tmp_var      ] 16
+  *            [   unused red zone     ] 208 bytes protected
+  */
+ static int bpf_jit_stack_local(struct codegen_context *ctx)
+@@ -66,12 +67,12 @@ static int bpf_jit_stack_local(struct co
+       if (bpf_has_stack_frame(ctx))
+               return STACK_FRAME_MIN_SIZE + ctx->stack_size;
+       else
+-              return -(BPF_PPC_STACK_SAVE + 16);
++              return -(BPF_PPC_STACK_SAVE + 24);
+ }
+ static int bpf_jit_stack_tailcallcnt(struct codegen_context *ctx)
+ {
+-      return bpf_jit_stack_local(ctx) + 8;
++      return bpf_jit_stack_local(ctx) + 16;
+ }
+ static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
+@@ -290,11 +291,34 @@ static int bpf_jit_emit_tail_call(u32 *i
+       return 0;
+ }
++/*
++ * We spill into the redzone always, even if the bpf program has its own stackframe.
++ * Offsets hardcoded based on BPF_PPC_STACK_SAVE -- see bpf_jit_stack_local()
++ */
++void bpf_stf_barrier(void);
++
++asm (
++"             .global bpf_stf_barrier         ;"
++"     bpf_stf_barrier:                        ;"
++"             std     21,-64(1)               ;"
++"             std     22,-56(1)               ;"
++"             sync                            ;"
++"             ld      21,-64(1)               ;"
++"             ld      22,-56(1)               ;"
++"             ori     31,31,0                 ;"
++"             .rept 14                        ;"
++"             b       1f                      ;"
++"     1:                                      ;"
++"             .endr                           ;"
++"             blr                             ;"
++);
++
+ /* Assemble the body code between the prologue & epilogue */
+ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
+                             struct codegen_context *ctx,
+                             u32 *addrs, bool extra_pass)
+ {
++      enum stf_barrier_type stf_barrier = stf_barrier_type_get();
+       const struct bpf_insn *insn = fp->insnsi;
+       int flen = fp->len;
+       int i, ret;
+@@ -663,6 +687,30 @@ emit_clear:
+                * BPF_ST NOSPEC (speculation barrier)
+                */
+               case BPF_ST | BPF_NOSPEC:
++                      if (!security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY) ||
++                                      (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR) &&
++                                      (!security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) || !cpu_has_feature(CPU_FTR_HVMODE))))
++                              break;
++
++                      switch (stf_barrier) {
++                      case STF_BARRIER_EIEIO:
++                              EMIT(0x7c0006ac | 0x02000000);
++                              break;
++                      case STF_BARRIER_SYNC_ORI:
++                              EMIT(PPC_INST_SYNC);
++                              PPC_LD(b2p[TMP_REG_1], 13, 0);
++                              PPC_ORI(31, 31, 0);
++                              break;
++                      case STF_BARRIER_FALLBACK:
++                              EMIT(PPC_INST_MFLR | ___PPC_RT(b2p[TMP_REG_1]));
++                              PPC_LI64(12, dereference_kernel_function_descriptor(bpf_stf_barrier));
++                              PPC_MTCTR(12);
++                              EMIT(PPC_INST_BCTR | 0x1);
++                              PPC_MTLR(b2p[TMP_REG_1]);
++                              break;
++                      case STF_BARRIER_NONE:
++                              break;
++                      }
+                       break;
+               /*
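The gating condition in the BPF_NOSPEC hunk above is dense; restructured as a helper it reads as follows. This is a readability sketch only: the helper name is made up, and the open-coded checks are the backport's stand-in for the SEC_FTR_STF_BARRIER test used upstream.

static bool bpf_stf_barrier_wanted_sketch(void)
{
	if (!security_ftr_enabled(SEC_FTR_FAVOUR_SECURITY))
		return false;			/* booted in performance mode: skip */

	if (security_ftr_enabled(SEC_FTR_L1D_FLUSH_PR))
		return true;			/* barrier needed for problem state */

	/* the HV variant only applies when we actually run in HV mode */
	return security_ftr_enabled(SEC_FTR_L1D_FLUSH_HV) &&
	       cpu_has_feature(CPU_FTR_HVMODE);
}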
diff --git a/queue-5.4/powerpc-bpf-fix-bpf_sub-when-imm-0x80000000.patch b/queue-5.4/powerpc-bpf-fix-bpf_sub-when-imm-0x80000000.patch
new file mode 100644 (file)
index 0000000..ca8bf20
--- /dev/null
@@ -0,0 +1,66 @@
+From foo@baz Mon Nov 15 03:31:49 PM CET 2021
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+Date: Mon, 15 Nov 2021 16:36:02 +0530
+Subject: powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
+To: <stable@vger.kernel.org>
+Cc: Michael Ellerman <mpe@ellerman.id.au>, Daniel Borkmann <daniel@iogearbox.net>
+Message-ID: <e229a9f62f5870d783eeb1a831ba60a2576a70a6.1636963359.git.naveen.n.rao@linux.vnet.ibm.com>
+
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+
+upstream commit 5855c4c1f415ca3ba1046e77c0b3d3dfc96c9025
+
+We aren't handling subtraction involving an immediate value of
+0x80000000 properly. Fix the same.
+
+Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
+[mpe: Fold in fix from Naveen to use imm <= 32768]
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/fc4b1276eb10761fd7ce0814c8dd089da2815251.1633464148.git.naveen.n.rao@linux.vnet.ibm.com
+[adjust macros to account for commits 0654186510a40e and 3a181237916310]
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/net/bpf_jit_comp64.c |   27 +++++++++++++++++----------
+ 1 file changed, 17 insertions(+), 10 deletions(-)
+
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -349,18 +349,25 @@ static int bpf_jit_build_body(struct bpf
+                       PPC_SUB(dst_reg, dst_reg, src_reg);
+                       goto bpf_alu32_trunc;
+               case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
+-              case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+               case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
++                      if (!imm) {
++                              goto bpf_alu32_trunc;
++                      } else if (imm >= -32768 && imm < 32768) {
++                              PPC_ADDI(dst_reg, dst_reg, IMM_L(imm));
++                      } else {
++                              PPC_LI32(b2p[TMP_REG_1], imm);
++                              PPC_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]);
++                      }
++                      goto bpf_alu32_trunc;
++              case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+               case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
+-                      if (BPF_OP(code) == BPF_SUB)
+-                              imm = -imm;
+-                      if (imm) {
+-                              if (imm >= -32768 && imm < 32768)
+-                                      PPC_ADDI(dst_reg, dst_reg, IMM_L(imm));
+-                              else {
+-                                      PPC_LI32(b2p[TMP_REG_1], imm);
+-                                      PPC_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]);
+-                              }
++                      if (!imm) {
++                              goto bpf_alu32_trunc;
++                      } else if (imm > -32768 && imm <= 32768) {
++                              PPC_ADDI(dst_reg, dst_reg, IMM_L(-imm));
++                      } else {
++                              PPC_LI32(b2p[TMP_REG_1], imm);
++                              PPC_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]);
+                       }
+                       goto bpf_alu32_trunc;
+               case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
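A small userspace program makes the corner case concrete; it mirrors the arithmetic only, not the JIT itself:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int32_t imm = INT32_MIN;	/* 0x80000000, the problematic immediate */
	int64_t dst = 0;

	/* BPF_ALU64 | BPF_SUB | BPF_K semantics: dst -= sign_extend(imm) */
	int64_t want = dst - (int64_t)imm;		/* +2^31 */

	/* The old JIT negated the 32-bit immediate first; negating
	 * INT32_MIN wraps back to INT32_MIN, so it added -2^31 instead. */
	int32_t negated = (int32_t)(0u - (uint32_t)imm);
	int64_t got = dst + (int64_t)negated;		/* -2^31, off by 2^32 */

	printf("want=%lld got=%lld\n", (long long)want, (long long)got);
	return 0;
}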
diff --git a/queue-5.4/powerpc-bpf-validate-branch-ranges.patch b/queue-5.4/powerpc-bpf-validate-branch-ranges.patch
new file mode 100644 (file)
index 0000000..8055d73
--- /dev/null
@@ -0,0 +1,106 @@
+From foo@baz Mon Nov 15 03:31:49 PM CET 2021
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+Date: Mon, 15 Nov 2021 16:36:01 +0530
+Subject: powerpc/bpf: Validate branch ranges
+To: <stable@vger.kernel.org>
+Cc: Michael Ellerman <mpe@ellerman.id.au>, Daniel Borkmann <daniel@iogearbox.net>
+Message-ID: <32e658e662d1310c33f7e2aa75b16d00f8e825e9.1636963359.git.naveen.n.rao@linux.vnet.ibm.com>
+
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+
+upstream commit 3832ba4e283d7052b783dab8311df7e3590fed93
+
+Add checks to ensure that we never emit branch instructions with
+truncated branch offsets.
+
+Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
+Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
+Acked-by: Song Liu <songliubraving@fb.com>
+Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/71d33a6b7603ec1013c9734dd8bdd4ff5e929142.1633464148.git.naveen.n.rao@linux.vnet.ibm.com
+[include header, drop ppc32 changes]
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/net/bpf_jit.h        |   26 ++++++++++++++++++++------
+ arch/powerpc/net/bpf_jit_comp64.c |    8 ++++++--
+ 2 files changed, 26 insertions(+), 8 deletions(-)
+
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -11,6 +11,7 @@
+ #ifndef __ASSEMBLY__
+ #include <asm/types.h>
++#include <asm/code-patching.h>
+ #ifdef PPC64_ELF_ABI_v1
+ #define FUNCTION_DESCR_SIZE   24
+@@ -180,13 +181,26 @@
+ #define PPC_NEG(d, a)         EMIT(PPC_INST_NEG | ___PPC_RT(d) | ___PPC_RA(a))
+ /* Long jump; (unconditional 'branch') */
+-#define PPC_JMP(dest)         EMIT(PPC_INST_BRANCH |                        \
+-                                   (((dest) - (ctx->idx * 4)) & 0x03fffffc))
++#define PPC_JMP(dest)                                                       \
++      do {                                                                  \
++              long offset = (long)(dest) - (ctx->idx * 4);                  \
++              if (!is_offset_in_branch_range(offset)) {                     \
++                      pr_err_ratelimited("Branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);                       \
++                      return -ERANGE;                                       \
++              }                                                             \
++              EMIT(PPC_INST_BRANCH | (offset & 0x03fffffc));                \
++      } while (0)
+ /* "cond" here covers BO:BI fields. */
+-#define PPC_BCC_SHORT(cond, dest)     EMIT(PPC_INST_BRANCH_COND |           \
+-                                           (((cond) & 0x3ff) << 16) |       \
+-                                           (((dest) - (ctx->idx * 4)) &     \
+-                                            0xfffc))
++#define PPC_BCC_SHORT(cond, dest)                                           \
++      do {                                                                  \
++              long offset = (long)(dest) - (ctx->idx * 4);                  \
++              if (!is_offset_in_cond_branch_range(offset)) {                \
++                      pr_err_ratelimited("Conditional branch offset 0x%lx (@%u) out of range\n", offset, ctx->idx);           \
++                      return -ERANGE;                                       \
++              }                                                             \
++              EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc));                                      \
++      } while (0)
++
+ /* Sign-extended 32-bit immediate load */
+ #define PPC_LI32(d, i)                do {                                          \
+               if ((int)(uintptr_t)(i) >= -32768 &&                          \
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -224,7 +224,7 @@ static void bpf_jit_emit_func_call_rel(u
+       PPC_BLRL();
+ }
+-static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
++static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
+ {
+       /*
+        * By now, the eBPF program has already setup parameters in r3, r4 and r5
+@@ -285,7 +285,9 @@ static void bpf_jit_emit_tail_call(u32 *
+       bpf_jit_emit_common_epilogue(image, ctx);
+       PPC_BCTR();
++
+       /* out: */
++      return 0;
+ }
+ /* Assemble the body code between the prologue & epilogue */
+@@ -1001,7 +1003,9 @@ cond_branch:
+                */
+               case BPF_JMP | BPF_TAIL_CALL:
+                       ctx->seen |= SEEN_TAILCALL;
+-                      bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++                      ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
++                      if (ret < 0)
++                              return ret;
+                       break;
+               default:
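To see why the old macros were dangerous, here is a standalone illustration of the silent truncation they allowed (plain C, not JIT code):

#include <stdio.h>

int main(void)
{
	long offset = 0x12340;				/* target more than 32 KB away */
	unsigned int bd_field = offset & 0xfffc;	/* 16-bit conditional-branch field */

	printf("requested offset 0x%lx, encoded 0x%x\n", offset, bd_field);
	/* Prints 0x12340 vs 0x2340: the branch silently lands in the wrong place.
	 * With this patch the JIT refuses to emit it and returns -ERANGE instead. */
	return 0;
}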
diff --git a/queue-5.4/powerpc-lib-add-helper-to-check-if-offset-is-within-conditional-branch-range.patch b/queue-5.4/powerpc-lib-add-helper-to-check-if-offset-is-within-conditional-branch-range.patch
new file mode 100644 (file)
index 0000000..97ed570
--- /dev/null
@@ -0,0 +1,85 @@
+From foo@baz Mon Nov 15 03:31:49 PM CET 2021
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+Date: Mon, 15 Nov 2021 16:36:00 +0530
+Subject: powerpc/lib: Add helper to check if offset is within conditional branch range
+To: <stable@vger.kernel.org>
+Cc: Michael Ellerman <mpe@ellerman.id.au>, Daniel Borkmann <daniel@iogearbox.net>
+Message-ID: <2ea819633858eb1fb4e1563aeffa5598bd028b3d.1636963359.git.naveen.n.rao@linux.vnet.ibm.com>
+
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+
+upstream commit 4549c3ea3160fa8b3f37dfe2f957657bb265eda9
+
+Add a helper to check if a given offset is within the branch range for a
+powerpc conditional branch instruction, and update some sites to use the
+new helper.
+
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
+Acked-by: Song Liu <songliubraving@fb.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/442b69a34ced32ca346a0d9a855f3f6cfdbbbd41.1633464148.git.naveen.n.rao@linux.vnet.ibm.com
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/include/asm/code-patching.h |    1 +
+ arch/powerpc/lib/code-patching.c         |    7 ++++++-
+ arch/powerpc/net/bpf_jit.h               |    7 +------
+ 3 files changed, 8 insertions(+), 7 deletions(-)
+
+--- a/arch/powerpc/include/asm/code-patching.h
++++ b/arch/powerpc/include/asm/code-patching.h
+@@ -22,6 +22,7 @@
+ #define BRANCH_ABSOLUTE       0x2
+ bool is_offset_in_branch_range(long offset);
++bool is_offset_in_cond_branch_range(long offset);
+ unsigned int create_branch(const unsigned int *addr,
+                          unsigned long target, int flags);
+ unsigned int create_cond_branch(const unsigned int *addr,
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -221,6 +221,11 @@ bool is_offset_in_branch_range(long offs
+       return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3));
+ }
++bool is_offset_in_cond_branch_range(long offset)
++{
++      return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
++}
++
+ /*
+  * Helper to check if a given instruction is a conditional branch
+  * Derived from the conditional checks in analyse_instr()
+@@ -274,7 +279,7 @@ unsigned int create_cond_branch(const un
+               offset = offset - (unsigned long)addr;
+       /* Check we can represent the target in the instruction format */
+-      if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
++      if (!is_offset_in_cond_branch_range(offset))
+               return 0;
+       /* Mask out the flags and target, so they don't step on each other. */
+--- a/arch/powerpc/net/bpf_jit.h
++++ b/arch/powerpc/net/bpf_jit.h
+@@ -225,11 +225,6 @@
+ #define PPC_FUNC_ADDR(d,i) do { PPC_LI32(d, i); } while(0)
+ #endif
+-static inline bool is_nearbranch(int offset)
+-{
+-      return (offset < 32768) && (offset >= -32768);
+-}
+-
+ /*
+  * The fly in the ointment of code size changing from pass to pass is
+  * avoided by padding the short branch case with a NOP.        If code size differs
+@@ -238,7 +233,7 @@ static inline bool is_nearbranch(int off
+  * state.
+  */
+ #define PPC_BCC(cond, dest)   do {                                          \
+-              if (is_nearbranch((dest) - (ctx->idx * 4))) {                 \
++              if (is_offset_in_cond_branch_range((long)(dest) - (ctx->idx * 4))) {    \
+                       PPC_BCC_SHORT(cond, dest);                            \
+                       PPC_NOP();                                            \
+               } else {                                                      \
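The helper's bounds can be sanity-checked in isolation; the function body below is copied from the hunk above so the example runs in userspace:

#include <assert.h>
#include <stdbool.h>

static bool is_offset_in_cond_branch_range(long offset)
{
	return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3);
}

int main(void)
{
	assert(is_offset_in_cond_branch_range(-0x8000));	/* farthest backward target */
	assert(is_offset_in_cond_branch_range(0x7ffc));		/* farthest forward target */
	assert(!is_offset_in_cond_branch_range(0x8000));	/* one word too far */
	assert(!is_offset_in_cond_branch_range(0x7ffe));	/* not word-aligned */
	return 0;
}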
diff --git a/queue-5.4/powerpc-powernv-prd-unregister-opal_msg_prd2-notifier-during-module-unload.patch b/queue-5.4/powerpc-powernv-prd-unregister-opal_msg_prd2-notifier-during-module-unload.patch
new file mode 100644 (file)
index 0000000..4925542
--- /dev/null
@@ -0,0 +1,106 @@
+From 52862ab33c5d97490f3fa345d6529829e6d6637b Mon Sep 17 00:00:00 2001
+From: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
+Date: Thu, 28 Oct 2021 22:27:16 +0530
+Subject: powerpc/powernv/prd: Unregister OPAL_MSG_PRD2 notifier during module unload
+
+From: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
+
+commit 52862ab33c5d97490f3fa345d6529829e6d6637b upstream.
+
+Commit 587164cd introduced a new opal message type (OPAL_MSG_PRD2) and
+added an opal notifier, but I missed unregistering the notifier in the
+module unload path. This results in the call trace below if you try to
+unload and reload the opal_prd module.
+
+Also add new notifier_block for OPAL_MSG_PRD2 message.
+
+Sample calltrace (modprobe -r opal_prd; modprobe opal_prd)
+  BUG: Unable to handle kernel data access on read at 0xc0080000192200e0
+  Faulting instruction address: 0xc00000000018d1cc
+  Oops: Kernel access of bad area, sig: 11 [#1]
+  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
+  CPU: 66 PID: 7446 Comm: modprobe Kdump: loaded Tainted: G            E     5.14.0prd #759
+  NIP:  c00000000018d1cc LR: c00000000018d2a8 CTR: c0000000000cde10
+  REGS: c0000003c4c0f0a0 TRAP: 0300   Tainted: G            E      (5.14.0prd)
+  MSR:  9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE>  CR: 24224824  XER: 20040000
+  CFAR: c00000000018d2a4 DAR: c0080000192200e0 DSISR: 40000000 IRQMASK: 1
+  ...
+  NIP notifier_chain_register+0x2c/0xc0
+  LR  atomic_notifier_chain_register+0x48/0x80
+  Call Trace:
+    0xc000000002090610 (unreliable)
+    atomic_notifier_chain_register+0x58/0x80
+    opal_message_notifier_register+0x7c/0x1e0
+    opal_prd_probe+0x84/0x150 [opal_prd]
+    platform_probe+0x78/0x130
+    really_probe+0x110/0x5d0
+    __driver_probe_device+0x17c/0x230
+    driver_probe_device+0x60/0x130
+    __driver_attach+0xfc/0x220
+    bus_for_each_dev+0xa8/0x130
+    driver_attach+0x34/0x50
+    bus_add_driver+0x1b0/0x300
+    driver_register+0x98/0x1a0
+    __platform_driver_register+0x38/0x50
+    opal_prd_driver_init+0x34/0x50 [opal_prd]
+    do_one_initcall+0x60/0x2d0
+    do_init_module+0x7c/0x320
+    load_module+0x3394/0x3650
+    __do_sys_finit_module+0xd4/0x160
+    system_call_exception+0x140/0x290
+    system_call_common+0xf4/0x258
+
+Fixes: 587164cd593c ("powerpc/powernv: Add new opal message type")
+Cc: stable@vger.kernel.org # v5.4+
+Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/20211028165716.41300-1-hegdevasant@linux.vnet.ibm.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/platforms/powernv/opal-prd.c |   12 +++++++++++-
+ 1 file changed, 11 insertions(+), 1 deletion(-)
+
+--- a/arch/powerpc/platforms/powernv/opal-prd.c
++++ b/arch/powerpc/platforms/powernv/opal-prd.c
+@@ -372,6 +372,12 @@ static struct notifier_block opal_prd_ev
+       .priority       = 0,
+ };
++static struct notifier_block opal_prd_event_nb2 = {
++      .notifier_call  = opal_prd_msg_notifier,
++      .next           = NULL,
++      .priority       = 0,
++};
++
+ static int opal_prd_probe(struct platform_device *pdev)
+ {
+       int rc;
+@@ -393,9 +399,10 @@ static int opal_prd_probe(struct platfor
+               return rc;
+       }
+-      rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb);
++      rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb2);
+       if (rc) {
+               pr_err("Couldn't register PRD2 event notifier\n");
++              opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
+               return rc;
+       }
+@@ -404,6 +411,8 @@ static int opal_prd_probe(struct platfor
+               pr_err("failed to register miscdev\n");
+               opal_message_notifier_unregister(OPAL_MSG_PRD,
+                               &opal_prd_event_nb);
++              opal_message_notifier_unregister(OPAL_MSG_PRD2,
++                              &opal_prd_event_nb2);
+               return rc;
+       }
+@@ -414,6 +423,7 @@ static int opal_prd_remove(struct platfo
+ {
+       misc_deregister(&opal_prd_dev);
+       opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
++      opal_message_notifier_unregister(OPAL_MSG_PRD2, &opal_prd_event_nb2);
+       return 0;
+ }
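The shape of the fix is the usual register/unregister pairing; a condensed sketch of the probe-side error handling from the hunks above (error printing omitted):

	rc = opal_message_notifier_register(OPAL_MSG_PRD, &opal_prd_event_nb);
	if (rc)
		return rc;

	rc = opal_message_notifier_register(OPAL_MSG_PRD2, &opal_prd_event_nb2);
	if (rc) {
		/* undo the first registration before bailing out */
		opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_event_nb);
		return rc;
	}
	/* ... and opal_prd_remove() now unregisters both notifiers. */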
diff --git a/queue-5.4/powerpc-security-add-a-helper-to-query-stf_barrier-type.patch b/queue-5.4/powerpc-security-add-a-helper-to-query-stf_barrier-type.patch
new file mode 100644 (file)
index 0000000..654be8c
--- /dev/null
@@ -0,0 +1,52 @@
+From foo@baz Mon Nov 15 03:31:49 PM CET 2021
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+Date: Mon, 15 Nov 2021 16:36:03 +0530
+Subject: powerpc/security: Add a helper to query stf_barrier type
+To: <stable@vger.kernel.org>
+Cc: Michael Ellerman <mpe@ellerman.id.au>, Daniel Borkmann <daniel@iogearbox.net>
+Message-ID: <06399607dfbc4a20e30700cd6b07f4a8c8491672.1636963359.git.naveen.n.rao@linux.vnet.ibm.com>
+
+From: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com>
+
+upstream commit 030905920f32e91a52794937f67434ac0b3ea41a
+
+Add a helper to return the stf_barrier type for the current processor.
+
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://lore.kernel.org/r/3bd5d7f96ea1547991ac2ce3137dc2b220bae285.1633464148.git.naveen.n.rao@linux.vnet.ibm.com
+Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/include/asm/security_features.h |    5 +++++
+ arch/powerpc/kernel/security.c               |    5 +++++
+ 2 files changed, 10 insertions(+)
+
+--- a/arch/powerpc/include/asm/security_features.h
++++ b/arch/powerpc/include/asm/security_features.h
+@@ -39,6 +39,11 @@ static inline bool security_ftr_enabled(
+       return !!(powerpc_security_features & feature);
+ }
++#ifdef CONFIG_PPC_BOOK3S_64
++enum stf_barrier_type stf_barrier_type_get(void);
++#else
++static inline enum stf_barrier_type stf_barrier_type_get(void) { return STF_BARRIER_NONE; }
++#endif
+ // Features indicating support for Spectre/Meltdown mitigations
+--- a/arch/powerpc/kernel/security.c
++++ b/arch/powerpc/kernel/security.c
+@@ -256,6 +256,11 @@ static int __init handle_no_stf_barrier(
+ early_param("no_stf_barrier", handle_no_stf_barrier);
++enum stf_barrier_type stf_barrier_type_get(void)
++{
++      return stf_enabled_flush_types;
++}
++
+ /* This is the generic flag used by other architectures */
+ static int __init handle_ssbd(char *p)
+ {
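A usage sketch, mirroring how the BPF JIT patch later in this queue consumes the helper (names taken from those hunks):

	enum stf_barrier_type stf_barrier = stf_barrier_type_get();

	switch (stf_barrier) {
	case STF_BARRIER_EIEIO:		/* single lightweight barrier instruction */
	case STF_BARRIER_SYNC_ORI:	/* sync; ld from r13; ori 31,31,0 */
	case STF_BARRIER_FALLBACK:	/* branch to the bpf_stf_barrier() stub */
		/* emit the corresponding sequence */
		break;
	case STF_BARRIER_NONE:
		break;
	}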
diff --git a/queue-5.4/s390-cio-check-the-subchannel-validity-for-dev_busid.patch b/queue-5.4/s390-cio-check-the-subchannel-validity-for-dev_busid.patch
new file mode 100644 (file)
index 0000000..e846ae6
--- /dev/null
@@ -0,0 +1,37 @@
+From a4751f157c194431fae9e9c493f456df8272b871 Mon Sep 17 00:00:00 2001
+From: Vineeth Vijayan <vneethv@linux.ibm.com>
+Date: Fri, 5 Nov 2021 16:44:51 +0100
+Subject: s390/cio: check the subchannel validity for dev_busid
+
+From: Vineeth Vijayan <vneethv@linux.ibm.com>
+
+commit a4751f157c194431fae9e9c493f456df8272b871 upstream.
+
+Check the validity of the subchannel before reading other fields in
+the schib.
+
+Fixes: d3683c055212 ("s390/cio: add dev_busid sysfs entry for each subchannel")
+CC: <stable@vger.kernel.org>
+Reported-by: Cornelia Huck <cohuck@redhat.com>
+Signed-off-by: Vineeth Vijayan <vneethv@linux.ibm.com>
+Reviewed-by: Cornelia Huck <cohuck@redhat.com>
+Link: https://lore.kernel.org/r/20211105154451.847288-1-vneethv@linux.ibm.com
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/s390/cio/css.c |    4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/s390/cio/css.c
++++ b/drivers/s390/cio/css.c
+@@ -433,8 +433,8 @@ static ssize_t dev_busid_show(struct dev
+       struct subchannel *sch = to_subchannel(dev);
+       struct pmcw *pmcw = &sch->schib.pmcw;
+-      if ((pmcw->st == SUBCHANNEL_TYPE_IO ||
+-           pmcw->st == SUBCHANNEL_TYPE_MSG) && pmcw->dnv)
++      if ((pmcw->st == SUBCHANNEL_TYPE_IO && pmcw->dnv) ||
++          (pmcw->st == SUBCHANNEL_TYPE_MSG && pmcw->w))
+               return sysfs_emit(buf, "0.%x.%04x\n", sch->schid.ssid,
+                                 pmcw->dev);
+       else
diff --git a/queue-5.4/s390-cio-make-ccw_device_dma_-more-robust.patch b/queue-5.4/s390-cio-make-ccw_device_dma_-more-robust.patch
new file mode 100644 (file)
index 0000000..5521fbb
--- /dev/null
@@ -0,0 +1,81 @@
+From ad9a14517263a16af040598c7920c09ca9670a31 Mon Sep 17 00:00:00 2001
+From: Halil Pasic <pasic@linux.ibm.com>
+Date: Wed, 8 Sep 2021 17:36:23 +0200
+Subject: s390/cio: make ccw_device_dma_* more robust
+
+From: Halil Pasic <pasic@linux.ibm.com>
+
+commit ad9a14517263a16af040598c7920c09ca9670a31 upstream.
+
+Since commit 48720ba56891 ("virtio/s390: use DMA memory for ccw I/O and
+classic notifiers") we were supposed to make sure that
+virtio_ccw_release_dev() completes before the ccw device and the
+attached dma pool are torn down, but unfortunately we did not.  Before
+that commit it used to be OK to delay cleaning up the memory allocated
+by virtio-ccw indefinitely (which isn't really intuitive for guys used
+to destruction happening in reverse construction order), but now we
+trigger a BUG_ON if the genpool is destroyed before all memory allocated
+from it is deallocated. Which brings down the guest. We can observe this
+problem, when unregister_virtio_device() does not give up the last
+reference to the virtio_device (e.g. because a virtio-scsi attached scsi
+disk got removed without first unmounting its previously mounted
+partition).
+
+To make sure that the genpool is only destroyed after all the necessary
+freeing is done let us take a reference on the ccw device on each
+ccw_device_dma_zalloc() and give it up on each ccw_device_dma_free().
+
+Actually there are multiple approaches to fixing the problem at hand
+that can work. The upside of this one is that it is the safest one while
+remaining simple. We don't crash the guest even if the driver does not
+pair allocations and frees. The downside is the reference counting
+overhead, that the reference counting for ccw devices becomes more
+complex, in a sense that we need to pair the calls to the aforementioned
+functions for it to be correct, and that if we happen to leak, we leak
+more than necessary (the whole ccw device instead of just the genpool).
+
+Some alternatives to this approach are taking a reference in
+virtio_ccw_online() and giving it up in virtio_ccw_release_dev() or
+making sure virtio_ccw_release_dev() completes its work before
+virtio_ccw_remove() returns. The downside of these approaches is that
+these are less safe against programming errors.
+
+Cc: <stable@vger.kernel.org> # v5.3
+Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
+Fixes: 48720ba56891 ("virtio/s390: use DMA memory for ccw I/O and classic notifiers")
+Reported-by: bfu@redhat.com
+Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com>
+Acked-by: Cornelia Huck <cohuck@redhat.com>
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/s390/cio/device_ops.c |   12 +++++++++++-
+ 1 file changed, 11 insertions(+), 1 deletion(-)
+
+--- a/drivers/s390/cio/device_ops.c
++++ b/drivers/s390/cio/device_ops.c
+@@ -717,13 +717,23 @@ EXPORT_SYMBOL_GPL(ccw_device_get_schid);
+  */
+ void *ccw_device_dma_zalloc(struct ccw_device *cdev, size_t size)
+ {
+-      return cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
++      void *addr;
++
++      if (!get_device(&cdev->dev))
++              return NULL;
++      addr = cio_gp_dma_zalloc(cdev->private->dma_pool, &cdev->dev, size);
++      if (IS_ERR_OR_NULL(addr))
++              put_device(&cdev->dev);
++      return addr;
+ }
+ EXPORT_SYMBOL(ccw_device_dma_zalloc);
+ void ccw_device_dma_free(struct ccw_device *cdev, void *cpu_addr, size_t size)
+ {
++      if (!cpu_addr)
++              return;
+       cio_gp_dma_free(cdev->private->dma_pool, cpu_addr, size);
++      put_device(&cdev->dev);
+ }
+ EXPORT_SYMBOL(ccw_device_dma_free);
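Caller-side expectations after this change, as a sketch (the buffer type and usage here are made up for illustration): each successful allocation pins the ccw device, and the matching free, which is now also safe to call with NULL, unpins it.

	struct ccw_dev_id *buf;

	buf = ccw_device_dma_zalloc(cdev, sizeof(*buf));
	if (!buf)
		return -ENOMEM;		/* no device reference taken on failure */

	/* ... hand the CIO-DMA-capable buffer to channel I/O ... */

	ccw_device_dma_free(cdev, buf, sizeof(*buf));	/* drops the reference */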
diff --git a/queue-5.4/s390-tape-fix-timer-initialization-in-tape_std_assign.patch b/queue-5.4/s390-tape-fix-timer-initialization-in-tape_std_assign.patch
new file mode 100644 (file)
index 0000000..5e1a444
--- /dev/null
@@ -0,0 +1,44 @@
+From 213fca9e23b59581c573d558aa477556f00b8198 Mon Sep 17 00:00:00 2001
+From: Sven Schnelle <svens@linux.ibm.com>
+Date: Tue, 2 Nov 2021 10:55:30 +0100
+Subject: s390/tape: fix timer initialization in tape_std_assign()
+
+From: Sven Schnelle <svens@linux.ibm.com>
+
+commit 213fca9e23b59581c573d558aa477556f00b8198 upstream.
+
+commit 9c6c273aa424 ("timer: Remove init_timer_on_stack() in favor
+of timer_setup_on_stack()") changed the timer setup from
+init_timer_on_stack() to timer_setup(), but missed changing the
+mod_timer() call. And while at it, use msecs_to_jiffies() instead
+of the open coded timeout calculation.
+
+Cc: stable@vger.kernel.org
+Fixes: 9c6c273aa424 ("timer: Remove init_timer_on_stack() in favor of timer_setup_on_stack()")
+Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
+Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/s390/char/tape_std.c |    3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/drivers/s390/char/tape_std.c
++++ b/drivers/s390/char/tape_std.c
+@@ -53,7 +53,6 @@ int
+ tape_std_assign(struct tape_device *device)
+ {
+       int                  rc;
+-      struct timer_list    timeout;
+       struct tape_request *request;
+       request = tape_alloc_request(2, 11);
+@@ -70,7 +69,7 @@ tape_std_assign(struct tape_device *devi
+        * So we set up a timeout for this call.
+        */
+       timer_setup(&request->timer, tape_std_assign_timeout, 0);
+-      mod_timer(&timeout, jiffies + 2 * HZ);
++      mod_timer(&request->timer, jiffies + msecs_to_jiffies(2000));
+       rc = tape_do_io_interruptible(device, request);
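The corrected pairing, isolated from the hunk above: the timer that gets armed must be the one that was set up, and the open-coded 2 * HZ becomes msecs_to_jiffies(2000).

	timer_setup(&request->timer, tape_std_assign_timeout, 0);
	mod_timer(&request->timer, jiffies + msecs_to_jiffies(2000));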
diff --git a/queue-5.4/series b/queue-5.4/series
index 344f5548529abb11c03b099950f51868afe2729e..08e2f4d0c59e3388c93b6391e2dde7b9183befb5 100644 (file)
--- a/queue-5.4/series
@@ -339,3 +339,15 @@ f2fs-should-use-gfp_nofs-for-directory-inodes.patch
 net-neigh-enable-state-migration-between-nud_permane.patch
 9p-net-fix-missing-error-check-in-p9_check_errors.patch
 ovl-fix-deadlock-in-splice-write.patch
+powerpc-lib-add-helper-to-check-if-offset-is-within-conditional-branch-range.patch
+powerpc-bpf-validate-branch-ranges.patch
+powerpc-bpf-fix-bpf_sub-when-imm-0x80000000.patch
+powerpc-security-add-a-helper-to-query-stf_barrier-type.patch
+powerpc-bpf-emit-stf-barrier-instruction-sequences-for-bpf_nospec.patch
+mm-oom-pagefault_out_of_memory-don-t-force-global-oom-for-dying-tasks.patch
+mm-oom-do-not-trigger-out_of_memory-from-the-pf.patch
+video-backlight-drop-maximum-brightness-override-for-brightness-zero.patch
+s390-cio-check-the-subchannel-validity-for-dev_busid.patch
+s390-tape-fix-timer-initialization-in-tape_std_assign.patch
+s390-cio-make-ccw_device_dma_-more-robust.patch
+powerpc-powernv-prd-unregister-opal_msg_prd2-notifier-during-module-unload.patch
diff --git a/queue-5.4/video-backlight-drop-maximum-brightness-override-for-brightness-zero.patch b/queue-5.4/video-backlight-drop-maximum-brightness-override-for-brightness-zero.patch
new file mode 100644 (file)
index 0000000..95b52ab
--- /dev/null
@@ -0,0 +1,49 @@
+From 33a5471f8da976bf271a1ebbd6b9d163cb0cb6aa Mon Sep 17 00:00:00 2001
+From: Marek Vasut <marex@denx.de>
+Date: Tue, 21 Sep 2021 19:35:06 +0200
+Subject: video: backlight: Drop maximum brightness override for brightness zero
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Marek Vasut <marex@denx.de>
+
+commit 33a5471f8da976bf271a1ebbd6b9d163cb0cb6aa upstream.
+
+The note in c2adda27d202f ("video: backlight: Add of_find_backlight helper
+in backlight.c") says that gpio-backlight uses brightness as power state.
+This has since been fixed in ec665b756e6f7 ("backlight: gpio-backlight:
+Correct initial power state handling") and other backlight drivers do not
+require this workaround. Drop the workaround.
+
+This fixes the case where e.g. pwm-backlight can perfectly well be set to
+brightness 0 on boot in DT, which without this patch leads to the display
+brightness to be max instead of off.
+
+Fixes: c2adda27d202f ("video: backlight: Add of_find_backlight helper in backlight.c")
+Cc: <stable@vger.kernel.org> # 5.4+
+Cc: <stable@vger.kernel.org> # 4.19.x: ec665b756e6f7: backlight: gpio-backlight: Correct initial power state handling
+Signed-off-by: Marek Vasut <marex@denx.de>
+Acked-by: Noralf Trønnes <noralf@tronnes.org>
+Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
+Signed-off-by: Lee Jones <lee.jones@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/video/backlight/backlight.c |    6 ------
+ 1 file changed, 6 deletions(-)
+
+--- a/drivers/video/backlight/backlight.c
++++ b/drivers/video/backlight/backlight.c
+@@ -630,12 +630,6 @@ struct backlight_device *of_find_backlig
+                       of_node_put(np);
+                       if (!bd)
+                               return ERR_PTR(-EPROBE_DEFER);
+-                      /*
+-                       * Note: gpio_backlight uses brightness as
+-                       * power state during probe
+-                       */
+-                      if (!bd->props.brightness)
+-                              bd->props.brightness = bd->props.max_brightness;
+               }
+       }
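For illustration, a consumer-side sketch (hypothetical panel driver; it assumes the common devm_of_find_backlight() and backlight_enable() APIs, which are not part of this patch) of what changes in practice: an initial brightness of 0 configured in the device tree is now preserved instead of being bumped to max_brightness.

	bl = devm_of_find_backlight(&pdev->dev);
	if (IS_ERR(bl))
		return PTR_ERR(bl);

	dev_info(&pdev->dev, "initial brightness %d of %d\n",
		 bl->props.brightness, bl->props.max_brightness);

	backlight_enable(bl);	/* honours the 0 from DT; a pwm-backlight stays off */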