--- /dev/null
+From 79b4a9cf0e2ea8203ce777c8d5cfa86c71eae86e Mon Sep 17 00:00:00 2001
+From: Aurelien Jarno <aurelien@aurel32.net>
+Date: Tue, 9 Apr 2019 16:53:55 +0200
+Subject: MIPS: scall64-o32: Fix indirect syscall number load
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Aurelien Jarno <aurelien@aurel32.net>
+
+commit 79b4a9cf0e2ea8203ce777c8d5cfa86c71eae86e upstream.
+
+Commit 4c21b8fd8f14 (MIPS: seccomp: Handle indirect system calls (o32))
+added indirect syscall detection for O32 processes running on MIPS64,
+but it did not work correctly for big endian kernel/processes. The
+reason is that the syscall number is loaded from ARG1 using the lw
+instruction while this is a 64-bit value, so zero is loaded instead of
+the syscall number.
+
+Fix the code by using the ld instruction instead. When running a 32-bit
+process on a 64-bit CPU, the values are properly sign-extended, so the
+value passed to syscall_trace_enter is correct.
+
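+For illustration, a minimal user-space sketch of the endianness effect
+(a big-endian target is assumed and the value is made up):
+
+  #include <stdio.h>
+  #include <stdint.h>
+  #include <string.h>
+
+  int main(void)
+  {
+          uint64_t arg1 = 20;     /* 64-bit slot holding a small number */
+          uint32_t low;
+
+          /* Reading only the first 4 bytes, as lw does, picks up the
+           * high half on big endian, which is zero here. */
+          memcpy(&low, &arg1, sizeof(low));
+          printf("%u\n", (unsigned)low);  /* 0 on big endian, 20 on little */
+          return 0;
+  }
+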
+Recent systemd versions with seccomp enabled whitelist the getpid
+syscall for their internal processes (e.g. systemd-journald), but call
+it through syscall(SYS_getpid). This fix therefore allows O32 big endian
+systems with a 64-bit kernel to run recent systemd versions.
+
+Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
+Cc: <stable@vger.kernel.org> # v3.15+
+Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Signed-off-by: Paul Burton <paul.burton@mips.com>
+Cc: Ralf Baechle <ralf@linux-mips.org>
+Cc: James Hogan <jhogan@kernel.org>
+Cc: linux-mips@vger.kernel.org
+Cc: linux-kernel@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/mips/kernel/scall64-o32.S | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/mips/kernel/scall64-o32.S
++++ b/arch/mips/kernel/scall64-o32.S
+@@ -126,7 +126,7 @@ trace_a_syscall:
+ subu t1, v0, __NR_O32_Linux
+ move a1, v0
+ bnez t1, 1f /* __NR_syscall at offset 0 */
+- lw a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
++ ld a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
+ .set pop
+
+ 1: jal syscall_trace_enter
--- /dev/null
+From a860fa7b96e1a1c974556327aa1aee852d434c21 Mon Sep 17 00:00:00 2001
+From: Xie XiuQi <xiexiuqi@huawei.com>
+Date: Sat, 20 Apr 2019 16:34:16 +0800
+Subject: sched/numa: Fix a possible divide-by-zero
+
+From: Xie XiuQi <xiexiuqi@huawei.com>
+
+commit a860fa7b96e1a1c974556327aa1aee852d434c21 upstream.
+
+sched_clock_cpu() may not be consistent between CPUs. If a task
+migrates to another CPU, then se.exec_start is set to that CPU's
+rq_clock_task() by update_stats_curr_start(). Specifically, the new
+value might be before the old value due to clock skew.
+
+So then if in numa_get_avg_runtime() the expression:
+
+ 'now - p->last_task_numa_placement'
+
+ends up as -1, then the divider '*period + 1' in task_numa_placement()
+is 0 and things go bang. Similar to update_curr(), check if time goes
+backwards to avoid this.
+
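+A minimal user-space sketch of the wraparound and of the check added
+below (the clock values are made up):
+
+  #include <stdio.h>
+  #include <stdint.h>
+
+  int main(void)
+  {
+          uint64_t now    = 1000;          /* new CPU's clock        */
+          uint64_t last   = 1001;          /* old CPU's clock, ahead */
+          uint64_t period = now - last;    /* wraps to 0xffff...ffff */
+
+          /* the added check: clamp time going backwards to zero */
+          if ((int64_t)period < 0)
+                  period = 0;
+
+          /* task_numa_placement() divides by (period + 1); without
+           * the clamp that divisor would be 0 */
+          printf("divisor = %llu\n", (unsigned long long)(period + 1));
+          return 0;
+  }
+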
+[ peterz: Wrote new changelog. ]
+[ mingo: Tweaked the code comment. ]
+
+Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: cj.chengjian@huawei.com
+Cc: <stable@vger.kernel.org>
+Link: http://lkml.kernel.org/r/20190425080016.GX11158@hirez.programming.kicks-ass.net
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/sched/fair.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1722,6 +1722,10 @@ static u64 numa_get_avg_runtime(struct t
+ if (p->last_task_numa_placement) {
+ delta = runtime - p->last_sum_exec_runtime;
+ *period = now - p->last_task_numa_placement;
++
++ /* Avoid time going backwards, prevent potential divide error: */
++ if (unlikely((s64)*period < 0))
++ *period = 0;
+ } else {
+ delta = p->se.avg.load_sum / p->se.load.weight;
+ *period = LOAD_AVG_MAX;
kbuild-simplify-ld-option-implementation.patch
kvm-fail-kvm_set_vcpu_events-with-invalid-exception-.patch
cifs-do-not-attempt-cifs-operation-on-smb2-rename-error.patch
+mips-scall64-o32-fix-indirect-syscall-number-load.patch
+trace-fix-preempt_enable_no_resched-abuse.patch
+sched-numa-fix-a-possible-divide-by-zero.patch
--- /dev/null
+From d6097c9e4454adf1f8f2c9547c2fa6060d55d952 Mon Sep 17 00:00:00 2001
+From: Peter Zijlstra <peterz@infradead.org>
+Date: Tue, 23 Apr 2019 22:03:18 +0200
+Subject: trace: Fix preempt_enable_no_resched() abuse
+
+From: Peter Zijlstra <peterz@infradead.org>
+
+commit d6097c9e4454adf1f8f2c9547c2fa6060d55d952 upstream.
+
+Unless the very next line is schedule(), or implies it, one must not use
+preempt_enable_no_resched(). It can cause a preemption to go missing and
+thereby cause arbitrary delays, breaking the PREEMPT=y invariant.
+
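+As a kernel-style sketch, the only pattern where the _no_resched
+variant is sound (the surrounding function is made up):
+
+  #include <linux/preempt.h>
+  #include <linux/sched.h>
+
+  static void example_yield(void)
+  {
+          preempt_disable();
+          /* ... work that must not be preempted ... */
+          preempt_enable_no_resched();    /* safe only because...  */
+          schedule();                     /* ...schedule() is next */
+  }
+
+Anywhere schedule() does not immediately follow, a full preempt_enable()
+(or preempt_enable_notrace() in tracing code, as in the hunk below) must
+be used so the preemption check is not lost.
+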
+Link: http://lkml.kernel.org/r/20190423200318.GY14281@hirez.programming.kicks-ass.net
+
+Cc: Waiman Long <longman@redhat.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Ingo Molnar <mingo@redhat.com>
+Cc: Will Deacon <will.deacon@arm.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: the arch/x86 maintainers <x86@kernel.org>
+Cc: Davidlohr Bueso <dave@stgolabs.net>
+Cc: Tim Chen <tim.c.chen@linux.intel.com>
+Cc: huang ying <huang.ying.caritas@gmail.com>
+Cc: Roman Gushchin <guro@fb.com>
+Cc: Alexei Starovoitov <ast@kernel.org>
+Cc: Daniel Borkmann <daniel@iogearbox.net>
+Cc: stable@vger.kernel.org
+Fixes: 2c2d7329d8af ("tracing/ftrace: use preempt_enable_no_resched_notrace in ring_buffer_time_stamp()")
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/trace/ring_buffer.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -701,7 +701,7 @@ u64 ring_buffer_time_stamp(struct ring_b
+
+ preempt_disable_notrace();
+ time = rb_time_stamp(buffer);
+- preempt_enable_no_resched_notrace();
++ preempt_enable_notrace();
+
+ return time;
+ }