git.ipfire.org Git - thirdparty/kernel/stable-queue.git/commitdiff
5.0-stable patches
author Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 1 Apr 2019 10:29:44 +0000 (12:29 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 1 Apr 2019 10:29:44 +0000 (12:29 +0200)
added patches:
cpu-hotplug-prevent-crash-when-cpu-bringup-fails-on-config_hotplug_cpu-n.patch
kvm-reject-device-ioctls-from-processes-other-than-the-vm-s-creator.patch
kvm-x86-emulate-msr_ia32_arch_capabilities-on-amd-hosts.patch
kvm-x86-update-rip-after-emulating-io.patch
objtool-query-pkg-config-for-libelf-location.patch
perf-intel-pt-fix-tsc-slip.patch
perf-pmu-fix-parser-error-for-uncore-event-alias.patch
powerpc-64-fix-memcmp-reading-past-the-end-of-src-dest.patch
powerpc-pseries-energy-use-of-accessor-functions-to-read-ibm-drc-indexes.patch
powerpc-pseries-mce-fix-misleading-print-for-tlb-mutlihit.patch
watchdog-respect-watchdog-cpumask-on-cpu-hotplug.patch
x86-smp-enforce-config_hotplug_cpu-when-smp-y.patch

13 files changed:
queue-5.0/cpu-hotplug-prevent-crash-when-cpu-bringup-fails-on-config_hotplug_cpu-n.patch [new file with mode: 0644]
queue-5.0/kvm-reject-device-ioctls-from-processes-other-than-the-vm-s-creator.patch [new file with mode: 0644]
queue-5.0/kvm-x86-emulate-msr_ia32_arch_capabilities-on-amd-hosts.patch [new file with mode: 0644]
queue-5.0/kvm-x86-update-rip-after-emulating-io.patch [new file with mode: 0644]
queue-5.0/objtool-query-pkg-config-for-libelf-location.patch [new file with mode: 0644]
queue-5.0/perf-intel-pt-fix-tsc-slip.patch [new file with mode: 0644]
queue-5.0/perf-pmu-fix-parser-error-for-uncore-event-alias.patch [new file with mode: 0644]
queue-5.0/powerpc-64-fix-memcmp-reading-past-the-end-of-src-dest.patch [new file with mode: 0644]
queue-5.0/powerpc-pseries-energy-use-of-accessor-functions-to-read-ibm-drc-indexes.patch [new file with mode: 0644]
queue-5.0/powerpc-pseries-mce-fix-misleading-print-for-tlb-mutlihit.patch [new file with mode: 0644]
queue-5.0/series
queue-5.0/watchdog-respect-watchdog-cpumask-on-cpu-hotplug.patch [new file with mode: 0644]
queue-5.0/x86-smp-enforce-config_hotplug_cpu-when-smp-y.patch [new file with mode: 0644]

diff --git a/queue-5.0/cpu-hotplug-prevent-crash-when-cpu-bringup-fails-on-config_hotplug_cpu-n.patch b/queue-5.0/cpu-hotplug-prevent-crash-when-cpu-bringup-fails-on-config_hotplug_cpu-n.patch
new file mode 100644 (file)
index 0000000..73330f7
--- /dev/null
@@ -0,0 +1,142 @@
+From 206b92353c839c0b27a0b9bec24195f93fd6cf7a Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Tue, 26 Mar 2019 17:36:05 +0100
+Subject: cpu/hotplug: Prevent crash when CPU bringup fails on CONFIG_HOTPLUG_CPU=n
+
+From: Thomas Gleixner <tglx@linutronix.de>
+
+commit 206b92353c839c0b27a0b9bec24195f93fd6cf7a upstream.
+
+Tianyu reported a crash in a CPU hotplug teardown callback when booting a
+kernel which has CONFIG_HOTPLUG_CPU disabled with the 'nosmt' boot
+parameter.
+
+It turns out that the SMP=y CONFIG_HOTPLUG_CPU=n case has been broken
+forever in the case that a bringup callback fails. Unfortunately this
+issue was not recognized when the CPU hotplug code was reworked, so the
+shortcoming just stayed in place.
+
+When a bringup callback fails, the CPU hotplug code rolls back the
+operation and takes the CPU offline.
+
+The 'nosmt' command line argument uses a bringup failure to abort the
+bringup of SMT sibling CPUs. This partial bringup is required due to the
+MCE misdesign on Intel CPUs.
+
+With CONFIG_HOTPLUG_CPU=y the rollback works perfectly fine, but
+CONFIG_HOTPLUG_CPU=n lacks essential mechanisms to exercise the low level
+teardown of a CPU including the synchronizations in various facilities like
+RCU, NOHZ and others.
+
+As a consequence the teardown callbacks which must be executed on the
+outgoing CPU within stop machine with interrupts disabled are executed on
+the control CPU in an interrupt-enabled and preemptible context, causing
+the kernel to crash and burn. The pre-state-machine code has a different
+failure mode which is more subtle and results in a less obvious
+use-after-free crash, because the control side frees resources which are
+still in use by the undead CPU.
+
+But this is not an x86-only problem. Any architecture which supports the
+SMP=y HOTPLUG_CPU=n combination suffers from the same issue. It's just less
+likely to be triggered because in 99.99999% of the cases all bringup
+callbacks succeed.
+
+The easy solution of making HOTPLUG_CPU mandatory for SMP does not work on
+all architectures as the following architectures have either no hotplug
+support at all or not all subarchitectures support it:
+
+ alpha, arc, hexagon, openrisc, riscv, sparc (32bit), mips (partial).
+
+Crashing the kernel in such a situation is not an acceptable state
+either.
+
+Implement a minimal rollback variant by limiting the teardown to the point
+where all regular teardown callbacks have been invoked and leave the CPU in
+the 'dead' idle state. This has the following consequences:
+
+ - the CPU is brought down to the point where the stop_machine takedown
+   would happen.
+
+ - the CPU stays there forever and is idle
+
+ - The CPU is cleared in the CPU active mask, but not in the CPU online
+   mask which is a legit state.
+
+ - Interrupts are not forced away from the CPU
+
+ - All facilities which only look at online mask would still see it, but
+   that is the case during normal hotplug/unplug operations as well. It's
+   just a (way) longer time frame.
+
+This will expose issues which haven't been exposed before, or only
+seldom, because the normally transient state of being non-active but
+online is now a permanent state. In testing this already exposed an issue
+vs. work queues, where the vmstat code schedules work on the almost-dead
+CPU, which ends up in an unbound workqueue and triggers 'preemptible
+context' warnings. This is not a problem of this change; it merely
+exposes an already existing issue. Still, this is better than crashing
+outright without a chance to debug it.
+
+This is mainly intended as a workaround for those architectures which do
+not support HOTPLUG_CPU. All others should enforce HOTPLUG_CPU for SMP.
+
+Fixes: 2e1a3483ce74 ("cpu/hotplug: Split out the state walk into functions")
+Reported-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Tested-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
+Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Konrad Wilk <konrad.wilk@oracle.com>
+Cc: Josh Poimboeuf <jpoimboe@redhat.com>
+Cc: Mukesh Ojha <mojha@codeaurora.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Jiri Kosina <jkosina@suse.cz>
+Cc: Rik van Riel <riel@surriel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Micheal Kelley <michael.h.kelley@microsoft.com>
+Cc: "K. Y. Srinivasan" <kys@microsoft.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: K. Y. Srinivasan <kys@microsoft.com>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20190326163811.503390616@linutronix.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/cpu.c |   20 ++++++++++++++++++--
+ 1 file changed, 18 insertions(+), 2 deletions(-)
+
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -555,6 +555,20 @@ static void undo_cpu_up(unsigned int cpu
+               cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
+ }
++static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
++{
++      if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
++              return true;
++      /*
++       * When CPU hotplug is disabled, then taking the CPU down is not
++       * possible because takedown_cpu() and the architecture and
++       * subsystem specific mechanisms are not available. So the CPU
++       * which would be completely unplugged again needs to stay around
++       * in the current state.
++       */
++      return st->state <= CPUHP_BRINGUP_CPU;
++}
++
+ static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
+                             enum cpuhp_state target)
+ {
+@@ -565,8 +579,10 @@ static int cpuhp_up_callbacks(unsigned i
+               st->state++;
+               ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
+               if (ret) {
+-                      st->target = prev_state;
+-                      undo_cpu_up(cpu, st);
++                      if (can_rollback_cpu(st)) {
++                              st->target = prev_state;
++                              undo_cpu_up(cpu, st);
++                      }
+                       break;
+               }
+       }
diff --git a/queue-5.0/kvm-reject-device-ioctls-from-processes-other-than-the-vm-s-creator.patch b/queue-5.0/kvm-reject-device-ioctls-from-processes-other-than-the-vm-s-creator.patch
new file mode 100644 (file)
index 0000000..453e001
--- /dev/null
@@ -0,0 +1,78 @@
+From ddba91801aeb5c160b660caed1800eb3aef403f8 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+Date: Fri, 15 Feb 2019 12:48:39 -0800
+Subject: KVM: Reject device ioctls from processes other than the VM's creator
+
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+
+commit ddba91801aeb5c160b660caed1800eb3aef403f8 upstream.
+
+KVM's API requires that ioctls be issued from the same process
+that created the VM.  In other words, userspace can play games with a
+VM's file descriptors, e.g. fork(), SCM_RIGHTS, etc..., but only the
+creator can do anything useful.  Explicitly reject device ioctls that
+are issued by a process other than the VM's creator, and update KVM's
+API documentation to extend its requirements to device ioctls.
+
+Fixes: 852b6d57dc7f ("kvm: add device control API")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ Documentation/virtual/kvm/api.txt |   16 +++++++++++-----
+ virt/kvm/kvm_main.c               |    3 +++
+ 2 files changed, 14 insertions(+), 5 deletions(-)
+
+--- a/Documentation/virtual/kvm/api.txt
++++ b/Documentation/virtual/kvm/api.txt
+@@ -13,7 +13,7 @@ of a virtual machine.  The ioctls belong
+  - VM ioctls: These query and set attributes that affect an entire virtual
+    machine, for example memory layout.  In addition a VM ioctl is used to
+-   create virtual cpus (vcpus).
++   create virtual cpus (vcpus) and devices.
+    Only run VM ioctls from the same process (address space) that was used
+    to create the VM.
+@@ -24,6 +24,11 @@ of a virtual machine.  The ioctls belong
+    Only run vcpu ioctls from the same thread that was used to create the
+    vcpu.
++ - device ioctls: These query and set attributes that control the operation
++   of a single device.
++
++   device ioctls must be issued from the same process (address space) that
++   was used to create the VM.
+ 2. File descriptors
+ -------------------
+@@ -32,10 +37,11 @@ The kvm API is centered around file desc
+ open("/dev/kvm") obtains a handle to the kvm subsystem; this handle
+ can be used to issue system ioctls.  A KVM_CREATE_VM ioctl on this
+ handle will create a VM file descriptor which can be used to issue VM
+-ioctls.  A KVM_CREATE_VCPU ioctl on a VM fd will create a virtual cpu
+-and return a file descriptor pointing to it.  Finally, ioctls on a vcpu
+-fd can be used to control the vcpu, including the important task of
+-actually running guest code.
++ioctls.  A KVM_CREATE_VCPU or KVM_CREATE_DEVICE ioctl on a VM fd will
++create a virtual cpu or device and return a file descriptor pointing to
++the new resource.  Finally, ioctls on a vcpu or device fd can be used
++to control the vcpu or device.  For vcpus, this includes the important
++task of actually running guest code.
+ In general file descriptors can be migrated among processes by means
+ of fork() and the SCM_RIGHTS facility of unix domain socket.  These
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2902,6 +2902,9 @@ static long kvm_device_ioctl(struct file
+ {
+       struct kvm_device *dev = filp->private_data;
++      if (dev->kvm->mm != current->mm)
++              return -EIO;
++
+       switch (ioctl) {
+       case KVM_SET_DEVICE_ATTR:
+               return kvm_device_ioctl_attr(dev, dev->ops->set_attr, arg);
diff --git a/queue-5.0/kvm-x86-emulate-msr_ia32_arch_capabilities-on-amd-hosts.patch b/queue-5.0/kvm-x86-emulate-msr_ia32_arch_capabilities-on-amd-hosts.patch
new file mode 100644 (file)
index 0000000..d38d3d3
--- /dev/null
@@ -0,0 +1,125 @@
+From 0cf9135b773bf32fba9dd8e6699c1b331ee4b749 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+Date: Thu, 7 Mar 2019 15:43:02 -0800
+Subject: KVM: x86: Emulate MSR_IA32_ARCH_CAPABILITIES on AMD hosts
+
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+
+commit 0cf9135b773bf32fba9dd8e6699c1b331ee4b749 upstream.
+
+The CPUID flag ARCH_CAPABILITIES is unconditionally exposed to host
+userspace for all x86 hosts, i.e. KVM advertises ARCH_CAPABILITIES
+regardless of hardware support under the pretense that KVM fully
+emulates MSR_IA32_ARCH_CAPABILITIES.  Unfortunately, only VMX hosts
+handle accesses to MSR_IA32_ARCH_CAPABILITIES (despite KVM_GET_MSRS
+also reporting MSR_IA32_ARCH_CAPABILITIES for all hosts).
+
+Move the MSR_IA32_ARCH_CAPABILITIES handling to common x86 code so
+that it's emulated on AMD hosts.
+
+Fixes: 1eaafe91a0df4 ("kvm: x86: IA32_ARCH_CAPABILITIES is always supported")
+Cc: stable@vger.kernel.org
+Reported-by: Xiaoyao Li <xiaoyao.li@linux.intel.com>
+Cc: Jim Mattson <jmattson@google.com>
+Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/include/asm/kvm_host.h |    1 +
+ arch/x86/kvm/vmx/vmx.c          |   13 -------------
+ arch/x86/kvm/vmx/vmx.h          |    1 -
+ arch/x86/kvm/x86.c              |   12 ++++++++++++
+ 4 files changed, 13 insertions(+), 14 deletions(-)
+
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -570,6 +570,7 @@ struct kvm_vcpu_arch {
+       bool tpr_access_reporting;
+       u64 ia32_xss;
+       u64 microcode_version;
++      u64 arch_capabilities;
+       /*
+        * Paging state of the vcpu
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1679,12 +1679,6 @@ static int vmx_get_msr(struct kvm_vcpu *
+               msr_info->data = to_vmx(vcpu)->spec_ctrl;
+               break;
+-      case MSR_IA32_ARCH_CAPABILITIES:
+-              if (!msr_info->host_initiated &&
+-                  !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
+-                      return 1;
+-              msr_info->data = to_vmx(vcpu)->arch_capabilities;
+-              break;
+       case MSR_IA32_SYSENTER_CS:
+               msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
+               break;
+@@ -1891,11 +1885,6 @@ static int vmx_set_msr(struct kvm_vcpu *
+               vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
+                                             MSR_TYPE_W);
+               break;
+-      case MSR_IA32_ARCH_CAPABILITIES:
+-              if (!msr_info->host_initiated)
+-                      return 1;
+-              vmx->arch_capabilities = data;
+-              break;
+       case MSR_IA32_CR_PAT:
+               if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+                       if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+@@ -4083,8 +4072,6 @@ static void vmx_vcpu_setup(struct vcpu_v
+               ++vmx->nmsrs;
+       }
+-      vmx->arch_capabilities = kvm_get_arch_capabilities();
+-
+       vm_exit_controls_init(vmx, vmx_vmexit_ctrl());
+       /* 22.2.1, 20.8.1 */
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -191,7 +191,6 @@ struct vcpu_vmx {
+       u64                   msr_guest_kernel_gs_base;
+ #endif
+-      u64                   arch_capabilities;
+       u64                   spec_ctrl;
+       u32 vm_entry_controls_shadow;
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2443,6 +2443,11 @@ int kvm_set_msr_common(struct kvm_vcpu *
+               if (msr_info->host_initiated)
+                       vcpu->arch.microcode_version = data;
+               break;
++      case MSR_IA32_ARCH_CAPABILITIES:
++              if (!msr_info->host_initiated)
++                      return 1;
++              vcpu->arch.arch_capabilities = data;
++              break;
+       case MSR_EFER:
+               return set_efer(vcpu, data);
+       case MSR_K7_HWCR:
+@@ -2747,6 +2752,12 @@ int kvm_get_msr_common(struct kvm_vcpu *
+       case MSR_IA32_UCODE_REV:
+               msr_info->data = vcpu->arch.microcode_version;
+               break;
++      case MSR_IA32_ARCH_CAPABILITIES:
++              if (!msr_info->host_initiated &&
++                  !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
++                      return 1;
++              msr_info->data = vcpu->arch.arch_capabilities;
++              break;
+       case MSR_IA32_TSC:
+               msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
+               break;
+@@ -8725,6 +8736,7 @@ struct kvm_vcpu *kvm_arch_vcpu_create(st
+ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
+ {
++      vcpu->arch.arch_capabilities = kvm_get_arch_capabilities();
+       vcpu->arch.msr_platform_info = MSR_PLATFORM_INFO_CPUID_FAULT;
+       kvm_vcpu_mtrr_init(vcpu);
+       vcpu_load(vcpu);
diff --git a/queue-5.0/kvm-x86-update-rip-after-emulating-io.patch b/queue-5.0/kvm-x86-update-rip-after-emulating-io.patch
new file mode 100644 (file)
index 0000000..c594849
--- /dev/null
@@ -0,0 +1,156 @@
+From 45def77ebf79e2e8942b89ed79294d97ce914fa0 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+Date: Mon, 11 Mar 2019 20:01:05 -0700
+Subject: KVM: x86: update %rip after emulating IO
+
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+
+commit 45def77ebf79e2e8942b89ed79294d97ce914fa0 upstream.
+
+Most (all?) x86 platforms provide a port IO based reset mechanism, e.g.
+OUT 92h or CF9h.  Userspace may emulate said mechanism, i.e. reset a
+vCPU in response to KVM_EXIT_IO, without explicitly announcing to KVM
+that it is doing a reset, e.g. Qemu jams vCPU state and resumes running.
+
+To avoid corrupting %rip after such a reset, commit 0967b7bf1c22 ("KVM:
+Skip pio instruction when it is emulated, not executed") changed the
+behavior of PIO handlers, i.e. today's "fast" PIO handling to skip the
+instruction prior to exiting to userspace.  Full emulation doesn't need
+such tricks because re-emulating the instruction will naturally handle
+%rip being changed to point at the reset vector.
+
+Updating %rip prior to exiting to userspace has several drawbacks:
+
+  - Userspace sees the wrong %rip on the exit, e.g. if PIO emulation
+    fails it will likely yell about the wrong address.
+  - Single step exits to userspace are effectively dropped as
+    KVM_EXIT_DEBUG is overwritten with KVM_EXIT_IO.
+  - Behavior of PIO emulation is different depending on whether it
+    goes down the fast path or the slow path.
+
+Rather than skip the PIO instruction before exiting to userspace,
+snapshot the linear %rip and cancel PIO completion if the current
+value does not match the snapshot.  For a 64-bit vCPU, i.e. the most
+common scenario, the snapshot and comparison has negligible overhead
+as VMCS.GUEST_RIP will be cached regardless, i.e. there is no extra
+VMREAD in this case.
+
+All other alternatives to snapshotting the linear %rip that don't
+rely on an explicit reset announcement suffer from one corner case
+or another.  For example, canceling PIO completion on any write to
+%rip fails if userspace does a save/restore of %rip, and attempting to
+avoid that issue by canceling PIO only if %rip changed then fails if PIO
+collides with the reset %rip.  Attempting to zero in on the exact reset
+vector won't work for APs, which means adding more hooks such as the
+vCPU's MP_STATE, and so on and so forth.
+
+Checking for a linear %rip match technically suffers from corner cases,
+e.g. userspace could theoretically rewrite the underlying code page and
+expect a different instruction to execute, or the guest hardcodes a PIO
+reset at 0xfffffff0, but those are far, far outside of what can be
+considered normal operation.
+
+Fixes: 432baf60eee3 ("KVM: VMX: use kvm_fast_pio_in for handling IN I/O")
+Cc: <stable@vger.kernel.org>
+Reported-by: Jim Mattson <jmattson@google.com>
+Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/include/asm/kvm_host.h |    1 +
+ arch/x86/kvm/x86.c              |   36 ++++++++++++++++++++++++++----------
+ 2 files changed, 27 insertions(+), 10 deletions(-)
+
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -352,6 +352,7 @@ struct kvm_mmu_page {
+ };
+ struct kvm_pio_request {
++      unsigned long linear_rip;
+       unsigned long count;
+       int in;
+       int port;
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -6533,14 +6533,27 @@ int kvm_emulate_instruction_from_buffer(
+ }
+ EXPORT_SYMBOL_GPL(kvm_emulate_instruction_from_buffer);
++static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
++{
++      vcpu->arch.pio.count = 0;
++
++      if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip)))
++              return 1;
++
++      return kvm_skip_emulated_instruction(vcpu);
++}
++
+ static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
+                           unsigned short port)
+ {
+       unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
+       int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
+                                           size, port, &val, 1);
+-      /* do not return to emulator after return from userspace */
+-      vcpu->arch.pio.count = 0;
++
++      if (!ret) {
++              vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
++              vcpu->arch.complete_userspace_io = complete_fast_pio_out;
++      }
+       return ret;
+ }
+@@ -6551,6 +6564,11 @@ static int complete_fast_pio_in(struct k
+       /* We should only ever be called with arch.pio.count equal to 1 */
+       BUG_ON(vcpu->arch.pio.count != 1);
++      if (unlikely(!kvm_is_linear_rip(vcpu, vcpu->arch.pio.linear_rip))) {
++              vcpu->arch.pio.count = 0;
++              return 1;
++      }
++
+       /* For size less than 4 we merge, else we zero extend */
+       val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
+                                       : 0;
+@@ -6563,7 +6581,7 @@ static int complete_fast_pio_in(struct k
+                                vcpu->arch.pio.port, &val, 1);
+       kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+-      return 1;
++      return kvm_skip_emulated_instruction(vcpu);
+ }
+ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
+@@ -6582,6 +6600,7 @@ static int kvm_fast_pio_in(struct kvm_vc
+               return ret;
+       }
++      vcpu->arch.pio.linear_rip = kvm_get_linear_rip(vcpu);
+       vcpu->arch.complete_userspace_io = complete_fast_pio_in;
+       return 0;
+@@ -6589,16 +6608,13 @@ static int kvm_fast_pio_in(struct kvm_vc
+ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in)
+ {
+-      int ret = kvm_skip_emulated_instruction(vcpu);
++      int ret;
+-      /*
+-       * TODO: we might be squashing a KVM_GUESTDBG_SINGLESTEP-triggered
+-       * KVM_EXIT_DEBUG here.
+-       */
+       if (in)
+-              return kvm_fast_pio_in(vcpu, size, port) && ret;
++              ret = kvm_fast_pio_in(vcpu, size, port);
+       else
+-              return kvm_fast_pio_out(vcpu, size, port) && ret;
++              ret = kvm_fast_pio_out(vcpu, size, port);
++      return ret && kvm_skip_emulated_instruction(vcpu);
+ }
+ EXPORT_SYMBOL_GPL(kvm_fast_pio);
diff --git a/queue-5.0/objtool-query-pkg-config-for-libelf-location.patch b/queue-5.0/objtool-query-pkg-config-for-libelf-location.patch
new file mode 100644 (file)
index 0000000..21b0c07
--- /dev/null
@@ -0,0 +1,60 @@
+From 056d28d135bca0b1d0908990338e00e9dadaf057 Mon Sep 17 00:00:00 2001
+From: Rolf Eike Beer <eb@emlix.com>
+Date: Tue, 26 Mar 2019 12:48:39 -0500
+Subject: objtool: Query pkg-config for libelf location
+
+From: Rolf Eike Beer <eb@emlix.com>
+
+commit 056d28d135bca0b1d0908990338e00e9dadaf057 upstream.
+
+If it is not in the default location, compilation fails at several points.
+
+Signed-off-by: Rolf Eike Beer <eb@emlix.com>
+Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/91a25e992566a7968fedc89ec80e7f4c83ad0548.1553622500.git.jpoimboe@redhat.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ Makefile               |    4 +++-
+ tools/objtool/Makefile |    7 +++++--
+ 2 files changed, 8 insertions(+), 3 deletions(-)
+
+--- a/Makefile
++++ b/Makefile
+@@ -944,9 +944,11 @@ mod_sign_cmd = true
+ endif
+ export mod_sign_cmd
++HOST_LIBELF_LIBS = $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ ifdef CONFIG_STACK_VALIDATION
+   has_libelf := $(call try-run,\
+-              echo "int main() {}" | $(HOSTCC) -xc -o /dev/null -lelf -,1,0)
++              echo "int main() {}" | $(HOSTCC) -xc -o /dev/null $(HOST_LIBELF_LIBS) -,1,0)
+   ifeq ($(has_libelf),1)
+     objtool_target := tools/objtool FORCE
+   else
+--- a/tools/objtool/Makefile
++++ b/tools/objtool/Makefile
+@@ -25,14 +25,17 @@ LIBSUBCMD          = $(LIBSUBCMD_OUTPUT)libsubcm
+ OBJTOOL    := $(OUTPUT)objtool
+ OBJTOOL_IN := $(OBJTOOL)-in.o
++LIBELF_FLAGS := $(shell pkg-config libelf --cflags 2>/dev/null)
++LIBELF_LIBS  := $(shell pkg-config libelf --libs 2>/dev/null || echo -lelf)
++
+ all: $(OBJTOOL)
+ INCLUDES := -I$(srctree)/tools/include \
+           -I$(srctree)/tools/arch/$(HOSTARCH)/include/uapi \
+           -I$(srctree)/tools/objtool/arch/$(ARCH)/include
+ WARNINGS := $(EXTRA_WARNINGS) -Wno-switch-default -Wno-switch-enum -Wno-packed
+-CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES)
+-LDFLAGS  += -lelf $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
++CFLAGS   += -Werror $(WARNINGS) $(KBUILD_HOSTCFLAGS) -g $(INCLUDES) $(LIBELF_FLAGS)
++LDFLAGS  += $(LIBELF_LIBS) $(LIBSUBCMD) $(KBUILD_HOSTLDFLAGS)
+ # Allow old libelf to be used:
+ elfshdr := $(shell echo '$(pound)include <libelf.h>' | $(CC) $(CFLAGS) -x c -E - | grep elf_getshdr)
diff --git a/queue-5.0/perf-intel-pt-fix-tsc-slip.patch b/queue-5.0/perf-intel-pt-fix-tsc-slip.patch
new file mode 100644 (file)
index 0000000..923ef39
--- /dev/null
@@ -0,0 +1,56 @@
+From f3b4e06b3bda759afd042d3d5fa86bea8f1fe278 Mon Sep 17 00:00:00 2001
+From: Adrian Hunter <adrian.hunter@intel.com>
+Date: Mon, 25 Mar 2019 15:51:35 +0200
+Subject: perf intel-pt: Fix TSC slip
+
+From: Adrian Hunter <adrian.hunter@intel.com>
+
+commit f3b4e06b3bda759afd042d3d5fa86bea8f1fe278 upstream.
+
+A TSC packet can slip past MTC packets so that the timestamp appears to
+go backwards. One estimate is that it can be up to about 40 CPU cycles,
+which is certainly less than 0x1000 TSC ticks, but accept slippage an
+order of magnitude more to be on the safe side.
+
+Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
+Cc: Jiri Olsa <jolsa@redhat.com>
+Cc: stable@vger.kernel.org
+Fixes: 79b58424b821c ("perf tools: Add Intel PT support for decoding MTC packets")
+Link: http://lkml.kernel.org/r/20190325135135.18348-1-adrian.hunter@intel.com
+Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/perf/util/intel-pt-decoder/intel-pt-decoder.c |   20 ++++++++------------
+ 1 file changed, 8 insertions(+), 12 deletions(-)
+
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -251,19 +251,15 @@ struct intel_pt_decoder *intel_pt_decode
+               if (!(decoder->tsc_ctc_ratio_n % decoder->tsc_ctc_ratio_d))
+                       decoder->tsc_ctc_mult = decoder->tsc_ctc_ratio_n /
+                                               decoder->tsc_ctc_ratio_d;
+-
+-              /*
+-               * Allow for timestamps appearing to backwards because a TSC
+-               * packet has slipped past a MTC packet, so allow 2 MTC ticks
+-               * or ...
+-               */
+-              decoder->tsc_slip = multdiv(2 << decoder->mtc_shift,
+-                                      decoder->tsc_ctc_ratio_n,
+-                                      decoder->tsc_ctc_ratio_d);
+       }
+-      /* ... or 0x100 paranoia */
+-      if (decoder->tsc_slip < 0x100)
+-              decoder->tsc_slip = 0x100;
++
++      /*
++       * A TSC packet can slip past MTC packets so that the timestamp appears
++       * to go backwards. One estimate is that can be up to about 40 CPU
++       * cycles, which is certainly less than 0x1000 TSC ticks, but accept
++       * slippage an order of magnitude more to be on the safe side.
++       */
++      decoder->tsc_slip = 0x10000;
+       intel_pt_log("timestamp: mtc_shift %u\n", decoder->mtc_shift);
+       intel_pt_log("timestamp: tsc_ctc_ratio_n %u\n", decoder->tsc_ctc_ratio_n);
diff --git a/queue-5.0/perf-pmu-fix-parser-error-for-uncore-event-alias.patch b/queue-5.0/perf-pmu-fix-parser-error-for-uncore-event-alias.patch
new file mode 100644 (file)
index 0000000..836da38
--- /dev/null
@@ -0,0 +1,83 @@
+From e94d6b7f615e6dfbaf9fba7db6011db561461d0c Mon Sep 17 00:00:00 2001
+From: Kan Liang <kan.liang@linux.intel.com>
+Date: Fri, 15 Mar 2019 11:00:14 -0700
+Subject: perf pmu: Fix parser error for uncore event alias
+
+From: Kan Liang <kan.liang@linux.intel.com>
+
+commit e94d6b7f615e6dfbaf9fba7db6011db561461d0c upstream.
+
+Perf fails to parse uncore event alias, for example:
+
+  # perf stat -e unc_m_clockticks -a --no-merge sleep 1
+  event syntax error: 'unc_m_clockticks'
+                       \___ parser error
+
+Current code assumes that the event alias is from one specific PMU.
+
+To find the PMU, perf strcmps the PMU name of event alias with the real
+PMU name on the system.
+
+However, the uncore event alias may be from multiple PMUs with a common
+prefix. The PMU name of an uncore event alias is the common prefix.
+
+For example, UNC_M_CLOCKTICKS is the clock event for the iMC, which
+includes 6 PMUs with the same prefix "uncore_imc" on a Skylake server.
+
+The real PMU names on the system for iMC are uncore_imc_0 ...
+uncore_imc_5.
+
+strncmp() is used to check only the common prefix for the uncore event
+alias.
+
+With the patch:
+
+  # perf stat -e unc_m_clockticks -a --no-merge sleep 1
+  Performance counter stats for 'system wide':
+
+       723,594,722      unc_m_clockticks [uncore_imc_5]
+       724,001,954      unc_m_clockticks [uncore_imc_3]
+       724,042,655      unc_m_clockticks [uncore_imc_1]
+       724,161,001      unc_m_clockticks [uncore_imc_4]
+       724,293,713      unc_m_clockticks [uncore_imc_2]
+       724,340,901      unc_m_clockticks [uncore_imc_0]
+
+       1.002090060 seconds time elapsed
+
+Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
+Acked-by: Jiri Olsa <jolsa@kernel.org>
+Cc: Andi Kleen <ak@linux.intel.com>
+Cc: Thomas Richter <tmricht@linux.ibm.com>
+Cc: stable@vger.kernel.org
+Fixes: ea1fa48c055f ("perf stat: Handle different PMU names with common prefix")
+Link: http://lkml.kernel.org/r/1552672814-156173-1-git-send-email-kan.liang@linux.intel.com
+Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/perf/util/pmu.c |   10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -734,10 +734,20 @@ static void pmu_add_cpu_aliases(struct l
+               if (!is_arm_pmu_core(name)) {
+                       pname = pe->pmu ? pe->pmu : "cpu";
++
++                      /*
++                       * uncore alias may be from different PMU
++                       * with common prefix
++                       */
++                      if (pmu_is_uncore(name) &&
++                          !strncmp(pname, name, strlen(pname)))
++                              goto new_alias;
++
+                       if (strcmp(pname, name))
+                               continue;
+               }
++new_alias:
+               /* need type casts to override 'const' */
+               __perf_pmu__new_alias(head, NULL, (char *)pe->name,
+                               (char *)pe->desc, (char *)pe->event,
diff --git a/queue-5.0/powerpc-64-fix-memcmp-reading-past-the-end-of-src-dest.patch b/queue-5.0/powerpc-64-fix-memcmp-reading-past-the-end-of-src-dest.patch
new file mode 100644 (file)
index 0000000..f97f685
--- /dev/null
@@ -0,0 +1,115 @@
+From d9470757398a700d9450a43508000bcfd010c7a4 Mon Sep 17 00:00:00 2001
+From: Michael Ellerman <mpe@ellerman.id.au>
+Date: Fri, 22 Mar 2019 23:37:24 +1100
+Subject: powerpc/64: Fix memcmp reading past the end of src/dest
+
+From: Michael Ellerman <mpe@ellerman.id.au>
+
+commit d9470757398a700d9450a43508000bcfd010c7a4 upstream.
+
+Chandan reported that fstests' generic/026 test hit a crash:
+
+  BUG: Unable to handle kernel data access at 0xc00000062ac40000
+  Faulting instruction address: 0xc000000000092240
+  Oops: Kernel access of bad area, sig: 11 [#1]
+  LE SMP NR_CPUS=2048 DEBUG_PAGEALLOC NUMA pSeries
+  CPU: 0 PID: 27828 Comm: chacl Not tainted 5.0.0-rc2-next-20190115-00001-g6de6dba64dda #1
+  NIP:  c000000000092240 LR: c00000000066a55c CTR: 0000000000000000
+  REGS: c00000062c0c3430 TRAP: 0300   Not tainted  (5.0.0-rc2-next-20190115-00001-g6de6dba64dda)
+  MSR:  8000000002009033 <SF,VEC,EE,ME,IR,DR,RI,LE>  CR: 44000842  XER: 20000000
+  CFAR: 00007fff7f3108ac DAR: c00000062ac40000 DSISR: 40000000 IRQMASK: 0
+  GPR00: 0000000000000000 c00000062c0c36c0 c0000000017f4c00 c00000000121a660
+  GPR04: c00000062ac3fff9 0000000000000004 0000000000000020 00000000275b19c4
+  GPR08: 000000000000000c 46494c4500000000 5347495f41434c5f c0000000026073a0
+  GPR12: 0000000000000000 c0000000027a0000 0000000000000000 0000000000000000
+  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
+  GPR20: c00000062ea70020 c00000062c0c38d0 0000000000000002 0000000000000002
+  GPR24: c00000062ac3ffe8 00000000275b19c4 0000000000000001 c00000062ac30000
+  GPR28: c00000062c0c38d0 c00000062ac30050 c00000062ac30058 0000000000000000
+  NIP memcmp+0x120/0x690
+  LR  xfs_attr3_leaf_lookup_int+0x53c/0x5b0
+  Call Trace:
+    xfs_attr3_leaf_lookup_int+0x78/0x5b0 (unreliable)
+    xfs_da3_node_lookup_int+0x32c/0x5a0
+    xfs_attr_node_addname+0x170/0x6b0
+    xfs_attr_set+0x2ac/0x340
+    __xfs_set_acl+0xf0/0x230
+    xfs_set_acl+0xd0/0x160
+    set_posix_acl+0xc0/0x130
+    posix_acl_xattr_set+0x68/0x110
+    __vfs_setxattr+0xa4/0x110
+    __vfs_setxattr_noperm+0xac/0x240
+    vfs_setxattr+0x128/0x130
+    setxattr+0x248/0x600
+    path_setxattr+0x108/0x120
+    sys_setxattr+0x28/0x40
+    system_call+0x5c/0x70
+  Instruction dump:
+  7d201c28 7d402428 7c295040 38630008 38840008 408201f0 4200ffe8 2c050000
+  4182ff6c 20c50008 54c61838 7d201c28 <7d402428> 7d293436 7d4a3436 7c295040
+
+The instruction dump decodes as:
+  subfic  r6,r5,8
+  rlwinm  r6,r6,3,0,28
+  ldbrx   r9,0,r3
+  ldbrx   r10,0,r4      <-
+
+Which shows us doing an 8 byte load from c00000062ac3fff9, which
+crosses the page boundary at c00000062ac40000 and faults.
+
+It's not OK for memcmp to read past the end of the source or
+destination buffers if that would cross a page boundary, because we
+don't know that the next page is mapped.
+
+As pointed out by Segher, we can read past the end of the source or
+destination as long as we don't cross a 4K boundary, because that's
+our minimum page size on all platforms.
+
+The bug is in the code at the .Lcmp_rest_lt8bytes label. When we get
+there we know that s1 is 8-byte aligned and we have at least 1 byte to
+read, so a single 8-byte load won't read past the end of s1 and cross
+a page boundary.
+
+But we have to be more careful with s2. So check if it's within 8
+bytes of a 4K boundary and if so go to the byte-by-byte loop.
+
+Fixes: 2d9ee327adce ("powerpc/64: Align bytes before fall back to .Lshort in powerpc64 memcmp()")
+Cc: stable@vger.kernel.org # v4.19+
+Reported-by: Chandan Rajendra <chandan@linux.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
+Tested-by: Chandan Rajendra <chandan@linux.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/powerpc/lib/memcmp_64.S |   17 +++++++++++++----
+ 1 file changed, 13 insertions(+), 4 deletions(-)
+
+--- a/arch/powerpc/lib/memcmp_64.S
++++ b/arch/powerpc/lib/memcmp_64.S
+@@ -215,11 +215,20 @@ _GLOBAL_TOC(memcmp)
+       beq     .Lzero
+ .Lcmp_rest_lt8bytes:
+-      /* Here we have only less than 8 bytes to compare with. at least s1
+-       * Address is aligned with 8 bytes.
+-       * The next double words are load and shift right with appropriate
+-       * bits.
++      /*
++       * Here we have less than 8 bytes to compare. At least s1 is aligned to
++       * 8 bytes, but s2 may not be. We must make sure s2 + 7 doesn't cross a
++       * page boundary, otherwise we might read past the end of the buffer and
++       * trigger a page fault. We use 4K as the conservative minimum page
++       * size. If we detect that case we go to the byte-by-byte loop.
++       *
++       * Otherwise the next double word is loaded from s1 and s2, and shifted
++       * right to compare the appropriate bits.
+        */
++      clrldi  r6,r4,(64-12)   // r6 = r4 & 0xfff
++      cmpdi   r6,0xff8
++      bgt     .Lshort
++
+       subfic  r6,r5,8
+       slwi    r6,r6,3
+       LD      rA,0,r3
diff --git a/queue-5.0/powerpc-pseries-energy-use-of-accessor-functions-to-read-ibm-drc-indexes.patch b/queue-5.0/powerpc-pseries-energy-use-of-accessor-functions-to-read-ibm-drc-indexes.patch
new file mode 100644 (file)
index 0000000..8b5c985
--- /dev/null
@@ -0,0 +1,70 @@
+From ce9afe08e71e3f7d64f337a6e932e50849230fc2 Mon Sep 17 00:00:00 2001
+From: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>
+Date: Fri, 8 Mar 2019 21:03:24 +0530
+Subject: powerpc/pseries/energy: Use OF accessor functions to read ibm,drc-indexes
+
+From: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
+
+commit ce9afe08e71e3f7d64f337a6e932e50849230fc2 upstream.
+
+In cpu_to_drc_index(), in the case when FW_FEATURE_DRC_INFO is absent,
+we currently use of_get_property() to obtain the pointer to the array
+corresponding to the property "ibm,drc-indexes". The elements of this
+array are of type __be32, but are accessed without any conversion to
+the OS endianness, which is buggy on a Little Endian OS.
+
+Fix this by using the of_property_read_u32_index() accessor function to
+safely read the elements of the array.
+
+Fixes: e83636ac3334 ("pseries/drc-info: Search DRC properties for CPU indexes")
+Cc: stable@vger.kernel.org # v4.16+
+Reported-by: Pavithra R. Prakash <pavrampu@in.ibm.com>
+Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
+Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
+[mpe: Make the WARN_ON a WARN_ON_ONCE so it's not retriggerable]
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/powerpc/platforms/pseries/pseries_energy.c |   27 ++++++++++++++++--------
+ 1 file changed, 18 insertions(+), 9 deletions(-)
+
+--- a/arch/powerpc/platforms/pseries/pseries_energy.c
++++ b/arch/powerpc/platforms/pseries/pseries_energy.c
+@@ -77,18 +77,27 @@ static u32 cpu_to_drc_index(int cpu)
+               ret = drc.drc_index_start + (thread_index * drc.sequential_inc);
+       } else {
+-              const __be32 *indexes;
+-
+-              indexes = of_get_property(dn, "ibm,drc-indexes", NULL);
+-              if (indexes == NULL)
+-                      goto err_of_node_put;
++              u32 nr_drc_indexes, thread_drc_index;
+               /*
+-               * The first element indexes[0] is the number of drc_indexes
+-               * returned in the list.  Hence thread_index+1 will get the
+-               * drc_index corresponding to core number thread_index.
++               * The first element of ibm,drc-indexes array is the
++               * number of drc_indexes returned in the list.  Hence
++               * thread_index+1 will get the drc_index corresponding
++               * to core number thread_index.
+                */
+-              ret = indexes[thread_index + 1];
++              rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++                                              0, &nr_drc_indexes);
++              if (rc)
++                      goto err_of_node_put;
++
++              WARN_ON_ONCE(thread_index > nr_drc_indexes);
++              rc = of_property_read_u32_index(dn, "ibm,drc-indexes",
++                                              thread_index + 1,
++                                              &thread_drc_index);
++              if (rc)
++                      goto err_of_node_put;
++
++              ret = thread_drc_index;
+       }
+       rc = 0;
diff --git a/queue-5.0/powerpc-pseries-mce-fix-misleading-print-for-tlb-mutlihit.patch b/queue-5.0/powerpc-pseries-mce-fix-misleading-print-for-tlb-mutlihit.patch
new file mode 100644 (file)
index 0000000..0433bcb
--- /dev/null
@@ -0,0 +1,43 @@
+From 6f845ebec2706841d15831fab3ffffcfd9e676fa Mon Sep 17 00:00:00 2001
+From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
+Date: Tue, 26 Mar 2019 18:00:31 +0530
+Subject: powerpc/pseries/mce: Fix misleading print for TLB mutlihit
+
+From: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
+
+commit 6f845ebec2706841d15831fab3ffffcfd9e676fa upstream.
+
+On pseries, TLB multihit errors are reported as D-Cache Multihit. This is
+because of the wrongly populated mc_err_types[] array. Per PAPR, the TLB
+error type is 0x04, but mc_err_types[4] points to the "D-Cache" string
+instead of "TLB". Fix up the mc_err_types[] array.
+
+Machine check error type per PAPR:
+  0x00 = Uncorrectable Memory Error (UE)
+  0x01 = SLB error
+  0x02 = ERAT Error
+  0x04 = TLB error
+  0x05 = D-Cache error
+  0x07 = I-Cache error
+
+Fixes: 8f0b80561f21 ("powerpc/pseries: Display machine check error details.")
+Cc: stable@vger.kernel.org # v4.20+
+Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
+Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/powerpc/platforms/pseries/ras.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -550,6 +550,7 @@ static void pseries_print_mce_info(struc
+               "UE",
+               "SLB",
+               "ERAT",
++              "Unknown",
+               "TLB",
+               "D-Cache",
+               "Unknown",
index 247b57a11ca4f2268c92daa668d5536e0aa5c471..e923a52637683a254d36ded171e94d4528b797f0 100644 (file)
@@ -130,3 +130,15 @@ mm-debug.c-fix-__dump_page-when-mapping-host-is-not-set.patch
 mm-memory_hotplug.c-fix-notification-in-offline-error-path.patch
 mm-page_isolation.c-fix-a-wrong-flag-in-set_migratetype_isolate.patch
 mm-migrate.c-add-missing-flush_dcache_page-for-non-mapped-page-migrate.patch
+perf-pmu-fix-parser-error-for-uncore-event-alias.patch
+perf-intel-pt-fix-tsc-slip.patch
+objtool-query-pkg-config-for-libelf-location.patch
+powerpc-pseries-energy-use-of-accessor-functions-to-read-ibm-drc-indexes.patch
+powerpc-64-fix-memcmp-reading-past-the-end-of-src-dest.patch
+powerpc-pseries-mce-fix-misleading-print-for-tlb-mutlihit.patch
+watchdog-respect-watchdog-cpumask-on-cpu-hotplug.patch
+cpu-hotplug-prevent-crash-when-cpu-bringup-fails-on-config_hotplug_cpu-n.patch
+x86-smp-enforce-config_hotplug_cpu-when-smp-y.patch
+kvm-reject-device-ioctls-from-processes-other-than-the-vm-s-creator.patch
+kvm-x86-emulate-msr_ia32_arch_capabilities-on-amd-hosts.patch
+kvm-x86-update-rip-after-emulating-io.patch
diff --git a/queue-5.0/watchdog-respect-watchdog-cpumask-on-cpu-hotplug.patch b/queue-5.0/watchdog-respect-watchdog-cpumask-on-cpu-hotplug.patch
new file mode 100644 (file)
index 0000000..94742e3
--- /dev/null
@@ -0,0 +1,56 @@
+From 7dd47617114921fdd8c095509e5e7b4373cc44a1 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Tue, 26 Mar 2019 22:51:02 +0100
+Subject: watchdog: Respect watchdog cpumask on CPU hotplug
+
+From: Thomas Gleixner <tglx@linutronix.de>
+
+commit 7dd47617114921fdd8c095509e5e7b4373cc44a1 upstream.
+
+The rework of the watchdog core to use cpu_stop_work broke the watchdog
+cpumask on CPU hotplug.
+
+The watchdog_enable/disable() functions are now called unconditionally from
+the hotplug callback, i.e. even on CPUs which are not in the watchdog
+cpumask. As a consequence the watchdog can become unstoppable.
+
+Only invoke them when the plugged CPU is in the watchdog cpumask.
+
+Fixes: 9cf57731b63e ("watchdog/softlockup: Replace "watchdog/%u" threads with cpu_stop_work")
+Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Tested-by: Maxime Coquelin <maxime.coquelin@redhat.com>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Oleg Nesterov <oleg@redhat.com>
+Cc: Michael Ellerman <mpe@ellerman.id.au>
+Cc: Nicholas Piggin <npiggin@gmail.com>
+Cc: Don Zickus <dzickus@redhat.com>
+Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1903262245490.1789@nanos.tec.linutronix.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/watchdog.c |    6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -547,13 +547,15 @@ static void softlockup_start_all(void)
+ int lockup_detector_online_cpu(unsigned int cpu)
+ {
+-      watchdog_enable(cpu);
++      if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++              watchdog_enable(cpu);
+       return 0;
+ }
+ int lockup_detector_offline_cpu(unsigned int cpu)
+ {
+-      watchdog_disable(cpu);
++      if (cpumask_test_cpu(cpu, &watchdog_allowed_mask))
++              watchdog_disable(cpu);
+       return 0;
+ }
diff --git a/queue-5.0/x86-smp-enforce-config_hotplug_cpu-when-smp-y.patch b/queue-5.0/x86-smp-enforce-config_hotplug_cpu-when-smp-y.patch
new file mode 100644 (file)
index 0000000..0e80bc6
--- /dev/null
@@ -0,0 +1,62 @@
+From bebd024e4815b1a170fcd21ead9c2222b23ce9e6 Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Tue, 26 Mar 2019 17:36:06 +0100
+Subject: x86/smp: Enforce CONFIG_HOTPLUG_CPU when SMP=y
+
+From: Thomas Gleixner <tglx@linutronix.de>
+
+commit bebd024e4815b1a170fcd21ead9c2222b23ce9e6 upstream.
+
+The SMT disable 'nosmt' command line argument does not work properly when
+CONFIG_HOTPLUG_CPU is disabled. The teardown of the sibling CPUs, which
+are required to be brought up due to the MCE issues, cannot work. The
+CPUs are then kept in a half-dead state.
+
+As the 'nosmt' functionality has become popular due to the speculative
+hardware vulnerabilities, the half torn down state is not a proper solution
+to the problem.
+
+Enforce CONFIG_HOTPLUG_CPU=y when SMP is enabled so the full operation is
+possible.
+
+Reported-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Konrad Wilk <konrad.wilk@oracle.com>
+Cc: Josh Poimboeuf <jpoimboe@redhat.com>
+Cc: Mukesh Ojha <mojha@codeaurora.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Jiri Kosina <jkosina@suse.cz>
+Cc: Rik van Riel <riel@surriel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Micheal Kelley <michael.h.kelley@microsoft.com>
+Cc: "K. Y. Srinivasan" <kys@microsoft.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: K. Y. Srinivasan <kys@microsoft.com>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20190326163811.598166056@linutronix.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/Kconfig |    8 +-------
+ 1 file changed, 1 insertion(+), 7 deletions(-)
+
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2221,14 +2221,8 @@ config RANDOMIZE_MEMORY_PHYSICAL_PADDING
+          If unsure, leave at the default value.
+ config HOTPLUG_CPU
+-      bool "Support for hot-pluggable CPUs"
++      def_bool y
+       depends on SMP
+-      ---help---
+-        Say Y here to allow turning CPUs off and on. CPUs can be
+-        controlled through /sys/devices/system/cpu.
+-        ( Note: power management support will enable this option
+-          automatically on SMP systems. )
+-        Say N if you want to disable CPU hotplug.
+ config BOOTPARAM_HOTPLUG_CPU0
+       bool "Set default setting of cpu0_hotpluggable"