From: Greg Kroah-Hartman Date: Mon, 3 Jul 2017 13:03:47 +0000 (+0200) Subject: 4.4-stable patches X-Git-Tag: v3.18.60~7 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=a7b68143a08af1d58dd9e2823c44e6659fbbfdbe;p=thirdparty%2Fkernel%2Fstable-queue.git 4.4-stable patches added patches: cpufreq-s3c2416-double-free-on-driver-init-error-path.patch iommu-amd-fix-incorrect-error-handling-in-amd_iommu_bind_pasid.patch iommu-handle-default-domain-attach-failure.patch iommu-vt-d-don-t-over-free-page-table-directories.patch kvm-nvmx-fix-exception-injection.patch kvm-x86-fix-emulation-of-rsm-and-iret-instructions.patch kvm-x86-vpmu-fix-undefined-shift-in-intel_pmu_refresh.patch kvm-x86-zero-base3-of-unusable-segments.patch --- diff --git a/queue-4.4/cpufreq-s3c2416-double-free-on-driver-init-error-path.patch b/queue-4.4/cpufreq-s3c2416-double-free-on-driver-init-error-path.patch new file mode 100644 index 00000000000..8e3ca13fbcd --- /dev/null +++ b/queue-4.4/cpufreq-s3c2416-double-free-on-driver-init-error-path.patch @@ -0,0 +1,33 @@ +From a69261e4470d680185a15f748d9cdafb37c57a33 Mon Sep 17 00:00:00 2001 +From: Dan Carpenter +Date: Tue, 7 Feb 2017 16:19:06 +0300 +Subject: cpufreq: s3c2416: double free on driver init error path + +From: Dan Carpenter + +commit a69261e4470d680185a15f748d9cdafb37c57a33 upstream. + +The "goto err_armclk;" error path already does a clk_put(s3c_freq->hclk); +so this is a double free. + +Fixes: 34ee55075265 ([CPUFREQ] Add S3C2416/S3C2450 cpufreq driver) +Signed-off-by: Dan Carpenter +Reviewed-by: Krzysztof Kozlowski +Acked-by: Viresh Kumar +Signed-off-by: Rafael J. 
Wysocki +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/cpufreq/s3c2416-cpufreq.c | 1 - + 1 file changed, 1 deletion(-) + +--- a/drivers/cpufreq/s3c2416-cpufreq.c ++++ b/drivers/cpufreq/s3c2416-cpufreq.c +@@ -400,7 +400,6 @@ static int s3c2416_cpufreq_driver_init(s + rate = clk_get_rate(s3c_freq->hclk); + if (rate < 133 * 1000 * 1000) { + pr_err("cpufreq: HCLK not at 133MHz\n"); +- clk_put(s3c_freq->hclk); + ret = -EINVAL; + goto err_armclk; + } diff --git a/queue-4.4/iommu-amd-fix-incorrect-error-handling-in-amd_iommu_bind_pasid.patch b/queue-4.4/iommu-amd-fix-incorrect-error-handling-in-amd_iommu_bind_pasid.patch new file mode 100644 index 00000000000..9e60f35189e --- /dev/null +++ b/queue-4.4/iommu-amd-fix-incorrect-error-handling-in-amd_iommu_bind_pasid.patch @@ -0,0 +1,37 @@ +From 73dbd4a4230216b6a5540a362edceae0c9b4876b Mon Sep 17 00:00:00 2001 +From: Pan Bian +Date: Sun, 23 Apr 2017 18:23:21 +0800 +Subject: iommu/amd: Fix incorrect error handling in amd_iommu_bind_pasid() + +From: Pan Bian + +commit 73dbd4a4230216b6a5540a362edceae0c9b4876b upstream. + +In function amd_iommu_bind_pasid(), the control flow jumps +to the label out_free when pasid_state->mm and mm are NULL, +and mmput(mm) is called there. mmput() references mm without +validating it, which results in a NULL pointer dereference. +This patch fixes the bug.
+ +Signed-off-by: Pan Bian +Fixes: f0aac63b873b ('iommu/amd: Don't hold a reference to mm_struct') +Signed-off-by: Joerg Roedel +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iommu/amd_iommu_v2.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/iommu/amd_iommu_v2.c ++++ b/drivers/iommu/amd_iommu_v2.c +@@ -699,9 +699,9 @@ out_clear_state: + + out_unregister: + mmu_notifier_unregister(&pasid_state->mn, mm); ++ mmput(mm); + + out_free: +- mmput(mm); + free_pasid_state(pasid_state); + + out: diff --git a/queue-4.4/iommu-handle-default-domain-attach-failure.patch b/queue-4.4/iommu-handle-default-domain-attach-failure.patch new file mode 100644 index 00000000000..4e825f7ce54 --- /dev/null +++ b/queue-4.4/iommu-handle-default-domain-attach-failure.patch @@ -0,0 +1,111 @@ +From 797a8b4d768c58caac58ee3e8cb36a164d1b7751 Mon Sep 17 00:00:00 2001 +From: Robin Murphy +Date: Mon, 16 Jan 2017 12:58:07 +0000 +Subject: iommu: Handle default domain attach failure + +From: Robin Murphy + +commit 797a8b4d768c58caac58ee3e8cb36a164d1b7751 upstream. + +We wouldn't normally expect ops->attach_dev() to fail, but on IOMMUs +with limited hardware resources, or generally misconfigured systems, +it is certainly possible. We report failure correctly from the external +iommu_attach_device() interface, but do not do so in +iommu_group_add_device() when attaching to the default domain. The +result of failure there is that the device, group and domain all get +left in a broken, part-configured state which leads to weird errors and +misbehaviour down the line when IOMMU API calls sort-of-but-don't-quite +work. + +Check the return value of __iommu_attach_device() on the default domain, +and refactor the error handling paths to cope with its failure and clean +up correctly in such cases.
+ +Fixes: e39cb8a3aa98 ("iommu: Make sure a device is always attached to a domain") +Reported-by: Punit Agrawal +Signed-off-by: Robin Murphy +Signed-off-by: Joerg Roedel +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iommu/iommu.c | 37 ++++++++++++++++++++++++------------- + 1 file changed, 24 insertions(+), 13 deletions(-) + +--- a/drivers/iommu/iommu.c ++++ b/drivers/iommu/iommu.c +@@ -391,36 +391,30 @@ int iommu_group_add_device(struct iommu_ + device->dev = dev; + + ret = sysfs_create_link(&dev->kobj, &group->kobj, "iommu_group"); +- if (ret) { +- kfree(device); +- return ret; +- } ++ if (ret) ++ goto err_free_device; + + device->name = kasprintf(GFP_KERNEL, "%s", kobject_name(&dev->kobj)); + rename: + if (!device->name) { +- sysfs_remove_link(&dev->kobj, "iommu_group"); +- kfree(device); +- return -ENOMEM; ++ ret = -ENOMEM; ++ goto err_remove_link; + } + + ret = sysfs_create_link_nowarn(group->devices_kobj, + &dev->kobj, device->name); + if (ret) { +- kfree(device->name); + if (ret == -EEXIST && i >= 0) { + /* + * Account for the slim chance of collision + * and append an instance to the name. + */ ++ kfree(device->name); + device->name = kasprintf(GFP_KERNEL, "%s.%d", + kobject_name(&dev->kobj), i++); + goto rename; + } +- +- sysfs_remove_link(&dev->kobj, "iommu_group"); +- kfree(device); +- return ret; ++ goto err_free_name; + } + + kobject_get(group->devices_kobj); +@@ -432,8 +426,10 @@ rename: + mutex_lock(&group->mutex); + list_add_tail(&device->list, &group->devices); + if (group->domain) +- __iommu_attach_device(group->domain, dev); ++ ret = __iommu_attach_device(group->domain, dev); + mutex_unlock(&group->mutex); ++ if (ret) ++ goto err_put_group; + + /* Notify any listeners about change to group. 
*/ + blocking_notifier_call_chain(&group->notifier, +@@ -444,6 +440,21 @@ rename: + pr_info("Adding device %s to group %d\n", dev_name(dev), group->id); + + return 0; ++ ++err_put_group: ++ mutex_lock(&group->mutex); ++ list_del(&device->list); ++ mutex_unlock(&group->mutex); ++ dev->iommu_group = NULL; ++ kobject_put(group->devices_kobj); ++err_free_name: ++ kfree(device->name); ++err_remove_link: ++ sysfs_remove_link(&dev->kobj, "iommu_group"); ++err_free_device: ++ kfree(device); ++ pr_err("Failed to add device %s to group %d: %d\n", dev_name(dev), group->id, ret); ++ return ret; + } + EXPORT_SYMBOL_GPL(iommu_group_add_device); + diff --git a/queue-4.4/iommu-vt-d-don-t-over-free-page-table-directories.patch b/queue-4.4/iommu-vt-d-don-t-over-free-page-table-directories.patch new file mode 100644 index 00000000000..da891a6cdfa --- /dev/null +++ b/queue-4.4/iommu-vt-d-don-t-over-free-page-table-directories.patch @@ -0,0 +1,57 @@ +From f7116e115acdd74bc75a4daf6492b11d43505125 Mon Sep 17 00:00:00 2001 +From: David Dillow +Date: Mon, 30 Jan 2017 19:11:11 -0800 +Subject: iommu/vt-d: Don't over-free page table directories + +From: David Dillow + +commit f7116e115acdd74bc75a4daf6492b11d43505125 upstream. + +dma_pte_free_level() recurses down the IOMMU page tables and frees +directory pages that are entirely contained in the given PFN range. +Unfortunately, it incorrectly calculates the starting address covered +by the PTE under consideration, which can lead to it clearing an entry +that is still in use. + +This occurs if we have a scatterlist with an entry that has a length +greater than 1026 MB and is aligned to 2 MB for both the IOMMU and +physical addresses. For example, if __domain_mapping() is asked to map a +two-entry scatterlist with 2 MB and 1028 MB segments to PFN 0xffff80000, +then when dma_pte_free_pagetable() is asked to free the PFNs from +0xffff80200 to 0xffffc05ff, it will also incorrectly clear the PFNs from +0xffff80000 to 0xffff801ff because of this issue.
The current code will +set level_pfn to 0xffff80200, and 0xffff80200-0xffffc01ff fits inside +the range being cleared. Properly setting the level_pfn for the current +level under consideration catches that this PTE is outside of the range +being cleared. + +This patch also changes the value passed into dma_pte_free_level() when +it recurses. This only affects the first PTE of the range being cleared, +and is handled by the existing code that ensures we start our cursor no +lower than start_pfn. + +This was found when using dma_map_sg() to map large chunks of contiguous +memory, which immediately led to faults on the first access of the +erroneously-deleted mappings. + +Fixes: 3269ee0bd668 ("intel-iommu: Fix leaks in pagetable freeing") +Reviewed-by: Benjamin Serebrin +Signed-off-by: David Dillow +Signed-off-by: Joerg Roedel +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iommu/intel-iommu.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/iommu/intel-iommu.c ++++ b/drivers/iommu/intel-iommu.c +@@ -1137,7 +1137,7 @@ static void dma_pte_free_level(struct dm + if (!dma_pte_present(pte) || dma_pte_superpage(pte)) + goto next; + +- level_pfn = pfn & level_mask(level - 1); ++ level_pfn = pfn & level_mask(level); + level_pte = phys_to_virt(dma_pte_addr(pte)); + + if (level > 2) diff --git a/queue-4.4/kvm-nvmx-fix-exception-injection.patch b/queue-4.4/kvm-nvmx-fix-exception-injection.patch new file mode 100644 index 00000000000..c39d06fc669 --- /dev/null +++ b/queue-4.4/kvm-nvmx-fix-exception-injection.patch @@ -0,0 +1,73 @@ +From d4912215d1031e4fb3d1038d2e1857218dba0d0a Mon Sep 17 00:00:00 2001 +From: Wanpeng Li +Date: Mon, 5 Jun 2017 05:19:09 -0700 +Subject: KVM: nVMX: Fix exception injection +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Wanpeng Li + +commit d4912215d1031e4fb3d1038d2e1857218dba0d0a upstream.
+ + WARNING: CPU: 3 PID: 2840 at arch/x86/kvm/vmx.c:10966 nested_vmx_vmexit+0xdcd/0xde0 [kvm_intel] + CPU: 3 PID: 2840 Comm: qemu-system-x86 Tainted: G OE 4.12.0-rc3+ #23 + RIP: 0010:nested_vmx_vmexit+0xdcd/0xde0 [kvm_intel] + Call Trace: + ? kvm_check_async_pf_completion+0xef/0x120 [kvm] + ? rcu_read_lock_sched_held+0x79/0x80 + vmx_queue_exception+0x104/0x160 [kvm_intel] + ? vmx_queue_exception+0x104/0x160 [kvm_intel] + kvm_arch_vcpu_ioctl_run+0x1171/0x1ce0 [kvm] + ? kvm_arch_vcpu_load+0x47/0x240 [kvm] + ? kvm_arch_vcpu_load+0x62/0x240 [kvm] + kvm_vcpu_ioctl+0x384/0x7b0 [kvm] + ? kvm_vcpu_ioctl+0x384/0x7b0 [kvm] + ? __fget+0xf3/0x210 + do_vfs_ioctl+0xa4/0x700 + ? __fget+0x114/0x210 + SyS_ioctl+0x79/0x90 + do_syscall_64+0x81/0x220 + entry_SYSCALL64_slow_path+0x25/0x25 + +This is triggered occasionally by running both win7 and win2016 in L2 +with EPT disabled on both L1 and L2; it can't be reproduced easily. + +Commit 0b6ac343fc (KVM: nVMX: Correct handling of exception injection) mentioned +that "KVM wants to inject page-faults which it got to the guest. This function +assumes it is called with the exit reason in vmcs02 being a #PF exception". +Commit e011c663 (KVM: nVMX: Check all exceptions for intercept during delivery to +L2) allows checking all exceptions for intercept during delivery to L2. However, +there is currently no guarantee that the exit reason is an exception: an +external interrupt may occur on the host (for example a host timer interrupt +that should not be injected into the guest) while an exception is queued +somewhere. The function nested_vmx_check_exception() is then called, the vmexit +emulation code tries to emulate the "Acknowledge interrupt on exit" behavior, +and the warning is triggered. + +Reusing the exit reason from the L2->L0 vmexit is wrong in this case; +the reason must always be EXCEPTION_NMI when injecting an exception into +L1 as a nested vmexit.
+ +Cc: Paolo Bonzini +Cc: Radim Krčmář +Signed-off-by: Wanpeng Li +Fixes: e011c663b9c7 ("KVM: nVMX: Check all exceptions for intercept during delivery to L2") +Signed-off-by: Radim Krčmář +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kvm/vmx.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -2264,7 +2264,7 @@ static int nested_vmx_check_exception(st + if (!(vmcs12->exception_bitmap & (1u << nr))) + return 0; + +- nested_vmx_vmexit(vcpu, to_vmx(vcpu)->exit_reason, ++ nested_vmx_vmexit(vcpu, EXIT_REASON_EXCEPTION_NMI, + vmcs_read32(VM_EXIT_INTR_INFO), + vmcs_readl(EXIT_QUALIFICATION)); + return 1; diff --git a/queue-4.4/kvm-x86-fix-emulation-of-rsm-and-iret-instructions.patch b/queue-4.4/kvm-x86-fix-emulation-of-rsm-and-iret-instructions.patch new file mode 100644 index 00000000000..9aab9bee53a --- /dev/null +++ b/queue-4.4/kvm-x86-fix-emulation-of-rsm-and-iret-instructions.patch @@ -0,0 +1,183 @@ +From 6ed071f051e12cf7baa1b69d3becb8f232fdfb7b Mon Sep 17 00:00:00 2001 +From: Ladi Prosek +Date: Tue, 25 Apr 2017 16:42:44 +0200 +Subject: KVM: x86: fix emulation of RSM and IRET instructions + +From: Ladi Prosek + +commit 6ed071f051e12cf7baa1b69d3becb8f232fdfb7b upstream. + +On AMD, the effect of set_nmi_mask called by emulate_iret_real and em_rsm +on hflags is reverted later on in x86_emulate_instruction where hflags are +overwritten with ctxt->emul_flags (the kvm_set_hflags call). This manifests +as a hang when rebooting Windows VMs with QEMU, OVMF, and >1 vcpu. + +Instead of trying to merge ctxt->emul_flags into vcpu->arch.hflags after +an instruction is emulated, this commit deletes emul_flags altogether and +makes the emulator access vcpu->arch.hflags using two new accessors. This +way all changes, on the emulator side as well as in functions called from +the emulator and accessing vcpu state with emul_to_vcpu, are preserved. 
+ +More details on the bug and its manifestation with Windows and OVMF: + + It's a KVM bug in the interaction between SMI/SMM and NMI, specific to AMD. + I believe that the SMM part explains why we started seeing this only with + OVMF. + + KVM masks and unmasks NMI when entering and leaving SMM. When KVM emulates + the RSM instruction in em_rsm, the set_nmi_mask call doesn't stick because + later on in x86_emulate_instruction we overwrite arch.hflags with + ctxt->emul_flags, effectively reverting the effect of the set_nmi_mask call. + The AMD-specific hflag of interest here is HF_NMI_MASK. + + When rebooting the system, Windows sends an NMI IPI to all but the current + cpu to shut them down. Only after all of them are parked in HLT will the + initiating cpu finish the restart. If NMI is masked, other cpus never get + the memo and the initiating cpu spins forever, waiting for + hal!HalpInterruptProcessorsStarted to drop. That's the symptom we observe. + +Fixes: a584539b24b8 ("KVM: x86: pass the whole hflags field to emulator and back") +Signed-off-by: Ladi Prosek +Signed-off-by: Paolo Bonzini +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/include/asm/kvm_emulate.h | 4 +++- + arch/x86/kvm/emulate.c | 16 +++++++++------- + arch/x86/kvm/x86.c | 15 ++++++++++++--- + 3 files changed, 24 insertions(+), 11 deletions(-) + +--- a/arch/x86/include/asm/kvm_emulate.h ++++ b/arch/x86/include/asm/kvm_emulate.h +@@ -221,6 +221,9 @@ struct x86_emulate_ops { + void (*get_cpuid)(struct x86_emulate_ctxt *ctxt, + u32 *eax, u32 *ebx, u32 *ecx, u32 *edx); + void (*set_nmi_mask)(struct x86_emulate_ctxt *ctxt, bool masked); ++ ++ unsigned (*get_hflags)(struct x86_emulate_ctxt *ctxt); ++ void (*set_hflags)(struct x86_emulate_ctxt *ctxt, unsigned hflags); + }; + + typedef u32 __attribute__((vector_size(16))) sse128_t; +@@ -290,7 +293,6 @@ struct x86_emulate_ctxt { + + /* interruptibility state, as a result of execution of STI or MOV SS */ + int interruptibility; +- int emul_flags; + + 
bool perm_ok; /* do not check permissions if true */ + bool ud; /* inject an #UD if host doesn't support insn */ +--- a/arch/x86/kvm/emulate.c ++++ b/arch/x86/kvm/emulate.c +@@ -2531,7 +2531,7 @@ static int em_rsm(struct x86_emulate_ctx + u64 smbase; + int ret; + +- if ((ctxt->emul_flags & X86EMUL_SMM_MASK) == 0) ++ if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_MASK) == 0) + return emulate_ud(ctxt); + + /* +@@ -2580,11 +2580,11 @@ static int em_rsm(struct x86_emulate_ctx + return X86EMUL_UNHANDLEABLE; + } + +- if ((ctxt->emul_flags & X86EMUL_SMM_INSIDE_NMI_MASK) == 0) ++ if ((ctxt->ops->get_hflags(ctxt) & X86EMUL_SMM_INSIDE_NMI_MASK) == 0) + ctxt->ops->set_nmi_mask(ctxt, false); + +- ctxt->emul_flags &= ~X86EMUL_SMM_INSIDE_NMI_MASK; +- ctxt->emul_flags &= ~X86EMUL_SMM_MASK; ++ ctxt->ops->set_hflags(ctxt, ctxt->ops->get_hflags(ctxt) & ++ ~(X86EMUL_SMM_INSIDE_NMI_MASK | X86EMUL_SMM_MASK)); + return X86EMUL_CONTINUE; + } + +@@ -5296,6 +5296,7 @@ int x86_emulate_insn(struct x86_emulate_ + const struct x86_emulate_ops *ops = ctxt->ops; + int rc = X86EMUL_CONTINUE; + int saved_dst_type = ctxt->dst.type; ++ unsigned emul_flags; + + ctxt->mem_read.pos = 0; + +@@ -5310,6 +5311,7 @@ int x86_emulate_insn(struct x86_emulate_ + goto done; + } + ++ emul_flags = ctxt->ops->get_hflags(ctxt); + if (unlikely(ctxt->d & + (No64|Undefined|Sse|Mmx|Intercept|CheckPerm|Priv|Prot|String))) { + if ((ctxt->mode == X86EMUL_MODE_PROT64 && (ctxt->d & No64)) || +@@ -5343,7 +5345,7 @@ int x86_emulate_insn(struct x86_emulate_ + fetch_possible_mmx_operand(ctxt, &ctxt->dst); + } + +- if (unlikely(ctxt->emul_flags & X86EMUL_GUEST_MASK) && ctxt->intercept) { ++ if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && ctxt->intercept) { + rc = emulator_check_intercept(ctxt, ctxt->intercept, + X86_ICPT_PRE_EXCEPT); + if (rc != X86EMUL_CONTINUE) +@@ -5372,7 +5374,7 @@ int x86_emulate_insn(struct x86_emulate_ + goto done; + } + +- if (unlikely(ctxt->emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) { ++ 
if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) { + rc = emulator_check_intercept(ctxt, ctxt->intercept, + X86_ICPT_POST_EXCEPT); + if (rc != X86EMUL_CONTINUE) +@@ -5426,7 +5428,7 @@ int x86_emulate_insn(struct x86_emulate_ + + special_insn: + +- if (unlikely(ctxt->emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) { ++ if (unlikely(emul_flags & X86EMUL_GUEST_MASK) && (ctxt->d & Intercept)) { + rc = emulator_check_intercept(ctxt, ctxt->intercept, + X86_ICPT_POST_MEMACCESS); + if (rc != X86EMUL_CONTINUE) +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -4999,6 +4999,16 @@ static void emulator_set_nmi_mask(struct + kvm_x86_ops->set_nmi_mask(emul_to_vcpu(ctxt), masked); + } + ++static unsigned emulator_get_hflags(struct x86_emulate_ctxt *ctxt) ++{ ++ return emul_to_vcpu(ctxt)->arch.hflags; ++} ++ ++static void emulator_set_hflags(struct x86_emulate_ctxt *ctxt, unsigned emul_flags) ++{ ++ kvm_set_hflags(emul_to_vcpu(ctxt), emul_flags); ++} ++ + static const struct x86_emulate_ops emulate_ops = { + .read_gpr = emulator_read_gpr, + .write_gpr = emulator_write_gpr, +@@ -5038,6 +5048,8 @@ static const struct x86_emulate_ops emul + .intercept = emulator_intercept, + .get_cpuid = emulator_get_cpuid, + .set_nmi_mask = emulator_set_nmi_mask, ++ .get_hflags = emulator_get_hflags, ++ .set_hflags = emulator_set_hflags, + }; + + static void toggle_interruptibility(struct kvm_vcpu *vcpu, u32 mask) +@@ -5090,7 +5102,6 @@ static void init_emulate_ctxt(struct kvm + BUILD_BUG_ON(HF_GUEST_MASK != X86EMUL_GUEST_MASK); + BUILD_BUG_ON(HF_SMM_MASK != X86EMUL_SMM_MASK); + BUILD_BUG_ON(HF_SMM_INSIDE_NMI_MASK != X86EMUL_SMM_INSIDE_NMI_MASK); +- ctxt->emul_flags = vcpu->arch.hflags; + + init_decode_cache(ctxt); + vcpu->arch.emulate_regs_need_sync_from_vcpu = false; +@@ -5486,8 +5497,6 @@ restart: + unsigned long rflags = kvm_x86_ops->get_rflags(vcpu); + toggle_interruptibility(vcpu, ctxt->interruptibility); + vcpu->arch.emulate_regs_need_sync_to_vcpu = 
false; +- if (vcpu->arch.hflags != ctxt->emul_flags) +- kvm_set_hflags(vcpu, ctxt->emul_flags); + kvm_rip_write(vcpu, ctxt->eip); + if (r == EMULATE_DONE) + kvm_vcpu_check_singlestep(vcpu, rflags, &r); diff --git a/queue-4.4/kvm-x86-vpmu-fix-undefined-shift-in-intel_pmu_refresh.patch b/queue-4.4/kvm-x86-vpmu-fix-undefined-shift-in-intel_pmu_refresh.patch new file mode 100644 index 00000000000..c0e4fb52bb5 --- /dev/null +++ b/queue-4.4/kvm-x86-vpmu-fix-undefined-shift-in-intel_pmu_refresh.patch @@ -0,0 +1,39 @@ +From 34b0dadbdf698f9b277a31b2747b625b9a75ea1f Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= +Date: Thu, 18 May 2017 19:37:31 +0200 +Subject: KVM: x86/vPMU: fix undefined shift in intel_pmu_refresh() +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Radim Krčmář + +commit 34b0dadbdf698f9b277a31b2747b625b9a75ea1f upstream. + +Static analysis noticed that pmu->nr_arch_gp_counters can be 32 +(INTEL_PMC_MAX_GENERIC) and therefore cannot be used to shift 'int'. + +I didn't add BUILD_BUG_ON for it as we have a better checker. 
+ +Reported-by: Dan Carpenter +Fixes: 25462f7f5295 ("KVM: x86/vPMU: Define kvm_pmu_ops to support vPMU function dispatch") +Reviewed-by: Paolo Bonzini +Reviewed-by: David Hildenbrand +Signed-off-by: Radim Krčmář +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kvm/pmu_intel.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/arch/x86/kvm/pmu_intel.c ++++ b/arch/x86/kvm/pmu_intel.c +@@ -294,7 +294,7 @@ static void intel_pmu_refresh(struct kvm + ((u64)1 << edx.split.bit_width_fixed) - 1; + } + +- pmu->global_ctrl = ((1 << pmu->nr_arch_gp_counters) - 1) | ++ pmu->global_ctrl = ((1ull << pmu->nr_arch_gp_counters) - 1) | + (((1ull << pmu->nr_arch_fixed_counters) - 1) << INTEL_PMC_IDX_FIXED); + pmu->global_ctrl_mask = ~pmu->global_ctrl; + diff --git a/queue-4.4/kvm-x86-zero-base3-of-unusable-segments.patch b/queue-4.4/kvm-x86-zero-base3-of-unusable-segments.patch new file mode 100644 index 00000000000..41a531c0042 --- /dev/null +++ b/queue-4.4/kvm-x86-zero-base3-of-unusable-segments.patch @@ -0,0 +1,38 @@ +From f0367ee1d64d27fa08be2407df5c125442e885e3 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?= +Date: Thu, 18 May 2017 19:37:30 +0200 +Subject: KVM: x86: zero base3 of unusable segments +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Radim Krčmář + +commit f0367ee1d64d27fa08be2407df5c125442e885e3 upstream. + +Static checker noticed that base3 could be used uninitialized if the +segment was not present (unusable). Random stack values probably would +not pass VMCS entry checks.
+ +Reported-by: Dan Carpenter +Fixes: 1aa366163b8b ("KVM: x86 emulator: consolidate segment accessors") +Reviewed-by: Paolo Bonzini +Reviewed-by: David Hildenbrand +Signed-off-by: Radim Krčmář +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kvm/x86.c | 2 ++ + 1 file changed, 2 insertions(+) + +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -4844,6 +4844,8 @@ static bool emulator_get_segment(struct + + if (var.unusable) { + memset(desc, 0, sizeof(*desc)); ++ if (base3) ++ *base3 = 0; + return false; + } + diff --git a/queue-4.4/series b/queue-4.4/series index f367e96350c..c1760bc6fb8 100644 --- a/queue-4.4/series +++ b/queue-4.4/series @@ -91,3 +91,11 @@ arm-8685-1-ensure-memblock-limit-is-pmd-aligned.patch x86-mpx-correctly-report-do_mpx_bt_fault-failures-to-user-space.patch x86-mm-fix-flush_tlb_page-on-xen.patch ocfs2-o2hb-revert-hb-threshold-to-keep-compatible.patch +iommu-vt-d-don-t-over-free-page-table-directories.patch +iommu-handle-default-domain-attach-failure.patch +iommu-amd-fix-incorrect-error-handling-in-amd_iommu_bind_pasid.patch +cpufreq-s3c2416-double-free-on-driver-init-error-path.patch +kvm-x86-fix-emulation-of-rsm-and-iret-instructions.patch +kvm-x86-vpmu-fix-undefined-shift-in-intel_pmu_refresh.patch +kvm-x86-zero-base3-of-unusable-segments.patch +kvm-nvmx-fix-exception-injection.patch