x86-l1tf-fix-build-error-seen-if-config_kvm_intel-is-disabled.patch
x86-i8259-add-missing-include-file.patch
+x86-hyper-v-check-for-vp_inval-in-hyperv_flush_tlb_others.patch
+x86-platform-uv-mark-memblock-related-init-code-and-data-correctly.patch
+x86-mm-pti-clear-global-bit-more-aggressively.patch
+xen-pv-call-get_cpu_address_sizes-to-set-x86_virt-phys_bits.patch
+x86-mm-disable-ioremap-free-page-handling-on-x86-pae.patch
--- /dev/null
+From 110d2a7fc39725d2c29d2fede4f34a35a4f98882 Mon Sep 17 00:00:00 2001
+From: Vitaly Kuznetsov <vkuznets@redhat.com>
+Date: Mon, 9 Jul 2018 19:40:12 +0200
+Subject: x86/hyper-v: Check for VP_INVAL in hyperv_flush_tlb_others()
+
+From: Vitaly Kuznetsov <vkuznets@redhat.com>
+
+commit 110d2a7fc39725d2c29d2fede4f34a35a4f98882 upstream.
+
+Commit 1268ed0c474a ("x86/hyper-v: Fix the circular dependency in IPI
+ enlightenment") pre-filled hv_vp_index with VP_INVAL so it is now
+(theoretically) possible to observe hv_cpu_number_to_vp_number()
+returning VP_INVAL. We need to check for that in hyperv_flush_tlb_others().
+
+Not checking for VP_INVAL on the first call site where we do
+
+ if (hv_cpu_number_to_vp_number(cpumask_last(cpus)) >= 64)
+ goto do_ex_hypercall;
+
+is OK: if we're eligible for a non-ex hypercall we'll catch the issue
+later in the for_each_cpu() loop, and if we end up doing an
+ex-hypercall, cpumask_to_vpset() will fail.
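+
+(For reference -- a rough sketch, reconstructed from the description
+above rather than quoted verbatim, of how cpumask_to_vpset() fails when
+it meets VP_INVAL:)
+
+	for_each_cpu(cpu, cpus) {
+		vcpu = hv_cpu_number_to_vp_number(cpu);
+		if (vcpu == VP_INVAL)
+			return -1;	/* reported as a failure to the caller */
+		...
+	}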
+
+It would be nice to change the return type of hv_cpu_number_to_vp_number()
+to 'u32', but that is likely a bigger change, as all call sites need to
+be checked first.
+
+Fixes: 1268ed0c474a ("x86/hyper-v: Fix the circular dependency in IPI enlightenment")
+Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Michael Kelley <mikelley@microsoft.com>
+Cc: "K. Y. Srinivasan" <kys@microsoft.com>
+Cc: Haiyang Zhang <haiyangz@microsoft.com>
+Cc: Stephen Hemminger <sthemmin@microsoft.com>
+Cc: "Michael Kelley (EOSG)" <Michael.H.Kelley@microsoft.com>
+Cc: devel@linuxdriverproject.org
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Link: https://lkml.kernel.org/r/20180709174012.17429-3-vkuznets@redhat.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/hyperv/mmu.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/arch/x86/hyperv/mmu.c
++++ b/arch/x86/hyperv/mmu.c
+@@ -95,6 +95,11 @@ static void hyperv_flush_tlb_others(cons
+ } else {
+ for_each_cpu(cpu, cpus) {
+ vcpu = hv_cpu_number_to_vp_number(cpu);
++ if (vcpu == VP_INVAL) {
++ local_irq_restore(flags);
++ goto do_native;
++ }
++
+ if (vcpu >= 64)
+ goto do_native;
+
--- /dev/null
+From f967db0b9ed44ec3057a28f3b28efc51df51b835 Mon Sep 17 00:00:00 2001
+From: Toshi Kani <toshi.kani@hpe.com>
+Date: Wed, 27 Jun 2018 08:13:46 -0600
+Subject: x86/mm: Disable ioremap free page handling on x86-PAE
+
+From: Toshi Kani <toshi.kani@hpe.com>
+
+commit f967db0b9ed44ec3057a28f3b28efc51df51b835 upstream.
+
+ioremap() supports pmd mappings on x86-PAE. However, the kernel's pmd
+tables are not shared among processes on x86-PAE. Therefore, any
+update to sync'd pmd entries needs re-syncing. Freeing a pte page
+also leads to a vmalloc fault and hits the BUG_ON in vmalloc_sync_one().
+
+Disable free page handling on x86-PAE. pud_free_pmd_page() and
+pmd_free_pte_page() simply return 0 if a given pud/pmd entry is present.
+This assures that ioremap() does not update sync'd pmd entries at the
+cost of falling back to pte mappings.
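+
+(Illustration only -- a simplified sketch of the pmd-level loop in
+lib/ioremap.c, not the verbatim code: the huge mapping is attempted only
+when pmd_free_pte_page() succeeds, otherwise the range falls back to
+ioremap_pte_range():)
+
+	if (ioremap_pmd_enabled() &&
+	    ((next - addr) == PMD_SIZE) &&
+	    IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
+	    pmd_free_pte_page(pmd)) {
+		if (pmd_set_huge(pmd, phys_addr + addr, prot))
+			continue;
+	}
+
+	if (ioremap_pte_range(pmd, addr, next, phys_addr + addr, prot))
+		return -ENOMEM;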
+
+Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
+Reported-by: Joerg Roedel <joro@8bytes.org>
+Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: mhocko@suse.com
+Cc: akpm@linux-foundation.org
+Cc: hpa@zytor.com
+Cc: cpandya@codeaurora.org
+Cc: linux-mm@kvack.org
+Cc: linux-arm-kernel@lists.infradead.org
+Cc: stable@vger.kernel.org
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: Michal Hocko <mhocko@suse.com>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: <stable@vger.kernel.org>
+Link: https://lkml.kernel.org/r/20180627141348.21777-2-toshi.kani@hpe.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/mm/pgtable.c | 19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
+
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -719,6 +719,7 @@ int pmd_clear_huge(pmd_t *pmd)
+ return 0;
+ }
+
++#ifdef CONFIG_X86_64
+ /**
+ * pud_free_pmd_page - Clear pud entry and free pmd page.
+ * @pud: Pointer to a PUD.
+@@ -766,4 +767,22 @@ int pmd_free_pte_page(pmd_t *pmd)
+
+ return 1;
+ }
++
++#else /* !CONFIG_X86_64 */
++
++int pud_free_pmd_page(pud_t *pud)
++{
++ return pud_none(*pud);
++}
++
++/*
++ * Disable free page handling on x86-PAE. This assures that ioremap()
++ * does not update sync'd pmd entries. See vmalloc_sync_one().
++ */
++int pmd_free_pte_page(pmd_t *pmd)
++{
++ return pmd_none(*pmd);
++}
++
++#endif /* CONFIG_X86_64 */
+ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
--- /dev/null
+From eac7073aa69aa1cac819aa712146284f53f642b1 Mon Sep 17 00:00:00 2001
+From: Dave Hansen <dave.hansen@linux.intel.com>
+Date: Thu, 2 Aug 2018 15:58:25 -0700
+Subject: x86/mm/pti: Clear Global bit more aggressively
+
+From: Dave Hansen <dave.hansen@linux.intel.com>
+
+commit eac7073aa69aa1cac819aa712146284f53f642b1 upstream.
+
+The kernel image starts out with the Global bit set across the entire
+kernel image. The bit is cleared with set_memory_nonglobal() in the
+configurations with PCIDs where the performance benefits of the Global bit
+are not needed.
+
+However, this is fragile. It means that we are stuck opting *out* of the
+less-secure (Global bit set) configuration, which seems backwards. Let's
+start more secure (Global bit clear) and then let things opt back in if
+they want performance, or are truly mapping common data between kernel and
+userspace.
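+
+(Rough pseudocode of the new ordering -- the *_start/*_pages names are
+illustrative placeholders, not the real variables; the actual diff is
+below:)
+
+	/* always drop Global across the whole kernel image ... */
+	set_memory_nonglobal(image_start, image_pages);
+	/* ... then opt the user-shared kernel text back in */
+	if (pti_kernel_image_global_ok())
+		set_memory_global(text_start, text_pages);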
+
+This fixes a bug. Before this patch, there are areas that are unmapped
+from the user page tables (like everything above 0xffffffff82600000 in
+the example below). These have the hallmark of being a wrong Global area:
+they are not identical in the 'current_kernel' and 'current_user' page
+table dumps. They are also read-write, which means they're much more
+likely to contain secrets.
+
+Before this patch:
+
+current_kernel:---[ High Kernel Mapping ]---
+current_kernel-0xffffffff80000000-0xffffffff81000000 16M pmd
+current_kernel-0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
+current_kernel-0xffffffff81e00000-0xffffffff81e11000 68K ro GLB x pte
+current_kernel-0xffffffff81e11000-0xffffffff82000000 1980K RW GLB NX pte
+current_kernel-0xffffffff82000000-0xffffffff82600000 6M ro PSE GLB NX pmd
+current_kernel-0xffffffff82600000-0xffffffff82c00000 6M RW PSE GLB NX pmd
+current_kernel-0xffffffff82c00000-0xffffffff82e00000 2M RW GLB NX pte
+current_kernel-0xffffffff82e00000-0xffffffff83200000 4M RW PSE GLB NX pmd
+current_kernel-0xffffffff83200000-0xffffffffa0000000 462M pmd
+
+ current_user:---[ High Kernel Mapping ]---
+ current_user-0xffffffff80000000-0xffffffff81000000 16M pmd
+ current_user-0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
+ current_user-0xffffffff81e00000-0xffffffff81e11000 68K ro GLB x pte
+ current_user-0xffffffff81e11000-0xffffffff82000000 1980K RW GLB NX pte
+ current_user-0xffffffff82000000-0xffffffff82600000 6M ro PSE GLB NX pmd
+ current_user-0xffffffff82600000-0xffffffffa0000000 474M pmd
+
+After this patch:
+
+current_kernel:---[ High Kernel Mapping ]---
+current_kernel-0xffffffff80000000-0xffffffff81000000 16M pmd
+current_kernel-0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
+current_kernel-0xffffffff81e00000-0xffffffff81e11000 68K ro GLB x pte
+current_kernel-0xffffffff81e11000-0xffffffff82000000 1980K RW NX pte
+current_kernel-0xffffffff82000000-0xffffffff82600000 6M ro PSE GLB NX pmd
+current_kernel-0xffffffff82600000-0xffffffff82c00000 6M RW PSE NX pmd
+current_kernel-0xffffffff82c00000-0xffffffff82e00000 2M RW NX pte
+current_kernel-0xffffffff82e00000-0xffffffff83200000 4M RW PSE NX pmd
+current_kernel-0xffffffff83200000-0xffffffffa0000000 462M pmd
+
+ current_user:---[ High Kernel Mapping ]---
+ current_user-0xffffffff80000000-0xffffffff81000000 16M pmd
+ current_user-0xffffffff81000000-0xffffffff81e00000 14M ro PSE GLB x pmd
+ current_user-0xffffffff81e00000-0xffffffff81e11000 68K ro GLB x pte
+ current_user-0xffffffff81e11000-0xffffffff82000000 1980K RW NX pte
+ current_user-0xffffffff82000000-0xffffffff82600000 6M ro PSE GLB NX pmd
+ current_user-0xffffffff82600000-0xffffffffa0000000 474M pmd
+
+Fixes: 0f561fce4d69 ("x86/pti: Enable global pages for shared areas")
+Reported-by: Hugh Dickins <hughd@google.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: keescook@google.com
+Cc: aarcange@redhat.com
+Cc: jgross@suse.com
+Cc: jpoimboe@redhat.com
+Cc: gregkh@linuxfoundation.org
+Cc: peterz@infradead.org
+Cc: torvalds@linux-foundation.org
+Cc: bp@alien8.de
+Cc: luto@kernel.org
+Cc: ak@linux.intel.com
+Cc: Kees Cook <keescook@google.com>
+Cc: Andrea Arcangeli <aarcange@redhat.com>
+Cc: Juergen Gross <jgross@suse.com>
+Cc: Josh Poimboeuf <jpoimboe@redhat.com>
+Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Andi Kleen <ak@linux.intel.com>
+Link: https://lkml.kernel.org/r/20180802225825.A100C071@viggo.jf.intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/mm/pageattr.c | 6 ++++++
+ arch/x86/mm/pti.c | 34 ++++++++++++++++++++++++----------
+ 2 files changed, 30 insertions(+), 10 deletions(-)
+
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -1784,6 +1784,12 @@ int set_memory_nonglobal(unsigned long a
+ __pgprot(_PAGE_GLOBAL), 0);
+ }
+
++int set_memory_global(unsigned long addr, int numpages)
++{
++ return change_page_attr_set(&addr, numpages,
++ __pgprot(_PAGE_GLOBAL), 0);
++}
++
+ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
+ {
+ struct cpa_data cpa;
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -436,6 +436,13 @@ static inline bool pti_kernel_image_glob
+ }
+
+ /*
++ * This is the only user for these and it is not arch-generic
++ * like the other set_memory.h functions. Just extern them.
++ */
++extern int set_memory_nonglobal(unsigned long addr, int numpages);
++extern int set_memory_global(unsigned long addr, int numpages);
++
++/*
+ * For some configurations, map all of kernel text into the user page
+ * tables. This reduces TLB misses, especially on non-PCID systems.
+ */
+@@ -447,7 +454,8 @@ void pti_clone_kernel_text(void)
+ * clone the areas past rodata, they might contain secrets.
+ */
+ unsigned long start = PFN_ALIGN(_text);
+- unsigned long end = (unsigned long)__end_rodata_hpage_align;
++ unsigned long end_clone = (unsigned long)__end_rodata_hpage_align;
++ unsigned long end_global = PFN_ALIGN((unsigned long)__stop___ex_table);
+
+ if (!pti_kernel_image_global_ok())
+ return;
+@@ -459,14 +467,18 @@ void pti_clone_kernel_text(void)
+ * pti_set_kernel_image_nonglobal() did to clear the
+ * global bit.
+ */
+- pti_clone_pmds(start, end, _PAGE_RW);
++ pti_clone_pmds(start, end_clone, _PAGE_RW);
++
++ /*
++ * pti_clone_pmds() will set the global bit in any PMDs
++ * that it clones, but we also need to get any PTEs in
++ * the last level for areas that are not huge-page-aligned.
++ */
++
++ /* Set the global bit for normal non-__init kernel text: */
++ set_memory_global(start, (end_global - start) >> PAGE_SHIFT);
+ }
+
+-/*
+- * This is the only user for it and it is not arch-generic like
+- * the other set_memory.h functions. Just extern it.
+- */
+-extern int set_memory_nonglobal(unsigned long addr, int numpages);
+ void pti_set_kernel_image_nonglobal(void)
+ {
+ /*
+@@ -478,9 +490,11 @@ void pti_set_kernel_image_nonglobal(void
+ unsigned long start = PFN_ALIGN(_text);
+ unsigned long end = ALIGN((unsigned long)_end, PMD_PAGE_SIZE);
+
+- if (pti_kernel_image_global_ok())
+- return;
+-
++ /*
++ * This clears _PAGE_GLOBAL from the entire kernel image.
++ * pti_clone_kernel_text() may put _PAGE_GLOBAL back for
++ * areas that are mapped to userspace.
++ */
+ set_memory_nonglobal(start, (end - start) >> PAGE_SHIFT);
+ }
+
--- /dev/null
+From 24cfd8ca1d28331b9dad3b88d1958c976b2cfab6 Mon Sep 17 00:00:00 2001
+From: Dou Liyang <douly.fnst@cn.fujitsu.com>
+Date: Mon, 30 Jul 2018 15:59:47 +0800
+Subject: x86/platform/UV: Mark memblock related init code and data correctly
+
+From: Dou Liyang <douly.fnst@cn.fujitsu.com>
+
+commit 24cfd8ca1d28331b9dad3b88d1958c976b2cfab6 upstream.
+
+parse_mem_block_size() and mem_block_size are only used during init. Mark
+them accordingly.
+
+Fixes: d7609f4210cb ("x86/platform/UV: Add kernel parameter to set memory block size")
+Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: hpa@zytor.com
+Cc: Mike Travis <mike.travis@hpe.com>
+Cc: Andrew Banman <andrew.banman@hpe.com>
+Link: https://lkml.kernel.org/r/20180730075947.23023-1-douly.fnst@cn.fujitsu.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/apic/x2apic_uv_x.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -394,10 +394,10 @@ extern int uv_hub_info_version(void)
+ EXPORT_SYMBOL(uv_hub_info_version);
+
+ /* Default UV memory block size is 2GB */
+-static unsigned long mem_block_size = (2UL << 30);
++static unsigned long mem_block_size __initdata = (2UL << 30);
+
+ /* Kernel parameter to specify UV mem block size */
+-static int parse_mem_block_size(char *ptr)
++static int __init parse_mem_block_size(char *ptr)
+ {
+ unsigned long size = memparse(ptr, NULL);
+
--- /dev/null
+From 405c018a25fe464dc68057bbc8014a58f2bd4422 Mon Sep 17 00:00:00 2001
+From: "M. Vefa Bicakci" <m.v.b@runbox.com>
+Date: Tue, 24 Jul 2018 08:45:47 -0400
+Subject: xen/pv: Call get_cpu_address_sizes to set x86_virt/phys_bits
+
+From: M. Vefa Bicakci <m.v.b@runbox.com>
+
+commit 405c018a25fe464dc68057bbc8014a58f2bd4422 upstream.
+
+Commit d94a155c59c9 ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits
+adjustment corruption") has moved the query and calculation of the
+x86_virt_bits and x86_phys_bits fields of the cpuinfo_x86 struct
+from the get_cpu_cap function to a new function named
+get_cpu_address_sizes.
+
+One of the call sites related to Xen PV VMs was unfortunately missed
+in the aforementioned commit. This prevents successful boot-up of
+kernel versions 4.17 and up in Xen PV VMs if CONFIG_DEBUG_VIRTUAL
+is enabled, due to the following code path:
+
+ enlighten_pv.c::xen_start_kernel
+ mmu_pv.c::xen_reserve_special_pages
+ page.h::__pa
+ physaddr.c::__phys_addr
+ physaddr.h::phys_addr_valid
+
+phys_addr_valid uses boot_cpu_data.x86_phys_bits to validate physical
+addresses. Due to the aforementioned commit, however,
+boot_cpu_data.x86_phys_bits is no longer populated before the call to
+xen_reserve_special_pages, so the validation performed by
+phys_addr_valid fails, which causes __phys_addr to trigger a BUG,
+preventing boot-up.
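+
+(For reference -- a simplified sketch of the check in
+arch/x86/mm/physaddr.h, not the verbatim code; with x86_phys_bits still
+zero, every non-zero physical address is rejected:)
+
+	static inline int phys_addr_valid(resource_size_t addr)
+	{
+		return !(addr >> boot_cpu_data.x86_phys_bits);
+	}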
+
+Signed-off-by: M. Vefa Bicakci <m.v.b@runbox.com>
+Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
+Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
+Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Ingo Molnar <mingo@redhat.com>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
+Cc: Juergen Gross <jgross@suse.com>
+Cc: xen-devel@lists.xenproject.org
+Cc: x86@kernel.org
+Cc: stable@vger.kernel.org # for v4.17 and up
+Fixes: d94a155c59c9 ("x86/cpu: Prevent cpuinfo_x86::x86_phys_bits adjustment corruption")
+Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/common.c | 2 +-
+ arch/x86/kernel/cpu/cpu.h | 1 +
+ arch/x86/xen/enlighten_pv.c | 3 +++
+ 3 files changed, 5 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -905,7 +905,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ apply_forced_caps(c);
+ }
+
+-static void get_cpu_address_sizes(struct cpuinfo_x86 *c)
++void get_cpu_address_sizes(struct cpuinfo_x86 *c)
+ {
+ u32 eax, ebx, ecx, edx;
+
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -46,6 +46,7 @@ extern const struct cpu_dev *const __x86
+ *const __x86_cpu_dev_end[];
+
+ extern void get_cpu_cap(struct cpuinfo_x86 *c);
++extern void get_cpu_address_sizes(struct cpuinfo_x86 *c);
+ extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c);
+ extern void init_scattered_cpuid_features(struct cpuinfo_x86 *c);
+ extern u32 get_scattered_cpuid_leaf(unsigned int level,
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1259,6 +1259,9 @@ asmlinkage __visible void __init xen_sta
+ get_cpu_cap(&boot_cpu_data);
+ x86_configure_nx();
+
++ /* Determine virtual and physical address sizes */
++ get_cpu_address_sizes(&boot_cpu_data);
++
+ /* Let's presume PV guests always boot on vCPU with id 0. */
+ per_cpu(xen_vcpu_id, 0) = 0;
+