timekeeping-repair-ktime_get_coarse-granularity.patch
ras-cec-convert-the-timer-callback-to-a-workqueue.patch
ras-cec-fix-binary-search-function.patch
+x86-microcode-cpuhotplug-add-a-microcode-loader-cpu-hotplug-callback.patch
+x86-kasan-fix-boot-with-5-level-paging-and-kasan.patch
+x86-mm-kaslr-compute-the-size-of-the-vmemmap-section-properly.patch
+x86-resctrl-prevent-null-pointer-dereference-when-local-mbm-is-disabled.patch
--- /dev/null
+From f3176ec9420de0c385023afa3e4970129444ac2f Mon Sep 17 00:00:00 2001
+From: Andrey Ryabinin <aryabinin@virtuozzo.com>
+Date: Fri, 14 Jun 2019 17:31:49 +0300
+Subject: x86/kasan: Fix boot with 5-level paging and KASAN
+
+From: Andrey Ryabinin <aryabinin@virtuozzo.com>
+
+commit f3176ec9420de0c385023afa3e4970129444ac2f upstream.
+
+Since commit d52888aa2753 ("x86/mm: Move LDT remap out of KASLR region on
+5-level paging") the kernel doesn't boot with KASAN on 5-level paging machines.
+The bug is actually in early_p4d_offset() and was introduced by commit
+12a8cc7fcf54 ("x86/kasan: Use the same shadow offset for 4- and 5-level paging").
+
+early_p4d_offset() tries to convert pgd_val(*pgd) value to a physical
+address. This doesn't make sense because pgd_val() already contains the
+physical address.
+
+It did work prior to commit d52888aa2753 because the result of
+"__pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK" was the same as "pgd_val(*pgd)
+& PTE_PFN_MASK". __pa_nodebug() just set some high bits which were masked
+out by applying PTE_PFN_MASK.
+
+After the change of the PAGE_OFFSET offset in commit d52888aa2753,
+__pa_nodebug(pgd_val(*pgd)) started to return a value with more high bits
+set, and PTE_PFN_MASK wasn't enough to mask out all of them. So it returns
+a wrong, not even canonical, address and crashes on the attempt to
+dereference it.
+
+Switch back to pgd_val() & PTE_PFN_MASK to cure the issue.
+
+Fixes: 12a8cc7fcf54 ("x86/kasan: Use the same shadow offset for 4- and 5-level paging")
+Reported-by: Kirill A. Shutemov <kirill@shutemov.name>
+Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: Alexander Potapenko <glider@google.com>
+Cc: Dmitry Vyukov <dvyukov@google.com>
+Cc: kasan-dev@googlegroups.com
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20190614143149.2227-1-aryabinin@virtuozzo.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/mm/kasan_init_64.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/x86/mm/kasan_init_64.c
++++ b/arch/x86/mm/kasan_init_64.c
+@@ -199,7 +199,7 @@ static inline p4d_t *early_p4d_offset(pg
+ if (!pgtable_l5_enabled())
+ return (p4d_t *)pgd;
+
+- p4d = __pa_nodebug(pgd_val(*pgd)) & PTE_PFN_MASK;
++ p4d = pgd_val(*pgd) & PTE_PFN_MASK;
+ p4d += __START_KERNEL_map - phys_base;
+ return (p4d_t *)p4d + p4d_index(addr);
+ }
--- /dev/null
+From 78f4e932f7760d965fb1569025d1576ab77557c5 Mon Sep 17 00:00:00 2001
+From: Borislav Petkov <bp@suse.de>
+Date: Thu, 13 Jun 2019 15:49:02 +0200
+Subject: x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callback
+
+From: Borislav Petkov <bp@suse.de>
+
+commit 78f4e932f7760d965fb1569025d1576ab77557c5 upstream.
+
+Adric Blake reported the following warning during suspend-resume:
+
+ Enabling non-boot CPUs ...
+ x86: Booting SMP configuration:
+ smpboot: Booting Node 0 Processor 1 APIC 0x2
+ unchecked MSR access error: WRMSR to 0x10f (tried to write 0x0000000000000000) \
+ at rIP: 0xffffffff8d267924 (native_write_msr+0x4/0x20)
+ Call Trace:
+ intel_set_tfa
+ intel_pmu_cpu_starting
+ ? x86_pmu_dead_cpu
+ x86_pmu_starting_cpu
+ cpuhp_invoke_callback
+ ? _raw_spin_lock_irqsave
+ notify_cpu_starting
+ start_secondary
+ secondary_startup_64
+ microcode: sig=0x806ea, pf=0x80, revision=0x96
+ microcode: updated to revision 0xb4, date = 2019-04-01
+ CPU1 is up
+
+The MSR in question is MSR_TFA_RTM_FORCE_ABORT and that MSR is emulated
+by microcode. The log above shows that the microcode loader callback
+happens after the PMU restoration; the conjecture is that because the
+microcode hasn't been updated yet, that MSR is not present yet, which
+causes the #GP.
+
+Add a microcode loader-specific hotplug vector which comes before
+the PERF vectors and thus executes earlier and makes sure the MSR is
+present.
+
+Fixes: 400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort")
+Reported-by: Adric Blake <promarbler14@gmail.com>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: <stable@vger.kernel.org>
+Cc: x86@kernel.org
+Link: https://bugzilla.kernel.org/show_bug.cgi?id=203637
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/microcode/core.c | 2 +-
+ include/linux/cpuhotplug.h | 1 +
+ 2 files changed, 2 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -876,7 +876,7 @@ int __init microcode_init(void)
+ goto out_ucode_group;
+
+ register_syscore_ops(&mc_syscore_ops);
+- cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "x86/microcode:online",
++ cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER, "x86/microcode:online",
+ mc_cpu_online, mc_cpu_down_prep);
+
+ pr_info("Microcode Update Driver: v%s.", DRIVER_VERSION);
+--- a/include/linux/cpuhotplug.h
++++ b/include/linux/cpuhotplug.h
+@@ -101,6 +101,7 @@ enum cpuhp_state {
+ CPUHP_AP_IRQ_BCM2836_STARTING,
+ CPUHP_AP_IRQ_MIPS_GIC_STARTING,
+ CPUHP_AP_ARM_MVEBU_COHERENCY,
++ CPUHP_AP_MICROCODE_LOADER,
+ CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
+ CPUHP_AP_PERF_X86_STARTING,
+ CPUHP_AP_PERF_X86_AMD_IBS_STARTING,
--- /dev/null
+From 00e5a2bbcc31d5fea853f8daeba0f06c1c88c3ff Mon Sep 17 00:00:00 2001
+From: Baoquan He <bhe@redhat.com>
+Date: Thu, 23 May 2019 10:57:44 +0800
+Subject: x86/mm/KASLR: Compute the size of the vmemmap section properly
+
+From: Baoquan He <bhe@redhat.com>
+
+commit 00e5a2bbcc31d5fea853f8daeba0f06c1c88c3ff upstream.
+
+The size of the vmemmap section is hardcoded to 1 TB to support the
+maximum amount of system RAM in 4-level paging mode - 64 TB.
+
+However, 1 TB is not enough for vmemmap in 5-level paging mode. Assuming
+the size of struct page is 64 bytes, to support 4 PB of system RAM in
+5-level paging mode, 64 TB of vmemmap area is needed:
+
+ 4 * 1000^5 bytes (= 4 PB) / 4096 bytes per page * 64 bytes per struct page
+ / 1000^4 bytes per TB = 62.5 TB.
+
+This hardcoding may cause vmemmap to corrupt the following
+cpu_entry_area section, if KASLR puts vmemmap very close to it and the
+actual vmemmap size is bigger than 1 TB.
+
+So calculate the actual size of the vmemmap region needed and then align
+it up to a 1 TB boundary.
+
+In 4-level paging mode it is always 1 TB. In 5-level paging mode it is
+adjusted on demand. The current code reserves 0.5 PB for vmemmap in
+5-level paging mode. With this change, that space can be saved and thus
+used to increase the entropy available for randomization.
+
+ [ bp: Spell out how the 64 TB needed for vmemmap is computed and massage commit
+ message. ]
+
+Fixes: eedb92abb9bb ("x86/mm: Make virtual memory layout dynamic for CONFIG_X86_5LEVEL=y")
+Signed-off-by: Baoquan He <bhe@redhat.com>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Reviewed-by: Kees Cook <keescook@chromium.org>
+Acked-by: Kirill A. Shutemov <kirill@linux.intel.com>
+Cc: Andy Lutomirski <luto@kernel.org>
+Cc: Dave Hansen <dave.hansen@linux.intel.com>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: Ingo Molnar <mingo@kernel.org>
+Cc: kirill.shutemov@linux.intel.com
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: stable <stable@vger.kernel.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: x86-ml <x86@kernel.org>
+Link: https://lkml.kernel.org/r/20190523025744.3756-1-bhe@redhat.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/mm/kaslr.c | 11 ++++++++++-
+ 1 file changed, 10 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -52,7 +52,7 @@ static __initdata struct kaslr_memory_re
+ } kaslr_regions[] = {
+ { &page_offset_base, 0 },
+ { &vmalloc_base, 0 },
+- { &vmemmap_base, 1 },
++ { &vmemmap_base, 0 },
+ };
+
+ /* Get size in bytes used by the memory region */
+@@ -78,6 +78,7 @@ void __init kernel_randomize_memory(void
+ unsigned long rand, memory_tb;
+ struct rnd_state rand_state;
+ unsigned long remain_entropy;
++ unsigned long vmemmap_size;
+
+ vaddr_start = pgtable_l5_enabled() ? __PAGE_OFFSET_BASE_L5 : __PAGE_OFFSET_BASE_L4;
+ vaddr = vaddr_start;
+@@ -109,6 +110,14 @@ void __init kernel_randomize_memory(void
+ if (memory_tb < kaslr_regions[0].size_tb)
+ kaslr_regions[0].size_tb = memory_tb;
+
++ /*
++ * Calculate the vmemmap region size in TBs, aligned to a TB
++ * boundary.
++ */
++ vmemmap_size = (kaslr_regions[0].size_tb << (TB_SHIFT - PAGE_SHIFT)) *
++ sizeof(struct page);
++ kaslr_regions[2].size_tb = DIV_ROUND_UP(vmemmap_size, 1UL << TB_SHIFT);
++
+ /* Calculate entropy available between regions */
+ remain_entropy = vaddr_end - vaddr_start;
+ for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
--- /dev/null
+From c7563e62a6d720aa3b068e26ddffab5f0df29263 Mon Sep 17 00:00:00 2001
+From: Prarit Bhargava <prarit@redhat.com>
+Date: Mon, 10 Jun 2019 13:15:44 -0400
+Subject: x86/resctrl: Prevent NULL pointer dereference when local MBM is disabled
+
+From: Prarit Bhargava <prarit@redhat.com>
+
+commit c7563e62a6d720aa3b068e26ddffab5f0df29263 upstream.
+
+Booting with kernel parameter "rdt=cmt,mbmtotal,memlocal,l3cat,mba" and
+executing "mount -t resctrl resctrl -o mba_MBps /sys/fs/resctrl" results in
+a NULL pointer dereference on systems which do not have local MBM support
+enabled.
+
+BUG: kernel NULL pointer dereference, address: 0000000000000020
+PGD 0 P4D 0
+Oops: 0000 [#1] SMP PTI
+CPU: 0 PID: 722 Comm: kworker/0:3 Not tainted 5.2.0-0.rc3.git0.1.el7_UNSUPPORTED.x86_64 #2
+Workqueue: events mbm_handle_overflow
+RIP: 0010:mbm_handle_overflow+0x150/0x2b0
+
+Only enter the bandwidth update loop if the system has local MBM enabled.
+
+Fixes: de73f38f7680 ("x86/intel_rdt/mba_sc: Feedback loop to dynamically update mem bandwidth")
+Signed-off-by: Prarit Bhargava <prarit@redhat.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: Fenghua Yu <fenghua.yu@intel.com>
+Cc: Reinette Chatre <reinette.chatre@intel.com>
+Cc: Borislav Petkov <bp@alien8.de>
+Cc: "H. Peter Anvin" <hpa@zytor.com>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20190610171544.13474-1-prarit@redhat.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kernel/cpu/resctrl/monitor.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/arch/x86/kernel/cpu/resctrl/monitor.c
++++ b/arch/x86/kernel/cpu/resctrl/monitor.c
+@@ -368,6 +368,9 @@ static void update_mba_bw(struct rdtgrou
+ struct list_head *head;
+ struct rdtgroup *entry;
+
++ if (!is_mbm_local_enabled())
++ return;
++
+ r_mba = &rdt_resources_all[RDT_RESOURCE_MBA];
+ closid = rgrp->closid;
+ rmid = rgrp->mon.rmid;