--- /dev/null
+From e02b50ca442e88122e1302d4dbc1b71a4808c13f Mon Sep 17 00:00:00 2001
+From: KP Singh <kpsingh@kernel.org>
+Date: Mon, 27 Feb 2023 07:05:41 +0100
+Subject: Documentation/hw-vuln: Document the interaction between IBRS and STIBP
+
+From: KP Singh <kpsingh@kernel.org>
+
+commit e02b50ca442e88122e1302d4dbc1b71a4808c13f upstream.
+
+Explain why STIBP is needed with legacy IBRS as currently implemented
+(KERNEL_IBRS) and why STIBP is not needed when enhanced IBRS is enabled.
+
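+For reference, a task can request this protection for itself through the
+documented speculation-control prctl() interface
+(Documentation/userspace-api/spec_ctrl.rst); a minimal userspace sketch,
+not part of this patch (assumes linux/prctl.h provides the PR_SPEC_*
+constants):
+
+  #include <stdio.h>
+  #include <sys/prctl.h>
+  #include <linux/prctl.h>
+
+  int main(void)
+  {
+          /* Opt this task out of indirect branch speculation; on x86 this
+           * turns on STIBP for the task and IBPB on context switch. */
+          if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
+                    PR_SPEC_DISABLE, 0, 0))
+                  perror("PR_SET_SPECULATION_CTRL");
+
+          /* The GET variant returns a PR_SPEC_* bitmask on success. */
+          printf("indirect branch speculation state: %d\n",
+                 prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
+                       0, 0, 0));
+          return 0;
+  }
+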
+Fixes: 7c693f54c873 ("x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS")
+Signed-off-by: KP Singh <kpsingh@kernel.org>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Link: https://lore.kernel.org/r/20230227060541.1939092-2-kpsingh@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/admin-guide/hw-vuln/spectre.rst | 21 ++++++++++++++++-----
+ 1 file changed, 16 insertions(+), 5 deletions(-)
+
+--- a/Documentation/admin-guide/hw-vuln/spectre.rst
++++ b/Documentation/admin-guide/hw-vuln/spectre.rst
+@@ -479,8 +479,16 @@ Spectre variant 2
+ On Intel Skylake-era systems the mitigation covers most, but not all,
+ cases. See :ref:`[3] <spec_ref3>` for more details.
+
+- On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
+- IBRS on x86), retpoline is automatically disabled at run time.
++ On CPUs with hardware mitigation for Spectre variant 2 (e.g. IBRS
++ or enhanced IBRS on x86), retpoline is automatically disabled at run time.
++
++ Systems which support enhanced IBRS (eIBRS) enable IBRS protection once at
++ boot, by setting the IBRS bit, and they're automatically protected against
++ Spectre v2 variant attacks, including cross-thread branch target injections
++ on SMT systems (STIBP). In other words, eIBRS enables STIBP too.
++
++  Legacy IBRS systems clear the IBRS bit on exit to userspace, losing the
++  implicit cross-thread protection, so STIBP has to be enabled explicitly.
+
+ The retpoline mitigation is turned on by default on vulnerable
+ CPUs. It can be forced on or off by the administrator
+@@ -504,9 +512,12 @@ Spectre variant 2
+ For Spectre variant 2 mitigation, individual user programs
+ can be compiled with return trampolines for indirect branches.
+ This protects them from consuming poisoned entries in the branch
+- target buffer left by malicious software. Alternatively, the
+- programs can disable their indirect branch speculation via prctl()
+- (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
++ target buffer left by malicious software.
++
++ On legacy IBRS systems, at return to userspace, implicit STIBP is disabled
++ because the kernel clears the IBRS bit. In this case, the userspace programs
++ can disable indirect branch speculation via prctl() (See
++ :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+ On x86, this will turn on STIBP to guard against attacks from the
+ sibling thread when the user program is running, and use IBPB to
+ flush the branch target buffer when switching to/from the program.
udf-do-not-bother-merging-very-long-extents.patch
udf-do-not-update-file-length-for-failed-writes-to-inline-files.patch
udf-fix-file-corruption-when-appending-just-after-end-of-preallocated-extent.patch
+x86-virt-force-gif-1-prior-to-disabling-svm-for-reboot-flows.patch
+x86-crash-disable-virt-in-core-nmi-crash-handler-to-avoid-double-shootdown.patch
+x86-reboot-disable-virtualization-in-an-emergency-if-svm-is-supported.patch
+x86-reboot-disable-svm-not-just-vmx-when-stopping-cpus.patch
+x86-kprobes-fix-__recover_optprobed_insn-check-optimizing-logic.patch
+x86-kprobes-fix-arch_check_optimized_kprobe-check-within-optimized_kprobe-range.patch
+x86-microcode-amd-remove-load_microcode_amd-s-bsp-parameter.patch
+x86-microcode-amd-add-a-cpu-parameter-to-the-reloading-functions.patch
+x86-microcode-amd-fix-mixed-steppings-support.patch
+x86-speculation-allow-enabling-stibp-with-legacy-ibrs.patch
+documentation-hw-vuln-document-the-interaction-between-ibrs-and-stibp.patch
--- /dev/null
+From 26044aff37a5455b19a91785086914fd33053ef4 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Wed, 30 Nov 2022 23:36:47 +0000
+Subject: x86/crash: Disable virt in core NMI crash handler to avoid double shootdown
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 26044aff37a5455b19a91785086914fd33053ef4 upstream.
+
+Disable virtualization in crash_nmi_callback() and rework the
+emergency_vmx_disable_all() path to do an NMI shootdown if and only if a
+shootdown has not already occurred. NMI crash shootdown fundamentally
+can't support multiple invocations as responding CPUs are deliberately
+put into halt state without unblocking NMIs. But the emergency reboot
+path doesn't have any work of its own; it simply cares about disabling
+virtualization, i.e. so long as a shootdown occurred, emergency reboot
+doesn't care who initiated the shootdown, or when.
+
+If "crash_kexec_post_notifiers" is specified on the kernel command line,
+panic() will invoke crash_smp_send_stop() and result in a second call to
+nmi_shootdown_cpus() during native_machine_emergency_restart().
+
+Invoke the callback _before_ disabling virtualization, as the current
+VMCS needs to be cleared before doing VMXOFF. Note, this results in a
+subtle change in ordering between disabling virtualization and stopping
+Intel PT on the responding CPUs. While VMX and Intel PT do interact,
+VMXOFF and writes to MSR_IA32_RTIT_CTL do not induce faults between one
+another, which is all that matters when panicking.
+
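+On the responding CPUs, the shootdown NMI handler thus ends up doing,
+roughly:
+
+  if (shootdown_callback)
+          shootdown_callback(cpu, regs);  /* may VMCLEAR loaded VMCSs */
+
+  /* Only after the callback, so it can still use VMX instructions. */
+  cpu_emergency_disable_virtualization();
+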
+Harden nmi_shootdown_cpus() against multiple invocations to try and
+capture any such kernel bugs via a WARN instead of hanging the system
+during a crash/dump, e.g. prior to the recent hardening of
+register_nmi_handler(), re-registering the NMI handler would trigger a
+double list_add() and hang the system if CONFIG_BUG_ON_DATA_CORRUPTION=y.
+
+ list_add double add: new=ffffffff82220800, prev=ffffffff8221cfe8, next=ffffffff82220800.
+ WARNING: CPU: 2 PID: 1319 at lib/list_debug.c:29 __list_add_valid+0x67/0x70
+ Call Trace:
+ __register_nmi_handler+0xcf/0x130
+ nmi_shootdown_cpus+0x39/0x90
+ native_machine_emergency_restart+0x1c9/0x1d0
+ panic+0x237/0x29b
+
+Extract the disabling logic to a common helper to deduplicate code, and
+to prepare for doing the shootdown in the emergency reboot path if SVM
+is supported.
+
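+In essence, the extracted helper and the added guard boil down to:
+
+  void cpu_emergency_disable_virtualization(void)
+  {
+          cpu_emergency_vmxoff();
+          cpu_emergency_svm_disable();
+  }
+
+  /* in nmi_shootdown_cpus(): refuse a second shootdown instead of hanging */
+  if (WARN_ON_ONCE(crash_ipi_issued))
+          return;
+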
+Note, prior to commit ed72736183c4 ("x86/reboot: Force all cpus to exit
+VMX root if VMX is supported"), nmi_shootdown_cpus() was subtly protected
+against a second invocation by a cpu_vmx_enabled() check as the kdump
+handler would disable VMX if it ran first.
+
+Fixes: ed72736183c4 ("x86/reboot: Force all cpus to exit VMX root if VMX is supported")
+Cc: stable@vger.kernel.org
+Reported-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
+Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
+Cc: Paolo Bonzini <pbonzini@redhat.com>
+Link: https://lore.kernel.org/all/20220427224924.592546-2-gpiccoli@igalia.com
+Tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
+Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20221130233650.1404148-2-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/reboot.h | 2 +
+ arch/x86/kernel/crash.c | 17 ----------
+ arch/x86/kernel/reboot.c | 65 ++++++++++++++++++++++++++++++++++--------
+ 3 files changed, 56 insertions(+), 28 deletions(-)
+
+--- a/arch/x86/include/asm/reboot.h
++++ b/arch/x86/include/asm/reboot.h
+@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(uns
+ #define MRR_BIOS 0
+ #define MRR_APM 1
+
++void cpu_emergency_disable_virtualization(void);
++
+ typedef void (*nmi_shootdown_cb)(int, struct pt_regs*);
+ void nmi_shootdown_cpus(nmi_shootdown_cb callback);
+ void run_crash_ipi_callback(struct pt_regs *regs);
+--- a/arch/x86/kernel/crash.c
++++ b/arch/x86/kernel/crash.c
+@@ -36,7 +36,6 @@
+ #include <linux/kdebug.h>
+ #include <asm/cpu.h>
+ #include <asm/reboot.h>
+-#include <asm/virtext.h>
+ #include <asm/intel_pt.h>
+
+ /* Alignment required for elf header segment */
+@@ -118,15 +117,6 @@ static void kdump_nmi_callback(int cpu,
+ */
+ cpu_crash_vmclear_loaded_vmcss();
+
+- /* Disable VMX or SVM if needed.
+- *
+- * We need to disable virtualization on all CPUs.
+- * Having VMX or SVM enabled on any CPU may break rebooting
+- * after the kdump kernel has finished its task.
+- */
+- cpu_emergency_vmxoff();
+- cpu_emergency_svm_disable();
+-
+ /*
+ * Disable Intel PT to stop its logging
+ */
+@@ -185,12 +175,7 @@ void native_machine_crash_shutdown(struc
+ */
+ cpu_crash_vmclear_loaded_vmcss();
+
+- /* Booting kdump kernel with VMX or SVM enabled won't work,
+- * because (among other limitations) we can't disable paging
+- * with the virt flags.
+- */
+- cpu_emergency_vmxoff();
+- cpu_emergency_svm_disable();
++ cpu_emergency_disable_virtualization();
+
+ /*
+ * Disable Intel PT to stop its logging
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -536,10 +536,7 @@ static inline void kb_wait(void)
+ }
+ }
+
+-static void vmxoff_nmi(int cpu, struct pt_regs *regs)
+-{
+- cpu_emergency_vmxoff();
+-}
++static inline void nmi_shootdown_cpus_on_restart(void);
+
+ /* Use NMIs as IPIs to tell all CPUs to disable virtualization */
+ static void emergency_vmx_disable_all(void)
+@@ -562,7 +559,7 @@ static void emergency_vmx_disable_all(vo
+ __cpu_emergency_vmxoff();
+
+ /* Halt and exit VMX root operation on the other CPUs. */
+- nmi_shootdown_cpus(vmxoff_nmi);
++ nmi_shootdown_cpus_on_restart();
+
+ }
+ }
+@@ -803,6 +800,17 @@ void machine_crash_shutdown(struct pt_re
+ /* This is the CPU performing the emergency shutdown work. */
+ int crashing_cpu = -1;
+
++/*
++ * Disable virtualization, i.e. VMX or SVM, to ensure INIT is recognized during
++ * reboot. VMX blocks INIT if the CPU is post-VMXON, and SVM blocks INIT if
++ * GIF=0, i.e. if the crash occurred between CLGI and STGI.
++ */
++void cpu_emergency_disable_virtualization(void)
++{
++ cpu_emergency_vmxoff();
++ cpu_emergency_svm_disable();
++}
++
+ #if defined(CONFIG_SMP)
+
+ static nmi_shootdown_cb shootdown_callback;
+@@ -825,7 +833,14 @@ static int crash_nmi_callback(unsigned i
+ return NMI_HANDLED;
+ local_irq_disable();
+
+- shootdown_callback(cpu, regs);
++ if (shootdown_callback)
++ shootdown_callback(cpu, regs);
++
++ /*
++ * Prepare the CPU for reboot _after_ invoking the callback so that the
++ * callback can safely use virtualization instructions, e.g. VMCLEAR.
++ */
++ cpu_emergency_disable_virtualization();
+
+ atomic_dec(&waiting_for_crash_ipi);
+ /* Assume hlt works */
+@@ -841,18 +856,32 @@ static void smp_send_nmi_allbutself(void
+ apic->send_IPI_allbutself(NMI_VECTOR);
+ }
+
+-/*
+- * Halt all other CPUs, calling the specified function on each of them
++/**
++ * nmi_shootdown_cpus - Stop other CPUs via NMI
++ * @callback: Optional callback to be invoked from the NMI handler
++ *
++ * The NMI handler on the remote CPUs invokes @callback, if not
++ * NULL, first and then disables virtualization to ensure that
++ * INIT is recognized during reboot.
+ *
+- * This function can be used to halt all other CPUs on crash
+- * or emergency reboot time. The function passed as parameter
+- * will be called inside a NMI handler on all CPUs.
++ * nmi_shootdown_cpus() can only be invoked once. After the first
++ * invocation all other CPUs are stuck in crash_nmi_callback() and
++ * cannot respond to a second NMI.
+ */
+ void nmi_shootdown_cpus(nmi_shootdown_cb callback)
+ {
+ unsigned long msecs;
++
+ local_irq_disable();
+
++ /*
++ * Avoid certain doom if a shootdown already occurred; re-registering
++ * the NMI handler will cause list corruption, modifying the callback
++ * will do who knows what, etc...
++ */
++ if (WARN_ON_ONCE(crash_ipi_issued))
++ return;
++
+ /* Make a note of crashing cpu. Will be used in NMI callback. */
+ crashing_cpu = safe_smp_processor_id();
+
+@@ -880,7 +909,17 @@ void nmi_shootdown_cpus(nmi_shootdown_cb
+ msecs--;
+ }
+
+- /* Leave the nmi callback set */
++ /*
++ * Leave the nmi callback set, shootdown is a one-time thing. Clearing
++ * the callback could result in a NULL pointer dereference if a CPU
++ * (finally) responds after the timeout expires.
++ */
++}
++
++static inline void nmi_shootdown_cpus_on_restart(void)
++{
++ if (!crash_ipi_issued)
++ nmi_shootdown_cpus(NULL);
+ }
+
+ /*
+@@ -910,6 +949,8 @@ void nmi_shootdown_cpus(nmi_shootdown_cb
+ /* No other CPUs to shoot down */
+ }
+
++static inline void nmi_shootdown_cpus_on_restart(void) { }
++
+ void run_crash_ipi_callback(struct pt_regs *regs)
+ {
+ }
--- /dev/null
+From 868a6fc0ca2407622d2833adefe1c4d284766c4c Mon Sep 17 00:00:00 2001
+From: Yang Jihong <yangjihong1@huawei.com>
+Date: Tue, 21 Feb 2023 08:49:16 +0900
+Subject: x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
+
+From: Yang Jihong <yangjihong1@huawei.com>
+
+commit 868a6fc0ca2407622d2833adefe1c4d284766c4c upstream.
+
+Since the following commit:
+
+ commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
+
+modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
+may be in either the optimizing or the unoptimizing state when op.kp->flags
+has KPROBE_FLAG_OPTIMIZED and op->list is not empty.
+
+The check logic in __recover_optprobed_insn() is incorrect: a kprobe in the
+unoptimizing state (queued for unoptimization, with its jump still installed)
+is wrongly treated as if it were still being optimized.
+As a result, incorrect instructions are copied.
+
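+In other words, the recovery check must also accept a probe that is queued
+for unoptimization, since its copied buffer is still live; roughly:
+
+  op = container_of(kp, struct optimized_kprobe, kp);
+  /* Recover from the copied buffer if op is fully optimized (empty list)
+   * or only queued for unoptimization (jump still installed). */
+  if (list_empty(&op->list) || optprobe_queued_unopt(op))
+          goto found;
+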
+The optprobe_queued_unopt() function needs to be exported so that it can
+be invoked from the arch directory.
+
+Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/
+
+Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
+Cc: stable@vger.kernel.org
+Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
+Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/kprobes/opt.c | 4 ++--
+ include/linux/kprobes.h | 1 +
+ kernel/kprobes.c | 2 +-
+ 3 files changed, 4 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -56,8 +56,8 @@ unsigned long __recover_optprobed_insn(k
+ /* This function only handles jump-optimized kprobe */
+ if (kp && kprobe_optimized(kp)) {
+ op = container_of(kp, struct optimized_kprobe, kp);
+- /* If op->list is not empty, op is under optimizing */
+- if (list_empty(&op->list))
++ /* If op is optimized or under unoptimizing */
++ if (list_empty(&op->list) || optprobe_queued_unopt(op))
+ goto found;
+ }
+ }
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -353,6 +353,7 @@ extern int proc_kprobes_optimization_han
+ size_t *length, loff_t *ppos);
+ #endif
+ extern void wait_for_kprobe_optimizer(void);
++bool optprobe_queued_unopt(struct optimized_kprobe *op);
+ #else
+ static inline void wait_for_kprobe_optimizer(void) { }
+ #endif /* CONFIG_OPTPROBES */
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -626,7 +626,7 @@ void wait_for_kprobe_optimizer(void)
+ mutex_unlock(&kprobe_mutex);
+ }
+
+-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
++bool optprobe_queued_unopt(struct optimized_kprobe *op)
+ {
+ struct optimized_kprobe *_op;
+
--- /dev/null
+From f1c97a1b4ef709e3f066f82e3ba3108c3b133ae6 Mon Sep 17 00:00:00 2001
+From: Yang Jihong <yangjihong1@huawei.com>
+Date: Tue, 21 Feb 2023 08:49:16 +0900
+Subject: x86/kprobes: Fix arch_check_optimized_kprobe check within optimized_kprobe range
+
+From: Yang Jihong <yangjihong1@huawei.com>
+
+commit f1c97a1b4ef709e3f066f82e3ba3108c3b133ae6 upstream.
+
+When arch_prepare_optimized_kprobe() calculates the jump destination
+address, it copies the original instructions from the jmp-optimized kprobe
+(see __recover_optprobed_insn) and computes the destination based on the
+length of those original instructions.
+
+arch_check_optimized_kprobe does not check KPROBE_FLAG_OPTIMIZED when
+checking whether a jmp-optimized kprobe already exists.
+As a result, setup_detour_execution may jump to a range that has been
+overwritten by the jump destination address, resulting in an invalid
+opcode error.
+
+For example, assume that register two kprobes whose addresses are
+<func+9> and <func+11> in "func" function.
+The original code of "func" function is as follows:
+
+ 0xffffffff816cb5e9 <+9>: push %r12
+ 0xffffffff816cb5eb <+11>: xor %r12d,%r12d
+ 0xffffffff816cb5ee <+14>: test %rdi,%rdi
+ 0xffffffff816cb5f1 <+17>: setne %r12b
+ 0xffffffff816cb5f5 <+21>: push %rbp
+
+1. Register the kprobe for <func+11>; call it kp1 and its corresponding
+   optimized_kprobe op1. After the optimization, "func" code changes to:
+
+ 0xffffffff816cc079 <+9>: push %r12
+ 0xffffffff816cc07b <+11>: jmp 0xffffffffa0210000
+ 0xffffffff816cc080 <+16>: incl 0xf(%rcx)
+ 0xffffffff816cc083 <+19>: xchg %eax,%ebp
+ 0xffffffff816cc084 <+20>: (bad)
+ 0xffffffff816cc085 <+21>: push %rbp
+
+Now op1->flags == KPROBE_FLAG_OPTIMIZED;
+
+2. Register the kprobe for <func+9>; call it kp2 and its corresponding optimized_kprobe op2.
+
+register_kprobe(kp2)
+ register_aggr_kprobe
+ alloc_aggr_kprobe
+ __prepare_optimized_kprobe
+ arch_prepare_optimized_kprobe
+ __recover_optprobed_insn // copy original bytes from kp1->optinsn.copied_insn,
+ // jump address = <func+14>
+
+3. disable kp1:
+
+disable_kprobe(kp1)
+ __disable_kprobe
+ ...
+ if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
+    ret = disarm_kprobe(orig_p, true)       // adds op1 to unoptimizing_list, not yet unoptimized
+    orig_p->flags |= KPROBE_FLAG_DISABLED;  // op1->flags == KPROBE_FLAG_OPTIMIZED | KPROBE_FLAG_DISABLED
+ ...
+
+4. unregister kp2
+__unregister_kprobe_top
+ ...
+ if (!kprobe_disabled(ap) && !kprobes_all_disarmed) {
+ optimize_kprobe(op)
+ ...
+    if (arch_check_optimized_kprobe(op) < 0) // op1 has KPROBE_FLAG_DISABLED, so this does not return
+ return;
+ p->kp.flags |= KPROBE_FLAG_OPTIMIZED; // now op2 has KPROBE_FLAG_OPTIMIZED
+ }
+
+"func" code now is:
+
+ 0xffffffff816cc079 <+9>: int3
+ 0xffffffff816cc07a <+10>: push %rsp
+ 0xffffffff816cc07b <+11>: jmp 0xffffffffa0210000
+ 0xffffffff816cc080 <+16>: incl 0xf(%rcx)
+ 0xffffffff816cc083 <+19>: xchg %eax,%ebp
+ 0xffffffff816cc084 <+20>: (bad)
+ 0xffffffff816cc085 <+21>: push %rbp
+
+5. If "func" is called, the int3 handler calls setup_detour_execution():
+
+ if (p->flags & KPROBE_FLAG_OPTIMIZED) {
+ ...
+ regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
+ ...
+ }
+
+The code for the destination address is
+
+ 0xffffffffa021072c: push %r12
+ 0xffffffffa021072e: xor %r12d,%r12d
+ 0xffffffffa0210731: jmp 0xffffffff816cb5ee <func+14>
+
+However, <func+14> now points into the middle of kp1's 5-byte jmp and is not a valid instruction start address. As a result, an error occurs.
+
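+The fix is to treat an overlapping kprobe as a conflict only while it is
+still armed; in arch_check_optimized_kprobe() the range check becomes,
+roughly:
+
+  for (i = 1; i < op->optinsn.size; i++) {
+          p = get_kprobe(op->kp.addr + i);
+          /* a disarmed probe in the range no longer blocks optimization */
+          if (p && !kprobe_disarmed(p))
+                  return -EEXIST;
+  }
+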
+Link: https://lore.kernel.org/all/20230216034247.32348-3-yangjihong1@huawei.com/
+
+Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
+Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
+Cc: stable@vger.kernel.org
+Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/kprobes/opt.c | 2 +-
+ include/linux/kprobes.h | 1 +
+ kernel/kprobes.c | 4 ++--
+ 3 files changed, 4 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -330,7 +330,7 @@ int arch_check_optimized_kprobe(struct o
+
+ for (i = 1; i < op->optinsn.size; i++) {
+ p = get_kprobe(op->kp.addr + i);
+- if (p && !kprobe_disabled(p))
++ if (p && !kprobe_disarmed(p))
+ return -EEXIST;
+ }
+
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -354,6 +354,7 @@ extern int proc_kprobes_optimization_han
+ #endif
+ extern void wait_for_kprobe_optimizer(void);
+ bool optprobe_queued_unopt(struct optimized_kprobe *op);
++bool kprobe_disarmed(struct kprobe *p);
+ #else
+ static inline void wait_for_kprobe_optimizer(void) { }
+ #endif /* CONFIG_OPTPROBES */
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -418,8 +418,8 @@ static inline int kprobe_optready(struct
+ return 0;
+ }
+
+-/* Return true(!0) if the kprobe is disarmed. Note: p must be on hash list */
+-static inline int kprobe_disarmed(struct kprobe *p)
++/* Return true if the kprobe is disarmed. Note: p must be on hash list */
++bool kprobe_disarmed(struct kprobe *p)
+ {
+ struct optimized_kprobe *op;
+
--- /dev/null
+From a5ad92134bd153a9ccdcddf09a95b088f36c3cce Mon Sep 17 00:00:00 2001
+From: "Borislav Petkov (AMD)" <bp@alien8.de>
+Date: Thu, 26 Jan 2023 00:08:03 +0100
+Subject: x86/microcode/AMD: Add a @cpu parameter to the reloading functions
+
+From: Borislav Petkov (AMD) <bp@alien8.de>
+
+commit a5ad92134bd153a9ccdcddf09a95b088f36c3cce upstream.
+
+Will be used in a subsequent change.
+
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Link: https://lore.kernel.org/r/20230130161709.11615-3-bp@alien8.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/microcode.h | 4 ++--
+ arch/x86/include/asm/microcode_amd.h | 4 ++--
+ arch/x86/kernel/cpu/microcode/amd.c | 2 +-
+ arch/x86/kernel/cpu/microcode/core.c | 6 +++---
+ 4 files changed, 8 insertions(+), 8 deletions(-)
+
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -144,7 +144,7 @@ static inline unsigned int x86_cpuid_fam
+ int __init microcode_init(void);
+ extern void __init load_ucode_bsp(void);
+ extern void load_ucode_ap(void);
+-void reload_early_microcode(void);
++void reload_early_microcode(unsigned int cpu);
+ extern bool get_builtin_firmware(struct cpio_data *cd, const char *name);
+ extern bool initrd_gone;
+ void microcode_bsp_resume(void);
+@@ -152,7 +152,7 @@ void microcode_bsp_resume(void);
+ static inline int __init microcode_init(void) { return 0; };
+ static inline void __init load_ucode_bsp(void) { }
+ static inline void load_ucode_ap(void) { }
+-static inline void reload_early_microcode(void) { }
++static inline void reload_early_microcode(unsigned int cpu) { }
+ static inline void microcode_bsp_resume(void) { }
+ static inline bool
+ get_builtin_firmware(struct cpio_data *cd, const char *name) { return false; }
+--- a/arch/x86/include/asm/microcode_amd.h
++++ b/arch/x86/include/asm/microcode_amd.h
+@@ -47,12 +47,12 @@ struct microcode_amd {
+ extern void __init load_ucode_amd_bsp(unsigned int family);
+ extern void load_ucode_amd_ap(unsigned int family);
+ extern int __init save_microcode_in_initrd_amd(unsigned int family);
+-void reload_ucode_amd(void);
++void reload_ucode_amd(unsigned int cpu);
+ #else
+ static inline void __init load_ucode_amd_bsp(unsigned int family) {}
+ static inline void load_ucode_amd_ap(unsigned int family) {}
+ static inline int __init
+ save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
+-void reload_ucode_amd(void) {}
++static inline void reload_ucode_amd(unsigned int cpu) {}
+ #endif
+ #endif /* _ASM_X86_MICROCODE_AMD_H */
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -354,7 +354,7 @@ int __init save_microcode_in_initrd_amd(
+ return 0;
+ }
+
+-void reload_ucode_amd(void)
++void reload_ucode_amd(unsigned int cpu)
+ {
+ struct microcode_amd *mc;
+ u32 rev, dummy;
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -326,7 +326,7 @@ struct cpio_data find_microcode_in_initr
+ #endif
+ }
+
+-void reload_early_microcode(void)
++void reload_early_microcode(unsigned int cpu)
+ {
+ int vendor, family;
+
+@@ -340,7 +340,7 @@ void reload_early_microcode(void)
+ break;
+ case X86_VENDOR_AMD:
+ if (family >= 0x10)
+- reload_ucode_amd();
++ reload_ucode_amd(cpu);
+ break;
+ default:
+ break;
+@@ -783,7 +783,7 @@ void microcode_bsp_resume(void)
+ if (uci->valid && uci->mc)
+ microcode_ops->apply_microcode(cpu);
+ else if (!uci->mc)
+- reload_early_microcode();
++ reload_early_microcode(cpu);
+ }
+
+ static struct syscore_ops mc_syscore_ops = {
--- /dev/null
+From 7ff6edf4fef38ab404ee7861f257e28eaaeed35f Mon Sep 17 00:00:00 2001
+From: "Borislav Petkov (AMD)" <bp@alien8.de>
+Date: Thu, 26 Jan 2023 16:26:17 +0100
+Subject: x86/microcode/AMD: Fix mixed steppings support
+
+From: Borislav Petkov (AMD) <bp@alien8.de>
+
+commit 7ff6edf4fef38ab404ee7861f257e28eaaeed35f upstream.
+
+The AMD side of the loader has always claimed to support mixed
+steppings. But somewhere along the way, it broke that by assuming that
+the cached patch blob is a single one instead of it being one per
+*node*.
+
+So turn it into a per-node one so that each node can stash the blob
+relevant for it.
+
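+Conceptually, the cache and its lookup become:
+
+  /* one cached blob per node */
+  static u8 amd_ucode_patch[MAX_NUMNODES][PATCH_MAX_SIZE];
+
+  /* when (re)loading on a CPU, use the blob stashed for that CPU's node */
+  mc = (struct microcode_amd *)amd_ucode_patch[cpu_to_node(cpu)];
+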
+ [ NB: The Fixes tag is not exactly the correct one, but it is good
+   enough. ]
+
+Fixes: fe055896c040 ("x86/microcode: Merge the early microcode loader")
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Cc: <stable@kernel.org> # 2355370cd941 ("x86/microcode/amd: Remove load_microcode_amd()'s bsp parameter")
+Cc: <stable@kernel.org> # a5ad92134bd1 ("x86/microcode/AMD: Add a @cpu parameter to the reloading functions")
+Link: https://lore.kernel.org/r/20230130161709.11615-4-bp@alien8.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/microcode/amd.c | 34 +++++++++++++++++++++-------------
+ 1 file changed, 21 insertions(+), 13 deletions(-)
+
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -54,7 +54,9 @@ struct cont_desc {
+ };
+
+ static u32 ucode_new_rev;
+-static u8 amd_ucode_patch[PATCH_MAX_SIZE];
++
++/* One blob per node. */
++static u8 amd_ucode_patch[MAX_NUMNODES][PATCH_MAX_SIZE];
+
+ /*
+ * Microcode patch container file is prepended to the initrd in cpio
+@@ -210,7 +212,7 @@ apply_microcode_early_amd(u32 cpuid_1_ea
+ patch = (u8 (*)[PATCH_MAX_SIZE])__pa_nodebug(&amd_ucode_patch);
+ #else
+ new_rev = &ucode_new_rev;
+- patch = &amd_ucode_patch;
++ patch = &amd_ucode_patch[0];
+ #endif
+
+ desc.cpuid_1_eax = cpuid_1_eax;
+@@ -356,10 +358,10 @@ int __init save_microcode_in_initrd_amd(
+
+ void reload_ucode_amd(unsigned int cpu)
+ {
+- struct microcode_amd *mc;
+ u32 rev, dummy;
++ struct microcode_amd *mc;
+
+- mc = (struct microcode_amd *)amd_ucode_patch;
++ mc = (struct microcode_amd *)amd_ucode_patch[cpu_to_node(cpu)];
+
+ rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+
+@@ -699,6 +701,8 @@ static enum ucode_state __load_microcode
+
+ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
+ {
++ struct cpuinfo_x86 *c;
++ unsigned int nid, cpu;
+ struct ucode_patch *p;
+ enum ucode_state ret;
+
+@@ -711,18 +715,22 @@ static enum ucode_state load_microcode_a
+ return ret;
+ }
+
+- p = find_patch(0);
+- if (!p) {
+- return ret;
+- } else {
+- if (boot_cpu_data.microcode >= p->patch_id)
+- return ret;
++ for_each_node(nid) {
++ cpu = cpumask_first(cpumask_of_node(nid));
++ c = &cpu_data(cpu);
++
++ p = find_patch(cpu);
++ if (!p)
++ continue;
++
++ if (c->microcode >= p->patch_id)
++ continue;
+
+ ret = UCODE_NEW;
+- }
+
+- memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+- memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
++ memset(&amd_ucode_patch[nid], 0, PATCH_MAX_SIZE);
++ memcpy(&amd_ucode_patch[nid], p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
++ }
+
+ return ret;
+ }
--- /dev/null
+From 2355370cd941cbb20882cc3f34460f9f2b8f9a18 Mon Sep 17 00:00:00 2001
+From: "Borislav Petkov (AMD)" <bp@alien8.de>
+Date: Tue, 17 Jan 2023 23:59:24 +0100
+Subject: x86/microcode/amd: Remove load_microcode_amd()'s bsp parameter
+
+From: Borislav Petkov (AMD) <bp@alien8.de>
+
+commit 2355370cd941cbb20882cc3f34460f9f2b8f9a18 upstream.
+
+It is always the BSP.
+
+No functional changes.
+
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Link: https://lore.kernel.org/r/20230130161709.11615-2-bp@alien8.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/microcode/amd.c | 17 +++++------------
+ 1 file changed, 5 insertions(+), 12 deletions(-)
+
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -329,8 +329,7 @@ void load_ucode_amd_ap(unsigned int cpui
+ apply_microcode_early_amd(cpuid_1_eax, cp.data, cp.size, false);
+ }
+
+-static enum ucode_state
+-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size);
++static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size);
+
+ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
+ {
+@@ -348,7 +347,7 @@ int __init save_microcode_in_initrd_amd(
+ if (!desc.mc)
+ return -EINVAL;
+
+- ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
++ ret = load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+ if (ret > UCODE_UPDATED)
+ return -EINVAL;
+
+@@ -698,8 +697,7 @@ static enum ucode_state __load_microcode
+ return UCODE_OK;
+ }
+
+-static enum ucode_state
+-load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
++static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t size)
+ {
+ struct ucode_patch *p;
+ enum ucode_state ret;
+@@ -723,10 +721,6 @@ load_microcode_amd(bool save, u8 family,
+ ret = UCODE_NEW;
+ }
+
+- /* save BSP's matching patch for early load */
+- if (!save)
+- return ret;
+-
+ memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+ memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
+
+@@ -754,12 +748,11 @@ static enum ucode_state request_microcod
+ {
+ char fw_name[36] = "amd-ucode/microcode_amd.bin";
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
+- bool bsp = c->cpu_index == boot_cpu_data.cpu_index;
+ enum ucode_state ret = UCODE_NFOUND;
+ const struct firmware *fw;
+
+ /* reload ucode container only on the boot cpu */
+- if (!refresh_fw || !bsp)
++ if (!refresh_fw)
+ return UCODE_OK;
+
+ if (c->x86 >= 0x15)
+@@ -776,7 +769,7 @@ static enum ucode_state request_microcod
+ goto fw_release;
+ }
+
+- ret = load_microcode_amd(bsp, c->x86, fw->data, fw->size);
++ ret = load_microcode_amd(c->x86, fw->data, fw->size);
+
+ fw_release:
+ release_firmware(fw);
--- /dev/null
+From a2b07fa7b93321c059af0c6d492cc9a4f1e390aa Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Wed, 30 Nov 2022 23:36:50 +0000
+Subject: x86/reboot: Disable SVM, not just VMX, when stopping CPUs
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit a2b07fa7b93321c059af0c6d492cc9a4f1e390aa upstream.
+
+Disable SVM and more importantly force GIF=1 when halting a CPU or
+rebooting the machine. Similar to VMX, SVM allows software to block
+INITs via CLGI, and thus can be problematic for a crash/reboot. The
+window for failure is smaller with SVM as INIT is only blocked while
+GIF=0, i.e. between CLGI and STGI, but the window does exist.
+
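+The window in question is SVM's GIF=0 region; as a simplified sketch (not
+the actual KVM run loop):
+
+  asm volatile("clgi");   /* GIF=0: INIT (and NMI) are blocked */
+  /* ... VMRUN and #VMEXIT handling ... */
+  asm volatile("stgi");   /* GIF=1: a pending INIT is recognized again */
+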
+Fixes: fba4f472b33a ("x86/reboot: Turn off KVM when halting a CPU")
+Cc: stable@vger.kernel.org
+Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20221130233650.1404148-5-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/smp.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/arch/x86/kernel/smp.c
++++ b/arch/x86/kernel/smp.c
+@@ -33,7 +33,7 @@
+ #include <asm/mce.h>
+ #include <asm/trace/irq_vectors.h>
+ #include <asm/kexec.h>
+-#include <asm/virtext.h>
++#include <asm/reboot.h>
+
+ /*
+ * Some notes on x86 processor bugs affecting SMP operation:
+@@ -163,7 +163,7 @@ static int smp_stop_nmi_callback(unsigne
+ if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
+ return NMI_HANDLED;
+
+- cpu_emergency_vmxoff();
++ cpu_emergency_disable_virtualization();
+ stop_this_cpu(NULL);
+
+ return NMI_HANDLED;
+@@ -176,7 +176,7 @@ static int smp_stop_nmi_callback(unsigne
+ asmlinkage __visible void smp_reboot_interrupt(void)
+ {
+ ipi_entering_ack_irq();
+- cpu_emergency_vmxoff();
++ cpu_emergency_disable_virtualization();
+ stop_this_cpu(NULL);
+ irq_exit();
+ }
--- /dev/null
+From d81f952aa657b76cea381384bef1fea35c5fd266 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Wed, 30 Nov 2022 23:36:49 +0000
+Subject: x86/reboot: Disable virtualization in an emergency if SVM is supported
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit d81f952aa657b76cea381384bef1fea35c5fd266 upstream.
+
+Disable SVM on all CPUs via NMI shootdown during an emergency reboot.
+Like VMX, SVM can block INIT, e.g. if the emergency reboot is triggered
+between CLGI and STGI, and thus can prevent bringing up other CPUs via
+INIT-SIPI-SIPI.
+
+Cc: stable@vger.kernel.org
+Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20221130233650.1404148-4-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/reboot.c | 23 +++++++++++------------
+ 1 file changed, 11 insertions(+), 12 deletions(-)
+
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -538,27 +538,26 @@ static inline void kb_wait(void)
+
+ static inline void nmi_shootdown_cpus_on_restart(void);
+
+-/* Use NMIs as IPIs to tell all CPUs to disable virtualization */
+-static void emergency_vmx_disable_all(void)
++static void emergency_reboot_disable_virtualization(void)
+ {
+ /* Just make sure we won't change CPUs while doing this */
+ local_irq_disable();
+
+ /*
+- * Disable VMX on all CPUs before rebooting, otherwise we risk hanging
+- * the machine, because the CPU blocks INIT when it's in VMX root.
++ * Disable virtualization on all CPUs before rebooting to avoid hanging
++ * the system, as VMX and SVM block INIT when running in the host.
+ *
+ * We can't take any locks and we may be on an inconsistent state, so
+- * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt.
++ * use NMIs as IPIs to tell the other CPUs to disable VMX/SVM and halt.
+ *
+- * Do the NMI shootdown even if VMX if off on _this_ CPU, as that
+- * doesn't prevent a different CPU from being in VMX root operation.
++ * Do the NMI shootdown even if virtualization is off on _this_ CPU, as
++ * other CPUs may have virtualization enabled.
+ */
+- if (cpu_has_vmx()) {
+- /* Safely force _this_ CPU out of VMX root operation. */
+- __cpu_emergency_vmxoff();
++ if (cpu_has_vmx() || cpu_has_svm(NULL)) {
++ /* Safely force _this_ CPU out of VMX/SVM operation. */
++ cpu_emergency_disable_virtualization();
+
+- /* Halt and exit VMX root operation on the other CPUs. */
++ /* Disable VMX/SVM and halt on other CPUs. */
+ nmi_shootdown_cpus_on_restart();
+
+ }
+@@ -596,7 +595,7 @@ static void native_machine_emergency_res
+ unsigned short mode;
+
+ if (reboot_emergency)
+- emergency_vmx_disable_all();
++ emergency_reboot_disable_virtualization();
+
+ tboot_shutdown(TB_SHUTDOWN_REBOOT);
+
--- /dev/null
+From 6921ed9049bc7457f66c1596c5b78aec0dae4a9d Mon Sep 17 00:00:00 2001
+From: KP Singh <kpsingh@kernel.org>
+Date: Mon, 27 Feb 2023 07:05:40 +0100
+Subject: x86/speculation: Allow enabling STIBP with legacy IBRS
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: KP Singh <kpsingh@kernel.org>
+
+commit 6921ed9049bc7457f66c1596c5b78aec0dae4a9d upstream.
+
+When plain IBRS is enabled (not enhanced IBRS), the logic in
+spectre_v2_user_select_mitigation() determines that STIBP is not needed.
+
+The IBRS bit implicitly protects against cross-thread branch target
+injection. However, with legacy IBRS, the IBRS bit is cleared on
+returning to userspace for performance reasons which leaves userspace
+threads vulnerable to cross-thread branch target injection against which
+STIBP protects.
+
+Exclude IBRS from the spectre_v2_in_ibrs_mode() check to allow for
+enabling STIBP (through seccomp/prctl() by default or always-on, if
+selected by spectre_v2_user kernel cmdline parameter).
+
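+To illustrate the asymmetry (a simplified sketch of the KERNEL_IBRS scheme,
+not the actual entry code): legacy IBRS sets SPEC_CTRL.IBRS on kernel entry
+and clears it on return to userspace, so user threads run without the
+implicit STIBP protection that eIBRS provides:
+
+  /* kernel entry */
+  wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS);
+
+  /* return to userspace: IBRS off, no implicit cross-thread protection */
+  wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+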
+ [ bp: Massage. ]
+
+Fixes: 7c693f54c873 ("x86/speculation: Add spectre_v2=ibrs option to support Kernel IBRS")
+Reported-by: José Oliveira <joseloliveira11@gmail.com>
+Reported-by: Rodrigo Branco <rodrigo@kernelhacking.com>
+Signed-off-by: KP Singh <kpsingh@kernel.org>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/20230220120127.1975241-1-kpsingh@kernel.org
+Link: https://lore.kernel.org/r/20230221184908.2349578-1-kpsingh@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/bugs.c | 25 ++++++++++++++++++-------
+ 1 file changed, 18 insertions(+), 7 deletions(-)
+
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -981,14 +981,18 @@ spectre_v2_parse_user_cmdline(void)
+ return SPECTRE_V2_USER_CMD_AUTO;
+ }
+
+-static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
++static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+ {
+- return mode == SPECTRE_V2_IBRS ||
+- mode == SPECTRE_V2_EIBRS ||
++ return mode == SPECTRE_V2_EIBRS ||
+ mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ mode == SPECTRE_V2_EIBRS_LFENCE;
+ }
+
++static inline bool spectre_v2_in_ibrs_mode(enum spectre_v2_mitigation mode)
++{
++ return spectre_v2_in_eibrs_mode(mode) || mode == SPECTRE_V2_IBRS;
++}
++
+ static void __init
+ spectre_v2_user_select_mitigation(void)
+ {
+@@ -1051,12 +1055,19 @@ spectre_v2_user_select_mitigation(void)
+ }
+
+ /*
+- * If no STIBP, IBRS or enhanced IBRS is enabled, or SMT impossible,
+- * STIBP is not required.
++ * If no STIBP, enhanced IBRS is enabled, or SMT impossible, STIBP
++ * is not required.
++ *
++ * Enhanced IBRS also protects against cross-thread branch target
++ * injection in user-mode as the IBRS bit remains always set which
++ * implicitly enables cross-thread protections. However, in legacy IBRS
++ * mode, the IBRS bit is set only on kernel entry and cleared on return
++ * to userspace. This disables the implicit cross-thread protection,
++ * so allow for STIBP to be selected in that case.
+ */
+ if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ !smt_possible ||
+- spectre_v2_in_ibrs_mode(spectre_v2_enabled))
++ spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ return;
+
+ /*
+@@ -2108,7 +2119,7 @@ static ssize_t mmio_stale_data_show_stat
+
+ static char *stibp_state(void)
+ {
+- if (spectre_v2_in_ibrs_mode(spectre_v2_enabled))
++ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
+ return "";
+
+ switch (spectre_v2_user_stibp) {
--- /dev/null
+From 6a3236580b0b1accc3976345e723104f74f6f8e6 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <seanjc@google.com>
+Date: Wed, 30 Nov 2022 23:36:48 +0000
+Subject: x86/virt: Force GIF=1 prior to disabling SVM (for reboot flows)
+
+From: Sean Christopherson <seanjc@google.com>
+
+commit 6a3236580b0b1accc3976345e723104f74f6f8e6 upstream.
+
+Set GIF=1 prior to disabling SVM to ensure that INIT is recognized if the
+kernel is disabling SVM in an emergency, e.g. if the kernel is about to
+jump into a crash kernel or may reboot without doing a full CPU RESET.
+If GIF is left cleared, the new kernel (or firmware) will be unable to
+awaken APs. Eat faults on STGI (due to EFER.SVME=0) as it's possible
+that SVM could be disabled via NMI shootdown between reading EFER.SVME
+and executing STGI.
+
+Link: https://lore.kernel.org/all/cbcb6f35-e5d7-c1c9-4db9-fe5cc4de579a@amd.com
+Cc: stable@vger.kernel.org
+Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>
+Cc: Tom Lendacky <thomas.lendacky@amd.com>
+Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
+Link: https://lore.kernel.org/r/20221130233650.1404148-3-seanjc@google.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/virtext.h | 16 +++++++++++++++-
+ 1 file changed, 15 insertions(+), 1 deletion(-)
+
+--- a/arch/x86/include/asm/virtext.h
++++ b/arch/x86/include/asm/virtext.h
+@@ -114,7 +114,21 @@ static inline void cpu_svm_disable(void)
+
+ wrmsrl(MSR_VM_HSAVE_PA, 0);
+ rdmsrl(MSR_EFER, efer);
+- wrmsrl(MSR_EFER, efer & ~EFER_SVME);
++ if (efer & EFER_SVME) {
++ /*
++ * Force GIF=1 prior to disabling SVM to ensure INIT and NMI
++ * aren't blocked, e.g. if a fatal error occurred between CLGI
++ * and STGI. Note, STGI may #UD if SVM is disabled from NMI
++ * context between reading EFER and executing STGI. In that
++ * case, GIF must already be set, otherwise the NMI would have
++ * been blocked, so just eat the fault.
++ */
++ asm_volatile_goto("1: stgi\n\t"
++ _ASM_EXTABLE(1b, %l[fault])
++ ::: "memory" : fault);
++fault:
++ wrmsrl(MSR_EFER, efer & ~EFER_SVME);
++ }
+ }
+
+ /** Makes sure SVM is disabled, if it is supported on the CPU