From: Borislav Petkov (AMD)
Date: Mon, 11 Nov 2024 16:22:08 +0000 (+0100)
Subject: x86/bugs: Add SRSO_USER_KERNEL_NO support
X-Git-Tag: v6.12.49~19
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=f9c6aec2a6dd0d5651d421592463c74aeaa54a8c;p=thirdparty%2Fkernel%2Fstable.git

x86/bugs: Add SRSO_USER_KERNEL_NO support

commit 877818802c3e970f67ccb53012facc78bef5f97a upstream.

If the machine has:

  CPUID Fn8000_0021_EAX[30] (SRSO_USER_KERNEL_NO) -- If this bit is 1,
  it indicates the CPU is not subject to the SRSO vulnerability across
  user/kernel boundaries.

have it fall back to IBPB on VMEXIT only, in the case it is going to run
VMs:

  Speculative Return Stack Overflow: Mitigation: IBPB on VMEXIT only

Signed-off-by: Borislav Petkov (AMD)
Reviewed-by: Nikolay Borisov
Link: https://lore.kernel.org/r/20241202120416.6054-2-bp@kernel.org
[ Harshit: Conflicts resolved as this commit: 7c62c442b6eb ("x86/vmscape:
  Enumerate VMSCAPE bug") has been applied already to 6.12.y ]
Signed-off-by: Harshit Mogalapalli
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 90f1f2f9d3140..3fc47f25cafcd 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -464,6 +464,7 @@
 #define X86_FEATURE_SBPB		(20*32+27) /* Selective Branch Prediction Barrier */
 #define X86_FEATURE_IBPB_BRTYPE		(20*32+28) /* MSR_PRED_CMD[IBPB] flushes all branch type predictions */
 #define X86_FEATURE_SRSO_NO		(20*32+29) /* CPU is not affected by SRSO */
+#define X86_FEATURE_SRSO_USER_KERNEL_NO	(20*32+30) /* CPU is not affected by SRSO across user/kernel boundaries */
 
 /*
  * Extended auxiliary flags: Linux defined - for features scattered in various
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 06bbc297c26c0..c3ea29efe26fd 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2810,6 +2810,9 @@ static void __init srso_select_mitigation(void)
 		break;
 
 	case SRSO_CMD_SAFE_RET:
+		if (boot_cpu_has(X86_FEATURE_SRSO_USER_KERNEL_NO))
+			goto ibpb_on_vmexit;
+
 		if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
 			/*
 			 * Enable the return thunk for generated code
@@ -2861,6 +2864,7 @@ static void __init srso_select_mitigation(void)
 		}
 		break;
 
+ibpb_on_vmexit:
 	case SRSO_CMD_IBPB_ON_VMEXIT:
 		if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
 			if (has_microcode) {
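
For context only (not part of the patch), below is a minimal user-space sketch of how the CPUID bit named in the commit message, Fn8000_0021_EAX[30] (SRSO_USER_KERNEL_NO), could be probed with GCC/Clang's <cpuid.h>. The leaf and bit position come from the commit message above; everything else (program structure, messages) is illustrative.

/*
 * Hypothetical user-space probe for SRSO_USER_KERNEL_NO, i.e.
 * CPUID Fn8000_0021_EAX[30] as described in the commit message.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* Make sure extended leaf 0x80000021 is implemented at all. */
	if (__get_cpuid_max(0x80000000, NULL) < 0x80000021) {
		puts("CPUID Fn8000_0021 not supported");
		return 1;
	}

	__get_cpuid_count(0x80000021, 0, &eax, &ebx, &ecx, &edx);

	if (eax & (1u << 30))
		puts("SRSO_USER_KERNEL_NO: CPU not affected across user/kernel boundaries");
	else
		puts("SRSO_USER_KERNEL_NO not advertised");

	return 0;
}

In the kernel itself no such probing is needed: the bit is picked up as X86_FEATURE_SRSO_USER_KERNEL_NO and, as the bugs.c hunk shows, a SRSO_CMD_SAFE_RET selection simply jumps to the IBPB-on-VMEXIT handling when the bit is set.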