From: Uros Bizjak
Date: Mon, 30 Mar 2026 05:57:43 +0000 (+0200)
Subject: x86/asm/segment: Remove unnecessary "memory" clobber from savesegment()
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=8379ca68a01c38485b620d36a72d8aeeca86e743;p=thirdparty%2Flinux.git

x86/asm/segment: Remove unnecessary "memory" clobber from savesegment()

The savesegment() macro uses inline assembly to copy a segment register
into a general-purpose register:

	movl %seg, reg

This instruction does not access memory, yet the inline asm currently
declares a "memory" clobber, which unnecessarily acts as a compiler
barrier and may inhibit optimization.

Remove the "memory" clobber and mark the asm as `asm volatile` instead.
Segment register loads in the kernel are implemented using `asm volatile`,
so the compiler will not schedule segment register reads before those
loads. Using `asm volatile` preserves the intended ordering with other
segment register operations without imposing an unnecessary global
memory barrier.

No functional change intended.

Signed-off-by: Uros Bizjak
Signed-off-by: Ingo Molnar
Acked-by: Peter Zijlstra (Intel)
Cc: Linus Torvalds
Cc: H. Peter Anvin
Link: https://patch.msgid.link/20260330055823.5793-2-ubizjak@gmail.com
---

diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 9f5be2bbd2918..3fe3a310844c4 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -348,7 +348,7 @@ static inline void __loadsegment_fs(unsigned short value)
  * Save a segment register away:
  */
 #define savesegment(seg, value)				\
-	asm("movl %%" #seg ",%k0" : "=r" (value) : : "memory")
+	asm volatile("movl %%" #seg ",%k0" : "=r" (value))
 
 #endif /* !__ASSEMBLER__ */
 #endif /* __KERNEL__ */