The savesegment() macro uses inline assembly to copy a segment register
into a general-purpose register:

	movl %seg, reg

This instruction does not access memory, yet the inline asm currently
declares a "memory" clobber, which turns the statement into an
unnecessary compiler barrier and can inhibit optimization around it.
Remove the "memory" clobber and mark the asm as `asm volatile` instead.
Segment register loads in the kernel are implemented using `asm volatile`,
so the compiler will not schedule segment register reads before those
loads. Using `asm volatile` preserves the intended ordering with other
segment register operations without imposing an unnecessary global memory
barrier.
No functional change intended.
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Link: https://patch.msgid.link/20260330055823.5793-2-ubizjak@gmail.com
  * Save a segment register away:
  */
 #define savesegment(seg, value)				\
-	asm("movl %%" #seg ",%k0" : "=r" (value) : : "memory")
+	asm volatile("movl %%" #seg ",%k0" : "=r" (value))
 #endif /* !__ASSEMBLER__ */
 #endif /* __KERNEL__ */