LoongArch: Change __my_cpu_offset definition to avoid mis-optimization
author	Huacai Chen <chenhuacai@loongson.cn>
	Tue, 19 Mar 2024 07:50:34 +0000 (15:50 +0800)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Wed, 3 Apr 2024 13:28:36 +0000 (15:28 +0200)
[ Upstream commit c87e12e0e8c1241410e758e181ca6bf23efa5b5b ]

Since GCC commit 3f13154553f8546a ("df-scan: remove ad-hoc handling of
global regs in asms"), global registers are no longer forcibly added to
the def-use chain. As a result, current_thread_info(), current_stack_pointer
and __my_cpu_offset may be hoisted out of loops because they are no
longer treated as "volatile variables".

This optimization is still correct for the current_thread_info() and
current_stack_pointer usages because they are associated with a thread.
However, it is wrong for __my_cpu_offset because it is associated with a
CPU rather than a thread: if the thread migrates to a different CPU
inside the loop, __my_cpu_offset must change accordingly.

Change the __my_cpu_offset definition so that it is treated as a
"volatile variable", in order to avoid such a mis-optimization.
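
As a rough illustration of the trick (a userspace sketch, not kernel code;
cpu_offset and read_offset_fresh() are made-up stand-ins for
__my_cpu_offset): an empty asm with a "+r" constraint tells the compiler
the value may have changed, so every use is re-read instead of being
cached across loop iterations. In the kernel, __my_cpu_offset is a global
register variable bound to $r21, so no memory access is involved, but the
effect on the optimizer is the same.

#include <stdio.h>

static unsigned long cpu_offset;	/* stand-in for __my_cpu_offset */

/* Plain read: the compiler may CSE/hoist this and reuse a stale value. */
#define read_offset_plain()	(cpu_offset)

/* Barrier read: the "+r" output forces the compiler to assume the value
 * may have changed, so each use yields a fresh read. */
#define read_offset_fresh()				\
({							\
	__asm__ __volatile__("" : "+r"(cpu_offset));	\
	cpu_offset;					\
})

int main(void)
{
	unsigned long sum = 0;
	int i;

	cpu_offset = 0x1000;
	for (i = 0; i < 4; i++)
		sum += read_offset_fresh();	/* re-read on every iteration */

	printf("sum = %#lx\n", sum);
	return 0;
}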

Cc: stable@vger.kernel.org
Reported-by: Xiaotian Wu <wuxiaotian@loongson.cn>
Reported-by: Miao Wang <shankerwangmiao@gmail.com>
Signed-off-by: Xing Li <lixing@loongson.cn>
Signed-off-by: Hongchen Zhang <zhanghongchen@loongson.cn>
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: Sasha Levin <sashal@kernel.org>
arch/loongarch/include/asm/percpu.h

index ed5da02b1cf6f1611ac4b83e560d1f544ed6e270..7e804140500f15516e7b9512e0a05016268398cd 100644
@@ -29,7 +29,12 @@ static inline void set_my_cpu_offset(unsigned long off)
        __my_cpu_offset = off;
        csr_write64(off, PERCPU_BASE_KS);
 }
-#define __my_cpu_offset __my_cpu_offset
+
+#define __my_cpu_offset                                        \
+({                                                     \
+       __asm__ __volatile__("":"+r"(__my_cpu_offset)); \
+       __my_cpu_offset;                                \
+})
 
 #define PERCPU_OP(op, asm_op, c_op)                                    \
 static __always_inline unsigned long __percpu_##op(void *ptr,          \