From: Heiko Carstens
Date: Tue, 13 Jan 2026 19:44:00 +0000 (+0100)
Subject: s390/preempt: Optimize __preempt_count_dec_and_test()
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=48b4790f054994d4df6d1025ec9267b19618f0ec;p=thirdparty%2Fkernel%2Flinux.git

s390/preempt: Optimize __preempt_count_dec_and_test()

Provide inline assembly using alternatives to avoid the need for a base
register due to the relocatable lowcore when adding or subtracting small
constants from preempt_count.

Main user is preempt_enable(), which subtracts one from preempt_count
and tests if the result is zero.

With this the generated code changes from

  1000b8:	a7 19 00 00		lghi	%r1,0
  1000bc:	eb ff 13 a8 00 6e	alsi	936(%r1),-1
  1000c2:	a7 54 00 05		jnhe	1000cc <__rcu_read_unlock+0x14>

to something like this:

  1000b8:	eb ff 03 a8 00 6e	alsi	936,-1
  1000be:	a7 54 00 05		jnhe	1000c8 <__rcu_read_unlock+0x10>

Kernel image size is reduced by 45kb (bloat-o-meter -t, defconfig, gcc15).

Reviewed-by: Sven Schnelle
Signed-off-by: Heiko Carstens
---

diff --git a/arch/s390/include/asm/preempt.h b/arch/s390/include/asm/preempt.h
index ef6ea3163c27e..6e5821bb047e2 100644
--- a/arch/s390/include/asm/preempt.h
+++ b/arch/s390/include/asm/preempt.h
@@ -113,7 +113,22 @@ static __always_inline void __preempt_count_sub(int val)
  */
 static __always_inline bool __preempt_count_dec_and_test(void)
 {
+#ifdef __HAVE_ASM_FLAG_OUTPUTS__
+	unsigned long lc_preempt;
+	int cc;
+
+	lc_preempt = offsetof(struct lowcore, preempt_count);
+	asm_inline(
+		ALTERNATIVE("alsi %[offzero](%%r0),%[val]\n",
+			    "alsi %[offalt](%%r0),%[val]\n",
+			    ALT_FEATURE(MFEATURE_LOWCORE))
+		: "=@cc" (cc), "+m" (((struct lowcore *)0)->preempt_count)
+		: [offzero] "i" (lc_preempt), [val] "i" (-1),
+		  [offalt] "i" (lc_preempt + LOWCORE_ALT_ADDRESS));
+	return (cc == 0) || (cc == 2);
+#else
	return __atomic_add_const_and_test(-1, &get_lowcore()->preempt_count);
+#endif
 }
 
 /*
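
For readers unfamiliar with the z/Architecture condition-code convention used
here: ADD LOGICAL instructions such as alsi set cc 0 or 2 when the result is
zero (without or with carry) and cc 1 or 3 otherwise, which is why the new
code returns (cc == 0) || (cc == 2). The stand-alone program below is a
simplified model of that mapping, not kernel code and not part of the patch;
the helper name add_logical_cc and the test values are made up for
illustration.

	/* Model of 32-bit ADD LOGICAL: cc bit 0 = result nonzero, cc bit 1 = carry. */
	#include <stdio.h>
	#include <stdbool.h>
	#include <stdint.h>
	#include <stddef.h>

	static int add_logical_cc(uint32_t a, uint32_t b, uint32_t *result)
	{
		uint64_t wide = (uint64_t)a + (uint64_t)b;

		*result = (uint32_t)wide;
		return (*result != 0) + ((wide >> 32) ? 2 : 0);
	}

	int main(void)
	{
		uint32_t counts[] = { 1, 2, 0x80000000u, 0 };

		for (size_t i = 0; i < sizeof(counts) / sizeof(counts[0]); i++) {
			uint32_t res;
			/* alsi ...,-1 adds the sign-extended immediate, i.e. 0xffffffff logically */
			int cc = add_logical_cc(counts[i], 0xffffffffu, &res);
			/* same test as the patch: result of the decrement is zero */
			bool dec_and_test = (cc == 0) || (cc == 2);

			printf("count=%u -> result=%u cc=%d dec_and_test=%d\n",
			       counts[i], res, cc, dec_and_test);
		}
		return 0;
	}

Running the model shows, for example, that decrementing 1 yields cc 2 (zero
result with carry) and so reports true, while decrementing 0 or 2 yields cc 1
or 3 and reports false, matching the semantics preempt_enable() expects.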