From: Peter Zijlstra
Date: Tue, 24 Feb 2026 16:38:07 +0000 (+0100)
Subject: softirq: Prepare for deferred hrtimer rearming
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=7e641e52cf5f284706514f789df8c497aea984e1;p=thirdparty%2Fkernel%2Flinux.git

softirq: Prepare for deferred hrtimer rearming

The hrtimer interrupt expires timers and at the end of the interrupt it
rearms the clockevent device for the next expiring timer. That's obviously
correct, but in the case that an expired timer sets NEED_RESCHED, the
return from interrupt ends up in schedule(). If HRTICK is enabled, then
schedule() will modify the hrtick timer, which causes another
reprogramming of the hardware.

That can be avoided by deferring the rearming to the return from interrupt
path, and if the return results in an immediate schedule() invocation, it
can be deferred until the end of schedule(), which avoids multiple rearms
and re-evaluation of the timer wheel.

In case the return from interrupt ends up handling softirqs before
reaching the rearm conditions in the return-to-user entry code functions,
a deferred rearm has to be handled before softirq handling enables
interrupts, as soft interrupt handling can take long and would therefore
introduce hard to diagnose latencies to the timer interrupt.

Place the, for now empty, stub call right before invoking the softirq
handling routine.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Link: https://patch.msgid.link/20260224163431.142854488@kernel.org
---

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 77198911b8dd4..4425d8dce44b4 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -663,6 +663,13 @@ void irq_enter_rcu(void)
 {
 	__irq_enter_raw();
 
+	/*
+	 * If this is a nested interrupt that hits the exit_to_user_mode_loop
+	 * where it has enabled interrupts but before it has hit schedule() we
+	 * could have hrtimers in an undefined state. Fix it up here.
+	 */
+	hrtimer_rearm_deferred();
+
 	if (tick_nohz_full_cpu(smp_processor_id()) ||
 	    (is_idle_task(current) && (irq_count() == HARDIRQ_OFFSET)))
 		tick_irq_enter();
@@ -719,8 +726,14 @@ static inline void __irq_exit_rcu(void)
 #endif
 	account_hardirq_exit(current);
 	preempt_count_sub(HARDIRQ_OFFSET);
-	if (!in_interrupt() && local_softirq_pending())
+	if (!in_interrupt() && local_softirq_pending()) {
+		/*
+		 * If we left hrtimers unarmed, make sure to arm them now,
+		 * before enabling interrupts to run SoftIRQ.
+		 */
+		hrtimer_rearm_deferred();
 		invoke_softirq();
+	}
 
 	if (IS_ENABLED(CONFIG_IRQ_FORCED_THREADING) && force_irqthreads() &&
 	    local_timers_pending_force_th() && !(in_nmi() | in_hardirq()))