There is no point in iterating through individual tick dependency bits when
the tick_stop tracepoint is disabled, which is the common case.
When the tracepoint is disabled, return immediately based on whether the
atomic dependency mask is zero or non-zero, skipping the per-bit evaluation.
This optimization improves the hot path performance of tick dependency
checks across all contexts (idle and non-idle), not just nohz_full CPUs.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ionut Nechita (Sunlight Linux) <sunlightlinux@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Link: https://patch.msgid.link/20260128074558.15433-3-sunlightlinux@gmail.com
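For illustration only, here is a minimal userspace sketch of the control
flow, not the kernel code itself: tick_stop_tracepoint_enabled is a plain
flag standing in for the tracepoint_enabled(tick_stop) static-key check,
and the TICK_DEP_MASK_* values, trace_tick_stop() stub, and main() driver
are simplified placeholders invented for this sketch.

/* Build: gcc -std=c11 -o tickdep tickdep.c */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define TICK_DEP_MASK_POSIX_TIMER	0x1
#define TICK_DEP_MASK_PERF_EVENTS	0x2

/* Stands in for the tracepoint_enabled(tick_stop) static-key check. */
static bool tick_stop_tracepoint_enabled;

/* Placeholder for the real tracepoint. */
static void trace_tick_stop(int success, int dep)
{
	printf("tick_stop: success=%d dependency=0x%x\n", success, dep);
}

/* Returns true when a dependency bit prevents stopping the tick. */
static bool check_tick_dependency(atomic_int *dep)
{
	int val = atomic_load(dep);

	/*
	 * Fast path: nothing consumes the per-bit reason, so the
	 * zero/non-zero answer is all that matters.
	 */
	if (!tick_stop_tracepoint_enabled)
		return val;

	/* Slow path: walk the bits to report which dependency fired. */
	if (val & TICK_DEP_MASK_POSIX_TIMER) {
		trace_tick_stop(0, TICK_DEP_MASK_POSIX_TIMER);
		return true;
	}
	if (val & TICK_DEP_MASK_PERF_EVENTS) {
		trace_tick_stop(0, TICK_DEP_MASK_PERF_EVENTS);
		return true;
	}
	return false;
}

int main(void)
{
	atomic_int dep = TICK_DEP_MASK_PERF_EVENTS;

	printf("blocked=%d\n", check_tick_dependency(&dep));	/* fast path, no trace */
	tick_stop_tracepoint_enabled = true;
	printf("blocked=%d\n", check_tick_dependency(&dep));	/* traces the bit */
	return 0;
}

In the kernel the win comes from tracepoint_enabled() being backed by a
static key, so the disabled case costs a patched-out branch plus the one
atomic_read() instead of the whole chain of mask tests.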
 {
 	int val = atomic_read(dep);
 
+	if (likely(!tracepoint_enabled(tick_stop)))
+		return val;
+
 	if (val & TICK_DEP_MASK_POSIX_TIMER) {
 		trace_tick_stop(0, TICK_DEP_MASK_POSIX_TIMER);
 		return true;