workqueue: Better describe stall check
author	Petr Mladek <pmladek@suse.com>
	Wed, 25 Mar 2026 12:34:18 +0000 (13:34 +0100)
committer	Tejun Heo <tj@kernel.org>
	Wed, 25 Mar 2026 15:51:02 +0000 (05:51 -1000)
Try to be more explicit about why the workqueue watchdog does not take
pool->lock by default. Spin locks are full memory barriers which can
delay anything. Most obviously, they would primarily delay operations
on the related worker pools.

Explain why it is enough to prevent false positives by re-checking
the timestamp under pool->lock.

Finally, make it clear what the alternative solution in __queue_work(),
which is a hotter path, would be.

Signed-off-by: Petr Mladek <pmladek@suse.com>
Acked-by: Song Liu <song@kernel.org>
Signed-off-by: Tejun Heo <tj@kernel.org>
kernel/workqueue.c

index ff97b705f25ed125326691a1be98b360c302f0b4..eda756556341abe62020925b6dabf01e09da627c 100644 (file)
@@ -7702,13 +7702,14 @@ static void wq_watchdog_timer_fn(struct timer_list *unused)
                /*
                 * Did we stall?
                 *
-                * Do a lockless check first. On weakly ordered
-                * architectures, the lockless check can observe a
-                * reordering between worklist insert_work() and
-                * last_progress_ts update from __queue_work(). Since
-                * __queue_work() is a much hotter path than the timer
-                * function, we handle false positive here by reading
-                * last_progress_ts again with pool->lock held.
+                * Do a lockless check first so as not to disturb the system.
+                *
+                * Prevent false positives by double checking the timestamp
+                * under pool->lock. The lock makes sure that the check reads
+                * an updated pool->last_progress_ts when this CPU saw
+                * an already updated pool->worklist above. It seems better
+                * than adding another barrier into __queue_work() which
+                * is a hotter path.
                 */
                if (time_after(now, ts + thresh)) {
                        scoped_guard(raw_spinlock_irqsave, &pool->lock) {