Currently, if a user enqueues a work item with schedule_delayed_work(), the
workqueue used is system_wq (a per-CPU workqueue), while queue_delayed_work()
uses WORK_CPU_UNBOUND (the value used when no CPU is specified). The same
applies to schedule_work(), which uses system_wq, and queue_work(), which
again uses WORK_CPU_UNBOUND.
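
For reference, the existing wrappers behave roughly as sketched below
(simplified from include/linux/workqueue.h; not the literal definitions):
the schedule_*() helpers hardcode system_wq, while the queue_*() helpers
only imply WORK_CPU_UNBOUND:

	static inline bool schedule_delayed_work(struct delayed_work *dwork,
						 unsigned long delay)
	{
		/* Hardcodes the per-CPU system_wq. */
		return queue_delayed_work(system_wq, dwork, delay);
	}

	static inline bool queue_delayed_work(struct workqueue_struct *wq,
					      struct delayed_work *dwork,
					      unsigned long delay)
	{
		/* Only picks WORK_CPU_UNBOUND, i.e. "no CPU specified". */
		return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay);
	}
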
This lack of consistency cannot be addressed without refactoring the API.
This continues the effort to refactor the workqueue APIs, which began with
the changes introducing new workqueues and a new alloc_workqueue() flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
This specific workload does not benefit from a per-CPU workqueue, so use the
default unbound workqueue (system_dfl_wq) instead.
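
As a minimal sketch of the caller side after this conversion (the foo_*
names and the tmo_ms parameter are illustrative, not taken from this patch):

	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	/* Illustrative driver state, not from this patch. */
	struct foo_dev {
		struct delayed_work isr_work;
	};

	static void foo_isr_work_fn(struct work_struct *work)
	{
		/* Deferred interrupt handling would go here. */
	}

	static void foo_init(struct foo_dev *d)
	{
		INIT_DELAYED_WORK(&d->isr_work, foo_isr_work_fn);
	}

	static void foo_kick(struct foo_dev *d, unsigned int tmo_ms)
	{
		/*
		 * The work does not benefit from per-CPU locality, so queue
		 * it on the default unbound workqueue instead of system_wq.
		 */
		mod_delayed_work(system_dfl_wq, &d->isr_work,
				 msecs_to_jiffies(tmo_ms));
	}
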
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Link: https://patch.msgid.link/20251106142914.227875-1-marco.crivellari@suse.com
Signed-off-by: Mark Brown <broonie@kernel.org>
reschedule:
if (!d->high_prio)
- mod_delayed_work(system_wq, &h->isr_work,
+ mod_delayed_work(system_dfl_wq, &h->isr_work,
msecs_to_jiffies(tmo));
else
mod_delayed_work(system_highpri_wq, &h->isr_work,
		 msecs_to_jiffies(tmo));