=========
Workqueue
=========

:Date: September, 2010
:Author: Tejun Heo <tj@kernel.org>
:Author: Florian Mickler <florian@mickler.org>


Introduction
============

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
independent thread serves as the asynchronous execution context. The
queue is called a workqueue and the thread is called a worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.


Why Concurrency Managed Workqueue?
==================================

In the original wq implementation, a multi-threaded (MT) wq had one
worker thread per CPU and a single-threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory. The limitation was common to both ST and
MT wq albeit less severe on MT. Each wq maintained its own separate
worker pool. An MT wq could provide only one execution context per
CPU while an ST wq provided one for the whole system. Work items had
to compete for those very limited execution contexts leading to
various problems including proneness to deadlocks around the single
execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting an unnecessary
limitation that no two polling PIOs can progress at the same time. As
MT wq didn't provide much better concurrency, users that required a
higher level of concurrency, like async or fscache, had to implement
their own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate worker pool and level of concurrency so that
  the API users don't need to worry about such details.


The Design
==========

In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Whenever a driver or subsystem
wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.
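
As a minimal sketch of this pattern using the standard API, a driver might
embed a ``work_struct`` in its own state structure; the ``my_dev`` structure
and function names below are hypothetical. ::

  #include <linux/workqueue.h>

  struct my_dev {
          struct work_struct work;        /* the work item */
          int data;                       /* state used by the work function */
  };

  /* executed asynchronously by a worker thread */
  static void my_work_fn(struct work_struct *work)
  {
          struct my_dev *dev = container_of(work, struct my_dev, work);

          pr_info("processing %d\n", dev->data);
  }

  static void my_dev_kick(struct my_dev *dev)
  {
          INIT_WORK(&dev->work, my_work_fn);
          queue_work(system_wq, &dev->work);      /* or schedule_work() */
  }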

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other. If no work is queued, the
worker threads become idle. These worker threads are managed in
so-called worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more. To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or highpri worker-pool
that is associated with the CPU the issuer is running on.

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of the currently runnable workers. Generally, work items are
not expected to hog a CPU and consume many cycles. That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work item, but, when the last running worker goes to sleep, it
immediately schedules a new worker so that the CPU doesn't sit idle
while there are pending work items. This allows using a minimal
number of workers without losing execution bandwidth.

Keeping idle workers around doesn't cost anything other than the
memory space for kthreads, so cmwq holds onto idle ones for a while
before killing them.

For unbound workqueues, the number of backing pools is dynamic. An
unbound workqueue can be assigned custom attributes using
``apply_workqueue_attrs()`` and workqueue will automatically create
backing worker pools matching the attributes. The responsibility of
regulating the concurrency level is on the users. There is also a
flag to mark a bound wq to ignore the concurrency management. Please
refer to the API section for details.
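
As a rough sketch, custom attributes can be applied as follows. Note that
``alloc_workqueue_attrs()`` took a ``gfp_t`` argument in older kernels, so
treat this as an illustration for recent kernels rather than a copy-paste
recipe; ``unbound_wq`` is a hypothetical ``WQ_UNBOUND`` workqueue. ::

  struct workqueue_attrs *attrs;

  attrs = alloc_workqueue_attrs();
  if (attrs) {
          attrs->nice = -10;      /* run workers at elevated priority */
          cpumask_copy(attrs->cpumask, cpumask_of_node(0));
          apply_workqueue_attrs(unbound_wq, attrs);
          free_workqueue_attrs(attrs);
  }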

Forward progress guarantee relies on the ability to create workers
when more execution contexts are necessary, which in turn is
guaranteed through the use of rescue workers. All work items which
might be used on code paths that handle memory reclaim are required to
be queued on wq's that have a rescue-worker reserved for execution
under memory pressure. Otherwise, it is possible that the worker-pool
deadlocks waiting for execution contexts to free up.


Application Programming Interface (API)
=======================================

``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``. ``@name`` is the name of the wq and
is also used as the name of the rescuer thread if there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes. ``@flags``
and ``@max_active`` control how work items are assigned execution
resources, scheduled and executed.
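
For example, a driver might allocate and use its own wq as below; the wq
name, the flag choice and ``my_work`` are illustrative only. ::

  struct workqueue_struct *wq;

  /* unbound, freezable wq using the default max_active (0) */
  wq = alloc_workqueue("my_driver_wq", WQ_UNBOUND | WQ_FREEZABLE, 0);
  if (!wq)
          return -ENOMEM;

  queue_work(wq, &my_work);       /* queue a work item on it */

  destroy_workqueue(wq);          /* flushes and releases the wq on teardown */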


``flags``
---------

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU. This makes the wq behave as a simple execution
  context provider without concurrency management. The unbound
  worker-pools try to start execution of work items as soon as
  possible. Unbound wq sacrifices locality but is useful for
  the following cases.

  * Wide fluctuation in the concurrency level requirement is
    expected and using a bound wq may end up creating a large
    number of mostly unused workers across different CPUs as the
    issuer hops through different CPUs.

  * Long running CPU intensive workloads which can be better
    managed by the system scheduler.

``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations. Work items on the wq are drained and no
  new work item starts execution until thawed.

``WQ_MEM_RECLAIM``
  All wq which might be used in the memory reclaim paths **MUST**
  have this flag set. The wq is guaranteed to have at least one
  execution context regardless of memory pressure.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target CPU. Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution. This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, the start of their execution is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.

  This flag is meaningless for unbound wq.


``max_active``
--------------

``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq. For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.

The maximum limit for ``@max_active`` is 512 and the default value used
when 0 is specified is 256. These values are chosen sufficiently high
such that they are not the limiting factor while providing protection in
runaway cases.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.
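
For instance, a wq whose work items should never occupy more than four
execution contexts per CPU could be allocated as below; the name is
illustrative. ::

  struct workqueue_struct *throttled_wq;

  /* at most 4 work items of this wq executing concurrently per CPU */
  throttled_wq = alloc_workqueue("my_throttled", 0, 4);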

Some users depend on the strict execution ordering of ST wq. The
combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` was used to
achieve this behavior. Work items on such wq were always queued to the
unbound worker-pools and only one work item could be active at any given
time thus achieving the same ordering property as ST wq.

In the current implementation the above configuration only guarantees
ST behavior within a given NUMA node. Instead, ``alloc_ordered_workqueue()``
should be used to achieve system-wide ST behavior.
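
For example (the wq name is illustrative): ::

  struct workqueue_struct *ordered_wq;

  /* one work item active at a time, executed in queueing order */
  ordered_wq = alloc_ordered_workqueue("my_ordered", 0);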


Example Execution Scenarios
===========================

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
again before finishing. w1 and w2 burn CPU for 5ms then sleep for
10ms.

Ignoring all other tasks, work items and processing overhead, and
assuming simple FIFO scheduling, the following is one highly
simplified version of possible sequences of events with the original
wq. ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 starts and burns CPU
  25             w1 sleeps
  35             w1 wakes up and finishes
  35             w2 starts and burns CPU
  40             w2 sleeps
  50             w2 wakes up and finishes

And with cmwq with ``@max_active`` >= 3, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 starts and burns CPU
  10             w1 sleeps
  10             w2 starts and burns CPU
  15             w2 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  25             w2 wakes up and finishes

If ``@max_active`` == 2, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 starts and burns CPU
  10             w1 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  20             w2 starts and burns CPU
  25             w2 sleeps
  35             w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 and w2 start and burn CPU
  10             w1 sleeps
  15             w2 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  25             w2 wakes up and finishes


Guidelines
==========

* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim. Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM`` (see the sketch after this list).

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for ``@max_active`` is
  recommended. In most use cases, concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items
  which are not involved in memory reclaim and don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute, can use one of the system wq. There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
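
As a minimal sketch of the first guideline, two interdependent work items
used on a reclaim path each get their own rescuer-backed wq; all names are
hypothetical. ::

  struct workqueue_struct *xmit_wq, *cmpl_wq;

  /* each wq gets its own rescuer so one work item can't starve the other */
  xmit_wq = alloc_workqueue("my_xmit", WQ_MEM_RECLAIM, 0);
  cmpl_wq = alloc_workqueue("my_cmpl", WQ_MEM_RECLAIM, 0);

  queue_work(xmit_wq, &xmit_work);  /* may wait on cmpl_work ... */
  queue_work(cmpl_wq, &cmpl_work);  /* ... which can still make progress */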


Affinity Scopes
===============

An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, if a workqueue is using the default affinity
scope of "cache", it will group CPUs according to last level cache
boundaries. A work item queued on the workqueue will be assigned to a worker
on one of the CPUs which share the last level cache with the issuing CPU.
Once started, the worker may or may not be allowed to move outside the scope
depending on the ``affinity_strict`` setting of the scope.

Workqueue currently supports the following affinity scopes.

``default``
  Use the scope in module parameter ``workqueue.default_affinity_scope``
  which is always set to one of the scopes below.

``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by a
  worker on the same CPU. This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``smt``
  CPUs are grouped according to SMT boundaries. This usually means that the
  logical threads of each physical CPU core are grouped together.

``cache``
  CPUs are grouped according to cache boundaries. Which specific cache
  boundary is used is determined by the arch code. L3 is used in a lot of
  cases. This is the default affinity scope.

``numa``
  CPUs are grouped according to NUMA boundaries.

``system``
  All CPUs are put in the same group. Workqueue makes no effort to process a
  work item on a CPU close to the issuing CPU.

The default affinity scope can be changed with the module parameter
``workqueue.default_affinity_scope`` and a specific workqueue's affinity
scope can be changed using ``apply_workqueue_attrs()``.
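
As a hedged sketch of doing this from code on a v6.5+ kernel, where
``struct workqueue_attrs`` carries the ``affn_scope`` and ``affn_strict``
fields (verify the field and constant names against your kernel's
``include/linux/workqueue.h``; ``unbound_wq`` is hypothetical): ::

  struct workqueue_attrs *attrs;

  attrs = alloc_workqueue_attrs();
  if (attrs) {
          attrs->affn_scope = WQ_AFFN_NUMA;  /* group CPUs by NUMA node */
          attrs->affn_strict = true;         /* workers never leave the scope */
          apply_workqueue_attrs(unbound_wq, attrs);
          free_workqueue_attrs(attrs);
  }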

If ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope
related interface files under its ``/sys/devices/virtual/workqueue/WQ_NAME/``
directory.

``affinity_scope``
  Read to see the current affinity scope. Write to change.

  When ``default`` is the current scope, reading this file will also show
  the current effective scope in parentheses, for example, ``default
  (cache)``.

``affinity_strict``
  0 by default indicating that affinity scopes are not strict. When a work
  item starts execution, workqueue makes a best-effort attempt to ensure
  that the worker is inside its affinity scope, which is called
  repatriation. Once started, the scheduler is free to move the worker
  anywhere in the system as it sees fit. This enables benefiting from scope
  locality while still being able to utilize other CPUs if necessary and
  available.

  If set to 1, all workers of the scope are guaranteed always to be in the
  scope. This may be useful when crossing affinity scopes has other
  implications, for example, in terms of power consumption or workload
  isolation. Strict NUMA scope can also be used to match the workqueue
  behavior of older kernels.


Affinity Scopes and Performance
===============================

It'd be ideal if an unbound workqueue's behavior were optimal for the vast
majority of use cases without further tuning. Unfortunately, in the current
kernel, there exists a pronounced trade-off between locality and utilization
necessitating explicit configurations when workqueues are heavily used.

Higher locality leads to higher efficiency where more work is performed for
the same number of consumed CPU cycles. However, higher locality may also
cause lower overall system utilization if the work items are not spread
enough across the affinity scopes by the issuers. The following performance
testing with dm-crypt clearly illustrates this trade-off.

The tests are run on a CPU with 12-cores/24-threads split across four L3
caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency.
``/dev/dm-0`` is a dm-crypt device created on an NVMe SSD (Samsung 990 PRO)
and opened with ``cryptsetup`` with default settings.


Scenario 1: Enough issuers and work spread across the machine
-------------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512

There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
makes ``fio`` generate and read back the content each time which makes
execution locality matter between the issuer and ``kcryptd``. The following
are the read bandwidths and CPU utilizations depending on different affinity
scope settings on ``kcryptd`` measured over five runs. Bandwidths are in
MiBps, and CPU utilization in percent.

.. list-table::
   :widths: 16 20 20
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01

With enough issuers spread across the system, there is no downside to
"cache", strict or otherwise. All three configurations saturate the whole
machine but the cache-affine ones outperform by 0.6% thanks to improved
locality.


Scenario 2: Fewer issuers, enough work for saturation
-----------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

The only difference from the previous scenario is ``--numjobs=8``. There are
a third as many issuers but there is still enough total work to saturate the
system.

.. list-table::
   :widths: 16 20 20
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35

This is more than enough work to saturate the system. Both "system" and
"cache" are nearly saturating the machine but not fully. "cache" is using
less CPU but the better efficiency puts it at the same bandwidth as
"system".

Eight issuers moving around over four L3 cache scopes still allow "cache
(strict)" to mostly saturate the machine but the loss of work conservation
is now starting to hurt with 3.7% bandwidth loss.


Scenario 3: Even fewer issuers, not enough work to saturate
-----------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512

Again, the only difference is ``--numjobs=4``. With the number of issuers
reduced to four, there now isn't enough work to saturate the whole system
and the bandwidth becomes dependent on completion latencies.

.. list-table::
   :widths: 16 20 20
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29

Now, the tradeoff between locality and utilization is clearer. "cache" shows
2% bandwidth loss compared to "system" and "cache (strict)" a whopping 20%.


Conclusion and Recommendations
------------------------------

In the above experiments, the efficiency advantage of the "cache" affinity
scope over "system" is, while consistent and noticeable, small. However, the
impact is dependent on the distances between the scopes and may be more
pronounced in processors with more complex topologies.

While the loss of work-conservation in certain scenarios hurts, "cache" is
still a lot better than "cache (strict)" and maximizing workqueue
utilization is unlikely to be the common case anyway. As such, "cache" is
the default affinity scope for unbound pools.

* As there is no one option which is great for most cases, workqueue usages
  that may consume a significant amount of CPU are recommended to configure
  the workqueues using ``apply_workqueue_attrs()`` and/or enable
  ``WQ_SYSFS``.

* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to
  the latter and an unbound workqueue provides a lot more flexibility.

* Affinity scopes are introduced in Linux v6.5. To emulate the previous
  behavior, use strict "numa" affinity scope.

* The loss of work-conservation in non-strict affinity scopes is likely
  originating from the scheduler. There is no theoretical reason why the
  kernel wouldn't be able to do the right thing and maintain
  work-conservation in most cases. As such, it is possible that future
  scheduler improvements may make most of these tunables unnecessary.


Examining Configuration
=======================

Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::

  $ tools/workqueue/wq_dump.py
  Affinity Scopes
  ===============
  wq_unbound_cpumask=0000000f

  CPU
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  SMT
    nr_pods  4
    pod_cpus [0]=00000001 [1]=00000002 [2]=00000004 [3]=00000008
    pod_node [0]=0 [1]=0 [2]=1 [3]=1
    cpu_pod  [0]=0 [1]=1 [2]=2 [3]=3

  CACHE (default)
    nr_pods  2
    pod_cpus [0]=00000003 [1]=0000000c
    pod_node [0]=0 [1]=1
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  NUMA
    nr_pods  2
    pod_cpus [0]=00000003 [1]=0000000c
    pod_node [0]=0 [1]=1
    cpu_pod  [0]=0 [1]=0 [2]=1 [3]=1

  SYSTEM
    nr_pods  1
    pod_cpus [0]=0000000f
    pod_node [0]=-1
    cpu_pod  [0]=0 [1]=0 [2]=0 [3]=0

  Worker Pools
  ============
  pool[00] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  0
  pool[01] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  0
  pool[02] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  1
  pool[03] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  1
  pool[04] ref= 1 nice=  0 idle/workers=  4/  4 cpu=  2
  pool[05] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  2
  pool[06] ref= 1 nice=  0 idle/workers=  3/  3 cpu=  3
  pool[07] ref= 1 nice=-20 idle/workers=  2/  2 cpu=  3
  pool[08] ref=42 nice=  0 idle/workers=  6/  6 cpus=0000000f
  pool[09] ref=28 nice=  0 idle/workers=  3/  3 cpus=00000003
  pool[10] ref=28 nice=  0 idle/workers= 17/ 17 cpus=0000000c
  pool[11] ref= 1 nice=-20 idle/workers=  1/  1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers=  1/  1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers=  1/  1 cpus=0000000c

  Workqueue CPU -> pool
  =====================
  [    workqueue \ CPU              0  1  2  3 dfl]
  events                   percpu   0  2  4  6
  events_highpri           percpu   1  3  5  7
  events_long              percpu   0  2  4  6
  events_unbound           unbound  9  9 10 10  8
  events_freezable         percpu   0  2  4  6
  events_power_efficient   percpu   0  2  4  6
  events_freezable_power_  percpu   0  2  4  6
  rcu_gp                   percpu   0  2  4  6
  rcu_par_gp               percpu   0  2  4  6
  slub_flushwq             percpu   0  2  4  6
  netns                    ordered  8  8  8  8  8
  ...

See the command's help message for more info.


Monitoring
==========

Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18545     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38306     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29598     0      0.2       0        0       -       -
  events_freezable_power_        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18548     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38322     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29603     0      0.2       0        0       -       -
  events_freezable_power_        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

  ...

See the command's help message for more info.


Debugging
=========

Because the work functions are executed by generic worker threads,
there are a few tricks needed to shed some light on misbehaving
workqueue users.

Worker threads show up in the process list as: ::

  root      5671  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/0:1]
  root      5672  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/1:2]
  root      5673  0.0  0.0      0     0 ?        S    12:12   0:00 [kworker/0:0]
  root      5674  0.0  0.0      0     0 ?        S    12:13   0:00 [kworker/1:0]

If kworkers are going crazy (using too much CPU), there are two types
of possible problems:

  1. Something being scheduled in rapid succession
  2. A single work item that consumes lots of CPU cycles

The first one can be tracked using tracing: ::

  $ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event
  $ cat /sys/kernel/tracing/trace_pipe > out.txt
  (wait a few secs)
  ^C

If something is busy looping on work queueing, it will dominate the
output and the offender can be determined with the work item
function.

For the second type of problems it should be possible to just check
the stack trace of the offending worker thread. ::

  $ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack
trace.


Non-reentrance Conditions
=========================

Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

1. The work function hasn't been changed.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed to
be executed by at most one worker system-wide at any given time.

Note that requeuing the work item (to the same queue) from within its own
work function doesn't break these conditions, so it's safe to do. Otherwise,
caution is required when breaking the conditions inside a work function.
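
For example, a self-requeueing poll loop stays within these conditions; the
names and the ``my_dev_poll()`` helper are hypothetical. ::

  static void poll_fn(struct work_struct *work)
  {
          struct my_dev *dev = container_of(work, struct my_dev, work);

          if (my_dev_poll(dev))                   /* more to do? */
                  queue_work(my_wq, work);        /* safe: same wq, same fn */
  }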


Kernel Inline Documentation Reference
======================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c