1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
4
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.ibm.com>
7 Will Deacon <will.deacon@arm.com>
8 Peter Zijlstra <peterz@infradead.org>
9
10 ==========
11 DISCLAIMER
12 ==========
13
14 This document is not a specification; it is intentionally (for the sake of
15 brevity) and unintentionally (due to being human) incomplete. This document is
16 meant as a guide to using the various memory barriers provided by Linux, but
17 in case of any doubt (and there are many) please ask. Some doubts may be
18 resolved by referring to the formal memory consistency model and related
19 documentation at tools/memory-model/. Nevertheless, even this memory
20 model should be viewed as the collective opinion of its maintainers rather
21 than as an infallible oracle.
22
23 To repeat, this document is not a specification of what Linux expects from
24 hardware.
25
26 The purpose of this document is twofold:
27
28 (1) to specify the minimum functionality that one can rely on for any
29 particular barrier, and
30
31 (2) to provide a guide as to how to use the barriers that are available.
32
33 Note that an architecture can provide more than the minimum requirement
34 for any particular barrier, but if the architecture provides less than
35 that, that architecture is incorrect.
36
37 Note also that it is possible that a barrier may be a no-op for an
38 architecture because the way that arch works renders an explicit barrier
39 unnecessary in that case.
40
41
42 ========
43 CONTENTS
44 ========
45
46 (*) Abstract memory access model.
47
48 - Device operations.
49 - Guarantees.
50
51 (*) What are memory barriers?
52
53 - Varieties of memory barrier.
54 - What may not be assumed about memory barriers?
55 - Address-dependency barriers (historical).
56 - Control dependencies.
57 - SMP barrier pairing.
58 - Examples of memory barrier sequences.
59 - Read memory barriers vs load speculation.
60 - Multicopy atomicity.
61
62 (*) Explicit kernel barriers.
63
64 - Compiler barrier.
65 - CPU memory barriers.
66
67 (*) Implicit kernel memory barriers.
68
69 - Lock acquisition functions.
70 - Interrupt disabling functions.
71 - Sleep and wake-up functions.
72 - Miscellaneous functions.
73
74 (*) Inter-CPU acquiring barrier effects.
75
76 - Acquires vs memory accesses.
77
78 (*) Where are memory barriers needed?
79
80 - Interprocessor interaction.
81 - Atomic operations.
82 - Accessing devices.
83 - Interrupts.
84
85 (*) Kernel I/O barrier effects.
86
87 (*) Assumed minimum execution ordering model.
88
89 (*) The effects of the cpu cache.
90
91 - Cache coherency.
92 - Cache coherency vs DMA.
93 - Cache coherency vs MMIO.
94
95 (*) The things CPUs get up to.
96
97 - And then there's the Alpha.
98 - Virtual Machine Guests.
99
100 (*) Example uses.
101
102 - Circular buffers.
103
104 (*) References.
105
106
107 ============================
108 ABSTRACT MEMORY ACCESS MODEL
109 ============================
110
111 Consider the following abstract model of the system:
112
113 : :
114 : :
115 : :
116 +-------+ : +--------+ : +-------+
117 | | : | | : | |
118 | | : | | : | |
119 | CPU 1 |<----->| Memory |<----->| CPU 2 |
120 | | : | | : | |
121 | | : | | : | |
122 +-------+ : +--------+ : +-------+
123 ^ : ^ : ^
124 | : | : |
125 | : | : |
126 | : v : |
127 | : +--------+ : |
128 | : | | : |
129 | : | | : |
130 +---------->| Device |<----------+
131 : | | :
132 : | | :
133 : +--------+ :
134 : :
135
136 Each CPU executes a program that generates memory access operations. In the
137 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
138 perform the memory operations in any order it likes, provided program causality
139 appears to be maintained. Similarly, the compiler may also arrange the
140 instructions it emits in any order it likes, provided it doesn't affect the
141 apparent operation of the program.
142
143 So in the above diagram, the effects of the memory operations performed by a
144 CPU are perceived by the rest of the system as the operations cross the
145 interface between the CPU and rest of the system (the dotted lines).
146
147
148 For example, consider the following sequence of events:
149
150 CPU 1 CPU 2
151 =============== ===============
152 { A == 1; B == 2 }
153 A = 3; x = B;
154 B = 4; y = A;
155
156 The set of accesses as seen by the memory system in the middle can be arranged
157 in 24 different combinations:
158
159 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
160 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
161 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
162 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
163 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
164 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
165 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
166 STORE B=4, ...
167 ...
168
169 and can thus result in four different combinations of values:
170
171 x == 2, y == 1
172 x == 2, y == 3
173 x == 4, y == 1
174 x == 4, y == 3
175
176
177 Furthermore, the stores committed by a CPU to the memory system may not be
178 perceived by the loads made by another CPU in the same order as the stores were
179 committed.
180
181
182 As a further example, consider this sequence of events:
183
184 CPU 1 CPU 2
185 =============== ===============
186 { A == 1, B == 2, C == 3, P == &A, Q == &C }
187 B = 4; Q = P;
188 P = &B; D = *Q;
189
190 There is an obvious address dependency here, as the value loaded into D depends
191 on the address retrieved from P by CPU 2. At the end of the sequence, any of
192 the following results are possible:
193
194 (Q == &A) and (D == 1)
195 (Q == &B) and (D == 2)
196 (Q == &B) and (D == 4)
197
198 Note that CPU 2 will never try to load C into D because the CPU will load P
199 into Q before issuing the load of *Q.
200
201
202 DEVICE OPERATIONS
203 -----------------
204
205 Some devices present their control interfaces as collections of memory
206 locations, but the order in which the control registers are accessed is very
207 important. For instance, imagine an ethernet card with a set of internal
208 registers that are accessed through an address port register (A) and a data
209 port register (D). To read internal register 5, the following code might then
210 be used:
211
212 *A = 5;
213 x = *D;
214
215 but this might show up as either of the following two sequences:
216
217 STORE *A = 5, x = LOAD *D
218 x = LOAD *D, STORE *A = 5
219
220 the second of which will almost certainly result in a malfunction, since it sets
221 the address _after_ attempting to read the register.
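
In real driver code, this ordering would typically be obtained by using the
MMIO accessor functions, which are ordered with respect to each other. A
minimal sketch, assuming hypothetical ioremap()ed pointers addr_port and
data_port for the card's address and data port registers:

	writel(5, addr_port);	/* select internal register 5 */
	x = readl(data_port);	/* cannot be reordered before the writel() */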
222
223
224 GUARANTEES
225 ----------
226
227 There are some minimal guarantees that may be expected of a CPU:
228
229 (*) On any given CPU, dependent memory accesses will be issued in order, with
230 respect to itself. This means that for:
231
232 Q = READ_ONCE(P); D = READ_ONCE(*Q);
233
234 the CPU will issue the following memory operations:
235
236 Q = LOAD P, D = LOAD *Q
237
238 and always in that order. However, on DEC Alpha, READ_ONCE() also
239 emits a memory-barrier instruction, so that a DEC Alpha CPU will
240 instead issue the following memory operations:
241
242 Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER
243
244 Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
245 mischief.
246
247 (*) Overlapping loads and stores within a particular CPU will appear to be
248 ordered within that CPU. This means that for:
249
250 a = READ_ONCE(*X); WRITE_ONCE(*X, b);
251
252 the CPU will only issue the following sequence of memory operations:
253
254 a = LOAD *X, STORE *X = b
255
256 And for:
257
258 WRITE_ONCE(*X, c); d = READ_ONCE(*X);
259
260 the CPU will only issue:
261
262 STORE *X = c, d = LOAD *X
263
264 (Loads and stores overlap if they are targeted at overlapping pieces of
265 memory).
266
267 And there are a number of things that _must_ or _must_not_ be assumed:
268
269 (*) It _must_not_ be assumed that the compiler will do what you want
270 with memory references that are not protected by READ_ONCE() and
271 WRITE_ONCE(). Without them, the compiler is within its rights to
272 do all sorts of "creative" transformations, which are covered in
273 the COMPILER BARRIER section.
274
275 (*) It _must_not_ be assumed that independent loads and stores will be issued
276 in the order given. This means that for:
277
278 X = *A; Y = *B; *D = Z;
279
280 we may get any of the following sequences:
281
282 X = LOAD *A, Y = LOAD *B, STORE *D = Z
283 X = LOAD *A, STORE *D = Z, Y = LOAD *B
284 Y = LOAD *B, X = LOAD *A, STORE *D = Z
285 Y = LOAD *B, STORE *D = Z, X = LOAD *A
286 STORE *D = Z, X = LOAD *A, Y = LOAD *B
287 STORE *D = Z, Y = LOAD *B, X = LOAD *A
288
289 (*) It _must_ be assumed that overlapping memory accesses may be merged or
290 discarded. This means that for:
291
292 X = *A; Y = *(A + 4);
293
294 we may get any one of the following sequences:
295
296 X = LOAD *A; Y = LOAD *(A + 4);
297 Y = LOAD *(A + 4); X = LOAD *A;
298 {X, Y} = LOAD {*A, *(A + 4) };
299
300 And for:
301
302 *A = X; *(A + 4) = Y;
303
304 we may get any of:
305
306 STORE *A = X; STORE *(A + 4) = Y;
307 STORE *(A + 4) = Y; STORE *A = X;
308 STORE {*A, *(A + 4) } = {X, Y};
309
310 And there are anti-guarantees:
311
312 (*) These guarantees do not apply to bitfields, because compilers often
313 generate code to modify these using non-atomic read-modify-write
314 sequences. Do not attempt to use bitfields to synchronize parallel
315 algorithms.
316
317 (*) Even in cases where bitfields are protected by locks, all fields
318 in a given bitfield must be protected by one lock. If two fields
319 in a given bitfield are protected by different locks, the compiler's
320 non-atomic read-modify-write sequences can cause an update to one
321         field to corrupt the value of an adjacent field, as sketched below.
322
323 (*) These guarantees apply only to properly aligned and sized scalar
324 variables. "Properly sized" currently means variables that are
325 the same size as "char", "short", "int" and "long". "Properly
326 aligned" means the natural alignment, thus no constraints for
327 "char", two-byte alignment for "short", four-byte alignment for
328 "int", and either four-byte or eight-byte alignment for "long",
329 on 32-bit and 64-bit systems, respectively. Note that these
330 guarantees were introduced into the C11 standard, so beware when
331 using older pre-C11 compilers (for example, gcc 4.6). The portion
332 of the standard containing this guarantee is Section 3.14, which
333 defines "memory location" as follows:
334
335 memory location
336 either an object of scalar type, or a maximal sequence
337 of adjacent bit-fields all having nonzero width
338
339 NOTE 1: Two threads of execution can update and access
340 separate memory locations without interfering with
341 each other.
342
343 NOTE 2: A bit-field and an adjacent non-bit-field member
344 are in separate memory locations. The same applies
345 to two bit-fields, if one is declared inside a nested
346 structure declaration and the other is not, or if the two
347 are separated by a zero-length bit-field declaration,
348 or if they are separated by a non-bit-field member
349 declaration. It is not safe to concurrently update two
350 bit-fields in the same structure if all members declared
351 between them are also bit-fields, no matter what the
352 sizes of those intervening bit-fields happen to be.
353
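To illustrate the bitfield anti-guarantees above, consider the following
sketch; the structure, the locks and the pointer f are hypothetical. Even
though each CPU holds its own lock, the compiler is permitted to implement
each bitfield update as a non-atomic read-modify-write of the word
containing both fields:

	struct foo {
		spinlock_t lock_a;
		spinlock_t lock_b;
		int x : 4;	/* notionally protected by lock_a */
		int y : 4;	/* notionally protected by lock_b */
	};

	CPU 1				CPU 2
	===============			===============
	spin_lock(&f->lock_a);		spin_lock(&f->lock_b);
	f->x = 1;			f->y = 1;
	spin_unlock(&f->lock_a);	spin_unlock(&f->lock_b);

One CPU's store of the containing word can overwrite the other CPU's update,
so either f->x or f->y may silently revert to its old value.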
354
355 =========================
356 WHAT ARE MEMORY BARRIERS?
357 =========================
358
359 As can be seen above, independent memory operations are effectively performed
360 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
361 What is required is some way of intervening to instruct the compiler and the
362 CPU to restrict the order.
363
364 Memory barriers are such interventions. They impose a perceived partial
365 ordering over the memory operations on either side of the barrier.
366
367 Such enforcement is important because the CPUs and other devices in a system
368 can use a variety of tricks to improve performance, including reordering,
369 deferral and combination of memory operations; speculative loads; speculative
370 branch prediction and various types of caching. Memory barriers are used to
371 override or suppress these tricks, allowing the code to sanely control the
372 interaction of multiple CPUs and/or devices.
373
374
375 VARIETIES OF MEMORY BARRIER
376 ---------------------------
377
378 Memory barriers come in four basic varieties:
379
380 (1) Write (or store) memory barriers.
381
382 A write memory barrier gives a guarantee that all the STORE operations
383 specified before the barrier will appear to happen before all the STORE
384 operations specified after the barrier with respect to the other
385 components of the system.
386
387 A write barrier is a partial ordering on stores only; it is not required
388 to have any effect on loads.
389
390 A CPU can be viewed as committing a sequence of store operations to the
391 memory system as time progresses. All stores _before_ a write barrier
392 will occur _before_ all the stores after the write barrier.
393
394 [!] Note that write barriers should normally be paired with read or
395 address-dependency barriers; see the "SMP barrier pairing" subsection.
396
397
398 (2) Address-dependency barriers (historical).
399 [!] This section is marked as HISTORICAL: it covers the long-obsolete
400 smp_read_barrier_depends() macro, the semantics of which are now
401 implicit in all marked accesses. For more up-to-date information,
402 including how compiler transformations can sometimes break address
403 dependencies, see Documentation/RCU/rcu_dereference.rst.
404
405 An address-dependency barrier is a weaker form of read barrier. In the
406 case where two loads are performed such that the second depends on the
407 result of the first (eg: the first load retrieves the address to which
408 the second load will be directed), an address-dependency barrier would
409 be required to make sure that the target of the second load is updated
410 after the address obtained by the first load is accessed.
411
412 An address-dependency barrier is a partial ordering on interdependent
413 loads only; it is not required to have any effect on stores, independent
414 loads or overlapping loads.
415
416 As mentioned in (1), the other CPUs in the system can be viewed as
417 committing sequences of stores to the memory system that the CPU being
418 considered can then perceive. An address-dependency barrier issued by
419 the CPU under consideration guarantees that for any load preceding it,
420 if that load touches one of a sequence of stores from another CPU, then
421 by the time the barrier completes, the effects of all the stores prior to
422 that touched by the load will be perceptible to any loads issued after
423 the address-dependency barrier.
424
425 See the "Examples of memory barrier sequences" subsection for diagrams
426 showing the ordering constraints.
427
428 [!] Note that the first load really has to have an _address_ dependency and
429 not a control dependency. If the address for the second load is dependent
430 on the first load, but the dependency is through a conditional rather than
431 actually loading the address itself, then it's a _control_ dependency and
432 a full read barrier or better is required. See the "Control dependencies"
433 subsection for more information.
434
435 [!] Note that address-dependency barriers should normally be paired with
436 write barriers; see the "SMP barrier pairing" subsection.
437
438 [!] Kernel release v5.9 removed kernel APIs for explicit address-
439 dependency barriers. Nowadays, APIs for marking loads from shared
440 variables such as READ_ONCE() and rcu_dereference() provide implicit
441 address-dependency barriers.
442
443 (3) Read (or load) memory barriers.
444
445 A read barrier is an address-dependency barrier plus a guarantee that all
446 the LOAD operations specified before the barrier will appear to happen
447 before all the LOAD operations specified after the barrier with respect to
448 the other components of the system.
449
450 A read barrier is a partial ordering on loads only; it is not required to
451 have any effect on stores.
452
453 Read memory barriers imply address-dependency barriers, and so can
454 substitute for them.
455
456 [!] Note that read barriers should normally be paired with write barriers;
457 see the "SMP barrier pairing" subsection.
458
459
460 (4) General memory barriers.
461
462 A general memory barrier gives a guarantee that all the LOAD and STORE
463 operations specified before the barrier will appear to happen before all
464 the LOAD and STORE operations specified after the barrier with respect to
465 the other components of the system.
466
467 A general memory barrier is a partial ordering over both loads and stores.
468
469 General memory barriers imply both read and write memory barriers, and so
470 can substitute for either.
471
472
473 And a couple of implicit varieties:
474
475 (5) ACQUIRE operations.
476
477 This acts as a one-way permeable barrier. It guarantees that all memory
478 operations after the ACQUIRE operation will appear to happen after the
479 ACQUIRE operation with respect to the other components of the system.
480 ACQUIRE operations include LOCK operations and both smp_load_acquire()
481 and smp_cond_load_acquire() operations.
482
483 Memory operations that occur before an ACQUIRE operation may appear to
484 happen after it completes.
485
486         An ACQUIRE operation should almost always be paired with a RELEASE
487         operation; a minimal sketch of such a pairing appears below.
488
489
490 (6) RELEASE operations.
491
492 This also acts as a one-way permeable barrier. It guarantees that all
493 memory operations before the RELEASE operation will appear to happen
494 before the RELEASE operation with respect to the other components of the
495 system. RELEASE operations include UNLOCK operations and
496 smp_store_release() operations.
497
498 Memory operations that occur after a RELEASE operation may appear to
499 happen before it completes.
500
501 The use of ACQUIRE and RELEASE operations generally precludes the need
502 for other sorts of memory barrier. In addition, a RELEASE+ACQUIRE pair is
503 -not- guaranteed to act as a full memory barrier. However, after an
504 ACQUIRE on a given variable, all memory accesses preceding any prior
505 RELEASE on that same variable are guaranteed to be visible. In other
506 words, within a given variable's critical section, all accesses of all
507 previous critical sections for that variable are guaranteed to have
508 completed.
509
510 This means that ACQUIRE acts as a minimal "acquire" operation and
511 RELEASE acts as a minimal "release" operation.
512
513 A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
514 RELEASE variants in addition to fully-ordered and relaxed (no barrier
515 semantics) definitions. For compound atomics performing both a load and a
516 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
517 only to the store portion of the operation.
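
A minimal sketch of such an ACQUIRE/RELEASE pairing, assuming hypothetical
shared variables msg_a, msg_b and ready, all initially zero:

	void producer(void)
	{
		msg_a = 1;
		msg_b = 2;
		smp_store_release(&ready, 1);	/* RELEASE operation */
	}

	void consumer(void)
	{
		int r1, r2;

		while (!smp_load_acquire(&ready))	/* ACQUIRE operation */
			cpu_relax();
		r1 = msg_a;	/* guaranteed to see 1 */
		r2 = msg_b;	/* guaranteed to see 2 */
	}

Once consumer()'s ACQUIRE reads the value stored by producer()'s RELEASE,
all of producer()'s earlier memory accesses are guaranteed to be visible to
the accesses that follow the ACQUIRE.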
518
519 Memory barriers are only required where there's a possibility of interaction
520 between two CPUs or between a CPU and a device. If it can be guaranteed that
521 there won't be any such interaction in any particular piece of code, then
522 memory barriers are unnecessary in that piece of code.
523
524
525 Note that these are the _minimum_ guarantees. Different architectures may give
526 more substantial guarantees, but they may _not_ be relied upon outside of arch
527 specific code.
528
529
530 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
531 ----------------------------------------------
532
533 There are certain things that the Linux kernel memory barriers do not guarantee:
534
535 (*) There is no guarantee that any of the memory accesses specified before a
536 memory barrier will be _complete_ by the completion of a memory barrier
537 instruction; the barrier can be considered to draw a line in that CPU's
538 access queue that accesses of the appropriate type may not cross.
539
540 (*) There is no guarantee that issuing a memory barrier on one CPU will have
541 any direct effect on another CPU or any other hardware in the system. The
542 indirect effect will be the order in which the second CPU sees the effects
543 of the first CPU's accesses occur, but see the next point:
544
545 (*) There is no guarantee that a CPU will see the correct order of effects
546 from a second CPU's accesses, even _if_ the second CPU uses a memory
547 barrier, unless the first CPU _also_ uses a matching memory barrier (see
548 the subsection on "SMP Barrier Pairing").
549
550 (*) There is no guarantee that some intervening piece of off-the-CPU
551 hardware[*] will not reorder the memory accesses. CPU cache coherency
552 mechanisms should propagate the indirect effects of a memory barrier
553 between CPUs, but might not do so in order.
554
555 [*] For information on bus mastering DMA and coherency please read:
556
557 Documentation/driver-api/pci/pci.rst
558 Documentation/core-api/dma-api-howto.rst
559 Documentation/core-api/dma-api.rst
560
561
562 ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
563 ----------------------------------------
564 [!] This section is marked as HISTORICAL: it covers the long-obsolete
565 smp_read_barrier_depends() macro, the semantics of which are now implicit
566 in all marked accesses. For more up-to-date information, including
567 how compiler transformations can sometimes break address dependencies,
568 see Documentation/RCU/rcu_dereference.rst.
569
570 As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
571 DEC Alpha, which means that about the only people who need to pay attention
572 to this section are those working on DEC Alpha architecture-specific code
573 and those working on READ_ONCE() itself. For those who need it, and for
574 those who are interested in the history, here is the story of
575 address-dependency barriers.
576
577 [!] While address dependencies are observed in both load-to-load and
578 load-to-store relations, address-dependency barriers are not necessary
579 for load-to-store situations.
580
581 The requirement of address-dependency barriers is a little subtle, and
582 it's not always obvious that they're needed. To illustrate, consider the
583 following sequence of events:
584
585 CPU 1 CPU 2
586 =============== ===============
587 { A == 1, B == 2, C == 3, P == &A, Q == &C }
588 B = 4;
589 <write barrier>
590 WRITE_ONCE(P, &B);
591 Q = READ_ONCE_OLD(P);
592 D = *Q;
593
594 [!] READ_ONCE_OLD() corresponds to READ_ONCE() of pre-4.15 kernels, which
595 doesn't imply an address-dependency barrier.
596
597 There's a clear address dependency here, and it would seem that by the end of
598 the sequence, Q must be either &A or &B, and that:
599
600 (Q == &A) implies (D == 1)
601 (Q == &B) implies (D == 4)
602
603 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
604 leading to the following situation:
605
606 (Q == &B) and (D == 2) ????
607
608 While this may seem like a failure of coherency or causality maintenance, it
609 isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
610 Alpha).
611
612 To deal with this, READ_ONCE() provides an implicit address-dependency barrier
613 since kernel release v4.15:
614
615 CPU 1 CPU 2
616 =============== ===============
617 { A == 1, B == 2, C == 3, P == &A, Q == &C }
618 B = 4;
619 <write barrier>
620 WRITE_ONCE(P, &B);
621 Q = READ_ONCE(P);
622 <implicit address-dependency barrier>
623 D = *Q;
624
625 This enforces the occurrence of one of the two implications, and prevents the
626 third possibility from arising.
627
628
629 [!] Note that this extremely counterintuitive situation arises most easily on
630 machines with split caches, so that, for example, one cache bank processes
631 even-numbered cache lines and the other bank processes odd-numbered cache
632 lines. The pointer P might be stored in an odd-numbered cache line, and the
633 variable B might be stored in an even-numbered cache line. Then, if the
634 even-numbered bank of the reading CPU's cache is extremely busy while the
635 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
636 but the old value of the variable B (2).
637
638
639 An address-dependency barrier is not required to order dependent writes
640 because the CPUs that the Linux kernel supports don't do writes until they
641 are certain (1) that the write will actually happen, (2) of the location of
642 the write, and (3) of the value to be written.
643 But please carefully read the "CONTROL DEPENDENCIES" section and the
644 Documentation/RCU/rcu_dereference.rst file: The compiler can and does break
645 dependencies in a great many highly creative ways.
646
647 CPU 1 CPU 2
648 =============== ===============
649         { A == 1, B == 2, C == 3, P == &A, Q == &C }
650 B = 4;
651 <write barrier>
652 WRITE_ONCE(P, &B);
653 Q = READ_ONCE_OLD(P);
654 WRITE_ONCE(*Q, 5);
655
656 Therefore, no address-dependency barrier is required to order the read into
657 Q with the store into *Q. In other words, this outcome is prohibited,
658 even without the implicit address-dependency barrier of modern READ_ONCE():
659
660 (Q == &B) && (B == 4)
661
662 Please note that this pattern should be rare. After all, the whole point
663 of dependency ordering is to -prevent- writes to the data structure, along
664 with the expensive cache misses associated with those writes. This pattern
665 can be used to record rare error conditions and the like, and the CPUs'
666 naturally occurring ordering prevents such records from being lost.
667
668
669 Note well that the ordering provided by an address dependency is local to
670 the CPU containing it. See the section on "Multicopy atomicity" for
671 more information.
672
673
674 The address-dependency barrier is very important to the RCU system,
675 for example. See rcu_assign_pointer() and rcu_dereference() in
676 include/linux/rcupdate.h. This permits the current target of an RCU'd
677 pointer to be replaced with a new modified target, without the replacement
678 target appearing to be incompletely initialised.
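
A minimal sketch of this usage; the global pointer gp, the structure and
do_something() are hypothetical, and error handling is omitted:

	struct foo { int a; };
	struct foo __rcu *gp;

	/* Updater */
	struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

	p->a = 1;
	rcu_assign_pointer(gp, p);	/* publish the initialised structure */

	/* Reader */
	struct foo *q;

	rcu_read_lock();
	q = rcu_dereference(gp);	/* implies an address-dependency barrier */
	if (q)
		do_something(q->a);	/* never sees an uninitialised ->a */
	rcu_read_unlock();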
679
680 See also the subsection on "Cache Coherency" for a more thorough example.
681
682
683 CONTROL DEPENDENCIES
684 --------------------
685
686 Control dependencies can be a bit tricky because current compilers do
687 not understand them. The purpose of this section is to help you prevent
688 the compiler's ignorance from breaking your code.
689
690 A load-load control dependency requires a full read memory barrier, not
691 simply an (implicit) address-dependency barrier to make it work correctly.
692 Consider the following bit of code:
693
694 q = READ_ONCE(a);
695 <implicit address-dependency barrier>
696 if (q) {
697 /* BUG: No address dependency!!! */
698 p = READ_ONCE(b);
699 }
700
701 This will not have the desired effect because there is no actual address
702 dependency, but rather a control dependency that the CPU may short-circuit
703 by attempting to predict the outcome in advance, so that other CPUs see
704 the load from b as having happened before the load from a. In such a case
705 what's actually required is:
706
707 q = READ_ONCE(a);
708 if (q) {
709 <read barrier>
710 p = READ_ONCE(b);
711 }
712
713 However, stores are not speculated. This means that ordering -is- provided
714 for load-store control dependencies, as in the following example:
715
716 q = READ_ONCE(a);
717 if (q) {
718 WRITE_ONCE(b, 1);
719 }
720
721 Control dependencies pair normally with other types of barriers.
722 That said, please note that neither READ_ONCE() nor WRITE_ONCE()
723 are optional! Without the READ_ONCE(), the compiler might combine the
724 load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
725 the compiler might combine the store to 'b' with other stores to 'b'.
726 Either can result in highly counterintuitive effects on ordering.
727
728 Worse yet, if the compiler is able to prove (say) that the value of
729 variable 'a' is always non-zero, it would be well within its rights
730 to optimize the original example by eliminating the "if" statement
731 as follows:
732
733 q = a;
734 b = 1; /* BUG: Compiler and CPU can both reorder!!! */
735
736 So don't leave out the READ_ONCE().
737
738 It is tempting to try to enforce ordering on identical stores on both
739 branches of the "if" statement as follows:
740
741 q = READ_ONCE(a);
742 if (q) {
743 barrier();
744 WRITE_ONCE(b, 1);
745 do_something();
746 } else {
747 barrier();
748 WRITE_ONCE(b, 1);
749 do_something_else();
750 }
751
752 Unfortunately, current compilers will transform this as follows at high
753 optimization levels:
754
755 q = READ_ONCE(a);
756 barrier();
757 WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */
758 if (q) {
759 /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
760 do_something();
761 } else {
762 /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
763 do_something_else();
764 }
765
766 Now there is no conditional between the load from 'a' and the store to
767 'b', which means that the CPU is within its rights to reorder them:
768 The conditional is absolutely required, and must be present in the
769 assembly code even after all compiler optimizations have been applied.
770 Therefore, if you need ordering in this example, you need explicit
771 memory barriers, for example, smp_store_release():
772
773 q = READ_ONCE(a);
774 if (q) {
775 smp_store_release(&b, 1);
776 do_something();
777 } else {
778 smp_store_release(&b, 1);
779 do_something_else();
780 }
781
782 In contrast, without explicit memory barriers, two-legged-if control
783 ordering is guaranteed only when the stores differ, for example:
784
785 q = READ_ONCE(a);
786 if (q) {
787 WRITE_ONCE(b, 1);
788 do_something();
789 } else {
790 WRITE_ONCE(b, 2);
791 do_something_else();
792 }
793
794 The initial READ_ONCE() is still required to prevent the compiler from
795 proving the value of 'a'.
796
797 In addition, you need to be careful what you do with the local variable 'q',
798 otherwise the compiler might be able to guess the value and again remove
799 the needed conditional. For example:
800
801 q = READ_ONCE(a);
802 if (q % MAX) {
803 WRITE_ONCE(b, 1);
804 do_something();
805 } else {
806 WRITE_ONCE(b, 2);
807 do_something_else();
808 }
809
810 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
811 equal to zero, in which case the compiler is within its rights to
812 transform the above code into the following:
813
814 q = READ_ONCE(a);
815 WRITE_ONCE(b, 2);
816 do_something_else();
817
818 Given this transformation, the CPU is not required to respect the ordering
819 between the load from variable 'a' and the store to variable 'b'. It is
820 tempting to add a barrier(), but this does not help. The conditional
821 is gone, and the barrier won't bring it back. Therefore, if you are
822 relying on this ordering, you should make sure that MAX is greater than
823 one, perhaps as follows:
824
825 q = READ_ONCE(a);
826 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
827 if (q % MAX) {
828 WRITE_ONCE(b, 1);
829 do_something();
830 } else {
831 WRITE_ONCE(b, 2);
832 do_something_else();
833 }
834
835 Please note once again that the stores to 'b' differ. If they were
836 identical, as noted earlier, the compiler could pull this store outside
837 of the 'if' statement.
838
839 You must also be careful not to rely too much on boolean short-circuit
840 evaluation. Consider this example:
841
842 q = READ_ONCE(a);
843 if (q || 1 > 0)
844 WRITE_ONCE(b, 1);
845
846 Because the first condition cannot fault and the second condition is
847 always true, the compiler can transform this example as follows,
848 defeating the control dependency:
849
850 q = READ_ONCE(a);
851 WRITE_ONCE(b, 1);
852
853 This example underscores the need to ensure that the compiler cannot
854 out-guess your code. More generally, although READ_ONCE() does force
855 the compiler to actually emit code for a given load, it does not force
856 the compiler to use the results.
857
858 In addition, control dependencies apply only to the then-clause and
859 else-clause of the if-statement in question. In particular, they do
860 not necessarily apply to code following the if-statement:
861
862 q = READ_ONCE(a);
863 if (q) {
864 WRITE_ONCE(b, 1);
865 } else {
866 WRITE_ONCE(b, 2);
867 }
868 WRITE_ONCE(c, 1); /* BUG: No ordering against the read from 'a'. */
869
870 It is tempting to argue that there in fact is ordering because the
871 compiler cannot reorder volatile accesses and also cannot reorder
872 the writes to 'b' with the condition. Unfortunately for this line
873 of reasoning, the compiler might compile the two writes to 'b' as
874 conditional-move instructions, as in this fanciful pseudo-assembly
875 language:
876
877 ld r1,a
878 cmp r1,$0
879 cmov,ne r4,$1
880 cmov,eq r4,$2
881 st r4,b
882 st $1,c
883
884 A weakly ordered CPU would have no dependency of any sort between the load
885 from 'a' and the store to 'c'. The control dependencies would extend
886 only to the pair of cmov instructions and the store depending on them.
887 In short, control dependencies apply only to the stores in the then-clause
888 and else-clause of the if-statement in question (including functions
889 invoked by those two clauses), not to code following that if-statement.
890
891
892 Note well that the ordering provided by a control dependency is local
893 to the CPU containing it. See the section on "Multicopy atomicity"
894 for more information.
895
896
897 In summary:
898
899 (*) Control dependencies can order prior loads against later stores.
900 However, they do -not- guarantee any other sort of ordering:
901 Not prior loads against later loads, nor prior stores against
902 later anything. If you need these other forms of ordering,
903 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
904 later loads, smp_mb().
905
906 (*) If both legs of the "if" statement begin with identical stores to
907 the same variable, then those stores must be ordered, either by
908 preceding both of them with smp_mb() or by using smp_store_release()
909 to carry out the stores. Please note that it is -not- sufficient
910 to use barrier() at beginning of each leg of the "if" statement
911 because, as shown by the example above, optimizing compilers can
912 destroy the control dependency while respecting the letter of the
913 barrier() law.
914
915 (*) Control dependencies require at least one run-time conditional
916 between the prior load and the subsequent store, and this
917 conditional must involve the prior load. If the compiler is able
918 to optimize the conditional away, it will have also optimized
919 away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
920 can help to preserve the needed conditional.
921
922 (*) Control dependencies require that the compiler avoid reordering the
923 dependency into nonexistence. Careful use of READ_ONCE() or
924 atomic{,64}_read() can help to preserve your control dependency.
925 Please see the COMPILER BARRIER section for more information.
926
927 (*) Control dependencies apply only to the then-clause and else-clause
928 of the if-statement containing the control dependency, including
929 any functions that these two clauses call. Control dependencies
930 do -not- apply to code following the if-statement containing the
931 control dependency.
932
933 (*) Control dependencies pair normally with other types of barriers.
934
935 (*) Control dependencies do -not- provide multicopy atomicity. If you
936 need all the CPUs to see a given store at the same time, use smp_mb().
937
938 (*) Compilers do not understand control dependencies. It is therefore
939 your job to ensure that they do not break your code.
940
941
942 SMP BARRIER PAIRING
943 -------------------
944
945 When dealing with CPU-CPU interactions, certain types of memory barrier should
946 always be paired. A lack of appropriate pairing is almost certainly an error.
947
948 General barriers pair with each other, though they also pair with most
949 other types of barriers, albeit without multicopy atomicity. An acquire
950 barrier pairs with a release barrier, but both may also pair with other
951 barriers, including of course general barriers. A write barrier pairs
952 with an address-dependency barrier, a control dependency, an acquire barrier,
953 a release barrier, a read barrier, or a general barrier. Similarly a
954 read barrier, control dependency, or an address-dependency barrier pairs
955 with a write barrier, an acquire barrier, a release barrier, or a
956 general barrier:
957
958 CPU 1 CPU 2
959 =============== ===============
960 WRITE_ONCE(a, 1);
961 <write barrier>
962 WRITE_ONCE(b, 2); x = READ_ONCE(b);
963 <read barrier>
964 y = READ_ONCE(a);
965
966 Or:
967
968 CPU 1 CPU 2
969 =============== ===============================
970 a = 1;
971 <write barrier>
972 WRITE_ONCE(b, &a); x = READ_ONCE(b);
973 <implicit address-dependency barrier>
974 y = *x;
975
976 Or even:
977
978 CPU 1 CPU 2
979 =============== ===============================
980 r1 = READ_ONCE(y);
981 <general barrier>
982 WRITE_ONCE(x, 1); if (r2 = READ_ONCE(x)) {
983 <implicit control dependency>
984 WRITE_ONCE(y, 1);
985 }
986
987 assert(r1 == 0 || r2 == 0);
988
989 Basically, the read barrier always has to be there, even though it can be of
990 the "weaker" type.
991
992 [!] Note that the stores before the write barrier would normally be expected to
993 match the loads after the read barrier or the address-dependency barrier, and
994 vice versa:
995
996 CPU 1 CPU 2
997 =================== ===================
998 WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c);
999 WRITE_ONCE(b, 2); } \ / { w = READ_ONCE(d);
1000 <write barrier> \ <read barrier>
1001 WRITE_ONCE(c, 3); } / \ { x = READ_ONCE(a);
1002 WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b);
1003
1004
1005 EXAMPLES OF MEMORY BARRIER SEQUENCES
1006 ------------------------------------
1007
1008 Firstly, write barriers act as partial orderings on store operations.
1009 Consider the following sequence of events:
1010
1011 CPU 1
1012 =======================
1013 STORE A = 1
1014 STORE B = 2
1015 STORE C = 3
1016 <write barrier>
1017 STORE D = 4
1018 STORE E = 5
1019
1020 This sequence of events is committed to the memory coherence system in an order
1021 that the rest of the system might perceive as the unordered set of { STORE A,
1022 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
1023 }:
1024
1025 +-------+ : :
1026 | | +------+
1027 | |------>| C=3 | } /\
1028 | | : +------+ }----- \ -----> Events perceptible to
1029 | | : | A=1 | } \/ the rest of the system
1030 | | : +------+ }
1031 | CPU 1 | : | B=2 | }
1032 | | +------+ }
1033 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
1034 | | +------+ } requires all stores prior to the
1035 | | : | E=5 | } barrier to be committed before
1036 | | : +------+ } further stores may take place
1037 | |------>| D=4 | }
1038 | | +------+
1039 +-------+ : :
1040 |
1041 | Sequence in which stores are committed to the
1042 | memory system by CPU 1
1043 V
1044
1045
1046 Secondly, address-dependency barriers act as partial orderings on address-
1047 dependent loads. Consider the following sequence of events:
1048
1049 CPU 1 CPU 2
1050 ======================= =======================
1051 { B = 7; X = 9; Y = 8; C = &Y }
1052 STORE A = 1
1053 STORE B = 2
1054 <write barrier>
1055 STORE C = &B LOAD X
1056 STORE D = 4 LOAD C (gets &B)
1057 LOAD *C (reads B)
1058
1059 Without intervention, CPU 2 may perceive the events on CPU 1 in some
1060 effectively random order, despite the write barrier issued by CPU 1:
1061
1062 +-------+ : : : :
1063 | | +------+ +-------+ | Sequence of update
1064 | |------>| B=2 |----- --->| Y->8 | | of perception on
1065 | | : +------+ \ +-------+ | CPU 2
1066 | CPU 1 | : | A=1 | \ --->| C->&Y | V
1067 | | +------+ | +-------+
1068 | | wwwwwwwwwwwwwwww | : :
1069 | | +------+ | : :
1070 | | : | C=&B |--- | : : +-------+
1071 | | : +------+ \ | +-------+ | |
1072 | |------>| D=4 | ----------->| C->&B |------>| |
1073 | | +------+ | +-------+ | |
1074 +-------+ : : | : : | |
1075 | : : | |
1076 | : : | CPU 2 |
1077 | +-------+ | |
1078 Apparently incorrect ---> | | B->7 |------>| |
1079 perception of B (!) | +-------+ | |
1080 | : : | |
1081 | +-------+ | |
1082 The load of X holds ---> \ | X->9 |------>| |
1083 up the maintenance \ +-------+ | |
1084 of coherence of B ----->| B->2 | +-------+
1085 +-------+
1086 : :
1087
1088
1089 In the above example, CPU 2 perceives that B is 7, despite the load of *C
1090 (which would be B) coming after the LOAD of C.
1091
1092 If, however, an address-dependency barrier were to be placed between the load
1093 of C and the load of *C (ie: B) on CPU 2:
1094
1095 CPU 1 CPU 2
1096 ======================= =======================
1097 { B = 7; X = 9; Y = 8; C = &Y }
1098 STORE A = 1
1099 STORE B = 2
1100 <write barrier>
1101 STORE C = &B LOAD X
1102 STORE D = 4 LOAD C (gets &B)
1103 <address-dependency barrier>
1104 LOAD *C (reads B)
1105
1106 then the following will occur:
1107
1108 +-------+ : : : :
1109 | | +------+ +-------+
1110 | |------>| B=2 |----- --->| Y->8 |
1111 | | : +------+ \ +-------+
1112 | CPU 1 | : | A=1 | \ --->| C->&Y |
1113 | | +------+ | +-------+
1114 | | wwwwwwwwwwwwwwww | : :
1115 | | +------+ | : :
1116 | | : | C=&B |--- | : : +-------+
1117 | | : +------+ \ | +-------+ | |
1118 | |------>| D=4 | ----------->| C->&B |------>| |
1119 | | +------+ | +-------+ | |
1120 +-------+ : : | : : | |
1121 | : : | |
1122 | : : | CPU 2 |
1123 | +-------+ | |
1124 | | X->9 |------>| |
1125 | +-------+ | |
1126 Makes sure all effects ---> \ aaaaaaaaaaaaaaaaa | |
1127 prior to the store of C \ +-------+ | |
1128 are perceptible to ----->| B->2 |------>| |
1129 subsequent loads +-------+ | |
1130 : : +-------+
1131
1132
1133 And thirdly, a read barrier acts as a partial order on loads. Consider the
1134 following sequence of events:
1135
1136 CPU 1 CPU 2
1137 ======================= =======================
1138 { A = 0, B = 9 }
1139 STORE A=1
1140 <write barrier>
1141 STORE B=2
1142 LOAD B
1143 LOAD A
1144
1145 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1146 some effectively random order, despite the write barrier issued by CPU 1:
1147
1148 +-------+ : : : :
1149 | | +------+ +-------+
1150 | |------>| A=1 |------ --->| A->0 |
1151 | | +------+ \ +-------+
1152 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1153 | | +------+ | +-------+
1154 | |------>| B=2 |--- | : :
1155 | | +------+ \ | : : +-------+
1156 +-------+ : : \ | +-------+ | |
1157 ---------->| B->2 |------>| |
1158 | +-------+ | CPU 2 |
1159 | | A->0 |------>| |
1160 | +-------+ | |
1161 | : : +-------+
1162 \ : :
1163 \ +-------+
1164 ---->| A->1 |
1165 +-------+
1166 : :
1167
1168
1169 If, however, a read barrier were to be placed between the load of B and the
1170 load of A on CPU 2:
1171
1172 CPU 1 CPU 2
1173 ======================= =======================
1174 { A = 0, B = 9 }
1175 STORE A=1
1176 <write barrier>
1177 STORE B=2
1178 LOAD B
1179 <read barrier>
1180 LOAD A
1181
1182 then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
1183 2:
1184
1185 +-------+ : : : :
1186 | | +------+ +-------+
1187 | |------>| A=1 |------ --->| A->0 |
1188 | | +------+ \ +-------+
1189 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1190 | | +------+ | +-------+
1191 | |------>| B=2 |--- | : :
1192 | | +------+ \ | : : +-------+
1193 +-------+ : : \ | +-------+ | |
1194 ---------->| B->2 |------>| |
1195 | +-------+ | CPU 2 |
1196 | : : | |
1197 | : : | |
1198 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1199 barrier causes all effects \ +-------+ | |
1200 prior to the storage of B ---->| A->1 |------>| |
1201 to be perceptible to CPU 2 +-------+ | |
1202 : : +-------+
1203
1204
1205 To illustrate this more completely, consider what could happen if the code
1206 contained a load of A either side of the read barrier:
1207
1208 CPU 1 CPU 2
1209 ======================= =======================
1210 { A = 0, B = 9 }
1211 STORE A=1
1212 <write barrier>
1213 STORE B=2
1214 LOAD B
1215 LOAD A [first load of A]
1216 <read barrier>
1217 LOAD A [second load of A]
1218
1219 Even though the two loads of A both occur after the load of B, they may both
1220 come up with different values:
1221
1222 +-------+ : : : :
1223 | | +------+ +-------+
1224 | |------>| A=1 |------ --->| A->0 |
1225 | | +------+ \ +-------+
1226 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1227 | | +------+ | +-------+
1228 | |------>| B=2 |--- | : :
1229 | | +------+ \ | : : +-------+
1230 +-------+ : : \ | +-------+ | |
1231 ---------->| B->2 |------>| |
1232 | +-------+ | CPU 2 |
1233 | : : | |
1234 | : : | |
1235 | +-------+ | |
1236 | | A->0 |------>| 1st |
1237 | +-------+ | |
1238 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1239 barrier causes all effects \ +-------+ | |
1240 prior to the storage of B ---->| A->1 |------>| 2nd |
1241 to be perceptible to CPU 2 +-------+ | |
1242 : : +-------+
1243
1244
1245 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1246 before the read barrier completes anyway:
1247
1248 +-------+ : : : :
1249 | | +------+ +-------+
1250 | |------>| A=1 |------ --->| A->0 |
1251 | | +------+ \ +-------+
1252 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1253 | | +------+ | +-------+
1254 | |------>| B=2 |--- | : :
1255 | | +------+ \ | : : +-------+
1256 +-------+ : : \ | +-------+ | |
1257 ---------->| B->2 |------>| |
1258 | +-------+ | CPU 2 |
1259 | : : | |
1260 \ : : | |
1261 \ +-------+ | |
1262 ---->| A->1 |------>| 1st |
1263 +-------+ | |
1264 rrrrrrrrrrrrrrrrr | |
1265 +-------+ | |
1266 | A->1 |------>| 2nd |
1267 +-------+ | |
1268 : : +-------+
1269
1270
1271 The guarantee is that the second load will always come up with A == 1 if the
1272 load of B came up with B == 2. No such guarantee exists for the first load of
1273 A; that may come up with either A == 0 or A == 1.
1274
1275
1276 READ MEMORY BARRIERS VS LOAD SPECULATION
1277 ----------------------------------------
1278
1279 Many CPUs speculate with loads: that is they see that they will need to load an
1280 item from memory, and they find a time where they're not using the bus for any
1281 other loads, and so do the load in advance - even though they haven't actually
1282 got to that point in the instruction execution flow yet. This permits the
1283 actual load instruction to potentially complete immediately because the CPU
1284 already has the value to hand.
1285
1286 It may turn out that the CPU didn't actually need the value - perhaps because a
1287 branch circumvented the load - in which case it can discard the value or just
1288 cache it for later use.
1289
1290 Consider:
1291
1292 CPU 1 CPU 2
1293 ======================= =======================
1294 LOAD B
1295 DIVIDE } Divide instructions generally
1296 DIVIDE } take a long time to perform
1297 LOAD A
1298
1299 Which might appear as this:
1300
1301 : : +-------+
1302 +-------+ | |
1303 --->| B->2 |------>| |
1304 +-------+ | CPU 2 |
1305 : :DIVIDE | |
1306 +-------+ | |
1307 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1308 division speculates on the +-------+ ~ | |
1309 LOAD of A : : ~ | |
1310 : :DIVIDE | |
1311 : : ~ | |
1312 Once the divisions are complete --> : : ~-->| |
1313 the CPU can then perform the : : | |
1314 LOAD with immediate effect : : +-------+
1315
1316
1317 Placing a read barrier or an address-dependency barrier just before the second
1318 load:
1319
1320 CPU 1 CPU 2
1321 ======================= =======================
1322 LOAD B
1323 DIVIDE
1324 DIVIDE
1325 <read barrier>
1326 LOAD A
1327
1328 will force any value speculatively obtained to be reconsidered to an extent
1329 dependent on the type of barrier used. If there was no change made to the
1330 speculated memory location, then the speculated value will just be used:
1331
1332 : : +-------+
1333 +-------+ | |
1334 --->| B->2 |------>| |
1335 +-------+ | CPU 2 |
1336 : :DIVIDE | |
1337 +-------+ | |
1338 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1339 division speculates on the +-------+ ~ | |
1340 LOAD of A : : ~ | |
1341 : :DIVIDE | |
1342 : : ~ | |
1343 : : ~ | |
1344 rrrrrrrrrrrrrrrr~ | |
1345 : : ~ | |
1346 : : ~-->| |
1347 : : | |
1348 : : +-------+
1349
1350
1351 but if there was an update or an invalidation from another CPU pending, then
1352 the speculation will be cancelled and the value reloaded:
1353
1354 : : +-------+
1355 +-------+ | |
1356 --->| B->2 |------>| |
1357 +-------+ | CPU 2 |
1358 : :DIVIDE | |
1359 +-------+ | |
1360 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1361 division speculates on the +-------+ ~ | |
1362 LOAD of A : : ~ | |
1363 : :DIVIDE | |
1364 : : ~ | |
1365 : : ~ | |
1366 rrrrrrrrrrrrrrrrr | |
1367 +-------+ | |
1368 The speculation is discarded ---> --->| A->1 |------>| |
1369 and an updated value is +-------+ | |
1370 retrieved : : +-------+
1371
1372
1373 MULTICOPY ATOMICITY
1374 --------------------
1375
1376 Multicopy atomicity is a deeply intuitive notion about ordering that is
1377 not always provided by real computer systems, namely that a given store
1378 becomes visible at the same time to all CPUs, or, alternatively, that all
1379 CPUs agree on the order in which all stores become visible. However,
1380 support of full multicopy atomicity would rule out valuable hardware
1381 optimizations, so a weaker form called ``other multicopy atomicity''
1382 instead guarantees only that a given store becomes visible at the same
1383 time to all -other- CPUs. The remainder of this document discusses this
1384 weaker form, but for brevity will call it simply ``multicopy atomicity''.
1385
1386 The following example demonstrates multicopy atomicity:
1387
1388 CPU 1 CPU 2 CPU 3
1389 ======================= ======================= =======================
1390 { X = 0, Y = 0 }
1391 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1392 <general barrier> <read barrier>
1393 STORE Y=r1 LOAD X
1394
1395 Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1396 and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1397 to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1398 CPU 3's load from Y. In addition, the memory barriers guarantee that
1399 CPU 2 executes its load before its store, and CPU 3 loads from Y before
1400 it loads from X. The question is then "Can CPU 3's load from X return 0?"
1401
1402 Because CPU 3's load from X in some sense comes after CPU 2's load, it
1403 is natural to expect that CPU 3's load from X must therefore return 1.
1404 This expectation follows from multicopy atomicity: if a load executing
1405 on CPU B follows a load from the same variable executing on CPU A (and
1406 CPU A did not originally store the value which it read), then on
1407 multicopy-atomic systems, CPU B's load must return either the same value
1408 that CPU A's load did or some later value. However, the Linux kernel
1409 does not require systems to be multicopy atomic.
1410
1411 The use of a general memory barrier in the example above compensates
1412 for any lack of multicopy atomicity. In the example, if CPU 2's load
1413 from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
1414 from X must indeed also return 1.
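
In terms of kernel primitives, the example above might be written as the
following sketch, where X, Y and the r variables are hypothetical and
initially zero:

	CPU 1			CPU 2			CPU 3
	===============		===============		===============
	WRITE_ONCE(X, 1);	r1 = READ_ONCE(X);	r2 = READ_ONCE(Y);
				smp_mb();		smp_rmb();
				WRITE_ONCE(Y, r1);	r3 = READ_ONCE(X);

If r1 == 1 and r2 == 1, then r3 is guaranteed to be 1 as well.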
1415
1416 However, dependencies, read barriers, and write barriers are not always
1417 able to compensate for non-multicopy atomicity. For example, suppose
1418 that CPU 2's general barrier is removed from the above example, leaving
1419 only the data dependency shown below:
1420
1421 CPU 1 CPU 2 CPU 3
1422 ======================= ======================= =======================
1423 { X = 0, Y = 0 }
1424 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1425 <data dependency> <read barrier>
1426 STORE Y=r1 LOAD X (reads 0)
1427
1428 This substitution allows non-multicopy atomicity to run rampant: in
1429 this example, it is perfectly legal for CPU 2's load from X to return 1,
1430 CPU 3's load from Y to return 1, and its load from X to return 0.
1431
1432 The key point is that although CPU 2's data dependency orders its load
1433 and store, it does not guarantee to order CPU 1's store. Thus, if this
1434 example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
1435 store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1436 writes. General barriers are therefore required to ensure that all CPUs
1437 agree on the combined order of multiple accesses.
1438
1439 General barriers can compensate not only for non-multicopy atomicity,
1440 but can also generate additional ordering that can ensure that -all-
1441 CPUs will perceive the same order of -all- operations. In contrast, a
1442 chain of release-acquire pairs does not provide this additional ordering,
1443 which means that only those CPUs on the chain are guaranteed to agree
1444 on the combined order of the accesses. For example, switching to C code
1445 in deference to the ghost of Herman Hollerith:
1446
1447 int u, v, x, y, z;
1448
1449 void cpu0(void)
1450 {
1451 r0 = smp_load_acquire(&x);
1452 WRITE_ONCE(u, 1);
1453 smp_store_release(&y, 1);
1454 }
1455
1456 void cpu1(void)
1457 {
1458 r1 = smp_load_acquire(&y);
1459 r4 = READ_ONCE(v);
1460 r5 = READ_ONCE(u);
1461 smp_store_release(&z, 1);
1462 }
1463
1464 void cpu2(void)
1465 {
1466 r2 = smp_load_acquire(&z);
1467 smp_store_release(&x, 1);
1468 }
1469
1470 void cpu3(void)
1471 {
1472 WRITE_ONCE(v, 1);
1473 smp_mb();
1474 r3 = READ_ONCE(u);
1475 }
1476
1477 Because cpu0(), cpu1(), and cpu2() participate in a chain of
1478 smp_store_release()/smp_load_acquire() pairs, the following outcome
1479 is prohibited:
1480
1481 r0 == 1 && r1 == 1 && r2 == 1
1482
1483 Furthermore, because of the release-acquire relationship between cpu0()
1484 and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1485 outcome is prohibited:
1486
1487 r1 == 1 && r5 == 0
1488
1489 However, the ordering provided by a release-acquire chain is local
1490 to the CPUs participating in that chain and does not apply to cpu3(),
1491 at least aside from stores. Therefore, the following outcome is possible:
1492
1493 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1494
1495 As an aside, the following outcome is also possible:
1496
1497 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1498
1499 Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1500 writes in order, CPUs not involved in the release-acquire chain might
1501 well disagree on the order. This disagreement stems from the fact that
1502 the weak memory-barrier instructions used to implement smp_load_acquire()
1503 and smp_store_release() are not required to order prior stores against
1504 subsequent loads in all cases. This means that cpu3() can see cpu0()'s
1505 store to u as happening -after- cpu1()'s load from v, even though
1506 both cpu0() and cpu1() agree that these two operations occurred in the
1507 intended order.
1508
1509 However, please keep in mind that smp_load_acquire() is not magic.
1510 In particular, it simply reads from its argument with ordering. It does
1511 -not- ensure that any particular value will be read. Therefore, the
1512 following outcome is possible:
1513
1514 r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1515
1516 Note that this outcome can happen even on a mythical sequentially
1517 consistent system where nothing is ever reordered.
1518
1519 To reiterate, if your code requires full ordering of all operations,
1520 use general barriers throughout.
1521
1522
1523 ========================
1524 EXPLICIT KERNEL BARRIERS
1525 ========================
1526
1527 The Linux kernel has a variety of different barriers that act at different
1528 levels:
1529
1530 (*) Compiler barrier.
1531
1532 (*) CPU memory barriers.
1533
1534
1535 COMPILER BARRIER
1536 ----------------
1537
1538 The Linux kernel has an explicit compiler barrier function that prevents the
1539 compiler from moving the memory accesses either side of it to the other side:
1540
1541 barrier();
1542
1543 This is a general barrier -- there are no read-read or write-write
1544 variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
1545 thought of as weak forms of barrier() that affect only the specific
1546 accesses flagged by the READ_ONCE() or WRITE_ONCE().
1547
1548 The barrier() function has the following effects:
1549
1550 (*) Prevents the compiler from reordering accesses following the
1551 barrier() to precede any accesses preceding the barrier().
1552 One example use for this property is to ease communication between
1553 interrupt-handler code and the code that was interrupted.
1554
1555 (*) Within a loop, forces the compiler to load the variables used
1556 in that loop's conditional on each pass through that loop.
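
For example, a minimal sketch (not taken from this document; 'flag' is assumed
to be a shared variable written by some other context) of a busy-wait loop in
which barrier() forces 'flag' to be re-read on every pass rather than being
hoisted into a register:

	extern int flag;

	while (!flag)
		barrier();	/* 'flag' must be reloaded on each iteration */

In current kernel code the more selective idiom would be to read the variable
with READ_ONCE(), as noted above.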
1557
1558 The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1559 optimizations that, while perfectly safe in single-threaded code, can
1560 be fatal in concurrent code. Here are some examples of these sorts
1561 of optimizations:
1562
1563 (*) The compiler is within its rights to reorder loads and stores
1564 to the same variable, and in some cases, the CPU is within its
1565 rights to reorder loads to the same variable. This means that
1566 the following code:
1567
1568 a[0] = x;
1569 a[1] = x;
1570
1571 Might result in an older value of x stored in a[1] than in a[0].
1572 Prevent both the compiler and the CPU from doing this as follows:
1573
1574 a[0] = READ_ONCE(x);
1575 a[1] = READ_ONCE(x);
1576
1577 In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1578 accesses from multiple CPUs to a single variable.
1579
1580 (*) The compiler is within its rights to merge successive loads from
1581 the same variable. Such merging can cause the compiler to "optimize"
1582 the following code:
1583
1584 while (tmp = a)
1585 do_something_with(tmp);
1586
1587 into the following code, which, although in some sense legitimate
1588 for single-threaded code, is almost certainly not what the developer
1589 intended:
1590
1591 if (tmp = a)
1592 for (;;)
1593 do_something_with(tmp);
1594
1595 Use READ_ONCE() to prevent the compiler from doing this to you:
1596
1597 while (tmp = READ_ONCE(a))
1598 do_something_with(tmp);
1599
1600 (*) The compiler is within its rights to reload a variable, for example,
1601 in cases where high register pressure prevents the compiler from
1602 keeping all data of interest in registers. The compiler might
1603 therefore optimize the variable 'tmp' out of our previous example:
1604
1605 while (tmp = a)
1606 do_something_with(tmp);
1607
1608 This could result in the following code, which is perfectly safe in
1609 single-threaded code, but can be fatal in concurrent code:
1610
1611 while (a)
1612 do_something_with(a);
1613
1614 For example, the optimized version of this code could result in
1615 passing a zero to do_something_with() in the case where the variable
1616 a was modified by some other CPU between the "while" statement and
1617 the call to do_something_with().
1618
1619 Again, use READ_ONCE() to prevent the compiler from doing this:
1620
1621 while (tmp = READ_ONCE(a))
1622 do_something_with(tmp);
1623
1624 Note that if the compiler runs short of registers, it might save
1625 tmp onto the stack. The overhead of this saving and later restoring
1626 is why compilers reload variables. Doing so is perfectly safe for
1627 single-threaded code, so you need to tell the compiler about cases
1628 where it is not safe.
1629
1630 (*) The compiler is within its rights to omit a load entirely if it knows
1631 what the value will be. For example, if the compiler can prove that
1632 the value of variable 'a' is always zero, it can optimize this code:
1633
1634 while (tmp = a)
1635 do_something_with(tmp);
1636
1637 Into this:
1638
1639 do { } while (0);
1640
1641 This transformation is a win for single-threaded code because it
1642 gets rid of a load and a branch. The problem is that the compiler
1643 will carry out its proof assuming that the current CPU is the only
1644 one updating variable 'a'. If variable 'a' is shared, then the
1645 compiler's proof will be erroneous. Use READ_ONCE() to tell the
1646 compiler that it doesn't know as much as it thinks it does:
1647
1648 while (tmp = READ_ONCE(a))
1649 do_something_with(tmp);
1650
1651 But please note that the compiler is also closely watching what you
1652 do with the value after the READ_ONCE(). For example, suppose you
1653 do the following and MAX is a preprocessor macro with the value 1:
1654
1655 while ((tmp = READ_ONCE(a)) % MAX)
1656 do_something_with(tmp);
1657
1658 Then the compiler knows that the result of the "%" operator applied
1659 to MAX will always be zero, again allowing the compiler to optimize
1660 the code into near-nonexistence. (It will still load from the
1661 variable 'a'.)
1662
1663 (*) Similarly, the compiler is within its rights to omit a store entirely
1664 if it knows that the variable already has the value being stored.
1665 Again, the compiler assumes that the current CPU is the only one
1666 storing into the variable, which can cause the compiler to do the
1667 wrong thing for shared variables. For example, suppose you have
1668 the following:
1669
1670 a = 0;
1671 ... Code that does not store to variable a ...
1672 a = 0;
1673
1674 The compiler sees that the value of variable 'a' is already zero, so
1675 it might well omit the second store. This would come as a fatal
1676 surprise if some other CPU might have stored to variable 'a' in the
1677 meantime.
1678
1679 Use WRITE_ONCE() to prevent the compiler from making this sort of
1680 wrong guess:
1681
1682 WRITE_ONCE(a, 0);
1683 ... Code that does not store to variable a ...
1684 WRITE_ONCE(a, 0);
1685
1686 (*) The compiler is within its rights to reorder memory accesses unless
1687 you tell it not to. For example, consider the following interaction
1688 between process-level code and an interrupt handler:
1689
1690 void process_level(void)
1691 {
1692 msg = get_message();
1693 flag = true;
1694 }
1695
1696 void interrupt_handler(void)
1697 {
1698 if (flag)
1699 process_message(msg);
1700 }
1701
1702 There is nothing to prevent the compiler from transforming
1703 process_level() to the following; in fact, this might well be a
1704 win for single-threaded code:
1705
1706 void process_level(void)
1707 {
1708 flag = true;
1709 msg = get_message();
1710 }
1711
1712 If the interrupt occurs between these two statements, then
1713 interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE()
1714 to prevent this as follows:
1715
1716 void process_level(void)
1717 {
1718 WRITE_ONCE(msg, get_message());
1719 WRITE_ONCE(flag, true);
1720 }
1721
1722 void interrupt_handler(void)
1723 {
1724 if (READ_ONCE(flag))
1725 process_message(READ_ONCE(msg));
1726 }
1727
1728 Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1729 interrupt_handler() are needed if this interrupt handler can itself
1730 be interrupted by something that also accesses 'flag' and 'msg',
1731 for example, a nested interrupt or an NMI. Otherwise, READ_ONCE()
1732 and WRITE_ONCE() are not needed in interrupt_handler() other than
1733 for documentation purposes. (Note also that nested interrupts
1734 do not typically occur in modern Linux kernels; in fact, if an
1735 interrupt handler returns with interrupts enabled, you will get a
1736 WARN_ONCE() splat.)
1737
1738 You should assume that the compiler can move READ_ONCE() and
1739 WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1740 barrier(), or similar primitives.
1741
1742 This effect could also be achieved using barrier(), but READ_ONCE()
1743 and WRITE_ONCE() are more selective: With READ_ONCE() and
1744 WRITE_ONCE(), the compiler need only forget the contents of the
1745 indicated memory locations, while with barrier() the compiler must
1746 discard the value of all memory locations that it has currently
1747 cached in any machine registers. Of course, the compiler must also
1748 respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1749 though the CPU of course need not do so.
1750
1751 (*) The compiler is within its rights to invent stores to a variable,
1752 as in the following example:
1753
1754 if (a)
1755 b = a;
1756 else
1757 b = 42;
1758
1759 The compiler might save a branch by optimizing this as follows:
1760
1761 b = 42;
1762 if (a)
1763 b = a;
1764
1765 In single-threaded code, this is not only safe, but also saves
1766 a branch. Unfortunately, in concurrent code, this optimization
1767 could cause some other CPU to see a spurious value of 42 -- even
1768 if variable 'a' was never zero -- when loading variable 'b'.
1769 Use WRITE_ONCE() to prevent this as follows:
1770
1771 if (a)
1772 WRITE_ONCE(b, a);
1773 else
1774 WRITE_ONCE(b, 42);
1775
1776 The compiler can also invent loads. These are usually less
1777 damaging, but they can result in cache-line bouncing and thus in
1778 poor performance and scalability. Use READ_ONCE() to prevent
1779 invented loads.
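
As a hedged illustration (a sketch, not an example from the original text;
'a', 'b', 'cond' and 'tmp' are hypothetical), an invented load can arise when
the compiler turns a conditional load into an unconditional one, for example
in order to use a conditional-move instruction:

	/* As written: 'a' is loaded only when 'cond' is nonzero. */
	if (cond)
		b = a;

	/* The compiler might instead emit the equivalent of: */
	tmp = a;		/* invented load, performed even when !cond */
	if (cond)
		b = tmp;

Writing 'b = READ_ONCE(a);' in the conditional branch keeps the load where
the source placed it.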
1780
1781 (*) For aligned memory locations whose size allows them to be accessed
1782 with a single memory-reference instruction, READ_ONCE() and WRITE_ONCE() prevent
1783 "load tearing" and "store tearing," in which a single large access is replaced by
1784 multiple smaller accesses. For example, given an architecture having
1785 16-bit store instructions with 7-bit immediate fields, the compiler
1786 might be tempted to use two 16-bit store-immediate instructions to
1787 implement the following 32-bit store:
1788
1789 p = 0x00010002;
1790
1791 Please note that GCC really does use this sort of optimization,
1792 which is not surprising given that it would likely take more
1793 than two instructions to build the constant and then store it.
1794 This optimization can therefore be a win in single-threaded code.
1795 In fact, a recent bug (since fixed) caused GCC to incorrectly use
1796 this optimization in a volatile store. In the absence of such bugs,
1797 use of WRITE_ONCE() prevents store tearing in the following example:
1798
1799 WRITE_ONCE(p, 0x00010002);
1800
1801 Use of packed structures can also result in load and store tearing,
1802 as in this example:
1803
1804 struct __attribute__((__packed__)) foo {
1805 short a;
1806 int b;
1807 short c;
1808 };
1809 struct foo foo1, foo2;
1810 ...
1811
1812 foo2.a = foo1.a;
1813 foo2.b = foo1.b;
1814 foo2.c = foo1.c;
1815
1816 Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1817 volatile markings, the compiler would be well within its rights to
1818 implement these three assignment statements as a pair of 32-bit
1819 loads followed by a pair of 32-bit stores. This would result in
1820 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
1821 and WRITE_ONCE() again prevent tearing in this example:
1822
1823 foo2.a = foo1.a;
1824 WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1825 foo2.c = foo1.c;
1826
1827 All that aside, it is never necessary to use READ_ONCE() and
1828 WRITE_ONCE() on a variable that has been marked volatile. For example,
1829 because 'jiffies' is marked volatile, it is never necessary to
1830 say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and
1831 WRITE_ONCE() are implemented as volatile casts, which have no additional
1832 effect when the argument is already marked volatile.
1833
1834 Please note that these compiler barriers have no direct effect on the CPU,
1835 which may then reorder things however it wishes.
1836
1837
1838 CPU MEMORY BARRIERS
1839 -------------------
1840
1841 The Linux kernel has seven basic CPU memory barriers:
1842
1843 TYPE                    MANDATORY       SMP CONDITIONAL
1844 ======================= =============== ===============
1845 GENERAL                 mb()            smp_mb()
1846 WRITE                   wmb()           smp_wmb()
1847 READ                    rmb()           smp_rmb()
1848 ADDRESS DEPENDENCY                      READ_ONCE()
1849
1850
1851 All memory barriers except the address-dependency barriers imply a compiler
1852 barrier. Address dependencies do not impose any additional compiler ordering.
1853
1854 Aside: In the case of address dependencies, the compiler would be expected
1855 to issue the loads in the correct order (eg. `a[b]` would have to load
1856 the value of b before loading a[b]); however, the C specification does not
1857 guarantee that the compiler will not speculate the value of b (eg. guess
1858 that it is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
1859 tmp = a[b]; ). There is also the problem of the compiler reloading b after
1860 having loaded a[b], thus ending up with a newer copy of b than of a[b]. A
1861 consensus has not yet been reached about these problems; however, the
1862 READ_ONCE() macro is a good place to start looking.
1863
1864 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1865 systems because it is assumed that a CPU will appear to be self-consistent,
1866 and will order overlapping accesses correctly with respect to itself.
1867 However, see the subsection on "Virtual Machine Guests" below.
1868
1869 [!] Note that SMP memory barriers _must_ be used to control the ordering of
1870 references to shared memory on SMP systems, though the use of locking instead
1871 is sufficient.
1872
1873 Mandatory barriers should not be used to control SMP effects, since mandatory
1874 barriers impose unnecessary overhead on both SMP and UP systems. They may,
1875 however, be used to control MMIO effects on accesses through relaxed memory I/O
1876 windows. These barriers are required even on non-SMP systems as they affect
1877 the order in which memory operations appear to a device by prohibiting both the
1878 compiler and the CPU from reordering them.
1879
1880
1881 There are some more advanced barrier functions:
1882
1883 (*) smp_store_mb(var, value)
1884
1885 This assigns the value to the variable and then inserts a full memory
1886 barrier after it. It isn't guaranteed to insert anything more than a
1887 compiler barrier in a UP compilation.
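
     As a rough sketch (the actual implementation is architecture-specific),
     smp_store_mb(var, value) can be thought of as:

	WRITE_ONCE(var, value);
	smp_mb();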
1888
1889
1890 (*) smp_mb__before_atomic();
1891 (*) smp_mb__after_atomic();
1892
1893 These are for use with atomic RMW functions that do not imply memory
1894 barriers, but where the code needs a memory barrier. Examples of atomic
1895 RMW functions that do not imply a memory barrier are add, subtract,
1896 (failed) conditional operations and the _relaxed variants, but not
1897 atomic_read or atomic_set. A common example where a memory
1898 barrier may be required is when atomic ops are used for reference
1899 counting.
1900
1901 These are also used for atomic RMW bitop functions that do not imply a
1902 memory barrier (such as set_bit and clear_bit).
1903
1904 As an example, consider a piece of code that marks an object as being dead
1905 and then decrements the object's reference count:
1906
1907 obj->dead = 1;
1908 smp_mb__before_atomic();
1909 atomic_dec(&obj->ref_count);
1910
1911 This makes sure that the death mark on the object is perceived to be set
1912 *before* the reference counter is decremented.
1913
1914 See Documentation/atomic_{t,bitops}.txt for more information.
1915
1916
1917 (*) dma_wmb();
1918 (*) dma_rmb();
1919 (*) dma_mb();
1920
1921 These are for use with consistent memory to guarantee the ordering
1922 of writes or reads of shared memory accessible to both the CPU and a
1923 DMA capable device. See Documentation/core-api/dma-api.rst file for more
1924 information about consistent memory.
1925
1926 For example, consider a device driver that shares memory with a device
1927 and uses a descriptor status value to indicate if the descriptor belongs
1928 to the device or the CPU, and a doorbell to notify it when new
1929 descriptors are available:
1930
1931 if (desc->status != DEVICE_OWN) {
1932 /* do not read data until we own descriptor */
1933 dma_rmb();
1934
1935 /* read/modify data */
1936 read_data = desc->data;
1937 desc->data = write_data;
1938
1939 /* flush modifications before status update */
1940 dma_wmb();
1941
1942 /* assign ownership */
1943 desc->status = DEVICE_OWN;
1944
1945 /* Make descriptor status visible to the device, then
1946 * notify the device of the new descriptor
1947 */
1948 writel(DESC_NOTIFY, doorbell);
1949 }
1950
1951 The dma_rmb() allows us to guarantee that the device has released ownership
1952 before we read the data from the descriptor, and the dma_wmb() allows
1953 us to guarantee the data is written to the descriptor before the device
1954 can see it now has ownership. The dma_mb() implies both a dma_rmb() and
1955 a dma_wmb().
1956
1957 Note that the dma_*() barriers do not provide any ordering guarantees for
1958 accesses to MMIO regions. See the later "KERNEL I/O BARRIER EFFECTS"
1959 subsection for more information about I/O accessors and MMIO ordering.
1960
1961 (*) pmem_wmb();
1962
1963 This is for use with persistent memory to ensure that stores whose
1964 modifications are written to persistent storage have reached a platform
1965 durability domain.
1966
1967 For example, after a non-temporal write to a pmem region, we use pmem_wmb()
1968 to ensure that stores have reached a platform durability domain. This ensures
1969 that stores have updated persistent storage before any data access or
1970 data transfer caused by subsequent instructions is initiated. This is
1971 in addition to the ordering done by wmb().
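
A minimal sketch of that usage (memcpy_flushcache() is assumed here as the
non-temporal/flushing copy; the example is not from the original text):

	memcpy_flushcache(pmem_dst, src, len);	/* non-temporal write to pmem */
	pmem_wmb();		/* stores above are now in the durability domain */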
1972
1973 For loads from persistent memory, existing read memory barriers are sufficient
1974 to ensure read ordering.
1975
1976 (*) io_stop_wc();
1977
1978 For memory accesses with write-combining attributes (e.g. those returned
1979 by ioremap_wc()), the CPU may wait for prior accesses to be merged with
1980 subsequent ones. io_stop_wc() can be used to prevent the merging of
1981 write-combining memory accesses before this macro with those after it when
1982 such wait has performance implications.
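
A hedged sketch (the mapping, register offsets and values are assumptions,
not taken from the original text):

	void __iomem *regs = ioremap_wc(phys_addr, size);

	writel_relaxed(v1, regs + REG_A);
	io_stop_wc();		/* do not merge the access above with the one below */
	writel_relaxed(v2, regs + REG_B);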
1983
1984 ===============================
1985 IMPLICIT KERNEL MEMORY BARRIERS
1986 ===============================
1987
1988 Some of the other functions in the linux kernel imply memory barriers, amongst
1989 which are locking and scheduling functions.
1990
1991 This specification is a _minimum_ guarantee; any particular architecture may
1992 provide more substantial guarantees, but these may not be relied upon outside
1993 of arch specific code.
1994
1995
1996 LOCK ACQUISITION FUNCTIONS
1997 --------------------------
1998
1999 The Linux kernel has a number of locking constructs:
2000
2001 (*) spin locks
2002 (*) R/W spin locks
2003 (*) mutexes
2004 (*) semaphores
2005 (*) R/W semaphores
2006
2007 In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
2008 for each construct. These operations all imply certain barriers:
2009
2010 (1) ACQUIRE operation implication:
2011
2012 Memory operations issued after the ACQUIRE will be completed after the
2013 ACQUIRE operation has completed.
2014
2015 Memory operations issued before the ACQUIRE may be completed after
2016 the ACQUIRE operation has completed.
2017
2018 (2) RELEASE operation implication:
2019
2020 Memory operations issued before the RELEASE will be completed before the
2021 RELEASE operation has completed.
2022
2023 Memory operations issued after the RELEASE may be completed before the
2024 RELEASE operation has completed.
2025
2026 (3) ACQUIRE vs ACQUIRE implication:
2027
2028 All ACQUIRE operations issued before another ACQUIRE operation will be
2029 completed before that ACQUIRE operation.
2030
2031 (4) ACQUIRE vs RELEASE implication:
2032
2033 All ACQUIRE operations issued before a RELEASE operation will be
2034 completed before the RELEASE operation.
2035
2036 (5) Failed conditional ACQUIRE implication:
2037
2038 Certain locking variants of the ACQUIRE operation may fail, either due to
2039 being unable to get the lock immediately, or due to receiving an unblocked
2040 signal while asleep waiting for the lock to become available. Failed
2041 locks do not imply any sort of barrier.
2042
2043 [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2044 one-way barriers is that the effects of instructions outside of a critical
2045 section may seep into the inside of the critical section.
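
As a hedged illustration using a spinlock (the variables and the lock are
hypothetical), the one-way nature of these barriers means that accesses
outside the critical section may move into it, but accesses inside it may
not move out:

	WRITE_ONCE(*X, 1);	/* may complete after the ACQUIRE below */
	spin_lock(&mylock);	/* ACQUIRE */
	WRITE_ONCE(*Y, 1);	/* cannot escape the critical section */
	spin_unlock(&mylock);	/* RELEASE */
	WRITE_ONCE(*Z, 1);	/* may complete before the RELEASE above */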
2046
2047 An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
2048 because it is possible for an access preceding the ACQUIRE to happen after the
2049 ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2050 the two accesses can themselves then cross:
2051
2052 *A = a;
2053 ACQUIRE M
2054 RELEASE M
2055 *B = b;
2056
2057 may occur as:
2058
2059 ACQUIRE M, STORE *B, STORE *A, RELEASE M
2060
2061 When the ACQUIRE and RELEASE are a lock acquisition and release,
2062 respectively, this same reordering can occur if the lock's ACQUIRE and
2063 RELEASE are to the same lock variable, but only from the perspective of
2064 another CPU not holding that lock. In short, an ACQUIRE followed by a
2065 RELEASE may -not- be assumed to be a full memory barrier.
2066
2067 Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2068 not imply a full memory barrier. Therefore, the CPU's execution of the
2069 critical sections corresponding to the RELEASE and the ACQUIRE can cross,
2070 so that:
2071
2072 *A = a;
2073 RELEASE M
2074 ACQUIRE N
2075 *B = b;
2076
2077 could occur as:
2078
2079 ACQUIRE N, STORE *B, STORE *A, RELEASE M
2080
2081 It might appear that this reordering could introduce a deadlock.
2082 However, this cannot happen because if such a deadlock threatened,
2083 the RELEASE would simply complete, thereby avoiding the deadlock.
2084
2085 Why does this work?
2086
2087 One key point is that we are only talking about the CPU doing
2088 the reordering, not the compiler. If the compiler (or, for
2089 that matter, the developer) switched the operations, deadlock
2090 -could- occur.
2091
2092 But suppose the CPU reordered the operations. In this case,
2093 the unlock precedes the lock in the assembly code. The CPU
2094 simply elected to try executing the later lock operation first.
2095 If there is a deadlock, this lock operation will simply spin (or
2096 try to sleep, but more on that later). The CPU will eventually
2097 execute the unlock operation (which preceded the lock operation
2098 in the assembly code), which will unravel the potential deadlock,
2099 allowing the lock operation to succeed.
2100
2101 But what if the lock is a sleeplock? In that case, the code will
2102 try to enter the scheduler, where it will eventually encounter
2103 a memory barrier, which will force the earlier unlock operation
2104 to complete, again unraveling the deadlock. There might be
2105 a sleep-unlock race, but the locking primitive needs to resolve
2106 such races properly in any case.
2107
2108 Locks and semaphores may not provide any guarantee of ordering on UP compiled
2109 systems, and so cannot be counted on in such a situation to actually achieve
2110 anything at all - especially with respect to I/O accesses - unless combined
2111 with interrupt disabling operations.
2112
2113 See also the section on "Inter-CPU acquiring barrier effects".
2114
2115
2116 As an example, consider the following:
2117
2118 *A = a;
2119 *B = b;
2120 ACQUIRE
2121 *C = c;
2122 *D = d;
2123 RELEASE
2124 *E = e;
2125 *F = f;
2126
2127 The following sequence of events is acceptable:
2128
2129 ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2130
2131 [+] Note that {*F,*A} indicates a combined access.
2132
2133 But none of the following are:
2134
2135 {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
2136 *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
2137 *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
2138 *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E
2139
2140
2141
2142 INTERRUPT DISABLING FUNCTIONS
2143 -----------------------------
2144
2145 Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2146 (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O
2147 barriers are required in such a situation, they must be provided by some
2148 other means.
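
As a hedged sketch (the device and register names are hypothetical), ordering
of an MMIO access within an interrupt-disabled region must come from the
accessor or from an explicit barrier, not from the interrupt disabling itself:

	unsigned long flags;

	local_irq_save(flags);		/* compiler barrier only */
	writel(val, dev->regs + CTRL);	/* ordering vs. the device comes from
					   writel(), not from disabling IRQs */
	local_irq_restore(flags);	/* compiler barrier only */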
2149
2150
2151 SLEEP AND WAKE-UP FUNCTIONS
2152 ---------------------------
2153
2154 Sleeping and waking on an event flagged in global data can be viewed as an
2155 interaction between two pieces of data: the task state of the task waiting for
2156 the event and the global data used to indicate the event. To make sure that
2157 these appear to happen in the right order, the primitives to begin the process
2158 of going to sleep, and the primitives to initiate a wake up imply certain
2159 barriers.
2160
2161 Firstly, the sleeper normally follows something like this sequence of events:
2162
2163 for (;;) {
2164 set_current_state(TASK_UNINTERRUPTIBLE);
2165 if (event_indicated)
2166 break;
2167 schedule();
2168 }
2169
2170 A general memory barrier is interpolated automatically by set_current_state()
2171 after it has altered the task state:
2172
2173 CPU 1
2174 ===============================
2175 set_current_state();
2176   smp_store_mb();
2177     STORE current->state
2178     <general barrier>
2179 LOAD event_indicated
2180
2181 set_current_state() may be wrapped by:
2182
2183 prepare_to_wait();
2184 prepare_to_wait_exclusive();
2185
2186 which therefore also imply a general memory barrier after setting the state.
2187 The whole sequence above is available in various canned forms, all of which
2188 interpolate the memory barrier in the right place:
2189
2190 wait_event();
2191 wait_event_interruptible();
2192 wait_event_interruptible_exclusive();
2193 wait_event_interruptible_timeout();
2194 wait_event_killable();
2195 wait_event_timeout();
2196 wait_on_bit();
2197 wait_on_bit_lock();
2198
2199
2200 Secondly, code that performs a wake up normally follows something like this:
2201
2202 event_indicated = 1;
2203 wake_up(&event_wait_queue);
2204
2205 or:
2206
2207 event_indicated = 1;
2208 wake_up_process(event_daemon);
2209
2210 A general memory barrier is executed by wake_up() if it wakes something up.
2211 If it doesn't wake anything up then a memory barrier may or may not be
2212 executed; you must not rely on it. The barrier occurs before the task state
2213 is accessed, in particular, it sits between the STORE to indicate the event
2214 and the STORE to set TASK_RUNNING:
2215
2216 CPU 1 (Sleeper)                 CPU 2 (Waker)
2217 =============================== ===============================
2218 set_current_state();            STORE event_indicated
2219   smp_store_mb();               wake_up();
2220     STORE current->state          ...
2221     <general barrier>             <general barrier>
2222 LOAD event_indicated              if ((LOAD task->state) & TASK_NORMAL)
2223                                     STORE task->state
2224
2225 where "task" is the thread being woken up and it equals CPU 1's "current".
2226
2227 To repeat, a general memory barrier is guaranteed to be executed by wake_up()
2228 if something is actually awakened, but otherwise there is no such guarantee.
2229 To see this, consider the following sequence of events, where X and Y are both
2230 initially zero:
2231
2232 CPU 1 CPU 2
2233 =============================== ===============================
2234 X = 1; Y = 1;
2235 smp_mb(); wake_up();
2236 LOAD Y LOAD X
2237
2238 If a wakeup does occur, one (at least) of the two loads must see 1. If, on
2239 the other hand, a wakeup does not occur, both loads might see 0.
2240
2241 wake_up_process() always executes a general memory barrier. The barrier again
2242 occurs before the task state is accessed. In particular, if the wake_up() in
2243 the previous snippet were replaced by a call to wake_up_process() then one of
2244 the two loads would be guaranteed to see 1.
2245
2246 The available waker functions include:
2247
2248 complete();
2249 wake_up();
2250 wake_up_all();
2251 wake_up_bit();
2252 wake_up_interruptible();
2253 wake_up_interruptible_all();
2254 wake_up_interruptible_nr();
2255 wake_up_interruptible_poll();
2256 wake_up_interruptible_sync();
2257 wake_up_interruptible_sync_poll();
2258 wake_up_locked();
2259 wake_up_locked_poll();
2260 wake_up_nr();
2261 wake_up_poll();
2262 wake_up_process();
2263
2264 In terms of memory ordering, these functions all provide the same guarantees as
2265 a wake_up() (or stronger).
2266
2267 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
2268 order multiple stores before the wake-up with respect to loads of those stored
2269 values after the sleeper has called set_current_state(). For instance, if the
2270 sleeper does:
2271
2272 set_current_state(TASK_INTERRUPTIBLE);
2273 if (event_indicated)
2274 break;
2275 __set_current_state(TASK_RUNNING);
2276 do_something(my_data);
2277
2278 and the waker does:
2279
2280 my_data = value;
2281 event_indicated = 1;
2282 wake_up(&event_wait_queue);
2283
2284 there's no guarantee that the change to event_indicated will be perceived by
2285 the sleeper as coming after the change to my_data. In such a circumstance, the
2286 code on both sides must interpolate its own memory barriers between the
2287 separate data accesses. Thus the above sleeper ought to do:
2288
2289 set_current_state(TASK_INTERRUPTIBLE);
2290 if (event_indicated) {
2291 smp_rmb();
2292 do_something(my_data);
2293 }
2294
2295 and the waker should do:
2296
2297 my_data = value;
2298 smp_wmb();
2299 event_indicated = 1;
2300 wake_up(&event_wait_queue);
2301
2302
2303 MISCELLANEOUS FUNCTIONS
2304 -----------------------
2305
2306 Other functions that imply barriers:
2307
2308 (*) schedule() and similar imply full memory barriers.
2309
2310
2311 ===================================
2312 INTER-CPU ACQUIRING BARRIER EFFECTS
2313 ===================================
2314
2315 On SMP systems locking primitives give a more substantial form of barrier: one
2316 that does affect memory access ordering on other CPUs, within the context of
2317 conflict on any particular lock.
2318
2319
2320 ACQUIRES VS MEMORY ACCESSES
2321 ---------------------------
2322
2323 Consider the following: the system has a pair of spinlocks (M) and (Q), and
2324 three CPUs; then should the following sequence of events occur:
2325
2326 CPU 1 CPU 2
2327 =============================== ===============================
2328 WRITE_ONCE(*A, a); WRITE_ONCE(*E, e);
2329 ACQUIRE M ACQUIRE Q
2330 WRITE_ONCE(*B, b); WRITE_ONCE(*F, f);
2331 WRITE_ONCE(*C, c); WRITE_ONCE(*G, g);
2332 RELEASE M RELEASE Q
2333 WRITE_ONCE(*D, d); WRITE_ONCE(*H, h);
2334
2335 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2336 through *H occur in, other than the constraints imposed by the separate locks
2337 on the separate CPUs. It might, for example, see:
2338
2339 *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2340
2341 But it won't see any of:
2342
2343 *B, *C or *D preceding ACQUIRE M
2344 *A, *B or *C following RELEASE M
2345 *F, *G or *H preceding ACQUIRE Q
2346 *E, *F or *G following RELEASE Q
2347
2348
2349 =================================
2350 WHERE ARE MEMORY BARRIERS NEEDED?
2351 =================================
2352
2353 Under normal operation, memory operation reordering is generally not going to
2354 be a problem as a single-threaded linear piece of code will still appear to
2355 work correctly, even if it's in an SMP kernel. There are, however, four
2356 circumstances in which reordering definitely _could_ be a problem:
2357
2358 (*) Interprocessor interaction.
2359
2360 (*) Atomic operations.
2361
2362 (*) Accessing devices.
2363
2364 (*) Interrupts.
2365
2366
2367 INTERPROCESSOR INTERACTION
2368 --------------------------
2369
2370 When there's a system with more than one processor, more than one CPU in the
2371 system may be working on the same data set at the same time. This can cause
2372 synchronisation problems, and the usual way of dealing with them is to use
2373 locks. Locks, however, are quite expensive, and so it may be preferable to
2374 operate without the use of a lock if at all possible. In such a case
2375 operations that affect both CPUs may have to be carefully ordered to prevent
2376 a malfunction.
2377
2378 Consider, for example, the R/W semaphore slow path. Here a waiting process is
2379 queued on the semaphore, by virtue of it having a piece of its stack linked to
2380 the semaphore's list of waiting processes:
2381
2382 struct rw_semaphore {
2383 ...
2384 spinlock_t lock;
2385 struct list_head waiters;
2386 };
2387
2388 struct rwsem_waiter {
2389 struct list_head list;
2390 struct task_struct *task;
2391 };
2392
2393 To wake up a particular waiter, the up_read() or up_write() functions have to:
2394
2395 (1) read the next pointer from this waiter's record to know as to where the
2396 next waiter record is;
2397
2398 (2) read the pointer to the waiter's task structure;
2399
2400 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2401
2402 (4) call wake_up_process() on the task; and
2403
2404 (5) release the reference held on the waiter's task struct.
2405
2406 In other words, it has to perform this sequence of events:
2407
2408 LOAD waiter->list.next;
2409 LOAD waiter->task;
2410 STORE waiter->task;
2411 CALL wakeup
2412 RELEASE task
2413
2414 and if any of these steps occur out of order, then the whole thing may
2415 malfunction.
2416
2417 Once it has queued itself and dropped the semaphore lock, the waiter does not
2418 get the lock again; it instead just waits for its task pointer to be cleared
2419 before proceeding. Since the record is on the waiter's stack, this means that
2420 if the task pointer is cleared _before_ the next pointer in the list is read,
2421 another CPU might start processing the waiter and might clobber the waiter's
2422 stack before the up*() function has a chance to read the next pointer.
2423
2424 Consider then what might happen to the above sequence of events:
2425
2426 CPU 1                           CPU 2
2427 =============================== ===============================
2428 down_xxx()
2429 Queue waiter
2430 Sleep
2431                                 up_yyy()
2432                                 LOAD waiter->task;
2433                                 STORE waiter->task;
2434 Woken up by other event
2435 <preempt>
2436 Resume processing
2437 down_xxx() returns
2438 call foo()
2439 foo() clobbers *waiter
2440 </preempt>
2441                                 LOAD waiter->list.next;
2442 --- OOPS ---
2443
2444 This could be dealt with using the semaphore lock, but then the down_xxx()
2445 function has to needlessly get the spinlock again after being woken up.
2446
2447 The way to deal with this is to insert a general SMP memory barrier:
2448
2449 LOAD waiter->list.next;
2450 LOAD waiter->task;
2451 smp_mb();
2452 STORE waiter->task;
2453 CALL wakeup
2454 RELEASE task
2455
2456 In this case, the barrier makes a guarantee that all memory accesses before the
2457 barrier will appear to happen before all the memory accesses after the barrier
2458 with respect to the other CPUs on the system. It does _not_ guarantee that all
2459 the memory accesses before the barrier will be complete by the time the barrier
2460 instruction itself is complete.
2461
2462 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2463 compiler barrier, thus making sure the compiler emits the instructions in the
2464 right order without actually intervening in the CPU. Since there's only one
2465 CPU, that CPU's dependency ordering logic will take care of everything else.
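
Expressed as a hedged C sketch (the function and its caller are assumptions;
the real rwsem code differs in detail), the waker side of the corrected
sequence looks like:

	static void wake_one_waiter(struct rwsem_waiter *waiter)
	{
		struct list_head *next;
		struct task_struct *tsk;

		next = waiter->list.next;	/* (1) remember the next waiter;
						   used to continue the list walk */
		tsk = waiter->task;		/* (2) get the task to be woken */
		smp_mb();			/* order the reads above before... */
		waiter->task = NULL;		/* (3) ...handing over the semaphore */
		wake_up_process(tsk);		/* (4) */
		put_task_struct(tsk);		/* (5) drop the task reference */
	}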
2466
2467
2468 ATOMIC OPERATIONS
2469 -----------------
2470
2471 While they are technically interprocessor interaction considerations, atomic
2472 operations are noted specially as some of them imply full memory barriers and
2473 some don't, but they're very heavily relied on as a group throughout the
2474 kernel.
2475
2476 See Documentation/atomic_t.txt for more information.
2477
2478
2479 ACCESSING DEVICES
2480 -----------------
2481
2482 Many devices can be memory mapped, and so appear to the CPU as if they're just
2483 a set of memory locations. To control such a device, the driver usually has to
2484 make the right memory accesses in exactly the right order.
2485
2486 However, having a clever CPU or a clever compiler creates a potential problem
2487 in that the carefully sequenced accesses in the driver code won't reach the
2488 device in the requisite order if the CPU or the compiler thinks it is more
2489 efficient to reorder, combine or merge accesses - something that would cause
2490 the device to malfunction.
2491
2492 Inside of the Linux kernel, I/O should be done through the appropriate accessor
2493 routines - such as inb() or writel() - which know how to make such accesses
2494 appropriately sequential. While this, for the most part, renders the explicit
2495 use of memory barriers unnecessary, if the accessor functions are used to refer
2496 to an I/O memory window with relaxed memory access properties, then _mandatory_
2497 memory barriers are required to enforce ordering.
2498
2499 See Documentation/driver-api/device-io.rst for more information.
2500
2501
2502 INTERRUPTS
2503 ----------
2504
2505 A driver may be interrupted by its own interrupt service routine, and thus the
2506 two parts of the driver may interfere with each other's attempts to control or
2507 access the device.
2508
2509 This may be alleviated - at least in part - by disabling local interrupts (a
2510 form of locking), such that the critical operations are all contained within
2511 the interrupt-disabled section in the driver. While the driver's interrupt
2512 routine is executing, the driver's core may not run on the same CPU, and its
2513 interrupt is not permitted to happen again until the current interrupt has been
2514 handled, thus the interrupt handler does not need to lock against that.
2515
2516 However, consider a driver that was talking to an ethernet card that sports an
2517 address register and a data register. If that driver's core talks to the card
2518 under interrupt-disablement and then the driver's interrupt handler is invoked:
2519
2520 LOCAL IRQ DISABLE
2521 writew(ADDR, 3);
2522 writew(DATA, y);
2523 LOCAL IRQ ENABLE
2524 <interrupt>
2525 writew(ADDR, 4);
2526 q = readw(DATA);
2527 </interrupt>
2528
2529 The store to the data register might happen after the second store to the
2530 address register if ordering rules are sufficiently relaxed:
2531
2532 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2533
2534
2535 If ordering rules are relaxed, it must be assumed that accesses done inside an
2536 interrupt disabled section may leak outside of it and may interleave with
2537 accesses performed in an interrupt - and vice versa - unless implicit or
2538 explicit barriers are used.
2539
2540 Normally this won't be a problem because the I/O accesses done inside such
2541 sections will include synchronous load operations on strictly ordered I/O
2542 registers that form implicit I/O barriers.
2543
2544
2545 A similar situation may occur between an interrupt routine and two routines
2546 running on separate CPUs that communicate with each other. If such a case is
2547 likely, then interrupt-disabling locks should be used to guarantee ordering.
2548
2549
2550 ==========================
2551 KERNEL I/O BARRIER EFFECTS
2552 ==========================
2553
2554 Interfacing with peripherals via I/O accesses is deeply architecture and device
2555 specific. Therefore, drivers which are inherently non-portable may rely on
2556 specific behaviours of their target systems in order to achieve synchronization
2557 in the most lightweight manner possible. For drivers intending to be portable
2558 between multiple architectures and bus implementations, the kernel offers a
2559 series of accessor functions that provide various degrees of ordering
2560 guarantees:
2561
2562 (*) readX(), writeX():
2563
2564 The readX() and writeX() MMIO accessors take a pointer to the
2565 peripheral being accessed as an __iomem * parameter. For pointers
2566 mapped with the default I/O attributes (e.g. those returned by
2567 ioremap()), the ordering guarantees are as follows:
2568
2569 1. All readX() and writeX() accesses to the same peripheral are ordered
2570 with respect to each other. This ensures that MMIO register accesses
2571 by the same CPU thread to a particular device will arrive in program
2572 order.
2573
2574 2. A writeX() issued by a CPU thread holding a spinlock is ordered
2575 before a writeX() to the same peripheral from another CPU thread
2576 issued after a later acquisition of the same spinlock. This ensures
2577 that MMIO register writes to a particular device issued while holding
2578 a spinlock will arrive in an order consistent with acquisitions of
2579 the lock.
2580
2581 3. A writeX() by a CPU thread to the peripheral will first wait for the
2582 completion of all prior writes to memory either issued by, or
2583 propagated to, the same thread. This ensures that writes by the CPU
2584 to an outbound DMA buffer allocated by dma_alloc_coherent() will be
2585 visible to a DMA engine when the CPU writes to its MMIO control
2586 register to trigger the transfer.
2587
2588 4. A readX() by a CPU thread from the peripheral will complete before
2589 any subsequent reads from memory by the same thread can begin. This
2590 ensures that reads by the CPU from an incoming DMA buffer allocated
2591 by dma_alloc_coherent() will not see stale data after reading from
2592 the DMA engine's MMIO status register to establish that the DMA
2593 transfer has completed.
2594
2595 5. A readX() by a CPU thread from the peripheral will complete before
2596 any subsequent delay() loop can begin execution on the same thread.
2597 This ensures that two MMIO register writes by the CPU to a peripheral
2598 will arrive at least 1us apart if the first write is immediately read
2599 back with readX() and udelay(1) is called prior to the second
2600 writeX():
2601
2602 writel(42, DEVICE_REGISTER_0); // Arrives at the device...
2603 readl(DEVICE_REGISTER_0);
2604 udelay(1);
2605 writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.
2606
2607 The ordering properties of __iomem pointers obtained with non-default
2608 attributes (e.g. those returned by ioremap_wc()) are specific to the
2609 underlying architecture and therefore the guarantees listed above cannot
2610 generally be relied upon for accesses to these types of mappings.
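
As a hedged sketch of guarantee 3 above (the 'dev' structure, register
offsets and values are assumptions, not part of the original text), a driver
can fill a coherent DMA buffer and then ring a doorbell without needing an
explicit barrier in between:

	__le32 *desc = dev->desc_ring;		/* from dma_alloc_coherent() */

	desc[0] = cpu_to_le32(cmd);		/* plain stores to coherent memory */
	desc[1] = cpu_to_le32(len);
	writel(RING_KICK, dev->mmio + DOORBELL);	/* the device observes the
							   descriptor before the kick */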
2611
2612 (*) readX_relaxed(), writeX_relaxed():
2613
2614 These are similar to readX() and writeX(), but provide weaker memory
2615 ordering guarantees. Specifically, they do not guarantee ordering with
2616 respect to locking, normal memory accesses or delay() loops (i.e.
2617 bullets 2-5 above) but they are still guaranteed to be ordered with
2618 respect to other accesses from the same CPU thread to the same
2619 peripheral when operating on __iomem pointers mapped with the default
2620 I/O attributes.
2621
2622 (*) readsX(), writesX():
2623
2624 The readsX() and writesX() MMIO accessors are designed for accessing
2625 register-based, memory-mapped FIFOs residing on peripherals that are not
2626 capable of performing DMA. Consequently, they provide only the ordering
2627 guarantees of readX_relaxed() and writeX_relaxed(), as documented above.
2628
2629 (*) inX(), outX():
2630
2631 The inX() and outX() accessors are intended to access legacy port-mapped
2632 I/O peripherals, which may require special instructions on some
2633 architectures (notably x86). The port number of the peripheral being
2634 accessed is passed as an argument.
2635
2636 Since many CPU architectures ultimately access these peripherals via an
2637 internal virtual memory mapping, the portable ordering guarantees
2638 provided by inX() and outX() are the same as those provided by readX()
2639 and writeX() respectively when accessing a mapping with the default I/O
2640 attributes.
2641
2642 Device drivers may expect outX() to emit a non-posted write transaction
2643 that waits for a completion response from the I/O peripheral before
2644 returning. This is not guaranteed by all architectures and is therefore
2645 not part of the portable ordering semantics.
2646
2647 (*) insX(), outsX():
2648
2649 As above, the insX() and outsX() accessors provide the same ordering
2650 guarantees as readsX() and writesX() respectively when accessing a
2651 mapping with the default I/O attributes.
2652
2653 (*) ioreadX(), iowriteX():
2654
2655 These will perform appropriately for the type of access they're actually
2656 doing, be it inX()/outX() or readX()/writeX().
2657
2658 With the exception of the string accessors (insX(), outsX(), readsX() and
2659 writesX()), all of the above assume that the underlying peripheral is
2660 little-endian and will therefore perform byte-swapping operations on big-endian
2661 architectures.
2662
2663
2664 ========================================
2665 ASSUMED MINIMUM EXECUTION ORDERING MODEL
2666 ========================================
2667
2668 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2669 maintain the appearance of program causality with respect to itself. Some CPUs
2670 (such as i386 or x86_64) are more constrained than others (such as powerpc or
2671 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2672 of arch-specific code.
2673
2674 This means that it must be considered that the CPU will execute its instruction
2675 stream in any order it feels like - or even in parallel - provided that if an
2676 instruction in the stream depends on an earlier instruction, then that
2677 earlier instruction must be sufficiently complete[*] before the later
2678 instruction may proceed; in other words: provided that the appearance of
2679 causality is maintained.
2680
2681 [*] Some instructions have more than one effect - such as changing the
2682 condition codes, changing registers or changing memory - and different
2683 instructions may depend on different effects.
2684
2685 A CPU may also discard any instruction sequence that winds up having no
2686 ultimate effect. For example, if two adjacent instructions both load an
2687 immediate value into the same register, the first may be discarded.
2688
2689
2690 Similarly, it has to be assumed that the compiler might reorder the instruction
2691 stream in any way it sees fit, again provided the appearance of causality is
2692 maintained.
2693
2694
2695 ============================
2696 THE EFFECTS OF THE CPU CACHE
2697 ============================
2698
2699 The way cached memory operations are perceived across the system is affected to
2700 a certain extent by the caches that lie between CPUs and memory, and by the
2701 memory coherence system that maintains the consistency of state in the system.
2702
2703 As far as the way a CPU interacts with another part of the system through the
2704 caches goes, the memory system has to include the CPU's caches, and memory
2705 barriers for the most part act at the interface between the CPU and its cache
2706 (memory barriers logically act on the dotted line in the following diagram):
2707
2708 <--- CPU ---> : <----------- Memory ----------->
2709 :
2710 +--------+ +--------+ : +--------+ +-----------+
2711 | | | | : | | | | +--------+
2712 | CPU | | Memory | : | CPU | | | | |
2713 | Core |--->| Access |----->| Cache |<-->| | | |
2714 | | | Queue | : | | | |--->| Memory |
2715 | | | | : | | | | | |
2716 +--------+ +--------+ : +--------+ | | | |
2717 : | Cache | +--------+
2718 : | Coherency |
2719 : | Mechanism | +--------+
2720 +--------+ +--------+ : +--------+ | | | |
2721 | | | | : | | | | | |
2722 | CPU | | Memory | : | CPU | | |--->| Device |
2723 | Core |--->| Access |----->| Cache |<-->| | | |
2724 | | | Queue | : | | | | | |
2725 | | | | : | | | | +--------+
2726 +--------+ +--------+ : +--------+ +-----------+
2727 :
2728 :
2729
2730 Although any particular load or store may not actually appear outside of the
2731 CPU that issued it since it may have been satisfied within the CPU's own cache,
2732 it will still appear as if the full memory access had taken place as far as the
2733 other CPUs are concerned since the cache coherency mechanisms will migrate the
2734 cacheline over to the accessing CPU and propagate the effects upon conflict.
2735
2736 The CPU core may execute instructions in any order it deems fit, provided the
2737 expected program causality appears to be maintained. Some of the instructions
2738 generate load and store operations which then go into the queue of memory
2739 accesses to be performed. The core may place these in the queue in any order
2740 it wishes, and continue execution until it is forced to wait for an instruction
2741 to complete.
2742
2743 What memory barriers are concerned with is controlling the order in which
2744 accesses cross from the CPU side of things to the memory side of things, and
2745 the order in which the effects are perceived to happen by the other observers
2746 in the system.
2747
2748 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2749 their own loads and stores as if they had happened in program order.
2750
2751 [!] MMIO or other device accesses may bypass the cache system. This depends on
2752 the properties of the memory window through which devices are accessed and/or
2753 the use of any special device communication instructions the CPU may have.
2754
2755
2756 CACHE COHERENCY VS DMA
2757 ----------------------
2758
2759 Not all systems maintain cache coherency with respect to devices doing DMA. In
2760 such cases, a device attempting DMA may obtain stale data from RAM because
2761 dirty cache lines may be resident in the caches of various CPUs, and may not
2762 have been written back to RAM yet. To deal with this, the appropriate part of
2763 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2764 invalidate them as well).
2765
2766 In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2767 cache lines being written back to RAM from a CPU's cache after the device has
2768 installed its own data, or cache lines present in the CPU's cache may simply
2769 obscure the fact that RAM has been updated, until at such time as the cacheline
2770 is discarded from the CPU's cache and reloaded. To deal with this, the
2771 appropriate part of the kernel must invalidate the overlapping bits of the
2772 cache on each CPU.
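
In current kernels this flushing and invalidation is normally delegated to
the DMA mapping API. A hedged sketch (the buffer and device names are
assumptions):

	/* CPU -> device: any dirty cache lines covering 'buf' are written
	   back on non-coherent systems before the device may read it. */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* ... tell the device to start the transfer ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

The device-to-CPU direction (DMA_FROM_DEVICE) similarly invalidates the
overlapping cache lines before the CPU reads the DMA'd data.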
2773
2774 See Documentation/core-api/cachetlb.rst for more information on cache
2775 management.
2776
2777
2778 CACHE COHERENCY VS MMIO
2779 -----------------------
2780
2781 Memory mapped I/O usually takes place through memory locations that are part of
2782 a window in the CPU's memory space that has different properties assigned than
2783 the usual RAM directed window.
2784
2785 Amongst these properties is usually the fact that such accesses bypass the
2786 caching entirely and go directly to the device buses. This means MMIO accesses
2787 may, in effect, overtake accesses to cached memory that were emitted earlier.
2788 A memory barrier isn't sufficient in such a case, but rather the cache must be
2789 flushed between the cached memory write and the MMIO access if the two are in
2790 any way dependent.
2791
2792
2793 =========================
2794 THE THINGS CPUS GET UP TO
2795 =========================
2796
2797 A programmer might take it for granted that the CPU will perform memory
2798 operations in exactly the order specified, so that if the CPU is, for example,
2799 given the following piece of code to execute:
2800
2801 a = READ_ONCE(*A);
2802 WRITE_ONCE(*B, b);
2803 c = READ_ONCE(*C);
2804 d = READ_ONCE(*D);
2805 WRITE_ONCE(*E, e);
2806
2807 they would then expect that the CPU will complete the memory operation for each
2808 instruction before moving on to the next one, leading to a definite sequence of
2809 operations as seen by external observers in the system:
2810
2811 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2812
2813
2814 Reality is, of course, much messier. With many CPUs and compilers, the above
2815 assumption doesn't hold because:
2816
2817 (*) loads are more likely to need to be completed immediately to permit
2818 execution progress, whereas stores can often be deferred without a
2819 problem;
2820
2821 (*) loads may be done speculatively, and the result discarded should it prove
2822 to have been unnecessary;
2823
2824 (*) loads may be done speculatively, leading to the result having been fetched
2825 at the wrong time in the expected sequence of events;
2826
2827 (*) the order of the memory accesses may be rearranged to promote better use
2828 of the CPU buses and caches;
2829
2830 (*) loads and stores may be combined to improve performance when talking to
2831 memory or I/O hardware that can do batched accesses of adjacent locations,
2832 thus cutting down on transaction setup costs (memory and PCI devices may
2833 both be able to do this); and
2834
2835 (*) the CPU's data cache may affect the ordering, and while cache-coherency
2836 mechanisms may alleviate this - once the store has actually hit the cache
2837 - there's no guarantee that the coherency management will be propagated in
2838 order to other CPUs.
2839
2840 So what another CPU, say, might actually observe from the above piece of code
2841 is:
2842
2843 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2844
2845 (Where "LOAD {*C,*D}" is a combined load)
2846
2847
2848 However, it is guaranteed that a CPU will be self-consistent: it will see its
2849 _own_ accesses appear to be correctly ordered, without the need for a memory
2850 barrier. For instance with the following code:
2851
2852 U = READ_ONCE(*A);
2853 WRITE_ONCE(*A, V);
2854 WRITE_ONCE(*A, W);
2855 X = READ_ONCE(*A);
2856 WRITE_ONCE(*A, Y);
2857 Z = READ_ONCE(*A);
2858
2859 and assuming no intervention by an external influence, it can be assumed that
2860 the final result will appear to be:
2861
2862 U == the original value of *A
2863 X == W
2864 Z == Y
2865 *A == Y
2866
2867 The code above may cause the CPU to generate the full sequence of memory
2868 accesses:
2869
2870 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2871
2872 in that order, but, without intervention, the sequence may have almost any
2873 combination of elements combined or discarded, provided the program's view
2874 of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE()
2875 are -not- optional in the above example, as there are architectures
2876 where a given CPU might reorder successive loads to the same location.
2877 On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
2878 necessary to prevent this, for example, on Itanium the volatile casts
2879 used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
2880 and st.rel instructions (respectively) that prevent such reordering.
2881
2882 The compiler may also combine, discard or defer elements of the sequence before
2883 the CPU even sees them.
2884
2885 For instance:
2886
2887 *A = V;
2888 *A = W;
2889
2890 may be reduced to:
2891
2892 *A = W;
2893
2894 since, without either a write barrier or a WRITE_ONCE(), it can be
2895 assumed that the effect of the storage of V to *A is lost. Similarly:
2896
2897 *A = Y;
2898 Z = *A;
2899
2900 may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
2901 reduced to:
2902
2903 *A = Y;
2904 Z = Y;
2905
2906 and the LOAD operation never appears outside of the CPU.
2907
2908
2909 AND THEN THERE'S THE ALPHA
2910 --------------------------
2911
2912 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
2913 some versions of the Alpha CPU have a split data cache, permitting them to have
2914 two semantically-related cache lines updated at separate times. This is where
2915 the address-dependency barrier really becomes necessary as this synchronises
2916 both caches with the memory coherence system, thus making it seem like pointer
2917 changes vs new data occur in the right order.
2918
2919 The Alpha defines the Linux kernel's memory model, although as of v4.15
2920 the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly
2921 reduced its impact on the memory model.
2922
2923
2924 VIRTUAL MACHINE GUESTS
2925 ----------------------
2926
2927 Guests running within virtual machines might be affected by SMP effects even if
2928 the guest itself is compiled without SMP support. This is an artifact of
2929 interfacing with an SMP host while running a UP kernel. Using mandatory
2930 barriers for this use-case would be possible but is often suboptimal.
2931
2932 To handle this case optimally, low-level virt_mb() etc macros are available.
2933 These have the same effect as smp_mb() etc when SMP is enabled, but generate
2934 identical code for SMP and non-SMP systems. For example, virtual machine guests
2935 should use virt_mb() rather than smp_mb() when synchronizing against a
2936 (possibly SMP) host.
2937
2938 These are equivalent to smp_mb() etc counterparts in all other respects,
2939 in particular, they do not control MMIO effects: to control
2940 MMIO effects, use mandatory barriers.
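
A hedged sketch (the ring layout is an assumption, loosely in the style of a
virtio ring) of a UP-compiled guest publishing a descriptor to a possibly-SMP
host:

	vring->desc[idx] = desc;	/* fill in the descriptor */
	virt_wmb();			/* order it before the index update,
					   even in a UP-compiled guest */
	vring->avail_idx = idx + 1;	/* publish the new descriptor */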
2941
2942
2943 ============
2944 EXAMPLE USES
2945 ============
2946
2947 CIRCULAR BUFFERS
2948 ----------------
2949
2950 Memory barriers can be used to implement circular buffering without the need
2951 of a lock to serialise the producer with the consumer. See:
2952
2953 Documentation/core-api/circular-buffers.rst
2954
2955 for details.
2956
2957
2958 ==========
2959 REFERENCES
2960 ==========
2961
2962 Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
2963 Digital Press)
2964 Chapter 5.2: Physical Address Space Characteristics
2965 Chapter 5.4: Caches and Write Buffers
2966 Chapter 5.5: Data Sharing
2967 Chapter 5.6: Read/Write Ordering
2968
2969 AMD64 Architecture Programmer's Manual Volume 2: System Programming
2970 Chapter 7.1: Memory-Access Ordering
2971 Chapter 7.4: Buffering and Combining Memory Writes
2972
2973 ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
2974 Chapter B2: The AArch64 Application Level Memory Model
2975
2976 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
2977 System Programming Guide
2978 Chapter 7.1: Locked Atomic Operations
2979 Chapter 7.2: Memory Ordering
2980 Chapter 7.4: Serializing Instructions
2981
2982 The SPARC Architecture Manual, Version 9
2983 Chapter 8: Memory Models
2984 Appendix D: Formal Specification of the Memory Models
2985 Appendix J: Programming with the Memory Models
2986
2987 Storage in the PowerPC (Stone and Fitzgerald)
2988
2989 UltraSPARC Programmer Reference Manual
2990 Chapter 5: Memory Accesses and Cacheability
2991 Chapter 15: Sparc-V9 Memory Models
2992
2993 UltraSPARC III Cu User's Manual
2994 Chapter 9: Memory Models
2995
2996 UltraSPARC IIIi Processor User's Manual
2997 Chapter 8: Memory Models
2998
2999 UltraSPARC Architecture 2005
3000 Chapter 9: Memory
3001 Appendix D: Formal Specifications of the Memory Models
3002
3003 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3004 Chapter 8: Memory Models
3005 Appendix F: Caches and Cache Coherency
3006
3007 Solaris Internals, Core Kernel Architecture, p63-68:
3008 Chapter 3.3: Hardware Considerations for Locks and
3009 Synchronization
3010
3011 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3012 for Kernel Programmers:
3013 Chapter 13: Other Memory Models
3014
3015 Intel Itanium Architecture Software Developer's Manual: Volume 1:
3016 Section 2.6: Speculation
3017 Section 4.4: Memory Access