..
  Copyright 1988-2022 Free Software Foundation, Inc.
  This is part of the GCC manual.
  For copying conditions, see the copyright.rst file.

.. _atomic-builtins:

Built-in Functions for Memory Model Aware Atomic Operations
***********************************************************

The following built-in functions approximately match the requirements
for the C++11 memory model. They are all
identified by being prefixed with :samp:`__atomic` and most are
overloaded so that they work with multiple types.

These functions are intended to replace the legacy :samp:`__sync`
builtins. The main difference is that the memory order that is requested
is a parameter to the functions. New code should always use the
:samp:`__atomic` builtins rather than the :samp:`__sync` builtins.

Note that the :samp:`__atomic` builtins assume that programs will
conform to the C++11 memory model. In particular, they assume
that programs are free of data races. See the C++11 standard for
detailed requirements.

The :samp:`__atomic` builtins can be used with any integral scalar or
pointer type that is 1, 2, 4, or 8 bytes in length. 16-byte integral
types are also allowed if :samp:`__int128` (see :ref:`int128`) is
supported by the architecture.

The four non-arithmetic functions (load, store, exchange, and
compare_exchange) all have a generic version as well. This generic
version works on any data type. It uses the lock-free built-in function
if the specific data type size makes that possible; otherwise, an
external call is left to be resolved at run time. This external call
takes the same form, with the addition of a :samp:`size_t` parameter
inserted as the first parameter indicating the size of the object being
pointed to. All objects must be the same size.

There are six different memory orders that can be specified. These map
to the C++11 memory orders with the same names; see the C++11 standard
or the `GCC wiki
on atomic synchronization <https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync>`_ for detailed definitions. Individual
targets may also support additional memory orders for use on specific
architectures. Refer to the target documentation for details of
these.

An atomic operation can both constrain code motion and
be mapped to hardware instructions for synchronization between threads
(e.g., a fence). The extent to which this happens is controlled by the
memory orders, which are listed here in approximately ascending order of
strength. The description of each memory order is only meant to roughly
illustrate the effects and is not a specification; see the C++11
memory model for precise semantics.

``__ATOMIC_RELAXED``
  Implies no inter-thread ordering constraints.

``__ATOMIC_CONSUME``
  This is currently implemented using the stronger ``__ATOMIC_ACQUIRE``
  memory order because of a deficiency in C++11's semantics for
  ``memory_order_consume``.

``__ATOMIC_ACQUIRE``
  Creates an inter-thread happens-before constraint from the release (or
  stronger) semantic store to this acquire load. Can prevent hoisting
  of code to before the operation.

``__ATOMIC_RELEASE``
  Creates an inter-thread happens-before constraint to acquire (or stronger)
  semantic loads that read from this release store. Can prevent sinking
  of code to after the operation.

``__ATOMIC_ACQ_REL``
  Combines the effects of both ``__ATOMIC_ACQUIRE`` and
  ``__ATOMIC_RELEASE``.

``__ATOMIC_SEQ_CST``
  Enforces total ordering with all other ``__ATOMIC_SEQ_CST`` operations.

Note that in the C++11 memory model, *fences* (e.g.,
:samp:`__atomic_thread_fence`) take effect in combination with other
atomic operations on specific memory locations (e.g., atomic loads);
operations on specific memory locations do not necessarily affect other
operations in the same way.

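The acquire/release pairing is the basis of the common message-passing
idiom. As a minimal sketch (assuming GCC or Clang and POSIX threads;
the variable and function names are illustrative), a release store
publishes plain data that a matching acquire load then safely observes:

.. code-block:: c

  #include <assert.h>
  #include <pthread.h>

  int payload;   /* plain data, published via the flag */
  int ready;     /* accessed only with __atomic builtins */

  static void *producer (void *arg)
  {
    (void) arg;
    payload = 42;                                    /* plain store */
    __atomic_store_n (&ready, 1, __ATOMIC_RELEASE);  /* publish */
    return 0;
  }

  int main (void)
  {
    pthread_t t;
    pthread_create (&t, 0, producer, 0);
    /* The acquire load synchronizes with the release store, so once
       ready == 1 is observed, the write to payload is visible too.  */
    while (!__atomic_load_n (&ready, __ATOMIC_ACQUIRE))
      ;
    assert (payload == 42);
    pthread_join (t, 0);
    return 0;
  }

With ``__ATOMIC_RELAXED`` on both sides this program would be racy on
``payload``; the release/acquire pair is what makes it well-defined.
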
Target architectures are encouraged to provide their own patterns for
each of the atomic built-in functions. If no target pattern is provided,
the original non-memory model set of :samp:`__sync` atomic built-in
functions is used, along with any required synchronization fences
surrounding them, in order to achieve the proper behavior. Execution in
this case is subject to the same restrictions as those built-in functions.

If there is no pattern or mechanism to provide a lock-free instruction
sequence, a call is made to an external routine with the same parameters
to be resolved at run time.

When implementing patterns for these built-in functions, the memory order
parameter can be ignored as long as the pattern implements the most
restrictive ``__ATOMIC_SEQ_CST`` memory order. Any of the other memory
orders execute correctly with this memory order but they may not execute as
efficiently as they could with a more appropriate implementation of the
relaxed requirements.

Note that the C++11 standard allows for the memory order parameter to be
determined at run time rather than at compile time. These built-in
functions map any run-time value to ``__ATOMIC_SEQ_CST`` rather
than invoke a runtime library call or inline a switch statement. This is
standard compliant, safe, and the simplest approach for now.

The memory order parameter is a signed int, but only the lower 16 bits are
reserved for the memory order. The remainder of the signed int is reserved
for target use and should be 0. Use of the predefined atomic values
ensures proper usage.

.. function:: type __atomic_load_n (type *ptr, int memorder)

  This built-in function implements an atomic load operation. It returns the
  contents of ``*ptr``.

  The valid memory order variants are
  ``__ATOMIC_RELAXED``, ``__ATOMIC_SEQ_CST``, ``__ATOMIC_ACQUIRE``,
  and ``__ATOMIC_CONSUME``.

.. function:: void __atomic_load (type *ptr, type *ret, int memorder)

  This is the generic version of an atomic load. It returns the
  contents of ``*ptr`` in ``*ret``.

.. function:: void __atomic_store_n (type *ptr, type val, int memorder)

  This built-in function implements an atomic store operation. It writes
  ``val`` into ``*ptr``.

  The valid memory order variants are
  ``__ATOMIC_RELAXED``, ``__ATOMIC_SEQ_CST``, and ``__ATOMIC_RELEASE``.

.. function:: void __atomic_store (type *ptr, type *val, int memorder)

  This is the generic version of an atomic store. It stores the value
  of ``*val`` into ``*ptr``.

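A minimal sketch of the load and store built-in functions above
(assuming GCC or Clang; the variable names are illustrative). The
``_n`` variants pass values directly, while the generic versions
communicate through pointers:

.. code-block:: c

  #include <assert.h>

  int main (void)
  {
    long counter = 0;

    __atomic_store_n (&counter, 5, __ATOMIC_RELEASE);      /* atomic store */
    long snapshot = __atomic_load_n (&counter, __ATOMIC_ACQUIRE);
    assert (snapshot == 5);

    /* Generic versions: values travel through pointers instead.  */
    long src = 7, dst = 0;
    __atomic_store (&counter, &src, __ATOMIC_SEQ_CST);
    __atomic_load (&counter, &dst, __ATOMIC_SEQ_CST);
    assert (dst == 7);
    return 0;
  }
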
.. function:: type __atomic_exchange_n (type *ptr, type val, int memorder)

  This built-in function implements an atomic exchange operation. It writes
  :samp:`{val}` into ``*ptr``, and returns the previous contents of
  ``*ptr``.

  All memory order variants are valid.

.. function:: void __atomic_exchange (type *ptr, type *val, type *ret, int memorder)

  This is the generic version of an atomic exchange. It stores the
  contents of ``*val`` into ``*ptr``. The original value
  of ``*ptr`` is copied into ``*ret``.

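A minimal sketch of both exchange forms (assuming GCC or Clang; names
are illustrative). The key property is that the swap and the read of
the old value happen as one indivisible step:

.. code-block:: c

  #include <assert.h>

  int main (void)
  {
    int state = 0;

    /* Atomically swap in 1 and get the old value back.  */
    int old = __atomic_exchange_n (&state, 1, __ATOMIC_ACQ_REL);
    assert (old == 0 && state == 1);

    /* Generic form: the old value is returned through a pointer.  */
    int newval = 2, prev;
    __atomic_exchange (&state, &newval, &prev, __ATOMIC_SEQ_CST);
    assert (prev == 1 && state == 2);
    return 0;
  }
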
.. function:: bool __atomic_compare_exchange_n (type *ptr, type *expected, type desired, bool weak, int success_memorder, int failure_memorder)

  This built-in function implements an atomic compare and exchange operation.
  This compares the contents of ``*ptr`` with the contents of
  ``*expected``. If equal, the operation is a *read-modify-write*
  operation that writes :samp:`{desired}` into ``*ptr``. If they are not
  equal, the operation is a *read* and the current contents of
  ``*ptr`` are written into ``*expected``. :samp:`{weak}` is ``true``
  for weak compare_exchange, which may fail spuriously, and ``false`` for
  the strong variation, which never fails spuriously. Many targets
  only offer the strong variation and ignore the parameter. When in doubt, use
  the strong variation.

  If :samp:`{desired}` is written into ``*ptr`` then ``true`` is returned
  and memory is affected according to the
  memory order specified by :samp:`{success_memorder}`. There are no
  restrictions on what memory order can be used here.

  Otherwise, ``false`` is returned and memory is affected according
  to :samp:`{failure_memorder}`. This memory order cannot be
  ``__ATOMIC_RELEASE`` nor ``__ATOMIC_ACQ_REL``. It also cannot be a
  stronger order than that specified by :samp:`{success_memorder}`.

.. function:: bool __atomic_compare_exchange (type *ptr, type *expected, type *desired, bool weak, int success_memorder, int failure_memorder)

  This built-in function implements the generic version of
  ``__atomic_compare_exchange``. The function is virtually identical to
  ``__atomic_compare_exchange_n``, except the desired value is also a
  pointer.

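A typical use of ``__atomic_compare_exchange_n`` is a retry loop:
compute a new value from the old one, then attempt to install it,
repeating if another thread intervened. On failure the built-in
refreshes ``*expected`` with the current contents, so the loop always
retries with up-to-date data. The following sketch (assuming GCC or
Clang; names are illustrative) implements a saturating increment this
way:

.. code-block:: c

  #include <assert.h>
  #include <stdbool.h>

  #define LIMIT 10

  /* Atomically increment *ctr, but never past LIMIT.
     Returns the value actually stored.  */
  static int bounded_inc (int *ctr)
  {
    int expected = __atomic_load_n (ctr, __ATOMIC_RELAXED);
    int desired;
    do
      desired = expected < LIMIT ? expected + 1 : LIMIT;
    while (!__atomic_compare_exchange_n (ctr, &expected, desired,
                                         /*weak=*/true,
                                         __ATOMIC_ACQ_REL,
                                         __ATOMIC_RELAXED));
    return desired;
  }

  int main (void)
  {
    int c = 9;
    assert (bounded_inc (&c) == 10);
    assert (bounded_inc (&c) == 10);  /* saturates at LIMIT */
    assert (c == 10);
    return 0;
  }

The weak variation is appropriate here because a spurious failure
merely causes one more trip around a loop that must handle contention
anyway.
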
.. function:: type __atomic_add_fetch (type *ptr, type val, int memorder)
              type __atomic_sub_fetch (type *ptr, type val, int memorder)
              type __atomic_and_fetch (type *ptr, type val, int memorder)
              type __atomic_xor_fetch (type *ptr, type val, int memorder)
              type __atomic_or_fetch (type *ptr, type val, int memorder)
              type __atomic_nand_fetch (type *ptr, type val, int memorder)

  These built-in functions perform the operation suggested by the name, and
  return the result of the operation. Operations on pointer arguments are
  performed as if the operands were of the ``uintptr_t`` type. That is,
  they are not scaled by the size of the type to which the pointer points.

  .. code-block:: c++

    { *ptr op= val; return *ptr; }
    { *ptr = ~(*ptr & val); return *ptr; } // nand

  The object pointed to by the first argument must be of integer or pointer
  type. It must not be a boolean type. All memory orders are valid.

.. function:: type __atomic_fetch_add (type *ptr, type val, int memorder)
              type __atomic_fetch_sub (type *ptr, type val, int memorder)
              type __atomic_fetch_and (type *ptr, type val, int memorder)
              type __atomic_fetch_xor (type *ptr, type val, int memorder)
              type __atomic_fetch_or (type *ptr, type val, int memorder)
              type __atomic_fetch_nand (type *ptr, type val, int memorder)

  These built-in functions perform the operation suggested by the name, and
  return the value that had previously been in ``*ptr``. Operations
  on pointer arguments are performed as if the operands were of
  the ``uintptr_t`` type. That is, they are not scaled by the size of
  the type to which the pointer points.

  .. code-block:: c++

    { tmp = *ptr; *ptr op= val; return tmp; }
    { tmp = *ptr; *ptr = ~(*ptr & val); return tmp; } // nand

  The same constraints on arguments apply as for the corresponding
  ``__atomic_op_fetch`` built-in functions. All memory orders are valid.

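The distinction between the two families, value after versus value
before the operation, can be seen directly in a minimal sketch
(assuming GCC or Clang; names are illustrative):

.. code-block:: c

  #include <assert.h>

  int main (void)
  {
    unsigned flags = 0x0f;

    /* fetch_and returns the value before the operation...  */
    unsigned before = __atomic_fetch_and (&flags, 0x03, __ATOMIC_SEQ_CST);
    assert (before == 0x0f && flags == 0x03);

    /* ...while or_fetch returns the value after it.  */
    unsigned after = __atomic_or_fetch (&flags, 0x10, __ATOMIC_SEQ_CST);
    assert (after == 0x13 && flags == 0x13);
    return 0;
  }
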
.. function:: bool __atomic_test_and_set (void *ptr, int memorder)

  This built-in function performs an atomic test-and-set operation on
  the byte at ``*ptr``. The byte is set to some implementation
  defined nonzero 'set' value and the return value is ``true`` if and only
  if the previous contents were 'set'.
  It should only be used for operands of type ``bool`` or ``char``. For
  other types only part of the value may be set.

  All memory orders are valid.

.. function:: void __atomic_clear (bool *ptr, int memorder)

  This built-in function performs an atomic clear operation on
  ``*ptr``. After the operation, ``*ptr`` contains 0.
  It should only be used for operands of type ``bool`` or ``char`` and
  in conjunction with ``__atomic_test_and_set``.
  For other types it may only clear partially. If the type is not ``bool``,
  prefer using ``__atomic_store``.

  The valid memory order variants are
  ``__ATOMIC_RELAXED``, ``__ATOMIC_SEQ_CST``, and
  ``__ATOMIC_RELEASE``.

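Together, ``__atomic_test_and_set`` and ``__atomic_clear`` are enough
to build a simple spinlock. A minimal sketch (assuming GCC or Clang
and POSIX threads, compiled with ``-pthread``; names are illustrative):

.. code-block:: c

  #include <assert.h>
  #include <pthread.h>

  static char lock;      /* zero means 'clear', i.e. unlocked */
  static long counter;   /* protected by the lock */

  static void spin_lock (void)
  {
    /* Loop until the previous value was 'clear': we now own the lock.  */
    while (__atomic_test_and_set (&lock, __ATOMIC_ACQUIRE))
      ;
  }

  static void spin_unlock (void)
  {
    __atomic_clear (&lock, __ATOMIC_RELEASE);
  }

  static void *worker (void *arg)
  {
    (void) arg;
    for (int i = 0; i < 10000; i++)
      {
        spin_lock ();
        counter++;          /* plain increment, protected by the lock */
        spin_unlock ();
      }
    return 0;
  }

  int main (void)
  {
    pthread_t a, b;
    pthread_create (&a, 0, worker, 0);
    pthread_create (&b, 0, worker, 0);
    pthread_join (a, 0);
    pthread_join (b, 0);
    assert (counter == 20000);
    return 0;
  }

The acquire order on the set and release order on the clear give the
usual lock semantics: everything inside the critical section stays
inside it.
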
.. function:: void __atomic_thread_fence (int memorder)

  This built-in function acts as a synchronization fence between threads
  based on the specified memory order.

  All memory orders are valid.

.. function:: void __atomic_signal_fence (int memorder)

  This built-in function acts as a synchronization fence between a thread
  and signal handlers executed in the same thread.

  All memory orders are valid.

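As noted earlier, fences take effect in combination with atomic
operations on specific memory locations. In this sketch (assuming GCC
or Clang and POSIX threads; names are illustrative), two relaxed flag
accesses are strengthened into a release/acquire pairing by the
surrounding fences:

.. code-block:: c

  #include <assert.h>
  #include <pthread.h>

  static int data;
  static int flag;

  static void *producer (void *arg)
  {
    (void) arg;
    data = 123;                                /* plain store */
    __atomic_thread_fence (__ATOMIC_RELEASE);  /* order it before the flag */
    __atomic_store_n (&flag, 1, __ATOMIC_RELAXED);
    return 0;
  }

  int main (void)
  {
    pthread_t t;
    pthread_create (&t, 0, producer, 0);
    while (!__atomic_load_n (&flag, __ATOMIC_RELAXED))
      ;
    __atomic_thread_fence (__ATOMIC_ACQUIRE); /* pairs with the release fence */
    assert (data == 123);
    pthread_join (t, 0);
    return 0;
  }

Without the relaxed atomic accesses to ``flag``, the fences alone
would establish no synchronization between the two threads.
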
.. function:: bool __atomic_always_lock_free (size_t size, void *ptr)

  This built-in function returns ``true`` if objects of :samp:`{size}` bytes always
  generate lock-free atomic instructions for the target architecture.
  :samp:`{size}` must resolve to a compile-time constant and the result also
  resolves to a compile-time constant.

  :samp:`{ptr}` is an optional pointer to the object that may be used to determine
  alignment. A value of 0 indicates typical alignment should be used. The
  compiler may also ignore this parameter.

  .. code-block:: c++

    if (__atomic_always_lock_free (sizeof (long long), 0))

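Because the result folds to a compile-time constant, it can select a
fast path with no run-time cost. A minimal sketch (assuming GCC or
Clang; whether the branch is taken is target-dependent):

.. code-block:: c

  #include <assert.h>

  int main (void)
  {
    /* The condition is a compile-time constant, so the untaken
       branch is removed entirely by the compiler.  */
    if (__atomic_always_lock_free (sizeof (long long), 0))
      {
        long long v = 0;
        __atomic_store_n (&v, 1, __ATOMIC_SEQ_CST);  /* inline instructions */
        assert (__atomic_load_n (&v, __ATOMIC_SEQ_CST) == 1);
      }
    return 0;
  }
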
.. function:: bool __atomic_is_lock_free (size_t size, void *ptr)

  This built-in function returns ``true`` if objects of :samp:`{size}` bytes always
  generate lock-free atomic instructions for the target architecture. If
  the built-in function is not known to be lock-free, a call is made to a
  runtime routine named ``__atomic_is_lock_free``.

  :samp:`{ptr}` is an optional pointer to the object that may be used to determine
  alignment. A value of 0 indicates typical alignment should be used. The
  compiler may also ignore this parameter.
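Unlike ``__atomic_always_lock_free``, this predicate may depend on
run-time information such as the object's actual alignment. A minimal
sketch (assuming GCC or Clang; linking against libatomic may be needed
if the compiler emits the runtime call):

.. code-block:: c

  #include <assert.h>
  #include <stdint.h>

  int main (void)
  {
    uint32_t obj = 0;

    /* If the compile-time predicate holds for typical alignment, the
       run-time predicate must hold for this naturally aligned object.  */
    if (__atomic_always_lock_free (sizeof obj, 0))
      assert (__atomic_is_lock_free (sizeof obj, &obj));
    return 0;
  }
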