\input texinfo @c -*-texinfo-*-
@setfilename libgomp.info

Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

(b) The FSF's Back-Cover Text is:

You have freedom to copy and modify this GNU Manual, like GNU
software.  Copies published by the Free Software Foundation raise
funds for GNU development.

@dircategory GNU Libraries
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@setchapternewpage odd

@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation

@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*

Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@node Top, Enabling OpenMP

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Support
for OpenACC and offloading (both via OpenACC and OpenMP 4's @code{target}
construct) was added later, and the library was renamed the GNU Offloading
and Multi Processing Runtime Library.

@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
* Enabling OpenMP::            How to enable OpenMP for your applications.
* OpenMP Implementation Status:: List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                               The OpenMP runtime application programming
                               interface.
* OpenMP Environment Variables: Environment Variables.
                               Influencing OpenMP runtime behavior with
                               environment variables.
* Enabling OpenACC::           How to enable OpenACC for your applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                               programming interface.
* OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
                               environment variables.
* CUDA Streams Usage::         Notes on the implementation of
                               asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                               NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
                               implementation.
* Offload-Target Specifics::   Notes on offload-target specific internals
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in the GNU Offloading and
                               Multi Processing Runtime Library.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP
To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenmp} must be specified.  For C and C++, this enables
the handling of the OpenMP directives using @code{#pragma omp} and the
@code{[[omp::directive(...)]]}, @code{[[omp::sequence(...)]]} and
@code{[[omp::decl(...)]]} attributes.  For Fortran, it enables, in free
source form, the @code{!$omp} sentinel for directives and the @code{!$}
conditional compilation sentinel and, in fixed source form, the
@code{c$omp}, @code{*$omp} and @code{!$omp} sentinels for directives and
the @code{c$}, @code{*$} and @code{!$} conditional compilation sentinels.
The flag also arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).
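As a minimal illustration (not part of the specification text), the following C function uses a worksharing loop with a @code{reduction} clause.  Compiled with @option{-fopenmp} the iterations are distributed over a team of threads; without the flag the pragma is ignored and the loop runs serially with the same result.

```c
#include <stddef.h>

/* Sum an array using an OpenMP worksharing loop.  The pragma is only
   honored when compiled with -fopenmp; otherwise it is ignored and
   the loop executes sequentially, producing an identical result.  */
double
sum_array (const double *a, size_t n)
{
  double s = 0.0;
  #pragma omp parallel for reduction(+:s)
  for (size_t i = 0; i < n; i++)
    s += a[i];
  return s;
}
```

Building with @command{gcc -fopenmp} enables the directive and links libgomp automatically.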
The @option{-fopenmp-simd} flag can be used to enable a subset of
OpenMP directives that do not require the linking of either the
OpenMP runtime library or the POSIX threads library.
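For example (an illustrative sketch), a loop annotated with the @code{simd} directive may be vectorized when built with @option{-fopenmp-simd}, without introducing any dependency on libgomp or POSIX threads:

```c
/* Dot product with an OpenMP simd directive.  With -fopenmp-simd
   (or -fopenmp) the compiler may vectorize the loop; no runtime
   library needs to be linked in either case.  */
float
dot (const float *x, const float *y, int n)
{
  float s = 0.0f;
  #pragma omp simd reduction(+:s)
  for (int i = 0; i < n; i++)
    s += x[i] * y[i];
  return s;
}
```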
A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.
@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

* OpenMP 4.5::             Feature completion status to 4.5 specification
* OpenMP 5.0::             Feature completion status to 5.0 specification
* OpenMP 5.1::             Feature completion status to 5.1 specification
* OpenMP 5.2::             Feature completion status to 5.2 specification
* OpenMP Technical Report 12::  Feature completion status to second 6.0 preview
The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e. OpenMP 4.5).
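A portable way to consume this macro is sketched below (the helper name is hypothetical, shown only for illustration): it reports the version-date macro when OpenMP is enabled and degrades gracefully otherwise.

```c
/* Returns the OpenMP version-date macro (e.g. 201511 for OpenMP 4.5)
   when compiled with -fopenmp, and 0 when OpenMP is not enabled.
   The helper name is illustrative, not part of any API.  */
long
openmp_version_date (void)
{
#ifdef _OPENMP
  return _OPENMP;
#else
  return 0;
#endif
}
```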
The OpenMP 4.5 specification is fully supported.
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
@tab P @tab @emph{simd} traits not handled correctly
@item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
env variable @tab Y @tab
@item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
@tab complete but no non-host device provides @code{unified_shared_memory}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab P
@tab Full support for C/C++, partial for Fortran
(@uref{https://gcc.gnu.org/PR110735,PR110735})
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab Y @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
@code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
@code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
@tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
@item Predefined memory spaces, memory allocators, allocator traits
@tab Y @tab See also @ref{Memory allocation}
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab P
@tab Only C for stack/automatic and Fortran for stack/automatic
and allocatable/pointer variables
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause @tab Y @tab
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
@item C/C++'s lvalue expressions in @code{to}, @code{from}
and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and to assign the address of
device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
and allocatable components of variables
@tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
@code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} and @code{assumes} directives @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab P
@tab Only C and Fortran (and not for static variables)
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
@item Indirect calls to the device version of a procedure or function in
@code{target} regions @tab P @tab Only C and C++
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab Y @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
@code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
@code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
@code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
@code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
variables @tab Y @tab
@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized
to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
@code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab P @tab Only C and C++
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@item @code{present} modifier to the @code{map}, @code{to} and @code{from}
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
@tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
sentinel as C/C++ pragma and C++ attributes are warned for with
@code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
(enabled by default), respectively; for Fortran free-source code, there is
a warning enabled by default and, for fixed-source code, the @code{omx}
sentinel is warned for with @code{-Wsurprising} (enabled by
@code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item @code{destroy} clause with destroy-var argument on @code{depobj}
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
@item If a matching mapped list item is not found in the data environment, the
pointer retains its original value @tab Y @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
@item New @code{allocators} directive for Fortran @tab Y @tab
@item Deprecation of @code{allocate} directive for Fortran
allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
@item Deprecation of traits array following the allocator_handle expression in
@code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
@item New @code{doacross} clause as alias for @code{depend} with
@code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
@item @code{omp_cur_iteration} keyword @tab Y @tab
@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
@code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @var{default-device-var} ICV with
@code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
@item @code{all} as @emph{implicit-behavior} for @code{defaultmap} @tab Y @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
of the @code{interop} construct @tab N @tab
@item Invoke virtual member functions of C++ objects created on the host device
on other devices @tab N @tab
@node OpenMP Technical Report 12
@section OpenMP Technical Report 12

Technical Report (TR) 12 is the second preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
@tab N/A @tab Backward compatibility
@item Full support for C23 was added @tab P @tab
@item Full support for C++23 was added @tab P @tab
@item @code{_ALL} suffix to the device-scope environment variables
@tab P @tab Host device number wrongly accepted
@item @code{num_threads} now accepts a list @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
@code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item New @code{OMP_THREADS_RESERVE} environment variable @tab N @tab
@item The @code{decl} attribute was added to the C++ attribute syntax
@item The OpenMP directive syntax was extended to include C23 attribute
specifiers @tab Y @tab
@item All inarguable clauses now take an optional Boolean argument @tab N @tab
@item For Fortran, @emph{locator list} can also be a function reference with
data pointer result @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
@item @emph{directive-name-modifier} accepted in all clauses @tab N @tab
@item For Fortran, atomic with BLOCK construct and, for C/C++, with
unlimited curly braces supported @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
@item New @code{looprange} clause @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
@item Support for inductions @tab N @tab
@item Implicit reduction identifiers of C++ classes
@item Change of the @emph{map-type} property from @emph{ultimate} to
@emph{default} @tab N @tab
@item @code{self} modifier to @code{map} and @code{self} as
@code{defaultmap} argument @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to @code{declare target} directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
modifiers of the @code{init} clause
@item @code{message} and @code{severity} clauses to @code{parallel} directive
@item @code{self} clause to @code{requires} directive @tab N @tab
@item @code{no_openmp_constructs} assumptions clause @tab N @tab
@item @code{reverse} loop-transformation construct @tab N @tab
@item @code{interchange} loop-transformation construct @tab N @tab
@item @code{fuse} loop-transformation construct @tab N @tab
@item @code{apply} clause to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
@item @code{atomic} permitted in a construct with @code{order(concurrent)}
@item @code{coexecute} directive for Fortran @tab N @tab
@item Fortran DO CONCURRENT as associated loop in a @code{loop} construct
@item @code{threadset} clause in task-generating constructs @tab N @tab
@item @code{nowait} clause with reverse-offload @code{target} directives
@item Boolean argument to @code{nowait} and @code{nogroup} may be non-constant
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item @code{omp_is_free_agent} and @code{omp_ancestor_is_free_agent} routines
@item @code{omp_target_memset} and @code{omp_target_memset_rect_async} routines
@item Routines for obtaining memory spaces/allocators for shared/device memory
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_target_data_transfer} and @code{ompt_target_data_transfer_async}
values in @code{ompt_target_data_op_t} enum @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab

@unnumberedsubsec Other new TR 12 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item New @code{omp_pause_stop_tool} constant for @code{omp_pause_resource} @tab N @tab
@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 18 of the OpenMP
specification in version 5.2.

* Thread Team Routines::
* Thread Affinity Routines::
* Teams Region Routines::
@c * Resource Relinquishing Routines::
* Device Information Routines::
* Device Memory Routines::
@c * Interoperability Routines::
* Memory Management Routines::
@c * Tool Control Routine::
@c * Environment Display Routine::
@node Thread Team Routines
@section Thread Team Routines

Routines controlling threads in the current contention group.
They have C linkage and do not throw exceptions.

* omp_set_num_threads::         Set upper team size limit
* omp_get_num_threads::         Size of the active team
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_get_dynamic::             Dynamic teams setting
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_set_nested::              Enable/disable nested parallel regions
* omp_get_nested::              Nested parallel regions
* omp_set_schedule::            Set the runtime scheduling method
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_teams_thread_limit::  Maximum number of threads imposed by teams
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_get_max_active_levels::   Current maximum number of active regions
* omp_get_level::               Number of parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_team_size::           Number of threads in a team
* omp_get_active_level::        Number of active parallel regions
@node omp_set_num_threads
@subsection @code{omp_set_num_threads} -- Set upper team size limit

@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
regions, if those do not specify a @code{num_threads} clause.  The
argument of @code{omp_set_num_threads} shall be a positive integer.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item                   @tab @code{integer, intent(in) :: num_threads}

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
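A sketch of typical usage follows; the serial fallbacks are illustrative assumptions so the snippet also builds without @option{-fopenmp}, and the helper name is hypothetical.

```c
#ifdef _OPENMP
#include <omp.h>
#else
/* Illustrative serial fallbacks so the example builds without -fopenmp.  */
static void omp_set_num_threads (int num_threads) { (void) num_threads; }
static int omp_get_max_threads (void) { return 1; }
#endif

/* Request an upper team-size limit for subsequent parallel regions
   and return the limit the runtime will actually apply.  */
int
request_team_size (int num_threads)
{
  if (num_threads > 0)
    omp_set_num_threads (num_threads);
  return omp_get_max_threads ();
}
```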
@node omp_get_num_threads
@subsection @code{omp_get_num_threads} -- Size of the active team

@item @emph{Description}:
Returns the number of threads in the current team.  In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable.  At runtime, the size
of the current team may be set either by the @code{NUM_THREADS}
clause or by @code{omp_set_num_threads}.  If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per online CPU is used.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@node omp_get_max_threads
@subsection @code{omp_get_max_threads} -- Maximum number of threads of parallel region

@item @emph{Description}:
Return the maximum number of threads used for the current parallel region
that does not use the clause @code{num_threads}.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@node omp_get_thread_num
@subsection @code{omp_get_thread_num} -- Current thread ID

@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0.  In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive.  The return
value of the primary thread of a team is always 0.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
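The relationship between @code{omp_get_thread_num} and @code{omp_get_num_threads} can be sketched as follows; the serial fallbacks and the helper name are illustrative, so the code also builds without @option{-fopenmp}.

```c
#ifdef _OPENMP
#include <omp.h>
#else
/* Illustrative serial fallbacks for builds without -fopenmp.  */
static int omp_get_thread_num (void)  { return 0; }
static int omp_get_num_threads (void) { return 1; }
#endif

/* Record which thread IDs ran the parallel region in seen[] (which
   must hold at least max ints, zero-initialized) and return how many
   did.  IDs always lie in 0 .. omp_get_num_threads() - 1, and the
   primary thread's ID is always 0.  */
int
count_participants (int *seen, int max)
{
  #pragma omp parallel
  {
    int id = omp_get_thread_num ();
    if (id >= 0 && id < max && id < omp_get_num_threads ())
      seen[id] = 1;
  }
  int count = 0;
  for (int i = 0; i < max; i++)
    count += seen[i];
  return count;
}
```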
@node omp_in_parallel
@subsection @code{omp_in_parallel} -- Whether a parallel region is active

@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise.  Here, @code{true} and @code{false} represent
their language-specific counterparts.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
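As an illustrative sketch (serial fallback and helper name are assumptions), the value can be probed from the sequential part and from inside a parallel region.  Note that a team of size one is an inactive region, so the inside value may still be false there.

```c
#ifdef _OPENMP
#include <omp.h>
#else
/* Illustrative serial fallback for builds without -fopenmp.  */
static int omp_in_parallel (void) { return 0; }
#endif

/* Store omp_in_parallel() as seen from the sequential part in out[0]
   and as seen from inside a parallel region in out[1].  */
void
probe_in_parallel (int out[2])
{
  out[0] = omp_in_parallel ();   /* Always false here.  */
  out[1] = 0;
  #pragma omp parallel
  {
    #pragma omp master
    out[1] = omp_in_parallel (); /* True iff the region is active.  */
  }
}
```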
@node omp_set_dynamic
@subsection @code{omp_set_dynamic} -- Enable/disable dynamic teams

@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team.  The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item                   @tab @code{logical, intent(in) :: dynamic_threads}

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
777 @node omp_get_dynamic
778 @subsection @code{omp_get_dynamic} -- Dynamic teams setting
780 @item @emph{Description}:
781 This function returns @code{true} if the dynamic adjustment of the number of threads is enabled, @code{false} otherwise.
782 Here, @code{true} and @code{false} represent their language-specific counterparts.
785 The dynamic team setting may be initialized at startup by the
786 @env{OMP_DYNAMIC} environment variable or at runtime using
787 @code{omp_set_dynamic}. If undefined, dynamic adjustment is disabled by default.
791 @multitable @columnfractions .20 .80
792 @item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
795 @item @emph{Fortran}:
796 @multitable @columnfractions .20 .80
797 @item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
800 @item @emph{See also}:
801 @ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
803 @item @emph{Reference}:
804 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
809 @node omp_get_cancellation
810 @subsection @code{omp_get_cancellation} -- Whether cancellation support is enabled
812 @item @emph{Description}:
813 This function returns @code{true} if cancellation is activated, @code{false}
814 otherwise. Here, @code{true} and @code{false} represent their language-specific
815 counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are deactivated.
819 @multitable @columnfractions .20 .80
820 @item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
823 @item @emph{Fortran}:
824 @multitable @columnfractions .20 .80
825 @item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
828 @item @emph{See also}:
829 @ref{OMP_CANCELLATION}
831 @item @emph{Reference}:
832 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
838 @subsection @code{omp_set_nested} -- Enable/disable nested parallel regions
840 @item @emph{Description}:
841 Enable or disable nested parallel regions, i.e., whether team members
842 are allowed to create new teams. The function takes the language-specific
843 equivalent of @code{true} and @code{false}, where @code{true} enables
844 nested parallel regions and @code{false} disables them.
846 Enabling nested parallel regions also sets the maximum number of
847 active nested regions to the maximum supported. Disabling nested parallel
848 regions sets the maximum number of active nested regions to one.
850 Note that the @code{omp_set_nested} API routine was deprecated
851 in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
854 @multitable @columnfractions .20 .80
855 @item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
858 @item @emph{Fortran}:
859 @multitable @columnfractions .20 .80
860 @item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
861 @item @tab @code{logical, intent(in) :: nested}
864 @item @emph{See also}:
865 @ref{omp_get_nested}, @ref{omp_set_max_active_levels},
866 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
868 @item @emph{Reference}:
869 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
875 @subsection @code{omp_get_nested} -- Nested parallel regions
877 @item @emph{Description}:
878 This function returns @code{true} if nested parallel regions are
879 enabled, @code{false} otherwise. Here, @code{true} and @code{false}
880 represent their language-specific counterparts.
882 The state of nested parallel regions at startup depends on several
883 environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
884 and is set to greater than one, then nested parallel regions will be
885 enabled. If not defined, then the value of the @env{OMP_NESTED}
886 environment variable is followed if defined. If neither is defined
887 but either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} is defined
888 with a list of more than one value, nested parallel regions are
889 enabled. If none of these are defined, nested parallel regions
890 are disabled by default.
892 Nested parallel regions can be enabled or disabled at runtime using
893 @code{omp_set_nested}, or by setting the maximum number of nested
894 regions with @code{omp_set_max_active_levels} to one to disable, or above one to enable.
897 Note that the @code{omp_get_nested} API routine was deprecated
898 in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
901 @multitable @columnfractions .20 .80
902 @item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
905 @item @emph{Fortran}:
906 @multitable @columnfractions .20 .80
907 @item @emph{Interface}: @tab @code{logical function omp_get_nested()}
910 @item @emph{See also}:
911 @ref{omp_get_max_active_levels}, @ref{omp_set_nested},
912 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
914 @item @emph{Reference}:
915 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
920 @node omp_set_schedule
921 @subsection @code{omp_set_schedule} -- Set the runtime scheduling method
923 @item @emph{Description}:
924 Sets the runtime scheduling method. The @var{kind} argument can have the
925 value @code{omp_sched_static}, @code{omp_sched_dynamic},
926 @code{omp_sched_guided} or @code{omp_sched_auto}. Except for
927 @code{omp_sched_auto}, the chunk size is set to the value of
928 @var{chunk_size} if positive, or to the default value if zero or negative.
929 For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
932 @multitable @columnfractions .20 .80
933 @item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
936 @item @emph{Fortran}:
937 @multitable @columnfractions .20 .80
938 @item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
939 @item @tab @code{integer(kind=omp_sched_kind) kind}
940 @item @tab @code{integer chunk_size}
943 @item @emph{See also}:
944 @ref{omp_get_schedule}
947 @item @emph{Reference}:
948 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
953 @node omp_get_schedule
954 @subsection @code{omp_get_schedule} -- Obtain the runtime scheduling method
956 @item @emph{Description}:
957 Obtain the runtime scheduling method. The @var{kind} argument is set to
958 @code{omp_sched_static}, @code{omp_sched_dynamic},
959 @code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
960 @var{chunk_size}, is set to the chunk size.
963 @multitable @columnfractions .20 .80
964 @item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
967 @item @emph{Fortran}:
968 @multitable @columnfractions .20 .80
969 @item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
970 @item @tab @code{integer(kind=omp_sched_kind) kind}
971 @item @tab @code{integer chunk_size}
974 @item @emph{See also}:
975 @ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
977 @item @emph{Reference}:
978 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
982 @node omp_get_teams_thread_limit
983 @subsection @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
985 @item @emph{Description}:
986 Return the maximum number of threads that are able to participate in
987 each team created by a teams construct.
990 @multitable @columnfractions .20 .80
991 @item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
994 @item @emph{Fortran}:
995 @multitable @columnfractions .20 .80
996 @item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
999 @item @emph{See also}:
1000 @ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
1002 @item @emph{Reference}:
1003 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
1008 @node omp_get_supported_active_levels
1009 @subsection @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
1011 @item @emph{Description}:
1012 This function returns the maximum number of nested, active parallel regions
1013 supported by this implementation.
1016 @multitable @columnfractions .20 .80
1017 @item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
1020 @item @emph{Fortran}:
1021 @multitable @columnfractions .20 .80
1022 @item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
1025 @item @emph{See also}:
1026 @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1028 @item @emph{Reference}:
1029 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
1034 @node omp_set_max_active_levels
1035 @subsection @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
1037 @item @emph{Description}:
1038 This function limits the maximum allowed number of nested, active
1039 parallel regions. @var{max_levels} must be less or equal to
1040 the value returned by @code{omp_get_supported_active_levels}.
1043 @multitable @columnfractions .20 .80
1044 @item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
1047 @item @emph{Fortran}:
1048 @multitable @columnfractions .20 .80
1049 @item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
1050 @item @tab @code{integer max_levels}
1053 @item @emph{See also}:
1054 @ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
1055 @ref{omp_get_supported_active_levels}
1057 @item @emph{Reference}:
1058 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
1063 @node omp_get_max_active_levels
1064 @subsection @code{omp_get_max_active_levels} -- Current maximum number of active regions
1066 @item @emph{Description}:
1067 This function obtains the maximum allowed number of nested, active parallel regions.
1070 @multitable @columnfractions .20 .80
1071 @item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
1074 @item @emph{Fortran}:
1075 @multitable @columnfractions .20 .80
1076 @item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
1079 @item @emph{See also}:
1080 @ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
1082 @item @emph{Reference}:
1083 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
1088 @subsection @code{omp_get_level} -- Obtain the current nesting level
1090 @item @emph{Description}:
1091 This function returns the nesting level of the parallel regions
1092 enclosing the point of the call.
1095 @multitable @columnfractions .20 .80
1096 @item @emph{Prototype}: @tab @code{int omp_get_level(void);}
1099 @item @emph{Fortran}:
1100 @multitable @columnfractions .20 .80
1101 @item @emph{Interface}: @tab @code{integer function omp_get_level()}
1104 @item @emph{See also}:
1105 @ref{omp_get_active_level}
1107 @item @emph{Reference}:
1108 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
1113 @node omp_get_ancestor_thread_num
1114 @subsection @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
1116 @item @emph{Description}:
1117 This function returns the thread identification number for the given
1118 nesting level of the current thread. For values of @var{level} outside
1119 the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
1120 @code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.
1123 @multitable @columnfractions .20 .80
1124 @item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
1127 @item @emph{Fortran}:
1128 @multitable @columnfractions .20 .80
1129 @item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
1130 @item @tab @code{integer level}
1133 @item @emph{See also}:
1134 @ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
1136 @item @emph{Reference}:
1137 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
1142 @node omp_get_team_size
1143 @subsection @code{omp_get_team_size} -- Number of threads in a team
1145 @item @emph{Description}:
1146 This function returns the number of threads in a thread team to which
1147 either the current thread or its ancestor belongs. For values of @var{level}
1148 outside the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
1149 1 is returned, and for @code{omp_get_level}, the result is identical
1150 to @code{omp_get_num_threads}.
1153 @multitable @columnfractions .20 .80
1154 @item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
1157 @item @emph{Fortran}:
1158 @multitable @columnfractions .20 .80
1159 @item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
1160 @item @tab @code{integer level}
1163 @item @emph{See also}:
1164 @ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
1166 @item @emph{Reference}:
1167 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
1172 @node omp_get_active_level
1173 @subsection @code{omp_get_active_level} -- Number of parallel regions
1175 @item @emph{Description}:
1176 This function returns the nesting level of the active parallel regions
1177 enclosing the point of the call.
1180 @multitable @columnfractions .20 .80
1181 @item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
1184 @item @emph{Fortran}:
1185 @multitable @columnfractions .20 .80
1186 @item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
1189 @item @emph{See also}:
1190 @ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1192 @item @emph{Reference}:
1193 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
1198 @node Thread Affinity Routines
1199 @section Thread Affinity Routines
1201 Routines controlling and accessing thread-affinity policies.
1202 They have C linkage and do not throw exceptions.
1205 * omp_get_proc_bind:: Whether threads may be moved between CPUs
1206 @c * omp_get_num_places:: <fixme>
1207 @c * omp_get_place_num_procs:: <fixme>
1208 @c * omp_get_place_proc_ids:: <fixme>
1209 @c * omp_get_place_num:: <fixme>
1210 @c * omp_get_partition_num_places:: <fixme>
1211 @c * omp_get_partition_place_nums:: <fixme>
1212 @c * omp_set_affinity_format:: <fixme>
1213 @c * omp_get_affinity_format:: <fixme>
1214 @c * omp_display_affinity:: <fixme>
1215 @c * omp_capture_affinity:: <fixme>
1220 @node omp_get_proc_bind
1221 @subsection @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
1223 @item @emph{Description}:
1224 This function returns the currently active thread affinity policy, which is
1225 set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
1226 @code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
1227 @code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
1228 where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
1231 @multitable @columnfractions .20 .80
1232 @item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
1235 @item @emph{Fortran}:
1236 @multitable @columnfractions .20 .80
1237 @item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
1240 @item @emph{See also}:
1241 @ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY},
1243 @item @emph{Reference}:
1244 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
1249 @node Teams Region Routines
1250 @section Teams Region Routines
1252 Routines controlling the league of teams that are executed in a @code{teams}
1253 region. They have C linkage and do not throw exceptions.
1256 * omp_get_num_teams:: Number of teams
1257 * omp_get_team_num:: Get team number
1258 * omp_set_num_teams:: Set upper teams limit for teams region
1259 * omp_get_max_teams:: Maximum number of teams for teams region
1260 * omp_set_teams_thread_limit:: Set upper thread limit for teams construct
1261 * omp_get_thread_limit:: Maximum number of threads
1266 @node omp_get_num_teams
1267 @subsection @code{omp_get_num_teams} -- Number of teams
1269 @item @emph{Description}:
1270 Returns the number of teams in the current teams region.
1273 @multitable @columnfractions .20 .80
1274 @item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
1277 @item @emph{Fortran}:
1278 @multitable @columnfractions .20 .80
1279 @item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
1282 @item @emph{Reference}:
1283 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
1288 @node omp_get_team_num
1289 @subsection @code{omp_get_team_num} -- Get team number
1291 @item @emph{Description}:
1292 Returns the team number of the calling thread.
1295 @multitable @columnfractions .20 .80
1296 @item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
1299 @item @emph{Fortran}:
1300 @multitable @columnfractions .20 .80
1301 @item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
1304 @item @emph{Reference}:
1305 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
1310 @node omp_set_num_teams
1311 @subsection @code{omp_set_num_teams} -- Set upper teams limit for teams construct
1313 @item @emph{Description}:
1314 Specifies the upper bound for the number of teams created by a teams
1315 construct that does not specify a @code{num_teams} clause. The
1316 argument of @code{omp_set_num_teams} shall be a positive integer.
1319 @multitable @columnfractions .20 .80
1320 @item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
1323 @item @emph{Fortran}:
1324 @multitable @columnfractions .20 .80
1325 @item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
1326 @item @tab @code{integer, intent(in) :: num_teams}
1329 @item @emph{See also}:
1330 @ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
1332 @item @emph{Reference}:
1333 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
1338 @node omp_get_max_teams
1339 @subsection @code{omp_get_max_teams} -- Maximum number of teams of teams region
1341 @item @emph{Description}:
1342 Returns the maximum number of teams used for a teams region
1343 that does not use the @code{num_teams} clause.
1346 @multitable @columnfractions .20 .80
1347 @item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
1350 @item @emph{Fortran}:
1351 @multitable @columnfractions .20 .80
1352 @item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
1355 @item @emph{See also}:
1356 @ref{omp_set_num_teams}, @ref{omp_get_num_teams}
1358 @item @emph{Reference}:
1359 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
1364 @node omp_set_teams_thread_limit
1365 @subsection @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
1367 @item @emph{Description}:
1368 Specifies the upper bound for the number of threads that are available
1369 for each team created by a teams construct that does not specify a
1370 @code{thread_limit} clause. The argument of
1371 @code{omp_set_teams_thread_limit} shall be a positive integer.
1374 @multitable @columnfractions .20 .80
1375 @item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
1378 @item @emph{Fortran}:
1379 @multitable @columnfractions .20 .80
1380 @item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
1381 @item @tab @code{integer, intent(in) :: thread_limit}
1384 @item @emph{See also}:
1385 @ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
1387 @item @emph{Reference}:
1388 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
1393 @node omp_get_thread_limit
1394 @subsection @code{omp_get_thread_limit} -- Maximum number of threads
1396 @item @emph{Description}:
1397 Returns the maximum number of threads available to the program.
1400 @multitable @columnfractions .20 .80
1401 @item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
1404 @item @emph{Fortran}:
1405 @multitable @columnfractions .20 .80
1406 @item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
1409 @item @emph{See also}:
1410 @ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
1412 @item @emph{Reference}:
1413 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
1418 @node Tasking Routines
1419 @section Tasking Routines
1421 Routines relating to explicit tasks.
1422 They have C linkage and do not throw exceptions.
1425 * omp_get_max_task_priority:: Maximum task priority value that can be set
1426 * omp_in_explicit_task:: Whether a given task is an explicit task
1427 * omp_in_final:: Whether in final or included task region
1428 @c * omp_is_free_agent:: <fixme>/TR12
1429 @c * omp_ancestor_is_free_agent:: <fixme>/TR12
1434 @node omp_get_max_task_priority
1435 @subsection @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
1438 @item @emph{Description}:
1439 This function obtains the maximum allowed priority number for tasks.
1442 @multitable @columnfractions .20 .80
1443 @item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
1446 @item @emph{Fortran}:
1447 @multitable @columnfractions .20 .80
1448 @item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
1451 @item @emph{Reference}:
1452 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1457 @node omp_in_explicit_task
1458 @subsection @code{omp_in_explicit_task} -- Whether a given task is an explicit task
1460 @item @emph{Description}:
1461 The function returns the @var{explicit-task-var} ICV; it returns true when the
1462 encountering task was generated by a task-generating construct such as
1463 @code{target}, @code{task} or @code{taskloop}. Otherwise, the encountering task
1464 is in an implicit task region, such as those generated by an implicit or
1465 explicit @code{parallel} region, and @code{omp_in_explicit_task} returns false.
1468 @multitable @columnfractions .20 .80
1469 @item @emph{Prototype}: @tab @code{int omp_in_explicit_task(void);}
1472 @item @emph{Fortran}:
1473 @multitable @columnfractions .20 .80
1474 @item @emph{Interface}: @tab @code{logical function omp_in_explicit_task()}
1477 @item @emph{Reference}:
1478 @uref{https://www.openmp.org, OpenMP specification v5.2}, Section 18.5.2.
1484 @subsection @code{omp_in_final} -- Whether in final or included task region
1486 @item @emph{Description}:
1487 This function returns @code{true} if currently running in a final
1488 or included task region, @code{false} otherwise. Here, @code{true}
1489 and @code{false} represent their language-specific counterparts.
1492 @multitable @columnfractions .20 .80
1493 @item @emph{Prototype}: @tab @code{int omp_in_final(void);}
1496 @item @emph{Fortran}:
1497 @multitable @columnfractions .20 .80
1498 @item @emph{Interface}: @tab @code{logical function omp_in_final()}
1501 @item @emph{Reference}:
1502 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
1507 @c @node Resource Relinquishing Routines
1508 @c @section Resource Relinquishing Routines
1510 @c Routines releasing resources used by the OpenMP runtime.
1511 @c They have C linkage and do not throw exceptions.
1514 @c * omp_pause_resource:: <fixme>
1515 @c * omp_pause_resource_all:: <fixme>
1518 @node Device Information Routines
1519 @section Device Information Routines
1521 Routines related to devices available to an OpenMP program.
1522 They have C linkage and do not throw exceptions.
1525 * omp_get_num_procs:: Number of processors online
1526 @c * omp_get_max_progress_width:: <fixme>/TR11
1527 * omp_set_default_device:: Set the default device for target regions
1528 * omp_get_default_device:: Get the default device for target regions
1529 * omp_get_num_devices:: Number of target devices
1530 * omp_get_device_num:: Get device that current thread is running on
1531 * omp_is_initial_device:: Whether executing on the host device
1532 * omp_get_initial_device:: Device number of host device
1537 @node omp_get_num_procs
1538 @subsection @code{omp_get_num_procs} -- Number of processors online
1540 @item @emph{Description}:
1541 Returns the number of processors that are online on the device on which the routine is called.
1544 @multitable @columnfractions .20 .80
1545 @item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
1548 @item @emph{Fortran}:
1549 @multitable @columnfractions .20 .80
1550 @item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
1553 @item @emph{Reference}:
1554 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
1559 @node omp_set_default_device
1560 @subsection @code{omp_set_default_device} -- Set the default device for target regions
1562 @item @emph{Description}:
1563 Set the default device for target regions without device clause. The argument
1564 shall be a nonnegative device number.
1567 @multitable @columnfractions .20 .80
1568 @item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
1571 @item @emph{Fortran}:
1572 @multitable @columnfractions .20 .80
1573 @item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
1574 @item @tab @code{integer device_num}
1577 @item @emph{See also}:
1578 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
1580 @item @emph{Reference}:
1581 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1586 @node omp_get_default_device
1587 @subsection @code{omp_get_default_device} -- Get the default device for target regions
1589 @item @emph{Description}:
1590 Get the default device for target regions without device clause.
1593 @multitable @columnfractions .20 .80
1594 @item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
1597 @item @emph{Fortran}:
1598 @multitable @columnfractions .20 .80
1599 @item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
1602 @item @emph{See also}:
1603 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
1605 @item @emph{Reference}:
1606 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
1611 @node omp_get_num_devices
1612 @subsection @code{omp_get_num_devices} -- Number of target devices
1614 @item @emph{Description}:
1615 Returns the number of target devices.
1618 @multitable @columnfractions .20 .80
1619 @item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
1622 @item @emph{Fortran}:
1623 @multitable @columnfractions .20 .80
1624 @item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
1627 @item @emph{Reference}:
1628 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
1633 @node omp_get_device_num
1634 @subsection @code{omp_get_device_num} -- Return device number of current device
1636 @item @emph{Description}:
1637 This function returns a device number that represents the device that the
1638 current thread is executing on. For OpenMP 5.0, this must be equal to the
1639 value returned by the @code{omp_get_initial_device} function when called from the host.
1643 @multitable @columnfractions .20 .80
1644 @item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
1647 @item @emph{Fortran}:
1648 @multitable @columnfractions .20 .80
1649 @item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
1652 @item @emph{See also}:
1653 @ref{omp_get_initial_device}
1655 @item @emph{Reference}:
1656 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
1661 @node omp_is_initial_device
1662 @subsection @code{omp_is_initial_device} -- Whether executing on the host device
1664 @item @emph{Description}:
1665 This function returns @code{true} if currently running on the host device,
1666 @code{false} otherwise. Here, @code{true} and @code{false} represent
1667 their language-specific counterparts.
1670 @multitable @columnfractions .20 .80
1671 @item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
1674 @item @emph{Fortran}:
1675 @multitable @columnfractions .20 .80
1676 @item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
1679 @item @emph{Reference}:
1680 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
1685 @node omp_get_initial_device
1686 @subsection @code{omp_get_initial_device} -- Return device number of initial device
1688 @item @emph{Description}:
1689 This function returns a device number that represents the host device.
1690 For OpenMP 5.1, this must be equal to the value returned by the
1691 @code{omp_get_num_devices} function.
1694 @multitable @columnfractions .20 .80
1695 @item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
1698 @item @emph{Fortran}:
1699 @multitable @columnfractions .20 .80
1700 @item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
1703 @item @emph{See also}:
1704 @ref{omp_get_num_devices}
1706 @item @emph{Reference}:
1707 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
1712 @node Device Memory Routines
1713 @section Device Memory Routines
1715 Routines related to memory allocation and managing corresponding
1716 pointers on devices. They have C linkage and do not throw exceptions.
1719 * omp_target_alloc:: Allocate device memory
1720 * omp_target_free:: Free device memory
1721 * omp_target_is_present:: Check whether storage is mapped
1722 @c * omp_target_is_accessible:: <fixme>
1723 @c * omp_target_memcpy:: <fixme>
1724 @c * omp_target_memcpy_rect:: <fixme>
1725 @c * omp_target_memcpy_async:: <fixme>
1726 @c * omp_target_memcpy_rect_async:: <fixme>
1727 @c * omp_target_memset:: <fixme>/TR12
1728 @c * omp_target_memset_async:: <fixme>/TR12
1729 * omp_target_associate_ptr:: Associate a device pointer with a host pointer
1730 * omp_target_disassociate_ptr:: Remove device--host pointer association
1731 * omp_get_mapped_ptr:: Return device pointer to a host pointer
1736 @node omp_target_alloc
1737 @subsection @code{omp_target_alloc} -- Allocate device memory
1739 @item @emph{Description}:
1740 This routine allocates @var{size} bytes of memory in the device environment
1741 associated with the device number @var{device_num}. If successful, a device
1742 pointer is returned, otherwise a null pointer.
1744 In GCC, when the device is the host or the device shares memory with the host,
1745 the memory is allocated on the host; in that case, when @var{size} is zero,
1746 either NULL or a unique pointer value that can later be successfully passed to
1747 @code{omp_target_free} is returned. When the allocation is not performed on
1748 the host, a null pointer is returned when @var{size} is zero; in that case,
1749 additionally a diagnostic might be printed to standard error (stderr).
1751 Running this routine in a @code{target} region except on the initial device is not supported.
1755 @multitable @columnfractions .20 .80
1756 @item @emph{Prototype}: @tab @code{void *omp_target_alloc(size_t size, int device_num)}
1759 @item @emph{Fortran}:
1760 @multitable @columnfractions .20 .80
1761 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_target_alloc(size, device_num) bind(C)}
1762 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1763 @item @tab @code{integer(c_size_t), value :: size}
1764 @item @tab @code{integer(c_int), value :: device_num}
1767 @item @emph{See also}:
1768 @ref{omp_target_free}, @ref{omp_target_associate_ptr}
1770 @item @emph{Reference}:
1771 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.1
1776 @node omp_target_free
1777 @subsection @code{omp_target_free} -- Free device memory
1779 @item @emph{Description}:
1780 This routine frees memory allocated by the @code{omp_target_alloc} routine.
1781 The @var{device_ptr} argument must be either a null pointer or a device pointer
1782 returned by @code{omp_target_alloc} for the specified @var{device_num}. The
1783 device number @var{device_num} must be a conforming device number.
1785 Running this routine in a @code{target} region except on the initial device
is not supported.
1789 @multitable @columnfractions .20 .80
1790 @item @emph{Prototype}: @tab @code{void omp_target_free(void *device_ptr, int device_num)}
1793 @item @emph{Fortran}:
1794 @multitable @columnfractions .20 .80
1795 @item @emph{Interface}: @tab @code{subroutine omp_target_free(device_ptr, device_num) bind(C)}
1796 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1797 @item @tab @code{type(c_ptr), value :: device_ptr}
1798 @item @tab @code{integer(c_int), value :: device_num}
1801 @item @emph{See also}:
1802 @ref{omp_target_alloc}, @ref{omp_target_disassociate_ptr}
1804 @item @emph{Reference}:
1805 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.2
1810 @node omp_target_is_present
1811 @subsection @code{omp_target_is_present} -- Check whether storage is mapped
1813 @item @emph{Description}:
1814 This routine tests whether storage, identified by the host pointer @var{ptr}
1815 is mapped to the device specified by @var{device_num}. If so, it returns
1816 @emph{true} and otherwise @emph{false}.
1818 In GCC, this includes self mapping such that @code{omp_target_is_present}
1819 returns @emph{true} when @var{device_num} specifies the host or when the host
1820 and the device share memory. If @var{ptr} is a null pointer, @emph{true} is
1821 returned; if @var{device_num} is an invalid device number, @emph{false} is
returned.
1824 If those conditions do not apply, @emph{true} is returned if the association has
1825 been established by an explicit or implicit @code{map} clause, the
1826 @code{declare target} directive or a call to the @code{omp_target_associate_ptr}
1829 Running this routine in a @code{target} region except on the initial device
is not supported.
1833 @multitable @columnfractions .20 .80
1834 @item @emph{Prototype}: @tab @code{int omp_target_is_present(const void *ptr,}
1835 @item @tab @code{ int device_num)}
1838 @item @emph{Fortran}:
1839 @multitable @columnfractions .20 .80
1840 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_is_present(ptr, &}
1841 @item @tab @code{ device_num) bind(C)}
1842 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1843 @item @tab @code{type(c_ptr), value :: ptr}
1844 @item @tab @code{integer(c_int), value :: device_num}
1847 @item @emph{See also}:
1848 @ref{omp_target_associate_ptr}
1850 @item @emph{Reference}:
1851 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.3
1856 @node omp_target_associate_ptr
1857 @subsection @code{omp_target_associate_ptr} -- Associate a device pointer with a host pointer
1859 @item @emph{Description}:
1860 This routine associates storage on the host with storage on a device identified
1861 by @var{device_num}. The device pointer is usually obtained by calling
1862 @code{omp_target_alloc} or by other means (but not by using the @code{map}
1863 clauses or the @code{declare target} directive). The host pointer should point
1864 to memory that has a storage size of at least @var{size}.
1866 The @var{device_offset} parameter specifies the offset into @var{device_ptr}
1867 that is used as the base address for the device side of the mapping; the
1868 storage size should be at least @var{device_offset} plus @var{size}.
1870 After the association, the host pointer can be used in a @code{map} clause and
1871 in the @code{to} and @code{from} clauses of the @code{target update} directive
1872 to transfer data between the associated pointers. The reference count of such
1873 associated storage is infinite. The association can be removed by calling
1874 @code{omp_target_disassociate_ptr} which should be done before the lifetime
1875 of either storage ends.
1877 The routine returns nonzero (@code{EINVAL}) when @var{device_num} is invalid,
1878 when it denotes the initial device, or when the associated device shares
1879 memory with the host. @code{omp_target_associate_ptr} returns zero if
1880 @var{host_ptr} points into already associated storage that lies fully inside
1881 a previously associated memory block. Otherwise, if the association was
1882 successful, zero is returned; if none of the cases above apply, nonzero
(@code{EINVAL}) is returned.
1884 The @code{omp_target_is_present} routine can be used to test whether
1885 associated storage for a device pointer exists.
1887 Running this routine in a @code{target} region except on the initial device
is not supported.
1891 @multitable @columnfractions .20 .80
1892 @item @emph{Prototype}: @tab @code{int omp_target_associate_ptr(const void *host_ptr,}
1893 @item @tab @code{ const void *device_ptr,}
1894 @item @tab @code{ size_t size,}
1895 @item @tab @code{ size_t device_offset,}
1896 @item @tab @code{ int device_num)}
1899 @item @emph{Fortran}:
1900 @multitable @columnfractions .20 .80
1901 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_associate_ptr(host_ptr, &}
1902 @item @tab @code{ device_ptr, size, device_offset, device_num) bind(C)}
1903 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1904 @item @tab @code{type(c_ptr), value :: host_ptr, device_ptr}
1905 @item @tab @code{integer(c_size_t), value :: size, device_offset}
1906 @item @tab @code{integer(c_int), value :: device_num}
1909 @item @emph{See also}:
1910 @ref{omp_target_disassociate_ptr}, @ref{omp_target_is_present},
1911 @ref{omp_target_alloc}
1913 @item @emph{Reference}:
1914 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.9
1919 @node omp_target_disassociate_ptr
1920 @subsection @code{omp_target_disassociate_ptr} -- Remove device--host pointer association
1922 @item @emph{Description}:
1923 This routine removes the storage association established by calling
1924 @code{omp_target_associate_ptr} and sets the reference count to zero,
1925 even if @code{omp_target_associate_ptr} was invoked multiple times
1926 for the host pointer @var{ptr}. If applicable, the device memory needs
1927 to be freed by the user.
1929 If an associated device storage location for the @var{device_num} was
1930 found and has infinite reference count, the association is removed and
1931 zero is returned. In all other cases, nonzero (@code{EINVAL}) is returned
1932 and no other action is taken.
1934 Note that passing a host pointer where the association to the device pointer
1935 was established with the @code{declare target} directive yields undefined
behavior.
1938 Running this routine in a @code{target} region except on the initial device
is not supported.
1942 @multitable @columnfractions .20 .80
1943 @item @emph{Prototype}: @tab @code{int omp_target_disassociate_ptr(const void *ptr,}
1944 @item @tab @code{ int device_num)}
1947 @item @emph{Fortran}:
1948 @multitable @columnfractions .20 .80
1949 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_disassociate_ptr(ptr, &}
1950 @item @tab @code{ device_num) bind(C)}
1951 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1952 @item @tab @code{type(c_ptr), value :: ptr}
1953 @item @tab @code{integer(c_int), value :: device_num}
1956 @item @emph{See also}:
1957 @ref{omp_target_associate_ptr}
1959 @item @emph{Reference}:
1960 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.10
1965 @node omp_get_mapped_ptr
1966 @subsection @code{omp_get_mapped_ptr} -- Return device pointer to a host pointer
1968 @item @emph{Description}:
1969 If the device number refers to the initial device or to a device with
1970 memory accessible from the host (shared memory), the @code{omp_get_mapped_ptr}
1971 routine returns the value of the passed @var{ptr}. Otherwise, if associated
1972 storage for the passed host pointer @var{ptr} exists on the device associated
1973 with @var{device_num}, it returns that pointer. In all other cases and in
1974 cases of an error, a null pointer is returned.
1976 The association of storage location is established either via an explicit or
1977 implicit @code{map} clause, the @code{declare target} directive or the
1978 @code{omp_target_associate_ptr} routine.
1980 Running this routine in a @code{target} region except on the initial device
is not supported.
1984 @multitable @columnfractions .20 .80
1985 @item @emph{Prototype}: @tab @code{void *omp_get_mapped_ptr(const void *ptr, int device_num);}
1988 @item @emph{Fortran}:
1989 @multitable @columnfractions .20 .80
1990 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_get_mapped_ptr(ptr, device_num) bind(C)}
1991 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1992 @item @tab @code{type(c_ptr), value :: ptr}
1993 @item @tab @code{integer(c_int), value :: device_num}
1996 @item @emph{See also}:
1997 @ref{omp_target_associate_ptr}
1999 @item @emph{Reference}:
2000 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.11
2006 @section Lock Routines
2008 Initialize, set, test, unset and destroy simple and nested locks.
2009 The routines have C linkage and do not throw exceptions.
2012 * omp_init_lock:: Initialize simple lock
2013 * omp_init_nest_lock:: Initialize nested lock
2014 @c * omp_init_lock_with_hint:: <fixme>
2015 @c * omp_init_nest_lock_with_hint:: <fixme>
2016 * omp_destroy_lock:: Destroy simple lock
2017 * omp_destroy_nest_lock:: Destroy nested lock
2018 * omp_set_lock:: Wait for and set simple lock
2019 * omp_set_nest_lock::           Wait for and set nested lock
2020 * omp_unset_lock:: Unset simple lock
2021 * omp_unset_nest_lock:: Unset nested lock
2022 * omp_test_lock:: Test and set simple lock if available
2023 * omp_test_nest_lock:: Test and set nested lock if available
2029 @subsection @code{omp_init_lock} -- Initialize simple lock
2031 @item @emph{Description}:
2032 Initialize a simple lock. After initialization, the lock is in
an unlocked state.
2036 @multitable @columnfractions .20 .80
2037 @item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
2040 @item @emph{Fortran}:
2041 @multitable @columnfractions .20 .80
2042 @item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
2043 @item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
2046 @item @emph{See also}:
2047 @ref{omp_destroy_lock}
2049 @item @emph{Reference}:
2050 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
2055 @node omp_init_nest_lock
2056 @subsection @code{omp_init_nest_lock} -- Initialize nested lock
2058 @item @emph{Description}:
2059 Initialize a nested lock. After initialization, the lock is in
2060 an unlocked state and the nesting count is set to zero.
2063 @multitable @columnfractions .20 .80
2064 @item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
2067 @item @emph{Fortran}:
2068 @multitable @columnfractions .20 .80
2069 @item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
2070 @item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
2073 @item @emph{See also}:
2074 @ref{omp_destroy_nest_lock}
2076 @item @emph{Reference}:
2077 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
2082 @node omp_destroy_lock
2083 @subsection @code{omp_destroy_lock} -- Destroy simple lock
2085 @item @emph{Description}:
2086 Destroy a simple lock. In order to be destroyed, a simple lock must be
2087 in the unlocked state.
2090 @multitable @columnfractions .20 .80
2091 @item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
2094 @item @emph{Fortran}:
2095 @multitable @columnfractions .20 .80
2096 @item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
2097 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2100 @item @emph{See also}:
2103 @item @emph{Reference}:
2104 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
2109 @node omp_destroy_nest_lock
2110 @subsection @code{omp_destroy_nest_lock} -- Destroy nested lock
2112 @item @emph{Description}:
2113 Destroy a nested lock. In order to be destroyed, a nested lock must be
2114 in the unlocked state and its nesting count must equal zero.
2117 @multitable @columnfractions .20 .80
2118 @item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
2121 @item @emph{Fortran}:
2122 @multitable @columnfractions .20 .80
2123 @item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
2124 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2127 @item @emph{See also}:
2130 @item @emph{Reference}:
2131 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
2137 @subsection @code{omp_set_lock} -- Wait for and set simple lock
2139 @item @emph{Description}:
2140 Before setting a simple lock, the lock variable must be initialized by
2141 @code{omp_init_lock}. The calling thread is blocked until the lock
2142 is available. If the lock is already held by the current thread,
a deadlock occurs.
2146 @multitable @columnfractions .20 .80
2147 @item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
2150 @item @emph{Fortran}:
2151 @multitable @columnfractions .20 .80
2152 @item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
2153 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2156 @item @emph{See also}:
2157 @ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
2159 @item @emph{Reference}:
2160 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
2165 @node omp_set_nest_lock
2166 @subsection @code{omp_set_nest_lock} -- Wait for and set nested lock
2168 @item @emph{Description}:
2169 Before setting a nested lock, the lock variable must be initialized by
2170 @code{omp_init_nest_lock}. The calling thread is blocked until the lock
2171 is available. If the lock is already held by the current thread, the
2172 nesting count for the lock is incremented.
2175 @multitable @columnfractions .20 .80
2176 @item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
2179 @item @emph{Fortran}:
2180 @multitable @columnfractions .20 .80
2181 @item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
2182 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2185 @item @emph{See also}:
2186 @ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
2188 @item @emph{Reference}:
2189 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
2194 @node omp_unset_lock
2195 @subsection @code{omp_unset_lock} -- Unset simple lock
2197 @item @emph{Description}:
2198 A simple lock about to be unset must have been locked by @code{omp_set_lock}
2199 or @code{omp_test_lock} before. In addition, the lock must be held by the
2200 thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
2201 or more threads attempted to set the lock before, one of them is chosen to
2202 acquire the lock.
2205 @multitable @columnfractions .20 .80
2206 @item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
2209 @item @emph{Fortran}:
2210 @multitable @columnfractions .20 .80
2211 @item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
2212 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2215 @item @emph{See also}:
2216 @ref{omp_set_lock}, @ref{omp_test_lock}
2218 @item @emph{Reference}:
2219 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
2224 @node omp_unset_nest_lock
2225 @subsection @code{omp_unset_nest_lock} -- Unset nested lock
2227 @item @emph{Description}:
2228 A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
2229 or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
2230 thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
2231 lock becomes unlocked. If one or more threads attempted to set the lock before,
2232 one of them is chosen to acquire the lock.
2235 @multitable @columnfractions .20 .80
2236 @item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
2239 @item @emph{Fortran}:
2240 @multitable @columnfractions .20 .80
2241 @item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
2242 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2245 @item @emph{See also}:
2246 @ref{omp_set_nest_lock}
2248 @item @emph{Reference}:
2249 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
2255 @subsection @code{omp_test_lock} -- Test and set simple lock if available
2257 @item @emph{Description}:
2258 Before setting a simple lock, the lock variable must be initialized by
2259 @code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
2260 does not block if the lock is not available. This function returns
2261 @code{true} upon success, @code{false} otherwise. Here, @code{true} and
2262 @code{false} represent their language-specific counterparts.
2265 @multitable @columnfractions .20 .80
2266 @item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
2269 @item @emph{Fortran}:
2270 @multitable @columnfractions .20 .80
2271 @item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
2272 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2275 @item @emph{See also}:
2276 @ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
2278 @item @emph{Reference}:
2279 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
2284 @node omp_test_nest_lock
2285 @subsection @code{omp_test_nest_lock} -- Test and set nested lock if available
2287 @item @emph{Description}:
2288 Before setting a nested lock, the lock variable must be initialized by
2289 @code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
2290 @code{omp_test_nest_lock} does not block if the lock is not available.
2291 If the lock is already held by the current thread, the new nesting count
2292 is returned. Otherwise, the return value equals zero.
2295 @multitable @columnfractions .20 .80
2296 @item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
2299 @item @emph{Fortran}:
2300 @multitable @columnfractions .20 .80
2301 @item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
2302 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2306 @item @emph{See also}:
2307 @ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
2309 @item @emph{Reference}:
2310 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
2315 @node Timing Routines
2316 @section Timing Routines
2318 Portable, thread-based, wall clock timer.
2319 The routines have C linkage and do not throw exceptions.
2322 * omp_get_wtick:: Get timer precision.
2323 * omp_get_wtime:: Elapsed wall clock time.
2329 @subsection @code{omp_get_wtick} -- Get timer precision
2331 @item @emph{Description}:
2332 Gets the timer precision, i.e., the number of seconds between two
2333 successive clock ticks.
2336 @multitable @columnfractions .20 .80
2337 @item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
2340 @item @emph{Fortran}:
2341 @multitable @columnfractions .20 .80
2342 @item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
2345 @item @emph{See also}:
2348 @item @emph{Reference}:
2349 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
2355 @subsection @code{omp_get_wtime} -- Elapsed wall clock time
2357 @item @emph{Description}:
2358 Elapsed wall clock time in seconds. The time is measured per thread; no
2359 guarantee can be made that two distinct threads measure the same time.
2360 Time is measured from some ``time in the past'', which is an arbitrary time
2361 guaranteed not to change during the execution of the program.
2364 @multitable @columnfractions .20 .80
2365 @item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
2368 @item @emph{Fortran}:
2369 @multitable @columnfractions .20 .80
2370 @item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
2373 @item @emph{See also}:
2376 @item @emph{Reference}:
2377 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
2383 @section Event Routine
2385 Support for event objects.
2386 The routine has C linkage and does not throw exceptions.
2389 * omp_fulfill_event:: Fulfill and destroy an OpenMP event.
2394 @node omp_fulfill_event
2395 @subsection @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
2397 @item @emph{Description}:
2398 Fulfill the event associated with the event handle argument. Currently, it
2399 is only used to fulfill events generated by @code{detach} clauses on task
2400 constructs; the effect of fulfilling the event is to allow the task to
complete.
2403 The result of calling @code{omp_fulfill_event} with an event handle other
2404 than that generated by a detach clause is undefined. Calling it with an
2405 event handle that has already been fulfilled is also undefined.
2408 @multitable @columnfractions .20 .80
2409 @item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
2412 @item @emph{Fortran}:
2413 @multitable @columnfractions .20 .80
2414 @item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
2415 @item @tab @code{integer (kind=omp_event_handle_kind) :: event}
2418 @item @emph{Reference}:
2419 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
2424 @c @node Interoperability Routines
2425 @c @section Interoperability Routines
2427 @c Routines to obtain properties from an @code{omp_interop_t} object.
2428 @c They have C linkage and do not throw exceptions.
2431 @c * omp_get_num_interop_properties:: <fixme>
2432 @c * omp_get_interop_int:: <fixme>
2433 @c * omp_get_interop_ptr:: <fixme>
2434 @c * omp_get_interop_str:: <fixme>
2435 @c * omp_get_interop_name:: <fixme>
2436 @c * omp_get_interop_type_desc:: <fixme>
2437 @c * omp_get_interop_rc_desc:: <fixme>
2440 @node Memory Management Routines
2441 @section Memory Management Routines
2443 Routines to manage and allocate memory on the current device.
2444 They have C linkage and do not throw exceptions.
2447 * omp_init_allocator:: Create an allocator
2448 * omp_destroy_allocator:: Destroy an allocator
2449 * omp_set_default_allocator:: Set the default allocator
2450 * omp_get_default_allocator:: Get the default allocator
2451 * omp_alloc:: Memory allocation with an allocator
2452 * omp_aligned_alloc:: Memory allocation with an allocator and alignment
2453 * omp_free:: Freeing memory allocated with OpenMP routines
2454 * omp_calloc:: Allocate nullified memory with an allocator
2455 * omp_aligned_calloc:: Allocate nullified aligned memory with an allocator
2456 * omp_realloc:: Reallocate memory allocated with OpenMP routines
2457 @c * omp_get_memspace_num_resources:: <fixme>/TR11
2458 @c * omp_get_submemspace:: <fixme>/TR11
2463 @node omp_init_allocator
2464 @subsection @code{omp_init_allocator} -- Create an allocator
2466 @item @emph{Description}:
2467 Create an allocator that uses the specified memory space and has the specified
2468 traits; if an allocator that fulfills the requirements cannot be created,
2469 @code{omp_null_allocator} is returned.
2471 The predefined memory spaces and available traits can be found at
2472 @ref{OMP_ALLOCATOR}, where the trait names have to be prefixed by
2473 @code{omp_atk_} (e.g. @code{omp_atk_pinned}) and the named trait values by
2474 @code{omp_atv_} (e.g. @code{omp_atv_true}); additionally, @code{omp_atv_default}
2475 may be used as trait value to specify that the default value should be used.
2478 @multitable @columnfractions .20 .80
2479 @item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_init_allocator(}
2480 @item @tab @code{ omp_memspace_handle_t memspace,}
2481 @item @tab @code{ int ntraits,}
2482 @item @tab @code{ const omp_alloctrait_t traits[]);}
2485 @item @emph{Fortran}:
2486 @multitable @columnfractions .20 .80
2487 @item @emph{Interface}: @tab @code{function omp_init_allocator(memspace, ntraits, traits)}
2488 @item @tab @code{integer (omp_allocator_handle_kind) :: omp_init_allocator}
2489 @item @tab @code{integer (omp_memspace_handle_kind), intent(in) :: memspace}
2490 @item @tab @code{integer, intent(in) :: ntraits}
2491 @item @tab @code{type (omp_alloctrait), intent(in) :: traits(*)}
2494 @item @emph{See also}:
2495 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_destroy_allocator}
2497 @item @emph{Reference}:
2498 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.2
2503 @node omp_destroy_allocator
2504 @subsection @code{omp_destroy_allocator} -- Destroy an allocator
2506 @item @emph{Description}:
2507 Releases all resources used by a memory allocator, which must not represent
2508 a predefined memory allocator. Accessing memory after its allocator has been
2509 destroyed has unspecified behavior. Passing @code{omp_null_allocator} to the
2510 routine is permitted but has no effect.
2514 @multitable @columnfractions .20 .80
2515 @item @emph{Prototype}: @tab @code{void omp_destroy_allocator (omp_allocator_handle_t allocator);}
2518 @item @emph{Fortran}:
2519 @multitable @columnfractions .20 .80
2520 @item @emph{Interface}: @tab @code{subroutine omp_destroy_allocator(allocator)}
2521 @item @tab @code{integer (omp_allocator_handle_kind), intent(in) :: allocator}
2524 @item @emph{See also}:
2525 @ref{omp_init_allocator}
2527 @item @emph{Reference}:
2528 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.3
2533 @node omp_set_default_allocator
2534 @subsection @code{omp_set_default_allocator} -- Set the default allocator
2536 @item @emph{Description}:
2537 Sets the default allocator that is used when no allocator has been specified
2538 in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
2539 routine is invoked with the @code{omp_null_allocator} allocator.
2542 @multitable @columnfractions .20 .80
2543 @item @emph{Prototype}: @tab @code{void omp_set_default_allocator(omp_allocator_handle_t allocator);}
2546 @item @emph{Fortran}:
2547 @multitable @columnfractions .20 .80
2548 @item @emph{Interface}: @tab @code{subroutine omp_set_default_allocator(allocator)}
2549 @item @tab @code{integer (omp_allocator_handle_kind), intent(in) :: allocator}
2552 @item @emph{See also}:
2553 @ref{omp_get_default_allocator}, @ref{omp_init_allocator}, @ref{OMP_ALLOCATOR},
2554 @ref{Memory allocation}
2556 @item @emph{Reference}:
2557 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.4
2562 @node omp_get_default_allocator
2563 @subsection @code{omp_get_default_allocator} -- Get the default allocator
2565 @item @emph{Description}:
2566 The routine returns the default allocator that is used when no allocator has
2567 been specified in the @code{allocate} or @code{allocator} clause or if an
2568 OpenMP memory routine is invoked with the @code{omp_null_allocator} allocator.
2571 @multitable @columnfractions .20 .80
2572 @item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_get_default_allocator(void);}
2575 @item @emph{Fortran}:
2576 @multitable @columnfractions .20 .80
2577 @item @emph{Interface}: @tab @code{function omp_get_default_allocator()}
2578 @item @tab @code{integer (omp_allocator_handle_kind) :: omp_get_default_allocator}
2581 @item @emph{See also}:
2582 @ref{omp_set_default_allocator}, @ref{OMP_ALLOCATOR}
2584 @item @emph{Reference}:
2585 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.5
2591 @subsection @code{omp_alloc} -- Memory allocation with an allocator
2593 @item @emph{Description}:
2594 Allocate memory with the specified allocator, which can either be a predefined
2595 allocator, an allocator handle or @code{omp_null_allocator}. If the allocator
2596 is @code{omp_null_allocator}, the allocator specified by the
2597 @var{def-allocator-var} ICV is used. @var{size} must be a nonnegative number
2598 denoting the number of bytes to be allocated; if @var{size} is zero,
2599 @code{omp_alloc} will return a null pointer. If successful, a pointer to the
2600 allocated memory is returned, otherwise the @code{fallback} trait of the
2601 allocator determines the behavior. The content of the allocated memory is
unspecified.
2604 In @code{target} regions, either the @code{dynamic_allocators} clause must
2605 appear on a @code{requires} directive in the same compilation unit -- or the
2606 @var{allocator} argument may only be a constant expression with the value of
2607 one of the predefined allocators and may not be @code{omp_null_allocator}.
2609 Memory allocated by @code{omp_alloc} must be freed using @code{omp_free}.
2612 @multitable @columnfractions .20 .80
2613 @item @emph{Prototype}: @tab @code{void* omp_alloc(size_t size,}
2614 @item @tab @code{ omp_allocator_handle_t allocator)}
2618 @multitable @columnfractions .20 .80
2619 @item @emph{Prototype}: @tab @code{void* omp_alloc(size_t size,}
2620 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2623 @item @emph{Fortran}:
2624 @multitable @columnfractions .20 .80
2625 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_alloc(size, allocator) bind(C)}
2626 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2627 @item @tab @code{integer (c_size_t), value :: size}
2628 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2631 @item @emph{See also}:
2632 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2633 @ref{omp_free}, @ref{omp_init_allocator}
2635 @item @emph{Reference}:
2636 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.6
2641 @node omp_aligned_alloc
2642 @subsection @code{omp_aligned_alloc} -- Memory allocation with an allocator and alignment
2644 @item @emph{Description}:
2645 Allocate memory with the specified allocator, which can either be a predefined
2646 allocator, an allocator handle or @code{omp_null_allocator}. If the allocator
2647 is @code{omp_null_allocator}, the allocator specified by the
2648 @var{def-allocator-var} ICV is used. @var{alignment} must be a positive power
2649 of two and @var{size} must be a nonnegative number that is a multiple of the
2650 alignment and denotes the number of bytes to be allocated; if @var{size} is
2651 zero, @code{omp_aligned_alloc} will return a null pointer. The alignment will
2652 be at least the maximal value required by the @code{alignment} trait of the
2653 allocator and the value of the passed @var{alignment} argument. If successful,
2654 a pointer to the allocated memory is returned, otherwise the @code{fallback}
2655 trait of the allocator determines the behavior. The content of the allocated
2656 memory is unspecified.
2658 In @code{target} regions, either the @code{dynamic_allocators} clause must
2659 appear on a @code{requires} directive in the same compilation unit -- or the
2660 @var{allocator} argument may only be a constant expression with the value of
2661 one of the predefined allocators and may not be @code{omp_null_allocator}.
2663 Memory allocated by @code{omp_aligned_alloc} must be freed using
@code{omp_free}.
2667 @multitable @columnfractions .20 .80
2668 @item @emph{Prototype}: @tab @code{void* omp_aligned_alloc(size_t alignment,}
2669 @item @tab @code{ size_t size,}
2670 @item @tab @code{ omp_allocator_handle_t allocator)}
2674 @multitable @columnfractions .20 .80
2675 @item @emph{Prototype}: @tab @code{void* omp_aligned_alloc(size_t alignment,}
2676 @item @tab @code{ size_t size,}
2677 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2680 @item @emph{Fortran}:
2681 @multitable @columnfractions .20 .80
2682 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_aligned_alloc(alignment, size, allocator) bind(C)}
2683 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2684 @item @tab @code{integer (c_size_t), value :: alignment, size}
2685 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2688 @item @emph{See also}:
2689 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2690 @ref{omp_free}, @ref{omp_init_allocator}
2692 @item @emph{Reference}:
2693 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.13.6
2699 @subsection @code{omp_free} -- Freeing memory allocated with OpenMP routines
2701 @item @emph{Description}:
2702 The @code{omp_free} routine deallocates memory previously allocated by an
2703 OpenMP memory-management routine. The @var{ptr} argument must point to such
2704 memory or be a null pointer; if it is a null pointer, no operation is
2705 performed. If specified, the @var{allocator} argument must be either the
2706 memory allocator that was used for the allocation or @code{omp_null_allocator};
if it is @code{omp_null_allocator}, the implementation determines the
allocator automatically.
Calling @code{omp_free} invokes undefined behavior if the memory has already
been deallocated or if the allocator used for the allocation has already been
destroyed.
2714 @multitable @columnfractions .20 .80
2715 @item @emph{Prototype}: @tab @code{void omp_free(void *ptr,}
2716 @item @tab @code{ omp_allocator_handle_t allocator)}
2720 @multitable @columnfractions .20 .80
2721 @item @emph{Prototype}: @tab @code{void omp_free(void *ptr,}
2722 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2725 @item @emph{Fortran}:
2726 @multitable @columnfractions .20 .80
2727 @item @emph{Interface}: @tab @code{subroutine omp_free(ptr, allocator) bind(C)}
2728 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr}
2729 @item @tab @code{type (c_ptr), value :: ptr}
2730 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2733 @item @emph{See also}:
2734 @ref{omp_alloc}, @ref{omp_aligned_alloc}, @ref{omp_calloc},
2735 @ref{omp_aligned_calloc}, @ref{omp_realloc}
2737 @item @emph{Reference}:
2738 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.7
2744 @subsection @code{omp_calloc} -- Allocate nullified memory with an allocator
2746 @item @emph{Description}:
2747 Allocate zero-initialized memory with the specified allocator, which can either
2748 be a predefined allocator, an allocator handle or @code{omp_null_allocator}. If
the allocator is @code{omp_null_allocator}, the allocator specified by the
@var{def-allocator-var} ICV is used. The memory to be allocated is for an
2751 array with @var{nmemb} elements, each having a size of @var{size} bytes. Both
2752 @var{nmemb} and @var{size} must be nonnegative numbers; if either of them is
2753 zero, @code{omp_calloc} will return a null pointer. If successful, a pointer to
2754 the zero-initialized allocated memory is returned, otherwise the @code{fallback}
2755 trait of the allocator determines the behavior.
2757 In @code{target} regions, either the @code{dynamic_allocators} clause must
appear on a @code{requires} directive in the same compilation unit, or the
2759 @var{allocator} argument may only be a constant expression with the value of
2760 one of the predefined allocators and may not be @code{omp_null_allocator}.
2762 Memory allocated by @code{omp_calloc} must be freed using @code{omp_free}.
2765 @multitable @columnfractions .20 .80
2766 @item @emph{Prototype}: @tab @code{void* omp_calloc(size_t nmemb, size_t size,}
2767 @item @tab @code{ omp_allocator_handle_t allocator)}
2771 @multitable @columnfractions .20 .80
2772 @item @emph{Prototype}: @tab @code{void* omp_calloc(size_t nmemb, size_t size,}
2773 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2776 @item @emph{Fortran}:
2777 @multitable @columnfractions .20 .80
2778 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_calloc(nmemb, size, allocator) bind(C)}
2779 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2780 @item @tab @code{integer (c_size_t), value :: nmemb, size}
2781 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2784 @item @emph{See also}:
2785 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2786 @ref{omp_free}, @ref{omp_init_allocator}
2788 @item @emph{Reference}:
2789 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.13.8
2794 @node omp_aligned_calloc
2795 @subsection @code{omp_aligned_calloc} -- Allocate aligned nullified memory with an allocator
2797 @item @emph{Description}:
2798 Allocate zero-initialized memory with the specified allocator, which can either
2799 be a predefined allocator, an allocator handle or @code{omp_null_allocator}. If
the allocator is @code{omp_null_allocator}, the allocator specified by the
@var{def-allocator-var} ICV is used. The memory to be allocated is for an
2802 array with @var{nmemb} elements, each having a size of @var{size} bytes. Both
2803 @var{nmemb} and @var{size} must be nonnegative numbers; if either of them is
2804 zero, @code{omp_aligned_calloc} will return a null pointer. @var{alignment}
2805 must be a positive power of two and @var{size} must be a multiple of the
2806 alignment; the alignment will be at least the maximal value required by
the @code{alignment} trait of the allocator and the value of the passed
2808 @var{alignment} argument. If successful, a pointer to the zero-initialized
2809 allocated memory is returned, otherwise the @code{fallback} trait of the
2810 allocator determines the behavior.
2812 In @code{target} regions, either the @code{dynamic_allocators} clause must
appear on a @code{requires} directive in the same compilation unit, or the
2814 @var{allocator} argument may only be a constant expression with the value of
2815 one of the predefined allocators and may not be @code{omp_null_allocator}.
Memory allocated by @code{omp_aligned_calloc} must be freed using
@code{omp_free}.
2821 @multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void* omp_aligned_calloc(size_t alignment,}
@item @tab @code{                                 size_t nmemb, size_t size,}
@item @tab @code{                                 omp_allocator_handle_t allocator)}
2827 @multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void* omp_aligned_calloc(size_t alignment,}
@item @tab @code{                                 size_t nmemb, size_t size,}
@item @tab @code{                                 omp_allocator_handle_t allocator=omp_null_allocator)}
2832 @item @emph{Fortran}:
2833 @multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{type(c_ptr) function omp_aligned_calloc(alignment, nmemb, size, allocator) bind(C)}
@item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
@item @tab @code{integer (c_size_t), value :: alignment, nmemb, size}
2837 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2840 @item @emph{See also}:
2841 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2842 @ref{omp_free}, @ref{omp_init_allocator}
2844 @item @emph{Reference}:
2845 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.13.8
2851 @subsection @code{omp_realloc} -- Reallocate memory allocated with OpenMP routines
2853 @item @emph{Description}:
The @code{omp_realloc} routine deallocates the memory to which @var{ptr} points
2855 and allocates new memory with the specified @var{allocator} argument; the
2856 new memory will have the content of the old memory up to the minimum of the
2857 old size and the new @var{size}, otherwise the content of the returned memory
2858 is unspecified. If the new allocator is the same as the old one, the routine
2859 tries to resize the existing memory allocation, returning the same address as
2860 @var{ptr} if successful. @var{ptr} must point to memory allocated by an OpenMP
2861 memory-management routine.
2863 The @var{allocator} and @var{free_allocator} arguments must be a predefined
2864 allocator, an allocator handle or @code{omp_null_allocator}. If
2865 @var{free_allocator} is @code{omp_null_allocator}, the implementation
2866 automatically determines the allocator used for the allocation of @var{ptr}.
If @var{allocator} is @code{omp_null_allocator} and @var{ptr} is not a
null pointer, the same allocator as @var{free_allocator} is used;
when @var{ptr} is a null pointer, the allocator specified by the
2870 @var{def-allocator-var} ICV is used.
2872 The @var{size} must be a nonnegative number denoting the number of bytes to be
allocated; if @var{size} is zero, @code{omp_realloc} frees the
memory and returns a null pointer. When @var{size} is nonzero: if successful,
2875 a pointer to the allocated memory is returned, otherwise the @code{fallback}
2876 trait of the allocator determines the behavior.
2878 In @code{target} regions, either the @code{dynamic_allocators} clause must
appear on a @code{requires} directive in the same compilation unit, or the
@var{free_allocator} and @var{allocator} arguments may only be constant
expressions with the value of one of the predefined allocators and may not be
2882 @code{omp_null_allocator}.
2884 Memory allocated by @code{omp_realloc} must be freed using @code{omp_free}.
Calling @code{omp_free} invokes undefined behavior if the memory has already
been deallocated or if the allocator used for the allocation has already been
destroyed.
2889 @multitable @columnfractions .20 .80
2890 @item @emph{Prototype}: @tab @code{void* omp_realloc(void *ptr, size_t size,}
2891 @item @tab @code{ omp_allocator_handle_t allocator,}
2892 @item @tab @code{ omp_allocator_handle_t free_allocator)}
2896 @multitable @columnfractions .20 .80
2897 @item @emph{Prototype}: @tab @code{void* omp_realloc(void *ptr, size_t size,}
2898 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator,}
2899 @item @tab @code{ omp_allocator_handle_t free_allocator=omp_null_allocator)}
2902 @item @emph{Fortran}:
2903 @multitable @columnfractions .20 .80
2904 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_realloc(ptr, size, allocator, free_allocator) bind(C)}
2905 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2906 @item @tab @code{type(C_ptr), value :: ptr}
2907 @item @tab @code{integer (c_size_t), value :: size}
2908 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator, free_allocator}
2911 @item @emph{See also}:
2912 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2913 @ref{omp_free}, @ref{omp_init_allocator}
2915 @item @emph{Reference}:
2916 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.9
2921 @c @node Tool Control Routine
2925 @c @node Environment Display Routine
2926 @c @section Environment Display Routine
@c Routine to display the OpenMP version number and the initial value of ICVs.
@c It has C linkage and does not throw exceptions.
2932 @c * omp_display_env:: <fixme>
2935 @c ---------------------------------------------------------------------
2936 @c OpenMP Environment Variables
2937 @c ---------------------------------------------------------------------
2939 @node Environment Variables
2940 @chapter OpenMP Environment Variables
The environment variables beginning with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5 or in a later version
of the specification, while those beginning with @env{GOMP_} are GNU extensions.
Most @env{OMP_} environment variables have an associated internal control
variable (ICV).
2948 For any OpenMP environment variable that sets an ICV and is neither
2949 @code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
2950 device-specific environment variables exist. For them, the environment
2951 variable without suffix affects the host. The suffix @code{_DEV_} followed
by a non-negative device number less than the number of available devices sets
2953 the ICV for the corresponding device. The suffix @code{_DEV} sets the ICV
2954 of all non-host devices for which a device-specific corresponding environment
2955 variable has not been set while the @code{_ALL} suffix sets the ICV of all
2956 host and non-host devices for which a more specific corresponding environment
2957 variable is not set.
2960 * OMP_ALLOCATOR:: Set the default allocator
2961 * OMP_AFFINITY_FORMAT:: Set the format string used for affinity display
2962 * OMP_CANCELLATION:: Set whether cancellation is activated
2963 * OMP_DISPLAY_AFFINITY:: Display thread affinity information
2964 * OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
2965 * OMP_DEFAULT_DEVICE:: Set the device used in target regions
2966 * OMP_DYNAMIC:: Dynamic adjustment of threads
2967 * OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
2968 * OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
2969 * OMP_NESTED:: Nested parallel regions
2970 * OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
2971 * OMP_NUM_THREADS:: Specifies the number of threads to use
2972 * OMP_PROC_BIND:: Whether threads may be moved between CPUs
2973 * OMP_PLACES:: Specifies on which CPUs the threads should be placed
2974 * OMP_STACKSIZE:: Set default thread stack size
2975 * OMP_SCHEDULE:: How threads are scheduled
2976 * OMP_TARGET_OFFLOAD:: Controls offloading behavior
2977 * OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
2978 * OMP_THREAD_LIMIT:: Set the maximum number of threads
2979 * OMP_WAIT_POLICY:: How waiting threads are handled
2980 * GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
2981 * GOMP_DEBUG:: Enable debugging output
2982 * GOMP_STACKSIZE:: Set default thread stack size
2983 * GOMP_SPINCOUNT:: Set the busy-wait spin count
2984 * GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
2989 @section @env{OMP_ALLOCATOR} -- Set the default allocator
2990 @cindex Environment Variable
2992 @item @emph{ICV:} @var{def-allocator-var}
2993 @item @emph{Scope:} data environment
2994 @item @emph{Description}:
2995 Sets the default allocator that is used when no allocator has been specified
2996 in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
2997 routine is invoked with the @code{omp_null_allocator} allocator.
2998 If unset, @code{omp_default_mem_alloc} is used.
3000 The value can either be a predefined allocator or a predefined memory space
3001 or a predefined memory space followed by a colon and a comma-separated list
3002 of memory trait and value pairs, separated by @code{=}.
3004 Note: The corresponding device environment variables are currently not
3005 supported. Therefore, the non-host @var{def-allocator-var} ICVs are always
3006 initialized to @code{omp_default_mem_alloc}. However, on all devices,
the @code{omp_set_default_allocator} API routine can be used to change
the default allocator.
3010 @multitable @columnfractions .45 .45
3011 @headitem Predefined allocators @tab Associated predefined memory spaces
3012 @item omp_default_mem_alloc @tab omp_default_mem_space
3013 @item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
3014 @item omp_const_mem_alloc @tab omp_const_mem_space
3015 @item omp_high_bw_mem_alloc @tab omp_high_bw_mem_space
3016 @item omp_low_lat_mem_alloc @tab omp_low_lat_mem_space
3017 @item omp_cgroup_mem_alloc @tab omp_low_lat_mem_space (implementation defined)
3018 @item omp_pteam_mem_alloc @tab omp_low_lat_mem_space (implementation defined)
3019 @item omp_thread_mem_alloc @tab omp_low_lat_mem_space (implementation defined)
3022 The predefined allocators use the default values for the traits,
as listed below, except that the last three allocators have the
@code{access} trait set to @code{cgroup}, @code{pteam}, and
@code{thread}, respectively.
3027 @multitable @columnfractions .25 .40 .25
3028 @headitem Trait @tab Allowed values @tab Default value
3029 @item @code{sync_hint} @tab @code{contended}, @code{uncontended},
3030 @code{serialized}, @code{private}
3031 @tab @code{contended}
@item @code{alignment} @tab Positive integer being a power of two
@tab 1 byte
3034 @item @code{access} @tab @code{all}, @code{cgroup},
@code{pteam}, @code{thread}
@tab @code{all}
3037 @item @code{pool_size} @tab Positive integer
3038 @tab See @ref{Memory allocation}
3039 @item @code{fallback} @tab @code{default_mem_fb}, @code{null_fb},
@code{abort_fb}, @code{allocator_fb}
@tab See below
@item @code{fb_data} @tab @emph{unsupported as it needs an allocator handle}
@tab (none)
@item @code{pinned} @tab @code{true}, @code{false}
@tab @code{false}
3046 @item @code{partition} @tab @code{environment}, @code{nearest},
3047 @code{blocked}, @code{interleaved}
3048 @tab @code{environment}
3051 For the @code{fallback} trait, the default value is @code{null_fb} for the
3052 @code{omp_default_mem_alloc} allocator and any allocator that is associated
with device memory; for all other allocators, it is @code{default_mem_fb}.
3058 OMP_ALLOCATOR=omp_high_bw_mem_alloc
3059 OMP_ALLOCATOR=omp_large_cap_mem_space
3060 OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
3063 @item @emph{See also}:
3064 @ref{Memory allocation}, @ref{omp_get_default_allocator},
3065 @ref{omp_set_default_allocator}, @ref{Offload-Target Specifics}
3067 @item @emph{Reference}:
3068 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
3073 @node OMP_AFFINITY_FORMAT
3074 @section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
3075 @cindex Environment Variable
3077 @item @emph{ICV:} @var{affinity-format-var}
3078 @item @emph{Scope:} device
3079 @item @emph{Description}:
3080 Sets the format string used when displaying OpenMP thread affinity information.
3081 Special values are output using @code{%} followed by an optional size
3082 specification and then either the single-character field type or its long
3083 name enclosed in curly braces; using @code{%%} displays a literal percent.
3084 The size specification consists of an optional @code{0.} or @code{.} followed
3085 by a positive integer, specifying the minimal width of the output. With
3086 @code{0.} and numerical values, the output is padded with zeros on the left;
3087 with @code{.}, the output is padded by spaces on the left; otherwise, the
3088 output is padded by spaces on the right. If unset, the value is
3089 ``@code{level %L thread %i affinity %A}''.
3091 Supported field types are:
3093 @multitable @columnfractions .10 .25 .60
3094 @item t @tab team_num @tab value returned by @code{omp_get_team_num}
3095 @item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
3096 @item L @tab nesting_level @tab value returned by @code{omp_get_level}
3097 @item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
3098 @item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
3099 @item a @tab ancestor_tnum
3100 @tab value returned by
3101 @code{omp_get_ancestor_thread_num(omp_get_level()-1)}
3102 @item H @tab host @tab name of the host that executes the thread
3103 @item P @tab process_id @tab process identifier
3104 @item i @tab native_thread_id @tab native thread identifier
3105 @item A @tab thread_affinity
3106 @tab comma separated list of integer values or ranges, representing the
3107 processors on which a process might execute, subject to affinity
3111 For instance, after setting
3114 OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
3117 with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
3118 @code{omp_display_affinity} with @code{NULL} or an empty string, the program
3119 might display the following:
3122 00!0! 1!4; 0;01;0;1;0-11
3123 00!3! 1!4; 0;01;0;1;0-11
3124 00!2! 1!4; 0;01;0;1;0-11
3125 00!1! 1!4; 0;01;0;1;0-11
3128 @item @emph{See also}:
3129 @ref{OMP_DISPLAY_AFFINITY}
3131 @item @emph{Reference}:
3132 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
3137 @node OMP_CANCELLATION
3138 @section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
3139 @cindex Environment Variable
3141 @item @emph{ICV:} @var{cancel-var}
3142 @item @emph{Scope:} global
3143 @item @emph{Description}:
If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
3145 if unset, cancellation is disabled and the @code{cancel} construct is ignored.
3147 @item @emph{See also}:
3148 @ref{omp_get_cancellation}
3150 @item @emph{Reference}:
3151 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
3156 @node OMP_DISPLAY_AFFINITY
3157 @section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
3158 @cindex Environment Variable
3160 @item @emph{ICV:} @var{display-affinity-var}
3161 @item @emph{Scope:} global
3162 @item @emph{Description}:
3163 If set to @code{FALSE} or if unset, affinity displaying is disabled.
3164 If set to @code{TRUE}, the runtime displays affinity information about
OpenMP threads in a parallel region upon entering the region and every time
the thread affinity changes.
3168 @item @emph{See also}:
3169 @ref{OMP_AFFINITY_FORMAT}
3171 @item @emph{Reference}:
3172 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
3178 @node OMP_DISPLAY_ENV
3179 @section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
3180 @cindex Environment Variable
3182 @item @emph{ICV:} none
3183 @item @emph{Scope:} not applicable
3184 @item @emph{Description}:
3185 If set to @code{TRUE}, the OpenMP version number and the values
3186 associated with the OpenMP environment variables are printed to @code{stderr}.
3187 If set to @code{VERBOSE}, it additionally shows the value of the environment
3188 variables which are GNU extensions. If undefined or set to @code{FALSE},
3189 this information is not shown.
3192 @item @emph{Reference}:
3193 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
3198 @node OMP_DEFAULT_DEVICE
3199 @section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
3200 @cindex Environment Variable
3202 @item @emph{ICV:} @var{default-device-var}
3203 @item @emph{Scope:} data environment
3204 @item @emph{Description}:
3205 Set to choose the device which is used in a @code{target} region, unless the
3206 value is overridden by @code{omp_set_default_device} or by a @code{device}
3207 clause. The value shall be the nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset,
and if @env{OMP_TARGET_OFFLOAD} is @code{mandatory} and no non-host devices are
available, it is set to @code{omp_invalid_device}. Otherwise, if unset,
3211 device number 0 is used.
3214 @item @emph{See also}:
3215 @ref{omp_get_default_device}, @ref{omp_set_default_device},
3216 @ref{OMP_TARGET_OFFLOAD}
3218 @item @emph{Reference}:
3219 @uref{https://www.openmp.org, OpenMP specification v5.2}, Section 21.2.7
3225 @section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
3226 @cindex Environment Variable
3228 @item @emph{ICV:} @var{dyn-var}
3229 @item @emph{Scope:} global
3230 @item @emph{Description}:
3231 Enable or disable the dynamic adjustment of the number of threads
3232 within a team. The value of this environment variable shall be
3233 @code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
3234 disabled by default.
3236 @item @emph{See also}:
3237 @ref{omp_set_dynamic}
3239 @item @emph{Reference}:
3240 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
3245 @node OMP_MAX_ACTIVE_LEVELS
3246 @section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
3247 @cindex Environment Variable
3249 @item @emph{ICV:} @var{max-active-levels-var}
3250 @item @emph{Scope:} data environment
3251 @item @emph{Description}:
3252 Specifies the initial value for the maximum number of nested parallel
3253 regions. The value of this variable shall be a positive integer.
3254 If undefined, then if @env{OMP_NESTED} is defined and set to true, or
3255 if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
3256 a list with more than one item, the maximum number of nested parallel
regions is initialized to the largest number supported, otherwise to one.
3260 @item @emph{See also}:
3261 @ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
3262 @ref{OMP_NUM_THREADS}
3265 @item @emph{Reference}:
3266 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
3271 @node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum task priority value
3274 @cindex Environment Variable
3276 @item @emph{ICV:} @var{max-task-priority-var}
3277 @item @emph{Scope:} global
3278 @item @emph{Description}:
3279 Specifies the initial value for the maximum priority value that can be
3280 set for a task. The value of this variable shall be a non-negative
integer, and zero is allowed. If undefined, the default priority is 0.
3284 @item @emph{See also}:
3285 @ref{omp_get_max_task_priority}
3287 @item @emph{Reference}:
3288 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
3294 @section @env{OMP_NESTED} -- Nested parallel regions
3295 @cindex Environment Variable
3296 @cindex Implementation specific setting
3298 @item @emph{ICV:} @var{max-active-levels-var}
3299 @item @emph{Scope:} data environment
3300 @item @emph{Description}:
3301 Enable or disable nested parallel regions, i.e., whether team members
3302 are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of active nested regions is by default set to the maximum
supported, otherwise it is set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting overrides this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.
3311 Note that the @code{OMP_NESTED} environment variable was deprecated in
3312 the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.
3314 @item @emph{See also}:
3315 @ref{omp_set_max_active_levels}, @ref{omp_set_nested},
3316 @ref{OMP_MAX_ACTIVE_LEVELS}
3318 @item @emph{Reference}:
3319 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
3325 @section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
3326 @cindex Environment Variable
3328 @item @emph{ICV:} @var{nteams-var}
3329 @item @emph{Scope:} device
3330 @item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause. The value of this variable shall
be a positive integer. If undefined, it defaults to 0, which means an
implementation-defined upper bound.
3336 @item @emph{See also}:
3337 @ref{omp_set_num_teams}
3339 @item @emph{Reference}:
3340 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
3345 @node OMP_NUM_THREADS
3346 @section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
3347 @cindex Environment Variable
3348 @cindex Implementation specific setting
3350 @item @emph{ICV:} @var{nthreads-var}
3351 @item @emph{Scope:} data environment
3352 @item @emph{Description}:
3353 Specifies the default number of threads to use in parallel regions. The
3354 value of this variable shall be a comma-separated list of positive integers;
3355 the value specifies the number of threads to use for the corresponding nested
3356 level. Specifying more than one item in the list automatically enables
nesting by default. If undefined, one thread per CPU is used.
When a list with more than one value is specified, it also affects the
3360 @var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.
3362 @item @emph{See also}:
3363 @ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}
3365 @item @emph{Reference}:
3366 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
3372 @section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
3373 @cindex Environment Variable
3375 @item @emph{ICV:} @var{bind-var}
3376 @item @emph{Scope:} data environment
3377 @item @emph{Description}:
3378 Specifies whether threads may be moved between processors. If set to
3379 @code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
3380 they may be moved. Alternatively, a comma separated list with the
3381 values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
3382 be used to specify the thread affinity policy for the corresponding nesting
3383 level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
3384 same place partition as the primary thread. With @code{CLOSE} those are
3385 kept close to the primary thread in contiguous place partitions. And
3386 with @code{SPREAD} a sparse distribution
3387 across the place partitions is used. Specifying more than one item in the
3388 list automatically enables nesting by default.
3390 When a list is specified, it also affects the @var{max-active-levels-var} ICV
3391 as described in @ref{OMP_MAX_ACTIVE_LEVELS}.
3393 When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
3394 @env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
3396 @item @emph{See also}:
3397 @ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
3398 @ref{OMP_MAX_ACTIVE_LEVELS}
3400 @item @emph{Reference}:
3401 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
3407 @section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
3408 @cindex Environment Variable
3410 @item @emph{ICV:} @var{place-partition-var}
3411 @item @emph{Scope:} implicit tasks
3412 @item @emph{Description}:
3413 The thread placement can be either specified using an abstract name or by an
3414 explicit list of the places. The abstract names @code{threads}, @code{cores},
3415 @code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
followed by a positive number in parentheses, which denotes how many places
3417 shall be created. With @code{threads} each place corresponds to a single
3418 hardware thread; @code{cores} to a single core with the corresponding number of
3419 hardware threads; with @code{sockets} the place corresponds to a single
3420 socket; with @code{ll_caches} to a set of cores that shares the last level
3421 cache on the device; and @code{numa_domains} to a set of cores for which their
3422 closest memory on the device is the same memory and at a similar distance from
3423 the cores. The resulting placement can be shown by setting the
3424 @env{OMP_DISPLAY_ENV} environment variable.
Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
3428 braces, denoting the hardware threads. The curly braces can be omitted
3429 when only a single number has been specified. The hardware threads
3430 belonging to a place can either be specified as comma-separated list of
3431 nonnegative thread numbers or using an interval. Multiple places can also be
3432 either specified by a comma-separated list of places or by an interval. To
3433 specify an interval, a colon followed by the count is placed after
3434 the hardware thread number or the place. Optionally, the length can be
3435 followed by a colon and the stride number -- otherwise a unit stride is
3436 assumed. Placing an exclamation mark (@code{!}) directly before a curly
3437 brace or numbers inside the curly braces (excluding intervals)
3438 excludes those hardware threads.
3440 For instance, the following specifies the same places list:
3441 @code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
3442 @code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
3444 If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
3445 @env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
3446 between CPUs following no placement policy.
3448 @item @emph{See also}:
3449 @ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
3450 @ref{OMP_DISPLAY_ENV}
3452 @item @emph{Reference}:
3453 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable

@item @emph{ICV:} @var{stacksize-var}
@item @emph{Scope:} device
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes.  This is different from @code{pthread_attr_setstacksize},
which takes the number of bytes as an argument.  If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged.  If undefined, the stack size is system
dependent.
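As a quick illustration with hypothetical values, the following requests a
512-kilobyte default stack for each OpenMP thread; the two forms are
equivalent:

```shell
export OMP_STACKSIZE=512      # interpreted as kilobytes by default
export OMP_STACKSIZE=512K     # the same size, with an explicit suffix
```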
@item @emph{See also}:
@ref{GOMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{ICV:} @var{run-sched-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Allows specifying the @code{schedule type} and @code{chunk size}.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
The optional @code{chunk} size shall be a positive integer.  If undefined,
dynamic scheduling and a chunk size of 1 are used.
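For loops compiled with the @code{schedule(runtime)} clause, the schedule can
then be chosen at launch time.  A hypothetical invocation might look like:

```shell
# Use guided scheduling with a minimum chunk of 8 iterations for all
# loops in ./a.out that specify schedule(runtime).
OMP_SCHEDULE="guided,8" ./a.out
```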
@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
@node OMP_TARGET_OFFLOAD
@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behavior
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{ICV:} @var{target-offload-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the behavior with regard to offloading code to a device.  This
variable can be set to one of three values -- @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.

If set to @code{MANDATORY}, the program terminates with an error if
any device construct or device memory routine uses a device that is unavailable
or not supported by the implementation, or uses a non-conforming device number.
If set to @code{DISABLED}, offloading is disabled and all code runs on
the host.  If set to @code{DEFAULT}, the program tries offloading to the
device first, then falls back to running code on the host if it cannot.

If undefined, the program behaves as if @code{DEFAULT} was set.
Note: Even with @code{MANDATORY}, no run-time termination is performed when
the device number in a @code{device} clause or the argument to a device memory
routine refers to the host, which includes using such a device number in the
@var{default-device-var} ICV.  However, the initial value of
the @var{default-device-var} ICV is affected by @code{MANDATORY}.
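To make a missing accelerator an immediate, diagnosable failure instead of a
silent host fallback, the variable can be set per run (hypothetical program
name):

```shell
# Abort with an error message if target regions cannot be offloaded.
OMP_TARGET_OFFLOAD=MANDATORY ./offload_app
```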
@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.2}, Section 21.2.8
@node OMP_TEAMS_THREAD_LIMIT
@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
@cindex Environment Variable

@item @emph{ICV:} @var{teams-thread-limit-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies an upper bound for the number of threads to use by each contention
group created by a teams construct without an explicit @code{thread_limit}
clause.  The value of this variable shall be a positive integer.  If undefined,
the value of 0 is used, which stands for an implementation-defined upper
bound.
@item @emph{See also}:
@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable

@item @emph{ICV:} @var{thread-limit-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the number of threads to use for the whole program.  The
value of this variable shall be a positive integer.  If undefined,
the number of threads is not limited.

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable

@item @emph{Description}:
Specifies whether waiting threads should be active or passive.  If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting, while the value @code{ACTIVE} specifies that
they should.  If undefined, threads wait actively for a short time
before waiting passively.

@item @emph{See also}:
@ref{GOMP_SPINCOUNT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable

@item @emph{Description}:
Binds threads to specific CPUs.  The variable should contain a space-separated
or comma-separated list of CPUs.  This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N),
or a range with some stride (M-N:S).  CPU numbers are zero based.  For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} binds the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively, and then starts assigning back from the beginning of
the list.  @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
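For instance, on a hypothetical 8-core machine the following pins one OpenMP
thread to each even-numbered CPU:

```shell
# Bind four threads round-robin to CPUs 0, 2, 4 and 6 (stride 2).
export GOMP_CPU_AFFINITY="0-7:2"
export OMP_NUM_THREADS=4
```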
There is no libgomp library routine to determine whether a CPU affinity
specification is in effect.  As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable.  A CPU affinity defined at startup cannot be changed
or disabled during the runtime of the application.
If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence.  If neither has been set, or
when @env{OMP_PROC_BIND} is set to @code{FALSE}, the host system handles
the assignment of threads to CPUs.
@item @emph{See also}:
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
@node GOMP_DEBUG
@section @env{GOMP_DEBUG} -- Enable debugging output
@cindex Environment Variable

@item @emph{Description}:
Enable debugging output.  The variable should be set to @code{0}
(disabled, also the default if not set) or @code{1} (enabled).

If enabled, some debugging output is printed during execution.
This is currently not specified in more detail and is subject to change.
@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Set the default thread stack size in kilobytes.  This is different from
@code{pthread_attr_setstacksize}, which takes the number of bytes as an
argument.  If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged.  If undefined,
the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}
@item @emph{Reference}:
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches Mailinglist},
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches Mailinglist}
@node GOMP_SPINCOUNT
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power.  The value may be
either @code{INFINITE} or @code{INFINITY}, to always wait actively, or an
integer which gives the number of spins of the busy-wait loop.  The
integer may optionally be followed by the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined, and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1,000 and 100
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively, unless @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
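For latency-sensitive workloads, CPU usage can be traded for wake-up latency
explicitly.  A sketch with hypothetical values:

```shell
# Spin for ten million iterations before threads go to sleep.
export GOMP_SPINCOUNT=10M
export OMP_WAIT_POLICY=ACTIVE
```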
@item @emph{See also}:
@ref{OMP_WAIT_POLICY}
@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools.  The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:

@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}.  In case a priority
value is omitted, then a worker thread inherits the priority of the OpenMP
primary thread that created it.  The priority of the worker thread is not
changed after creation, even if a new OpenMP primary thread using the worker has
a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.

In case no thread pool configuration is specified for a scheduler instance,
each OpenMP primary thread of this scheduler instance uses its own
dynamically allocated thread pool.  To limit the worker thread count of the
thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
@item @emph{Example}:
Let us suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}.  Then there are no thread pool restrictions for
scheduler instance @code{IO}.  In the scheduler instance @code{WRK0} there is
one thread pool available.  Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP primary thread
that created it.  In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified.  This enables the OpenACC directive
@samp{#pragma acc} in C/C++ and, in Fortran, the @samp{!$acc} sentinel in free
source form and the @samp{c$acc}, @samp{*$acc} and @samp{!$acc} sentinels in
fixed source form.  The flag also arranges for automatic linking of the OpenACC
runtime library (@ref{OpenACC Runtime Library Routines}).

See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.

A complete description of all OpenACC directives accepted may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.6.
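As a minimal build sketch (hypothetical file names), a program using OpenACC
directives is compiled and linked in one step:

```shell
# Compile a C source file with OpenACC directives enabled; -fopenacc
# also arranges linking against the OpenACC runtime in libgomp.
gcc -fopenacc vector_add.c -o vector_add
```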
@c ---------------------------------------------------------------------
@c OpenACC Runtime Library Routines
@c ---------------------------------------------------------------------

@node OpenACC Runtime Library Routines
@chapter OpenACC Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenACC
specification, version 2.6.
They have C linkage and do not throw exceptions.
Generally, they are available only for the host, with the exception of
@code{acc_on_device}, which is available for both the host and the
acceleration device.
@menu
* acc_get_num_devices::         Get number of devices for the given device
                                type.
* acc_set_device_type::         Set type of device accelerator to use.
* acc_get_device_type::         Get type of device accelerator to be used.
* acc_set_device_num::          Set device number to use.
* acc_get_device_num::          Get device number to be used.
* acc_get_property::            Get device property.
* acc_async_test::              Tests for completion of a specific asynchronous
                                operation.
* acc_async_test_all::          Tests for completion of all asynchronous
                                operations.
* acc_wait::                    Wait for completion of a specific asynchronous
                                operation.
* acc_wait_all::                Waits for completion of all asynchronous
                                operations.
* acc_wait_all_async::          Wait for completion of all asynchronous
                                operations.
* acc_wait_async::              Wait for completion of asynchronous operations.
* acc_init::                    Initialize runtime for a specific device type.
* acc_shutdown::                Shuts down the runtime for a specific device
                                type.
* acc_on_device::               Whether executing on a particular device
* acc_malloc::                  Allocate device memory.
* acc_free::                    Free device memory.
* acc_copyin::                  Allocate device memory and copy host memory to
                                it.
* acc_present_or_copyin::       If the data is not present on the device,
                                allocate device memory and copy from host
                                memory.
* acc_create::                  Allocate device memory and map it to host
                                memory.
* acc_present_or_create::       If the data is not present on the device,
                                allocate device memory and map it to host
                                memory.
* acc_copyout::                 Copy device memory to host memory.
* acc_delete::                  Free device memory.
* acc_update_device::           Update device memory from mapped host memory.
* acc_update_self::             Update host memory from mapped device memory.
* acc_map_data::                Map previously allocated device memory to host
                                memory.
* acc_unmap_data::              Unmap device memory from host memory.
* acc_deviceptr::               Get device pointer associated with specific
                                host address.
* acc_hostptr::                 Get host pointer associated with specific
                                device address.
* acc_is_present::              Indicate whether host variable / array is
                                present on device.
* acc_memcpy_to_device::        Copy host memory to device memory.
* acc_memcpy_from_device::      Copy device memory to host memory.
* acc_attach::                  Let device pointer point to device-pointer target.
* acc_detach::                  Let device pointer point to host-pointer target.

API routines for target platforms.

* acc_get_current_cuda_device:: Get CUDA device handle.
* acc_get_current_cuda_context:: Get CUDA context handle.
* acc_get_cuda_stream::         Get CUDA stream handle.
* acc_set_cuda_stream::         Set CUDA stream handle.

API routines for the OpenACC Profiling Interface.

* acc_prof_register::           Register callbacks.
* acc_prof_unregister::         Unregister callbacks.
* acc_prof_lookup::             Obtain inquiry functions.
* acc_register_library::        Library registration.
@end menu
@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type

@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.

@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_type(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_get_device_type
@section @code{acc_get_device_type} -- Get type of device accelerator to be used.

@item @emph{Description}
This function returns the device type that will be used when executing a
parallel or kernels region.

This function returns @code{acc_device_none} if
@code{acc_get_device_type} is called from the
@code{acc_ev_device_init_start} or @code{acc_ev_device_init_end}
callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
Interface}), that is, if the device is currently being initialized.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_type()}
@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_set_device_num
@section @code{acc_set_device_num} -- Set device number to use.

@item @emph{Description}
This function indicates to the runtime which device to use, identified by
the device number @var{devicenum} of the specified device type
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_num(int devicenum, acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_get_device_num
@section @code{acc_get_device_num} -- Get device number to be used.

@item @emph{Description}
This function returns the device number of the specified device type
@var{devicetype} that will be used when executing a parallel or kernels
region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer acc_get_device_num}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_get_property
@section @code{acc_get_property} -- Get device property.
@cindex acc_get_property
@cindex acc_get_property_string

@item @emph{Description}
These routines return the value of the specified @var{property} for the
device being queried according to @var{devicenum} and @var{devicetype}.
Integer-valued and string-valued properties are returned by
@code{acc_get_property} and @code{acc_get_property_string} respectively.
The Fortran @code{acc_get_property_string} subroutine returns the string
retrieved in its fourth argument, while the remaining entry points are
functions, which pass the return value as their result.

Note, for Fortran only: the OpenACC technical committee corrected and, hence,
modified the interface introduced in OpenACC 2.6.  The kind-value parameter
@code{acc_device_property} has been renamed to @code{acc_device_property_kind}
for consistency and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
The parameter @code{acc_device_property} is still provided,
but might be removed in a future version of GCC.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
@item @tab @code{use ISO_C_Binding, only: c_size_t}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer(kind=acc_device_property_kind) property}
@item @tab @code{integer(kind=c_size_t) acc_get_property}
@item @tab @code{character(*) string}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_async_test
@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.

@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}.  In C/C++, a non-zero value is returned to indicate that
the specified asynchronous operation has completed, while Fortran returns
@code{true}.  If the asynchronous operation has not completed, C/C++ returns
zero and Fortran returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
@item @tab @code{integer(kind=acc_handle_kind) arg}
@item @tab @code{logical acc_async_test}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_async_test_all
@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.

@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed, while Fortran returns @code{true}.  If
any asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_wait
@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.

@item @emph{Description}
This function waits for completion of the asynchronous operation
specified in @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait(int arg);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_wait_all
@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.

@item @emph{Description}
This function waits for the completion of all asynchronous operations.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all(void);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{void acc_async_wait_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_wait_all_async
@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.

@item @emph{Description}
This function enqueues a wait operation on the queue @var{async} for any
and all asynchronous operations that have been previously enqueued on
any queue.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_all_async(int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
@item @tab @code{integer(acc_handle_kind) async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_wait_async
@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.

@item @emph{Description}
This function enqueues a wait operation on queue @var{async} for any and all
asynchronous operations enqueued on queue @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_wait_async(int arg, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
@item @tab @code{integer(acc_handle_kind) arg, async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_init
@section @code{acc_init} -- Initialize runtime for a specific device type.

@item @emph{Description}
This function initializes the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_init(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_shutdown
@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.

@item @emph{Description}
This function shuts down the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_shutdown(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_on_device
@section @code{acc_on_device} -- Whether executing on a particular device

@item @emph{Description}:
This function returns whether the program is executing on a particular
device specified in @var{devicetype}.  In C/C++, a non-zero value is
returned to indicate that the program is executing on the specified device
type; in Fortran, @code{true} is returned.  If the program is not executing
on the specified device type, C/C++ returns zero, while Fortran
returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_on_device(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@item @tab @code{logical acc_on_device}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_malloc
@section @code{acc_malloc} -- Allocate device memory.

@item @emph{Description}
This function allocates @var{len} bytes of device memory.  It returns
the device address of the allocated memory.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_free
@section @code{acc_free} -- Free device memory.

@item @emph{Description}
Free previously allocated device memory at the device address @code{a}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_free(d_void *a);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_copyin
@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.

@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory
and maps it to the specified host address in @var{a}.  The device
address of the newly allocated device memory is returned.

In Fortran, two forms are supported.  In the first form, @var{a} specifies
a contiguous array section.  In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4350 @node acc_present_or_copyin
4351 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
4353 @item @emph{Description}
4354 This function tests if the host data specified by @var{a} and of length
4355 @var{len} is present or not. If it is not present, device memory
4356 is allocated and the host memory copied. The device address of
4357 the newly allocated device memory is returned.
4359 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4360 a contiguous array section.  In the second form, @var{a} specifies a variable
4361 or array element and @var{len} specifies the length in bytes.
4363 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
4364 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
4367 @multitable @columnfractions .20 .80
4368 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
4369 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
4372 @item @emph{Fortran}:
4373 @multitable @columnfractions .20 .80
4374 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
4375 @item @tab @code{type, dimension(:[,:]...) :: a}
4376 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
4377 @item @tab @code{type, dimension(:[,:]...) :: a}
4378 @item @tab @code{integer len}
4379 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
4380 @item @tab @code{type, dimension(:[,:]...) :: a}
4381 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
4382 @item @tab @code{type, dimension(:[,:]...) :: a}
4383 @item @tab @code{integer len}
4386 @item @emph{Reference}:
4387 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4394 @section @code{acc_create} -- Allocate device memory and map it to host memory.
4396 @item @emph{Description}
4397 This function allocates device memory and maps it to host memory specified
4398 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
4399 the function returns the device address of the allocated device memory.
4401 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4402 a contiguous array section.  In the second form, @var{a} specifies a variable
4403 or array element and @var{len} specifies the length in bytes.
4406 @multitable @columnfractions .20 .80
4407 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
4408 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
4411 @item @emph{Fortran}:
4412 @multitable @columnfractions .20 .80
4413 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
4414 @item @tab @code{type, dimension(:[,:]...) :: a}
4415 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
4416 @item @tab @code{type, dimension(:[,:]...) :: a}
4417 @item @tab @code{integer len}
4418 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
4419 @item @tab @code{type, dimension(:[,:]...) :: a}
4420 @item @tab @code{integer(acc_handle_kind) :: async}
4421 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
4422 @item @tab @code{type, dimension(:[,:]...) :: a}
4423 @item @tab @code{integer len}
4424 @item @tab @code{integer(acc_handle_kind) :: async}
4427 @item @emph{Reference}:
4428 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4434 @node acc_present_or_create
4435 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
4437 @item @emph{Description}
4438 This function tests if the host data specified by @var{a} and of length
4439 @var{len} is present or not. If it is not present, device memory
4440 is allocated and mapped to host memory. In C/C++, the device address
4441 of the newly allocated device memory is returned.
4443 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4444 a contiguous array section.  In the second form, @var{a} specifies a variable
4445 or array element and @var{len} specifies the length in bytes.
4447 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
4448 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
4451 @multitable @columnfractions .20 .80
4452 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
4453 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
4456 @item @emph{Fortran}:
4457 @multitable @columnfractions .20 .80
4458 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
4459 @item @tab @code{type, dimension(:[,:]...) :: a}
4460 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
4461 @item @tab @code{type, dimension(:[,:]...) :: a}
4462 @item @tab @code{integer len}
4463 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
4464 @item @tab @code{type, dimension(:[,:]...) :: a}
4465 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
4466 @item @tab @code{type, dimension(:[,:]...) :: a}
4467 @item @tab @code{integer len}
4470 @item @emph{Reference}:
4471 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4478 @section @code{acc_copyout} -- Copy device memory to host memory.
4480 @item @emph{Description}
4481 In C/C++, this function copies mapped device memory to host memory, which
4482 is specified by the host address @var{a}, for a length of @var{len} bytes.
4484 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4485 a contiguous array section.  In the second form, @var{a} specifies a variable
4486 or array element and @var{len} specifies the length in bytes.
4489 @multitable @columnfractions .20 .80
4490 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
4491 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
4492 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
4493 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
4496 @item @emph{Fortran}:
4497 @multitable @columnfractions .20 .80
4498 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
4499 @item @tab @code{type, dimension(:[,:]...) :: a}
4500 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
4501 @item @tab @code{type, dimension(:[,:]...) :: a}
4502 @item @tab @code{integer len}
4503 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
4504 @item @tab @code{type, dimension(:[,:]...) :: a}
4505 @item @tab @code{integer(acc_handle_kind) :: async}
4506 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
4507 @item @tab @code{type, dimension(:[,:]...) :: a}
4508 @item @tab @code{integer len}
4509 @item @tab @code{integer(acc_handle_kind) :: async}
4510 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
4511 @item @tab @code{type, dimension(:[,:]...) :: a}
4512 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
4513 @item @tab @code{type, dimension(:[,:]...) :: a}
4514 @item @tab @code{integer len}
4515 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
4516 @item @tab @code{type, dimension(:[,:]...) :: a}
4517 @item @tab @code{integer(acc_handle_kind) :: async}
4518 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
4519 @item @tab @code{type, dimension(:[,:]...) :: a}
4520 @item @tab @code{integer len}
4521 @item @tab @code{integer(acc_handle_kind) :: async}
4524 @item @emph{Reference}:
4525 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4532 @section @code{acc_delete} -- Free device memory.
4534 @item @emph{Description}
4535 This function frees previously allocated device memory, specified by
4536 the host address @var{a} and a length of @var{len} bytes.
4538 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4539 a contiguous array section.  In the second form, @var{a} specifies a variable
4540 or array element and @var{len} specifies the length in bytes.
4543 @multitable @columnfractions .20 .80
4544 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
4545 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
4546 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
4547 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
4550 @item @emph{Fortran}:
4551 @multitable @columnfractions .20 .80
4552 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
4553 @item @tab @code{type, dimension(:[,:]...) :: a}
4554 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
4555 @item @tab @code{type, dimension(:[,:]...) :: a}
4556 @item @tab @code{integer len}
4557 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
4558 @item @tab @code{type, dimension(:[,:]...) :: a}
4559 @item @tab @code{integer(acc_handle_kind) :: async}
4560 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
4561 @item @tab @code{type, dimension(:[,:]...) :: a}
4562 @item @tab @code{integer len}
4563 @item @tab @code{integer(acc_handle_kind) :: async}
4564 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
4565 @item @tab @code{type, dimension(:[,:]...) :: a}
4566 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
4567 @item @tab @code{type, dimension(:[,:]...) :: a}
4568 @item @tab @code{integer len}
4569 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
4570 @item @tab @code{type, dimension(:[,:]...) :: a}
4571 @item @tab @code{integer(acc_handle_kind) :: async}
4572 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
4573 @item @tab @code{type, dimension(:[,:]...) :: a}
4574 @item @tab @code{integer len}
4575 @item @tab @code{integer(acc_handle_kind) :: async}
4578 @item @emph{Reference}:
4579 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4585 @node acc_update_device
4586 @section @code{acc_update_device} -- Update device memory from mapped host memory.
4588 @item @emph{Description}
4589 This function updates the device copy from the previously mapped host memory.
4590 The host memory is specified with the host address @var{a} and a length of
4591 @var{len} bytes.
4593 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4594 a contiguous array section.  In the second form, @var{a} specifies a variable
4595 or array element and @var{len} specifies the length in bytes.
4598 @multitable @columnfractions .20 .80
4599 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
4600 @item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
4603 @item @emph{Fortran}:
4604 @multitable @columnfractions .20 .80
4605 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
4606 @item @tab @code{type, dimension(:[,:]...) :: a}
4607 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
4608 @item @tab @code{type, dimension(:[,:]...) :: a}
4609 @item @tab @code{integer len}
4610 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
4611 @item @tab @code{type, dimension(:[,:]...) :: a}
4612 @item @tab @code{integer(acc_handle_kind) :: async}
4613 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
4614 @item @tab @code{type, dimension(:[,:]...) :: a}
4615 @item @tab @code{integer len}
4616 @item @tab @code{integer(acc_handle_kind) :: async}
4619 @item @emph{Reference}:
4620 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4626 @node acc_update_self
4627 @section @code{acc_update_self} -- Update host memory from mapped device memory.
4629 @item @emph{Description}
4630 This function updates the host copy from the previously mapped device memory.
4631 The host memory is specified with the host address @var{a} and a length of
4632 @var{len} bytes.
4634 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4635 a contiguous array section.  In the second form, @var{a} specifies a variable
4636 or array element and @var{len} specifies the length in bytes.
4639 @multitable @columnfractions .20 .80
4640 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
4641 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
4644 @item @emph{Fortran}:
4645 @multitable @columnfractions .20 .80
4646 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
4647 @item @tab @code{type, dimension(:[,:]...) :: a}
4648 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
4649 @item @tab @code{type, dimension(:[,:]...) :: a}
4650 @item @tab @code{integer len}
4651 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
4652 @item @tab @code{type, dimension(:[,:]...) :: a}
4653 @item @tab @code{integer(acc_handle_kind) :: async}
4654 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
4655 @item @tab @code{type, dimension(:[,:]...) :: a}
4656 @item @tab @code{integer len}
4657 @item @tab @code{integer(acc_handle_kind) :: async}
4660 @item @emph{Reference}:
4661 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4668 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
4670 @item @emph{Description}
4671 This function maps previously allocated device and host memory. The device
4672 memory is specified with the device address @var{d}. The host memory is
4673 specified with the host address @var{h} and a length of @var{len}.
4676 @multitable @columnfractions .20 .80
4677 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
4680 @item @emph{Reference}:
4681 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4687 @node acc_unmap_data
4688 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
4690 @item @emph{Description}
4691 This function unmaps previously mapped device and host memory.  The latter
4692 is specified by @var{h}.
4695 @multitable @columnfractions .20 .80
4696 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
4699 @item @emph{Reference}:
4700 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4707 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
4709 @item @emph{Description}
4710 This function returns the device address that has been mapped to the
4711 host address specified by @var{h}.
4714 @multitable @columnfractions .20 .80
4715 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
4718 @item @emph{Reference}:
4719 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4726 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
4728 @item @emph{Description}
4729 This function returns the host address that has been mapped to the
4730 device address specified by @var{d}.
4733 @multitable @columnfractions .20 .80
4734 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
4737 @item @emph{Reference}:
4738 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4744 @node acc_is_present
4745 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
4747 @item @emph{Description}
4748 This function indicates whether the specified host address in @var{a} and a
4749 length of @var{len} bytes is present on the device. In C/C++, a non-zero
4750 value is returned to indicate the presence of the mapped memory on the
4751 device.  A zero is returned to indicate the memory is not mapped on the
4752 device.
4754 In Fortran, two forms are supported.  In the first form, @var{a} specifies
4755 a contiguous array section.  In the second form, @var{a} specifies a variable
4756 or array element and @var{len} specifies the length in bytes.  If the host
4757 memory is mapped to device memory, @code{true} is returned.  Otherwise,
4758 @code{false} is returned to indicate the mapped memory is not present.
4761 @multitable @columnfractions .20 .80
4762 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
4765 @item @emph{Fortran}:
4766 @multitable @columnfractions .20 .80
4767 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
4768 @item @tab @code{type, dimension(:[,:]...) :: a}
4769 @item @tab @code{logical acc_is_present}
4770 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
4771 @item @tab @code{type, dimension(:[,:]...) :: a}
4772 @item @tab @code{integer len}
4773 @item @tab @code{logical acc_is_present}
4776 @item @emph{Reference}:
4777 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4783 @node acc_memcpy_to_device
4784 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
4786 @item @emph{Description}
4787 This function copies host memory, specified by the host address @var{src},
4788 to device memory, specified by the device address @var{dest}, for a length
4789 of @var{bytes} bytes.
4792 @multitable @columnfractions .20 .80
4793 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
4796 @item @emph{Reference}:
4797 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4803 @node acc_memcpy_from_device
4804 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
4806 @item @emph{Description}
4807 This function copies device memory, specified by the device address
4808 @var{src}, to host memory, specified by the host address @var{dest}, for
4809 a length of @var{bytes} bytes.
4812 @multitable @columnfractions .20 .80
4813 @item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
4816 @item @emph{Reference}:
4817 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4824 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
4826 @item @emph{Description}
4827 This function updates a pointer on the device from pointing to a host-pointer
4828 address to pointing to the corresponding device data.
4831 @multitable @columnfractions .20 .80
4832 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
4833 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
4836 @item @emph{Reference}:
4837 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4844 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
4846 @item @emph{Description}
4847 This function updates a pointer on the device from pointing to a device-pointer
4848 address to pointing to the corresponding host data.
4851 @multitable @columnfractions .20 .80
4852 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
4853 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
4854 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
4855 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
4858 @item @emph{Reference}:
4859 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4865 @node acc_get_current_cuda_device
4866 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
4868 @item @emph{Description}
4869 This function returns the CUDA device handle. This handle is the same
4870 as used by the CUDA Runtime or Driver APIs.
4873 @multitable @columnfractions .20 .80
4874 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
4877 @item @emph{Reference}:
4878 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4884 @node acc_get_current_cuda_context
4885 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
4887 @item @emph{Description}
4888 This function returns the CUDA context handle. This handle is the same
4889 as used by the CUDA Runtime or Driver APIs.
4892 @multitable @columnfractions .20 .80
4893 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
4896 @item @emph{Reference}:
4897 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4903 @node acc_get_cuda_stream
4904 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
4906 @item @emph{Description}
4907 This function returns the CUDA stream handle for the queue @var{async}.
4908 This handle is the same as used by the CUDA Runtime or Driver APIs.
4911 @multitable @columnfractions .20 .80
4912 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
4915 @item @emph{Reference}:
4916 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4922 @node acc_set_cuda_stream
4923 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
4925 @item @emph{Description}
4926 This function associates the stream handle specified by @var{stream} with
4927 the queue @var{async}.
4929 This cannot be used to change the stream handle associated with
4930 @code{acc_async_sync}.
4932 The return value is not specified.
4935 @multitable @columnfractions .20 .80
4936 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
4939 @item @emph{Reference}:
4940 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4946 @node acc_prof_register
4947 @section @code{acc_prof_register} -- Register callbacks.
4949 @item @emph{Description}:
4950 This function registers callbacks.
4953 @multitable @columnfractions .20 .80
4954 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
4957 @item @emph{See also}:
4958 @ref{OpenACC Profiling Interface}
4960 @item @emph{Reference}:
4961 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4967 @node acc_prof_unregister
4968 @section @code{acc_prof_unregister} -- Unregister callbacks.
4970 @item @emph{Description}:
4971 This function unregisters callbacks.
4974 @multitable @columnfractions .20 .80
4975 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
4978 @item @emph{See also}:
4979 @ref{OpenACC Profiling Interface}
4981 @item @emph{Reference}:
4982 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4988 @node acc_prof_lookup
4989 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
4991 @item @emph{Description}:
4992 Function to obtain inquiry functions.
4995 @multitable @columnfractions .20 .80
4996 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
4999 @item @emph{See also}:
5000 @ref{OpenACC Profiling Interface}
5002 @item @emph{Reference}:
5003 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5009 @node acc_register_library
5010 @section @code{acc_register_library} -- Library registration.
5012 @item @emph{Description}:
5013 Function for library registration.
5016 @multitable @columnfractions .20 .80
5017 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
5020 @item @emph{See also}:
5021 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
5023 @item @emph{Reference}:
5024 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5030 @c ---------------------------------------------------------------------
5031 @c OpenACC Environment Variables
5032 @c ---------------------------------------------------------------------
5034 @node OpenACC Environment Variables
5035 @chapter OpenACC Environment Variables
5037 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
5038 are defined by section 4 of the OpenACC specification in version 2.0.
5039 The variable @env{ACC_PROFLIB}
5040 is defined by section 4 of the OpenACC specification in version 2.6.
5050 @node ACC_DEVICE_TYPE
5051 @section @code{ACC_DEVICE_TYPE}
5053 @item @emph{Description}:
5054 Control the default device type to use when executing compute regions.
5055 If unset, the code can be run on any device type, favoring a non-host
5056 device type if available.
5058 Supported values in GCC (if compiled in) are
5064 @item @emph{Reference}:
5065 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5071 @node ACC_DEVICE_NUM
5072 @section @code{ACC_DEVICE_NUM}
5074 @item @emph{Description}:
5075 Control which device, identified by device number, is the default device.
5076 The value must be a nonnegative integer less than the number of devices.
5077 If unset, device number zero is used.
5078 @item @emph{Reference}:
5079 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5086 @section @code{ACC_PROFLIB}
5088 @item @emph{Description}:
5089 Semicolon-separated list of dynamic libraries that are loaded as profiling
5090 libraries. Each library must provide at least the @code{acc_register_library}
5091 routine. Each library file is found as described by the documentation of
5092 @code{dlopen} of your operating system.
5093 @item @emph{See also}:
5094 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
5096 @item @emph{Reference}:
5097 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5103 @c ---------------------------------------------------------------------
5104 @c CUDA Streams Usage
5105 @c ---------------------------------------------------------------------
5107 @node CUDA Streams Usage
5108 @chapter CUDA Streams Usage
5110 This applies to the @code{nvptx} plugin only.
5112 The library provides elements that perform asynchronous movement of
5113 data and asynchronous operation of computing constructs. This
5114 asynchronous functionality is implemented by making use of CUDA
5115 streams@footnote{See "Stream Management" in "CUDA Driver API",
5116 TRM-06703-001, Version 5.5, for additional information}.
5118 The primary means by which the asynchronous functionality is accessed
5119 is through those OpenACC directives that make use of the
5120 @code{async} and @code{wait} clauses.  When the @code{async} clause is
5121 first used with a directive, it creates a CUDA stream. If an
5122 @code{async-argument} is used with the @code{async} clause, then the
5123 stream is associated with the specified @code{async-argument}.
5125 Following the creation of an association between a CUDA stream and the
5126 @code{async-argument} of an @code{async} clause, both the @code{wait}
5127 clause and the @code{wait} directive can be used. When either the
5128 clause or directive is used after stream creation, it creates a
5129 rendezvous point whereby execution waits until all operations
5130 associated with the @code{async-argument}, that is, the stream, have
5131 completed.
5133 Normally, the management of the streams that are created as a result of
5134 using the @code{async} clause is done without any intervention by the
5135 caller. This implies the association between the @code{async-argument}
5136 and the CUDA stream is maintained for the lifetime of the program.
5137 However, this association can be changed through the use of the library
5138 function @code{acc_set_cuda_stream}. When the function
5139 @code{acc_set_cuda_stream} is called, the CUDA stream that was
5140 originally associated with the @code{async} clause is destroyed.
5141 Caution should be taken when changing the association, as subsequent
5142 references to the @code{async-argument} then refer to a different
5143 stream.
5147 @c ---------------------------------------------------------------------
5148 @c OpenACC Library Interoperability
5149 @c ---------------------------------------------------------------------
5151 @node OpenACC Library Interoperability
5152 @chapter OpenACC Library Interoperability
5154 @section Introduction
5156 The OpenACC library uses the CUDA Driver API, and may interact with
5157 programs that use the Runtime library directly, or another library
5158 based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
5159 "Interactions with the CUDA Driver API" in
5160 "CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
5161 Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
5162 for additional information on library interoperability.}.
5163 This chapter describes the use cases and what changes are
5164 required in order to use both the OpenACC library and the CUBLAS and Runtime
5165 libraries within a program.
5167 @section First invocation: NVIDIA CUBLAS library API
5169 In this first use case (see below), a function in the CUBLAS library is called
5170 prior to any of the functions in the OpenACC library. More specifically, the
5171 function @code{cublasCreate()}.
5173 When invoked, the function initializes the library and allocates the
5174 hardware resources on the host and the device on behalf of the caller. Once
5175 the initialization and allocation have completed, a handle is returned to the
5176 caller. The OpenACC library also requires initialization and allocation of
5177 hardware resources. Since the CUBLAS library has already allocated the
5178 hardware resources for the device, all that is left to do is to initialize
5179 the OpenACC library and acquire the hardware resources on the host.
5181 Prior to calling the OpenACC function that initializes the library and
5182 allocates the host hardware resources, you need to acquire the device number
5183 that was allocated during the call to @code{cublasCreate()}.  Invoking the
5184 runtime library function @code{cudaGetDevice()} accomplishes this.  Once
5185 acquired, the device number is passed along with the device type as
5186 parameters to the OpenACC library function @code{acc_set_device_num()}.
5188 Once the call to @code{acc_set_device_num()} has completed, the OpenACC
5189 library uses the context that was created during the call to
5190 @code{cublasCreate()}.  In other words, both libraries share the
5191 same CUDA context.
5194 /* Create the handle */
5195 s = cublasCreate(&h);
5196 if (s != CUBLAS_STATUS_SUCCESS)
5198 fprintf(stderr, "cublasCreate failed %d\n", s);
5202 /* Get the device number */
5203 e = cudaGetDevice(&dev);
5204 if (e != cudaSuccess)
5206 fprintf(stderr, "cudaGetDevice failed %d\n", e);
5210 /* Initialize OpenACC library and use device 'dev' */
5211 acc_set_device_num(dev, acc_device_nvidia);
5216 @section First invocation: OpenACC library API
5218 In this second use case (see below), a function in the OpenACC library is
5219 called prior to any of the functions in the CUBLAS library. More specifically,
5220 the function @code{acc_set_device_num()}.
5222 In the use case presented here, the function @code{acc_set_device_num()}
5223 is used to both initialize the OpenACC library and allocate the hardware
5224 resources on the host and the device. In the call to the function, the
5225 call parameters specify which device to use and what device
5226 type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
5227 is but one method to initialize the OpenACC library and allocate the
5228 appropriate hardware resources. Other methods are available through the
5229 use of environment variables, and these are discussed in the next section.
5231 Once the call to @code{acc_set_device_num()} has completed, other OpenACC
5232 functions can be called as seen with multiple calls being made to
5233 @code{acc_copyin()}. In addition, calls can be made to functions in the
5234 CUBLAS library.  In this use case, a call to @code{cublasCreate()} is made
5235 subsequent to the calls to @code{acc_copyin()}.
5236 As seen in the previous use case, a call to @code{cublasCreate()}
5237 initializes the CUBLAS library and allocates the hardware resources on the
5238 host and the device. However, since the device has already been allocated,
5239 @code{cublasCreate()} only initializes the CUBLAS library and allocates
5240 the appropriate hardware resources on the host. The context that was created
5241 as part of the OpenACC initialization is shared with the CUBLAS library,
5242 similarly to the first use case.
@smallexample
acc_set_device_num(dev, acc_device_nvidia);

/* Copy the first set to the device */
d_X = acc_copyin(&h_X[0], N * sizeof (float));
if (d_X == NULL)
  @{
    fprintf(stderr, "copyin error h_X\n");
    exit(EXIT_FAILURE);
  @}

/* Copy the second set to the device */
d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
if (d_Y == NULL)
  @{
    fprintf(stderr, "copyin error h_Y1\n");
    exit(EXIT_FAILURE);
  @}

/* Create the handle */
s = cublasCreate(&h);
if (s != CUBLAS_STATUS_SUCCESS)
  @{
    fprintf(stderr, "cublasCreate failed %d\n", s);
    exit(EXIT_FAILURE);
  @}

/* Perform saxpy using CUBLAS library function */
s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
if (s != CUBLAS_STATUS_SUCCESS)
  @{
    fprintf(stderr, "cublasSaxpy failed %d\n", s);
    exit(EXIT_FAILURE);
  @}

/* Copy the results from the device */
acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
@end smallexample
5287 @section OpenACC library and environment variables
5289 There are two environment variables associated with the OpenACC library
5290 that may be used to control the device type and device number:
5291 @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
5292 environment variables can be used as an alternative to calling
5293 @code{acc_set_device_num()}. As seen in the second use case, the device
5294 type and device number were specified using @code{acc_set_device_num()}.
5295 If, however, the aforementioned environment variables are set, then the
5296 call to @code{acc_set_device_num()} is not required.
5299 The use of the environment variables is only relevant when an OpenACC function
5300 is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
5301 is called prior to a call to an OpenACC function, then you must call
5302 @code{acc_set_device_num()}.@footnote{More complete information
5303 about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
5304 sections 4.1 and 4.2 of the @uref{https://www.openacc.org, ``OpenACC
5305 Application Programming Interface''}, Version 2.6.}
5309 @c ---------------------------------------------------------------------
5310 @c OpenACC Profiling Interface
5311 @c ---------------------------------------------------------------------
5313 @node OpenACC Profiling Interface
5314 @chapter OpenACC Profiling Interface
5316 @section Implementation Status and Implementation-Defined Behavior
5318 We're implementing the OpenACC Profiling Interface as defined by the
5319 OpenACC 2.6 specification. We're clarifying some aspects here as
5320 @emph{implementation-defined behavior}, while they're still under
5321 discussion within the OpenACC Technical Committee.
5323 This implementation is tuned to keep the performance impact as low as
5324 possible for the (very common) case that the Profiling Interface is
5325 not enabled. This is relevant, as the Profiling Interface affects all
5326 the @emph{hot} code paths (in the target code, not in the offloaded
5327 code). Users of the OpenACC Profiling Interface can be expected to
5328 understand that performance is impacted to some degree once the
5329 Profiling Interface is enabled: for example, because of the
5330 @emph{runtime} (libgomp) calling into a third-party @emph{library} for
5331 every event that has been registered.
5333 We're not yet accounting for the fact that @cite{OpenACC events may
5334 occur during event processing}.
5335 We just handle one case specially, as required by CUDA 9.0
5336 @command{nvprof}: @code{acc_get_device_type}
5337 (@ref{acc_get_device_type}) may be called from
5338 @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end} callbacks.
5341 We're not yet implementing initialization via an
5342 @code{acc_register_library} function that is either statically linked
5343 in or dynamically loaded via @env{LD_PRELOAD}.
5344 Initialization via @code{acc_register_library} functions dynamically
5345 loaded via the @env{ACC_PROFLIB} environment variable does work, as
5346 does directly calling @code{acc_prof_register},
5347 @code{acc_prof_unregister}, @code{acc_prof_lookup}.
5349 As currently there are no inquiry functions defined, calls to
5350 @code{acc_prof_lookup} always return @code{NULL}.
5352 There aren't separate @emph{start} and @emph{stop} events defined for the
5353 event types @code{acc_ev_create}, @code{acc_ev_delete},
5354 @code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
5355 should be triggered before or after the actual device-specific call is
5356 made. We trigger them after.
5358 Remarks about data provided to callbacks:
5362 @item @code{acc_prof_info.event_type}
5363 It's not clear if for @emph{nested} event callbacks (for example,
5364 @code{acc_ev_enqueue_launch_start} as part of a parent compute
5365 construct), this should be set for the nested event
5366 (@code{acc_ev_enqueue_launch_start}), or if the value of the parent
5367 construct should remain (@code{acc_ev_compute_construct_start}). In
5368 this implementation, the value generally corresponds to the
5369 innermost nested event type.
5371 @item @code{acc_prof_info.device_type}
5375 For @code{acc_ev_compute_construct_start}, and in presence of an
5376 @code{if} clause with @emph{false} argument, this still refers to
5377 the offloading device type.
5378 It's not clear if that's the expected behavior.
5381 Complementary to the item before, for
5382 @code{acc_ev_compute_construct_end}, this is set to
5383 @code{acc_device_host} in presence of an @code{if} clause with
5384 @emph{false} argument.
5385 It's not clear if that's the expected behavior.
5389 @item @code{acc_prof_info.thread_id}
5390 Always @code{-1}; not yet implemented.
5392 @item @code{acc_prof_info.async}
5396 Not yet implemented correctly for
5397 @code{acc_ev_compute_construct_start}.
5400 In a compute construct, for host-fallback
5401 execution/@code{acc_device_host}, it is always
5402 @code{acc_async_sync}.
5403 It is unclear if that is the expected behavior.
5406 For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
5407 it will always be @code{acc_async_sync}.
5408 It is unclear if that is the expected behavior.
5412 @item @code{acc_prof_info.async_queue}
5413 There is no @cite{limited number of asynchronous queues} in libgomp.
5414 This always has the same value as @code{acc_prof_info.async}.
5416 @item @code{acc_prof_info.src_file}
5417 Always @code{NULL}; not yet implemented.
5419 @item @code{acc_prof_info.func_name}
5420 Always @code{NULL}; not yet implemented.
5422 @item @code{acc_prof_info.line_no}
5423 Always @code{-1}; not yet implemented.
5425 @item @code{acc_prof_info.end_line_no}
5426 Always @code{-1}; not yet implemented.
5428 @item @code{acc_prof_info.func_line_no}
5429 Always @code{-1}; not yet implemented.
5431 @item @code{acc_prof_info.func_end_line_no}
5432 Always @code{-1}; not yet implemented.
5434 @item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
5435 Relating to @code{acc_prof_info.event_type} discussed above, in this
5436 implementation, this will always be the same value as
5437 @code{acc_prof_info.event_type}.
5439 @item @code{acc_event_info.*.parent_construct}
5443 Will be @code{acc_construct_parallel} for all OpenACC compute
5444 constructs as well as many OpenACC Runtime API calls; should be the
5445 one matching the actual construct, or
5446 @code{acc_construct_runtime_api}, respectively.
5449 Will be @code{acc_construct_enter_data} or
5450 @code{acc_construct_exit_data} when processing variable mappings
5451 specified in OpenACC @emph{declare} directives; should be
5452 @code{acc_construct_declare}.
5455 For implicit @code{acc_ev_device_init_start},
5456 @code{acc_ev_device_init_end}, and explicit as well as implicit
5457 @code{acc_ev_alloc}, @code{acc_ev_free},
5458 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5459 @code{acc_ev_enqueue_download_start}, and
5460 @code{acc_ev_enqueue_download_end}, will be
5461 @code{acc_construct_parallel}; should reflect the real parent construct.
5466 @item @code{acc_event_info.*.implicit}
5467 For @code{acc_ev_alloc}, @code{acc_ev_free},
5468 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5469 @code{acc_ev_enqueue_download_start}, and
5470 @code{acc_ev_enqueue_download_end}, this will currently be @code{1}
5471 even for explicit usage.
5473 @item @code{acc_event_info.data_event.var_name}
5474 Always @code{NULL}; not yet implemented.
5476 @item @code{acc_event_info.data_event.host_ptr}
5477 For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always @code{NULL}.
5480 @item @code{typedef union acc_api_info}
5481 @dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
5482 Information}. This should obviously be @code{typedef @emph{struct} acc_api_info}.
5485 @item @code{acc_api_info.device_api}
5486 Possibly not yet implemented correctly for
5487 @code{acc_ev_compute_construct_start},
5488 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
5489 will always be @code{acc_device_api_none} for these event types.
5490 For @code{acc_ev_enter_data_start}, it will be
5491 @code{acc_device_api_none} in some cases.
5493 @item @code{acc_api_info.device_type}
5494 Always the same as @code{acc_prof_info.device_type}.
5496 @item @code{acc_api_info.vendor}
5497 Always @code{-1}; not yet implemented.
5499 @item @code{acc_api_info.device_handle}
5500 Always @code{NULL}; not yet implemented.
5502 @item @code{acc_api_info.context_handle}
5503 Always @code{NULL}; not yet implemented.
5505 @item @code{acc_api_info.async_handle}
5506 Always @code{NULL}; not yet implemented.
5510 Remarks about certain event types:
5514 @item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
5518 @c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
5519 @c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
5520 @c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
5521 When a compute construct triggers implicit
5522 @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
5523 events, they currently aren't @emph{nested within} the corresponding
5524 @code{acc_ev_compute_construct_start} and
5525 @code{acc_ev_compute_construct_end}, but they're currently observed
5526 @emph{before} @code{acc_ev_compute_construct_start}.
5527 It's not clear what to do here: the standard asks us to provide a lot
5528 of details to the @code{acc_ev_compute_construct_start} callback, without
5529 (implicitly) initializing a device before?
5532 Callbacks for these event types will not be invoked for calls to the
5533 @code{acc_set_device_type} and @code{acc_set_device_num} functions.
5534 It's not clear if they should be.
5538 @item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
5542 Callbacks for these event types will also be invoked for OpenACC
5543 @emph{host_data} constructs.
5544 It's not clear if they should be.
5547 Callbacks for these event types will also be invoked when processing
5548 variable mappings specified in OpenACC @emph{declare} directives.
5549 It's not clear if they should be.
5555 Callbacks for the following event types will be invoked, but dispatch
5556 and the information provided therein have not yet been thoroughly reviewed:
5559 @item @code{acc_ev_alloc}
5560 @item @code{acc_ev_free}
5561 @item @code{acc_ev_update_start}, @code{acc_ev_update_end}
5562 @item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
5563 @item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
5566 During device initialization and finalization, respectively,
5567 callbacks for the following event types will not yet be invoked:
5570 @item @code{acc_ev_alloc}
5571 @item @code{acc_ev_free}
5574 Callbacks for the following event types have not yet been implemented,
5575 so currently won't be invoked:
5578 @item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
5579 @item @code{acc_ev_runtime_shutdown}
5580 @item @code{acc_ev_create}, @code{acc_ev_delete}
5581 @item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
5584 For the following runtime library functions, not all expected
5585 callbacks will be invoked (mostly concerning implicit device
initialization):
5589 @item @code{acc_get_num_devices}
5590 @item @code{acc_set_device_type}
5591 @item @code{acc_get_device_type}
5592 @item @code{acc_set_device_num}
5593 @item @code{acc_get_device_num}
5594 @item @code{acc_init}
5595 @item @code{acc_shutdown}
5598 Aside from implicit device initialization, for the following runtime
5599 library functions, no callbacks will be invoked for shared-memory
5600 offloading devices (it's not clear if they should be):
5603 @item @code{acc_malloc}
5604 @item @code{acc_free}
5605 @item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
5606 @item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
5607 @item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
5608 @item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
5609 @item @code{acc_update_device}, @code{acc_update_device_async}
5610 @item @code{acc_update_self}, @code{acc_update_self_async}
5611 @item @code{acc_map_data}, @code{acc_unmap_data}
5612 @item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
5613 @item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
5616 @c ---------------------------------------------------------------------
5617 @c OpenMP-Implementation Specifics
5618 @c ---------------------------------------------------------------------
5620 @node OpenMP-Implementation Specifics
5621 @chapter OpenMP-Implementation Specifics
5624 * Implementation-defined ICV Initialization::
5625 * OpenMP Context Selectors::
5626 * Memory allocation::
5629 @node Implementation-defined ICV Initialization
5630 @section Implementation-defined ICV Initialization
5631 @cindex Implementation specific setting
5633 @multitable @columnfractions .30 .70
5634 @item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
5635 @item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
5636 @item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
5637 @item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
5638 @item @var{nthreads-var} @tab See @ref{OMP_NUM_THREADS}.
5639 @item @var{num-devices-var} @tab Number of non-host devices found
5640 by GCC's run-time library
5641 @item @var{num-procs-var} @tab The number of CPU cores on the
5642 initial device, except that affinity settings might lead to a
5643 smaller number. On non-host devices, the value of the
5644 @var{nthreads-var} ICV.
5645 @item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
5646 @item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
5647 @item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
5648 @item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}
5649 @item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
5650 @ref{GOMP_SPINCOUNT}
5653 @node OpenMP Context Selectors
5654 @section OpenMP Context Selectors
5656 @code{vendor} is always @code{gnu}. References are to the GCC manual.
5658 @c NOTE: Only the following selectors have been implemented. To add
5659 @c additional traits for target architecture, TARGET_OMP_DEVICE_KIND_ARCH_ISA
5660 @c has to be implemented; cf. also PR target/105640.
5661 @c For offload devices, add *additionally* gcc/config/*/t-omp-device.
5663 For the host compiler, @code{kind} always matches @code{host}; for the
5664 offloading architectures AMD GCN and Nvidia PTX, @code{kind} always matches
5665 @code{gpu}. For the x86 family of processors, AMD GCN, and Nvidia PTX,
5666 the following traits are supported in addition; while OpenMP is supported
5667 on more architectures, GCC currently does not match any @code{arch} or
5668 @code{isa} traits for those.
5670 @multitable @columnfractions .65 .30
5671 @headitem @code{arch} @tab @code{isa}
5672 @item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
5673 @code{i586}, @code{i686}, @code{ia32}
5674 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
5675 @item @code{amdgcn}, @code{gcn}
5676 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
5677 @code{gfx803} is supported as an alias for @code{fiji}.}
@item @code{nvptx}
5679 @tab See @code{-march=} in ``Nvidia PTX Options''
5682 @node Memory allocation
5683 @section Memory allocation
5685 The description below applies to:
5688 @item Explicit use of the OpenMP API routines, see
5689 @ref{Memory Management Routines}.
5690 @item The @code{allocate} clause, except when the @code{allocator} modifier is a
5691 constant expression with value @code{omp_default_mem_alloc} and no
5692 @code{align} modifier has been specified. (In that case, the normal
5693 @code{malloc} allocation is used.)
5694 @item Using the @code{allocate} directive for automatic/stack variables, except
5695 when the @code{allocator} clause is a constant expression with value
5696 @code{omp_default_mem_alloc} and no @code{align} clause has been
5697 specified. (In that case, the normal allocation is used: stack allocation
5698 and, sometimes for Fortran, also @code{malloc} [depending on flags such as
5699 @option{-fstack-arrays}].)
5700 @item Using the @code{allocate} directive for variable in static memory is
5701 currently not supported (compile time error).
5702 @item In Fortran, the @code{allocators} directive and the executable
5703 @code{allocate} directive for Fortran pointers and allocatables is
5704 supported, but requires that files containing those directives has to be
5705 compiled with @option{-fopenmp-allocators}. Additionally, all files that
5706 might explicitly or implicitly deallocate memory allocated that way must
5707 also be compiled with that option.
5710 For the available predefined allocators and, as applicable, their associated
5711 predefined memory spaces and for the available traits and their default values,
5712 see @ref{OMP_ALLOCATOR}. Predefined allocators without an associated memory
5713 space use the @code{omp_default_mem_space} memory space.
5715 For the memory spaces, the following applies:
5717 @item @code{omp_default_mem_space} is supported
5718 @item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
5719 @item @code{omp_low_lat_mem_space} is only available on supported devices,
5720 and maps to @code{omp_default_mem_space} otherwise.
5721 @item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
5722 unless the memkind library is available
5723 @item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
5724 unless the memkind library is available
5727 On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
5728 library} (@code{libmemkind.so.0}) is available at runtime, it is used when
5729 creating memory allocators requesting
5732 @item the memory space @code{omp_high_bw_mem_space}
5733 @item the memory space @code{omp_large_cap_mem_space}
5734 @item the @code{partition} trait @code{interleaved}; note that for
5735 @code{omp_large_cap_mem_space} the allocation will not be interleaved
5738 On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
5739 library} (@code{libnuma.so.1}) is available at runtime, it is used when
5740 creating memory allocators requesting
5743 @item the @code{partition} trait @code{nearest}, except when both the
5744 libmemkind library is available and the memory space is either
5745 @code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
5748 Note that the numa library will round up the allocation size to a multiple of
5749 the system page size; therefore, consider using it only with large data or
5750 by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
5751 kernel does not guarantee that an allocation will always be on the nearest NUMA
5752 node nor that after reallocation the same node will be used. Note additionally
5753 that, on Linux, the default setting of the memory placement policy is to use the
5754 current node; therefore, unless the memory placement policy has been overridden,
5755 the @code{partition} trait @code{environment} (the default) will be effectively
5756 a @code{nearest} allocation.
5758 Additional notes regarding the traits:
5760 @item The @code{pinned} trait is supported on Linux hosts, but is subject to
5761 the OS @code{ulimit}/@code{rlimit} locked memory settings.
5762 @item The default for the @code{pool_size} trait is no pool and for every
5763 (re)allocation the associated library routine is called, which might
5764 internally use a memory pool.
5765 @item For the @code{partition} trait, the partition part size will be the same
5766 as the requested size (i.e. @code{interleaved} or @code{blocked} has no
5767 effect), except for @code{interleaved} when the memkind library is
5768 available. Furthermore, for @code{nearest} and unless the numa library
5769 is available, the memory might not be on the same NUMA node as the
5770 thread that allocated the memory; on Linux, this is in particular the case when
5771 the memory placement policy is set to preferred.
5772 @item The @code{access} trait has no effect; memory is always
5773 accessible by all threads.
5774 @item The @code{sync_hint} trait has no effect.
5778 @ref{Offload-Target Specifics}
5780 @c ---------------------------------------------------------------------
5781 @c Offload-Target Specifics
5782 @c ---------------------------------------------------------------------
5784 @node Offload-Target Specifics
5785 @chapter Offload-Target Specifics
5787 The following sections present notes on the offload-target specifics
5795 @section AMD Radeon (GCN)
5797 On the hardware side, there is the hierarchy (fine to coarse):
5799 @item work item (thread)
5802 @item compute unit (CU)
5805 All OpenMP and OpenACC levels are used, i.e.
5807 @item OpenMP's simd and OpenACC's vector map to work items (threads)
5808 @item OpenMP's threads (``parallel'') and OpenACC's workers map
to wavefronts
5810 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5811 size of the number of teams or gangs, respectively.
5816 @item The number of teams is the specified @code{num_teams} (OpenMP) or
5817 @code{num_gangs} (OpenACC) or otherwise the number of CUs. It is limited
5818 to twice the number of CUs.
5819 @item The number of wavefronts is 4 for gfx900 and 16 otherwise;
5820 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
5821 override this if smaller.
5822 @item The wavefront has 102 scalars and 64 vectors
5823 @item The number of work items per wavefront is always 64
5824 @item The hardware permits maximally 40 workgroups/CU and
5825 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
5826 @item 80 scalar registers and 24 vector registers are available in
5827 non-kernel functions (the chosen procedure-calling API).
5828 @item For the kernel itself: as many as register pressure demands (number of
5829 teams and number of threads, scaled down if registers are exhausted)
5832 Implementation remarks:
5834 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5835 using the C library @code{printf} functions and the Fortran
5836 @code{print}/@code{write} statements.
5837 @item Reverse offload regions (i.e. @code{target} regions with
5838 @code{device(ancestor:1)}) are processed serially per @code{target} region
5839 such that the next reverse offload region is only executed after the previous
one has completed.
5841 @item OpenMP code that has a @code{requires} directive with
5842 @code{unified_shared_memory} will remove any GCN device from the list of
5843 available devices (``host fallback'').
5844 @item The available stack size can be changed using the @code{GCN_STACK_SIZE}
5845 environment variable; the default is 32 kiB per thread.
5846 @item Low-latency memory (@code{omp_low_lat_mem_space}) is supported when
5847 the @code{access} trait is set to @code{cgroup}. The default pool size
5848 is automatically scaled to share the 64 kiB LDS memory between the number
5849 of teams configured to run on each compute-unit, but may be adjusted at
5850 runtime by setting environment variable
5851 @code{GOMP_GCN_LOWLAT_POOL=@var{bytes}}.
5852 @item @code{omp_low_lat_mem_alloc} cannot be used with true low-latency memory
5853 because the definition implies the @code{omp_atv_all} trait; main
5854 graphics memory is used instead.
5855 @item @code{omp_cgroup_mem_alloc}, @code{omp_pteam_mem_alloc}, and
5856 @code{omp_thread_mem_alloc}, all use low-latency memory as first
5857 preference, and fall back to main graphics memory when the low-latency
5866 On the hardware side, there is the hierarchy (fine to coarse):
5871 @item streaming multiprocessor
5874 All OpenMP and OpenACC levels are used, i.e.
5876 @item OpenMP's simd and OpenACC's vector map to threads
5877 @item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
5878 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5879 size of the number of teams or gangs, respectively.
5884 @item The @code{warp_size} is always 32
5885 @item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
5886 @item The number of teams is limited by the number of blocks the device can
5887 host simultaneously.
5890 Additional information can be obtained by setting the environment variable
5891 @code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for
launch details).
5894 GCC generates generic PTX ISA code, which is just-in-time compiled by
5895 CUDA, which caches the JIT-compiled code in the user's directory (see the
5896 CUDA documentation; this can be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).
5898 Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=} commandline
5899 options still affect the used PTX ISA code and, thus, the requirements on
5900 CUDA version and hardware.
5902 Implementation remarks:
5904 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5905 using the C library @code{printf} functions. Note that the Fortran
5906 @code{print}/@code{write} statements are not yet supported.
5907 @item Compiling OpenMP code that contains @code{requires reverse_offload}
5908 requires at least @code{-march=sm_35}; compiling for @code{-march=sm_30}
is not supported.
5910 @item For code containing reverse offload (i.e. @code{target} regions with
5911 @code{device(ancestor:1)}), there is a slight performance penalty
5912 for @emph{all} target regions, consisting mostly of shutdown delay.
5913 Per device, reverse offload regions are processed serially such that
5914 the next reverse offload region is only executed after the previous
one has completed.
5916 @item OpenMP code that has a @code{requires} directive with
5917 @code{unified_shared_memory} will remove any nvptx device from the
5918 list of available devices (``host fallback'').
5919 @item The default per-warp stack size is 128 kiB; see also @code{-msoft-stack}
in the GCC manual.
5921 @item The OpenMP routines @code{omp_target_memcpy_rect} and
5922 @code{omp_target_memcpy_rect_async} and the @code{target update}
5923 directive for non-contiguous list items will use the 2D and 3D
5924 memory-copy functions of the CUDA library. Higher dimensions will
5925 call those functions in a loop and are therefore supported.
5926 @item Low-latency memory (@code{omp_low_lat_mem_space}) is supported when
5927 the @code{access} trait is set to @code{cgroup}, the ISA is at least
5928 @code{sm_53}, and the PTX version is at least 4.1. The default pool size
5929 is 8 kiB per team, but may be adjusted at runtime by setting environment
5930 variable @code{GOMP_NVPTX_LOWLAT_POOL=@var{bytes}}. The maximum value is
5931 limited by the available hardware, and care should be taken that the
5932 selected pool size does not unduly limit the number of teams that can
run concurrently.
5934 @item @code{omp_low_lat_mem_alloc} cannot be used with true low-latency memory
5935 because the definition implies the @code{omp_atv_all} trait; main
5936 graphics memory is used instead.
5937 @item @code{omp_cgroup_mem_alloc}, @code{omp_pteam_mem_alloc}, and
5938 @code{omp_thread_mem_alloc}, all use low-latency memory as first
5939 preference, and fall back to main graphics memory when the low-latency
5944 @c ---------------------------------------------------------------------
5946 @c ---------------------------------------------------------------------
5948 @node The libgomp ABI
5949 @chapter The libgomp ABI
5951 The following sections present notes on the external ABI as
5952 presented by libgomp. Only maintainers should need them.
5955 * Implementing MASTER construct::
5956 * Implementing CRITICAL construct::
5957 * Implementing ATOMIC construct::
5958 * Implementing FLUSH construct::
5959 * Implementing BARRIER construct::
5960 * Implementing THREADPRIVATE construct::
5961 * Implementing PRIVATE clause::
5962 * Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
5963 * Implementing REDUCTION clause::
5964 * Implementing PARALLEL construct::
5965 * Implementing FOR construct::
5966 * Implementing ORDERED construct::
5967 * Implementing SECTIONS construct::
5968 * Implementing SINGLE construct::
5969 * Implementing OpenACC's PARALLEL construct::
5973 @node Implementing MASTER construct
5974 @section Implementing MASTER construct
5977 if (omp_get_thread_num () == 0)
5981 Alternatively, we generate two copies of the parallel subfunction
5982 and only include this in the version run by the primary thread.
5983 Surely this is not worthwhile though...
5987 @node Implementing CRITICAL construct
5988 @section Implementing CRITICAL construct
5990 Without a specified name,
5993 void GOMP_critical_start (void);
5994 void GOMP_critical_end (void);
5997 so that we don't get COPY relocations from libgomp to the main
application.
6000 With a specified name, use omp_set_lock and omp_unset_lock with
6001 name being transformed into a variable declared like
6004 omp_lock_t gomp_critical_user_<name> __attribute__((common))
6007 Ideally the ABI would specify that all zero is a valid unlocked
6008 state, and so we wouldn't need to initialize this at startup.
6013 @node Implementing ATOMIC construct
6014 @section Implementing ATOMIC construct
6016 The target should implement the @code{__sync} builtins.
6018 Failing that we could add
6021 void GOMP_atomic_enter (void)
6022 void GOMP_atomic_exit (void)
6025 which reuses the regular lock code, but with yet another lock
6026 object private to the library.
6030 @node Implementing FLUSH construct
6031 @section Implementing FLUSH construct
6033 Expands to the @code{__sync_synchronize} builtin.
6037 @node Implementing BARRIER construct
6038 @section Implementing BARRIER construct
6041 void GOMP_barrier (void)
6045 @node Implementing THREADPRIVATE construct
6046 @section Implementing THREADPRIVATE construct
6048 In @emph{most} cases we can map this directly to @code{__thread}. Except
6049 that OMP allows constructors for C++ objects. We can either
6050 refuse to support this (how often is it used?) or we can
6051 implement something akin to .ctors.
6053 Even more ideally, this ctor feature is handled by extensions
6054 to the main pthreads library. Failing that, we can have a set
6055 of entry points to register ctor functions to be called.
6059 @node Implementing PRIVATE clause
6060 @section Implementing PRIVATE clause
6062 In association with a PARALLEL, or within the lexical extent
6063 of a PARALLEL block, the variable becomes a local variable in
6064 the parallel subfunction.
6066 In association with FOR or SECTIONS blocks, create a new
6067 automatic variable within the current function. This preserves
6068 the semantic of new variable creation.
6072 @node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
6073 @section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
6075 This seems simple enough for PARALLEL blocks. Create a private
6076 struct for communicating between the parent and subfunction.
6077 In the parent, copy in values for scalar and "small" structs;
6078 copy in addresses for other TREE_ADDRESSABLE types. In the
6079 subfunction, copy the value into the local variable.
6081 It is not clear what to do with bare FOR or SECTION blocks.
6082 The only thing I can figure is that we do something like:
6085 #pragma omp for firstprivate(x) lastprivate(y)
6086 for (int i = 0; i < n; ++i)
6103 where the "x=x" and "y=y" assignments actually have different
6104 uids for the two variables, i.e. not something you could write
6105 directly in C. Presumably this only makes sense if the "outer"
6106 x and y are global variables.
6108 COPYPRIVATE would work the same way, except the structure
6109 broadcast would have to happen via SINGLE machinery instead.
@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.
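As a sketch, for a @code{reduction(+:sum)} clause the generated code
might look like the following, where @code{sum_array} stands for the
per-team array from the private struct and @code{team_id} and
@code{nthreads} are hypothetical names:

@smallexample
  sum_array[team_id] = local_sum;
  GOMP_barrier ();
  if (team_id == 0)
    @{
      for (i = 0; i < nthreads; i++)
        sum += sum_array[i];
    @}
@end smallexample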
@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample
@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.
The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if present,
or 0.
The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.
@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous
@code{omp_in_parallel()} state.
@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

and

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        for (i = _s0; i < _e0; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample
Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we don't have to.
If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of the startup routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...
@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample
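Within the expanded loop body, the ORDERED region is then simply
bracketed by these calls, roughly as in this sketch:

@smallexample
  for (i = _s0; i < _e0; i++)
    @{
      pre-ordered body;
      GOMP_ordered_start ();
      ordered body;
      GOMP_ordered_end ();
      post-ordered body;
    @}
@end smallexample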
@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_sections_end ();
@end smallexample
@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample
@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
  void GOACC_parallel ()
@end smallexample
@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------
@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
"openacc", or "openmp", or both to the keywords field in the bug
report, as appropriate.
@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi
@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi
@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi
@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index