1 \input texinfo @c -*-texinfo-*-
4 @setfilename libgomp.info
10 Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.
12 Permission is granted to copy, distribute and/or modify this document
13 under the terms of the GNU Free Documentation License, Version 1.3 or
14 any later version published by the Free Software Foundation; with the
15 Invariant Sections being ``Funding Free Software'', the Front-Cover
16 texts being (a) (see below), and with the Back-Cover Texts being (b)
17 (see below). A copy of the license is included in the section entitled
18 ``GNU Free Documentation License''.
20 (a) The FSF's Front-Cover Text is:
24 (b) The FSF's Back-Cover Text is:
26 You have freedom to copy and modify this GNU Manual, like GNU
27 software. Copies published by the Free Software Foundation raise
28 funds for GNU development.
32 @dircategory GNU Libraries
34 * libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
37 This manual documents libgomp, the GNU Offloading and Multi Processing
38 Runtime library. This is the GNU implementation of the OpenMP and
39 OpenACC APIs for parallel and accelerator programming in C/C++ and
42 Published by the Free Software Foundation
43 51 Franklin Street, Fifth Floor
44 Boston, MA 02110-1301 USA
50 @setchapternewpage odd
53 @title GNU Offloading and Multi Processing Runtime Library
54 @subtitle The GNU OpenMP and OpenACC Implementation
56 @vskip 0pt plus 1filll
57 @comment For the @value{version-GCC} Version*
59 Published by the Free Software Foundation @*
60 51 Franklin Street, Fifth Floor@*
61 Boston, MA 02110-1301, USA@*
71 @node Top, Enabling OpenMP
75 This manual documents the usage of libgomp, the GNU Offloading and
76 Multi Processing Runtime Library. This includes the GNU
77 implementation of the @uref{https://www.openmp.org, OpenMP} Application
78 Programming Interface (API) for multi-platform shared-memory parallel
79 programming in C/C++ and Fortran, and the GNU implementation of the
80 @uref{https://www.openacc.org, OpenACC} Application Programming
81 Interface (API) for offloading of code to accelerator devices in C/C++
84 Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
85 on this, support for OpenACC and offloading (both OpenACC and OpenMP
86 4's target construct) has been added later on, and the library's name
87 changed to GNU Offloading and Multi Processing Runtime Library.
92 @comment When you add a new menu item, please keep the right hand
93 @comment aligned to the same column. Do not use tabs. This provides
94 @comment better formatting.
97 * Enabling OpenMP:: How to enable OpenMP for your applications.
98 * OpenMP Implementation Status:: List of implemented features by OpenMP version
99 * OpenMP Runtime Library Routines: Runtime Library Routines.
100 The OpenMP runtime application programming
102 * OpenMP Environment Variables: Environment Variables.
103 Influencing OpenMP runtime behavior with
104 environment variables.
105 * Enabling OpenACC:: How to enable OpenACC for your
107 * OpenACC Runtime Library Routines:: The OpenACC runtime application
108 programming interface.
109 * OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
110 environment variables.
111 * CUDA Streams Usage:: Notes on the implementation of
112 asynchronous operations.
113 * OpenACC Library Interoperability:: OpenACC library interoperability with the
114 NVIDIA CUBLAS library.
115 * OpenACC Profiling Interface::
116 * OpenMP-Implementation Specifics:: Notes specifics of this OpenMP
118 * Offload-Target Specifics:: Notes on offload-target specific internals
119 * The libgomp ABI:: Notes on the external ABI presented by libgomp.
120 * Reporting Bugs:: How to report bugs in the GNU Offloading and
121 Multi Processing Runtime Library.
122 * Copying:: GNU general public license says
123 how you can copy and share libgomp.
124 * GNU Free Documentation License::
125 How you can copy and share this manual.
126 * Funding:: How to help assure continued work for free
128 * Library Index:: Index of this documentation.
132 @c ---------------------------------------------------------------------
134 @c ---------------------------------------------------------------------
136 @node Enabling OpenMP
137 @chapter Enabling OpenMP
139 To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
140 flag @command{-fopenmp} must be specified. This enables the OpenMP directive
141 @code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
142 @code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
143 @code{!$} conditional compilation sentinels in free form and @code{c$},
144 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
145 arranges for automatic linking of the OpenMP runtime library
146 (@ref{Runtime Library Routines}).
148 A complete description of all OpenMP directives may be found in the
149 @uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
150 See also @ref{OpenMP Implementation Status}.
153 @c ---------------------------------------------------------------------
154 @c OpenMP Implementation Status
155 @c ---------------------------------------------------------------------
157 @node OpenMP Implementation Status
158 @chapter OpenMP Implementation Status
161 * OpenMP 4.5:: Feature completion status to 4.5 specification
162 * OpenMP 5.0:: Feature completion status to 5.0 specification
163 * OpenMP 5.1:: Feature completion status to 5.1 specification
164 * OpenMP 5.2:: Feature completion status to 5.2 specification
165 * OpenMP Technical Report 11:: Feature completion status to first 6.0 preview
168 The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
169 parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
170 the value @code{201511} (i.e. OpenMP 4.5).
175 The OpenMP 4.5 specification is fully supported.
180 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
181 @c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2
183 @multitable @columnfractions .60 .10 .25
184 @headitem Description @tab Status @tab Comments
185 @item Array shaping @tab N @tab
186 @item Array sections with non-unit strides in C and C++ @tab N @tab
187 @item Iterators @tab Y @tab
188 @item @code{metadirective} directive @tab N @tab
189 @item @code{declare variant} directive
190 @tab P @tab @emph{simd} traits not handled correctly
191 @item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
192 env variable @tab Y @tab
193 @item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
194 @item @code{requires} directive @tab P
195 @tab complete but no non-host device provides @code{unified_shared_memory}
196 @item @code{teams} construct outside an enclosing target region @tab Y @tab
197 @item Non-rectangular loop nests @tab P
198 @tab Full support for C/C++, partial for Fortran
199 (@uref{https://gcc.gnu.org/PR110735,PR110735})
200 @item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
201 @item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
202 constructs @tab Y @tab
203 @item Collapse of associated loops that are imperfectly nested loops @tab Y @tab
204 @item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
205 @code{simd} construct @tab Y @tab
206 @item @code{atomic} constructs in @code{simd} @tab Y @tab
207 @item @code{loop} construct @tab Y @tab
208 @item @code{order(concurrent)} clause @tab Y @tab
209 @item @code{scan} directive and @code{in_scan} modifier for the
210 @code{reduction} clause @tab Y @tab
211 @item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
212 @item @code{in_reduction} clause on @code{target} constructs @tab P
213 @tab @code{nowait} only stub
214 @item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
215 @item @code{task} modifier to @code{reduction} clause @tab Y @tab
216 @item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
217 @item @code{detach} clause to @code{task} construct @tab Y @tab
218 @item @code{omp_fulfill_event} runtime routine @tab Y @tab
219 @item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
220 and @code{taskloop simd} constructs @tab Y @tab
221 @item @code{taskloop} construct cancelable by @code{cancel} construct
223 @item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
225 @item Predefined memory spaces, memory allocators, allocator traits
226 @tab Y @tab See also @ref{Memory allocation}
227 @item Memory management routines @tab Y @tab
228 @item @code{allocate} directive @tab P @tab Only C, only stack variables
229 @item @code{allocate} clause @tab P @tab Initial support
230 @item @code{use_device_addr} clause on @code{target data} @tab Y @tab
231 @item @code{ancestor} modifier on @code{device} clause @tab Y @tab
232 @item Implicit declare target directive @tab Y @tab
233 @item Discontiguous array section with @code{target update} construct
235 @item C/C++'s lvalue expressions in @code{to}, @code{from}
236 and @code{map} clauses @tab N @tab
237 @item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
238 @item Nested @code{declare target} directive @tab Y @tab
239 @item Combined @code{master} constructs @tab Y @tab
240 @item @code{depend} clause on @code{taskwait} @tab Y @tab
241 @item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
243 @item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
244 @item @code{depobj} construct and depend objects @tab Y @tab
245 @item Lock hints were renamed to synchronization hints @tab Y @tab
246 @item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
247 @item Map-order clarifications @tab P @tab
248 @item @code{close} @emph{map-type-modifier} @tab Y @tab
249 @item Mapping C/C++ pointer variables and to assign the address of
250 device memory mapped by an array section @tab P @tab
251 @item Mapping of Fortran pointer and allocatable variables, including pointer
252 and allocatable components of variables
253 @tab P @tab Mapping of vars with allocatable components unsupported
254 @item @code{defaultmap} extensions @tab Y @tab
255 @item @code{declare mapper} directive @tab N @tab
256 @item @code{omp_get_supported_active_levels} routine @tab Y @tab
257 @item Runtime routines and environment variables to display runtime thread
258 affinity information @tab Y @tab
259 @item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
261 @item @code{omp_get_device_num} runtime routine @tab Y @tab
262 @item OMPT interface @tab N @tab
263 @item OMPD interface @tab N @tab
266 @unnumberedsubsec Other new OpenMP 5.0 features
268 @multitable @columnfractions .60 .10 .25
269 @headitem Description @tab Status @tab Comments
270 @item Supporting C++'s range-based for loop @tab Y @tab
277 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
279 @multitable @columnfractions .60 .10 .25
280 @headitem Description @tab Status @tab Comments
281 @item OpenMP directive as C++ attribute specifiers @tab Y @tab
282 @item @code{omp_all_memory} reserved locator @tab Y @tab
283 @item @emph{target_device trait} in OpenMP Context @tab N @tab
284 @item @code{target_device} selector set in context selectors @tab N @tab
285 @item C/C++'s @code{declare variant} directive: elision support of
286 preprocessed code @tab N @tab
287 @item @code{declare variant}: new clauses @code{adjust_args} and
288 @code{append_args} @tab N @tab
289 @item @code{dispatch} construct @tab N @tab
290 @item device-specific ICV settings with environment variables @tab Y @tab
291 @item @code{assume} and @code{assumes} directives @tab Y @tab
292 @item @code{nothing} directive @tab Y @tab
293 @item @code{error} directive @tab Y @tab
294 @item @code{masked} construct @tab Y @tab
295 @item @code{scope} directive @tab Y @tab
296 @item Loop transformation constructs @tab N @tab
297 @item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
298 clauses of the @code{taskloop} construct @tab Y @tab
299 @item @code{align} clause in @code{allocate} directive @tab P
300 @tab Only C (and only stack variables)
301 @item @code{align} modifier in @code{allocate} clause @tab Y @tab
302 @item @code{thread_limit} clause to @code{target} construct @tab Y @tab
303 @item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
304 @item Iterators in @code{target update} motion clauses and @code{map}
306 @item Indirect calls to the device version of a procedure or function in
307 @code{target} regions @tab N @tab
308 @item @code{interop} directive @tab N @tab
309 @item @code{omp_interop_t} object support in runtime routines @tab N @tab
310 @item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
311 @item Extensions to the @code{atomic} directive @tab Y @tab
312 @item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
313 @item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
314 @item @code{private} and @code{firstprivate} argument to @code{default}
315 clause in C and C++ @tab Y @tab
316 @item @code{present} argument to @code{defaultmap} clause @tab Y @tab
317 @item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
318 @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
320 @item @code{omp_target_is_accessible} runtime routine @tab Y @tab
321 @item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
322 runtime routines @tab Y @tab
323 @item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
324 @item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
325 @code{omp_aligned_calloc} runtime routines @tab Y @tab
326 @item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
327 @code{omp_atv_default} changed @tab Y @tab
328 @item @code{omp_display_env} runtime routine @tab Y @tab
329 @item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
330 @item @code{ompt_sync_region_t} enum additions @tab N @tab
331 @item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
332 and @code{ompt_state_wait_barrier_teams} @tab N @tab
333 @item @code{ompt_callback_target_data_op_emi_t},
334 @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
335 and @code{ompt_callback_target_submit_emi_t} @tab N @tab
336 @item @code{ompt_callback_error_t} type @tab N @tab
337 @item @code{OMP_PLACES} syntax extensions @tab Y @tab
338 @item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
339 variables @tab Y @tab
342 @unnumberedsubsec Other new OpenMP 5.1 features
344 @multitable @columnfractions .60 .10 .25
345 @headitem Description @tab Status @tab Comments
346 @item Support of strictly structured blocks in Fortran @tab Y @tab
347 @item Support of structured block sequences in C/C++ @tab Y @tab
348 @item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
350 @item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
351 @item Pointer predetermined firstprivate getting initialized
352 to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
353 @item For Fortran, diagnose placing declarative before/between @code{USE},
354 @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
355 @item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
356 @item @code{indirect} clause in @code{declare target} @tab N @tab
357 @item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
358 @item @code{present} modifier to the @code{map}, @code{to} and @code{from}
366 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
368 @multitable @columnfractions .60 .10 .25
369 @headitem Description @tab Status @tab Comments
370 @item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
372 @item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
374 @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
375 sentinel as C/C++ pragma and C++ attributes are warned for with
376 @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
377 (enabled by default), respectively; for Fortran free-source code, there is
378 a warning enabled by default and, for fixed-source code, the @code{omx}
379 sentinel is warned for with with @code{-Wsurprising} (enabled by
380 @code{-Wall}). Unknown clauses are always rejected with an error.}
381 @item Clauses on @code{end} directive can be on directive @tab Y @tab
382 @item @code{destroy} clause with destroy-var argument on @code{depobj}
384 @item Deprecation of no-argument @code{destroy} clause on @code{depobj}
386 @item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
387 @item Deprecation of minus operator for reductions @tab N @tab
388 @item Deprecation of separating @code{map} modifiers without comma @tab N @tab
389 @item @code{declare mapper} with iterator and @code{present} modifiers
391 @item If a matching mapped list item is not found in the data environment, the
392 pointer retains its original value @tab Y @tab
393 @item New @code{enter} clause as alias for @code{to} on declare target directive
395 @item Deprecation of @code{to} clause on declare target directive @tab N @tab
396 @item Extended list of directives permitted in Fortran pure procedures
398 @item New @code{allocators} directive for Fortran @tab N @tab
399 @item Deprecation of @code{allocate} directive for Fortran
400 allocatables/pointers @tab N @tab
401 @item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
402 @item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
404 @item Deprecation of traits array following the allocator_handle expression in
405 @code{uses_allocators} @tab N @tab
406 @item New @code{otherwise} clause as alias for @code{default} on metadirectives
408 @item Deprecation of @code{default} clause on metadirectives @tab N @tab
409 @item Deprecation of delimited form of @code{declare target} @tab N @tab
410 @item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
411 @item @code{allocate} and @code{firstprivate} clauses on @code{scope}
413 @item @code{ompt_callback_work} @tab N @tab
414 @item Default map-type for the @code{map} clause in @code{target enter/exit data}
416 @item New @code{doacross} clause as alias for @code{depend} with
417 @code{source}/@code{sink} modifier @tab Y @tab
418 @item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
420 @item @code{omp_cur_iteration} keyword @tab Y @tab
423 @unnumberedsubsec Other new OpenMP 5.2 features
425 @multitable @columnfractions .60 .10 .25
426 @headitem Description @tab Status @tab Comments
427 @item For Fortran, optional comma between directive and clause @tab N @tab
428 @item Conforming device numbers and @code{omp_initial_device} and
429 @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
430 @item Initial value of @var{default-device-var} ICV with
431 @code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
432 @item @code{all} as @emph{implicit-behavior} for @code{defaultmap} @tab Y @tab
433 @item @emph{interop_types} in any position of the modifier list for the @code{init} clause
434 of the @code{interop} construct @tab N @tab
438 @node OpenMP Technical Report 11
439 @section OpenMP Technical Report 11
441 Technical Report (TR) 11 is the first preview for OpenMP 6.0.
443 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
444 @multitable @columnfractions .60 .10 .25
445 @item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
446 @tab N/A @tab Backward compatibility
447 @item The @code{decl} attribute was added to the C++ attribute syntax
449 @item @code{_ALL} suffix to the device-scope environment variables
450 @tab P @tab Host device number wrongly accepted
451 @item For Fortran, @emph{locator list} can be also function reference with
452 data pointer result @tab N @tab
453 @item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
455 @item Implicit reduction identifiers of C++ classes
457 @item Change of the @emph{map-type} property from @emph{ultimate} to
458 @emph{default} @tab N @tab
459 @item Concept of @emph{assumed-size arrays} in C and C++
461 @item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
463 @item @code{groupprivate} directive @tab N @tab
464 @item @code{local} clause to declare target directive @tab N @tab
465 @item @code{part_size} allocator trait @tab N @tab
466 @item @code{pin_device}, @code{preferred_device} and @code{target_access}
469 @item @code{access} allocator trait changes @tab N @tab
470 @item Extension of @code{interop} operation of @code{append_args}, allowing all
471 modifiers of the @code{init} clause
473 @item @code{interop} clause to @code{dispatch} @tab N @tab
474 @item @code{apply} code to loop-transforming constructs @tab N @tab
475 @item @code{omp_curr_progress_width} identifier @tab N @tab
476 @item @code{safesync} clause to the @code{parallel} construct @tab N @tab
477 @item @code{omp_get_max_progress_width} runtime routine @tab N @tab
478 @item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
479 @item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
480 @item Routines for obtaining memory spaces/allocators for shared/device memory
482 @item @code{omp_get_memspace_num_resources} routine @tab N @tab
483 @item @code{omp_get_submemspace} routine @tab N @tab
484 @item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
485 @item Extension of @code{OMP_DEFAULT_DEVICE} and new
486 @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
487 @item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
490 @unnumberedsubsec Other new TR 11 features
491 @multitable @columnfractions .60 .10 .25
492 @item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
493 @item Mapping lambda captures @tab N @tab
494 @item For Fortran, atomic compare with storing the comparison result
500 @c ---------------------------------------------------------------------
501 @c OpenMP Runtime Library Routines
502 @c ---------------------------------------------------------------------
504 @node Runtime Library Routines
505 @chapter OpenMP Runtime Library Routines
507 The runtime routines described here are defined by Section 18 of the OpenMP
508 specification in version 5.2.
511 * Thread Team Routines::
512 * Thread Affinity Routines::
513 * Teams Region Routines::
515 @c * Resource Relinquishing Routines::
516 * Device Information Routines::
517 * Device Memory Routines::
521 @c * Interoperability Routines::
522 * Memory Management Routines::
523 @c * Tool Control Routine::
524 @c * Environment Display Routine::
529 @node Thread Team Routines
530 @section Thread Team Routines
532 Routines controlling threads in the current contention group.
533 They have C linkage and do not throw exceptions.
536 * omp_set_num_threads:: Set upper team size limit
537 * omp_get_num_threads:: Size of the active team
538 * omp_get_max_threads:: Maximum number of threads of parallel region
539 * omp_get_thread_num:: Current thread ID
540 * omp_in_parallel:: Whether a parallel region is active
541 * omp_set_dynamic:: Enable/disable dynamic teams
542 * omp_get_dynamic:: Dynamic teams setting
543 * omp_get_cancellation:: Whether cancellation support is enabled
544 * omp_set_nested:: Enable/disable nested parallel regions
545 * omp_get_nested:: Nested parallel regions
546 * omp_set_schedule:: Set the runtime scheduling method
547 * omp_get_schedule:: Obtain the runtime scheduling method
548 * omp_get_teams_thread_limit:: Maximum number of threads imposed by teams
549 * omp_get_supported_active_levels:: Maximum number of active regions supported
550 * omp_set_max_active_levels:: Limits the number of active parallel regions
551 * omp_get_max_active_levels:: Current maximum number of active regions
552 * omp_get_level:: Number of parallel regions
553 * omp_get_ancestor_thread_num:: Ancestor thread ID
554 * omp_get_team_size:: Number of threads in a team
555 * omp_get_active_level:: Number of active parallel regions
560 @node omp_set_num_threads
561 @subsection @code{omp_set_num_threads} -- Set upper team size limit
563 @item @emph{Description}:
564 Specifies the number of threads used by default in subsequent parallel
565 sections, if those do not specify a @code{num_threads} clause. The
566 argument of @code{omp_set_num_threads} shall be a positive integer.
569 @multitable @columnfractions .20 .80
570 @item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
573 @item @emph{Fortran}:
574 @multitable @columnfractions .20 .80
575 @item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
576 @item @tab @code{integer, intent(in) :: num_threads}
579 @item @emph{See also}:
580 @ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
582 @item @emph{Reference}:
583 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
588 @node omp_get_num_threads
589 @subsection @code{omp_get_num_threads} -- Size of the active team
591 @item @emph{Description}:
592 Returns the number of threads in the current team. In a sequential section of
593 the program @code{omp_get_num_threads} returns 1.
595 The default team size may be initialized at startup by the
596 @env{OMP_NUM_THREADS} environment variable. At runtime, the size
597 of the current team may be set either by the @code{NUM_THREADS}
598 clause or by @code{omp_set_num_threads}. If none of the above were
599 used to define a specific value and @env{OMP_DYNAMIC} is disabled,
600 one thread per CPU online is used.
603 @multitable @columnfractions .20 .80
604 @item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
607 @item @emph{Fortran}:
608 @multitable @columnfractions .20 .80
609 @item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
612 @item @emph{See also}:
613 @ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
615 @item @emph{Reference}:
616 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
621 @node omp_get_max_threads
622 @subsection @code{omp_get_max_threads} -- Maximum number of threads of parallel region
624 @item @emph{Description}:
625 Return the maximum number of threads used for the current parallel region
626 that does not use the clause @code{num_threads}.
629 @multitable @columnfractions .20 .80
630 @item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
633 @item @emph{Fortran}:
634 @multitable @columnfractions .20 .80
635 @item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
638 @item @emph{See also}:
639 @ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
641 @item @emph{Reference}:
642 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
647 @node omp_get_thread_num
648 @subsection @code{omp_get_thread_num} -- Current thread ID
650 @item @emph{Description}:
651 Returns a unique thread identification number within the current team.
652 In a sequential parts of the program, @code{omp_get_thread_num}
653 always returns 0. In parallel regions the return value varies
654 from 0 to @code{omp_get_num_threads}-1 inclusive. The return
655 value of the primary thread of a team is always 0.
658 @multitable @columnfractions .20 .80
659 @item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
662 @item @emph{Fortran}:
663 @multitable @columnfractions .20 .80
664 @item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
667 @item @emph{See also}:
668 @ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
670 @item @emph{Reference}:
671 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
676 @node omp_in_parallel
677 @subsection @code{omp_in_parallel} -- Whether a parallel region is active
679 @item @emph{Description}:
680 This function returns @code{true} if currently running in parallel,
681 @code{false} otherwise. Here, @code{true} and @code{false} represent
682 their language-specific counterparts.
685 @multitable @columnfractions .20 .80
686 @item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
689 @item @emph{Fortran}:
690 @multitable @columnfractions .20 .80
691 @item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
694 @item @emph{Reference}:
695 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
699 @node omp_set_dynamic
700 @subsection @code{omp_set_dynamic} -- Enable/disable dynamic teams
702 @item @emph{Description}:
703 Enable or disable the dynamic adjustment of the number of threads
704 within a team. The function takes the language-specific equivalent
705 of @code{true} and @code{false}, where @code{true} enables dynamic
706 adjustment of team sizes and @code{false} disables it.
709 @multitable @columnfractions .20 .80
710 @item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
713 @item @emph{Fortran}:
714 @multitable @columnfractions .20 .80
715 @item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
716 @item @tab @code{logical, intent(in) :: dynamic_threads}
719 @item @emph{See also}:
720 @ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
722 @item @emph{Reference}:
723 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
728 @node omp_get_dynamic
729 @subsection @code{omp_get_dynamic} -- Dynamic teams setting
731 @item @emph{Description}:
732 This function returns @code{true} if enabled, @code{false} otherwise.
733 Here, @code{true} and @code{false} represent their language-specific
736 The dynamic team setting may be initialized at startup by the
737 @env{OMP_DYNAMIC} environment variable or at runtime using
738 @code{omp_set_dynamic}. If undefined, dynamic adjustment is
742 @multitable @columnfractions .20 .80
743 @item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
746 @item @emph{Fortran}:
747 @multitable @columnfractions .20 .80
748 @item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
751 @item @emph{See also}:
752 @ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
754 @item @emph{Reference}:
755 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
760 @node omp_get_cancellation
761 @subsection @code{omp_get_cancellation} -- Whether cancellation support is enabled
763 @item @emph{Description}:
764 This function returns @code{true} if cancellation is activated, @code{false}
765 otherwise. Here, @code{true} and @code{false} represent their language-specific
766 counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
770 @multitable @columnfractions .20 .80
771 @item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
774 @item @emph{Fortran}:
775 @multitable @columnfractions .20 .80
776 @item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
779 @item @emph{See also}:
780 @ref{OMP_CANCELLATION}
782 @item @emph{Reference}:
783 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
789 @subsection @code{omp_set_nested} -- Enable/disable nested parallel regions
791 @item @emph{Description}:
792 Enable or disable nested parallel regions, i.e., whether team members
793 are allowed to create new teams. The function takes the language-specific
794 equivalent of @code{true} and @code{false}, where @code{true} enables
795 dynamic adjustment of team sizes and @code{false} disables it.
797 Enabling nested parallel regions will also set the maximum number of
798 active nested regions to the maximum supported. Disabling nested parallel
799 regions will set the maximum number of active nested regions to one.
801 Note that the @code{omp_set_nested} API routine was deprecated
802 in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
805 @multitable @columnfractions .20 .80
806 @item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
809 @item @emph{Fortran}:
810 @multitable @columnfractions .20 .80
811 @item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
812 @item @tab @code{logical, intent(in) :: nested}
815 @item @emph{See also}:
816 @ref{omp_get_nested}, @ref{omp_set_max_active_levels},
817 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
819 @item @emph{Reference}:
820 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
826 @subsection @code{omp_get_nested} -- Nested parallel regions
828 @item @emph{Description}:
829 This function returns @code{true} if nested parallel regions are
830 enabled, @code{false} otherwise. Here, @code{true} and @code{false}
831 represent their language-specific counterparts.
833 The state of nested parallel regions at startup depends on several
834 environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
835 and is set to greater than one, then nested parallel regions will be
836 enabled. If not defined, then the value of the @env{OMP_NESTED}
837 environment variable will be followed if defined. If neither are
838 defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
839 are defined with a list of more than one value, then nested parallel
840 regions are enabled. If none of these are defined, then nested parallel
841 regions are disabled by default.
843 Nested parallel regions can be enabled or disabled at runtime using
844 @code{omp_set_nested}, or by setting the maximum number of nested
845 regions with @code{omp_set_max_active_levels} to one to disable, or
848 Note that the @code{omp_get_nested} API routine was deprecated
849 in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
852 @multitable @columnfractions .20 .80
853 @item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
856 @item @emph{Fortran}:
857 @multitable @columnfractions .20 .80
858 @item @emph{Interface}: @tab @code{logical function omp_get_nested()}
861 @item @emph{See also}:
862 @ref{omp_get_max_active_levels}, @ref{omp_set_nested},
863 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
865 @item @emph{Reference}:
866 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
871 @node omp_set_schedule
872 @subsection @code{omp_set_schedule} -- Set the runtime scheduling method
874 @item @emph{Description}:
875 Sets the runtime scheduling method. The @var{kind} argument can have the
876 value @code{omp_sched_static}, @code{omp_sched_dynamic},
877 @code{omp_sched_guided} or @code{omp_sched_auto}. Except for
878 @code{omp_sched_auto}, the chunk size is set to the value of
879 @var{chunk_size} if positive, or to the default value if zero or negative.
880 For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
883 @multitable @columnfractions .20 .80
884 @item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
887 @item @emph{Fortran}:
888 @multitable @columnfractions .20 .80
889 @item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
890 @item @tab @code{integer(kind=omp_sched_kind) kind}
891 @item @tab @code{integer chunk_size}
894 @item @emph{See also}:
895 @ref{omp_get_schedule}
898 @item @emph{Reference}:
899 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
904 @node omp_get_schedule
905 @subsection @code{omp_get_schedule} -- Obtain the runtime scheduling method
907 @item @emph{Description}:
908 Obtain the runtime scheduling method. The @var{kind} argument will be
909 set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
910 @code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
911 @var{chunk_size}, is set to the chunk size.
914 @multitable @columnfractions .20 .80
915 @item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
918 @item @emph{Fortran}:
919 @multitable @columnfractions .20 .80
920 @item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
921 @item @tab @code{integer(kind=omp_sched_kind) kind}
922 @item @tab @code{integer chunk_size}
925 @item @emph{See also}:
926 @ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
928 @item @emph{Reference}:
929 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
933 @node omp_get_teams_thread_limit
934 @subsection @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
936 @item @emph{Description}:
937 Return the maximum number of threads that will be able to participate in
938 each team created by a teams construct.
941 @multitable @columnfractions .20 .80
942 @item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
945 @item @emph{Fortran}:
946 @multitable @columnfractions .20 .80
947 @item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
950 @item @emph{See also}:
951 @ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
953 @item @emph{Reference}:
954 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
959 @node omp_get_supported_active_levels
960 @subsection @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
962 @item @emph{Description}:
963 This function returns the maximum number of nested, active parallel regions
964 supported by this implementation.
967 @multitable @columnfractions .20 .80
968 @item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
971 @item @emph{Fortran}:
972 @multitable @columnfractions .20 .80
973 @item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
976 @item @emph{See also}:
977 @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
979 @item @emph{Reference}:
980 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
985 @node omp_set_max_active_levels
986 @subsection @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
988 @item @emph{Description}:
989 This function limits the maximum allowed number of nested, active
990 parallel regions. @var{max_levels} must be less or equal to
991 the value returned by @code{omp_get_supported_active_levels}.
994 @multitable @columnfractions .20 .80
995 @item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
998 @item @emph{Fortran}:
999 @multitable @columnfractions .20 .80
1000 @item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
1001 @item @tab @code{integer max_levels}
1004 @item @emph{See also}:
1005 @ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
1006 @ref{omp_get_supported_active_levels}
1008 @item @emph{Reference}:
1009 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
1014 @node omp_get_max_active_levels
1015 @subsection @code{omp_get_max_active_levels} -- Current maximum number of active regions
1017 @item @emph{Description}:
1018 This function obtains the maximum allowed number of nested, active parallel regions.
1021 @multitable @columnfractions .20 .80
1022 @item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
1025 @item @emph{Fortran}:
1026 @multitable @columnfractions .20 .80
1027 @item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
1030 @item @emph{See also}:
1031 @ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
1033 @item @emph{Reference}:
1034 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
1039 @subsection @code{omp_get_level} -- Obtain the current nesting level
1041 @item @emph{Description}:
1042 This function returns the nesting level for the parallel blocks,
1043 which enclose the calling call.
1046 @multitable @columnfractions .20 .80
1047 @item @emph{Prototype}: @tab @code{int omp_get_level(void);}
1050 @item @emph{Fortran}:
1051 @multitable @columnfractions .20 .80
1052 @item @emph{Interface}: @tab @code{integer function omp_level()}
1055 @item @emph{See also}:
1056 @ref{omp_get_active_level}
1058 @item @emph{Reference}:
1059 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
1064 @node omp_get_ancestor_thread_num
1065 @subsection @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
1067 @item @emph{Description}:
1068 This function returns the thread identification number for the given
1069 nesting level of the current thread. For values of @var{level} outside
1070 zero to @code{omp_get_level} -1 is returned; if @var{level} is
1071 @code{omp_get_level} the result is identical to @code{omp_get_thread_num}.
1074 @multitable @columnfractions .20 .80
1075 @item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
1078 @item @emph{Fortran}:
1079 @multitable @columnfractions .20 .80
1080 @item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
1081 @item @tab @code{integer level}
1084 @item @emph{See also}:
1085 @ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
1087 @item @emph{Reference}:
1088 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
1093 @node omp_get_team_size
1094 @subsection @code{omp_get_team_size} -- Number of threads in a team
1096 @item @emph{Description}:
1097 This function returns the number of threads in a thread team to which
1098 either the current thread or its ancestor belongs. For values of @var{level}
1099 outside zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
1100 1 is returned, and for @code{omp_get_level}, the result is identical
1101 to @code{omp_get_num_threads}.
1104 @multitable @columnfractions .20 .80
1105 @item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
1108 @item @emph{Fortran}:
1109 @multitable @columnfractions .20 .80
1110 @item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
1111 @item @tab @code{integer level}
1114 @item @emph{See also}:
1115 @ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
1117 @item @emph{Reference}:
1118 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
1123 @node omp_get_active_level
1124 @subsection @code{omp_get_active_level} -- Number of parallel regions
1126 @item @emph{Description}:
1127 This function returns the nesting level for the active parallel blocks,
1128 which enclose the calling call.
1131 @multitable @columnfractions .20 .80
1132 @item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
1135 @item @emph{Fortran}:
1136 @multitable @columnfractions .20 .80
1137 @item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
1140 @item @emph{See also}:
1141 @ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1143 @item @emph{Reference}:
1144 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
1149 @node Thread Affinity Routines
1150 @section Thread Affinity Routines
1152 Routines controlling and accessing thread-affinity policies.
1153 They have C linkage and do not throw exceptions.
1156 * omp_get_proc_bind:: Whether threads may be moved between CPUs
1157 @c * omp_get_num_places:: <fixme>
1158 @c * omp_get_place_num_procs:: <fixme>
1159 @c * omp_get_place_proc_ids:: <fixme>
1160 @c * omp_get_place_num:: <fixme>
1161 @c * omp_get_partition_num_places:: <fixme>
1162 @c * omp_get_partition_place_nums:: <fixme>
1163 @c * omp_set_affinity_format:: <fixme>
1164 @c * omp_get_affinity_format:: <fixme>
1165 @c * omp_display_affinity:: <fixme>
1166 @c * omp_capture_affinity:: <fixme>
1171 @node omp_get_proc_bind
1172 @subsection @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
1174 @item @emph{Description}:
1175 This functions returns the currently active thread affinity policy, which is
1176 set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
1177 @code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
1178 @code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
1179 where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
1182 @multitable @columnfractions .20 .80
1183 @item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
1186 @item @emph{Fortran}:
1187 @multitable @columnfractions .20 .80
1188 @item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
1191 @item @emph{See also}:
1192 @ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY},
1194 @item @emph{Reference}:
1195 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
1200 @node Teams Region Routines
1201 @section Teams Region Routines
1203 Routines controlling the league of teams that are executed in a @code{teams}
1204 region. They have C linkage and do not throw exceptions.
1207 * omp_get_num_teams:: Number of teams
1208 * omp_get_team_num:: Get team number
1209 * omp_set_num_teams:: Set upper teams limit for teams region
1210 * omp_get_max_teams:: Maximum number of teams for teams region
1211 * omp_set_teams_thread_limit:: Set upper thread limit for teams construct
1212 * omp_get_thread_limit:: Maximum number of threads
1217 @node omp_get_num_teams
1218 @subsection @code{omp_get_num_teams} -- Number of teams
1220 @item @emph{Description}:
1221 Returns the number of teams in the current team region.
1224 @multitable @columnfractions .20 .80
1225 @item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
1228 @item @emph{Fortran}:
1229 @multitable @columnfractions .20 .80
1230 @item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
1233 @item @emph{Reference}:
1234 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
1239 @node omp_get_team_num
1240 @subsection @code{omp_get_team_num} -- Get team number
1242 @item @emph{Description}:
1243 Returns the team number of the calling thread.
1246 @multitable @columnfractions .20 .80
1247 @item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
1250 @item @emph{Fortran}:
1251 @multitable @columnfractions .20 .80
1252 @item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
1255 @item @emph{Reference}:
1256 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
1261 @node omp_set_num_teams
1262 @subsection @code{omp_set_num_teams} -- Set upper teams limit for teams construct
1264 @item @emph{Description}:
1265 Specifies the upper bound for number of teams created by the teams construct
1266 which does not specify a @code{num_teams} clause. The
1267 argument of @code{omp_set_num_teams} shall be a positive integer.
1270 @multitable @columnfractions .20 .80
1271 @item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
1274 @item @emph{Fortran}:
1275 @multitable @columnfractions .20 .80
1276 @item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
1277 @item @tab @code{integer, intent(in) :: num_teams}
1280 @item @emph{See also}:
1281 @ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
1283 @item @emph{Reference}:
1284 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
1289 @node omp_get_max_teams
1290 @subsection @code{omp_get_max_teams} -- Maximum number of teams of teams region
1292 @item @emph{Description}:
1293 Return the maximum number of teams used for the teams region
1294 that does not use the clause @code{num_teams}.
1297 @multitable @columnfractions .20 .80
1298 @item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
1301 @item @emph{Fortran}:
1302 @multitable @columnfractions .20 .80
1303 @item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
1306 @item @emph{See also}:
1307 @ref{omp_set_num_teams}, @ref{omp_get_num_teams}
1309 @item @emph{Reference}:
1310 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
1315 @node omp_set_teams_thread_limit
1316 @subsection @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
1318 @item @emph{Description}:
1319 Specifies the upper bound for number of threads that will be available
1320 for each team created by the teams construct which does not specify a
1321 @code{thread_limit} clause. The argument of
1322 @code{omp_set_teams_thread_limit} shall be a positive integer.
1325 @multitable @columnfractions .20 .80
1326 @item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
1329 @item @emph{Fortran}:
1330 @multitable @columnfractions .20 .80
1331 @item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
1332 @item @tab @code{integer, intent(in) :: thread_limit}
1335 @item @emph{See also}:
1336 @ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
1338 @item @emph{Reference}:
1339 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
1344 @node omp_get_thread_limit
1345 @subsection @code{omp_get_thread_limit} -- Maximum number of threads
1347 @item @emph{Description}:
1348 Return the maximum number of threads of the program.
1351 @multitable @columnfractions .20 .80
1352 @item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
1355 @item @emph{Fortran}:
1356 @multitable @columnfractions .20 .80
1357 @item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
1360 @item @emph{See also}:
1361 @ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
1363 @item @emph{Reference}:
1364 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
1369 @node Tasking Routines
1370 @section Tasking Routines
1372 Routines relating to explicit tasks.
1373 They have C linkage and do not throw exceptions.
1376 * omp_get_max_task_priority:: Maximum task priority value that can be set
1377 * omp_in_explicit_task:: Whether a given task is an explicit task
1378 * omp_in_final:: Whether in final or included task region
1383 @node omp_get_max_task_priority
1384 @subsection @code{omp_get_max_task_priority} -- Maximum priority value
1385 that can be set for tasks.
1387 @item @emph{Description}:
1388 This function obtains the maximum allowed priority number for tasks.
1391 @multitable @columnfractions .20 .80
1392 @item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
1395 @item @emph{Fortran}:
1396 @multitable @columnfractions .20 .80
1397 @item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
1400 @item @emph{Reference}:
1401 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1406 @node omp_in_explicit_task
1407 @subsection @code{omp_in_explicit_task} -- Whether a given task is an explicit task
1409 @item @emph{Description}:
1410 The function returns the @var{explicit-task-var} ICV; it returns true when the
1411 encountering task was generated by a task-generating construct such as
1412 @code{target}, @code{task} or @code{taskloop}. Otherwise, the encountering task
1413 is in an implicit task region such as generated by the implicit or explicit
1414 @code{parallel} region and @code{omp_in_explicit_task} returns false.
1417 @multitable @columnfractions .20 .80
1418 @item @emph{Prototype}: @tab @code{int omp_in_explicit_task(void);}
1421 @item @emph{Fortran}:
1422 @multitable @columnfractions .20 .80
1423 @item @emph{Interface}: @tab @code{logical function omp_in_explicit_task()}
1426 @item @emph{Reference}:
1427 @uref{https://www.openmp.org, OpenMP specification v5.2}, Section 18.5.2.
1433 @subsection @code{omp_in_final} -- Whether in final or included task region
1435 @item @emph{Description}:
1436 This function returns @code{true} if currently running in a final
1437 or included task region, @code{false} otherwise. Here, @code{true}
1438 and @code{false} represent their language-specific counterparts.
1441 @multitable @columnfractions .20 .80
1442 @item @emph{Prototype}: @tab @code{int omp_in_final(void);}
1445 @item @emph{Fortran}:
1446 @multitable @columnfractions .20 .80
1447 @item @emph{Interface}: @tab @code{logical function omp_in_final()}
1450 @item @emph{Reference}:
1451 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
1456 @c @node Resource Relinquishing Routines
1457 @c @section Resource Relinquishing Routines
1459 @c Routines releasing resources used by the OpenMP runtime.
1460 @c They have C linkage and do not throw exceptions.
1463 @c * omp_pause_resource:: <fixme>
1464 @c * omp_pause_resource_all:: <fixme>
1467 @node Device Information Routines
1468 @section Device Information Routines
1470 Routines related to devices available to an OpenMP program.
1471 They have C linkage and do not throw exceptions.
1474 * omp_get_num_procs:: Number of processors online
1475 @c * omp_get_max_progress_width:: <fixme>/TR11
1476 * omp_set_default_device:: Set the default device for target regions
1477 * omp_get_default_device:: Get the default device for target regions
1478 * omp_get_num_devices:: Number of target devices
1479 * omp_get_device_num:: Get device that current thread is running on
1480 * omp_is_initial_device:: Whether executing on the host device
1481 * omp_get_initial_device:: Device number of host device
1486 @node omp_get_num_procs
1487 @subsection @code{omp_get_num_procs} -- Number of processors online
1489 @item @emph{Description}:
1490 Returns the number of processors online on that device.
1493 @multitable @columnfractions .20 .80
1494 @item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
1497 @item @emph{Fortran}:
1498 @multitable @columnfractions .20 .80
1499 @item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
1502 @item @emph{Reference}:
1503 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
1508 @node omp_set_default_device
1509 @subsection @code{omp_set_default_device} -- Set the default device for target regions
1511 @item @emph{Description}:
1512 Set the default device for target regions without device clause. The argument
1513 shall be a nonnegative device number.
1516 @multitable @columnfractions .20 .80
1517 @item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
1520 @item @emph{Fortran}:
1521 @multitable @columnfractions .20 .80
1522 @item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
1523 @item @tab @code{integer device_num}
1526 @item @emph{See also}:
1527 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
1529 @item @emph{Reference}:
1530 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1535 @node omp_get_default_device
1536 @subsection @code{omp_get_default_device} -- Get the default device for target regions
1538 @item @emph{Description}:
1539 Get the default device for target regions without device clause.
1542 @multitable @columnfractions .20 .80
1543 @item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
1546 @item @emph{Fortran}:
1547 @multitable @columnfractions .20 .80
1548 @item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
1551 @item @emph{See also}:
1552 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
1554 @item @emph{Reference}:
1555 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
1560 @node omp_get_num_devices
1561 @subsection @code{omp_get_num_devices} -- Number of target devices
1563 @item @emph{Description}:
1564 Returns the number of target devices.
1567 @multitable @columnfractions .20 .80
1568 @item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
1571 @item @emph{Fortran}:
1572 @multitable @columnfractions .20 .80
1573 @item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
1576 @item @emph{Reference}:
1577 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
1582 @node omp_get_device_num
1583 @subsection @code{omp_get_device_num} -- Return device number of current device
1585 @item @emph{Description}:
1586 This function returns a device number that represents the device that the
1587 current thread is executing on. For OpenMP 5.0, this must be equal to the
1588 value returned by the @code{omp_get_initial_device} function when called
1592 @multitable @columnfractions .20 .80
1593 @item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
1596 @item @emph{Fortran}:
1597 @multitable @columnfractions .20 .80
1598 @item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
1601 @item @emph{See also}:
1602 @ref{omp_get_initial_device}
1604 @item @emph{Reference}:
1605 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
1610 @node omp_is_initial_device
1611 @subsection @code{omp_is_initial_device} -- Whether executing on the host device
1613 @item @emph{Description}:
1614 This function returns @code{true} if currently running on the host device,
1615 @code{false} otherwise. Here, @code{true} and @code{false} represent
1616 their language-specific counterparts.
1619 @multitable @columnfractions .20 .80
1620 @item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
1623 @item @emph{Fortran}:
1624 @multitable @columnfractions .20 .80
1625 @item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
1628 @item @emph{Reference}:
1629 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
1634 @node omp_get_initial_device
1635 @subsection @code{omp_get_initial_device} -- Return device number of initial device
1637 @item @emph{Description}:
1638 This function returns a device number that represents the host device.
1639 For OpenMP 5.1, this must be equal to the value returned by the
1640 @code{omp_get_num_devices} function.
1643 @multitable @columnfractions .20 .80
1644 @item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
1647 @item @emph{Fortran}:
1648 @multitable @columnfractions .20 .80
1649 @item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
1652 @item @emph{See also}:
1653 @ref{omp_get_num_devices}
1655 @item @emph{Reference}:
1656 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
1661 @node Device Memory Routines
1662 @section Device Memory Routines
1664 Routines related to memory allocation and managing corresponding
1665 pointers on devices. They have C linkage and do not throw exceptions.
1668 * omp_target_alloc:: Allocate device memory
1669 * omp_target_free:: Free device memory
1670 * omp_target_is_present:: Check whether storage is mapped
1671 @c * omp_target_is_accessible:: <fixme>
1672 @c * omp_target_memcpy:: <fixme>
1673 @c * omp_target_memcpy_rect:: <fixme>
1674 @c * omp_target_memcpy_async:: <fixme>
1675 @c * omp_target_memcpy_rect_async:: <fixme>
1676 @c * omp_target_memset:: <fixme>/TR12
1677 @c * omp_target_memset_async:: <fixme>/TR12
1678 * omp_target_associate_ptr:: Associate a device pointer with a host pointer
1679 * omp_target_disassociate_ptr:: Remove device--host pointer association
1680 * omp_get_mapped_ptr:: Return device pointer to a host pointer
1685 @node omp_target_alloc
1686 @subsection @code{omp_target_alloc} -- Allocate device memory
1688 @item @emph{Description}:
1689 This routine allocates @var{size} bytes of memory in the device environment
1690 associated with the device number @var{device_num}. If successful, a device
1691 pointer is returned, otherwise a null pointer.
1693 In GCC, when the device is the host or the device shares memory with the host,
1694 the memory is allocated on the host; in that case, when @var{size} is zero,
1695 either NULL or a unique pointer value that can later be successfully passed to
1696 @code{omp_target_free} is returned. When the allocation is not performed on
1697 the host, a null pointer is returned when @var{size} is zero; in that case,
1698 additionally a diagnostic might be printed to standard error (stderr).
1700 Running this routine in a @code{target} region except on the initial device
1704 @multitable @columnfractions .20 .80
1705 @item @emph{Prototype}: @tab @code{void *omp_target_alloc(size_t size, int device_num)}
1708 @item @emph{Fortran}:
1709 @multitable @columnfractions .20 .80
1710 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_target_alloc(size, device_num) bind(C)}
1711 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1712 @item @tab @code{integer(c_size_t), value :: size}
1713 @item @tab @code{integer(c_int), value :: device_num}
1716 @item @emph{See also}:
1717 @ref{omp_target_free}, @ref{omp_target_associate_ptr}
1719 @item @emph{Reference}:
1720 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.1
1725 @node omp_target_free
1726 @subsection @code{omp_target_free} -- Free device memory
1728 @item @emph{Description}:
1729 This routine frees memory allocated by the @code{omp_target_alloc} routine.
1730 The @var{device_ptr} argument must be either a null pointer or a device pointer
1731 returned by @code{omp_target_alloc} for the specified @code{device_num}. The
1732 device number @var{device_num} must be a conforming device number.
1734 Running this routine in a @code{target} region except on the initial device
1738 @multitable @columnfractions .20 .80
1739 @item @emph{Prototype}: @tab @code{void omp_target_free(void *device_ptr, int device_num)}
1742 @item @emph{Fortran}:
1743 @multitable @columnfractions .20 .80
1744 @item @emph{Interface}: @tab @code{subroutine omp_target_free(device_ptr, device_num) bind(C)}
1745 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1746 @item @tab @code{type(c_ptr), value :: device_ptr}
1747 @item @tab @code{integer(c_int), value :: device_num}
1750 @item @emph{See also}:
1751 @ref{omp_target_alloc}, @ref{omp_target_disassociate_ptr}
1753 @item @emph{Reference}:
1754 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.2
1759 @node omp_target_is_present
1760 @subsection @code{omp_target_is_present} -- Check whether storage is mapped
1762 @item @emph{Description}:
1763 This routine tests whether storage, identified by the host pointer @var{ptr}
1764 is mapped to the device specified by @var{device_num}. If so, it returns
1765 @emph{true} and otherwise @emph{false}.
1767 In GCC, this includes self mapping such that @code{omp_target_is_present}
1768 returns @emph{true} when @var{device_num} specifies the host or when the host
1769 and the device share memory. If @var{ptr} is a null pointer, @var{true} is
1770 returned and if @var{device_num} is an invalid device number, @var{false} is
1773 If those conditions do not apply, @emph{true} is returned if the association has
1774 been established by an explicit or implicit @code{map} clause, the
1775 @code{declare target} directive or a call to the @code{omp_target_associate_ptr}
1778 Running this routine in a @code{target} region except on the initial device
1782 @multitable @columnfractions .20 .80
1783 @item @emph{Prototype}: @tab @code{int omp_target_is_present(const void *ptr,}
1784 @item @tab @code{ int device_num)}
1787 @item @emph{Fortran}:
1788 @multitable @columnfractions .20 .80
1789 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_is_present(ptr, &}
1790 @item @tab @code{ device_num) bind(C)}
1791 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1792 @item @tab @code{type(c_ptr), value :: ptr}
1793 @item @tab @code{integer(c_int), value :: device_num}
1796 @item @emph{See also}:
1797 @ref{omp_target_associate_ptr}
1799 @item @emph{Reference}:
1800 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.3
1805 @node omp_target_associate_ptr
1806 @subsection @code{omp_target_associate_ptr} -- Associate a device pointer with a host pointer
1808 @item @emph{Description}:
1809 This routine associates storage on the host with storage on a device identified
1810 by @var{device_num}. The device pointer is usually obtained by calling
1811 @code{omp_target_alloc} or by other means (but not by using the @code{map}
1812 clauses or the @code{declare target} directive). The host pointer should point
1813 to memory that has a storage size of at least @var{size}.
1815 The @var{device_offset} parameter specifies the offset into @var{device_ptr}
1816 that is used as the base address for the device side of the mapping; the
1817 storage size should be at least @var{device_offset} plus @var{size}.
1819 After the association, the host pointer can be used in a @code{map} clause and
1820 in the @code{to} and @code{from} clauses of the @code{target update} directive
1821 to transfer data between the associated pointers. The reference count of such
1822 associated storage is infinite. The association can be removed by calling
1823 @code{omp_target_disassociate_ptr} which should be done before the lifetime
1824 of either either storage ends.
1826 The routine returns nonzero (@code{EINVAL}) when the @var{device_num} invalid,
1827 for when the initial device or the associated device shares memory with the
1828 host. @code{omp_target_associate_ptr} returns zero if @var{host_ptr} points
1829 into already associated storage that is fully inside of a previously associated
1830 memory. Otherwise, if the association was successful zero is returned; if none
1831 of the cases above apply, nonzero (@code{EINVAL}) is returned.
1833 The @code{omp_target_is_present} routine can be used to test whether
1834 associated storage for a device pointer exists.
1836 Running this routine in a @code{target} region except on the initial device
1840 @multitable @columnfractions .20 .80
1841 @item @emph{Prototype}: @tab @code{int omp_target_associate_ptr(const void *host_ptr,}
1842 @item @tab @code{ const void *device_ptr,}
1843 @item @tab @code{ size_t size,}
1844 @item @tab @code{ size_t device_offset,}
1845 @item @tab @code{ int device_num)}
1848 @item @emph{Fortran}:
1849 @multitable @columnfractions .20 .80
1850 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_associate_ptr(host_ptr, &}
1851 @item @tab @code{ device_ptr, size, device_offset, device_num) bind(C)}
1852 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1853 @item @tab @code{type(c_ptr), value :: host_ptr, device_ptr}
1854 @item @tab @code{integer(c_size_t), value :: size, device_offset}
1855 @item @tab @code{integer(c_int), value :: device_num}
1858 @item @emph{See also}:
1859 @ref{omp_target_disassociate_ptr}, @ref{omp_target_is_present},
1860 @ref{omp_target_alloc}
1862 @item @emph{Reference}:
1863 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.9
1868 @node omp_target_disassociate_ptr
1869 @subsection @code{omp_target_disassociate_ptr} -- Remove device--host pointer association
1871 @item @emph{Description}:
1872 This routine removes the storage association established by calling
1873 @code{omp_target_associate_ptr} and sets the reference count to zero,
1874 even if @code{omp_target_associate_ptr} was invoked multiple times for
1875 for host pointer @code{ptr}. If applicable, the device memory needs
1876 to be freed by the user.
1878 If an associated device storage location for the @var{device_num} was
1879 found and has infinite reference count, the association is removed and
1880 zero is returned. In all other cases, nonzero (@code{EINVAL}) is returned
1881 and no other action is taken.
1883 Note that passing a host pointer where the association to the device pointer
1884 was established with the @code{declare target} directive yields undefined
1887 Running this routine in a @code{target} region except on the initial device
1891 @multitable @columnfractions .20 .80
1892 @item @emph{Prototype}: @tab @code{int omp_target_disassociate_ptr(const void *ptr,}
1893 @item @tab @code{ int device_num)}
1896 @item @emph{Fortran}:
1897 @multitable @columnfractions .20 .80
1898 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_disassociate_ptr(ptr, &}
1899 @item @tab @code{ device_num) bind(C)}
1900 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1901 @item @tab @code{type(c_ptr), value :: ptr}
1902 @item @tab @code{integer(c_int), value :: device_num}
1905 @item @emph{See also}:
1906 @ref{omp_target_associate_ptr}
1908 @item @emph{Reference}:
1909 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.10
1914 @node omp_get_mapped_ptr
1915 @subsection @code{omp_get_mapped_ptr} -- Return device pointer to a host pointer
1917 @item @emph{Description}:
1918 If the device number is refers to the initial device or to a device with
1919 memory accessible from the host (shared memory), the @code{omp_get_mapped_ptr}
1920 routines returnes the value of the passed @var{ptr}. Otherwise, if associated
1921 storage to the passed host pointer @var{ptr} exists on device associated with
1922 @var{device_num}, it returns that pointer. In all other cases and in cases of
1923 an error, a null pointer is returned.
1925 The association of storage location is established either via an explicit or
1926 implicit @code{map} clause, the @code{declare target} directive or the
1927 @code{omp_target_associate_ptr} routine.
1929 Running this routine in a @code{target} region except on the initial device
1933 @multitable @columnfractions .20 .80
1934 @item @emph{Prototype}: @tab @code{void *omp_get_mapped_ptr(const void *ptr, int device_num);}
1937 @item @emph{Fortran}:
1938 @multitable @columnfractions .20 .80
1939 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_get_mapped_ptr(ptr, device_num) bind(C)}
1940 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1941 @item @tab @code{type(c_ptr), value :: ptr}
1942 @item @tab @code{integer(c_int), value :: device_num}
1945 @item @emph{See also}:
1946 @ref{omp_target_associate_ptr}
1948 @item @emph{Reference}:
1949 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 18.8.11
1955 @section Lock Routines
1957 Initialize, set, test, unset and destroy simple and nested locks.
1958 The routines have C linkage and do not throw exceptions.
1961 * omp_init_lock:: Initialize simple lock
1962 * omp_init_nest_lock:: Initialize nested lock
1963 @c * omp_init_lock_with_hint:: <fixme>
1964 @c * omp_init_nest_lock_with_hint:: <fixme>
1965 * omp_destroy_lock:: Destroy simple lock
1966 * omp_destroy_nest_lock:: Destroy nested lock
1967 * omp_set_lock:: Wait for and set simple lock
1968 * omp_set_nest_lock:: Wait for and set simple lock
1969 * omp_unset_lock:: Unset simple lock
1970 * omp_unset_nest_lock:: Unset nested lock
1971 * omp_test_lock:: Test and set simple lock if available
1972 * omp_test_nest_lock:: Test and set nested lock if available
1978 @subsection @code{omp_init_lock} -- Initialize simple lock
1980 @item @emph{Description}:
1981 Initialize a simple lock. After initialization, the lock is in
1985 @multitable @columnfractions .20 .80
1986 @item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
1989 @item @emph{Fortran}:
1990 @multitable @columnfractions .20 .80
1991 @item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
1992 @item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
1995 @item @emph{See also}:
1996 @ref{omp_destroy_lock}
1998 @item @emph{Reference}:
1999 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
2004 @node omp_init_nest_lock
2005 @subsection @code{omp_init_nest_lock} -- Initialize nested lock
2007 @item @emph{Description}:
2008 Initialize a nested lock. After initialization, the lock is in
2009 an unlocked state and the nesting count is set to zero.
2012 @multitable @columnfractions .20 .80
2013 @item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
2016 @item @emph{Fortran}:
2017 @multitable @columnfractions .20 .80
2018 @item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
2019 @item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
2022 @item @emph{See also}:
2023 @ref{omp_destroy_nest_lock}
2025 @item @emph{Reference}:
2026 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
2031 @node omp_destroy_lock
2032 @subsection @code{omp_destroy_lock} -- Destroy simple lock
2034 @item @emph{Description}:
2035 Destroy a simple lock. In order to be destroyed, a simple lock must be
2036 in the unlocked state.
2039 @multitable @columnfractions .20 .80
2040 @item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
2043 @item @emph{Fortran}:
2044 @multitable @columnfractions .20 .80
2045 @item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
2046 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2049 @item @emph{See also}:
2052 @item @emph{Reference}:
2053 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
2058 @node omp_destroy_nest_lock
2059 @subsection @code{omp_destroy_nest_lock} -- Destroy nested lock
2061 @item @emph{Description}:
2062 Destroy a nested lock. In order to be destroyed, a nested lock must be
2063 in the unlocked state and its nesting count must equal zero.
2066 @multitable @columnfractions .20 .80
2067 @item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *);}
2070 @item @emph{Fortran}:
2071 @multitable @columnfractions .20 .80
2072 @item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
2073 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2076 @item @emph{See also}:
2079 @item @emph{Reference}:
2080 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
2086 @subsection @code{omp_set_lock} -- Wait for and set simple lock
2088 @item @emph{Description}:
2089 Before setting a simple lock, the lock variable must be initialized by
2090 @code{omp_init_lock}. The calling thread is blocked until the lock
2091 is available. If the lock is already held by the current thread,
2095 @multitable @columnfractions .20 .80
2096 @item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
2099 @item @emph{Fortran}:
2100 @multitable @columnfractions .20 .80
2101 @item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
2102 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2105 @item @emph{See also}:
2106 @ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
2108 @item @emph{Reference}:
2109 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
2114 @node omp_set_nest_lock
2115 @subsection @code{omp_set_nest_lock} -- Wait for and set nested lock
2117 @item @emph{Description}:
2118 Before setting a nested lock, the lock variable must be initialized by
2119 @code{omp_init_nest_lock}. The calling thread is blocked until the lock
2120 is available. If the lock is already held by the current thread, the
2121 nesting count for the lock is incremented.
2124 @multitable @columnfractions .20 .80
2125 @item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
2128 @item @emph{Fortran}:
2129 @multitable @columnfractions .20 .80
2130 @item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
2131 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2134 @item @emph{See also}:
2135 @ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
2137 @item @emph{Reference}:
2138 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
2143 @node omp_unset_lock
2144 @subsection @code{omp_unset_lock} -- Unset simple lock
2146 @item @emph{Description}:
2147 A simple lock about to be unset must have been locked by @code{omp_set_lock}
2148 or @code{omp_test_lock} before. In addition, the lock must be held by the
2149 thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
2150 or more threads attempted to set the lock before, one of them is chosen to,
2151 again, set the lock to itself.
2154 @multitable @columnfractions .20 .80
2155 @item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
2158 @item @emph{Fortran}:
2159 @multitable @columnfractions .20 .80
2160 @item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
2161 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2164 @item @emph{See also}:
2165 @ref{omp_set_lock}, @ref{omp_test_lock}
2167 @item @emph{Reference}:
2168 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
2173 @node omp_unset_nest_lock
2174 @subsection @code{omp_unset_nest_lock} -- Unset nested lock
2176 @item @emph{Description}:
2177 A nested lock about to be unset must have been locked by @code{omp_set_nested_lock}
2178 or @code{omp_test_nested_lock} before. In addition, the lock must be held by the
2179 thread calling @code{omp_unset_nested_lock}. If the nesting count drops to zero, the
2180 lock becomes unlocked. If one ore more threads attempted to set the lock before,
2181 one of them is chosen to, again, set the lock to itself.
2184 @multitable @columnfractions .20 .80
2185 @item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
2188 @item @emph{Fortran}:
2189 @multitable @columnfractions .20 .80
2190 @item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
2191 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2194 @item @emph{See also}:
2195 @ref{omp_set_nest_lock}
2197 @item @emph{Reference}:
2198 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
2204 @subsection @code{omp_test_lock} -- Test and set simple lock if available
2206 @item @emph{Description}:
2207 Before setting a simple lock, the lock variable must be initialized by
2208 @code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
2209 does not block if the lock is not available. This function returns
2210 @code{true} upon success, @code{false} otherwise. Here, @code{true} and
2211 @code{false} represent their language-specific counterparts.
2214 @multitable @columnfractions .20 .80
2215 @item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
2218 @item @emph{Fortran}:
2219 @multitable @columnfractions .20 .80
2220 @item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
2221 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2224 @item @emph{See also}:
2225 @ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_set_lock}
2227 @item @emph{Reference}:
2228 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
2233 @node omp_test_nest_lock
2234 @subsection @code{omp_test_nest_lock} -- Test and set nested lock if available
2236 @item @emph{Description}:
2237 Before setting a nested lock, the lock variable must be initialized by
2238 @code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
2239 @code{omp_test_nest_lock} does not block if the lock is not available.
2240 If the lock is already held by the current thread, the new nesting count
2241 is returned. Otherwise, the return value equals zero.
2244 @multitable @columnfractions .20 .80
2245 @item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
2248 @item @emph{Fortran}:
2249 @multitable @columnfractions .20 .80
2250 @item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
2251 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2255 @item @emph{See also}:
2256 @ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_set_lock}
2258 @item @emph{Reference}:
2259 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
2264 @node Timing Routines
2265 @section Timing Routines
2267 Portable, thread-based, wall clock timer.
2268 The routines have C linkage and do not throw exceptions.
2271 * omp_get_wtick:: Get timer precision.
2272 * omp_get_wtime:: Elapsed wall clock time.
2278 @subsection @code{omp_get_wtick} -- Get timer precision
2280 @item @emph{Description}:
2281 Gets the timer precision, i.e., the number of seconds between two
2282 successive clock ticks.
2285 @multitable @columnfractions .20 .80
2286 @item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
2289 @item @emph{Fortran}:
2290 @multitable @columnfractions .20 .80
2291 @item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
2294 @item @emph{See also}:
2297 @item @emph{Reference}:
2298 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
2304 @subsection @code{omp_get_wtime} -- Elapsed wall clock time
2306 @item @emph{Description}:
2307 Elapsed wall clock time in seconds. The time is measured per thread, no
2308 guarantee can be made that two distinct threads measure the same time.
2309 Time is measured from some "time in the past", which is an arbitrary time
2310 guaranteed not to change during the execution of the program.
2313 @multitable @columnfractions .20 .80
2314 @item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
2317 @item @emph{Fortran}:
2318 @multitable @columnfractions .20 .80
2319 @item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
2322 @item @emph{See also}:
2325 @item @emph{Reference}:
2326 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
2332 @section Event Routine
2334 Support for event objects.
2335 The routine has C linkage and do not throw exceptions.
2338 * omp_fulfill_event:: Fulfill and destroy an OpenMP event.
2343 @node omp_fulfill_event
2344 @subsection @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
2346 @item @emph{Description}:
2347 Fulfill the event associated with the event handle argument. Currently, it
2348 is only used to fulfill events generated by detach clauses on task
2349 constructs - the effect of fulfilling the event is to allow the task to
2352 The result of calling @code{omp_fulfill_event} with an event handle other
2353 than that generated by a detach clause is undefined. Calling it with an
2354 event handle that has already been fulfilled is also undefined.
2357 @multitable @columnfractions .20 .80
2358 @item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
2361 @item @emph{Fortran}:
2362 @multitable @columnfractions .20 .80
2363 @item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
2364 @item @tab @code{integer (kind=omp_event_handle_kind) :: event}
2367 @item @emph{Reference}:
2368 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
2373 @c @node Interoperability Routines
2374 @c @section Interoperability Routines
2376 @c Routines to obtain properties from an @code{omp_interop_t} object.
2377 @c They have C linkage and do not throw exceptions.
2380 @c * omp_get_num_interop_properties:: <fixme>
2381 @c * omp_get_interop_int:: <fixme>
2382 @c * omp_get_interop_ptr:: <fixme>
2383 @c * omp_get_interop_str:: <fixme>
2384 @c * omp_get_interop_name:: <fixme>
2385 @c * omp_get_interop_type_desc:: <fixme>
2386 @c * omp_get_interop_rc_desc:: <fixme>
2389 @node Memory Management Routines
2390 @section Memory Management Routines
2392 Routines to manage and allocate memory on the current device.
2393 They have C linkage and do not throw exceptions.
2396 * omp_init_allocator:: Create an allocator
2397 * omp_destroy_allocator:: Destroy an allocator
2398 * omp_set_default_allocator:: Set the default allocator
2399 * omp_get_default_allocator:: Get the default allocator
2400 @c * omp_alloc:: <fixme>
2401 @c * omp_aligned_alloc:: <fixme>
2402 @c * omp_free:: <fixme>
2403 @c * omp_calloc:: <fixme>
2404 @c * omp_aligned_calloc:: <fixme>
2405 @c * omp_realloc:: <fixme>
2406 @c * omp_get_memspace_num_resources:: <fixme>/TR11
2407 @c * omp_get_submemspace:: <fixme>/TR11
2412 @node omp_init_allocator
2413 @subsection @code{omp_init_allocator} -- Create an allocator
2415 @item @emph{Description}:
2416 Create an allocator that uses the specified memory space and has the specified
2417 traits; if an allocator that fulfills the requirements cannot be created,
2418 @code{omp_null_allocator} is returned.
2420 The predefined memory spaces and available traits can be found at
2421 @ref{OMP_ALLOCATOR}, where the trait names have to be be prefixed by
2422 @code{omp_atk_} (e.g. @code{omp_atk_pinned}) and the named trait values by
2423 @code{omp_atv_} (e.g. @code{omp_atv_true}); additionally, @code{omp_atv_default}
2424 may be used as trait value to specify that the default value should be used.
2427 @multitable @columnfractions .20 .80
2428 @item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_init_allocator(}
2429 @item @tab @code{ omp_memspace_handle_t memspace,}
2430 @item @tab @code{ int ntraits,}
2431 @item @tab @code{ const omp_alloctrait_t traits[]);}
2434 @item @emph{Fortran}:
2435 @multitable @columnfractions .20 .80
2436 @item @emph{Interface}: @tab @code{function omp_init_allocator(memspace, ntraits, traits)}
2437 @item @tab @code{integer (kind=omp_allocator_handle_kind) :: omp_init_allocator}
2438 @item @tab @code{integer (kind=omp_memspace_handle_kind), intent(in) :: memspace}
2439 @item @tab @code{integer, intent(in) :: ntraits}
2440 @item @tab @code{type (omp_alloctrait), intent(in) :: traits(*)}
2443 @item @emph{See also}:
2444 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_destroy_allocator}
2446 @item @emph{Reference}:
2447 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.2
2452 @node omp_destroy_allocator
2453 @subsection @code{omp_destroy_allocator} -- Destroy an allocator
2455 @item @emph{Description}:
2456 Releases all resources used by a memory allocator, which must not represent
2457 a predefined memory allocator. Accessing memory after its allocator has been
2458 destroyed has unspecified behavior. Passing @code{omp_null_allocator} to the
2459 routine is permitted but will have no effect.
2463 @multitable @columnfractions .20 .80
2464 @item @emph{Prototype}: @tab @code{void omp_destroy_allocator (omp_allocator_handle_t allocator);}
2467 @item @emph{Fortran}:
2468 @multitable @columnfractions .20 .80
2469 @item @emph{Interface}: @tab @code{subroutine omp_destroy_allocator(allocator)}
2470 @item @tab @code{integer (kind=omp_allocator_handle_kind), intent(in) :: allocator}
2473 @item @emph{See also}:
2474 @ref{omp_init_allocator}
2476 @item @emph{Reference}:
2477 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.3
2482 @node omp_set_default_allocator
2483 @subsection @code{omp_set_default_allocator} -- Set the default allocator
2485 @item @emph{Description}:
2486 Sets the default allocator that is used when no allocator has been specified
2487 in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
2488 routine is invoked with the @code{omp_null_allocator} allocator.
2491 @multitable @columnfractions .20 .80
2492 @item @emph{Prototype}: @tab @code{void omp_set_default_allocator(omp_allocator_handle_t allocator);}
2495 @item @emph{Fortran}:
2496 @multitable @columnfractions .20 .80
2497 @item @emph{Interface}: @tab @code{subroutine omp_set_default_allocator(allocator)}
2498 @item @tab @code{integer (kind=omp_allocator_handle_kind), intent(in) :: allocator}
2501 @item @emph{See also}:
2502 @ref{omp_get_default_allocator}, @ref{omp_init_allocator}, @ref{OMP_ALLOCATOR},
2503 @ref{Memory allocation}
2505 @item @emph{Reference}:
2506 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.4
2511 @node omp_get_default_allocator
2512 @subsection @code{omp_get_default_allocator} -- Get the default allocator
2514 @item @emph{Description}:
2515 The routine returns the default allocator that is used when no allocator has
2516 been specified in the @code{allocate} or @code{allocator} clause or if an
2517 OpenMP memory routine is invoked with the @code{omp_null_allocator} allocator.
2520 @multitable @columnfractions .20 .80
2521 @item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_get_default_allocator();}
2524 @item @emph{Fortran}:
2525 @multitable @columnfractions .20 .80
2526 @item @emph{Interface}: @tab @code{function omp_get_default_allocator()}
2527 @item @tab @code{integer (kind=omp_allocator_handle_kind) :: omp_get_default_allocator}
2530 @item @emph{See also}:
2531 @ref{omp_set_default_allocator}, @ref{OMP_ALLOCATOR}
2533 @item @emph{Reference}:
2534 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.5
2539 @c @node Tool Control Routine
2543 @c @node Environment Display Routine
2544 @c @section Environment Display Routine
2546 @c Routine to display the OpenMP number and the initial value of ICVs.
2547 @c It has C linkage and do not throw exceptions.
2550 @c * omp_display_env:: <fixme>
2553 @c ---------------------------------------------------------------------
2554 @c OpenMP Environment Variables
2555 @c ---------------------------------------------------------------------
2557 @node Environment Variables
2558 @chapter OpenMP Environment Variables
2560 The environment variables which beginning with @env{OMP_} are defined by
2561 section 4 of the OpenMP specification in version 4.5 or in a later version
2562 of the specification, while those beginning with @env{GOMP_} are GNU extensions.
2563 Most @env{OMP_} environment variables have an associated internal control
2566 For any OpenMP environment variable that sets an ICV and is neither
2567 @code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
2568 device-specific environment variables exist. For them, the environment
2569 variable without suffix affects the host. The suffix @code{_DEV_} followed
2570 by a non-negative device number less that the number of available devices sets
2571 the ICV for the corresponding device. The suffix @code{_DEV} sets the ICV
2572 of all non-host devices for which a device-specific corresponding environment
2573 variable has not been set while the @code{_ALL} suffix sets the ICV of all
2574 host and non-host devices for which a more specific corresponding environment
2575 variable is not set.
2578 * OMP_ALLOCATOR:: Set the default allocator
2579 * OMP_AFFINITY_FORMAT:: Set the format string used for affinity display
2580 * OMP_CANCELLATION:: Set whether cancellation is activated
2581 * OMP_DISPLAY_AFFINITY:: Display thread affinity information
2582 * OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
2583 * OMP_DEFAULT_DEVICE:: Set the device used in target regions
2584 * OMP_DYNAMIC:: Dynamic adjustment of threads
2585 * OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
2586 * OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
2587 * OMP_NESTED:: Nested parallel regions
2588 * OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
2589 * OMP_NUM_THREADS:: Specifies the number of threads to use
2590 * OMP_PROC_BIND:: Whether threads may be moved between CPUs
2591 * OMP_PLACES:: Specifies on which CPUs the threads should be placed
2592 * OMP_STACKSIZE:: Set default thread stack size
2593 * OMP_SCHEDULE:: How threads are scheduled
2594 * OMP_TARGET_OFFLOAD:: Controls offloading behaviour
2595 * OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
2596 * OMP_THREAD_LIMIT:: Set the maximum number of threads
2597 * OMP_WAIT_POLICY:: How waiting threads are handled
2598 * GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
2599 * GOMP_DEBUG:: Enable debugging output
2600 * GOMP_STACKSIZE:: Set default thread stack size
2601 * GOMP_SPINCOUNT:: Set the busy-wait spin count
2602 * GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
2607 @section @env{OMP_ALLOCATOR} -- Set the default allocator
2608 @cindex Environment Variable
2610 @item @emph{ICV:} @var{def-allocator-var}
2611 @item @emph{Scope:} data environment
2612 @item @emph{Description}:
2613 Sets the default allocator that is used when no allocator has been specified
2614 in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
2615 routine is invoked with the @code{omp_null_allocator} allocator.
2616 If unset, @code{omp_default_mem_alloc} is used.
2618 The value can either be a predefined allocator or a predefined memory space
2619 or a predefined memory space followed by a colon and a comma-separated list
2620 of memory trait and value pairs, separated by @code{=}.
2622 Note: The corresponding device environment variables are currently not
2623 supported. Therefore, the non-host @var{def-allocator-var} ICVs are always
2624 initialized to @code{omp_default_mem_alloc}. However, on all devices,
2625 the @code{omp_set_default_allocator} API routine can be used to change
2628 @multitable @columnfractions .45 .45
2629 @headitem Predefined allocators @tab Associated predefined memory spaces
2630 @item omp_default_mem_alloc @tab omp_default_mem_space
2631 @item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
2632 @item omp_const_mem_alloc @tab omp_const_mem_space
2633 @item omp_high_bw_mem_alloc @tab omp_high_bw_mem_space
2634 @item omp_low_lat_mem_alloc @tab omp_low_lat_mem_space
2635 @item omp_cgroup_mem_alloc @tab --
2636 @item omp_pteam_mem_alloc @tab --
2637 @item omp_thread_mem_alloc @tab --
2640 The predefined allocators use the default values for the traits,
2641 as listed below. Except that the last three allocators have the
2642 @code{access} trait set to @code{cgroup}, @code{pteam}, and
2643 @code{thread}, respectively.
2645 @multitable @columnfractions .25 .40 .25
2646 @headitem Trait @tab Allowed values @tab Default value
2647 @item @code{sync_hint} @tab @code{contended}, @code{uncontended},
2648 @code{serialized}, @code{private}
2649 @tab @code{contended}
2650 @item @code{alignment} @tab Positive integer being a power of two
2652 @item @code{access} @tab @code{all}, @code{cgroup},
2653 @code{pteam}, @code{thread}
2655 @item @code{pool_size} @tab Positive integer
2656 @tab See @ref{Memory allocation}
2657 @item @code{fallback} @tab @code{default_mem_fb}, @code{null_fb},
2658 @code{abort_fb}, @code{allocator_fb}
2660 @item @code{fb_data} @tab @emph{unsupported as it needs an allocator handle}
2662 @item @code{pinned} @tab @code{true}, @code{false}
2664 @item @code{partition} @tab @code{environment}, @code{nearest},
2665 @code{blocked}, @code{interleaved}
2666 @tab @code{environment}
2669 For the @code{fallback} trait, the default value is @code{null_fb} for the
2670 @code{omp_default_mem_alloc} allocator and any allocator that is associated
2671 with device memory; for all other other allocators, it is @code{default_mem_fb}
2676 OMP_ALLOCATOR=omp_high_bw_mem_alloc
2677 OMP_ALLOCATOR=omp_large_cap_mem_space
2678 OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
2681 @item @emph{See also}:
2682 @ref{Memory allocation}, @ref{omp_get_default_allocator},
2683 @ref{omp_set_default_allocator}
2685 @item @emph{Reference}:
2686 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
2691 @node OMP_AFFINITY_FORMAT
2692 @section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
2693 @cindex Environment Variable
2695 @item @emph{ICV:} @var{affinity-format-var}
2696 @item @emph{Scope:} device
2697 @item @emph{Description}:
2698 Sets the format string used when displaying OpenMP thread affinity information.
2699 Special values are output using @code{%} followed by an optional size
2700 specification and then either the single-character field type or its long
2701 name enclosed in curly braces; using @code{%%} will display a literal percent.
2702 The size specification consists of an optional @code{0.} or @code{.} followed
2703 by a positive integer, specifying the minimal width of the output. With
2704 @code{0.} and numerical values, the output is padded with zeros on the left;
2705 with @code{.}, the output is padded by spaces on the left; otherwise, the
2706 output is padded by spaces on the right. If unset, the value is
2707 ``@code{level %L thread %i affinity %A}''.
2709 Supported field types are:
2711 @multitable @columnfractions .10 .25 .60
2712 @item t @tab team_num @tab value returned by @code{omp_get_team_num}
2713 @item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
2714 @item L @tab nesting_level @tab value returned by @code{omp_get_level}
2715 @item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
2716 @item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
2717 @item a @tab ancestor_tnum
2718 @tab value returned by
2719 @code{omp_get_ancestor_thread_num(omp_get_level()-1)}
2720 @item H @tab host @tab name of the host that executes the thread
2721 @item P @tab process_id @tab process identifier
2722 @item i @tab native_thread_id @tab native thread identifier
2723 @item A @tab thread_affinity
2724 @tab comma separated list of integer values or ranges, representing the
2725 processors on which a process might execute, subject to affinity
2729 For instance, after setting
2732 OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
2735 with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
2736 @code{omp_display_affinity} with @code{NULL} or an empty string, the program
2737 might display the following:
2740 00!0! 1!4; 0;01;0;1;0-11
2741 00!3! 1!4; 0;01;0;1;0-11
2742 00!2! 1!4; 0;01;0;1;0-11
2743 00!1! 1!4; 0;01;0;1;0-11
2746 @item @emph{See also}:
2747 @ref{OMP_DISPLAY_AFFINITY}
2749 @item @emph{Reference}:
2750 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
2755 @node OMP_CANCELLATION
2756 @section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
2757 @cindex Environment Variable
2759 @item @emph{ICV:} @var{cancel-var}
2760 @item @emph{Scope:} global
2761 @item @emph{Description}:
2762 If set to @code{TRUE}, the cancellation is activated. If set to @code{FALSE} or
2763 if unset, cancellation is disabled and the @code{cancel} construct is ignored.
2765 @item @emph{See also}:
2766 @ref{omp_get_cancellation}
2768 @item @emph{Reference}:
2769 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
2774 @node OMP_DISPLAY_AFFINITY
2775 @section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
2776 @cindex Environment Variable
2778 @item @emph{ICV:} @var{display-affinity-var}
2779 @item @emph{Scope:} global
2780 @item @emph{Description}:
2781 If set to @code{FALSE} or if unset, affinity displaying is disabled.
2782 If set to @code{TRUE}, the runtime will display affinity information about
2783 OpenMP threads in a parallel region upon entering the region and every time
2786 @item @emph{See also}:
2787 @ref{OMP_AFFINITY_FORMAT}
2789 @item @emph{Reference}:
2790 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
2796 @node OMP_DISPLAY_ENV
2797 @section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
2798 @cindex Environment Variable
2800 @item @emph{ICV:} none
2801 @item @emph{Scope:} not applicable
2802 @item @emph{Description}:
2803 If set to @code{TRUE}, the OpenMP version number and the values
2804 associated with the OpenMP environment variables are printed to @code{stderr}.
2805 If set to @code{VERBOSE}, it additionally shows the value of the environment
2806 variables which are GNU extensions. If undefined or set to @code{FALSE},
2807 this information will not be shown.
2810 @item @emph{Reference}:
2811 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
2816 @node OMP_DEFAULT_DEVICE
2817 @section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
2818 @cindex Environment Variable
2820 @item @emph{ICV:} @var{default-device-var}
2821 @item @emph{Scope:} data environment
2822 @item @emph{Description}:
2823 Set to choose the device which is used in a @code{target} region, unless the
2824 value is overridden by @code{omp_set_default_device} or by a @code{device}
2825 clause. The value shall be the nonnegative device number. If no device with
2826 the given device number exists, the code is executed on the host. If unset,
2827 @env{OMP_TARGET_OFFLOAD} is @code{mandatory} and no non-host devices are
2828 available, it is set to @code{omp_invalid_device}. Otherwise, if unset,
2829 device number 0 will be used.
2832 @item @emph{See also}:
2833 @ref{omp_get_default_device}, @ref{omp_set_default_device},
2835 @item @emph{Reference}:
2836 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
2842 @section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
2843 @cindex Environment Variable
2845 @item @emph{ICV:} @var{dyn-var}
2846 @item @emph{Scope:} global
2847 @item @emph{Description}:
2848 Enable or disable the dynamic adjustment of the number of threads
2849 within a team. The value of this environment variable shall be
2850 @code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
2851 disabled by default.
2853 @item @emph{See also}:
2854 @ref{omp_set_dynamic}
2856 @item @emph{Reference}:
2857 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
2862 @node OMP_MAX_ACTIVE_LEVELS
2863 @section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
2864 @cindex Environment Variable
2866 @item @emph{ICV:} @var{max-active-levels-var}
2867 @item @emph{Scope:} data environment
2868 @item @emph{Description}:
2869 Specifies the initial value for the maximum number of nested parallel
2870 regions. The value of this variable shall be a positive integer.
2871 If undefined, then if @env{OMP_NESTED} is defined and set to true, or
2872 if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
2873 a list with more than one item, the maximum number of nested parallel
2874 regions will be initialized to the largest number supported, otherwise
2875 it will be set to one.
2877 @item @emph{See also}:
2878 @ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
2879 @ref{OMP_NUM_THREADS}
2882 @item @emph{Reference}:
2883 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
2888 @node OMP_MAX_TASK_PRIORITY
2889 @section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority
2890 number that can be set for a task.
2891 @cindex Environment Variable
2893 @item @emph{ICV:} @var{max-task-priority-var}
2894 @item @emph{Scope:} global
2895 @item @emph{Description}:
2896 Specifies the initial value for the maximum priority value that can be
2897 set for a task. The value of this variable shall be a non-negative
2898 integer, and zero is allowed. If undefined, the default priority is
2901 @item @emph{See also}:
2902 @ref{omp_get_max_task_priority}
2904 @item @emph{Reference}:
2905 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
2911 @section @env{OMP_NESTED} -- Nested parallel regions
2912 @cindex Environment Variable
2913 @cindex Implementation specific setting
2915 @item @emph{ICV:} @var{max-active-levels-var}
2916 @item @emph{Scope:} data environment
2917 @item @emph{Description}:
2918 Enable or disable nested parallel regions, i.e., whether team members
2919 are allowed to create new teams. The value of this environment variable
2920 shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the number
2921 of maximum active nested regions supported will by default be set to the
2922 maximum supported, otherwise it will be set to one. If
2923 @env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
2924 setting. If both are undefined, nested parallel regions are enabled if
2925 @env{OMP_NUM_THREADS} or @env{OMP_PROC_BINDS} are defined to a list with
2926 more than one item, otherwise they are disabled by default.
2928 Note that the @code{OMP_NESTED} environment variable was deprecated in
2929 the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.
2931 @item @emph{See also}:
2932 @ref{omp_set_max_active_levels}, @ref{omp_set_nested},
2933 @ref{OMP_MAX_ACTIVE_LEVELS}
2935 @item @emph{Reference}:
2936 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
2942 @section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
2943 @cindex Environment Variable
2945 @item @emph{ICV:} @var{nteams-var}
2946 @item @emph{Scope:} device
2947 @item @emph{Description}:
2948 Specifies the upper bound for number of teams to use in teams regions
2949 without explicit @code{num_teams} clause. The value of this variable shall
2950 be a positive integer. If undefined it defaults to 0 which means
2951 implementation defined upper bound.
2953 @item @emph{See also}:
2954 @ref{omp_set_num_teams}
2956 @item @emph{Reference}:
2957 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
2962 @node OMP_NUM_THREADS
2963 @section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
2964 @cindex Environment Variable
2965 @cindex Implementation specific setting
2967 @item @emph{ICV:} @var{nthreads-var}
2968 @item @emph{Scope:} data environment
2969 @item @emph{Description}:
2970 Specifies the default number of threads to use in parallel regions. The
2971 value of this variable shall be a comma-separated list of positive integers;
2972 the value specifies the number of threads to use for the corresponding nested
2973 level. Specifying more than one item in the list will automatically enable
2974 nesting by default. If undefined one thread per CPU is used.
2976 When a list with more than value is specified, it also affects the
2977 @var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.
2979 @item @emph{See also}:
2980 @ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}
2982 @item @emph{Reference}:
2983 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
2989 @section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
2990 @cindex Environment Variable
2992 @item @emph{ICV:} @var{bind-var}
2993 @item @emph{Scope:} data environment
2994 @item @emph{Description}:
2995 Specifies whether threads may be moved between processors. If set to
2996 @code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
2997 they may be moved. Alternatively, a comma separated list with the
2998 values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
2999 be used to specify the thread affinity policy for the corresponding nesting
3000 level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
3001 same place partition as the primary thread. With @code{CLOSE} those are
3002 kept close to the primary thread in contiguous place partitions. And
3003 with @code{SPREAD} a sparse distribution
3004 across the place partitions is used. Specifying more than one item in the
3005 list will automatically enable nesting by default.
3007 When a list is specified, it also affects the @var{max-active-levels-var} ICV
3008 as described in @ref{OMP_MAX_ACTIVE_LEVELS}.
3010 When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
3011 @env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
3013 @item @emph{See also}:
3014 @ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
3015 @ref{OMP_MAX_ACTIVE_LEVELS}
3017 @item @emph{Reference}:
3018 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
3024 @section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
3025 @cindex Environment Variable
3027 @item @emph{ICV:} @var{place-partition-var}
3028 @item @emph{Scope:} implicit tasks
3029 @item @emph{Description}:
3030 The thread placement can be either specified using an abstract name or by an
3031 explicit list of the places. The abstract names @code{threads}, @code{cores},
3032 @code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
3033 followed by a positive number in parentheses, which denotes the how many places
3034 shall be created. With @code{threads} each place corresponds to a single
3035 hardware thread; @code{cores} to a single core with the corresponding number of
3036 hardware threads; with @code{sockets} the place corresponds to a single
3037 socket; with @code{ll_caches} to a set of cores that shares the last level
3038 cache on the device; and @code{numa_domains} to a set of cores for which their
3039 closest memory on the device is the same memory and at a similar distance from
3040 the cores. The resulting placement can be shown by setting the
3041 @env{OMP_DISPLAY_ENV} environment variable.
3043 Alternatively, the placement can be specified explicitly as comma-separated
3044 list of places. A place is specified by set of nonnegative numbers in curly
3045 braces, denoting the hardware threads. The curly braces can be omitted
3046 when only a single number has been specified. The hardware threads
3047 belonging to a place can either be specified as comma-separated list of
3048 nonnegative thread numbers or using an interval. Multiple places can also be
3049 either specified by a comma-separated list of places or by an interval. To
3050 specify an interval, a colon followed by the count is placed after
3051 the hardware thread number or the place. Optionally, the length can be
3052 followed by a colon and the stride number -- otherwise a unit stride is
3053 assumed. Placing an exclamation mark (@code{!}) directly before a curly
3054 brace or numbers inside the curly braces (excluding intervals) will
3055 exclude those hardware threads.
3057 For instance, the following specifies the same places list:
3058 @code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
3059 @code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
3061 If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
3062 @env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
3063 between CPUs following no placement policy.
3065 @item @emph{See also}:
3066 @ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
3067 @ref{OMP_DISPLAY_ENV}
3069 @item @emph{Reference}:
3070 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
3076 @section @env{OMP_STACKSIZE} -- Set default thread stack size
3077 @cindex Environment Variable
3079 @item @emph{ICV:} @var{stacksize-var}
3080 @item @emph{Scope:} device
3081 @item @emph{Description}:
3082 Set the default thread stack size in kilobytes, unless the number
3083 is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
3084 case the size is, respectively, in bytes, kilobytes, megabytes
3085 or gigabytes. This is different from @code{pthread_attr_setstacksize}
3086 which gets the number of bytes as an argument. If the stack size cannot
3087 be set due to system constraints, an error is reported and the initial
3088 stack size is left unchanged. If undefined, the stack size is system
3091 @item @emph{See also}:
3092 @ref{GOMP_STACKSIZE}
3094 @item @emph{Reference}:
3095 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
3101 @section @env{OMP_SCHEDULE} -- How threads are scheduled
3102 @cindex Environment Variable
3103 @cindex Implementation specific setting
3105 @item @emph{ICV:} @var{run-sched-var}
3106 @item @emph{Scope:} data environment
3107 @item @emph{Description}:
3108 Allows to specify @code{schedule type} and @code{chunk size}.
3109 The value of the variable shall have the form: @code{type[,chunk]} where
3110 @code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}
3111 The optional @code{chunk} size shall be a positive integer. If undefined,
3112 dynamic scheduling and a chunk size of 1 is used.
3114 @item @emph{See also}:
3115 @ref{omp_set_schedule}
3117 @item @emph{Reference}:
3118 @uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
3123 @node OMP_TARGET_OFFLOAD
3124 @section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
3125 @cindex Environment Variable
3126 @cindex Implementation specific setting
3128 @item @emph{ICV:} @var{target-offload-var}
3129 @item @emph{Scope:} global
3130 @item @emph{Description}:
3131 Specifies the behaviour with regard to offloading code to a device. This
3132 variable can be set to one of three values - @code{MANDATORY}, @code{DISABLED}
3135 If set to @code{MANDATORY}, the program will terminate with an error if
3136 the offload device is not present or is not supported. If set to
3137 @code{DISABLED}, then offloading is disabled and all code will run on the
3138 host. If set to @code{DEFAULT}, the program will try offloading to the
3139 device first, then fall back to running code on the host if it cannot.
3141 If undefined, then the program will behave as if @code{DEFAULT} was set.
3143 @item @emph{Reference}:
3144 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
3149 @node OMP_TEAMS_THREAD_LIMIT
3150 @section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
3151 @cindex Environment Variable
3153 @item @emph{ICV:} @var{teams-thread-limit-var}
3154 @item @emph{Scope:} device
3155 @item @emph{Description}:
3156 Specifies an upper bound for the number of threads to use by each contention
3157 group created by a teams construct without explicit @code{thread_limit}
3158 clause. The value of this variable shall be a positive integer. If undefined,
3159 the value of 0 is used which stands for an implementation defined upper
3162 @item @emph{See also}:
3163 @ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}
3165 @item @emph{Reference}:
3166 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
3171 @node OMP_THREAD_LIMIT
3172 @section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
3173 @cindex Environment Variable
3175 @item @emph{ICV:} @var{thread-limit-var}
3176 @item @emph{Scope:} data environment
3177 @item @emph{Description}:
3178 Specifies the number of threads to use for the whole program. The
3179 value of this variable shall be a positive integer. If undefined,
3180 the number of threads is not limited.
3182 @item @emph{See also}:
3183 @ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
3185 @item @emph{Reference}:
3186 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
3191 @node OMP_WAIT_POLICY
3192 @section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
3193 @cindex Environment Variable
3195 @item @emph{Description}:
3196 Specifies whether waiting threads should be active or passive. If
3197 the value is @code{PASSIVE}, waiting threads should not consume CPU
3198 power while waiting; while the value is @code{ACTIVE} specifies that
3199 they should. If undefined, threads wait actively for a short time
3200 before waiting passively.
3202 @item @emph{See also}:
3203 @ref{GOMP_SPINCOUNT}
3205 @item @emph{Reference}:
3206 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
3211 @node GOMP_CPU_AFFINITY
3212 @section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
3213 @cindex Environment Variable
3215 @item @emph{Description}:
3216 Binds threads to specific CPUs. The variable should contain a space-separated
3217 or comma-separated list of CPUs. This list may contain different kinds of
3218 entries: either single CPU numbers in any order, a range of CPUs (M-N)
3219 or a range with some stride (M-N:S). CPU numbers are zero based. For example,
3220 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
3221 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
3222 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
3223 and 14 respectively and then start assigning back from the beginning of
3224 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
3226 There is no libgomp library routine to determine whether a CPU affinity
3227 specification is in effect. As a workaround, language-specific library
3228 functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
3229 Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
3230 environment variable. A defined CPU affinity on startup cannot be changed
3231 or disabled during the runtime of the application.
3233 If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
3234 @env{OMP_PROC_BIND} has a higher precedence. If neither has been set and
3235 @env{OMP_PROC_BIND} is unset, or when @env{OMP_PROC_BIND} is set to
3236 @code{FALSE}, the host system will handle the assignment of threads to CPUs.
3238 @item @emph{See also}:
3239 @ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
3245 @section @env{GOMP_DEBUG} -- Enable debugging output
3246 @cindex Environment Variable
3248 @item @emph{Description}:
3249 Enable debugging output. The variable should be set to @code{0}
3250 (disabled, also the default if not set), or @code{1} (enabled).
3252 If enabled, some debugging output will be printed during execution.
3253 This is currently not specified in more detail, and subject to change.
3258 @node GOMP_STACKSIZE
3259 @section @env{GOMP_STACKSIZE} -- Set default thread stack size
3260 @cindex Environment Variable
3261 @cindex Implementation specific setting
3263 @item @emph{Description}:
3264 Set the default thread stack size in kilobytes. This is different from
3265 @code{pthread_attr_setstacksize} which gets the number of bytes as an
3266 argument. If the stack size cannot be set due to system constraints, an
3267 error is reported and the initial stack size is left unchanged. If undefined,
3268 the stack size is system dependent.
3270 @item @emph{See also}:
3273 @item @emph{Reference}:
3274 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
3275 GCC Patches Mailinglist},
3276 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
3277 GCC Patches Mailinglist}
3282 @node GOMP_SPINCOUNT
3283 @section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
3284 @cindex Environment Variable
3285 @cindex Implementation specific setting
3287 @item @emph{Description}:
3288 Determines how long a threads waits actively with consuming CPU power
3289 before waiting passively without consuming CPU power. The value may be
3290 either @code{INFINITE}, @code{INFINITY} to always wait actively or an
3291 integer which gives the number of spins of the busy-wait loop. The
3292 integer may optionally be followed by the following suffixes acting
3293 as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
3294 million), @code{G} (giga, billion), or @code{T} (tera, trillion).
3295 If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
3296 300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
3297 30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
3298 If there are more OpenMP threads than available CPUs, 1000 and 100
3299 spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
3300 undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
3301 or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
3303 @item @emph{See also}:
3304 @ref{OMP_WAIT_POLICY}
3309 @node GOMP_RTEMS_THREAD_POOLS
3310 @section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
3311 @cindex Environment Variable
3312 @cindex Implementation specific setting
3314 @item @emph{Description}:
3315 This environment variable is only used on the RTEMS real-time operating system.
3316 It determines the scheduler instance specific thread pools. The format for
3317 @env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
3318 @code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
3319 separated by @code{:} where:
3321 @item @code{<thread-pool-count>} is the thread pool count for this scheduler
3323 @item @code{$<priority>} is an optional priority for the worker threads of a
3324 thread pool according to @code{pthread_setschedparam}. In case a priority
3325 value is omitted, then a worker thread will inherit the priority of the OpenMP
3326 primary thread that created it. The priority of the worker thread is not
3327 changed after creation, even if a new OpenMP primary thread using the worker has
3328 a different priority.
3329 @item @code{@@<scheduler-name>} is the scheduler instance name according to the
3330 RTEMS application configuration.
3332 In case no thread pool configuration is specified for a scheduler instance,
3333 then each OpenMP primary thread of this scheduler instance will use its own
3334 dynamically allocated thread pool. To limit the worker thread count of the
3335 thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
3336 @item @emph{Example}:
3337 Lets suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
3338 @code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
3339 @code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
3340 scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
3341 one thread pool available. Since no priority is specified for this scheduler
3342 instance, the worker thread inherits the priority of the OpenMP primary thread
3343 that created it. In the scheduler instance @code{WRK1} there are three thread
3344 pools available and their worker threads run at priority four.
3349 @c ---------------------------------------------------------------------
3351 @c ---------------------------------------------------------------------
3353 @node Enabling OpenACC
3354 @chapter Enabling OpenACC
3356 To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
3357 flag @option{-fopenacc} must be specified. This enables the OpenACC directive
3358 @code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
3359 @code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
3360 @code{!$} conditional compilation sentinels in free form and @code{c$},
3361 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
3362 arranges for automatic linking of the OpenACC runtime library
3363 (@ref{OpenACC Runtime Library Routines}).
3365 See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
3367 A complete description of all OpenACC directives accepted may be found in
3368 the @uref{https://www.openacc.org, OpenACC} Application Programming
3369 Interface manual, version 2.6.
3373 @c ---------------------------------------------------------------------
3374 @c OpenACC Runtime Library Routines
3375 @c ---------------------------------------------------------------------
3377 @node OpenACC Runtime Library Routines
3378 @chapter OpenACC Runtime Library Routines
3380 The runtime routines described here are defined by section 3 of the OpenACC
3381 specifications in version 2.6.
3382 They have C linkage, and do not throw exceptions.
3383 Generally, they are available only for the host, with the exception of
3384 @code{acc_on_device}, which is available for both the host and the
3385 acceleration device.
3388 * acc_get_num_devices:: Get number of devices for the given device
3390 * acc_set_device_type:: Set type of device accelerator to use.
3391 * acc_get_device_type:: Get type of device accelerator to be used.
3392 * acc_set_device_num:: Set device number to use.
3393 * acc_get_device_num:: Get device number to be used.
3394 * acc_get_property:: Get device property.
3395 * acc_async_test:: Tests for completion of a specific asynchronous
3397 * acc_async_test_all:: Tests for completion of all asynchronous
3399 * acc_wait:: Wait for completion of a specific asynchronous
3401 * acc_wait_all:: Waits for completion of all asynchronous
3403 * acc_wait_all_async:: Wait for completion of all asynchronous
3405 * acc_wait_async:: Wait for completion of asynchronous operations.
3406 * acc_init:: Initialize runtime for a specific device type.
3407 * acc_shutdown:: Shuts down the runtime for a specific device
3409 * acc_on_device:: Whether executing on a particular device
3410 * acc_malloc:: Allocate device memory.
3411 * acc_free:: Free device memory.
3412 * acc_copyin:: Allocate device memory and copy host memory to
3414 * acc_present_or_copyin:: If the data is not present on the device,
3415 allocate device memory and copy from host
3417 * acc_create:: Allocate device memory and map it to host
3419 * acc_present_or_create:: If the data is not present on the device,
3420 allocate device memory and map it to host
3422 * acc_copyout:: Copy device memory to host memory.
3423 * acc_delete:: Free device memory.
3424 * acc_update_device:: Update device memory from mapped host memory.
3425 * acc_update_self:: Update host memory from mapped device memory.
3426 * acc_map_data:: Map previously allocated device memory to host
3428 * acc_unmap_data:: Unmap device memory from host memory.
3429 * acc_deviceptr:: Get device pointer associated with specific
3431 * acc_hostptr:: Get host pointer associated with specific
3433 * acc_is_present:: Indicate whether host variable / array is
3435 * acc_memcpy_to_device:: Copy host memory to device memory.
3436 * acc_memcpy_from_device:: Copy device memory to host memory.
3437 * acc_attach:: Let device pointer point to device-pointer target.
3438 * acc_detach:: Let device pointer point to host-pointer target.
3440 API routines for target platforms.
3442 * acc_get_current_cuda_device:: Get CUDA device handle.
3443 * acc_get_current_cuda_context::Get CUDA context handle.
3444 * acc_get_cuda_stream:: Get CUDA stream handle.
3445 * acc_set_cuda_stream:: Set CUDA stream handle.
3447 API routines for the OpenACC Profiling Interface.
3449 * acc_prof_register:: Register callbacks.
3450 * acc_prof_unregister:: Unregister callbacks.
3451 * acc_prof_lookup:: Obtain inquiry functions.
3452 * acc_register_library:: Library registration.
3457 @node acc_get_num_devices
3458 @section @code{acc_get_num_devices} -- Get number of devices for given device type
3460 @item @emph{Description}
3461 This function returns a value indicating the number of devices available
3462 for the device type specified in @var{devicetype}.
3465 @multitable @columnfractions .20 .80
3466 @item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
3469 @item @emph{Fortran}:
3470 @multitable @columnfractions .20 .80
3471 @item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
3472 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3475 @item @emph{Reference}:
3476 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3482 @node acc_set_device_type
3483 @section @code{acc_set_device_type} -- Set type of device accelerator to use.
3485 @item @emph{Description}
3486 This function indicates to the runtime library which device type, specified
3487 in @var{devicetype}, to use when executing a parallel or kernels region.
3490 @multitable @columnfractions .20 .80
3491 @item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
3494 @item @emph{Fortran}:
3495 @multitable @columnfractions .20 .80
3496 @item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
3497 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3500 @item @emph{Reference}:
3501 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3507 @node acc_get_device_type
3508 @section @code{acc_get_device_type} -- Get type of device accelerator to be used.
3510 @item @emph{Description}
3511 This function returns what device type will be used when executing a
3512 parallel or kernels region.
3514 This function returns @code{acc_device_none} if
3515 @code{acc_get_device_type} is called from
3516 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
3517 callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
3518 Interface}), that is, if the device is currently being initialized.
3521 @multitable @columnfractions .20 .80
3522 @item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
3525 @item @emph{Fortran}:
3526 @multitable @columnfractions .20 .80
3527 @item @emph{Interface}: @tab @code{function acc_get_device_type(void)}
3528 @item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
3531 @item @emph{Reference}:
3532 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3538 @node acc_set_device_num
3539 @section @code{acc_set_device_num} -- Set device number to use.
3541 @item @emph{Description}
3542 This function will indicate to the runtime which device number,
3543 specified by @var{devicenum}, associated with the specified device
3544 type @var{devicetype}.
3547 @multitable @columnfractions .20 .80
3548 @item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
3551 @item @emph{Fortran}:
3552 @multitable @columnfractions .20 .80
3553 @item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
3554 @item @tab @code{integer devicenum}
3555 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3558 @item @emph{Reference}:
3559 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3565 @node acc_get_device_num
3566 @section @code{acc_get_device_num} -- Get device number to be used.
3568 @item @emph{Description}
3569 This function returns which device number associated with the specified device
3570 type @var{devicetype}, will be used when executing a parallel or kernels
3574 @multitable @columnfractions .20 .80
3575 @item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
3578 @item @emph{Fortran}:
3579 @multitable @columnfractions .20 .80
3580 @item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
3581 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3582 @item @tab @code{integer acc_get_device_num}
3585 @item @emph{Reference}:
3586 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3592 @node acc_get_property
3593 @section @code{acc_get_property} -- Get device property.
3594 @cindex acc_get_property
3595 @cindex acc_get_property_string
3597 @item @emph{Description}
3598 These routines return the value of the specified @var{property} for the
3599 device being queried according to @var{devicenum} and @var{devicetype}.
3600 Integer-valued and string-valued properties are returned by
3601 @code{acc_get_property} and @code{acc_get_property_string} respectively.
3602 The Fortran @code{acc_get_property_string} subroutine returns the string
3603 retrieved in its fourth argument while the remaining entry points are
3604 functions, which pass the return value as their result.
3606 Note for Fortran, only: the OpenACC technical committee corrected and, hence,
3607 modified the interface introduced in OpenACC 2.6. The kind-value parameter
3608 @code{acc_device_property} has been renamed to @code{acc_device_property_kind}
3609 for consistency and the return type of the @code{acc_get_property} function is
3610 now a @code{c_size_t} integer instead of a @code{acc_device_property} integer.
3611 The parameter @code{acc_device_property} will continue to be provided,
3612 but might be removed in a future version of GCC.
3615 @multitable @columnfractions .20 .80
3616 @item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
3617 @item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
3620 @item @emph{Fortran}:
3621 @multitable @columnfractions .20 .80
3622 @item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
3623 @item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
3624 @item @tab @code{use ISO_C_Binding, only: c_size_t}
3625 @item @tab @code{integer devicenum}
3626 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3627 @item @tab @code{integer(kind=acc_device_property_kind) property}
3628 @item @tab @code{integer(kind=c_size_t) acc_get_property}
3629 @item @tab @code{character(*) string}
3632 @item @emph{Reference}:
3633 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3639 @node acc_async_test
3640 @section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
3642 @item @emph{Description}
3643 This function tests for completion of the asynchronous operation specified
3644 in @var{arg}. In C/C++, a non-zero value will be returned to indicate
3645 the specified asynchronous operation has completed. While Fortran will return
3646 a @code{true}. If the asynchronous operation has not completed, C/C++ returns
3647 a zero and Fortran returns a @code{false}.
3650 @multitable @columnfractions .20 .80
3651 @item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
3654 @item @emph{Fortran}:
3655 @multitable @columnfractions .20 .80
3656 @item @emph{Interface}: @tab @code{function acc_async_test(arg)}
3657 @item @tab @code{integer(kind=acc_handle_kind) arg}
3658 @item @tab @code{logical acc_async_test}
3661 @item @emph{Reference}:
3662 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3668 @node acc_async_test_all
3669 @section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
3671 @item @emph{Description}
3672 This function tests for completion of all asynchronous operations.
3673 In C/C++, a non-zero value will be returned to indicate all asynchronous
3674 operations have completed. While Fortran will return a @code{true}. If
3675 any asynchronous operation has not completed, C/C++ returns a zero and
3676 Fortran returns a @code{false}.
3679 @multitable @columnfractions .20 .80
3680 @item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
3683 @item @emph{Fortran}:
3684 @multitable @columnfractions .20 .80
3685 @item @emph{Interface}: @tab @code{function acc_async_test()}
3686 @item @tab @code{logical acc_get_device_num}
3689 @item @emph{Reference}:
3690 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3697 @section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
3699 @item @emph{Description}
3700 This function waits for completion of the asynchronous operation
3701 specified in @var{arg}.
3704 @multitable @columnfractions .20 .80
3705 @item @emph{Prototype}: @tab @code{acc_wait(arg);}
3706 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
3709 @item @emph{Fortran}:
3710 @multitable @columnfractions .20 .80
3711 @item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
3712 @item @tab @code{integer(acc_handle_kind) arg}
3713 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
3714 @item @tab @code{integer(acc_handle_kind) arg}
3717 @item @emph{Reference}:
3718 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3725 @section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
3727 @item @emph{Description}
3728 This function waits for the completion of all asynchronous operations.
3731 @multitable @columnfractions .20 .80
3732 @item @emph{Prototype}: @tab @code{acc_wait_all(void);}
3733 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
3736 @item @emph{Fortran}:
3737 @multitable @columnfractions .20 .80
3738 @item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
3739 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
3742 @item @emph{Reference}:
3743 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3749 @node acc_wait_all_async
3750 @section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
3752 @item @emph{Description}
3753 This function enqueues a wait operation on the queue @var{async} for any
3754 and all asynchronous operations that have been previously enqueued on
3758 @multitable @columnfractions .20 .80
3759 @item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
3762 @item @emph{Fortran}:
3763 @multitable @columnfractions .20 .80
3764 @item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
3765 @item @tab @code{integer(acc_handle_kind) async}
3768 @item @emph{Reference}:
3769 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3775 @node acc_wait_async
3776 @section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
3778 @item @emph{Description}
3779 This function enqueues a wait operation on queue @var{async} for any and all
3780 asynchronous operations enqueued on queue @var{arg}.
3783 @multitable @columnfractions .20 .80
3784 @item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
3787 @item @emph{Fortran}:
3788 @multitable @columnfractions .20 .80
3789 @item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
3790 @item @tab @code{integer(acc_handle_kind) arg, async}
3793 @item @emph{Reference}:
3794 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3801 @section @code{acc_init} -- Initialize runtime for a specific device type.
3803 @item @emph{Description}
3804 This function initializes the runtime for the device type specified in
3808 @multitable @columnfractions .20 .80
3809 @item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
3812 @item @emph{Fortran}:
3813 @multitable @columnfractions .20 .80
3814 @item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
3815 @item @tab @code{integer(acc_device_kind) devicetype}
3818 @item @emph{Reference}:
3819 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3826 @section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
3828 @item @emph{Description}
3829 This function shuts down the runtime for the device type specified in
3833 @multitable @columnfractions .20 .80
3834 @item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
3837 @item @emph{Fortran}:
3838 @multitable @columnfractions .20 .80
3839 @item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
3840 @item @tab @code{integer(acc_device_kind) devicetype}
3843 @item @emph{Reference}:
3844 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3851 @section @code{acc_on_device} -- Whether executing on a particular device
3853 @item @emph{Description}:
3854 This function returns whether the program is executing on a particular
3855 device specified in @var{devicetype}. In C/C++ a non-zero value is
3856 returned to indicate the device is executing on the specified device type.
3857 In Fortran, @code{true} will be returned. If the program is not executing
3858 on the specified device type C/C++ will return a zero, while Fortran will
3859 return @code{false}.
3862 @multitable @columnfractions .20 .80
3863 @item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
3866 @item @emph{Fortran}:
3867 @multitable @columnfractions .20 .80
3868 @item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
3869 @item @tab @code{integer(acc_device_kind) devicetype}
3870 @item @tab @code{logical acc_on_device}
3874 @item @emph{Reference}:
3875 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3882 @section @code{acc_malloc} -- Allocate device memory.
3884 @item @emph{Description}
3885 This function allocates @var{len} bytes of device memory. It returns
3886 the device address of the allocated memory.
3889 @multitable @columnfractions .20 .80
3890 @item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
3893 @item @emph{Reference}:
3894 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3901 @section @code{acc_free} -- Free device memory.
3903 @item @emph{Description}
3904 Free previously allocated device memory at the device address @code{a}.
3907 @multitable @columnfractions .20 .80
3908 @item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
3911 @item @emph{Reference}:
3912 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3919 @section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
3921 @item @emph{Description}
3922 In C/C++, this function allocates @var{len} bytes of device memory
3923 and maps it to the specified host address in @var{a}. The device
3924 address of the newly allocated device memory is returned.
3926 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
3927 a contiguous array section. The second form @var{a} specifies a
3928 variable or array element and @var{len} specifies the length in bytes.
3931 @multitable @columnfractions .20 .80
3932 @item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
3933 @item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
3936 @item @emph{Fortran}:
3937 @multitable @columnfractions .20 .80
3938 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
3939 @item @tab @code{type, dimension(:[,:]...) :: a}
3940 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
3941 @item @tab @code{type, dimension(:[,:]...) :: a}
3942 @item @tab @code{integer len}
3943 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
3944 @item @tab @code{type, dimension(:[,:]...) :: a}
3945 @item @tab @code{integer(acc_handle_kind) :: async}
3946 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
3947 @item @tab @code{type, dimension(:[,:]...) :: a}
3948 @item @tab @code{integer len}
3949 @item @tab @code{integer(acc_handle_kind) :: async}
3952 @item @emph{Reference}:
3953 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3959 @node acc_present_or_copyin
3960 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
3962 @item @emph{Description}
3963 This function tests if the host data specified by @var{a} and of length
3964 @var{len} is present or not. If it is not present, then device memory
3965 will be allocated and the host memory copied. The device address of
3966 the newly allocated device memory is returned.
3968 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
3969 a contiguous array section. The second form @var{a} specifies a variable or
3970 array element and @var{len} specifies the length in bytes.
3972 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
3973 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
3976 @multitable @columnfractions .20 .80
3977 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
3978 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
3981 @item @emph{Fortran}:
3982 @multitable @columnfractions .20 .80
3983 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
3984 @item @tab @code{type, dimension(:[,:]...) :: a}
3985 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
3986 @item @tab @code{type, dimension(:[,:]...) :: a}
3987 @item @tab @code{integer len}
3988 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
3989 @item @tab @code{type, dimension(:[,:]...) :: a}
3990 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
3991 @item @tab @code{type, dimension(:[,:]...) :: a}
3992 @item @tab @code{integer len}
3995 @item @emph{Reference}:
3996 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4003 @section @code{acc_create} -- Allocate device memory and map it to host memory.
4005 @item @emph{Description}
4006 This function allocates device memory and maps it to host memory specified
4007 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
4008 the function returns the device address of the allocated device memory.
4010 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4011 a contiguous array section. The second form @var{a} specifies a variable or
4012 array element and @var{len} specifies the length in bytes.
4015 @multitable @columnfractions .20 .80
4016 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
4017 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
4020 @item @emph{Fortran}:
4021 @multitable @columnfractions .20 .80
4022 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
4023 @item @tab @code{type, dimension(:[,:]...) :: a}
4024 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
4025 @item @tab @code{type, dimension(:[,:]...) :: a}
4026 @item @tab @code{integer len}
4027 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
4028 @item @tab @code{type, dimension(:[,:]...) :: a}
4029 @item @tab @code{integer(acc_handle_kind) :: async}
4030 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
4031 @item @tab @code{type, dimension(:[,:]...) :: a}
4032 @item @tab @code{integer len}
4033 @item @tab @code{integer(acc_handle_kind) :: async}
4036 @item @emph{Reference}:
4037 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4043 @node acc_present_or_create
4044 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
4046 @item @emph{Description}
4047 This function tests if the host data specified by @var{a} and of length
4048 @var{len} is present or not. If it is not present, then device memory
4049 will be allocated and mapped to host memory. In C/C++, the device address
4050 of the newly allocated device memory is returned.
4052 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4053 a contiguous array section. The second form @var{a} specifies a variable or
4054 array element and @var{len} specifies the length in bytes.
4056 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
4057 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
4060 @multitable @columnfractions .20 .80
4061 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
4062 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
4065 @item @emph{Fortran}:
4066 @multitable @columnfractions .20 .80
4067 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
4068 @item @tab @code{type, dimension(:[,:]...) :: a}
4069 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
4070 @item @tab @code{type, dimension(:[,:]...) :: a}
4071 @item @tab @code{integer len}
4072 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
4073 @item @tab @code{type, dimension(:[,:]...) :: a}
4074 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
4075 @item @tab @code{type, dimension(:[,:]...) :: a}
4076 @item @tab @code{integer len}
4079 @item @emph{Reference}:
4080 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4087 @section @code{acc_copyout} -- Copy device memory to host memory.
4089 @item @emph{Description}
4090 This function copies mapped device memory to host memory which is specified
4091 by host address @var{a} for a length @var{len} bytes in C/C++.
4093 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4094 a contiguous array section. The second form @var{a} specifies a variable or
4095 array element and @var{len} specifies the length in bytes.
4098 @multitable @columnfractions .20 .80
4099 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
4100 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
4101 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
4102 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
4105 @item @emph{Fortran}:
4106 @multitable @columnfractions .20 .80
4107 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
4108 @item @tab @code{type, dimension(:[,:]...) :: a}
4109 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
4110 @item @tab @code{type, dimension(:[,:]...) :: a}
4111 @item @tab @code{integer len}
4112 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
4113 @item @tab @code{type, dimension(:[,:]...) :: a}
4114 @item @tab @code{integer(acc_handle_kind) :: async}
4115 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
4116 @item @tab @code{type, dimension(:[,:]...) :: a}
4117 @item @tab @code{integer len}
4118 @item @tab @code{integer(acc_handle_kind) :: async}
4119 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
4120 @item @tab @code{type, dimension(:[,:]...) :: a}
4121 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
4122 @item @tab @code{type, dimension(:[,:]...) :: a}
4123 @item @tab @code{integer len}
4124 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
4125 @item @tab @code{type, dimension(:[,:]...) :: a}
4126 @item @tab @code{integer(acc_handle_kind) :: async}
4127 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
4128 @item @tab @code{type, dimension(:[,:]...) :: a}
4129 @item @tab @code{integer len}
4130 @item @tab @code{integer(acc_handle_kind) :: async}
4133 @item @emph{Reference}:
4134 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4141 @section @code{acc_delete} -- Free device memory.
4143 @item @emph{Description}
4144 This function frees previously allocated device memory specified by
4145 the device address @var{a} and the length of @var{len} bytes.
4147 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4148 a contiguous array section. The second form @var{a} specifies a variable or
4149 array element and @var{len} specifies the length in bytes.
4152 @multitable @columnfractions .20 .80
4153 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
4154 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
4155 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
4156 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
4159 @item @emph{Fortran}:
4160 @multitable @columnfractions .20 .80
4161 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
4162 @item @tab @code{type, dimension(:[,:]...) :: a}
4163 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
4164 @item @tab @code{type, dimension(:[,:]...) :: a}
4165 @item @tab @code{integer len}
4166 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
4167 @item @tab @code{type, dimension(:[,:]...) :: a}
4168 @item @tab @code{integer(acc_handle_kind) :: async}
4169 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
4170 @item @tab @code{type, dimension(:[,:]...) :: a}
4171 @item @tab @code{integer len}
4172 @item @tab @code{integer(acc_handle_kind) :: async}
4173 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
4174 @item @tab @code{type, dimension(:[,:]...) :: a}
4175 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
4176 @item @tab @code{type, dimension(:[,:]...) :: a}
4177 @item @tab @code{integer len}
4178 @item @emph{Interface}: @tab @code{subroutine acc_delete_async_finalize(a, async)}
4179 @item @tab @code{type, dimension(:[,:]...) :: a}
4180 @item @tab @code{integer(acc_handle_kind) :: async}
4181 @item @emph{Interface}: @tab @code{subroutine acc_delete_async_finalize(a, len, async)}
4182 @item @tab @code{type, dimension(:[,:]...) :: a}
4183 @item @tab @code{integer len}
4184 @item @tab @code{integer(acc_handle_kind) :: async}
4187 @item @emph{Reference}:
4188 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4194 @node acc_update_device
4195 @section @code{acc_update_device} -- Update device memory from mapped host memory.
4197 @item @emph{Description}
4198 This function updates the device copy from the previously mapped host memory.
4199 The host memory is specified with the host address @var{a} and a length of
4202 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4203 a contiguous array section. The second form @var{a} specifies a variable or
4204 array element and @var{len} specifies the length in bytes.
4207 @multitable @columnfractions .20 .80
4208 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
4209 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len, async);}
4212 @item @emph{Fortran}:
4213 @multitable @columnfractions .20 .80
4214 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
4215 @item @tab @code{type, dimension(:[,:]...) :: a}
4216 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
4217 @item @tab @code{type, dimension(:[,:]...) :: a}
4218 @item @tab @code{integer len}
4219 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
4220 @item @tab @code{type, dimension(:[,:]...) :: a}
4221 @item @tab @code{integer(acc_handle_kind) :: async}
4222 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
4223 @item @tab @code{type, dimension(:[,:]...) :: a}
4224 @item @tab @code{integer len}
4225 @item @tab @code{integer(acc_handle_kind) :: async}
4228 @item @emph{Reference}:
4229 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4235 @node acc_update_self
4236 @section @code{acc_update_self} -- Update host memory from mapped device memory.
4238 @item @emph{Description}
4239 This function updates the host copy from the previously mapped device memory.
4240 The host memory is specified with the host address @var{a} and a length of
4243 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4244 a contiguous array section. The second form @var{a} specifies a variable or
4245 array element and @var{len} specifies the length in bytes.
4248 @multitable @columnfractions .20 .80
4249 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
4250 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
4253 @item @emph{Fortran}:
4254 @multitable @columnfractions .20 .80
4255 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
4256 @item @tab @code{type, dimension(:[,:]...) :: a}
4257 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
4258 @item @tab @code{type, dimension(:[,:]...) :: a}
4259 @item @tab @code{integer len}
4260 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
4261 @item @tab @code{type, dimension(:[,:]...) :: a}
4262 @item @tab @code{integer(acc_handle_kind) :: async}
4263 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
4264 @item @tab @code{type, dimension(:[,:]...) :: a}
4265 @item @tab @code{integer len}
4266 @item @tab @code{integer(acc_handle_kind) :: async}
4269 @item @emph{Reference}:
4270 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4277 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
4279 @item @emph{Description}
4280 This function maps previously allocated device and host memory. The device
4281 memory is specified with the device address @var{d}. The host memory is
4282 specified with the host address @var{h} and a length of @var{len}.
4285 @multitable @columnfractions .20 .80
4286 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
4289 @item @emph{Reference}:
4290 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4296 @node acc_unmap_data
4297 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
4299 @item @emph{Description}
4300 This function unmaps previously mapped device and host memory. The latter
4301 specified by @var{h}.
4304 @multitable @columnfractions .20 .80
4305 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
4308 @item @emph{Reference}:
4309 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4316 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
4318 @item @emph{Description}
4319 This function returns the device address that has been mapped to the
4320 host address specified by @var{h}.
4323 @multitable @columnfractions .20 .80
4324 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
4327 @item @emph{Reference}:
4328 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4335 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
4337 @item @emph{Description}
4338 This function returns the host address that has been mapped to the
4339 device address specified by @var{d}.
4342 @multitable @columnfractions .20 .80
4343 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
4346 @item @emph{Reference}:
4347 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4353 @node acc_is_present
4354 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
4356 @item @emph{Description}
4357 This function indicates whether the specified host address in @var{a} and a
4358 length of @var{len} bytes is present on the device. In C/C++, a non-zero
4359 value is returned to indicate the presence of the mapped memory on the
4360 device. A zero is returned to indicate the memory is not mapped on the
4363 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4364 a contiguous array section. The second form @var{a} specifies a variable or
4365 array element and @var{len} specifies the length in bytes. If the host
4366 memory is mapped to device memory, then a @code{true} is returned. Otherwise,
4367 a @code{false} is return to indicate the mapped memory is not present.
4370 @multitable @columnfractions .20 .80
4371 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
4374 @item @emph{Fortran}:
4375 @multitable @columnfractions .20 .80
4376 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
4377 @item @tab @code{type, dimension(:[,:]...) :: a}
4378 @item @tab @code{logical acc_is_present}
4379 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
4380 @item @tab @code{type, dimension(:[,:]...) :: a}
4381 @item @tab @code{integer len}
4382 @item @tab @code{logical acc_is_present}
4385 @item @emph{Reference}:
4386 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4392 @node acc_memcpy_to_device
4393 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
4395 @item @emph{Description}
4396 This function copies host memory specified by host address of @var{src} to
4397 device memory specified by the device address @var{dest} for a length of
4401 @multitable @columnfractions .20 .80
4402 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
4405 @item @emph{Reference}:
4406 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4412 @node acc_memcpy_from_device
4413 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
4415 @item @emph{Description}
4416 This function copies host memory specified by host address of @var{src} from
4417 device memory specified by the device address @var{dest} for a length of
4421 @multitable @columnfractions .20 .80
4422 @item @emph{Prototype}: @tab @code{acc_memcpy_from_device(d_void *dest, h_void *src, size_t bytes);}
4425 @item @emph{Reference}:
4426 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4433 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
4435 @item @emph{Description}
4436 This function updates a pointer on the device from pointing to a host-pointer
4437 address to pointing to the corresponding device data.
4440 @multitable @columnfractions .20 .80
4441 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
4442 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
4445 @item @emph{Reference}:
4446 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4453 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
4455 @item @emph{Description}
4456 This function updates a pointer on the device from pointing to a device-pointer
4457 address to pointing to the corresponding host data.
4460 @multitable @columnfractions .20 .80
4461 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
4462 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
4463 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
4464 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
4467 @item @emph{Reference}:
4468 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4474 @node acc_get_current_cuda_device
4475 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
4477 @item @emph{Description}
4478 This function returns the CUDA device handle. This handle is the same
4479 as used by the CUDA Runtime or Driver API's.
4482 @multitable @columnfractions .20 .80
4483 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
4486 @item @emph{Reference}:
4487 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4493 @node acc_get_current_cuda_context
4494 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
4496 @item @emph{Description}
4497 This function returns the CUDA context handle. This handle is the same
4498 as used by the CUDA Runtime or Driver API's.
4501 @multitable @columnfractions .20 .80
4502 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
4505 @item @emph{Reference}:
4506 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4512 @node acc_get_cuda_stream
4513 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
4515 @item @emph{Description}
4516 This function returns the CUDA stream handle for the queue @var{async}.
4517 This handle is the same as used by the CUDA Runtime or Driver API's.
4520 @multitable @columnfractions .20 .80
4521 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
4524 @item @emph{Reference}:
4525 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4531 @node acc_set_cuda_stream
4532 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
4534 @item @emph{Description}
4535 This function associates the stream handle specified by @var{stream} with
4536 the queue @var{async}.
4538 This cannot be used to change the stream handle associated with
4539 @code{acc_async_sync}.
4541 The return value is not specified.
4544 @multitable @columnfractions .20 .80
4545 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
4548 @item @emph{Reference}:
4549 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4555 @node acc_prof_register
4556 @section @code{acc_prof_register} -- Register callbacks.
4558 @item @emph{Description}:
4559 This function registers callbacks.
4562 @multitable @columnfractions .20 .80
4563 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
4566 @item @emph{See also}:
4567 @ref{OpenACC Profiling Interface}
4569 @item @emph{Reference}:
4570 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4576 @node acc_prof_unregister
4577 @section @code{acc_prof_unregister} -- Unregister callbacks.
4579 @item @emph{Description}:
4580 This function unregisters callbacks.
4583 @multitable @columnfractions .20 .80
4584 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
4587 @item @emph{See also}:
4588 @ref{OpenACC Profiling Interface}
4590 @item @emph{Reference}:
4591 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4597 @node acc_prof_lookup
4598 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
4600 @item @emph{Description}:
4601 Function to obtain inquiry functions.
4604 @multitable @columnfractions .20 .80
4605 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
4608 @item @emph{See also}:
4609 @ref{OpenACC Profiling Interface}
4611 @item @emph{Reference}:
4612 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4618 @node acc_register_library
4619 @section @code{acc_register_library} -- Library registration.
4621 @item @emph{Description}:
4622 Function for library registration.
4625 @multitable @columnfractions .20 .80
4626 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
4629 @item @emph{See also}:
4630 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
4632 @item @emph{Reference}:
4633 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4639 @c ---------------------------------------------------------------------
4640 @c OpenACC Environment Variables
4641 @c ---------------------------------------------------------------------
4643 @node OpenACC Environment Variables
4644 @chapter OpenACC Environment Variables
4646 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
4647 are defined by section 4 of the OpenACC specification in version 2.0.
4648 The variable @env{ACC_PROFLIB}
4649 is defined by section 4 of the OpenACC specification in version 2.6.
4650 The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
4661 @node ACC_DEVICE_TYPE
4662 @section @code{ACC_DEVICE_TYPE}
4664 @item @emph{Reference}:
4665 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4671 @node ACC_DEVICE_NUM
4672 @section @code{ACC_DEVICE_NUM}
4674 @item @emph{Reference}:
4675 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4682 @section @code{ACC_PROFLIB}
4684 @item @emph{See also}:
4685 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
4687 @item @emph{Reference}:
4688 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4694 @node GCC_ACC_NOTIFY
4695 @section @code{GCC_ACC_NOTIFY}
4697 @item @emph{Description}:
4698 Print debug information pertaining to the accelerator.
4703 @c ---------------------------------------------------------------------
4704 @c CUDA Streams Usage
4705 @c ---------------------------------------------------------------------
4707 @node CUDA Streams Usage
4708 @chapter CUDA Streams Usage
4710 This applies to the @code{nvptx} plugin only.
4712 The library provides elements that perform asynchronous movement of
4713 data and asynchronous operation of computing constructs. This
4714 asynchronous functionality is implemented by making use of CUDA
4715 streams@footnote{See "Stream Management" in "CUDA Driver API",
4716 TRM-06703-001, Version 5.5, for additional information}.
4718 The primary means by that the asynchronous functionality is accessed
4719 is through the use of those OpenACC directives which make use of the
4720 @code{async} and @code{wait} clauses. When the @code{async} clause is
4721 first used with a directive, it creates a CUDA stream. If an
4722 @code{async-argument} is used with the @code{async} clause, then the
4723 stream is associated with the specified @code{async-argument}.
4725 Following the creation of an association between a CUDA stream and the
4726 @code{async-argument} of an @code{async} clause, both the @code{wait}
4727 clause and the @code{wait} directive can be used. When either the
4728 clause or directive is used after stream creation, it creates a
4729 rendezvous point whereby execution waits until all operations
4730 associated with the @code{async-argument}, that is, stream, have
4733 Normally, the management of the streams that are created as a result of
4734 using the @code{async} clause, is done without any intervention by the
4735 caller. This implies the association between the @code{async-argument}
4736 and the CUDA stream will be maintained for the lifetime of the program.
4737 However, this association can be changed through the use of the library
4738 function @code{acc_set_cuda_stream}. When the function
4739 @code{acc_set_cuda_stream} is called, the CUDA stream that was
4740 originally associated with the @code{async} clause will be destroyed.
4741 Caution should be taken when changing the association as subsequent
4742 references to the @code{async-argument} refer to a different
4747 @c ---------------------------------------------------------------------
4748 @c OpenACC Library Interoperability
4749 @c ---------------------------------------------------------------------
4751 @node OpenACC Library Interoperability
4752 @chapter OpenACC Library Interoperability
4754 @section Introduction
4756 The OpenACC library uses the CUDA Driver API, and may interact with
4757 programs that use the Runtime library directly, or another library
4758 based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
4759 "Interactions with the CUDA Driver API" in
4760 "CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
4761 Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
4762 for additional information on library interoperability.}.
4763 This chapter describes the use cases and what changes are
4764 required in order to use both the OpenACC library and the CUBLAS and Runtime
4765 libraries within a program.
4767 @section First invocation: NVIDIA CUBLAS library API
4769 In this first use case (see below), a function in the CUBLAS library is called
4770 prior to any of the functions in the OpenACC library. More specifically, the
4771 function @code{cublasCreate()}.
4773 When invoked, the function initializes the library and allocates the
4774 hardware resources on the host and the device on behalf of the caller. Once
4775 the initialization and allocation has completed, a handle is returned to the
4776 caller. The OpenACC library also requires initialization and allocation of
4777 hardware resources. Since the CUBLAS library has already allocated the
4778 hardware resources for the device, all that is left to do is to initialize
4779 the OpenACC library and acquire the hardware resources on the host.
4781 Prior to calling the OpenACC function that initializes the library and
4782 allocate the host hardware resources, you need to acquire the device number
4783 that was allocated during the call to @code{cublasCreate()}. The invoking of the
4784 runtime library function @code{cudaGetDevice()} accomplishes this. Once
4785 acquired, the device number is passed along with the device type as
4786 parameters to the OpenACC library function @code{acc_set_device_num()}.
4788 Once the call to @code{acc_set_device_num()} has completed, the OpenACC
4789 library uses the context that was created during the call to
4790 @code{cublasCreate()}. In other words, both libraries will be sharing the
4794 /* Create the handle */
4795 s = cublasCreate(&h);
4796 if (s != CUBLAS_STATUS_SUCCESS)
4798 fprintf(stderr, "cublasCreate failed %d\n", s);
4802 /* Get the device number */
4803 e = cudaGetDevice(&dev);
4804 if (e != cudaSuccess)
4806 fprintf(stderr, "cudaGetDevice failed %d\n", e);
4810 /* Initialize OpenACC library and use device 'dev' */
4811 acc_set_device_num(dev, acc_device_nvidia);
4816 @section First invocation: OpenACC library API
4818 In this second use case (see below), a function in the OpenACC library is
4819 called prior to any of the functions in the CUBLAS library. More specifically,
4820 the function @code{acc_set_device_num()}.
4822 In the use case presented here, the function @code{acc_set_device_num()}
4823 is used to both initialize the OpenACC library and allocate the hardware
4824 resources on the host and the device. In the call to the function, the
4825 call parameters specify which device to use and what device
4826 type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
4827 is but one method to initialize the OpenACC library and allocate the
4828 appropriate hardware resources. Other methods are available through the
4829 use of environment variables and these will be discussed in the next section.
4831 Once the call to @code{acc_set_device_num()} has completed, other OpenACC
4832 functions can be called as seen with multiple calls being made to
4833 @code{acc_copyin()}. In addition, calls can be made to functions in the
4834 CUBLAS library. In the use case a call to @code{cublasCreate()} is made
4835 subsequent to the calls to @code{acc_copyin()}.
4836 As seen in the previous use case, a call to @code{cublasCreate()}
4837 initializes the CUBLAS library and allocates the hardware resources on the
4838 host and the device. However, since the device has already been allocated,
4839 @code{cublasCreate()} will only initialize the CUBLAS library and allocate
4840 the appropriate hardware resources on the host. The context that was created
4841 as part of the OpenACC initialization is shared with the CUBLAS library,
4842 similarly to the first use case.
4847 acc_set_device_num(dev, acc_device_nvidia);
4849 /* Copy the first set to the device */
4850 d_X = acc_copyin(&h_X[0], N * sizeof (float));
4853 fprintf(stderr, "copyin error h_X\n");
4857 /* Copy the second set to the device */
4858 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
4861 fprintf(stderr, "copyin error h_Y1\n");
4865 /* Create the handle */
4866 s = cublasCreate(&h);
4867 if (s != CUBLAS_STATUS_SUCCESS)
4869 fprintf(stderr, "cublasCreate failed %d\n", s);
4873 /* Perform saxpy using CUBLAS library function */
4874 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
4875 if (s != CUBLAS_STATUS_SUCCESS)
4877 fprintf(stderr, "cublasSaxpy failed %d\n", s);
4881 /* Copy the results from the device */
4882 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
4887 @section OpenACC library and environment variables
4889 There are two environment variables associated with the OpenACC library
4890 that may be used to control the device type and device number:
4891 @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
4892 environment variables can be used as an alternative to calling
4893 @code{acc_set_device_num()}. As seen in the second use case, the device
4894 type and device number were specified using @code{acc_set_device_num()}.
4895 If however, the aforementioned environment variables were set, then the
4896 call to @code{acc_set_device_num()} would not be required.
4899 The use of the environment variables is only relevant when an OpenACC function
4900 is called prior to a call to @code{cudaCreate()}. If @code{cudaCreate()}
4901 is called prior to a call to an OpenACC function, then you must call
4902 @code{acc_set_device_num()}@footnote{More complete information
4903 about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
4904 sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
4905 Application Programming Interface”, Version 2.6.}
4909 @c ---------------------------------------------------------------------
4910 @c OpenACC Profiling Interface
4911 @c ---------------------------------------------------------------------
4913 @node OpenACC Profiling Interface
4914 @chapter OpenACC Profiling Interface
4916 @section Implementation Status and Implementation-Defined Behavior
4918 We're implementing the OpenACC Profiling Interface as defined by the
4919 OpenACC 2.6 specification. We're clarifying some aspects here as
4920 @emph{implementation-defined behavior}, while they're still under
4921 discussion within the OpenACC Technical Committee.
4923 This implementation is tuned to keep the performance impact as low as
4924 possible for the (very common) case that the Profiling Interface is
4925 not enabled. This is relevant, as the Profiling Interface affects all
4926 the @emph{hot} code paths (in the target code, not in the offloaded
4927 code). Users of the OpenACC Profiling Interface can be expected to
4928 understand that performance will be impacted to some degree once the
4929 Profiling Interface has gotten enabled: for example, because of the
4930 @emph{runtime} (libgomp) calling into a third-party @emph{library} for
4931 every event that has been registered.
4933 We're not yet accounting for the fact that @cite{OpenACC events may
4934 occur during event processing}.
4935 We just handle one case specially, as required by CUDA 9.0
4936 @command{nvprof}, that @code{acc_get_device_type}
4937 (@ref{acc_get_device_type})) may be called from
4938 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4941 We're not yet implementing initialization via a
4942 @code{acc_register_library} function that is either statically linked
4943 in, or dynamically via @env{LD_PRELOAD}.
4944 Initialization via @code{acc_register_library} functions dynamically
4945 loaded via the @env{ACC_PROFLIB} environment variable does work, as
4946 does directly calling @code{acc_prof_register},
4947 @code{acc_prof_unregister}, @code{acc_prof_lookup}.
4949 As currently there are no inquiry functions defined, calls to
4950 @code{acc_prof_lookup} will always return @code{NULL}.
4952 There aren't separate @emph{start}, @emph{stop} events defined for the
4953 event types @code{acc_ev_create}, @code{acc_ev_delete},
4954 @code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
4955 should be triggered before or after the actual device-specific call is
4956 made. We trigger them after.
4958 Remarks about data provided to callbacks:
4962 @item @code{acc_prof_info.event_type}
4963 It's not clear if for @emph{nested} event callbacks (for example,
4964 @code{acc_ev_enqueue_launch_start} as part of a parent compute
4965 construct), this should be set for the nested event
4966 (@code{acc_ev_enqueue_launch_start}), or if the value of the parent
4967 construct should remain (@code{acc_ev_compute_construct_start}). In
4968 this implementation, the value will generally correspond to the
4969 innermost nested event type.
4971 @item @code{acc_prof_info.device_type}
4975 For @code{acc_ev_compute_construct_start}, and in presence of an
4976 @code{if} clause with @emph{false} argument, this will still refer to
4977 the offloading device type.
4978 It's not clear if that's the expected behavior.
4981 Complementary to the item before, for
4982 @code{acc_ev_compute_construct_end}, this is set to
4983 @code{acc_device_host} in presence of an @code{if} clause with
4984 @emph{false} argument.
4985 It's not clear if that's the expected behavior.
4989 @item @code{acc_prof_info.thread_id}
4990 Always @code{-1}; not yet implemented.
4992 @item @code{acc_prof_info.async}
4996 Not yet implemented correctly for
4997 @code{acc_ev_compute_construct_start}.
5000 In a compute construct, for host-fallback
5001 execution/@code{acc_device_host} it will always be
5002 @code{acc_async_sync}.
5003 It's not clear if that's the expected behavior.
5006 For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
5007 it will always be @code{acc_async_sync}.
5008 It's not clear if that's the expected behavior.
5012 @item @code{acc_prof_info.async_queue}
5013 There is no @cite{limited number of asynchronous queues} in libgomp.
5014 This will always have the same value as @code{acc_prof_info.async}.
5016 @item @code{acc_prof_info.src_file}
5017 Always @code{NULL}; not yet implemented.
5019 @item @code{acc_prof_info.func_name}
5020 Always @code{NULL}; not yet implemented.
5022 @item @code{acc_prof_info.line_no}
5023 Always @code{-1}; not yet implemented.
5025 @item @code{acc_prof_info.end_line_no}
5026 Always @code{-1}; not yet implemented.
5028 @item @code{acc_prof_info.func_line_no}
5029 Always @code{-1}; not yet implemented.
5031 @item @code{acc_prof_info.func_end_line_no}
5032 Always @code{-1}; not yet implemented.
5034 @item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
5035 Relating to @code{acc_prof_info.event_type} discussed above, in this
5036 implementation, this will always be the same value as
5037 @code{acc_prof_info.event_type}.
5039 @item @code{acc_event_info.*.parent_construct}
5043 Will be @code{acc_construct_parallel} for all OpenACC compute
5044 constructs as well as many OpenACC Runtime API calls; should be the
5045 one matching the actual construct, or
5046 @code{acc_construct_runtime_api}, respectively.
5049 Will be @code{acc_construct_enter_data} or
5050 @code{acc_construct_exit_data} when processing variable mappings
5051 specified in OpenACC @emph{declare} directives; should be
5052 @code{acc_construct_declare}.
5055 For implicit @code{acc_ev_device_init_start},
5056 @code{acc_ev_device_init_end}, and explicit as well as implicit
5057 @code{acc_ev_alloc}, @code{acc_ev_free},
5058 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5059 @code{acc_ev_enqueue_download_start}, and
5060 @code{acc_ev_enqueue_download_end}, will be
5061 @code{acc_construct_parallel}; should reflect the real parent
5066 @item @code{acc_event_info.*.implicit}
5067 For @code{acc_ev_alloc}, @code{acc_ev_free},
5068 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5069 @code{acc_ev_enqueue_download_start}, and
5070 @code{acc_ev_enqueue_download_end}, this currently will be @code{1}
5071 also for explicit usage.
5073 @item @code{acc_event_info.data_event.var_name}
5074 Always @code{NULL}; not yet implemented.
5076 @item @code{acc_event_info.data_event.host_ptr}
5077 For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
5080 @item @code{typedef union acc_api_info}
5081 @dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
5082 Information}. This should obviously be @code{typedef @emph{struct}
5085 @item @code{acc_api_info.device_api}
5086 Possibly not yet implemented correctly for
5087 @code{acc_ev_compute_construct_start},
5088 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
5089 will always be @code{acc_device_api_none} for these event types.
5090 For @code{acc_ev_enter_data_start}, it will be
5091 @code{acc_device_api_none} in some cases.
5093 @item @code{acc_api_info.device_type}
5094 Always the same as @code{acc_prof_info.device_type}.
5096 @item @code{acc_api_info.vendor}
5097 Always @code{-1}; not yet implemented.
5099 @item @code{acc_api_info.device_handle}
5100 Always @code{NULL}; not yet implemented.
5102 @item @code{acc_api_info.context_handle}
5103 Always @code{NULL}; not yet implemented.
5105 @item @code{acc_api_info.async_handle}
5106 Always @code{NULL}; not yet implemented.
5110 Remarks about certain event types:
5114 @item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
5118 @c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
5119 @c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
5120 @c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
5121 When a compute construct triggers implicit
5122 @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
5123 events, they currently aren't @emph{nested within} the corresponding
5124 @code{acc_ev_compute_construct_start} and
5125 @code{acc_ev_compute_construct_end}, but they're currently observed
5126 @emph{before} @code{acc_ev_compute_construct_start}.
5127 It's not clear what to do: the standard asks us provide a lot of
5128 details to the @code{acc_ev_compute_construct_start} callback, without
5129 (implicitly) initializing a device before?
5132 Callbacks for these event types will not be invoked for calls to the
5133 @code{acc_set_device_type} and @code{acc_set_device_num} functions.
5134 It's not clear if they should be.
5138 @item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
5142 Callbacks for these event types will also be invoked for OpenACC
5143 @emph{host_data} constructs.
5144 It's not clear if they should be.
5147 Callbacks for these event types will also be invoked when processing
5148 variable mappings specified in OpenACC @emph{declare} directives.
5149 It's not clear if they should be.
5155 Callbacks for the following event types will be invoked, but dispatch
5156 and information provided therein has not yet been thoroughly reviewed:
5159 @item @code{acc_ev_alloc}
5160 @item @code{acc_ev_free}
5161 @item @code{acc_ev_update_start}, @code{acc_ev_update_end}
5162 @item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
5163 @item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
5166 During device initialization, and finalization, respectively,
5167 callbacks for the following event types will not yet be invoked:
5170 @item @code{acc_ev_alloc}
5171 @item @code{acc_ev_free}
5174 Callbacks for the following event types have not yet been implemented,
5175 so currently won't be invoked:
5178 @item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
5179 @item @code{acc_ev_runtime_shutdown}
5180 @item @code{acc_ev_create}, @code{acc_ev_delete}
5181 @item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
5184 For the following runtime library functions, not all expected
5185 callbacks will be invoked (mostly concerning implicit device
5189 @item @code{acc_get_num_devices}
5190 @item @code{acc_set_device_type}
5191 @item @code{acc_get_device_type}
5192 @item @code{acc_set_device_num}
5193 @item @code{acc_get_device_num}
5194 @item @code{acc_init}
5195 @item @code{acc_shutdown}
5198 Aside from implicit device initialization, for the following runtime
5199 library functions, no callbacks will be invoked for shared-memory
5200 offloading devices (it's not clear if they should be):
5203 @item @code{acc_malloc}
5204 @item @code{acc_free}
5205 @item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
5206 @item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
5207 @item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
5208 @item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
5209 @item @code{acc_update_device}, @code{acc_update_device_async}
5210 @item @code{acc_update_self}, @code{acc_update_self_async}
5211 @item @code{acc_map_data}, @code{acc_unmap_data}
5212 @item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
5213 @item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
5216 @c ---------------------------------------------------------------------
5217 @c OpenMP-Implementation Specifics
5218 @c ---------------------------------------------------------------------
5220 @node OpenMP-Implementation Specifics
5221 @chapter OpenMP-Implementation Specifics
5224 * Implementation-defined ICV Initialization::
5225 * OpenMP Context Selectors::
5226 * Memory allocation::
5229 @node Implementation-defined ICV Initialization
5230 @section Implementation-defined ICV Initialization
5231 @cindex Implementation specific setting
5233 @multitable @columnfractions .30 .70
5234 @item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
5235 @item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
5236 @item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
5237 @item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
5238 @item @var{nthreads-var} @tab See @ref{OMP_NUM_THREADS}.
5239 @item @var{num-devices-var} @tab Number of non-host devices found
5240 by GCC's run-time library
5241 @item @var{num-procs-var} @tab The number of CPU cores on the
5242 initial device, except that affinity settings might lead to a
5243 smaller number. On non-host devices, the value of the
5244 @var{nthreads-var} ICV.
5245 @item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
5246 @item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
5247 @item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
5248 @item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}
5249 @item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
5250 @ref{GOMP_SPINCOUNT}
5253 @node OpenMP Context Selectors
5254 @section OpenMP Context Selectors
5256 @code{vendor} is always @code{gnu}. References are to the GCC manual.
5258 @c NOTE: Only the following selectors have been implemented. To add
5259 @c additional traits for target architecture, TARGET_OMP_DEVICE_KIND_ARCH_ISA
5260 @c has to be implemented; cf. also PR target/105640.
5261 @c For offload devices, add *additionally* gcc/config/*/t-omp-device.
5263 For the host compiler, @code{kind} always matches @code{host}; for the
5264 offloading architectures AMD GCN and Nvidia PTX, @code{kind} always matches
5265 @code{gpu}. For the x86 family of computers, AMD GCN and Nvidia PTX
5266 the following traits are supported in addition; while OpenMP is supported
5267 on more architectures, GCC currently does not match any @code{arch} or
5268 @code{isa} traits for those.
5270 @multitable @columnfractions .65 .30
5271 @headitem @code{arch} @tab @code{isa}
5272 @item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
5273 @code{i586}, @code{i686}, @code{ia32}
5274 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
5275 @item @code{amdgcn}, @code{gcn}
5276 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
5277 @code{gfx803} is supported as an alias for @code{fiji}.}
5279 @tab See @code{-march=} in ``Nvidia PTX Options''
5282 @node Memory allocation
5283 @section Memory allocation
5285 For the available predefined allocators and, as applicable, their associated
5286 predefined memory spaces and for the available traits and their default values,
5287 see @ref{OMP_ALLOCATOR}. Predefined allocators without an associated memory
5288 space use the @code{omp_default_mem_space} memory space.
5290 For the memory spaces, the following applies:
5292 @item @code{omp_default_mem_space} is supported
5293 @item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
5294 @item @code{omp_low_lat_mem_space} maps to @code{omp_default_mem_space}
5295 @item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
5296 unless the memkind library is available
5297 @item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
5298 unless the memkind library is available
5301 On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
5302 library} (@code{libmemkind.so.0}) is available at runtime, it is used when
5303 creating memory allocators requesting
5306 @item the memory space @code{omp_high_bw_mem_space}
5307 @item the memory space @code{omp_large_cap_mem_space}
5308 @item the @code{partition} trait @code{interleaved}; note that for
5309 @code{omp_large_cap_mem_space} the allocation will not be interleaved
5312 On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
5313 library} (@code{libnuma.so.1}) is available at runtime, it used when creating
5314 memory allocators requesting
5317 @item the @code{partition} trait @code{nearest}, except when both the
5318 libmemkind library is available and the memory space is either
5319 @code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
5322 Note that the numa library will round up the allocation size to a multiple of
5323 the system page size; therefore, consider using it only with large data or
5324 by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
5325 kernel does not guarantee that an allocation will always be on the nearest NUMA
5326 node nor that after reallocation the same node will be used. Note additionally
5327 that, on Linux, the default setting of the memory placement policy is to use the
5328 current node; therefore, unless the memory placement policy has been overridden,
5329 the @code{partition} trait @code{environment} (the default) will be effectively
5330 a @code{nearest} allocation.
5332 Additional notes regarding the traits:
5334 @item The @code{pinned} trait is unsupported.
5335 @item The default for the @code{pool_size} trait is no pool and for every
5336 (re)allocation the associated library routine is called, which might
5337 internally use a memory pool.
5338 @item For the @code{partition} trait, the partition part size will be the same
5339 as the requested size (i.e. @code{interleaved} or @code{blocked} has no
5340 effect), except for @code{interleaved} when the memkind library is
5341 available. Furthermore, for @code{nearest} and unless the numa library
5342 is available, the memory might not be on the same NUMA node as thread
5343 that allocated the memory; on Linux, this is in particular the case when
5344 the memory placement policy is set to preferred.
5345 @item The @code{access} trait has no effect such that memory is always
5346 accessible by all threads.
5347 @item The @code{sync_hint} trait has no effect.
5350 @c ---------------------------------------------------------------------
5351 @c Offload-Target Specifics
5352 @c ---------------------------------------------------------------------
5354 @node Offload-Target Specifics
5355 @chapter Offload-Target Specifics
5357 The following sections present notes on the offload-target specifics
5365 @section AMD Radeon (GCN)
5367 On the hardware side, there is the hierarchy (fine to coarse):
5369 @item work item (thread)
5372 @item compute unit (CU)
5375 All OpenMP and OpenACC levels are used, i.e.
5377 @item OpenMP's simd and OpenACC's vector map to work items (thread)
5378 @item OpenMP's threads (``parallel'') and OpenACC's workers map
5380 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5381 size of the number of teams or gangs, respectively.
5386 @item Number of teams is the specified @code{num_teams} (OpenMP) or
5387 @code{num_gangs} (OpenACC) or otherwise the number of CU. It is limited
5388 by two times the number of CU.
5389 @item Number of wavefronts is 4 for gfx900 and 16 otherwise;
5390 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
5391 overrides this if smaller.
5392 @item The wavefront has 102 scalars and 64 vectors
5393 @item Number of workitems is always 64
5394 @item The hardware permits maximally 40 workgroups/CU and
5395 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
5396 @item 80 scalars registers and 24 vector registers in non-kernel functions
5397 (the chosen procedure-calling API).
5398 @item For the kernel itself: as many as register pressure demands (number of
5399 teams and number of threads, scaled down if registers are exhausted)
5402 The implementation remark:
5404 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5405 using the C library @code{printf} functions and the Fortran
5406 @code{print}/@code{write} statements.
5407 @item Reverse offload regions (i.e. @code{target} regions with
5408 @code{device(ancestor:1)}) are processed serially per @code{target} region
5409 such that the next reverse offload region is only executed after the previous
5411 @item OpenMP code that has a @code{requires} directive with
5412 @code{unified_shared_memory} will remove any GCN device from the list of
5413 available devices (``host fallback'').
5414 @item The available stack size can be changed using the @code{GCN_STACK_SIZE}
5415 environment variable; the default is 32 kiB per thread.
5423 On the hardware side, there is the hierarchy (fine to coarse):
5428 @item streaming multiprocessor
5431 All OpenMP and OpenACC levels are used, i.e.
5433 @item OpenMP's simd and OpenACC's vector map to threads
5434 @item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
5435 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5436 size of the number of teams or gangs, respectively.
5441 @item The @code{warp_size} is always 32
5442 @item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
5443 @item The number of teams is limited by the number of blocks the device can
5444 host simultaneously.
5447 Additional information can be obtained by setting the environment variable to
5448 @code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
5451 GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
5452 which caches the JIT in the user's directory (see CUDA documentation; can be
5453 tuned by the environment variables @code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}.
5455 Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=} commandline
5456 options still affect the used PTX ISA code and, thus, the requirements on
5457 CUDA version and hardware.
5459 The implementation remark:
5461 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5462 using the C library @code{printf} functions. Note that the Fortran
5463 @code{print}/@code{write} statements are not supported, yet.
5464 @item Compilation OpenMP code that contains @code{requires reverse_offload}
5465 requires at least @code{-march=sm_35}, compiling for @code{-march=sm_30}
5467 @item For code containing reverse offload (i.e. @code{target} regions with
5468 @code{device(ancestor:1)}), there is a slight performance penalty
5469 for @emph{all} target regions, consisting mostly of shutdown delay
5470 Per device, reverse offload regions are processed serially such that
5471 the next reverse offload region is only executed after the previous
5473 @item OpenMP code that has a @code{requires} directive with
5474 @code{unified_shared_memory} will remove any nvptx device from the
5475 list of available devices (``host fallback'').
5476 @item The default per-warp stack size is 128 kiB; see also @code{-msoft-stack}
5478 @item The OpenMP routines @code{omp_target_memcpy_rect} and
5479 @code{omp_target_memcpy_rect_async} and the @code{target update}
5480 directive for non-contiguous list items will use the 2D and 3D
5481 memory-copy functions of the CUDA library. Higher dimensions will
5482 call those functions in a loop and are therefore supported.
5486 @c ---------------------------------------------------------------------
5488 @c ---------------------------------------------------------------------
5490 @node The libgomp ABI
5491 @chapter The libgomp ABI
5493 The following sections present notes on the external ABI as
5494 presented by libgomp. Only maintainers should need them.
5497 * Implementing MASTER construct::
5498 * Implementing CRITICAL construct::
5499 * Implementing ATOMIC construct::
5500 * Implementing FLUSH construct::
5501 * Implementing BARRIER construct::
5502 * Implementing THREADPRIVATE construct::
5503 * Implementing PRIVATE clause::
5504 * Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
5505 * Implementing REDUCTION clause::
5506 * Implementing PARALLEL construct::
5507 * Implementing FOR construct::
5508 * Implementing ORDERED construct::
5509 * Implementing SECTIONS construct::
5510 * Implementing SINGLE construct::
5511 * Implementing OpenACC's PARALLEL construct::
5515 @node Implementing MASTER construct
5516 @section Implementing MASTER construct
5519 if (omp_get_thread_num () == 0)
5523 Alternately, we generate two copies of the parallel subfunction
5524 and only include this in the version run by the primary thread.
5525 Surely this is not worthwhile though...
5529 @node Implementing CRITICAL construct
5530 @section Implementing CRITICAL construct
5532 Without a specified name,
5535 void GOMP_critical_start (void);
5536 void GOMP_critical_end (void);
5539 so that we don't get COPY relocations from libgomp to the main
5542 With a specified name, use omp_set_lock and omp_unset_lock with
5543 name being transformed into a variable declared like
5546 omp_lock_t gomp_critical_user_<name> __attribute__((common))
5549 Ideally the ABI would specify that all zero is a valid unlocked
5550 state, and so we wouldn't need to initialize this at
5555 @node Implementing ATOMIC construct
5556 @section Implementing ATOMIC construct
5558 The target should implement the @code{__sync} builtins.
5560 Failing that we could add
5563 void GOMP_atomic_enter (void)
5564 void GOMP_atomic_exit (void)
5567 which reuses the regular lock code, but with yet another lock
5568 object private to the library.
5572 @node Implementing FLUSH construct
5573 @section Implementing FLUSH construct
5575 Expands to the @code{__sync_synchronize} builtin.
5579 @node Implementing BARRIER construct
5580 @section Implementing BARRIER construct
5583 void GOMP_barrier (void)
5587 @node Implementing THREADPRIVATE construct
5588 @section Implementing THREADPRIVATE construct
5590 In _most_ cases we can map this directly to @code{__thread}. Except
5591 that OMP allows constructors for C++ objects. We can either
5592 refuse to support this (how often is it used?) or we can
5593 implement something akin to .ctors.
5595 Even more ideally, this ctor feature is handled by extensions
5596 to the main pthreads library. Failing that, we can have a set
5597 of entry points to register ctor functions to be called.
5601 @node Implementing PRIVATE clause
5602 @section Implementing PRIVATE clause
5604 In association with a PARALLEL, or within the lexical extent
5605 of a PARALLEL block, the variable becomes a local variable in
5606 the parallel subfunction.
5608 In association with FOR or SECTIONS blocks, create a new
5609 automatic variable within the current function. This preserves
5610 the semantic of new variable creation.
5614 @node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
5615 @section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
5617 This seems simple enough for PARALLEL blocks. Create a private
5618 struct for communicating between the parent and subfunction.
5619 In the parent, copy in values for scalar and "small" structs;
5620 copy in addresses for others TREE_ADDRESSABLE types. In the
5621 subfunction, copy the value into the local variable.
5623 It is not clear what to do with bare FOR or SECTION blocks.
5624 The only thing I can figure is that we do something like:
5627 #pragma omp for firstprivate(x) lastprivate(y)
5628 for (int i = 0; i < n; ++i)
5645 where the "x=x" and "y=y" assignments actually have different
5646 uids for the two variables, i.e. not something you could write
5647 directly in C. Presumably this only makes sense if the "outer"
5648 x and y are global variables.
5650 COPYPRIVATE would work the same way, except the structure
5651 broadcast would have to happen via SINGLE machinery instead.
5655 @node Implementing REDUCTION clause
5656 @section Implementing REDUCTION clause
5658 The private struct mentioned in the previous section should have
5659 a pointer to an array of the type of the variable, indexed by the
5660 thread's @var{team_id}. The thread stores its final value into the
5661 array, and after the barrier, the primary thread iterates over the
5662 array to collect the values.
5665 @node Implementing PARALLEL construct
5666 @section Implementing PARALLEL construct
5669 #pragma omp parallel
5678 void subfunction (void *data)
5685 GOMP_parallel_start (subfunction, &data, num_threads);
5686 subfunction (&data);
5687 GOMP_parallel_end ();
5691 void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
5694 The @var{FN} argument is the subfunction to be run in parallel.
5696 The @var{DATA} argument is a pointer to a structure used to
5697 communicate data in and out of the subfunction, as discussed
5698 above with respect to FIRSTPRIVATE et al.
5700 The @var{NUM_THREADS} argument is 1 if an IF clause is present
5701 and false, or the value of the NUM_THREADS clause, if
5704 The function needs to create the appropriate number of
5705 threads and/or launch them from the dock. It needs to
5706 create the team structure and assign team ids.
5709 void GOMP_parallel_end (void)
5712 Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
5716 @node Implementing FOR construct
5717 @section Implementing FOR construct
5720 #pragma omp parallel for
5721 for (i = lb; i <= ub; i++)
5728 void subfunction (void *data)
5731 while (GOMP_loop_static_next (&_s0, &_e0))
5734 for (i = _s0; i < _e1; i++)
5737 GOMP_loop_end_nowait ();
5740 GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
5742 GOMP_parallel_end ();
5746 #pragma omp for schedule(runtime)
5747 for (i = 0; i < n; i++)
5756 if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
5759 for (i = _s0, i < _e0; i++)
5761 @} while (GOMP_loop_runtime_next (&_s0, _&e0));
5766 Note that while it looks like there is trickiness to propagating
5767 a non-constant STEP, there isn't really. We're explicitly allowed
5768 to evaluate it as many times as we want, and any variables involved
5769 should automatically be handled as PRIVATE or SHARED like any other
5770 variables. So the expression should remain evaluable in the
5771 subfunction. We can also pull it into a local variable if we like,
5772 but since its supposed to remain unchanged, we can also not if we like.
5774 If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
5775 able to get away with no work-sharing context at all, since we can
5776 simply perform the arithmetic directly in each thread to divide up
5777 the iterations. Which would mean that we wouldn't need to call any
5780 There are separate routines for handling loops with an ORDERED
5781 clause. Bookkeeping for that is non-trivial...
5785 @node Implementing ORDERED construct
5786 @section Implementing ORDERED construct
5789 void GOMP_ordered_start (void)
5790 void GOMP_ordered_end (void)
5795 @node Implementing SECTIONS construct
5796 @section Implementing SECTIONS construct
5801 #pragma omp sections
5815 for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
5832 @node Implementing SINGLE construct
5833 @section Implementing SINGLE construct
5847 if (GOMP_single_start ())
5855 #pragma omp single copyprivate(x)
5862 datap = GOMP_single_copy_start ();
5867 GOMP_single_copy_end (&data);
5876 @node Implementing OpenACC's PARALLEL construct
5877 @section Implementing OpenACC's PARALLEL construct
5880 void GOACC_parallel ()
5885 @c ---------------------------------------------------------------------
5887 @c ---------------------------------------------------------------------
5889 @node Reporting Bugs
5890 @chapter Reporting Bugs
5892 Bugs in the GNU Offloading and Multi Processing Runtime Library should
5893 be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}. Please add
5894 "openacc", or "openmp", or both to the keywords field in the bug
5895 report, as appropriate.
5899 @c ---------------------------------------------------------------------
5900 @c GNU General Public License
5901 @c ---------------------------------------------------------------------
5903 @include gpl_v3.texi
5907 @c ---------------------------------------------------------------------
5908 @c GNU Free Documentation License
5909 @c ---------------------------------------------------------------------
5915 @c ---------------------------------------------------------------------
5916 @c Funding Free Software
5917 @c ---------------------------------------------------------------------
5919 @include funding.texi
5921 @c ---------------------------------------------------------------------
5923 @c ---------------------------------------------------------------------
5926 @unnumbered Library Index