\input texinfo @c -*-texinfo-*-

@c %**start of header
@setfilename libgomp.info
@settitle GNU libgomp
@c %**end of header


@copying
Copyright @copyright{} 2006-2023 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.
@end copying

@ifinfo
@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU Offloading and Multi Processing Runtime Library.
@end direntry

This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library.  This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA

@insertcopying
@end ifinfo


@setchapternewpage odd

@titlepage
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@page
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
@sp 1
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@sp 1
@insertcopying
@end titlepage

@summarycontents
@contents
@page


@node Top, Enabling OpenMP
@top Introduction
@cindex Introduction

This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library.  This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
and Fortran.

Originally, libgomp implemented the GNU OpenMP Runtime Library.  Support
for OpenACC and offloading (both OpenACC and OpenMP 4's @code{target}
construct) was added later, and the library was renamed to the GNU
Offloading and Multi Processing Runtime Library.



@comment
@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.
@comment
@menu
* Enabling OpenMP::                How to enable OpenMP for your applications.
* OpenMP Implementation Status::   List of implemented features by OpenMP
                                   version.
* OpenMP Runtime Library Routines: Runtime Library Routines.
                                   The OpenMP runtime application programming
                                   interface.
* OpenMP Environment Variables: Environment Variables.
                                   Influencing OpenMP runtime behavior with
                                   environment variables.
* Enabling OpenACC::               How to enable OpenACC for your
                                   applications.
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                                   programming interface.
* OpenACC Environment Variables::  Influencing OpenACC runtime behavior with
                                   environment variables.
* CUDA Streams Usage::             Notes on the implementation of
                                   asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                                   NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
                                   implementation.
* Offload-Target Specifics::       Notes on offload-target specific internals.
* The libgomp ABI::                Notes on the external ABI presented by
                                   libgomp.
* Reporting Bugs::                 How to report bugs in the GNU Offloading and
                                   Multi Processing Runtime Library.
* Copying::                        The GNU General Public License says how you
                                   can copy and share libgomp.
* GNU Free Documentation License::
                                   How you can copy and share this manual.
* Funding::                        How to help assure continued work for free
                                   software.
* Library Index::                  Index of this documentation.
@end menu


@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenmp} must be specified.  For C/C++, this enables the
@code{#pragma omp} directives.  For Fortran, it enables @code{!$omp}
directives in free source form; @code{c$omp}, @code{*$omp} and @code{!$omp}
directives in fixed source form; @code{!$} conditional compilation sentinels
in free source form; and @code{c$}, @code{*$} and @code{!$} sentinels in
fixed source form.  The flag also arranges for automatic linking of the
OpenMP runtime library (@ref{Runtime Library Routines}).

A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.


@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------

@node OpenMP Implementation Status
@chapter OpenMP Implementation Status

@menu
* OpenMP 4.5::             Feature completion status to 4.5 specification
* OpenMP 5.0::             Feature completion status to 5.0 specification
* OpenMP 5.1::             Feature completion status to 5.1 specification
* OpenMP 5.2::             Feature completion status to 5.2 specification
* OpenMP Technical Report 11::  Feature completion status to first 6.0 preview
@end menu

The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e.@: OpenMP 4.5).

@node OpenMP 4.5
@section OpenMP 4.5

The OpenMP 4.5 specification is fully supported.

@node OpenMP 5.0
@section OpenMP 5.0

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
      @tab P @tab @emph{simd} traits not handled correctly
@item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
      env variable @tab Y @tab
@item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
      @tab complete but no non-host device provides @code{unified_shared_memory}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab P
      @tab Full support for C/C++, partial for Fortran
           (@uref{https://gcc.gnu.org/PR110735,PR110735})
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
      constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab N @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
      @code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
      @code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
      @tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
      and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
      @tab Y @tab
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
      @tab Y @tab
@item Predefined memory spaces, memory allocators, allocator traits
      @tab Y @tab See also @ref{Memory allocation}
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab N @tab
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause @tab Y @tab
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
      @tab N @tab
@item C/C++'s lvalue expressions in @code{to}, @code{from}
      and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
      @tab Y @tab
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and to assign the address of
      device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
      and allocatable components of variables
      @tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
      affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
      routines @tab Y @tab
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.0 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@end multitable


@node OpenMP 5.1
@section OpenMP 5.1

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
      preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
      @code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} and @code{assumes} directives @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
      clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab N @tab
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
      clauses @tab N @tab
@item Indirect calls to the device version of a procedure or function in
      @code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
      clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab Y @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
      @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
      routines @tab Y @tab
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
      runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
      @code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
      @code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
      and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
      @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
      and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
      variables @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.1 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
      clause @tab Y @tab
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized
      to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
      @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form
      @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables
      @tab N @tab
@item @code{present} modifier to the @code{map}, @code{to} and @code{from}
      clauses @tab Y @tab
@end multitable


@node OpenMP 5.2
@section OpenMP 5.2

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
      @tab Y @tab
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
      namespaces @tab N/A
      @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
      sentinel as C/C++ pragma and C++ attributes are warned for with
      @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
      (enabled by default), respectively; for Fortran free-source code, there is
      a warning enabled by default and, for fixed-source code, the @code{omx}
      sentinel is warned for with @code{-Wsurprising} (enabled by
      @code{-Wall}).  Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
      @tab N @tab
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
      @tab N @tab
@item If a matching mapped list item is not found in the data environment, the
      pointer retains its original value @tab Y @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
      @tab Y @tab
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
      @tab Y @tab
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
      allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
      @tab N @tab
@item Deprecation of traits array following the allocator_handle expression in
      @code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
      @tab N @tab
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
      @tab Y @tab
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
      @tab Y @tab
@item New @code{doacross} clause as alias for @code{depend} with
      @code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
      @tab N @tab
@item @code{omp_cur_iteration} keyword @tab Y @tab
@end multitable

@unnumberedsubsec Other new OpenMP 5.2 features

@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
      @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @var{default-device-var} ICV with
      @code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
@item @emph{interop_types} in any position of the modifier list for the
      @code{init} clause of the @code{interop} construct @tab N @tab
@end multitable


@node OpenMP Technical Report 11
@section OpenMP Technical Report 11

Technical Report (TR) 11 is the first preview for OpenMP 6.0.

@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
      @tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
      @tab N @tab
@item @code{_ALL} suffix to the device-scope environment variables
      @tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can be also function reference with
      data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
      @tab N @tab
@item Implicit reduction identifiers of C++ classes
      @tab N @tab
@item Change of the @emph{map-type} property from @emph{ultimate} to
      @emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
      @tab N @tab
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
      @tab N @tab
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
      allocator traits
      @tab N @tab
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
      modifiers of the @code{init} clause
      @tab N @tab
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} code to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
      @tab N @tab
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
      @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@end multitable

@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
      @tab N @tab
@end multitable



@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines

The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.5.  The routines are structured in the following
three parts:

@menu
Control threads, processors and the parallel environment.  They have C
linkage, and do not throw exceptions.

* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation::        Whether cancellation support is enabled
* omp_get_default_device::      Get the default device for target regions
* omp_get_device_num::          Get device that current thread is running on
* omp_get_dynamic::             Dynamic teams setting
* omp_get_initial_device::      Device number of host device
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Current maximum number of active regions
* omp_get_max_task_priority::   Maximum task priority value that can be set
* omp_get_max_teams::           Maximum number of teams for teams region
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_devices::         Number of target devices
* omp_get_num_procs::           Number of processors online
* omp_get_num_teams::           Number of teams
* omp_get_num_threads::         Size of the active team
* omp_get_proc_bind::           Whether threads may be moved between CPUs
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_get_team_num::            Get team number
* omp_get_team_size::           Number of threads in a team
* omp_get_teams_thread_limit::  Maximum number of threads imposed by teams
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_is_initial_device::       Whether executing on the host device
* omp_set_default_device::      Set the default device for target regions
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_teams::           Set upper teams limit for teams region
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method
* omp_set_teams_thread_limit::  Set upper thread limit for teams construct

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock::            Initialize simple lock
* omp_set_lock::             Wait for and set simple lock
* omp_test_lock::            Test and set simple lock if available
* omp_unset_lock::           Unset simple lock
* omp_destroy_lock::         Destroy simple lock
* omp_init_nest_lock::       Initialize nested lock
* omp_set_nest_lock::        Wait for and set nested lock
* omp_test_nest_lock::       Test and set nested lock if available
* omp_unset_nest_lock::      Unset nested lock
* omp_destroy_nest_lock::    Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick::            Get timer precision.
* omp_get_wtime::            Elapsed wall clock time.

Support for event objects.

* omp_fulfill_event::        Fulfill and destroy an OpenMP event.
@end menu



@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
that enclose the call.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
@end table


@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread.  For values of @var{level} outside
the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item                   @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
@end table


@node omp_get_cancellation
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
@table @asis
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise.  Here, @code{true} and @code{false} represent their language-specific
counterparts.  Unless @env{OMP_CANCELLATION} is set true, cancellations are
deactivated.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@end multitable

@item @emph{See also}:
@ref{OMP_CANCELLATION}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@end table


@node omp_get_default_device
@section @code{omp_get_default_device} -- Get the default device for target regions
@table @asis
@item @emph{Description}:
Get the default device for target regions without device clause.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
@end table


@node omp_get_device_num
@section @code{omp_get_device_num} -- Return device number of current device
@table @asis
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on.  For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
from the host.

@item @emph{C/C++}
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_initial_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@end table
706
707
708
709@node omp_get_dynamic
710@section @code{omp_get_dynamic} -- Dynamic teams setting
711@table @asis
712@item @emph{Description}:
713This function returns @code{true} if the dynamic adjustment of the
number of threads is enabled, @code{false} otherwise.
714Here, @code{true} and @code{false} represent their language-specific
715counterparts.
716
717The dynamic team setting may be initialized at startup by the
718@env{OMP_DYNAMIC} environment variable or at runtime using
719@code{omp_set_dynamic}. If undefined, dynamic adjustment is
720disabled by default.
721
722@item @emph{C/C++}:
723@multitable @columnfractions .20 .80
724@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
725@end multitable
726
727@item @emph{Fortran}:
728@multitable @columnfractions .20 .80
729@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
730@end multitable
731
732@item @emph{See also}:
733@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
734
735@item @emph{Reference}:
736@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
737@end table
738
739
740
741@node omp_get_initial_device
742@section @code{omp_get_initial_device} -- Return device number of initial device
743@table @asis
744@item @emph{Description}:
745This function returns a device number that represents the host device.
746For OpenMP 5.1, this must be equal to the value returned by the
747@code{omp_get_num_devices} function.
748
749@item @emph{C/C++}
750@multitable @columnfractions .20 .80
751@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
752@end multitable
753
754@item @emph{Fortran}:
755@multitable @columnfractions .20 .80
756@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
757@end multitable
758
759@item @emph{See also}:
760@ref{omp_get_num_devices}
761
762@item @emph{Reference}:
763@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
764@end table
765
766
767
768@node omp_get_level
769@section @code{omp_get_level} -- Obtain the current nesting level
770@table @asis
771@item @emph{Description}:
772This function returns the nesting level of the parallel regions
773enclosing the call.
774
775@item @emph{C/C++}
776@multitable @columnfractions .20 .80
777@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
778@end multitable
779
780@item @emph{Fortran}:
781@multitable @columnfractions .20 .80
782@item @emph{Interface}: @tab @code{integer function omp_get_level()}
783@end multitable
784
785@item @emph{See also}:
786@ref{omp_get_active_level}
787
788@item @emph{Reference}:
789@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
790@end table
791
792
793
794@node omp_get_max_active_levels
795@section @code{omp_get_max_active_levels} -- Current maximum number of active regions
796@table @asis
797@item @emph{Description}:
798This function obtains the maximum allowed number of nested, active parallel regions.
799
800@item @emph{C/C++}
801@multitable @columnfractions .20 .80
802@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
803@end multitable
804
805@item @emph{Fortran}:
806@multitable @columnfractions .20 .80
807@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
808@end multitable
809
810@item @emph{See also}:
811@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
812
813@item @emph{Reference}:
814@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
815@end table
816
817
818@node omp_get_max_task_priority
819@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
821@table @asis
822@item @emph{Description}:
823This function obtains the maximum allowed priority number for tasks.
824
825@item @emph{C/C++}
826@multitable @columnfractions .20 .80
827@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
828@end multitable
829
830@item @emph{Fortran}:
831@multitable @columnfractions .20 .80
832@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
833@end multitable
834
835@item @emph{Reference}:
836@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
837@end table
838
839
840@node omp_get_max_teams
841@section @code{omp_get_max_teams} -- Maximum number of teams of teams region
842@table @asis
843@item @emph{Description}:
844Return the maximum number of teams used for a teams region
845that does not use the @code{num_teams} clause.
846
847@item @emph{C/C++}:
848@multitable @columnfractions .20 .80
849@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
850@end multitable
851
852@item @emph{Fortran}:
853@multitable @columnfractions .20 .80
854@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
855@end multitable
856
857@item @emph{See also}:
858@ref{omp_set_num_teams}, @ref{omp_get_num_teams}
859
860@item @emph{Reference}:
861@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
862@end table
863
864
865
866@node omp_get_max_threads
867@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
868@table @asis
869@item @emph{Description}:
870Return the maximum number of threads used for a parallel region
871that does not use the @code{num_threads} clause.
872
873@item @emph{C/C++}:
874@multitable @columnfractions .20 .80
875@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
876@end multitable
877
878@item @emph{Fortran}:
879@multitable @columnfractions .20 .80
880@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
881@end multitable
882
883@item @emph{See also}:
884@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
885
886@item @emph{Reference}:
887@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
888@end table
889
890
891
892@node omp_get_nested
893@section @code{omp_get_nested} -- Nested parallel regions
894@table @asis
895@item @emph{Description}:
896This function returns @code{true} if nested parallel regions are
897enabled, @code{false} otherwise. Here, @code{true} and @code{false}
898represent their language-specific counterparts.
899
900The state of nested parallel regions at startup depends on several
901environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
902and is set to greater than one, then nested parallel regions will be
903enabled. If not defined, then the value of the @env{OMP_NESTED}
904environment variable will be followed if defined. If neither are
905defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
906are defined with a list of more than one value, then nested parallel
907regions are enabled. If none of these are defined, then nested parallel
908regions are disabled by default.
909
910Nested parallel regions can be enabled or disabled at runtime using
911@code{omp_set_nested}, or by setting the maximum number of nested
912regions with @code{omp_set_max_active_levels} to one to disable, or
913above one to enable.
914
914
915Note that the @code{omp_get_nested} API routine was deprecated
916in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
917
918@item @emph{C/C++}:
919@multitable @columnfractions .20 .80
920@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
921@end multitable
922
923@item @emph{Fortran}:
924@multitable @columnfractions .20 .80
925@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
926@end multitable
927
928@item @emph{See also}:
929@ref{omp_get_max_active_levels}, @ref{omp_set_nested},
930@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
931
932@item @emph{Reference}:
933@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
934@end table
935
936
937
938@node omp_get_num_devices
939@section @code{omp_get_num_devices} -- Number of target devices
940@table @asis
941@item @emph{Description}:
942Returns the number of target devices.
943
944@item @emph{C/C++}:
945@multitable @columnfractions .20 .80
946@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
947@end multitable
948
949@item @emph{Fortran}:
950@multitable @columnfractions .20 .80
951@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
952@end multitable
953
954@item @emph{Reference}:
955@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
956@end table
957
958
959
960@node omp_get_num_procs
961@section @code{omp_get_num_procs} -- Number of processors online
962@table @asis
963@item @emph{Description}:
964Returns the number of processors online on the device the thread is
executing on.
965
966@item @emph{C/C++}:
967@multitable @columnfractions .20 .80
968@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
969@end multitable
970
971@item @emph{Fortran}:
972@multitable @columnfractions .20 .80
973@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
974@end multitable
975
976@item @emph{Reference}:
977@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
978@end table
979
980
981
982@node omp_get_num_teams
983@section @code{omp_get_num_teams} -- Number of teams
984@table @asis
985@item @emph{Description}:
986Returns the number of teams in the current teams region.
987
988@item @emph{C/C++}:
989@multitable @columnfractions .20 .80
990@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
991@end multitable
992
993@item @emph{Fortran}:
994@multitable @columnfractions .20 .80
995@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
996@end multitable
997
998@item @emph{Reference}:
999@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
1000@end table
1001
1002
1003
1004@node omp_get_num_threads
1005@section @code{omp_get_num_threads} -- Size of the active team
1006@table @asis
1007@item @emph{Description}:
1008Returns the number of threads in the current team. In a sequential section of
1009the program @code{omp_get_num_threads} returns 1.
1010
1011The default team size may be initialized at startup by the
1012@env{OMP_NUM_THREADS} environment variable. At runtime, the size
1013of the current team may be set either by the @code{num_threads}
1014clause or by @code{omp_set_num_threads}. If none of the above were
1015used to define a specific value and @env{OMP_DYNAMIC} is disabled,
1016one thread per CPU online is used.
1017
1018@item @emph{C/C++}:
1019@multitable @columnfractions .20 .80
1020@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
1021@end multitable
1022
1023@item @emph{Fortran}:
1024@multitable @columnfractions .20 .80
1025@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
1026@end multitable
1027
1028@item @emph{See also}:
1029@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
1030
1031@item @emph{Reference}:
1032@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
1033@end table
1034
1035
1036
1037@node omp_get_proc_bind
1038@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
1039@table @asis
1040@item @emph{Description}:
1041This function returns the currently active thread affinity policy, which is
1042set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
1043@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
1044@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
1045where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
1046
1047@item @emph{C/C++}:
1048@multitable @columnfractions .20 .80
1049@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
1050@end multitable
1051
1052@item @emph{Fortran}:
1053@multitable @columnfractions .20 .80
1054@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
1055@end multitable
1056
1057@item @emph{See also}:
1058@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}
1059
1060@item @emph{Reference}:
1061@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
1062@end table
1063
1064
1065
1066@node omp_get_schedule
1067@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
1068@table @asis
1069@item @emph{Description}:
1070Obtain the runtime scheduling method. The @var{kind} argument will be
1071set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
1072@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
1073@var{chunk_size}, is set to the chunk size.
1074
1075@item @emph{C/C++}
1076@multitable @columnfractions .20 .80
1077@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
1078@end multitable
1079
1080@item @emph{Fortran}:
1081@multitable @columnfractions .20 .80
1082@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
1083@item @tab @code{integer(kind=omp_sched_kind) kind}
1084@item @tab @code{integer chunk_size}
1085@end multitable
1086
1087@item @emph{See also}:
1088@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
1089
1090@item @emph{Reference}:
1091@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
1092@end table
1093
1094
1095@node omp_get_supported_active_levels
1096@section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
1097@table @asis
1098@item @emph{Description}:
1099This function returns the maximum number of nested, active parallel regions
1100supported by this implementation.
1101
1102@item @emph{C/C++}
1103@multitable @columnfractions .20 .80
1104@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
1105@end multitable
1106
1107@item @emph{Fortran}:
1108@multitable @columnfractions .20 .80
1109@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
1110@end multitable
1111
1112@item @emph{See also}:
1113@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1114
1115@item @emph{Reference}:
1116@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
1117@end table
1118
1119
1120
1121@node omp_get_team_num
1122@section @code{omp_get_team_num} -- Get team number
1123@table @asis
1124@item @emph{Description}:
1125Returns the team number of the calling thread.
1126
1127@item @emph{C/C++}:
1128@multitable @columnfractions .20 .80
1129@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
1130@end multitable
1131
1132@item @emph{Fortran}:
1133@multitable @columnfractions .20 .80
1134@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
1135@end multitable
1136
1137@item @emph{Reference}:
1138@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
1139@end table
1140
1141
1142
1143@node omp_get_team_size
1144@section @code{omp_get_team_size} -- Number of threads in a team
1145@table @asis
1146@item @emph{Description}:
1147This function returns the number of threads in a thread team to which
1148either the current thread or its ancestor belongs. For values of @var{level}
1149outside the range of zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
11501 is returned, and for @code{omp_get_level}, the result is identical
1151to @code{omp_get_num_threads}.
1152
1153@item @emph{C/C++}:
1154@multitable @columnfractions .20 .80
1155@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
1156@end multitable
1157
1158@item @emph{Fortran}:
1159@multitable @columnfractions .20 .80
1160@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
1161@item @tab @code{integer level}
1162@end multitable
1163
1164@item @emph{See also}:
1165@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
1166
1167@item @emph{Reference}:
1168@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
1169@end table
1170
1171
1172
1173@node omp_get_teams_thread_limit
1174@section @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
1175@table @asis
1176@item @emph{Description}:
1177Return the maximum number of threads that will be able to participate in
1178each team created by a teams construct.
1179
1180@item @emph{C/C++}:
1181@multitable @columnfractions .20 .80
1182@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
1183@end multitable
1184
1185@item @emph{Fortran}:
1186@multitable @columnfractions .20 .80
1187@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
1188@end multitable
1189
1190@item @emph{See also}:
1191@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
1192
1193@item @emph{Reference}:
1194@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
1195@end table
1196
1197
1198
1199@node omp_get_thread_limit
1200@section @code{omp_get_thread_limit} -- Maximum number of threads
1201@table @asis
1202@item @emph{Description}:
1203Return the maximum number of OpenMP threads available to the program.
1204
1205@item @emph{C/C++}:
1206@multitable @columnfractions .20 .80
1207@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
1208@end multitable
1209
1210@item @emph{Fortran}:
1211@multitable @columnfractions .20 .80
1212@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
1213@end multitable
1214
1215@item @emph{See also}:
1216@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
1217
1218@item @emph{Reference}:
1219@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
1220@end table
1221
1222
1223
1224@node omp_get_thread_num
1225@section @code{omp_get_thread_num} -- Current thread ID
1226@table @asis
1227@item @emph{Description}:
1228Returns a unique thread identification number within the current team.
1229In sequential parts of the program, @code{omp_get_thread_num}
1230always returns 0. In parallel regions the return value varies
1231from 0 to @code{omp_get_num_threads}-1 inclusive. The return
1232value of the primary thread of a team is always 0.
1233
1234@item @emph{C/C++}:
1235@multitable @columnfractions .20 .80
1236@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
1237@end multitable
1238
1239@item @emph{Fortran}:
1240@multitable @columnfractions .20 .80
1241@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
1242@end multitable
1243
1244@item @emph{See also}:
1245@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
1246
1247@item @emph{Reference}:
1248@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
1249@end table
1250
1251
1252
1253@node omp_in_parallel
1254@section @code{omp_in_parallel} -- Whether a parallel region is active
1255@table @asis
1256@item @emph{Description}:
1257This function returns @code{true} if currently running in parallel,
1258@code{false} otherwise. Here, @code{true} and @code{false} represent
1259their language-specific counterparts.
1260
1261@item @emph{C/C++}:
1262@multitable @columnfractions .20 .80
1263@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
1264@end multitable
1265
1266@item @emph{Fortran}:
1267@multitable @columnfractions .20 .80
1268@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
1269@end multitable
1270
1271@item @emph{Reference}:
1272@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
1273@end table
1274
1275
1276@node omp_in_final
1277@section @code{omp_in_final} -- Whether in final or included task region
1278@table @asis
1279@item @emph{Description}:
1280This function returns @code{true} if currently running in a final
1281or included task region, @code{false} otherwise. Here, @code{true}
1282and @code{false} represent their language-specific counterparts.
1283
1284@item @emph{C/C++}:
1285@multitable @columnfractions .20 .80
1286@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
1287@end multitable
1288
1289@item @emph{Fortran}:
1290@multitable @columnfractions .20 .80
1291@item @emph{Interface}: @tab @code{logical function omp_in_final()}
1292@end multitable
1293
1294@item @emph{Reference}:
1295@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
1296@end table
1297
1298
1299
1300@node omp_is_initial_device
1301@section @code{omp_is_initial_device} -- Whether executing on the host device
1302@table @asis
1303@item @emph{Description}:
1304This function returns @code{true} if currently running on the host device,
1305@code{false} otherwise. Here, @code{true} and @code{false} represent
1306their language-specific counterparts.
1307
1308@item @emph{C/C++}:
1309@multitable @columnfractions .20 .80
1310@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
1311@end multitable
1312
1313@item @emph{Fortran}:
1314@multitable @columnfractions .20 .80
1315@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
1316@end multitable
1317
1318@item @emph{Reference}:
1319@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
1320@end table
1321
1322
1323
1324@node omp_set_default_device
1325@section @code{omp_set_default_device} -- Set the default device for target regions
1326@table @asis
1327@item @emph{Description}:
1328Set the default device for target regions without a device clause. The argument
1329shall be a nonnegative device number.
1330
1331@item @emph{C/C++}:
1332@multitable @columnfractions .20 .80
1333@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
1334@end multitable
1335
1336@item @emph{Fortran}:
1337@multitable @columnfractions .20 .80
1338@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
1339@item @tab @code{integer device_num}
1340@end multitable
1341
1342@item @emph{See also}:
1343@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
1344
1345@item @emph{Reference}:
1346@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1347@end table
1348
1349
1350
1351@node omp_set_dynamic
1352@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
1353@table @asis
1354@item @emph{Description}:
1355Enable or disable the dynamic adjustment of the number of threads
1356within a team. The function takes the language-specific equivalent
1357of @code{true} and @code{false}, where @code{true} enables dynamic
1358adjustment of team sizes and @code{false} disables it.
1359
1360@item @emph{C/C++}:
1361@multitable @columnfractions .20 .80
1362@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
1363@end multitable
1364
1365@item @emph{Fortran}:
1366@multitable @columnfractions .20 .80
1367@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
1368@item @tab @code{logical, intent(in) :: dynamic_threads}
1369@end multitable
1370
1371@item @emph{See also}:
1372@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
1373
1374@item @emph{Reference}:
1375@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
1376@end table
1377
1378
1379
1380@node omp_set_max_active_levels
1381@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
1382@table @asis
1383@item @emph{Description}:
1384This function limits the maximum allowed number of nested, active
1385parallel regions. @var{max_levels} must be less or equal to
1386the value returned by @code{omp_get_supported_active_levels}.
1387
1388@item @emph{C/C++}
1389@multitable @columnfractions .20 .80
1390@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
1391@end multitable
1392
1393@item @emph{Fortran}:
1394@multitable @columnfractions .20 .80
1395@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
1396@item @tab @code{integer max_levels}
1397@end multitable
1398
1399@item @emph{See also}:
1400@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
1401@ref{omp_get_supported_active_levels}
1402
1403@item @emph{Reference}:
1404@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
1405@end table
1406
1407
1408
1409@node omp_set_nested
1410@section @code{omp_set_nested} -- Enable/disable nested parallel regions
1411@table @asis
1412@item @emph{Description}:
1413Enable or disable nested parallel regions, i.e., whether team members
1414are allowed to create new teams. The function takes the language-specific
1415equivalent of @code{true} and @code{false}, where @code{true} enables
1416nested parallel regions and @code{false} disables them.
1417
1418Enabling nested parallel regions will also set the maximum number of
1419active nested regions to the maximum supported. Disabling nested parallel
1420regions will set the maximum number of active nested regions to one.
1421
2cd0689a
TB
1422Note that the @code{omp_set_nested} API routine was deprecated
1423in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
1424
1425@item @emph{C/C++}:
1426@multitable @columnfractions .20 .80
1427@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
1428@end multitable
1429
1430@item @emph{Fortran}:
1431@multitable @columnfractions .20 .80
1432@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
1433@item @tab @code{logical, intent(in) :: nested}
1434@end multitable
1435
1436@item @emph{See also}:
1437@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
1438@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
1439
1440@item @emph{Reference}:
1441@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
1442@end table
1443
1444
1445
1446@node omp_set_num_teams
1447@section @code{omp_set_num_teams} -- Set upper teams limit for teams construct
1448@table @asis
1449@item @emph{Description}:
1450Specifies the upper bound for number of teams created by the teams construct
1451which does not specify a @code{num_teams} clause. The
1452argument of @code{omp_set_num_teams} shall be a positive integer.
1453
1454@item @emph{C/C++}:
1455@multitable @columnfractions .20 .80
1456@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
1457@end multitable
1458
1459@item @emph{Fortran}:
1460@multitable @columnfractions .20 .80
1461@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
1462@item @tab @code{integer, intent(in) :: num_teams}
1463@end multitable
1464
1465@item @emph{See also}:
1466@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
1467
1468@item @emph{Reference}:
1469@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
1470@end table
1471
1472
1473
1474@node omp_set_num_threads
1475@section @code{omp_set_num_threads} -- Set upper team size limit
1476@table @asis
1477@item @emph{Description}:
1478Specifies the number of threads used by default in subsequent parallel
1479regions, if those do not specify a @code{num_threads} clause. The
1480argument of @code{omp_set_num_threads} shall be a positive integer.
1481
1482@item @emph{C/C++}:
1483@multitable @columnfractions .20 .80
1484@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
1485@end multitable
1486
1487@item @emph{Fortran}:
1488@multitable @columnfractions .20 .80
1489@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
1490@item @tab @code{integer, intent(in) :: num_threads}
1491@end multitable
1492
1493@item @emph{See also}:
1494@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
1495
1496@item @emph{Reference}:
1497@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
1498@end table
1499
1500
1501
1502@node omp_set_schedule
1503@section @code{omp_set_schedule} -- Set the runtime scheduling method
1504@table @asis
1505@item @emph{Description}:
1506Sets the runtime scheduling method. The @var{kind} argument can have the
1507value @code{omp_sched_static}, @code{omp_sched_dynamic},
1508@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
1509@code{omp_sched_auto}, the chunk size is set to the value of
1510@var{chunk_size} if positive, or to the default value if zero or negative.
1511For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
1512
1513@item @emph{C/C++}
1514@multitable @columnfractions .20 .80
1515@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
1516@end multitable
1517
1518@item @emph{Fortran}:
1519@multitable @columnfractions .20 .80
1520@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
1521@item @tab @code{integer(kind=omp_sched_kind) kind}
1522@item @tab @code{integer chunk_size}
1523@end multitable
1524
1525@item @emph{See also}:
1526@ref{omp_get_schedule}, @ref{OMP_SCHEDULE}
1528
1529@item @emph{Reference}:
1530@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
1531@end table
1532
1533
1534
1535@node omp_set_teams_thread_limit
1536@section @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
1537@table @asis
1538@item @emph{Description}:
1539Specifies the upper bound for number of threads that will be available
1540for each team created by the teams construct which does not specify a
1541@code{thread_limit} clause. The argument of
1542@code{omp_set_teams_thread_limit} shall be a positive integer.
1543
1544@item @emph{C/C++}:
1545@multitable @columnfractions .20 .80
1546@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
1547@end multitable
1548
1549@item @emph{Fortran}:
1550@multitable @columnfractions .20 .80
1551@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
1552@item @tab @code{integer, intent(in) :: thread_limit}
1553@end multitable
1554
1555@item @emph{See also}:
1556@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
1557
1558@item @emph{Reference}:
1559@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
1560@end table
1561
1562
1563
1564@node omp_init_lock
1565@section @code{omp_init_lock} -- Initialize simple lock
1566@table @asis
1567@item @emph{Description}:
1568Initialize a simple lock. After initialization, the lock is in
1569an unlocked state.
1570
1571@item @emph{C/C++}:
1572@multitable @columnfractions .20 .80
1573@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
1574@end multitable
1575
1576@item @emph{Fortran}:
1577@multitable @columnfractions .20 .80
1578@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
1579@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
1580@end multitable
1581
1582@item @emph{See also}:
1583@ref{omp_destroy_lock}
1584
1585@item @emph{Reference}:
1586@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1587@end table
1588
1589
1590
1591@node omp_set_lock
1592@section @code{omp_set_lock} -- Wait for and set simple lock
1593@table @asis
1594@item @emph{Description}:
1595Before setting a simple lock, the lock variable must be initialized by
1596@code{omp_init_lock}. The calling thread is blocked until the lock
1597is available. If the lock is already held by the current thread,
1598a deadlock occurs.
1599
1600@item @emph{C/C++}:
1601@multitable @columnfractions .20 .80
1602@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
1603@end multitable
1604
1605@item @emph{Fortran}:
1606@multitable @columnfractions .20 .80
1607@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
1608@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1609@end multitable
1610
1611@item @emph{See also}:
1612@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
1613
1614@item @emph{Reference}:
1615@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1616@end table
1617
1618
1619
@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}.  Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available.  This function returns
@code{true} upon success, @code{false} otherwise.  Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table


@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before.  In addition, the lock must be held by the
thread calling @code{omp_unset_lock}.  The lock then becomes unlocked.  If one
or more threads attempted to set the lock before, one of them is chosen to
acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table


@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock.  In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table

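The simple-lock routines above form a create/use/destroy lifecycle.  The
following sketch -- an illustration, not an example from the OpenMP
specification -- guards updates of a shared counter with a simple lock:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
{
  omp_lock_t lock;
  int counter = 0;

  omp_init_lock (&lock);        /* Lock starts out in the unlocked state.  */

#pragma omp parallel num_threads(4)
  {
    omp_set_lock (&lock);       /* Blocks until the lock is available.  */
    counter++;                  /* Protected update of shared data.  */
    omp_unset_lock (&lock);
  }

  omp_destroy_lock (&lock);     /* Only valid while the lock is unlocked.  */
  printf ("counter = %d\n", counter);
  return 0;
}
@end smallexample
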
@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock.  After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table


@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}.  The calling thread is blocked until the lock
is available.  If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table


@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}.  Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is successfully acquired, the new nesting count of the lock
is returned; otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table


@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before.  In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}.  If the nesting count drops to zero,
the lock becomes unlocked.  If one or more threads attempted to set the lock
before, one of them is chosen to acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}, @ref{omp_test_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table


@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock.  In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table

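Because the runtime maintains a nesting count, a nested lock can safely be
re-acquired by code that already holds it, for example when one locked
routine calls another.  The following sketch (the helper name is
illustrative, not from the specification) shows the count in action:

@smallexample
#include <omp.h>
#include <assert.h>

static omp_nest_lock_t nlock;
static int value;

static void
increment (void)               /* Safe to call with or without the lock.  */
{
  omp_set_nest_lock (&nlock);  /* Already owned: only increments the count.  */
  value++;
  omp_unset_nest_lock (&nlock);
}

int
main (void)
{
  omp_init_nest_lock (&nlock);  /* Unlocked, nesting count zero.  */

  omp_set_nest_lock (&nlock);   /* Nesting count becomes 1.  */
  increment ();                 /* Count goes to 2, then back to 1.  */
  omp_unset_nest_lock (&nlock); /* Count drops to 0: lock is released.  */

  assert (value == 1);
  omp_destroy_nest_lock (&nlock);  /* Requires unlocked state, zero count.  */
  return 0;
}
@end smallexample
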
@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@end table


@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time
@table @asis
@item @emph{Description}:
Elapsed wall clock time in seconds.  The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from ``some time in the past'', which is an arbitrary time
guaranteed not to change during the execution of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@end table

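A typical use is to time a code region by differencing two calls made on the
same thread; @code{omp_get_wtick} then indicates the best resolution such a
measurement can have.  A minimal sketch:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
{
  double start = omp_get_wtime ();   /* Reference point on this thread.  */

  double sum = 0.0;                  /* Some work to be timed.  */
  for (int i = 0; i < 1000000; i++)
    sum += i * 0.5;

  double elapsed = omp_get_wtime () - start;  /* Same thread, same clock.  */
  printf ("elapsed: %g s, timer tick: %g s (sum=%g)\n",
          elapsed, omp_get_wtick (), sum);
  return 0;
}
@end smallexample
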
@node omp_fulfill_event
@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
@table @asis
@item @emph{Description}:
Fulfill the event associated with the event handle argument.  Currently, it
is only used to fulfill events generated by detach clauses on task
constructs; the effect of fulfilling the event is to allow the task to
complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a detach clause is undefined.  Calling it with an
event handle that has already been fulfilled is also undefined.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@end table

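With a @code{detach} clause, completion of a task is decoupled from the end
of its body: the task only completes once its associated event has been
fulfilled, possibly from a different thread.  The following sketch assumes a
compiler with OpenMP 5.0 @code{detach} support; for brevity the event is
fulfilled directly, whereas real code would typically fulfill it from an
asynchronous callback:

@smallexample
#include <omp.h>
#include <stdio.h>

int
main (void)
{
  omp_event_handle_t event;

#pragma omp parallel
#pragma omp single
  {
    /* The event handle is written when the task is generated.  */
#pragma omp task detach(event)
    printf ("task body has run; the task completes once fulfilled\n");

    omp_fulfill_event (event);  /* Allows the detached task to complete.  */

#pragma omp taskwait            /* Returns only after the task completed.  */
  }
  return 0;
}
@end smallexample
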
@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

The environment variables beginning with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5 or in a later version
of the specification, while those beginning with @env{GOMP_} are GNU
extensions.  Most @env{OMP_} environment variables have an associated
internal control variable (ICV).

For any OpenMP environment variable that sets an ICV and is neither
@code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
device-specific environment variables exist.  For them, the environment
variable without suffix affects the host.  The suffix @code{_DEV_} followed
by a non-negative device number less than the number of available devices
sets the ICV for the corresponding device.  The suffix @code{_DEV} sets the
ICV of all non-host devices for which a device-specific corresponding
environment variable has not been set, while the @code{_ALL} suffix sets the
ICV of all host and non-host devices for which a more specific corresponding
environment variable is not set.

@menu
* OMP_ALLOCATOR::           Set the default allocator
* OMP_AFFINITY_FORMAT::     Set the format string used for affinity display
* OMP_CANCELLATION::        Set whether cancellation is activated
* OMP_DISPLAY_AFFINITY::    Display thread affinity information
* OMP_DISPLAY_ENV::         Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE::      Set the device used in target regions
* OMP_DYNAMIC::             Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS::   Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY::   Set the maximum task priority value
* OMP_NESTED::              Nested parallel regions
* OMP_NUM_TEAMS::           Specifies the number of teams to use by teams region
* OMP_NUM_THREADS::         Specifies the number of threads to use
* OMP_PROC_BIND::           Whether threads may be moved between CPUs
* OMP_PLACES::              Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE::           Set default thread stack size
* OMP_SCHEDULE::            How threads are scheduled
* OMP_TARGET_OFFLOAD::      Controls offloading behaviour
* OMP_TEAMS_THREAD_LIMIT::  Set the maximum number of threads imposed by teams
* OMP_THREAD_LIMIT::        Set the maximum number of threads
* OMP_WAIT_POLICY::         How waiting threads are handled
* GOMP_CPU_AFFINITY::       Bind threads to specific CPUs
* GOMP_DEBUG::              Enable debugging output
* GOMP_STACKSIZE::          Set default thread stack size
* GOMP_SPINCOUNT::          Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@end menu


@node OMP_ALLOCATOR
@section @env{OMP_ALLOCATOR} -- Set the default allocator
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{def-allocator-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Sets the default allocator that is used when no allocator has been specified
in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
routine is invoked with the @code{omp_null_allocator} allocator.
If unset, @code{omp_default_mem_alloc} is used.

The value can either be a predefined allocator or a predefined memory space
or a predefined memory space followed by a colon and a comma-separated list
of memory trait and value pairs, separated by @code{=}.

Note: The corresponding device environment variables are currently not
supported.  Therefore, the non-host @var{def-allocator-var} ICVs are always
initialized to @code{omp_default_mem_alloc}.  However, on all devices,
the @code{omp_set_default_allocator} API routine can be used to change the
value.

@multitable @columnfractions .45 .45
@headitem Predefined allocators @tab Associated predefined memory spaces
@item omp_default_mem_alloc   @tab omp_default_mem_space
@item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
@item omp_const_mem_alloc     @tab omp_const_mem_space
@item omp_high_bw_mem_alloc   @tab omp_high_bw_mem_space
@item omp_low_lat_mem_alloc   @tab omp_low_lat_mem_space
@item omp_cgroup_mem_alloc    @tab --
@item omp_pteam_mem_alloc     @tab --
@item omp_thread_mem_alloc    @tab --
@end multitable

The predefined allocators use the default values for the traits, as listed
below, except that the last three allocators have the @code{access} trait
set to @code{cgroup}, @code{pteam}, and @code{thread}, respectively.

@multitable @columnfractions .25 .40 .25
@headitem Trait @tab Allowed values @tab Default value
@item @code{sync_hint} @tab @code{contended}, @code{uncontended},
                            @code{serialized}, @code{private}
                       @tab @code{contended}
@item @code{alignment} @tab Positive integer being a power of two
                       @tab 1 byte
@item @code{access}    @tab @code{all}, @code{cgroup},
                            @code{pteam}, @code{thread}
                       @tab @code{all}
@item @code{pool_size} @tab Positive integer
                       @tab See @ref{Memory allocation}
@item @code{fallback}  @tab @code{default_mem_fb}, @code{null_fb},
                            @code{abort_fb}, @code{allocator_fb}
                       @tab See below
@item @code{fb_data}   @tab @emph{unsupported as it needs an allocator handle}
                       @tab (none)
@item @code{pinned}    @tab @code{true}, @code{false}
                       @tab @code{false}
@item @code{partition} @tab @code{environment}, @code{nearest},
                            @code{blocked}, @code{interleaved}
                       @tab @code{environment}
@end multitable

For the @code{fallback} trait, the default value is @code{null_fb} for the
@code{omp_default_mem_alloc} allocator and any allocator that is associated
with device memory; for all other allocators, it is @code{default_mem_fb}
by default.

Examples:
@smallexample
OMP_ALLOCATOR=omp_high_bw_mem_alloc
OMP_ALLOCATOR=omp_large_cap_mem_space
OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
@end smallexample

@item @emph{See also}:
@ref{Memory allocation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
@end table


@node OMP_AFFINITY_FORMAT
@section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{affinity-format-var}
@item @emph{Scope:} device
@item @emph{Description}:
Sets the format string used when displaying OpenMP thread affinity information.
Special values are output using @code{%} followed by an optional size
specification and then either the single-character field type or its long
name enclosed in curly braces; using @code{%%} will display a literal percent.
The size specification consists of an optional @code{0.} or @code{.} followed
by a positive integer, specifying the minimal width of the output.  With
@code{0.} and numerical values, the output is padded with zeros on the left;
with @code{.}, the output is padded by spaces on the left; otherwise, the
output is padded by spaces on the right.  If unset, the value is
``@code{level %L thread %i affinity %A}''.

Supported field types are:

@multitable @columnfractions .10 .25 .60
@item t @tab team_num @tab value returned by @code{omp_get_team_num}
@item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
@item L @tab nesting_level @tab value returned by @code{omp_get_level}
@item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
@item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
@item a @tab ancestor_tnum
      @tab value returned by
           @code{omp_get_ancestor_thread_num(omp_get_level()-1)}
@item H @tab host @tab name of the host that executes the thread
@item P @tab process_id @tab process identifier
@item i @tab native_thread_id @tab native thread identifier
@item A @tab thread_affinity
      @tab comma separated list of integer values or ranges, representing the
           processors on which a process might execute, subject to affinity
           mechanisms
@end multitable

For instance, after setting

@smallexample
OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
@end smallexample

with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
@code{omp_display_affinity} with @code{NULL} or an empty string, the program
might display the following:

@smallexample
00!0!   1!4; 0;01;0;1;0-11
00!3!   1!4; 0;01;0;1;0-11
00!2!   1!4; 0;01;0;1;0-11
00!1!   1!4; 0;01;0;1;0-11
@end smallexample

@item @emph{See also}:
@ref{OMP_DISPLAY_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
@end table


@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{cancel-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{TRUE}, cancellation is activated.  If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@end table


@node OMP_DISPLAY_AFFINITY
@section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{display-affinity-var}
@item @emph{Scope:} global
@item @emph{Description}:
If set to @code{FALSE} or if unset, affinity displaying is disabled.
If set to @code{TRUE}, the runtime will display affinity information about
OpenMP threads in a parallel region upon entering the region and every time
any change occurs.

@item @emph{See also}:
@ref{OMP_AFFINITY_FORMAT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
@end table


@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable
@table @asis
@item @emph{ICV:} none
@item @emph{Scope:} not applicable
@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions.  If undefined or set to @code{FALSE},
this information will not be shown.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@end table


@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{default-device-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause.  The value shall be the nonnegative device number.  If no device with
the given device number exists, the code is executed on the host.  If unset
while @env{OMP_TARGET_OFFLOAD} is @code{mandatory} and no non-host devices
are available, it is set to @code{omp_invalid_device}.  Otherwise, if unset,
device number 0 will be used.

@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
@end table


@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{dyn-var}
@item @emph{Scope:} global
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team.  The value of this environment variable shall be
@code{TRUE} or @code{FALSE}.  If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@end table


@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions.  The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
@ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@end table


@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority
number that can be set for a task.
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{max-task-priority-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task.  The value of this variable shall be a non-negative
integer, and zero is allowed.  If undefined, the default priority is 0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@end table


@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{max-active-levels-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams.  The value of this environment variable
shall be @code{TRUE} or @code{FALSE}.  If set to @code{TRUE}, the maximum
number of active nested regions will by default be set to the
maximum supported, otherwise it will be set to one.  If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
setting.  If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.

Note that the @code{OMP_NESTED} environment variable was deprecated in
the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@end table


@node OMP_NUM_TEAMS
@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{nteams-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause.  The value of this variable
shall be a positive integer.  If undefined, it defaults to 0, which means
an implementation-defined upper bound.

@item @emph{See also}:
@ref{omp_set_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
@end table


@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{nthreads-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions.  The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level.  Specifying more than one item in the list will automatically enable
nesting by default.  If undefined, one thread per CPU is used.

When a list with more than one value is specified, it also affects the
@var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@end table


@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{bind-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies whether threads may be moved between processors.  If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved.  Alternatively, a comma separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level.  With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread.  With @code{CLOSE} those are
kept close to the primary thread in contiguous place partitions.  And
with @code{SPREAD} a sparse distribution across the place partitions is
used.  Specifying more than one item in the list will automatically enable
nesting by default.

When a list is specified, it also affects the @var{max-active-levels-var} ICV
as described in @ref{OMP_MAX_ACTIVE_LEVELS}.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
@ref{OMP_MAX_ACTIVE_LEVELS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@end table

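For instance, a per-level policy can be combined with @env{OMP_PLACES};
the program name below is only a placeholder:

@smallexample
# Outer parallel level spread across places built from cores,
# inner levels kept close to their primary thread:
OMP_PLACES=cores OMP_PROC_BIND=spread,close ./my_openmp_app

# Verify the resulting settings without modifying the program:
OMP_DISPLAY_ENV=TRUE OMP_PROC_BIND=spread,close ./my_openmp_app
@end smallexample
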
2403@node OMP_PLACES
0b9bd33d 2404@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
d77de738
ML
2405@cindex Environment Variable
2406@table @asis
2cd0689a
TB
2407@item @emph{ICV:} @var{place-partition-var}
2408@item @emph{Scope:} implicit tasks
d77de738
ML
@item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of places. The abstract names @code{threads}, @code{cores},
@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
followed by a positive number in parentheses, which denotes how many places
shall be created. With @code{threads} each place corresponds to a single
hardware thread; with @code{cores} to a single core with the corresponding
number of hardware threads; with @code{sockets} the place corresponds to a
single socket; with @code{ll_caches} to a set of cores that share the
last-level cache on the device; and with @code{numa_domains} to a set of
cores for which their closest memory on the device is the same memory and
at a similar distance from the cores. The resulting placement can be shown
by setting the @env{OMP_DISPLAY_ENV} environment variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The curly braces can be omitted
when only a single number has been specified. The hardware threads
belonging to a place can either be specified as a comma-separated list of
nonnegative thread numbers or using an interval. Multiple places can also be
either specified by a comma-separated list of places or by an interval. To
specify an interval, a colon followed by the count is placed after
the hardware thread number or the place. Optionally, the length can be
followed by a colon and the stride number; otherwise a unit stride is
assumed. Placing an exclamation mark (@code{!}) directly before a curly
brace or numbers inside the curly braces (excluding intervals)
excludes those hardware threads.

For instance, the following specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.

If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
between CPUs following no placement policy.

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
@ref{OMP_DISPLAY_ENV}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@end table


@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{stacksize-var}
@item @emph{Scope:} device
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes. This is different from @code{pthread_attr_setstacksize},
which takes the size in bytes as an argument. If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged. If undefined, the stack size is system
dependent.

@item @emph{See also}:
@ref{GOMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
@end table


@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{run-sched-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Allows specifying the schedule type and chunk size.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.

@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
@end table


@node OMP_TARGET_OFFLOAD
@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{ICV:} @var{target-offload-var}
@item @emph{Scope:} global
@item @emph{Description}:
Specifies the behaviour with regard to offloading code to a device. This
variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.

If set to @code{MANDATORY}, the program will terminate with an error if
the offload device is not present or is not supported. If set to
@code{DISABLED}, offloading is disabled and all code will run on the
host. If set to @code{DEFAULT}, the program will try offloading to the
device first, then fall back to running code on the host if it cannot.

If undefined, the program will behave as if @code{DEFAULT} was set.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
@end table


@node OMP_TEAMS_THREAD_LIMIT
@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{teams-thread-limit-var}
@item @emph{Scope:} device
@item @emph{Description}:
Specifies an upper bound for the number of threads used by each contention
group created by a teams construct without an explicit @code{thread_limit}
clause. The value of this variable shall be a positive integer. If undefined,
the value 0 is used, which stands for an implementation-defined upper
limit.

@item @emph{See also}:
@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
@end table


@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable
@table @asis
@item @emph{ICV:} @var{thread-limit-var}
@item @emph{Scope:} data environment
@item @emph{Description}:
Specifies the number of threads to use for the whole program. The
value of this variable shall be a positive integer. If undefined,
the number of threads is not limited.

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
@end table


@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; the value @code{ACTIVE} specifies that
they should. If undefined, threads wait actively for a short time
before waiting passively.

@item @emph{See also}:
@ref{GOMP_SPINCOUNT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
@end table


@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Binds threads to specific CPUs. The variable should contain a space-separated
or comma-separated list of CPUs. This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N)
or a range with some stride (M-N:S). CPU numbers are zero based. For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively and then start assigning back from the beginning of
the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.

There is no libgomp library routine to determine whether a CPU affinity
specification is in effect. As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable. A defined CPU affinity on startup cannot be changed
or disabled during the runtime of the application.

If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence. If neither has been set,
or when @env{OMP_PROC_BIND} is set to @code{FALSE}, the host system
will handle the assignment of threads to CPUs.

@item @emph{See also}:
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
@end table


@node GOMP_DEBUG
@section @env{GOMP_DEBUG} -- Enable debugging output
@cindex Environment Variable
@table @asis
@item @emph{Description}:
Enable debugging output. The variable should be set to @code{0}
(disabled, also the default if not set), or @code{1} (enabled).

If enabled, some debugging output will be printed during execution.
This is currently not specified in more detail, and subject to change.
@end table


@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Set the default thread stack size in kilobytes. This is different from
@code{pthread_attr_setstacksize}, which takes the size in bytes as an
argument. If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged. If undefined,
the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches Mailinglist},
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches Mailinglist}
@end table


@node GOMP_SPINCOUNT
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power,
before waiting passively without consuming CPU power. The value may be
either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
integer may optionally be followed by the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1000 and 100
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.

@item @emph{See also}:
@ref{OMP_WAIT_POLICY}
@end table


@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting
@table @asis
@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools. The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:
@itemize @bullet
@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}. In case a priority
value is omitted, then a worker thread will inherit the priority of the OpenMP
primary thread that created it. The priority of the worker thread is not
changed after creation, even if a new OpenMP primary thread using the worker has
a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.
@end itemize
In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP primary thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
@item @emph{Example}:
Let's suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP primary thread
that created it. In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@end table



@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified. This enables the OpenACC directive
@code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
@code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenACC runtime library
(@ref{OpenACC Runtime Library Routines}).

See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.

A complete description of all OpenACC directives accepted may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.6.


@c ---------------------------------------------------------------------
@c OpenACC Runtime Library Routines
@c ---------------------------------------------------------------------

@node OpenACC Runtime Library Routines
@chapter OpenACC Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenACC
specification in version 2.6.
They have C linkage, and do not throw exceptions.
Generally, they are available only for the host, with the exception of
@code{acc_on_device}, which is available for both the host and the
acceleration device.

@menu
* acc_get_num_devices:: Get number of devices for the given device
                        type.
* acc_set_device_type:: Set type of device accelerator to use.
* acc_get_device_type:: Get type of device accelerator to be used.
* acc_set_device_num:: Set device number to use.
* acc_get_device_num:: Get device number to be used.
* acc_get_property:: Get device property.
* acc_async_test:: Tests for completion of a specific asynchronous
                        operation.
* acc_async_test_all:: Tests for completion of all asynchronous
                        operations.
* acc_wait:: Wait for completion of a specific asynchronous
                        operation.
* acc_wait_all:: Waits for completion of all asynchronous
                        operations.
* acc_wait_all_async:: Wait for completion of all asynchronous
                        operations.
* acc_wait_async:: Wait for completion of asynchronous operations.
* acc_init:: Initialize runtime for a specific device type.
* acc_shutdown:: Shuts down the runtime for a specific device
                        type.
* acc_on_device:: Whether executing on a particular device
* acc_malloc:: Allocate device memory.
* acc_free:: Free device memory.
* acc_copyin:: Allocate device memory and copy host memory to
                        it.
* acc_present_or_copyin:: If the data is not present on the device,
                        allocate device memory and copy from host
                        memory.
* acc_create:: Allocate device memory and map it to host
                        memory.
* acc_present_or_create:: If the data is not present on the device,
                        allocate device memory and map it to host
                        memory.
* acc_copyout:: Copy device memory to host memory.
* acc_delete:: Free device memory.
* acc_update_device:: Update device memory from mapped host memory.
* acc_update_self:: Update host memory from mapped device memory.
* acc_map_data:: Map previously allocated device memory to host
                        memory.
* acc_unmap_data:: Unmap device memory from host memory.
* acc_deviceptr:: Get device pointer associated with specific
                        host address.
* acc_hostptr:: Get host pointer associated with specific
                        device address.
* acc_is_present:: Indicate whether host variable / array is
                        present on device.
* acc_memcpy_to_device:: Copy host memory to device memory.
* acc_memcpy_from_device:: Copy device memory to host memory.
* acc_attach:: Let device pointer point to device-pointer target.
* acc_detach:: Let device pointer point to host-pointer target.

API routines for target platforms.

* acc_get_current_cuda_device:: Get CUDA device handle.
* acc_get_current_cuda_context:: Get CUDA context handle.
* acc_get_cuda_stream:: Get CUDA stream handle.
* acc_set_cuda_stream:: Set CUDA stream handle.

API routines for the OpenACC Profiling Interface.

* acc_prof_register:: Register callbacks.
* acc_prof_unregister:: Unregister callbacks.
* acc_prof_lookup:: Obtain inquiry functions.
* acc_register_library:: Library registration.
@end menu


@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type
@table @asis
@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.1.
@end table


@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.2.
@end table


@node acc_get_device_type
@section @code{acc_get_device_type} -- Get type of device accelerator to be used.
@table @asis
@item @emph{Description}
This function returns what device type will be used when executing a
parallel or kernels region.

This function returns @code{acc_device_none} if
@code{acc_get_device_type} is called from
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
Interface}), that is, if the device is currently being initialized.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_type()}
@item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.3.
@end table


@node acc_set_device_num
@section @code{acc_set_device_num} -- Set device number to use.
@table @asis
@item @emph{Description}
This function indicates to the runtime which device number, specified
by @var{devicenum} and associated with the specified device type
@var{devicetype}, to use.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.4.
@end table


@node acc_get_device_num
@section @code{acc_get_device_num} -- Get device number to be used.
@table @asis
@item @emph{Description}
This function returns which device number, associated with the specified
device type @var{devicetype}, will be used when executing a parallel or
kernels region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer acc_get_device_num}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.5.
@end table


@node acc_get_property
@section @code{acc_get_property} -- Get device property.
@cindex acc_get_property
@cindex acc_get_property_string
@table @asis
@item @emph{Description}
These routines return the value of the specified @var{property} for the
device being queried according to @var{devicenum} and @var{devicetype}.
Integer-valued and string-valued properties are returned by
@code{acc_get_property} and @code{acc_get_property_string} respectively.
The Fortran @code{acc_get_property_string} subroutine returns the string
retrieved in its fourth argument, while the remaining entry points are
functions, which pass the return value as their result.

A note for Fortran only: the OpenACC technical committee corrected and, hence,
modified the interface introduced in OpenACC 2.6. The kind-value parameter
@code{acc_device_property} has been renamed to @code{acc_device_property_kind}
for consistency and the return type of the @code{acc_get_property} function is
now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
The parameter @code{acc_device_property} will continue to be provided,
but might be removed in a future version of GCC.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
@item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
@item @tab @code{use ISO_C_Binding, only: c_size_t}
@item @tab @code{integer devicenum}
@item @tab @code{integer(kind=acc_device_kind) devicetype}
@item @tab @code{integer(kind=acc_device_property_kind) property}
@item @tab @code{integer(kind=c_size_t) acc_get_property}
@item @tab @code{character(*) string}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.6.
@end table


@node acc_async_test
@section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function tests for completion of the asynchronous operation specified
in @var{arg}. In C/C++, a non-zero value is returned to indicate that
the specified asynchronous operation has completed, while Fortran returns
@code{true}. If the asynchronous operation has not completed, C/C++ returns
zero and Fortran returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test(arg)}
@item @tab @code{integer(kind=acc_handle_kind) arg}
@item @tab @code{logical acc_async_test}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.9.
@end table


@node acc_async_test_all
@section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function tests for completion of all asynchronous operations.
In C/C++, a non-zero value is returned to indicate that all asynchronous
operations have completed, while Fortran returns @code{true}. If
any asynchronous operation has not completed, C/C++ returns zero and
Fortran returns @code{false}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_async_test_all()}
@item @tab @code{logical acc_async_test_all}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.10.
@end table


@node acc_wait
@section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
@table @asis
@item @emph{Description}
This function waits for completion of the asynchronous operation
specified in @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait(arg);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
@item @tab @code{integer(acc_handle_kind) arg}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.11.
@end table


@node acc_wait_all
@section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function waits for the completion of all asynchronous operations.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait_all(void);}
@item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
@item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.13.
@end table


@node acc_wait_all_async
@section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on the queue @var{async} for any
and all asynchronous operations that have been previously enqueued on
any queue.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
@item @tab @code{integer(acc_handle_kind) async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.14.
@end table


@node acc_wait_async
@section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
@table @asis
@item @emph{Description}
This function enqueues a wait operation on queue @var{async} for any and all
asynchronous operations enqueued on queue @var{arg}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
@item @tab @code{integer(acc_handle_kind) arg, async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.12.
@end table


3180@node acc_init
3181@section @code{acc_init} -- Initialize runtime for a specific device type.
3182@table @asis
3183@item @emph{Description}
3184This function initializes the runtime for the device type specified in
3185@var{devicetype}.
3186
3187@item @emph{C/C++}:
3188@multitable @columnfractions .20 .80
3189@item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
3190@end multitable
3191
3192@item @emph{Fortran}:
3193@multitable @columnfractions .20 .80
3194@item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
3195@item @tab @code{integer(acc_device_kind) devicetype}
3196@end multitable
3197
3198@item @emph{Reference}:
3199@uref{https://www.openacc.org, OpenACC specification v2.6}, section
32003.2.7.
3201@end table



@node acc_shutdown
@section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
@table @asis
@item @emph{Description}
This function shuts down the runtime for the device type specified in
@var{devicetype}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.8.
@end table



@node acc_on_device
@section @code{acc_on_device} -- Whether executing on a particular device
@table @asis
@item @emph{Description}:
This function returns whether the program is executing on the device type
specified in @var{devicetype}. In C/C++, a non-zero value is returned if
the program is executing on the specified device type, and zero otherwise.
In Fortran, @code{true} is returned in the former case and @code{false}
in the latter.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
@item @tab @code{integer(acc_device_kind) devicetype}
@item @tab @code{logical acc_on_device}
@end multitable


@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.17.
@end table
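
As an illustrative sketch (not an example from the OpenACC specification),
the following C program checks the host side of this behavior:

@smallexample
#include <assert.h>
#include <openacc.h>

int
main (void)
{
  /* Host code is, by definition, executing on the host device...  */
  assert (acc_on_device (acc_device_host));
  /* ...and not on an attached accelerator.  */
  assert (!acc_on_device (acc_device_not_host));
  return 0;
}
@end smallexample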



@node acc_malloc
@section @code{acc_malloc} -- Allocate device memory.
@table @asis
@item @emph{Description}
This function allocates @var{len} bytes of device memory. It returns
the device address of the allocated memory.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.18.
@end table



@node acc_free
@section @code{acc_free} -- Free device memory.
@table @asis
@item @emph{Description}
This function frees previously allocated device memory at the device
address @var{a}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.19.
@end table
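
As a minimal sketch of how @code{acc_malloc} and @code{acc_free} pair up
(on a shared-memory host device this reduces to @code{malloc} and
@code{free}):

@smallexample
#include <assert.h>
#include <stddef.h>
#include <openacc.h>

int
main (void)
{
  /* Allocate 64 bytes of device memory...  */
  void *d = acc_malloc (64);
  assert (d != NULL);
  /* ...and release it again.  */
  acc_free (d);
  return 0;
}
@end smallexample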



@node acc_copyin
@section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
@table @asis
@item @emph{Description}
In C/C++, this function allocates @var{len} bytes of device memory
and maps it to the specified host address in @var{a}. The device
address of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a
variable or array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table
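
For illustration only, a small C program that maps a host array with
@code{acc_copyin} and later copies the data back and removes the mapping
with @code{acc_copyout}:

@smallexample
#include <assert.h>
#include <openacc.h>

int
main (void)
{
  int data[8] = { 0 };
  /* Map DATA and copy its current contents to the device.  */
  void *d = acc_copyin (data, sizeof data);
  assert (d != NULL);
  /* Copy the device copy back and remove the mapping.  */
  acc_copyout (data, sizeof data);
  return 0;
}
@end smallexample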



@node acc_present_or_copyin
@section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
@table @asis
@item @emph{Description}
This function tests whether the host data specified by @var{a} and of length
@var{len} bytes is present on the device. If it is not present, device
memory is allocated and the host memory copied to it. The device address
of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.20.
@end table



@node acc_create
@section @code{acc_create} -- Allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function allocates device memory and maps it to host memory specified
by the host address @var{a} with a length of @var{len} bytes. In C/C++,
the function returns the device address of the allocated device memory.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_present_or_create
@section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
@table @asis
@item @emph{Description}
This function tests whether the host data specified by @var{a} and of length
@var{len} bytes is present on the device. If it is not present, device
memory is allocated and mapped to the host memory. In C/C++, the device
address of the newly allocated device memory is returned.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
@item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.21.
@end table



@node acc_copyout
@section @code{acc_copyout} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
In C/C++, this function copies mapped device memory to the host memory
specified by the host address @var{a} for a length of @var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.22.
@end table



@node acc_delete
@section @code{acc_delete} -- Free device memory.
@table @asis
@item @emph{Description}
This function frees previously allocated device memory associated with
the host address @var{a} and a length of @var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.23.
@end table



@node acc_update_device
@section @code{acc_update_device} -- Update device memory from mapped host memory.
@table @asis
@item @emph{Description}
This function updates the device copy from the previously mapped host memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.24.
@end table



@node acc_update_self
@section @code{acc_update_self} -- Update host memory from mapped device memory.
@table @asis
@item @emph{Description}
This function updates the host copy from the previously mapped device memory.
The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
@item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer(acc_handle_kind) :: async}
@item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{integer(acc_handle_kind) :: async}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.25.
@end table
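
The two update routines are typically used together. The following sketch
(which also works with the shared-memory host device, where the updates are
effectively no-ops) propagates a host-side change to the device copy and
reads it back:

@smallexample
#include <assert.h>
#include <openacc.h>

int
main (void)
{
  int x = 1;
  void *d = acc_copyin (&x, sizeof x);
  assert (d != NULL);
  x = 2;
  /* Propagate the new host value to the device copy...  */
  acc_update_device (&x, sizeof x);
  /* ...and read the device copy back into host memory.  */
  acc_update_self (&x, sizeof x);
  assert (x == 2);
  acc_delete (&x, sizeof x);
  return 0;
}
@end smallexample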



@node acc_map_data
@section @code{acc_map_data} -- Map previously allocated device memory to host memory.
@table @asis
@item @emph{Description}
This function maps previously allocated device and host memory. The device
memory is specified with the device address @var{d}. The host memory is
specified with the host address @var{h} and a length of @var{len} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.26.
@end table
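
As a sketch, @code{acc_map_data} is commonly paired with @code{acc_malloc};
the mapping step is guarded here because it is only meaningful when a
non-shared-memory device is active:

@smallexample
#include <assert.h>
#include <openacc.h>

int
main (void)
{
  int host_buf[4];
  if (acc_get_device_type () != acc_device_host)
    {
      /* Associate freshly allocated device memory with HOST_BUF.  */
      void *dev_buf = acc_malloc (sizeof host_buf);
      assert (dev_buf != NULL);
      acc_map_data (host_buf, dev_buf, sizeof host_buf);
      assert (acc_is_present (host_buf, sizeof host_buf));
      /* Undo the association before freeing the device memory.  */
      acc_unmap_data (host_buf);
      acc_free (dev_buf);
    }
  return 0;
}
@end smallexample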



@node acc_unmap_data
@section @code{acc_unmap_data} -- Unmap device memory from host memory.
@table @asis
@item @emph{Description}
This function unmaps previously mapped device and host memory. The host
memory is specified by the host address @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.27.
@end table



@node acc_deviceptr
@section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
@table @asis
@item @emph{Description}
This function returns the device address that has been mapped to the
host address specified by @var{h}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.28.
@end table



@node acc_hostptr
@section @code{acc_hostptr} -- Get host pointer associated with specific device address.
@table @asis
@item @emph{Description}
This function returns the host address that has been mapped to the
device address specified by @var{d}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.29.
@end table



@node acc_is_present
@section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
@table @asis
@item @emph{Description}
This function indicates whether the host data at address @var{a} and of
length @var{len} bytes is present on the device. In C/C++, a non-zero
value is returned if the mapped memory is present on the device, and zero
if it is not.

In Fortran, two forms are supported. In the first form, @var{a} specifies
a contiguous array section. In the second form, @var{a} specifies a variable or
array element and @var{len} specifies the length in bytes. If the host
memory is mapped to device memory, @code{true} is returned; otherwise,
@code{false} is returned.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{function acc_is_present(a)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{logical acc_is_present}
@item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
@item @tab @code{type, dimension(:[,:]...) :: a}
@item @tab @code{integer len}
@item @tab @code{logical acc_is_present}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.30.
@end table
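
For example (a sketch; on a shared-memory device the data is always
considered present):

@smallexample
#include <assert.h>
#include <openacc.h>

int
main (void)
{
  int a[16];
  acc_copyin (a, sizeof a);
  /* The whole array, and any subrange of it, is now present.  */
  assert (acc_is_present (a, sizeof a));
  assert (acc_is_present (&a[0], sizeof a[0]));
  acc_delete (a, sizeof a);
  return 0;
}
@end smallexample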



@node acc_memcpy_to_device
@section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
@table @asis
@item @emph{Description}
This function copies host memory specified by the host address @var{src}
to device memory specified by the device address @var{dest} for a length
of @var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.31.
@end table



@node acc_memcpy_from_device
@section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
@table @asis
@item @emph{Description}
This function copies device memory specified by the device address @var{src}
to host memory specified by the host address @var{dest} for a length of
@var{bytes} bytes.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.32.
@end table
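
As a sketch, the two memcpy routines can be combined to round-trip a buffer
through device memory:

@smallexample
#include <assert.h>
#include <string.h>
#include <openacc.h>

int
main (void)
{
  const char src[] = "hello";
  char dst[sizeof src];
  void *dev = acc_malloc (sizeof src);
  assert (dev != NULL);
  /* Host -> device, then device -> host.  */
  acc_memcpy_to_device (dev, (void *) src, sizeof src);
  acc_memcpy_from_device (dst, dev, sizeof src);
  assert (memcmp (src, dst, sizeof src) == 0);
  acc_free (dev);
  return 0;
}
@end smallexample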



@node acc_attach
@section @code{acc_attach} -- Let device pointer point to device-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a host-pointer
address to pointing to the corresponding device data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.34.
@end table



@node acc_detach
@section @code{acc_detach} -- Let device pointer point to host-pointer target.
@table @asis
@item @emph{Description}
This function updates a pointer on the device from pointing to a device-pointer
address to pointing to the corresponding host data.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
@item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
3.2.35.
@end table



@node acc_get_current_cuda_device
@section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
@table @asis
@item @emph{Description}
This function returns the CUDA device handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.1.
@end table



@node acc_get_current_cuda_context
@section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
@table @asis
@item @emph{Description}
This function returns the CUDA context handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.2.
@end table



@node acc_get_cuda_stream
@section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
@table @asis
@item @emph{Description}
This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.3.
@end table



@node acc_set_cuda_stream
@section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
@table @asis
@item @emph{Description}
This function associates the stream handle specified by @var{stream} with
the queue @var{async}.

This cannot be used to change the stream handle associated with
@code{acc_async_sync}.

The return value is not specified.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
@end multitable

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
A.2.1.4.
@end table



@node acc_prof_register
@section @code{acc_prof_register} -- Register callbacks.
@table @asis
@item @emph{Description}:
This function registers callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table
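
A minimal sketch of registering and unregistering a callback (the types and
event names are declared in @code{acc_prof.h}; the callback here merely
counts how often it fires, and no particular count is assumed):

@smallexample
#include <assert.h>
#include <openacc.h>
#include <acc_prof.h>

static int count;

static void
cb (acc_prof_info *pi, acc_event_info *ei, acc_api_info *ai)
{
  (void) pi; (void) ei; (void) ai;
  count++;
}

int
main (void)
{
  acc_prof_register (acc_ev_device_init_start, cb, acc_reg);
  acc_init (acc_device_host);
  acc_prof_unregister (acc_ev_device_init_start, cb, acc_reg);
  acc_shutdown (acc_device_host);
  return 0;
}
@end smallexample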



@node acc_prof_unregister
@section @code{acc_prof_unregister} -- Unregister callbacks.
@table @asis
@item @emph{Description}:
This function unregisters callbacks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_prof_lookup
@section @code{acc_prof_lookup} -- Obtain inquiry functions.
@table @asis
@item @emph{Description}:
Function to obtain inquiry functions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@node acc_register_library
@section @code{acc_register_library} -- Library registration.
@table @asis
@item @emph{Description}:
Function for library registration.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
@end multitable

@item @emph{See also}:
@ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
5.3.
@end table



@c ---------------------------------------------------------------------
@c OpenACC Environment Variables
@c ---------------------------------------------------------------------

@node OpenACC Environment Variables
@chapter OpenACC Environment Variables

The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
are defined by section 4 of the OpenACC specification in version 2.0.
The variable @env{ACC_PROFLIB}
is defined by section 4 of the OpenACC specification in version 2.6.
The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.

@menu
* ACC_DEVICE_TYPE::
* ACC_DEVICE_NUM::
* ACC_PROFLIB::
* GCC_ACC_NOTIFY::
@end menu
4038
4039
4040
4041@node ACC_DEVICE_TYPE
4042@section @code{ACC_DEVICE_TYPE}
4043@table @asis
4044@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.1.
4047@end table
4048
4049
4050
4051@node ACC_DEVICE_NUM
4052@section @code{ACC_DEVICE_NUM}
4053@table @asis
4054@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.2.
4057@end table
4058
4059
4060
4061@node ACC_PROFLIB
4062@section @code{ACC_PROFLIB}
4063@table @asis
4064@item @emph{See also}:
4065@ref{acc_register_library}, @ref{OpenACC Profiling Interface}
4066
4067@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
4.3.
4070@end table
4071
4072
4073
4074@node GCC_ACC_NOTIFY
4075@section @code{GCC_ACC_NOTIFY}
4076@table @asis
4077@item @emph{Description}:
4078Print debug information pertaining to the accelerator.
4079@end table
4080
4081
4082
4083@c ---------------------------------------------------------------------
4084@c CUDA Streams Usage
4085@c ---------------------------------------------------------------------
4086
4087@node CUDA Streams Usage
4088@chapter CUDA Streams Usage
4089
4090This applies to the @code{nvptx} plugin only.
4091
4092The library provides elements that perform asynchronous movement of
4093data and asynchronous operation of computing constructs. This
4094asynchronous functionality is implemented by making use of CUDA
4095streams@footnote{See "Stream Management" in "CUDA Driver API",
4096TRM-06703-001, Version 5.5, for additional information}.
4097
The primary means by which the asynchronous functionality is accessed
is through the OpenACC directives that make use of the
@code{async} and @code{wait} clauses. When the @code{async} clause is
first used with a directive, it creates a CUDA stream. If an
@code{async-argument} is used with the @code{async} clause, then the
stream is associated with the specified @code{async-argument}.
4104
4105Following the creation of an association between a CUDA stream and the
4106@code{async-argument} of an @code{async} clause, both the @code{wait}
4107clause and the @code{wait} directive can be used. When either the
4108clause or directive is used after stream creation, it creates a
4109rendezvous point whereby execution waits until all operations
4110associated with the @code{async-argument}, that is, stream, have
4111completed.
4112
Normally, the management of the streams that are created as a result of
using the @code{async} clause is done without any intervention by the
caller. This implies that the association between the @code{async-argument}
4116and the CUDA stream will be maintained for the lifetime of the program.
4117However, this association can be changed through the use of the library
4118function @code{acc_set_cuda_stream}. When the function
4119@code{acc_set_cuda_stream} is called, the CUDA stream that was
4120originally associated with the @code{async} clause will be destroyed.
Caution should be taken when changing the association, as subsequent
references to the @code{async-argument} then refer to a different
CUDA stream.
4124
4125
4126
4127@c ---------------------------------------------------------------------
4128@c OpenACC Library Interoperability
4129@c ---------------------------------------------------------------------
4130
4131@node OpenACC Library Interoperability
4132@chapter OpenACC Library Interoperability
4133
4134@section Introduction
4135
4136The OpenACC library uses the CUDA Driver API, and may interact with
4137programs that use the Runtime library directly, or another library
4138based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
4139"Interactions with the CUDA Driver API" in
4140"CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
4141Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
4142for additional information on library interoperability.}.
4143This chapter describes the use cases and what changes are
4144required in order to use both the OpenACC library and the CUBLAS and Runtime
4145libraries within a program.
4146
4147@section First invocation: NVIDIA CUBLAS library API
4148
4149In this first use case (see below), a function in the CUBLAS library is called
4150prior to any of the functions in the OpenACC library. More specifically, the
4151function @code{cublasCreate()}.
4152
4153When invoked, the function initializes the library and allocates the
4154hardware resources on the host and the device on behalf of the caller. Once
4155the initialization and allocation has completed, a handle is returned to the
4156caller. The OpenACC library also requires initialization and allocation of
4157hardware resources. Since the CUBLAS library has already allocated the
4158hardware resources for the device, all that is left to do is to initialize
4159the OpenACC library and acquire the hardware resources on the host.
4160
Prior to calling the OpenACC function that initializes the library and
allocates the host hardware resources, you need to acquire the device number
that was allocated during the call to @code{cublasCreate()}. Calling the
runtime library function @code{cudaGetDevice()} accomplishes this. Once
4165acquired, the device number is passed along with the device type as
4166parameters to the OpenACC library function @code{acc_set_device_num()}.
4167
4168Once the call to @code{acc_set_device_num()} has completed, the OpenACC
4169library uses the context that was created during the call to
4170@code{cublasCreate()}. In other words, both libraries will be sharing the
4171same context.
4172
4173@smallexample
4174 /* Create the handle */
4175 s = cublasCreate(&h);
4176 if (s != CUBLAS_STATUS_SUCCESS)
4177 @{
4178 fprintf(stderr, "cublasCreate failed %d\n", s);
4179 exit(EXIT_FAILURE);
4180 @}
4181
4182 /* Get the device number */
4183 e = cudaGetDevice(&dev);
4184 if (e != cudaSuccess)
4185 @{
4186 fprintf(stderr, "cudaGetDevice failed %d\n", e);
4187 exit(EXIT_FAILURE);
4188 @}
4189
4190 /* Initialize OpenACC library and use device 'dev' */
4191 acc_set_device_num(dev, acc_device_nvidia);
4192
4193@end smallexample
4194@center Use Case 1
4195
4196@section First invocation: OpenACC library API
4197
4198In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library. More specifically,
4200the function @code{acc_set_device_num()}.
4201
4202In the use case presented here, the function @code{acc_set_device_num()}
4203is used to both initialize the OpenACC library and allocate the hardware
4204resources on the host and the device. In the call to the function, the
4205call parameters specify which device to use and what device
4206type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
4207is but one method to initialize the OpenACC library and allocate the
4208appropriate hardware resources. Other methods are available through the
4209use of environment variables and these will be discussed in the next section.
4210
4211Once the call to @code{acc_set_device_num()} has completed, other OpenACC
4212functions can be called as seen with multiple calls being made to
4213@code{acc_copyin()}. In addition, calls can be made to functions in the
4214CUBLAS library. In the use case a call to @code{cublasCreate()} is made
4215subsequent to the calls to @code{acc_copyin()}.
4216As seen in the previous use case, a call to @code{cublasCreate()}
4217initializes the CUBLAS library and allocates the hardware resources on the
4218host and the device. However, since the device has already been allocated,
4219@code{cublasCreate()} will only initialize the CUBLAS library and allocate
4220the appropriate hardware resources on the host. The context that was created
4221as part of the OpenACC initialization is shared with the CUBLAS library,
4222similarly to the first use case.
4223
4224@smallexample
4225 dev = 0;
4226
4227 acc_set_device_num(dev, acc_device_nvidia);
4228
4229 /* Copy the first set to the device */
4230 d_X = acc_copyin(&h_X[0], N * sizeof (float));
4231 if (d_X == NULL)
4232 @{
4233 fprintf(stderr, "copyin error h_X\n");
4234 exit(EXIT_FAILURE);
4235 @}
4236
4237 /* Copy the second set to the device */
4238 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
4239 if (d_Y == NULL)
4240 @{
4241 fprintf(stderr, "copyin error h_Y1\n");
4242 exit(EXIT_FAILURE);
4243 @}
4244
4245 /* Create the handle */
4246 s = cublasCreate(&h);
4247 if (s != CUBLAS_STATUS_SUCCESS)
4248 @{
4249 fprintf(stderr, "cublasCreate failed %d\n", s);
4250 exit(EXIT_FAILURE);
4251 @}
4252
4253 /* Perform saxpy using CUBLAS library function */
4254 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
4255 if (s != CUBLAS_STATUS_SUCCESS)
4256 @{
4257 fprintf(stderr, "cublasSaxpy failed %d\n", s);
4258 exit(EXIT_FAILURE);
4259 @}
4260
4261 /* Copy the results from the device */
4262 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
4263
4264@end smallexample
4265@center Use Case 2
4266
4267@section OpenACC library and environment variables
4268
4269There are two environment variables associated with the OpenACC library
4270that may be used to control the device type and device number:
4271@env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
4272environment variables can be used as an alternative to calling
4273@code{acc_set_device_num()}. As seen in the second use case, the device
4274type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
call to @code{acc_set_device_num()} would not be required.
4277
4278
The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, ``OpenACC
Application Programming Interface''}, Version 2.6.}
4286
4287
4288
4289@c ---------------------------------------------------------------------
4290@c OpenACC Profiling Interface
4291@c ---------------------------------------------------------------------
4292
4293@node OpenACC Profiling Interface
4294@chapter OpenACC Profiling Interface
4295
4296@section Implementation Status and Implementation-Defined Behavior
4297
4298We're implementing the OpenACC Profiling Interface as defined by the
4299OpenACC 2.6 specification. We're clarifying some aspects here as
4300@emph{implementation-defined behavior}, while they're still under
4301discussion within the OpenACC Technical Committee.
4302
4303This implementation is tuned to keep the performance impact as low as
4304possible for the (very common) case that the Profiling Interface is
4305not enabled. This is relevant, as the Profiling Interface affects all
4306the @emph{hot} code paths (in the target code, not in the offloaded
4307code). Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
4310@emph{runtime} (libgomp) calling into a third-party @emph{library} for
4311every event that has been registered.
4312
4313We're not yet accounting for the fact that @cite{OpenACC events may
4314occur during event processing}.
4315We just handle one case specially, as required by CUDA 9.0
4316@command{nvprof}, that @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
4318@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4319callbacks.
4320
We're not yet implementing initialization via an
4322@code{acc_register_library} function that is either statically linked
4323in, or dynamically via @env{LD_PRELOAD}.
4324Initialization via @code{acc_register_library} functions dynamically
4325loaded via the @env{ACC_PROFLIB} environment variable does work, as
4326does directly calling @code{acc_prof_register},
4327@code{acc_prof_unregister}, @code{acc_prof_lookup}.
4328
4329As currently there are no inquiry functions defined, calls to
4330@code{acc_prof_lookup} will always return @code{NULL}.
4331
4332There aren't separate @emph{start}, @emph{stop} events defined for the
4333event types @code{acc_ev_create}, @code{acc_ev_delete},
4334@code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
4335should be triggered before or after the actual device-specific call is
4336made. We trigger them after.
4337
4338Remarks about data provided to callbacks:
4339
4340@table @asis
4341
4342@item @code{acc_prof_info.event_type}
4343It's not clear if for @emph{nested} event callbacks (for example,
4344@code{acc_ev_enqueue_launch_start} as part of a parent compute
4345construct), this should be set for the nested event
4346(@code{acc_ev_enqueue_launch_start}), or if the value of the parent
4347construct should remain (@code{acc_ev_compute_construct_start}). In
4348this implementation, the value will generally correspond to the
4349innermost nested event type.
4350
4351@item @code{acc_prof_info.device_type}
4352@itemize
4353
4354@item
4355For @code{acc_ev_compute_construct_start}, and in presence of an
4356@code{if} clause with @emph{false} argument, this will still refer to
4357the offloading device type.
4358It's not clear if that's the expected behavior.
4359
4360@item
4361Complementary to the item before, for
4362@code{acc_ev_compute_construct_end}, this is set to
4363@code{acc_device_host} in presence of an @code{if} clause with
4364@emph{false} argument.
4365It's not clear if that's the expected behavior.
4366
4367@end itemize
4368
4369@item @code{acc_prof_info.thread_id}
4370Always @code{-1}; not yet implemented.
4371
4372@item @code{acc_prof_info.async}
4373@itemize
4374
4375@item
4376Not yet implemented correctly for
4377@code{acc_ev_compute_construct_start}.
4378
4379@item
4380In a compute construct, for host-fallback
4381execution/@code{acc_device_host} it will always be
4382@code{acc_async_sync}.
4383It's not clear if that's the expected behavior.
4384
4385@item
4386For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
4387it will always be @code{acc_async_sync}.
4388It's not clear if that's the expected behavior.
4389
4390@end itemize
4391
4392@item @code{acc_prof_info.async_queue}
4393There is no @cite{limited number of asynchronous queues} in libgomp.
4394This will always have the same value as @code{acc_prof_info.async}.
4395
4396@item @code{acc_prof_info.src_file}
4397Always @code{NULL}; not yet implemented.
4398
4399@item @code{acc_prof_info.func_name}
4400Always @code{NULL}; not yet implemented.
4401
4402@item @code{acc_prof_info.line_no}
4403Always @code{-1}; not yet implemented.
4404
4405@item @code{acc_prof_info.end_line_no}
4406Always @code{-1}; not yet implemented.
4407
4408@item @code{acc_prof_info.func_line_no}
4409Always @code{-1}; not yet implemented.
4410
4411@item @code{acc_prof_info.func_end_line_no}
4412Always @code{-1}; not yet implemented.
4413
4414@item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
4415Relating to @code{acc_prof_info.event_type} discussed above, in this
4416implementation, this will always be the same value as
4417@code{acc_prof_info.event_type}.
4418
4419@item @code{acc_event_info.*.parent_construct}
4420@itemize
4421
4422@item
4423Will be @code{acc_construct_parallel} for all OpenACC compute
4424constructs as well as many OpenACC Runtime API calls; should be the
4425one matching the actual construct, or
4426@code{acc_construct_runtime_api}, respectively.
4427
4428@item
4429Will be @code{acc_construct_enter_data} or
4430@code{acc_construct_exit_data} when processing variable mappings
4431specified in OpenACC @emph{declare} directives; should be
4432@code{acc_construct_declare}.
4433
4434@item
4435For implicit @code{acc_ev_device_init_start},
4436@code{acc_ev_device_init_end}, and explicit as well as implicit
4437@code{acc_ev_alloc}, @code{acc_ev_free},
4438@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4439@code{acc_ev_enqueue_download_start}, and
4440@code{acc_ev_enqueue_download_end}, will be
4441@code{acc_construct_parallel}; should reflect the real parent
4442construct.
4443
4444@end itemize
4445
4446@item @code{acc_event_info.*.implicit}
4447For @code{acc_ev_alloc}, @code{acc_ev_free},
4448@code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4449@code{acc_ev_enqueue_download_start}, and
4450@code{acc_ev_enqueue_download_end}, this currently will be @code{1}
4451also for explicit usage.
4452
4453@item @code{acc_event_info.data_event.var_name}
4454Always @code{NULL}; not yet implemented.
4455
4456@item @code{acc_event_info.data_event.host_ptr}
4457For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
4458@code{NULL}.
4459
4460@item @code{typedef union acc_api_info}
4461@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
4462Information}. This should obviously be @code{typedef @emph{struct}
4463acc_api_info}.
4464
4465@item @code{acc_api_info.device_api}
4466Possibly not yet implemented correctly for
4467@code{acc_ev_compute_construct_start},
4468@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
4469will always be @code{acc_device_api_none} for these event types.
4470For @code{acc_ev_enter_data_start}, it will be
4471@code{acc_device_api_none} in some cases.
4472
4473@item @code{acc_api_info.device_type}
4474Always the same as @code{acc_prof_info.device_type}.
4475
4476@item @code{acc_api_info.vendor}
4477Always @code{-1}; not yet implemented.
4478
4479@item @code{acc_api_info.device_handle}
4480Always @code{NULL}; not yet implemented.
4481
4482@item @code{acc_api_info.context_handle}
4483Always @code{NULL}; not yet implemented.
4484
4485@item @code{acc_api_info.async_handle}
4486Always @code{NULL}; not yet implemented.
4487
4488@end table
4489
4490Remarks about certain event types:
4491
4492@table @asis
4493
4494@item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4495@itemize
4496
4497@item
4498@c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
4499@c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
4500@c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
4501When a compute construct triggers implicit
4502@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
4503events, they currently aren't @emph{nested within} the corresponding
4504@code{acc_ev_compute_construct_start} and
4505@code{acc_ev_compute_construct_end}, but they're currently observed
4506@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do: the standard asks us to provide a lot of
details to the @code{acc_ev_compute_construct_start} callback, but how
can we do so without (implicitly) initializing a device first?
4510
4511@item
4512Callbacks for these event types will not be invoked for calls to the
4513@code{acc_set_device_type} and @code{acc_set_device_num} functions.
4514It's not clear if they should be.
4515
4516@end itemize
4517
4518@item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
4519@itemize
4520
4521@item
4522Callbacks for these event types will also be invoked for OpenACC
4523@emph{host_data} constructs.
4524It's not clear if they should be.
4525
4526@item
4527Callbacks for these event types will also be invoked when processing
4528variable mappings specified in OpenACC @emph{declare} directives.
4529It's not clear if they should be.
4530
4531@end itemize
4532
4533@end table
4534
4535Callbacks for the following event types will be invoked, but dispatch
4536and information provided therein has not yet been thoroughly reviewed:
4537
4538@itemize
4539@item @code{acc_ev_alloc}
4540@item @code{acc_ev_free}
4541@item @code{acc_ev_update_start}, @code{acc_ev_update_end}
4542@item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
4543@item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
4544@end itemize
4545
During device initialization and finalization, respectively,
callbacks for the following event types will not yet be invoked:
4548
4549@itemize
4550@item @code{acc_ev_alloc}
4551@item @code{acc_ev_free}
4552@end itemize
4553
4554Callbacks for the following event types have not yet been implemented,
4555so currently won't be invoked:
4556
4557@itemize
4558@item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
4559@item @code{acc_ev_runtime_shutdown}
4560@item @code{acc_ev_create}, @code{acc_ev_delete}
4561@item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
4562@end itemize
4563
4564For the following runtime library functions, not all expected
4565callbacks will be invoked (mostly concerning implicit device
4566initialization):
4567
4568@itemize
4569@item @code{acc_get_num_devices}
4570@item @code{acc_set_device_type}
4571@item @code{acc_get_device_type}
4572@item @code{acc_set_device_num}
4573@item @code{acc_get_device_num}
4574@item @code{acc_init}
4575@item @code{acc_shutdown}
4576@end itemize
4577
4578Aside from implicit device initialization, for the following runtime
4579library functions, no callbacks will be invoked for shared-memory
4580offloading devices (it's not clear if they should be):
4581
4582@itemize
4583@item @code{acc_malloc}
4584@item @code{acc_free}
4585@item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
4586@item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
4587@item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
4588@item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
4589@item @code{acc_update_device}, @code{acc_update_device_async}
4590@item @code{acc_update_self}, @code{acc_update_self_async}
4591@item @code{acc_map_data}, @code{acc_unmap_data}
4592@item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
4593@item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
4594@end itemize
4595
4596@c ---------------------------------------------------------------------
4597@c OpenMP-Implementation Specifics
4598@c ---------------------------------------------------------------------
4599
4600@node OpenMP-Implementation Specifics
4601@chapter OpenMP-Implementation Specifics
4602
4603@menu
* Implementation-defined ICV Initialization::
* OpenMP Context Selectors::
* Memory allocation::
4607@end menu
4608
4609@node Implementation-defined ICV Initialization
4610@section Implementation-defined ICV Initialization
4611@cindex Implementation specific setting
4612
4613@multitable @columnfractions .30 .70
4614@item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
4615@item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
4616@item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
4617@item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
@item @var{nthreads-var} @tab See @ref{OMP_NUM_THREADS}.
4619@item @var{num-devices-var} @tab Number of non-host devices found
4620by GCC's run-time library
4621@item @var{num-procs-var} @tab The number of CPU cores on the
4622initial device, except that affinity settings might lead to a
4623smaller number. On non-host devices, the value of the
4624@var{nthreads-var} ICV.
4625@item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
4626@item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
4627@item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
4628@item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}
4629@item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
4630@ref{GOMP_SPINCOUNT}
4631@end multitable
4632
4633@node OpenMP Context Selectors
4634@section OpenMP Context Selectors
4635
4636@code{vendor} is always @code{gnu}. References are to the GCC manual.
4637
4638@multitable @columnfractions .60 .10 .25
4639@headitem @code{arch} @tab @code{kind} @tab @code{isa}
4640@item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
4641 @code{i586}, @code{i686}, @code{ia32}
4642 @tab @code{host}
4643 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
4644@item @code{amdgcn}, @code{gcn}
4645 @tab @code{gpu}
4646 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
4647 @code{gfx803} is supported as an alias for @code{fiji}.}
4648@item @code{nvptx}
4649 @tab @code{gpu}
4650 @tab See @code{-march=} in ``Nvidia PTX Options''
4651@end multitable
4652
4653@node Memory allocation
4654@section Memory allocation
4656For the available predefined allocators and, as applicable, their associated
4657predefined memory spaces and for the available traits and their default values,
4658see @ref{OMP_ALLOCATOR}. Predefined allocators without an associated memory
4659space use the @code{omp_default_mem_space} memory space.
4660
4661For the memory spaces, the following applies:
4662@itemize
4663@item @code{omp_default_mem_space} is supported
4664@item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
4665@item @code{omp_low_lat_mem_space} maps to @code{omp_default_mem_space}
4666@item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
4667 unless the memkind library is available
4668@item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
4669 unless the memkind library is available
4670@end itemize
4671
4672On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
4673library} (@code{libmemkind.so.0}) is available at runtime, it is used when
4674creating memory allocators requesting
4675
4676@itemize
4677@item the memory space @code{omp_high_bw_mem_space}
4678@item the memory space @code{omp_large_cap_mem_space}
@item the @code{partition} trait @code{interleaved}; note that for
      @code{omp_large_cap_mem_space} the allocation will not be interleaved
4681@end itemize
4682
4683On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
library} (@code{libnuma.so.1}) is available at runtime, it is used when creating
4685memory allocators requesting
4686
4687@itemize
4688@item the @code{partition} trait @code{nearest}, except when both the
4689libmemkind library is available and the memory space is either
4690@code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
4691@end itemize
4692
4693Note that the numa library will round up the allocation size to a multiple of
4694the system page size; therefore, consider using it only with large data or
4695by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
4696kernel does not guarantee that an allocation will always be on the nearest NUMA
4697node nor that after reallocation the same node will be used. Note additionally
4698that, on Linux, the default setting of the memory placement policy is to use the
4699current node; therefore, unless the memory placement policy has been overridden,
4700the @code{partition} trait @code{environment} (the default) will be effectively
4701a @code{nearest} allocation.
4702
Additional notes regarding the traits:
4704@itemize
4705@item The @code{pinned} trait is unsupported.
4706@item The default for the @code{pool_size} trait is no pool and for every
4707 (re)allocation the associated library routine is called, which might
4708 internally use a memory pool.
4709@item For the @code{partition} trait, the partition part size will be the same
4710 as the requested size (i.e. @code{interleaved} or @code{blocked} has no
4711 effect), except for @code{interleaved} when the memkind library is
      available. Furthermore, for @code{nearest} and unless the numa library
      is available, the memory might not be on the same NUMA node as the
      thread that allocated the memory; on Linux, this is in particular the
      case when the memory placement policy is set to preferred.
@item The @code{access} trait has no effect; memory is always
      accessible by all threads.
4718@item The @code{sync_hint} trait has no effect.
4719@end itemize
4720
4721@c ---------------------------------------------------------------------
4722@c Offload-Target Specifics
4723@c ---------------------------------------------------------------------
4724
4725@node Offload-Target Specifics
4726@chapter Offload-Target Specifics
4727
The following sections present notes on the offload-target specifics.
4729
4730@menu
4731* AMD Radeon::
4732* nvptx::
4733@end menu
4734
4735@node AMD Radeon
4736@section AMD Radeon (GCN)
4737
4738On the hardware side, there is the hierarchy (fine to coarse):
4739@itemize
4740@item work item (thread)
4741@item wavefront
4742@item work group
@item compute unit (CU)
4744@end itemize
4745
4746All OpenMP and OpenACC levels are used, i.e.
4747@itemize
4748@item OpenMP's simd and OpenACC's vector map to work items (thread)
4749@item OpenMP's threads (``parallel'') and OpenACC's workers map
4750 to wavefronts
4751@item OpenMP's teams and OpenACC's gang use a threadpool with the
4752 size of the number of teams or gangs, respectively.
4753@end itemize
4754
4755The used sizes are
4756@itemize
4757@item Number of teams is the specified @code{num_teams} (OpenMP) or
      @code{num_gangs} (OpenACC) or otherwise the number of CUs. It is
      limited to at most twice the number of CUs.
4760@item Number of wavefronts is 4 for gfx900 and 16 otherwise;
4761 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
4762 overrides this if smaller.
4763@item The wavefront has 102 scalars and 64 vectors
4764@item Number of workitems is always 64
4765@item The hardware permits maximally 40 workgroups/CU and
4766 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
@item 80 scalar registers and 24 vector registers in non-kernel functions
      (the chosen procedure-calling API).
4769@item For the kernel itself: as many as register pressure demands (number of
4770 teams and number of threads, scaled down if registers are exhausted)
4771@end itemize
4772
Implementation remarks:
4774@itemize
4775@item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4776 using the C library @code{printf} functions and the Fortran
4777 @code{print}/@code{write} statements.
@item Reverse offload regions (i.e. @code{target} regions with
4779 @code{device(ancestor:1)}) are processed serially per @code{target} region
4780 such that the next reverse offload region is only executed after the previous
4781 one returned.
@item OpenMP code that has a @code{requires} directive with
4783 @code{unified_shared_memory} will remove any GCN device from the list of
4784 available devices (``host fallback'').
4785@item The available stack size can be changed using the @code{GCN_STACK_SIZE}
4786 environment variable; the default is 32 kiB per thread.
4787@end itemize
4788
4789
4790
@node nvptx
@section nvptx

On the hardware side, there is the hierarchy (fine to coarse):
@itemize
@item thread
@item warp
@item thread block
@item streaming multiprocessor
@end itemize

All OpenMP and OpenACC levels are used, i.e.
@itemize
@item OpenMP's simd and OpenACC's vector map to threads
@item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
@item OpenMP's teams and OpenACC's gang use a threadpool with the
      size of the number of teams or gangs, respectively.
@end itemize

The used sizes are
@itemize
@item The @code{warp_size} is always 32
@item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
@item The number of teams is limited by the number of blocks the device can
      host simultaneously.
@end itemize

Additional information can be obtained by setting the environment variable
@code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
parameters).

GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
which caches the JIT in the user's directory (see CUDA documentation; it can
be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).

Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=}
command-line options still affect the used PTX ISA code and, thus, the
requirements on CUDA version and hardware.

Implementation remarks:
@itemize
@item I/O within OpenMP target regions and OpenACC parallel/kernels is
      supported using the C library @code{printf} functions.  Note that
      the Fortran @code{print}/@code{write} statements are not supported,
      yet.
@item Compiling OpenMP code that contains @code{requires reverse_offload}
      requires at least @code{-march=sm_35}; compiling for
      @code{-march=sm_30} is not supported.
@item For code containing reverse offload (i.e. @code{target} regions with
      @code{device(ancestor:1)}), there is a slight performance penalty
      for @emph{all} target regions, consisting mostly of shutdown delay.
      Per device, reverse offload regions are processed serially such that
      the next reverse offload region is only executed after the previous
      one returned.
@item OpenMP code that has a @code{requires} directive with
      @code{unified_shared_memory} will remove any nvptx device from the
      list of available devices (``host fallback'').
@item The default per-warp stack size is 128 kiB; see also
      @code{-msoft-stack} in the GCC manual.
@end itemize


@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp.  Only maintainers should need them.

@menu
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@end menu


@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternatively, we generate two copies of the parallel subfunction
and only include this in the version run by the primary thread.
Surely this is not worthwhile though...


@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name,

@smallexample
  void GOMP_critical_start (void);
  void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use omp_set_lock and omp_unset_lock with
name being transformed into a variable declared like

@smallexample
  omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
startup.


@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.



@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.


@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample


@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}.  Except
that OMP allows constructors for C++ objects.  We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library.  Failing that, we can have a set
of entry points to register ctor functions to be called.


@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.



@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks.  Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and ``small'' structs;
copy in addresses for other TREE_ADDRESSABLE types.  In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
  body;
@end smallexample

which becomes

@smallexample
@{
  int x = x, y;

  // for stuff

  if (i == n)
    y = y;
@}
@end smallexample

where the ``x=x'' and ``y=y'' assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C.  Presumably this only makes sense if the ``outer''
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.



@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}.  The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.


@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock.  It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.



@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
      @{
        long _e1 = _e0, i;
        for (i = _s0; i < _e1; i++)
          body;
      @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        long _e1 = _e0;
        for (i = _s0; i < _e1; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really.  We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables.  So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we need not.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations.  Which would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause.  Bookkeeping for that is non-trivial...



@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample



@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block such as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
  GOMP_barrier ();
@end smallexample


@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample



@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

@smallexample
  void GOACC_parallel ()
@end smallexample



@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
``openacc'', or ``openmp'', or both to the keywords field in the bug
report, as appropriate.



@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi



@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@include fdl.texi



@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Index
@c ---------------------------------------------------------------------

@node Library Index
@unnumbered Library Index

@printindex cp

@bye